VARIATIONAL STRUCTURE OF THE OPTIMAL ARTIFICIAL DIFFUSION
METHOD FOR THE ADVECTION-DIFFUSION EQUATION
K. B. NAKSHATRALA AND A. J. VALOCCHI
arXiv:0905.4771v3 [] 15 Dec 2010
Abstract. In this research note, we provide a variational basis for the optimal artificial diffusion method, which has been a cornerstone in developing many stabilized methods. The optimal
artificial diffusion method produces exact nodal solutions when applied to one-dimensional problems with constant coefficients and forcing function. We first present a variational principle for
a multi-dimensional advective-diffusive system, and then derive a new stable weak formulation.
When applied to one-dimensional problems with constant coefficients and forcing function, the
resulting weak formulation is equivalent to the optimal artificial diffusion method. We present
representative numerical results to corroborate our theoretical findings.
1. INTRODUCTION
Many transport-related processes are modeled as advective-diffusive systems. For example, transport of contaminants in subsurface flows is modeled as an advection-diffusion equation coupled with
Darcy flow. Except for very simple and limited problems, one cannot find analytical solutions and
hence must resort to numerical solutions. However, it is well-known that great care should be taken
in developing numerical formulations in order to avoid spurious oscillations due to the advective
term.
Many numerical formulations have been proposed, which fall under the realm of stabilized methods. One such method is the optimal artificial diffusion method, which has been the basis for
developing many stabilized methods and has also served as a benchmark for comparison [6, 2].
The optimal artificial diffusion method is derived by imposing the condition that it should produce
exact nodal solutions for one-dimensional problems with constant coefficients and forcing function.
This derivation, as one can see, does not have a variational basis.
Herein we outline a variational structure behind the optimal artificial diffusion method. We
start with a variational statement for the advection-diffusion equation (which is not a self-adjoint
operator), and derive a stable weak formulation. The resulting weak formulation when applied
to one-dimensional problems with constant coefficients and forcing function produces the same
difference equation as the optimal artificial diffusion method, which produces exact nodal solutions when applied to such problems. This shows that the optimal artificial diffusion method has a firm variational basis. This paper also highlights other possible routes for developing stable formulations for non-self-adjoint operators.
Date: December 21, 2017.
Key words and phrases. variational principles, Euler-Lagrange equations, advection-diffusion equation, optimal artificial diffusion.
An outline of this short paper is as follows. In Section 2, we present governing equations for an
advective-diffusive system, and also outline the standard weighted residual method. In Section 3, we
describe the optimal artificial diffusion method. In Section 4, we briefly discuss Vainberg’s theorem,
which provides a connection between the weighted residual statement and its corresponding scalar
functional (if it exists). In Section 5, we present a variational principle for the advection-diffusion
equation, and then derive a stable weak formulation. We then show that the resulting weak
formulation produces the same difference equation as the optimal artificial diffusion method for one-dimensional problems with constant coefficients and forcing function. Finally, we draw conclusions
in Section 6.
Remark 1.1. It should be noted that there is a huge literature on developing stabilized (finite
element) formulations for an advective-diffusive system. For example, see [1, 3, 12, 4] and references
therein. A thorough discussion of these works is beyond the scope of this paper. In addition, these
discussions are not relevant to the subject matter of this paper as none of them discuss a variational
principle (that is, constructing a scalar functional) for an advection-diffusion system.
2. GOVERNING EQUATIONS: ADVECTIVE-DIFFUSIVE SYSTEM
Let Ω ⊂ Rd be a smooth and bounded domain, where “d” denotes the number of spatial dimensions, and ∂Ω denotes its smooth boundary. As usual, the boundary is divided into ΓD (the
part of the boundary on which Dirichlet boundary condition is prescribed) and ΓN (the part of
the boundary on which Neumann boundary condition is prescribed) such that ΓD ∪ ΓN = ∂Ω and
ΓD ∩ ΓN = ∅. Let x ∈ Ω denote the position vector, and the gradient and divergence operators
are denoted as “grad” and “div”, respectively. Let u : Ω → R denote the concentration, v(x) the
velocity vector field, and k(x) the symmetric and positive-definite diffusivity tensor. For further
discussion consider the following steady advective-diffusive system
(2.1)  v(x) · grad[u(x)] − div[k(x) grad[u(x)]] = f(x)  in Ω
(2.2)  u(x) = u_p(x)  on Γ_D
(2.3)  n(x) · k(x) grad[u(x)] = t_p(x)  on Γ_N
where up (x) is the prescribed Dirichlet boundary condition, tp (x) is the prescribed Neumann
boundary condition, f (x) is the prescribed volumetric source, and n(x) denotes the unit outward
normal to the boundary. Note that the advective-diffusive operator is not self-adjoint. It is well-known that, even under smooth functions for k(x), v(x) and f(x), the solution u(x) to equations
(2.1)–(2.3) may exhibit steep gradients close to the boundary especially for advection-dominated
problems [6] (and also see Figure 1, which will be described later in Section 3).
2.1. Notation and preliminaries. In the next section, we present a weak formulation for an
advective-diffusive system. To this end, let us define the following function spaces:
(2.4a)  U := { u(x) ∈ H^1(Ω) | u(x) = u_p(x) on Γ_D }
(2.4b)  W := { w(x) ∈ H^1(Ω) | w(x) = 0 on Γ_D }
where H 1 (Ω) is a standard Sobolev space defined on Ω. Note that the inner product for the above
vector spaces is the standard L2 inner product. That is,
(2.5)  (a; b) := ∫_Ω a(x) · b(x) dΩ
Similarly, one can define the weighted L^2 inner product
(2.6)  (a; b)_µ := ∫_Ω µ(x) a(x) · b(x) dΩ
where µ : Ω → R^+ (where R^+ denotes the set of positive real numbers) is the (scalar) weight
function or measure density. Note that a weight function (which is used to define the weighted
inner product) should not be confused with weighting functions (which are sometimes referred to
as test functions). Using this weighted inner product one can define weighted Sobolev spaces as
(2.7a)  L^2_µ(Ω) := { u(x) | (u; u)_µ < +∞ }
(2.7b)  H^1_µ(Ω) := { u(x) ∈ L^2_µ(Ω) | (grad[u]; grad[u])_µ < +∞ }
Corresponding to the function spaces given in (2.4), we can define the following weighted function
spaces
(2.8a)  U_µ := { u(x) ∈ H^1_µ(Ω) | u(x) = u_p(x) on Γ_D }
(2.8b)  W_µ := { w(x) ∈ H^1_µ(Ω) | w(x) = 0 on Γ_D }
2.2. Standard weighted residual method and the Galerkin formulation. Let w(x) denote
the weighting function corresponding to u(x). A weak formulation based on the standard weighted
residual method for the advective-diffusive system given by equations (2.1)-(2.3) can be written as
(2.9)  Find u(x) ∈ U such that F(w; u) = 0  ∀ w(x) ∈ W
where the bilinear functional F is defined as
(2.10)  F(w; u) := ∫_Ω w(x) v(x) · grad[u(x)] dΩ + ∫_Ω grad[w(x)] · k(x) grad[u(x)] dΩ − ∫_Ω w(x) f(x) dΩ − ∫_{Γ_N} w(x) t_p(x) dΓ
A finite element formulation corresponding to the standard weighted residual formulation can be
written as
(2.11)  Find u^h(x) ∈ U^h such that F(w^h; u^h) = 0  ∀ w^h(x) ∈ W^h
where U h and W h are finite dimensional subspaces of U and W, respectively. One obtains the
Galerkin formulation by using the same function space for both U h and W h (except on ΓD ).
The Galerkin formulation produces node-to-node spurious oscillations for advection-dominated
problems [6]. The Galerkin method loses its best approximation property when the non-symmetric
convective operator dominates in the transport equation (for example, see the performance of the
Galerkin formulation in Figure 1). In other words, the Galerkin method is not optimal for solving
advection-dominated problems. In principle, it is possible to choose a small enough grid such that
the element Péclet number is less than one, and avoid spurious oscillations under the Galerkin
method. However, it may not always be practical to choose such a fine grid, and therefore, one
needs to employ a stabilized formulation to avoid unphysical oscillations and get meaningful results
on coarse grids. To understand this anomalous behavior several theoretical and numerical studies
have been performed, see [4, 6, 2] and references therein. One of those studies is a simple method
that gives nodally exact solutions for one-dimensional problems with constant coefficients, which is
commonly referred to as the optimal diffusion method.
3. A NODALLY EXACT FORMULATION AND OPTIMAL ARTIFICIAL DIFFUSION
Consider the following one-dimensional advection-diffusion equation with homogeneous Dirichlet
boundary conditions, and constant coefficients (that is, velocity, diffusivity and forcing function are
constants):
(3.1)  v du/dx − k d²u/dx² = f  ∀ x ∈ (0, 1)
(3.2)  u(x = 0) = 0 and u(x = 1) = 0
The analytical solution for the above problem is given by
(3.3)  u(x) = (f/v) [ x − (1 − exp(vx/k)) / (1 − exp(v/k)) ]
which is plotted in Figure 1 for various values of v/k. As one can see, for large v/k we have
steep gradients near the outflow boundary. The above problem has been used as a benchmark for
developing many stabilized formulations [6].
For a numerical solution, let us divide the unit interval into N equal-sized elements (and hence
N +1 nodes), and define h := 1/N . Let the nodes be numbered as j = 0, · · · , N . Then, the position
vector of node j is x_j = jh. The difference equation at an intermediate node (j = 1, · · · , N − 1) arising from the Galerkin formulation for the above problem can be written as
(3.4)  (v/2h) [ −((Pe^h + 1)/Pe^h) u_{j−1} + (2/Pe^h) u_j + ((Pe^h − 1)/Pe^h) u_{j+1} ] = f
where Pe^h = vh/(2k) is the element Péclet number, and u_j is an approximate numerical solution at node j. That is,
(3.5)  u_j ≈ u(x_j)  ∀ j = 0, · · · , N
The difference equation (3.4) can be rearranged as
(3.6)  v (u_{j+1} − u_{j−1})/(2h) − k (u_{j+1} − 2u_j + u_{j−1})/h² = f
The above equation basically reveals that the Galerkin formulation approximates the first- and
second-derivatives using a central difference approximation at intermediate nodes.
It is well-known that the Galerkin formulation is under-diffusive when applied to the advection-diffusion equation, which is considered to be the reason why the formulation gives node-to-node
spurious oscillations. In Figure 1, we compare the numerical solution from the Galerkin formulation
with the analytical solution for the aforementioned one-dimensional problem. As one can see from
the figure, the Galerkin formulation produces spurious node-to-node oscillations for high Péclet
numbers.
To further understand this anomalous behavior of the Galerkin formulation, we now outline a nodally
exact formulation for the above problem, which is commonly referred to as the optimal artificial
diffusion formulation in the literature. To this end, we start with a difference equation at node j
of the form
(3.7)  β_{−1} u_{j−1} + β_0 u_j + β_1 u_{j+1} = f
and the unknown coefficients (β−1 , β0 , and β1 ) are determined so that the above difference equation
gives nodally exact solution for the model problem given by equations (3.1)-(3.2) on the uniform
computational mesh described above. After simplification, the difference equation for the nodally
exact formulation can be written as (see References [2, 6, 16])
(3.8)  (v/2h) [ −(1 + coth(Pe^h)) u_{j−1} + (2 coth(Pe^h)) u_j + (1 − coth(Pe^h)) u_{j+1} ] = f
where “coth” denotes the hyperbolic cotangent function. By rearranging the terms one can write
the above equation as follows
(3.9)  v (u_{j+1} − u_{j−1})/(2h) − (k + k̄) (u_{j+1} − 2u_j + u_{j−1})/h² = f
where the artificial diffusion coefficient k̄ (which depends on the mesh size, and on the medium and flow properties) is given by
(3.10)  k̄ := (vh/2) [ coth(Pe^h) − 1/Pe^h ]
The difference equation (3.9) reveals that the nodally exact formulation (or the optimal artificial diffusion method) also employs a central difference approximation for the first- and second-derivatives
but solves a modified advective-diffusive system with an additional artificial diffusion given by the
coefficient k̄.
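To make the comparison between (3.6) and (3.9) concrete, the following sketch assembles both tridiagonal difference equations on a uniform grid and compares them with the exact solution (3.3). It is only a minimal illustration of the two stencils; the function and variable names are chosen here for this example and are not taken from the formulation used to produce the figures in this note.

```python
import numpy as np

def solve_1d(v=1.0, k=0.01, f=1.0, N=10, artificial=False):
    """Central-difference solution of v u' - k u'' = f on (0, 1) with u(0) = u(1) = 0.

    If artificial=True, the diffusivity is augmented by the optimal artificial
    diffusion k_bar = (v h / 2) (coth(Pe_h) - 1 / Pe_h); cf. equations (3.9)-(3.10).
    """
    h = 1.0 / N
    Pe = v * h / (2.0 * k)                       # element Peclet number
    k_bar = (v * h / 2.0) * (1.0 / np.tanh(Pe) - 1.0 / Pe) if artificial else 0.0
    k_eff = k + k_bar
    A = np.zeros((N - 1, N - 1))
    b = np.full(N - 1, f)
    for j in range(N - 1):                       # interior nodes
        A[j, j] = 2.0 * k_eff / h**2
        if j > 0:
            A[j, j - 1] = -v / (2.0 * h) - k_eff / h**2
        if j < N - 2:
            A[j, j + 1] = v / (2.0 * h) - k_eff / h**2
    u = np.zeros(N + 1)
    u[1:N] = np.linalg.solve(A, b)
    return u

def exact(x, v=1.0, k=0.01, f=1.0):
    """Analytical solution (3.3)."""
    return (f / v) * (x - (1.0 - np.exp(v * x / k)) / (1.0 - np.exp(v / k)))

x = np.linspace(0.0, 1.0, 11)
print("Galerkin (central differences):", solve_1d())
print("optimal artificial diffusion  :", solve_1d(artificial=True))
print("exact solution at the nodes   :", exact(x))
```

With the parameters above the element Péclet number equals 5, so the first run exhibits the node-to-node oscillations discussed earlier, while the second reproduces the exact nodal values.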
It should be noted that the above derivation for the optimal artificial diffusion method is not
based on a variational principle. In this paper we present a variational structure behind the
optimal artificial diffusion method, which, to the best of our knowledge, has not been reported in
the literature. We start by presenting a variational principle for an advective-diffusive system and
then derive a weak formulation. A finite element approximation of this weak form gives the optimal
artificial diffusion method.
We now present a variational principle for an advective-diffusive system. We then outline the
underlying variational structure behind the optimal artificial diffusion method.
4. VAINBERG’S THEOREM AND EXISTENCE OF A SCALAR FUNCTIONAL
Vainberg’s theorem [15] provides a connection between scalar functionals (also referred to as
“energy” functionals) and weighted residual statements. The theorem provides a criterion to establish when a scalar functional exists. The theorem also provides a formula to compute the scalar
functional (if it exists) from the weighted residual statement, which we will not invoke in this paper.
Vainberg’s theorem can be stated as follows. Let G(w; u) be a weighted residual functional,
which is linear with respect to w (but need not be linear with respect to u). There exists a scalar
functional E(u) such that
G(w; u) = d/dǫ [ E(u + ǫw) ] |_{ǫ=0}
if and only if
(4.1)  d/dǫ [ G(w; u + ǫw̄) ] |_{ǫ=0} = d/dǫ [ G(w̄; u + ǫw) ] |_{ǫ=0}
For a simple proof of Vainberg’s theorem see Reference [9, Chapter 9].
Returning back to our discussion on advective-diffusive system, using Vainberg’s theorem one
can conclude that there exists no scalar functional E(u) whose directional derivative along w(x)
gives the (bilinear) functional in the standard weighted residual method (which is given by equation
(2.10)). That is, there is no scalar functional E(u) such that
F(w; u) = d/dǫ [ E(u + ǫw) ] |_{ǫ=0}
However, we should not conclude that the advective-diffusive system does not have a variational
principle. All we concluded above (invoking Vainberg’s theorem) is that the standard weighted
residual statement (2.10) cannot be obtained as a directional derivative of a scalar functional. It
may be possible to construct a different weighted residual statement with different weight function
(or measure) that can be obtained as a directional derivative of a scalar functional. Stated differently, just because the operator is non-self-adjoint does not mean that there exists no variational
statement.
In a seminal paper, Tonti [14] has correctly highlighted the point that the symmetry of a bilinear
form in the sense of equation (4.1) (which according to Vainberg’s theorem is a necessary and
sufficient condition for the existence of a variational formulation) depends on the choice of the inner
product. (Note that Tonti has used the term “bilinear functional” to mean inner product.) One
of the ways to meet the symmetry requirement is by changing the underlying inner product. Tonti
[14] has also shown that changing the inner product is equivalent to transforming the given problem
into a different problem that has the same solution(s). Another related work is by Thangaraj and
Venkatarangan [13] who have presented dual variational principles with applications to magnetohydrodynamics. A related and important work is by Magri [11] who has developed a procedure to
select an appropriate inner product for any given linear operator to meet the symmetry requirement.
Following the discussion in References [14, 13], in the next section we present a variational principle
for an advection-diffusion system.
Remark 4.1. For self-adjoint operators (e.g., the Laplacian operator) one can show that the standard weighted residual method can be obtained as a directional derivative of a scalar functional.
5. A VARIATIONAL PRINCIPLE FOR THE ADVECTION-DIFFUSION EQUATION
Consider the following constrained extremization problem
(5.1)  extremize_{u(x)} ∫_Ω F(x, u(x), grad[u(x)]) dΩ
where F : Ω × R × Rd → R is a known scalar functional. It is well-known (for example, see [5, 10])
that the Euler-Lagrange equation for the above extremization problem is given by
(5.2)
Fu (x, u(x), grad[u(x)]) − div [Fp (x, u(x), grad[u(x)])] = 0 ∀x ∈ Ω
where Fu denotes the partial derivative with respect to u(x), and Fp denotes the vector of partial
derivatives with respect to the components of grad[u(x)].
7
We now construct a scalar functional for the advection-diffusion equation. To this end, let us
define the scalar function α(x) as
(5.3)  α(x) := exp[ − ∫ k^{−1}(x) v(x) · dx ]
Note that the integral inside the exponential operator in equation (5.3) is an indefinite line integral.
Since the tensor k(x) is positive-definite, its inverse exists, and hence the scalar function α(x) is
well-defined. It is important to note that α(x) > 0 ∀x ∈ Ω (as we have assumed the domain to be
bounded and smooth). Now define the functional for the advection-diffusion equation as
(5.4)  F(x, u(x), grad[u(x)]) := α(x) [ (1/2) ‖√k(x) grad[u(x)]‖² − u(x) f(x) ]  ∀ x ∈ Ω
where ‖ · ‖ denotes the standard Euclidean norm, and √k(x) denotes the square root of the
symmetric and positive-definite tensor k(x). (Note that the square root of a symmetric positive-
definite tensor is well-defined, and is also symmetric and positive-definite. See Halmos [8] and also
Gurtin [7, page 13].) A straightforward calculation shows that the Euler-Lagrange equation of the
functional (5.4) is
(5.5)
v(x) · grad[u(x)] − div [k(x) grad[u(x)]] = f (x)
which is the (multi-dimensional) advection-diffusion equation. For the advective-diffusive system
given by equations (2.1)-(2.3) (that is, including the boundary conditions), the variational statement
can be written as
(5.6)  extremize_{u(x)}  I(u)
(5.7)  subject to u(x) = u_p(x) on Γ_D
where the functional I(u) is defined as
(5.8)  I(u) := ∫_Ω F(x, u(x), grad[u(x)]) dΩ − ∫_{Γ_N} α(x) u(x) t_p(x) dΓ
and the scalar function α(x) is the same as before, equation (5.3).
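As a check on the Euler–Lagrange computation leading to (5.5), the short derivation below specializes, for illustration only, to one spatial dimension with constant v and k, so that (5.3) gives α(x) = exp(−vx/k); it is an added sanity check, not part of the original argument.

```latex
% One-dimensional check of (5.5), assuming constant v and k (illustration only).
% From (5.3), \alpha(x) = e^{-vx/k}, and (5.4) reduces to
%   F(x, u, u') = \alpha(x)\left[ \tfrac{k}{2}(u')^2 - u f \right].
\[
  F_u = -\alpha(x)\, f, \qquad
  F_{u'} = \alpha(x)\, k\, u', \qquad
  \frac{d}{dx}\,F_{u'} = \alpha(x)\, k\, u'' - \alpha(x)\, v\, u' ,
\]
\[
  F_u - \frac{d}{dx}\,F_{u'}
  = \alpha(x)\left[\, v\, u' - k\, u'' - f \,\right] = 0
  \quad\Longrightarrow\quad
  v\,\frac{du}{dx} - k\,\frac{d^2 u}{dx^2} = f ,
\]
% since \alpha(x) > 0; this is the model equation (3.1).
```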
We now use this variational principle to derive a new weak formulation for the advective-diffusive
system. We later show that for constant coefficients and homogeneous Dirichlet boundary conditions, this formulation is the same as the optimal artificial diffusion method.
Remark 5.1. It is well-known that, using the least-squares method one can construct a minimization problem for a given partial differential equation of the form Lu − f = 0 as follows:
(5.9)  minimize_{u(x)} ∫_Ω ‖Lu − f‖² dΩ
But it should be noted that the Euler-Lagrange equation of the above minimization problem need
not be the original differential equation Lu − f = 0. For example, if we consider the differential
equation to be d2 u/dx2 = 0 in Ω = (0, 1), the corresponding minimization problem based on the
least-squares method takes the following form:
(5.10)  minimize_{u(x)} ∫_0^1 (d²u/dx²)² dx
However, the Euler-Lagrange equation of the above minimization problem is d⁴u/dx⁴ = 0 (which is not the same as the differential equation we considered). Also, another difference between the minimization problem based on the least-squares method and the variational problem is in the regularity
requirements. For example, in the minimization problem based on the least-squares method (5.10),
u should be twice differentiable (or, more precisely, u ∈ H 2 ((0, 1))). On the other hand, the (standard) variational principle for the differential equation d2 u/dx2 = 0 requires that u be differentiable
once (that is, u ∈ H 1 ((0, 1))).
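For completeness, the Euler–Lagrange computation behind this claim is short; the display below is an illustrative derivation added here (not part of the original remark) that treats the integrand of (5.10) as a function of u'' and uses the standard Euler–Lagrange equation for integrands depending on second derivatives.

```latex
% Euler-Lagrange equation of (5.10): the integrand G depends only on u''.
% For a functional \int_0^1 G(u'')\, dx the Euler-Lagrange equation reads
%   \frac{d^2}{dx^2}\!\left(\frac{\partial G}{\partial u''}\right) = 0 .
\[
  G(u'') = \left(\frac{d^2 u}{dx^2}\right)^{\!2}
  \quad\Longrightarrow\quad
  \frac{d^2}{dx^2}\!\left(2\,\frac{d^2 u}{dx^2}\right) = 0
  \quad\Longleftrightarrow\quad
  \frac{d^4 u}{dx^4} = 0 ,
\]
% which is the fourth-order equation quoted in Remark 5.1, and not the original
% equation d^2 u / dx^2 = 0.
```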
5.1. A new stable formulation. A necessary condition that the extremum of the functional I(u)
has to satisfy is
(5.11)  d/dǫ [ I(u(x) + ǫw(x)) ] |_{ǫ=0} = 0  ∀ w(x)
For convenience let us define the bilinear functional B(w; u) as
(5.12)  B(w; u) := d/dǫ [ I(u(x) + ǫw(x)) ] |_{ǫ=0} = ∫_Ω α(x) grad[w(x)] · (k(x) grad[u(x)]) dΩ − ∫_Ω α(x) w(x) f(x) dΩ − ∫_{Γ_N} α(x) w(x) t_p(x) dΓ
A new weak formulation for the advective-diffusive system can be written as
(5.13)  Find u(x) ∈ U_α such that B(w; u) = 0  ∀ w(x) ∈ W_α
where Uα and Wα are weighted function spaces defined in equation (2.8) with weight function
µ(x) = α(x). Recall that the scalar function α(x) is defined in equation (5.3). A corresponding
finite element formulation can be written as
(5.14)  Find u^h(x) ∈ U^h_α such that B(w^h; u^h) = 0  ∀ w^h(x) ∈ W^h_α
where U^h_α and W^h_α are finite dimensional subspaces of U_α and W_α, respectively. It is important to note that the formulation (5.13) (and hence the formulation (5.14)) is valid even for spatial dimensions d = 1, 2, 3.
Remark 5.2. Some notable differences between the bilinear functionals F(w; u) and B(w; u) are as follows. A non-symmetric term similar to ∫_Ω w(x) v(x) · grad[u(x)] dΩ is not present in the bilinear functional B(w; u). The bilinear functional B(w; u) is symmetric in the sense that
d/dǫ [ B(w; u + ǫw̄) ] |_{ǫ=0} = d/dǫ [ B(w̄; u + ǫw) ] |_{ǫ=0}
which is not the case for the bilinear functional F(w; u). The symmetry of B(w; u) (which, according
to Vainberg’s theorem, is necessary and sufficient for the existence of a scalar functional) should
not be surprising as we defined the bilinear functional B(w; u) (5.12) as a directional derivative of
a scalar functional. Another notable difference is that the bilinear functional F(w; u) is based on
the standard L2 inner product, which has the weight function (or measure density) to be 1. The
bilinear functional B(w; u) is based on the weighted L2 inner product with the weight function equal
to α(x) > 0 (which is defined in equation (5.3)).
5.2. Application to the one-dimensional problem. We now apply the new formulation to the
one-dimensional problem defined in Section 3, and compare the difference equation produced by
this new stable formulation with the one produced by the optimal artificial diffusion method given
by equation (3.9) for an intermediate node. For the case at hand, the coefficients v and k are
constants, and hence α(x) = exp[−vx/k]. The variable u(x) is interpolated as
(5.15)  u(x) = u_{j−1} N_{−1}(x) + u_j N_0(x) + u_{j+1} N_{+1}(x),  x_{j−1} ≤ x ≤ x_{j+1}
where the shape functions are defined as
(5.16a)  N_{−1}(x) = { −(x − x_j)/h  for −h ≤ x − x_j ≤ 0;  0  for 0 < x − x_j ≤ +h }
(5.16b)  N_0(x) = { ((x − x_j) + h)/h  for −h ≤ x − x_j ≤ 0;  (h − (x − x_j))/h  for 0 < x − x_j ≤ +h }
(5.16c)  N_{+1}(x) = { 0  for −h ≤ x − x_j ≤ 0;  (x − x_j)/h  for 0 < x − x_j ≤ +h }
The weighting function is also similarly interpolated. The difference equation produced by the new stable formulation at an intermediate node can be written as
(5.17)  γ_{−1} u_{j−1} + γ_0 u_j + γ_{+1} u_{j+1} = f
The coefficients γ_{−1}, γ_0 and γ_{+1} can be written as
(5.18a)  γ_{−1} = [ ∫_{−h}^{+h} α(x) k(x) N_0′(x) N_{−1}′(x) dx ] / [ ∫_{−h}^{+h} α(x) N_0(x) dx ]
(5.18b)  γ_0 = [ ∫_{−h}^{+h} α(x) k(x) (N_0′(x))² dx ] / [ ∫_{−h}^{+h} α(x) N_0(x) dx ]
(5.18c)  γ_{+1} = [ ∫_{−h}^{+h} α(x) k(x) N_0′(x) N_{+1}′(x) dx ] / [ ∫_{−h}^{+h} α(x) N_0(x) dx ]
where a superscript prime denotes a derivative with respect to x. After simplification, the coefficients can be written as
(5.19)  γ_{−1} = −(v/2h) (1 + coth(Pe^h)),  γ_0 = (v/h) coth(Pe^h),  γ_{+1} = (v/2h) (1 − coth(Pe^h))
As one can see, the coefficients γ−1 , γ0 and γ+1 are, respectively, the same as the coefficients
β−1 , β0 and β+1 (see equations (3.7) and (3.8)), which are obtained using the optimal artificial
diffusion method. In Figure 2, we compare the numerical solutions obtained using the new stable
formulation with the analytical solutions. As predicted by the theory, the new stable method
produces nodally exact solutions for all Péclet numbers. Both the theoretical and numerical studies have shown that the stable weak formulation proposed in the previous section is equivalent to the optimal artificial diffusion method. Hence, we have provided a variational basis for the optimal artificial diffusion method, which is of academic importance, and which has also been the basis for developing and testing many stabilized formulations.
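The simplification leading from (5.18) to (5.19) can also be checked numerically. The sketch below evaluates the integrals in (5.18) by quadrature for constant v and k and compares them with the closed-form coefficients; it is an illustration only, and the parameter values and helper names are chosen here rather than taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

v, k, h = 1.0, 0.01, 0.1
Pe = v * h / (2.0 * k)

# alpha(x) = exp(-v x / k) from (5.3); x is measured from node j, and the constant
# factor exp(-v x_j / k) cancels between numerator and denominator in (5.18).
alpha = lambda x: np.exp(-v * x / k)

# shape functions (5.16) and their derivatives on (-h, +h)
N0  = lambda x: (x + h) / h if x <= 0.0 else (h - x) / h
dN0 = lambda x: 1.0 / h if x <= 0.0 else -1.0 / h
dNm = lambda x: -1.0 / h if x <= 0.0 else 0.0     # N'_{-1}
dNp = lambda x: 0.0 if x <= 0.0 else 1.0 / h      # N'_{+1}

def integral(f):
    """Integrate over (-h, 0) and (0, +h) separately (the derivatives jump at 0)."""
    return quad(f, -h, 0.0)[0] + quad(f, 0.0, h)[0]

den = integral(lambda x: alpha(x) * N0(x))
g_m = integral(lambda x: alpha(x) * k * dN0(x) * dNm(x)) / den
g_0 = integral(lambda x: alpha(x) * k * dN0(x) ** 2) / den
g_p = integral(lambda x: alpha(x) * k * dN0(x) * dNp(x)) / den

coth = 1.0 / np.tanh(Pe)
print(g_m, -v / (2.0 * h) * (1.0 + coth))   # gamma_{-1} versus (5.19)
print(g_0,  v / h * coth)                   # gamma_0 versus (5.19)
print(g_p,  v / (2.0 * h) * (1.0 - coth))   # gamma_{+1} versus (5.19)
```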
Although the new stable formulation is valid even in two- and three-dimensions, preliminary
numerical studies have indicated that this formulation may not be practically feasible for large-scale problems due to the following reasons: evaluating the exponential function is expensive, one
may require more Gauss points per element in order to evaluate integrals in the weak form, and
also the resulting stiffness matrix will be ill-conditioned without an appropriate preconditioner.
However, the new stable formulation, as shown in this paper, does have theoretical significance. It
also highlights a possible route of developing robust stable formulations. This paper also highlights
the relevance of the theoretical studies by Tonti on variational principles (for example, Reference
[14]) to develop new stabilized formulations.
6. CONCLUSIONS
It is well-known that the classical Galerkin formulation (which is based on the standard weighted
residual method) produces spurious node-to-node oscillations for the advection-diffusion equation
for advection-dominated problems. For self-adjoint operators (e.g., the Laplacian operator), the
standard weighted residual formulation can be obtained as a directional derivative of a scalar
functional. However, for the advection-diffusion equation (using Vainberg’s theorem) it can be
shown that the standard weighted residual method cannot be obtained as a directional derivative
of a scalar functional.
In this paper, we presented a variational principle for an advective-diffusive system, and derived a stable weak formulation. This resulting weak formulation when applied to one-dimensional
problems gives rise to the same difference equation as the optimal artificial diffusion method, which
produces exact nodal solutions when applied to one-dimensional problems with constant coefficients
and forcing function. Hence, we have provided a variational basis for the optimal artificial diffusion
method, which has been the cornerstone in developing many stabilized methods. We presented
representative numerical results to corroborate our theoretical findings.
ACKNOWLEDGMENTS
The first author (K. B. Nakshatrala) acknowledges the financial support given by the Texas
Engineering Experiment Station (TEES). The second author (A. J. Valocchi) was supported by
the Department of Energy through a SciDAC-2 project (Grant No. DOE DE-FCO207ER64323).
The opinions expressed in this paper are those of the authors and do not necessarily reflect that of
the sponsors.
References
[1] R. Codina. Comparison of some finite element methods for solving the diffusion-convection-reaction equation.
Computer Methods in Applied Mechanics and Engineering, 156:185–210, 1998.
[2] J. Donea and A. Huerta. Finite Element Methods for Flow Problems. John Wiley & Sons, Inc., Chichester, UK,
2003.
[3] L. P. Franca, S. L. Frey, and T. J. R. Hughes. Stabilized finite element methods: I. Application to the advective-diffusive model. Computer Methods in Applied Mechanics and Engineering, 95:253–276, 1992.
[4] L. P. Franca, G. Hauke, and A. Masud. Revisiting stabilized finite element methods for the advective-diffusive
equation. Computer Methods in Applied Mechanics and Engineering, 195:1560–1572, 2006.
[5] I. M. Gelfand and S. V. Fomin. Calculus of Variations. Dover Publications, New York, USA, 2000.
[6] P. M. Gresho and R. L. Sani. Incompressible Flow and the Finite Element Method: Advection-Diffusion, volume 1.
John Wiley & Sons, Inc., Chichester, UK, 2000.
[7] M. E. Gurtin. An Introduction to Continuum Mechanics. Academic Press, San Diego, USA, 1981.
[8] P. R. Halmos. Finite-Dimensional Vector Spaces. Springer-Verlag, New York, USA, 1993.
[9] K. D. Hjelmstad. Fundamentals of Structural Mechanics. Springer Science+Business Media, Inc., New York,
USA, second edition, 2005.
[10] J. Jost and X. Li-Jost. Calculus of Variations. Cambridge University Press, Cambridge, UK, 1998.
[11] F. Magri. Variational formulation for every linear equation. International Journal of Engineering Science, 12:537–
549, 1974.
[12] A. Masud and R. A. Khurram. A multiscale/stabilized finite element method for the advection-diffusion equation.
Computer Methods in Applied Mechanics and Engineering, 193:1997–2018, 2004.
[13] D. Thangaraj and S. N. Venkatarangan. Dual variational principles for a class of non-linear partial differential
equations. IMA Journal of Applied Mathematics, 30:21–26, 1983.
[14] E. Tonti. Variational formulation for every nonlinear problem. International Journal of Engineering Science,
22:1343–1371, 1984.
[15] M. M. Vainberg. Variational Methods for the Study of Nonlinear Operators. Holden-Day, Inc., San Francisco,
USA, 1964.
[16] O. C. Zienkiewicz and R. L. Taylor. The Finite Element Method: Fluid Dynamics. John Wiley & Sons, Inc.,
New York, USA, fifth edition, 2000.
[Figure 1 about here: u(x) versus x for Pe^h = 5.0, 0.5, 0.1, and 0.05.]
Figure 1. In this figure, we compare the Galerkin formulation (which is denoted
using solid squares and dotted lines) with the analytical solution (which is denoted
using solid continuous lines) for one-dimensional advection-diffusion equation with
homogeneous Dirichlet boundary conditions for various v/k ratios. We have taken
the forcing function to be unity (i.e., f = 1), and h = 0.1. As expected, the Galerkin
formulation produced spurious node-to-node oscillations for high Péclet numbers.
Correspondence to: Kalyana Babu Nakshatrala, Department of Mechanical Engineering, 216 Engineering/Physics Building, Texas A&M University, College Station, Texas-77843, USA. TEL: +1-979845-1292
E-mail address: [email protected]
Albert Valocchi, Department of Civil and Environmental Engineering, University of Illinois at
Urbana-Champaign, Urbana, Illinois-61801, USA.
E-mail address: [email protected]
[Figure 2 about here: u(x) versus x for Pe^h = 5.0, 0.5, 0.1, and 0.05.]
Figure 2. In this figure, we compare the numerical solution using the new stable formulation (which is denoted using solid squares) with the analytical solution (which is denoted using solid continuous lines) for the one-dimensional advection-diffusion equation with homogeneous Dirichlet boundary conditions for various v/k
ratios. We have taken the forcing function to be unity (i.e., f = 1), and h = 0.1. As
predicted by the theory, the new stable formulation produces nodally exact solutions
for the chosen one-dimensional problem for all Péclet numbers. The continuous line
denotes the analytical solution, and solid squares denote the numerical solution from
the new stable formulation.
arXiv:1706.06638v1 [] 20 Jun 2017
On convergence of the sample correlation matrices
in high-dimensional data
Sévérien Nkurunziza∗
and
Yueleng Wang†
Abstract
In this paper, we consider an estimation problem concerning the matrix of correlation coefficients in the context of high-dimensional data settings. In particular, we revisit
some results in Li and Rosalsky [Li, D. and Rosalsky, A. (2006). Some strong limit
theorems for the largest entries of sample correlation matrices, The Annals of Applied
Probability, 16, 1, 423–447]. Four of the main theorems of Li and Rosalsky (2006)
are established in their full generalities and we simplify substantially some proofs of
the quoted paper. Further, we generalize a theorem which is useful in deriving the
existence of the pth moment as well as in studying the convergence rates in law of
large numbers.
Keywords: Convergence almost surely; Correlation coefficient; Strong Law of Large numbers; convergence rates.
∗
University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. Email: [email protected]
†
University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. Email: [email protected]
1 Introduction
As in Li and Rosalsky (2006), we consider an estimation problem concerning the matrix of correlation coefficients in the context of high-dimensional data. In particular, as in the quoted paper, we are interested in asymptotic properties of the largest entries of the matrix of the correlation coefficients when the sample size may be smaller than the number of parameters. Thus, we use the same notations as in Li and Rosalsky (2006). Namely, let X = (X_1, X_2, ..., X_p) be a p-variate random vector, and let M_{n,p_n} = (X_{k,i})_{1≤k≤n, 1≤i≤p_n}, whose rows are independent copies of X. Let M = {X_{k,i}; i ≥ 1, k ≥ 1} be an array of i.i.d. random variables. Further, let X̄_i^{(n)} = ∑_{k=1}^n X_{k,i}/n, let X_i^{(n)} denote the ith column of M_{n,p_n}, and let

W_n = max_{1≤i<j≤p_n} ∑_{k=1}^n X_{k,i} X_{k,j}.
Li and Rosalsky (2006) studied the limit behavior of W_n and that of L_n, which is defined as

L_n = max_{1≤i<j≤p_n} |ρ̂_{i,j}^{(n)}|,  with

ρ̂_{i,j}^{(n)} = [ ∑_{k=1}^n (X_{k,i} − X̄_i^{(n)}) (X_{k,j} − X̄_j^{(n)}) ] / [ ( ∑_{k=1}^n (X_{k,i} − X̄_i^{(n)})² )^{1/2} ( ∑_{k=1}^n (X_{k,j} − X̄_j^{(n)})² )^{1/2} ].
In Jiang (2004), the author derived the asymptotic properties of the statistic Ln and he
proposed a test for testing if the components of the p-column vector X are uncorrelated.
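To fix ideas, the sketch below computes the statistics W_n and L_n on simulated data. It is purely illustrative; the distribution, dimensions and seed are arbitrary choices made here, and the printed normalizations simply mirror the scalings that appear in this section and in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def W_and_L(X):
    """Compute W_n and L_n for an n x p data matrix X (rows are observations)."""
    n, p = X.shape
    S = X.T @ X                              # entries sum_k X_{k,i} X_{k,j}
    iu = np.triu_indices(p, k=1)             # index pairs with i < j
    W = S[iu].max()
    R = np.corrcoef(X, rowvar=False)         # sample correlation matrix (rho-hat)
    L = np.abs(R[iu]).max()
    return W, L

n, p = 50, 200                               # high-dimensional setting: p > n
X = rng.standard_normal((n, p))
W, L = W_and_L(X)
print("W_n / sqrt(n log n)   =", W / np.sqrt(n * np.log(n)))
print("sqrt(n / log p) * L_n =", np.sqrt(n / np.log(p)) * L)
```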
In this paper, we establish the asymptotic results which refine the analysis in Li and
Rosalsky (2006). More specifically, with respect to the similar work in literature, we extend
the existing findings in three ways. First, Theorems 2.1 and 2.3 of Li and Rosalsky (2006)
are established in their full generalities. Thanks to the established results, we also revisit
the statement given in Remark 2.1 in Li and Rosalsky (2006). Further, from the established
results, we simplify remarkably the proof of the result stated in Remark 2.3 of Li and
Rosalsky (2006). More precisely, we prove that the condition which is stated as sufficient
is also necessary. Second, we refine Theorems 3.2 and 3.3 of Li and Rosalsky (2006) and, as compared to the proofs given in Li and Rosalsky (2006), we provide proofs which are significantly shorter than those in the quoted paper. Third, we generalize Theorem 3.2.1 in Chung (1974), which is useful in establishing the existence of the pth moment as well as in studying the convergence rates in the law of large numbers. Specifically,
the established result is useful in deriving the main results in Erdös (1949), Baum and Katz
(1965) and Katz (1963) among others.
The remainder of this paper is organized as follows. Section 2 presents some preliminary results. In Section 3, we present the main results of this paper. Finally, for the
convenience of the reader, we present some proofs in the Appendix A and, we recall in the
Appendix B some existing results used in this paper.
2 Some Preliminary Results
In this section, we derive some results which are useful in establishing the main results
of this paper. In particular, the results of this section generalize Theorem 3.2.1 given in
Chung (1974). For the convenience of the reader, the quoted theorem stipulates that for a random variable X, we have

∑_{n=1}^∞ P(|X| > n) ≤ E(|X|) ≤ 1 + ∑_{n=1}^∞ P(|X| > n).

Thus, E(|X|) < ∞ if and only if ∑_{n=1}^∞ P(|X| > n) < ∞. To introduce some notations, let a_n = O(b_n) stand for: the sequence a_n/b_n, n = 1, 2, . . . is bounded.
Theorem 2.1. Let {α_n}_{n=0}^∞ be a nonnegative sequence of real numbers, and let {β_n}_{n=0}^∞ be a nonnegative and nondecreasing sequence of real numbers such that β_0 = 0, lim_{n→∞} β_n = ∞, β_{n+1} − β_n = O(β_n) and c^{−1} α_n ≤ β_n − β_{n−1} ≤ c α_n, n = 1, 2, . . ., for some c > 1. Then, for any random variable X,

c^{−1} ∑_{n=1}^∞ α_n P(|X| > β_n) ≤ E(|X|) ≤ β_1 + (B + 1) c ∑_{n=1}^∞ α_n P(|X| > β_n), for some B > 0.

Thus, E(|X|) < ∞ if and only if ∑_{n=1}^∞ α_n P(|X| > β_n) < ∞.
The proof of this theorem is given in Appendix A. To illustrate how the established result generalizes Theorem 3.2.1 in Chung (1974), we note first that, from the quoted theorem, we also have

∑_{n=1}^∞ P(|X| > n) ≤ E(|X|) ≤ 1 + 2 ∑_{n=1}^∞ P(|X| > n).

For the particular case where α_n = 1 and β_n = n, Theorem 2.1 yields Theorem 3.2.1 in Chung (1974) with c = B = 1. By using this theorem, we establish the following result, which is useful in deriving one of the main results of this paper.
Corollary 2.2. Let α > 0, let β > 0 and let X be a random variable. Then, ∑_{n=1}^∞ n^α P(|X| > n^β) < ∞ if and only if E(|X|^{(α+1)/β}) < ∞.
The proof of this corollary is given in the Appendix A. Further, from Theorem 2.1 and
Corollary 2.2, we derive the following corollary which is useful in establishing the second
main result of this paper.
Corollary 2.3. Let α, β > 0, let {α_n}_{n=1}^∞ and {β_n}_{n=1}^∞ be nonnegative sequences of real numbers, and suppose that α_n/n^α and β_n/n^β are bounded away from 0 and ∞. Then, for a random variable X, ∑_{n=1}^∞ α_n P(|X| > β_n) < ∞ if and only if E(|X|^{(α+1)/β}) < ∞.

The proof follows directly from Corollary 2.2. Below, we derive a lemma which plays a central role in establishing the main results. Thanks to the established lemma, we also simplify significantly the proof of the statement in Remark 2.3 of Li and Rosalsky (2006). To
introduce some notations, we consider that X1,i , i = 1, 2, ..., are independent and identically
distributed random variables, and Xk = (Xk,1 , Xk,2, ..., ) is an independent copy of a random
vector X1 = (X1,1 , X1,2 , ..., ).
Lemma 2.4. Let m be a fixed positive integer and let {u_n}_{n=1}^∞ be a nonnegative sequence of real numbers. Then,

∑_{n=1}^∞ n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) < ∞ if and only if ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞.
The proof of this lemma is given in the Appendix A. Note that the if part is obvious
as it follows directly from the sub-additivity of a probability measure. We also prove the
following lemma which is useful in deriving the main results of this paper.
Lemma 2.5. Let m be a fixed positive integer. Let {u_n}_{n=1}^∞ be a nonnegative and nondecreasing sequence of real numbers such that lim_{n→∞} u_n = ∞. Further, suppose that there exists a nonnegative, continuous and increasing function f such that f(u_n)/n^β is bounded away from 0 and from infinity, for some β > 0. Then,

∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞ if and only if E[ ( f( ∏_{h=1}^m |X_{1,h}| ) )^{(m+1)/β} ] < ∞.

Proof. From Lemma 2.4, ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞ if and only if ∑_{n=1}^∞ n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) < ∞. Since the function f is increasing and continuous, this last statement is equivalent to ∑_{n=1}^∞ n^m P( f( ∏_{h=1}^m |X_{1,h}| ) ≥ f(u_n) ) < ∞. Then, by using Corollary 2.3, this last statement is equivalent to E[ ( f( ∏_{h=1}^m |X_{1,h}| ) )^{(m+1)/β} ] < ∞, and this completes the proof.
Corollary 2.6. We have ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ √(n ln(n)) ) < ∞ if and only if E[ ∏_{h=1}^m |X_{1,h}|^{2(m+1)} / ( ln( e + ∏_{h=1}^m |X_{1,h}| ) )^{m+1} ] < ∞, with m a fixed positive integer.
The proof of this corollary is given in the Appendix A. From this corollary, we establish
the following result which improves the statement in Remark 2.3 of Li and Rosalsky (2006).
For the convenience of the reader, we recall that, in Remark 2.3 in Li and Rosalsky (2006), the authors conclude that ∑_{n=1}^∞ P( max_{1≤i<j≤n} |X_{1,i} X_{1,j}| ≥ √(n log n) ) < ∞ implies E|X_1|^β < ∞, for 0 ≤ β < 6. This becomes a special case of the following result by taking m = 2.

Corollary 2.7. Suppose that ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ √(n ln(n)) ) < ∞, for a fixed positive integer m. Then, E[|X_{1,1}|^β] < ∞, for all 0 ≤ β < 2(m + 1).
The proof of this corollary follows directly from Corollary 2.6 along with classical
properties of expected value of random variables.
Corollary 2.8. Suppose that the conditions of Lemma 2.5 hold with u_n/n^β bounded away from 0 and from infinity, for some β > 0. Then, ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞ if and only if E[|X_{1,1}|^{(m+1)/β}] < ∞.
Proof. This result follows immediately from Lemma 2.5 with f being an identity function,
and the fact that X1,i , i = 1, 2, ... are i.i.d. random variables.
Corollary 2.9. Let {u_n}_{n=1}^∞ be a nonnegative sequence of real numbers. Then, ∑_{n=1}^∞ n² P(|X_{1,1} X_{1,2}| ≥ u_n) < ∞ if and only if ∑_{n=1}^∞ P( max_{1≤i<j≤n} |X_{1,i} X_{1,j}| ≥ u_n ) < ∞.

Proof. The proof follows directly from Lemma 2.4 by taking m = 2.
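The proof of Lemma 2.4 (see Appendix A) shows that, under its hypotheses and for m = 2, the tail probability P( max_{1≤i<j≤n} |X_{1,i} X_{1,j}| ≥ u ) is asymptotically comparable to C(n, 2) P(|X_{1,1} X_{1,2}| ≥ u). The Monte Carlo sketch below compares the two quantities for one arbitrary choice of distribution, dimension and threshold; it is only an illustration of this equivalence, not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(1)
n, u = 20, 6.0                      # dimension and threshold, chosen arbitrarily
reps = 100_000

# Left-hand side: P( max_{1<=i<j<=n} |X_i X_j| >= u ).
# The maximum over pairs equals the product of the two largest |X_i|.
X = np.abs(rng.standard_normal((reps, n)))
X.sort(axis=1)
lhs = np.mean(X[:, -1] * X[:, -2] >= u)

# Right-hand side: C(n, 2) * P(|X_1 X_2| >= u), estimated from independent pairs.
pairs = rng.standard_normal((1_000_000, 2))
rhs = (n * (n - 1) / 2) * np.mean(np.abs(pairs[:, 0] * pairs[:, 1]) >= u)

print(lhs, rhs)   # roughly comparable for large u (the lemma is an asymptotic statement)
```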
3 Main Results
In this section, we present the main results of this paper. In particular, Theorems 2.1 and 2.3
of Li and Rosalsky (2006) are established in their full generalities. We also refine Theorems 3.2 and 3.3 in Li and Rosalsky (2006) and we provide proofs which are significantly shorter than those given in the quoted paper. In the sequel, as in Li and Rosalsky (2006), let {(U_{k,i}, V_{k,i}); i ≥ 1, k ≥ 1} be i.i.d. two-dimensional random vectors, and let {p_n, n ≥ 1} be a sequence of positive integers. Further, let T_n = max_{1≤i≠j≤p_n} ∑_{k=1}^n U_{k,i} V_{k,j}, n = 1, 2, . . ., let {Y_n, n = 1, 2, . . .} be a sequence of i.i.d. random variables where Y_1 is distributed as U_{1,1} V_{1,2}, and let S_n = ∑_{k=1}^n Y_k, n = 1, 2, . . .
k=1
Theorem 3.1. Suppose that n/pn is bounded away from 0 and ∞, and let 1/2 < α ≤ 1.
Then, the following conditions are equivalent.
(1). lim_{n→∞} T_n/n^α = 0 a.s.
(2). ∑_{n=1}^∞ P( max_{1≤i≠j≤n} |U_{1,i} V_{1,j}| ≥ n^α ) < ∞ and E(U_{1,1}) E(V_{1,1}) = 0.
(3). E( |U_{1,1}|^{3/α} |V_{1,2}|^{3/α} ) < ∞ and E(U_{1,1}) E(V_{1,1}) = 0.
Proof. The equivalence between parts (2) and (3) follows directly from Corollary 2.8 by
taking m = 2 and X1,1 = U1,1V1,2 . As for the equivalence between part (1) and part (2), the
proof of "only if" part is similar to that given in Li and Rosalsky (2006) and thus, we need to
give the proof of the "if" part. To this end, note that by Corollary 2.9 and Corollary 2.3, the condition ∑_{n=1}^∞ P( max_{1≤i≠j≤n} |U_{1,i} V_{1,j}| ≥ n^α ) < ∞ is equivalent to E|U_{1,1} V_{1,2}|^{3/α} < ∞. Then, by the celebrated theorem of Baum and Katz (1965) (or see Theorem B.1 in Appendix B), we have ∑_{n=1}^∞ n P( sup_{m≥n} |S_m|/m^α > ε ) < ∞ for all ε > 0. Then, since c^{−1} ≤ p_n/n < c, n ≥ 1, which means p_n²/n < c² n²/n = c² n, we have

(3.1)  ∑_{n=1}^∞ (p_n²/n) P( |S_n|/n^α > ε ) < ∞,  for all ε > 0.

Further, one verifies that lim_{c↓1} lim sup_{n→∞} [cn]^α/n^α = 1. Further, note that p_n²/n > c^{−2} n, and then, by (3.1), we have c^{−2} n P( |S_n|/n^α > ε ) → 0 and P( |S_n|/n^α > ε ) → 0, i.e. S_n/n^α → 0 in probability. Hence, by Theorem 3.1 of Li and Rosalsky (2006), we have lim sup_{n→∞} T_n/n^α ≤ ε a.s. for all ε > 0, and then, by letting ε ↓ 0, we get the desired result, and this completes the proof.
Note that, by part (3) of Theorem 3.1, we generalize Theorem 3.2 of Li and Rosalsky (2006). In addition, we present a very short proof of the equivalence between part (1)
and part (2) as compared to the proof given in the quoted paper.
Theorem 3.2. Suppose that n/p_n is bounded away from 0 and ∞. If E(U_{1,1}) E(V_{1,1}) = 0, E(U_{1,1}²) E(V_{1,1}²) = 1 and

(3.2)  E[ |U_{1,1} V_{1,2}|^6 / (log(e + |U_{1,1} V_{1,2}|))³ ] < ∞  or  ∑_{n=1}^∞ P( max_{1≤i≠j≤n} |U_{1,i} V_{1,j}| ≥ √(n log n) ) < ∞,

then

lim sup_{n→∞} T_n/√(n log n) ≤ 2 a.s.

Conversely, if lim sup_{n→∞} T_n/√(n log n) < ∞ a.s., then (3.2) holds, E(U_{1,1}) E(V_{1,1}) = 0, and E(|U_{1,1}|^β) E(|V_{1,2}|^β) < ∞ for all 0 ≤ β < 6.
Proof. The proof of the second part of the theorem follows from Corollary 2.7 and by following the same steps as in the proof of the only if part of Theorem 3.3 of Li and Rosalsky (2006). To prove the first part of the theorem, we note first that the equivalence between the conditions in (3.2) follows directly from Corollary 2.6 by taking m = 2. Further, if

E[ |U_{1,1} V_{1,2}|^6 / (log(e + |U_{1,1} V_{1,2}|))³ ] < ∞,

by Theorem 3 of Lai (1974) (or see Theorem B.2 in Appendix B), we have

∑_{n=2}^∞ n P( |S_n|/√(n log n) > λ ) < ∞  for all λ > 2,

and then, S_n/√(n log n) → 0 in probability and ∑_{n=2}^∞ (p_n²/n) P( |S_n|/√(n log n) > λ ) < ∞ for all λ > 2. Therefore, by Theorem 3.1 of Li and Rosalsky (2006), we get lim sup_{n→∞} T_n/√(n log n) ≤ λ a.s. for all λ > 2, and then, letting λ ↓ 2, we get the desired result, and this completes the proof.
Note that by taking β = 2 in the second part of Theorem 3.2, we have the statement in
Li and Rosalsky (2006). Further, in the first part of Theorem 3.2, the condition (3.2) relaxes
the condition (3.15) of Li and Rosalsky (2006). Moreover, in addition to stating in its full
generality the result of Theorem 3.3 of Li and Rosalsky (2006), we simplify substantially
the proof. By using Corollary 2.9, we establish the following result which generalizes
Theorem 2.1 in Li and Rosalsky (2006).
Theorem 3.3. Suppose that n/pn is bounded away from 0 and ∞. Let 1/2 < α ≤ 1. Then,
the following statements are equivalent.
(1). ∑_{n=1}^∞ n² P(|X_{1,1} X_{1,2}| ≥ n^α) < ∞ and E(X_{1,1}) = 0.
(2). ∑_{n=1}^∞ P( max_{1≤i<j≤n} |X_{1,i} X_{1,j}| ≥ n^α ) < ∞ and E(X_{1,1}) = 0.
(3). lim_{n→∞} W_n/n^α = 0 a.s.
(4). E(|X_{1,1}|^{3/α}) < ∞ and E(X_{1,1}) = 0.
Proof. The equivalence between (1) and (2) follows directly from Corollary 2.9 by taking
un = nα . Further, the equivalence between the statements in (2) and (3) is established in Li
and Rosalsky (2006). Finally, the equivalence between the statements (2) and (4) follows
from Corollary 2.8 by taking m = 2, this completes the proof.
Further, by using Lemma 2.4 and Corollary 2.6, we establish in its full generality Theorem 2.3 of Li and Rosalsky (2006).
Theorem 3.4. Suppose that n/pn is bounded away from 0 and ∞. Then, the following
statements are equivalent.
(1). E(X_{1,1}) = 0, E(X_{1,1}²) = 1 and ∑_{n=1}^∞ n² P( |X_{1,1} X_{1,2}| ≥ √(n log n) ) < ∞.
(2). E(X_{1,1}) = 0, E(X_{1,1}²) = 1 and ∑_{n=1}^∞ P( max_{1≤i<j≤n} |X_{1,i} X_{1,j}| ≥ √(n log n) ) < ∞.
(3). lim_{n→∞} W_n/√(n log n) = 2 a.s.
(4). E(X_{1,1}) = 0, E(X_{1,1}²) = 1 and E[ (X_{1,1} X_{1,2})^6 / log³(e + |X_{1,1} X_{1,2}|) ] < ∞.
Proof. The equivalence between (1) and (2) follows directly from Lemma 2.4 by taking m = 2 and u_n = √(n ln(n)). Further, the equivalence between (2) and (3) is established in Li and Rosalsky (2006), and the equivalence between (2) and (4) follows directly from Corollary 2.6 by taking m = 2; this completes the proof.
A Proofs of some preliminary results
Proof of Theorem 2.1. Let Λn = {βn ≤ |X | < βn+1 }, n = 0, 1, . . .. We have
E(|X|) = ∫_{∪_{n=0}^∞ Λ_n} |X| dP = ∑_{n=0}^∞ ∫_{Λ_n} |X| dP,

and then,

(A.1)  ∑_{n=0}^∞ β_n P(Λ_n) ≤ E(|X|) ≤ ∑_{n=0}^∞ β_{n+1} P(Λ_n) = ∑_{n=0}^∞ (β_{n+1} − β_n) P(Λ_n) + ∑_{n=0}^∞ β_n P(Λ_n).
Further, there exists B > 0 such that β_{n+1} − β_n ≤ B β_n, n = 1, 2, . . .. Then,

∑_{n=0}^∞ (β_{n+1} − β_n) P(Λ_n) + ∑_{n=0}^∞ β_n P(Λ_n) ≤ β_1 P(Λ_0) + B ∑_{n=1}^∞ β_n P(Λ_n) + ∑_{n=1}^∞ β_n P(Λ_n) = β_1 P(Λ_0) + (B + 1) ∑_{n=1}^∞ β_n P(Λ_n).

Then, from (A.1), we have

(A.2)  ∑_{n=1}^∞ β_n P(Λ_n) ≤ E(|X|) ≤ β_1 + (B + 1) ∑_{n=1}^∞ β_n P(Λ_n).
Observe that

(A.3)  ∑_{n=1}^N β_n P(Λ_n) = ∑_{n=1}^N (β_n − β_{n−1}) P(|X| ≥ β_n) − β_N P(|X| ≥ β_{N+1}),  N = 1, 2, . . .,

then,

(A.4)  ∑_{n=1}^N β_n P(Λ_n) ≤ ∑_{n=1}^N (β_n − β_{n−1}) P(|X| ≥ β_n),  N = 1, 2, . . .,

and this gives

(A.5)  ∑_{n=1}^∞ β_n P(Λ_n) ≤ ∑_{n=1}^∞ (β_n − β_{n−1}) P(|X| ≥ β_n) ≤ c ∑_{n=1}^∞ α_n P(|X| ≥ β_n),
and then, combining (A.2) and (A.5), we have

(A.6)  ∑_{n=1}^∞ β_n P(Λ_n) ≤ E(|X|) ≤ β_1 + (B + 1) ∑_{n=1}^∞ β_n P(Λ_n) ≤ β_1 + (B + 1) c ∑_{n=1}^∞ α_n P(|X| ≥ β_n).

First, suppose that E(|X|) = ∞. By (A.6), we have ∑_{n=1}^∞ α_n P(|X| ≥ β_n) = ∞ and then c^{−1} ∑_{n=1}^∞ α_n P(|X| ≥ β_n) = E(|X|) = (B + 1) c ∑_{n=1}^∞ α_n P(|X| ≥ β_n) = ∞; this proves the statement. Second, suppose that E(|X|) < ∞. By the Lebesgue dominated convergence theorem, we have lim_{N→∞} E(|X| I_{{|X|≥β_{N+1}}}) = 0 and then

β_N P(|X| ≥ β_{N+1}) ≤ β_{N+1} P(|X| ≥ β_{N+1}) ≤ E(|X| I_{{|X|≥β_{N+1}}}) → 0 as N → ∞.

Thus, from (A.3), we have

(A.7)  ∑_{n=1}^∞ (β_n − β_{n−1}) P(|X| ≥ β_n) = ∑_{n=1}^∞ β_n P(Λ_n),

then, since c^{−1} α_n ≤ β_n − β_{n−1} ≤ c α_n, by combining (A.6) and (A.7), we have

c^{−1} ∑_{n=1}^∞ α_n P(|X| ≥ β_n) ≤ ∑_{n=1}^∞ β_n P(Λ_n) ≤ E(|X|) ≤ β_1 + (B + 1) c ∑_{n=1}^∞ α_n P(|X| ≥ β_n),

this completes the proof.
Proof of Corollary 2.2. The condition that ∑_{n=1}^∞ n^α P(|X| > n^β) < ∞ is equivalent to ∑_{n=1}^∞ n^α P(|X|^{(α+1)/β} > n^{α+1}) < ∞. Now let α_n := n^α and β_n := n^{α+1}. We have

β_{n+1} − β_n = (n + 1)^{α+1} − n^{α+1} = n^{α+1} [ ((n+1)/n)^{α+1} − 1 ] ≤ (2^{α+1} − 1) · n^{α+1} = O(n^{α+1}),

and

β_n − β_{n−1} = n^{α+1} − (n − 1)^{α+1} = n^α [ n − ((n−1)/n)^α · (n − 1) ] ≥ n^α = α_n.

Also, since n − ((n−1)/n)^α (n − 1) converges, and every convergent sequence is bounded, there exists a constant C such that for all n ≥ 1, n − ((n−1)/n)^α (n − 1) ≤ C. Let c = max{2, C}. We have verified that c^{−1} α_n ≤ β_n − β_{n−1} ≤ c α_n for all n, for some c ≥ 1. Therefore, the rest of the proof follows directly from Theorem 2.1.
Proof of Corollary 2.3. The proof follows from Corollary 2.2. Since α_n/n^α and β_n/n^β are bounded away from 0 and ∞, there exists a > 1 such that a^{−1} < α_n/n^α < a and a^{−1} < β_n/n^β < a. Thus ∑_{n=1}^∞ α_n P(|X| > β_n) < ∞ implies ∑_{n=1}^∞ a^{−1} n^α P(|X| > a n^β) < ∞, and this is equivalent to ∑_{n=1}^∞ n^α P(|X|/a > n^β) < ∞. Now by Corollary 2.2, E((|X|/a)^{(α+1)/β}) < ∞, i.e. E(|X|^{(α+1)/β}) < ∞.

Conversely, if E(|X|^{(α+1)/β}) < ∞, we have E((a · |X|)^{(α+1)/β}) < ∞. Then, by Corollary 2.2, ∑_{n=1}^∞ n^α P(|X| > n^β/a) < ∞, which implies ∑_{n=1}^∞ α_n P(|X| > β_n) < ∞; this completes the proof.
Proof of Lemma 2.4. The sufficient condition follows directly from the sub-additivity. To prove the necessary condition, let A = { n : P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) = 0 }. We have

(A.8)  ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) = ∑_{n∈N\A} P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ),

and note that if n ∈ A, P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) = 0 and then n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) = 0, for all n ∈ A. Then,

(A.9)  ∑_{n=1}^∞ n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) = ∑_{n∈N\A} n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ).

Hence, from (A.8) and (A.9), it suffices to prove that if ∑_{n∈N\A} P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞ then ∑_{n∈N\A} n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) < ∞. Hence, in the sequel, we suppose without loss of generality that P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ) > 0 for all n = 1, 2, . . . Thus, if ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) < ∞, we have

(A.10)  lim_{n→∞} P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) = 0.

Then, it suffices to prove that

m! P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ u_n ) ∼ n^m P( ∏_{h=1}^m |X_{1,h}| ≥ u_n ),  as n → ∞.

For convenience X_{1,i} is written as X_i. First, observe that log x ≤ x − 1 for all x > 0. Then,

log P( ∏_{h=1}^m |X_h| ≤ u_n ) ≤ P( ∏_{h=1}^m |X_h| ≤ u_n ) − 1 = −P( ∏_{h=1}^m |X_h| > u_n ),

and then,

( n!/((n − m)! m!) ) log P( ∏_{h=1}^m |X_h| ≤ u_n ) ≤ −( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ).

Taking the exponential of both sides, we have

[ P( ∏_{h=1}^m |X_h| ≤ u_n ) ]^{ n!/((n − m)! m!) } ≤ exp[ −( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ].

Thus,

P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≤ u_n ) ≤ exp[ −( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ],

and then

(A.11)  P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ) ≥ 1 − exp[ −( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ] > 0.

Then, by combining (A.10) and (A.11), we get

(A.12)  lim_{n→∞} P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ) = lim_{n→∞} ( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) = 0.

By combining (A.11) and (A.12) along with the fact that 1 − e^{−x} ∼ x as x → 0, we get

(A.13)  lim_{n→∞} [ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ) / ( ( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ) ] ≥ 1.

Further, by sub-additivity, we have

( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ≥ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ),

and then,

(A.14)  lim_{n→∞} [ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ) / ( ( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ) ] ≤ 1.

Hence, by combining (A.13) and (A.14), we have

( n!/((n − m)! m!) ) P( ∏_{h=1}^m |X_h| > u_n ) ∼ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ).

Therefore,

( n^m/m! ) P( ∏_{h=1}^m |X_h| > u_n ) ∼ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| > u_n ),

this completes the proof.
Proof of Corollary 2.6. By Lemma 2.4, ∑_{n=1}^∞ P( max_{1≤i_1<i_2<···<i_m≤n} ∏_{h=1}^m |X_{1,i_h}| ≥ √(n ln(n)) ) < ∞ if and only if ∑_{n=3}^∞ n^m P( ∏_{h=1}^m |X_{1,h}| ≥ √(n ln(n)) ) < ∞, and this is equivalent to ∑_{n=3}^∞ n^m P( (∏_{h=1}^m |X_{1,h}|)² ≥ n ln(n) ) < ∞. Then, by taking f(x) = x/ln(e + √x), x > 3, one can verify that f(n ln(n))/n > 1/2 for all n > 3 and lim_{n→∞} f(n ln(n))/n = 2, and this implies that f(n ln(n))/n is bounded away from 0 and from infinity. Therefore, the proof follows from Lemma 2.5 by taking β = 1.
B Theorems of Baum and Katz (1965) and Lai (1974) used
For the convenience of the reader, we recall in this Appendix two results from the
celebrated theorems of Baum and Katz (1965), and Lai (1974) which are used in this paper.
These results can also be found in Li and Rosalsky (2006).
Theorem B.1 (Baum and Katz (1965)). Let {S_n, n ≥ 1} be a sequence of partial sums as defined in Section 3. Let β > 0 and α > 1/2, and suppose that E(Y_1) = 0 if α ≤ 1. Then, the following are equivalent:

∑_{n=1}^∞ n^{2β−1} P( |S_n|/n^α > ε ) < ∞  for all ε > 0,
∑_{n=1}^∞ n^{2β−1} P( sup_{m≥n} |S_m|/m^α > ε ) < ∞  for all ε > 0,
E( |Y_1|^{(2β+1)/α} ) < ∞.
Theorem B.2 (Lai (1974)). Let β > 0, let S_n be as in Theorem B.1 and suppose that E(Y_1) = 0, E(Y_1²) = 1, and E[ |Y_1|^{4β+2} / (ln(e + |Y_1|))^{2β+1} ] < ∞. Then,

∑_{n=1}^∞ n^{2β−1} P( |S_n|/√(n ln(n)) > λ ) < ∞  for all λ > 2√β.
References
[1] K. L. Chung (1974). A Course in Probability Theory, Third Edition, Academic Press.
[2] P. Erdös (1949). On a theorem of Hsu and Robbins, Ann. Math. Statist. 20, 286–291.
[3] L. E. Baum, and M. Katz (1965). Convergence rates in the law of large numbers.,
Trans. Amer. Math. Soc., 120, 108–123.
[4] M. Katz (1963). The probability in the tail of a distribution, Ann. Math. Statist. 34,
312–318.
[5] T. Jiang (2004). The asymptotic distributions of the largest entries of sample correlation matrices, Ann. Appl. Probab., 14, 865–880.
[6] D. Li and A. Rosalsky (2006). Some strong limit theorems for the largest entries of sample correlation matrices, Ann. Appl. Probab., 16, 1, 423–447.
[7] T. L. Lai (1974). Limit theorems for delayed sums, Ann. Probab., 2, 3, 432–440.
GENERATING THE IDEALS DEFINING UNIONS OF SCHUBERT VARIETIES
arXiv:1405.2945v1 [math.AG] 12 May 2014
ANNA BERTIGER
A BSTRACT. This note computes a Gröbner basis for the ideal defining a union of Schubert varieties.
More precisely, it computes a Gröbner basis for unions of schemes given by northwest rank conditions on the space of all matrices of a fixed size. Schemes given by northwest rank conditions include
classical determinantal varieties and matrix Schubert varieties–closures of Schubert varieties lifted
from the flag manifold to the space of matrices.
1. INTRODUCTION
We compute a Gröbner basis, and hence ideal generating set, for the ideal defining a union of
schemes each given by northwest rank conditions with respect to an “antidiagonal term order.”
A scheme defined by northwest rank conditions is any scheme whose defining equations are of
the form “all k × k minors in the northwest i × j sub-matrix of a matrix of variables,” where i, j,
and k can take varying values. These schemes represent a generalization of classical determinantal
varieties–those varieties with defining equations all (r+1)×(r+1) minors of a matrix of variables.
One geometrically important collection of schemes defined by northwest rank conditions is the set
of matrix Schubert varieties. Matrix Schubert varieties are closures of the lift of Schubert varieties
from the complete flag manifold to matrix space [Ful92]. In general, a matrix Schubert variety
for a partial permutation π is the subvariety of matrix space given by the rank conditions that
the northwest i × j sub-matrix must have rank at most the number of 1s in the northwest i × j
sub-matrix of the partial permutation matrix for π. Notice that the set of matrix Schubert varieties
contains the set of classical determinantal varieties, which are the zero locus of all minors of a
fixed size on the space of all matrices of fixed size.
Matrix Schubert varieties associated to honest, that is non-partial, permutations are the closures
of the lifts of the corresponding Schubert varieties in the flag manifold, B− \GLn . If Xπ is the matrix
Schubert variety for an honest permutation π the projection
{full rank matrices} B− \ GLn C = F`Cn
sends Xπ ∩ GLn C onto the Schubert variety Xπ ⊆ F`Cn . Schubert varieties, orbits of B+ , stratify
F`Cn and give a basis for H∗ (F`Cn ). It is this application that led to the introduction of matrix
Schubert varieties in [Ful92]. Knutson and Miller showed that matrix Schubert varieties have a
rich algebro-geometric structure corresponding to beautiful combinatorics [KM05]. Fulton’s generators are a Gröbner basis with respect to any antidiagonal term order and their initial ideal is
the Stanley-Reisner ideal of the “pipe dream complex.” Further, Knutson and Miller show that
the pipe dream complex is shellable, hence the original ideal is Cohen-Macaulay. Pipe dreams,
the elements of the pipe dream complex, were originally called RC graphs and were developed
by Bergeron and Billey [BB93] to describe the monomials in polynomial representatives for the
classes corresponding to Schubert varieties in H∗ (F`Cn ).
The importance of Schubert varieties, and hence matrix Schubert varieties, to other areas of
geometry has become increasing evident. For example, Zelevinsky [Zel85] showed that certain
quiver varieties, sequences of vector space maps with fixed rank conditions, are isomorphic to Schubert varieties. Knutson, Miller and Shimozono [KMS06] produce combinatorial formulae for quiver varieties using many combinatorial tools reminiscent of those for Schubert varieties.
Date: May 14, 2014.
1.1. Notation and Background. Much of the background surveyed here can be found in [MS05].
Let B− (respectively B+ ) denote the group of invertible lower triangular (respectively upper triangular) n × n matrices. Let M = (mi,j ) be a matrix of variables. In what follows π will be a possibly
partial permutation, written in one-line notation π(1) . . . π(n), where undefined values π(i) are
written as ?. We shall write permutation even when we mean partial permutation in cases where
written ?. We shall write permutation even when we mean partial permutation in cases where
there is no confusion. A matrix Schubert variety Xπ is the closure B− πB+ in the affine space of all
matrices, where π is a permutation matrix and B− and B+ act by downward row and rightward
column operations respectively. Notice that for π an honest permutation Xπ is the closure of the
lift of Xπ = B− \B− πB+ ⊆ B− \GLn C to the space of n × n matrices.
The Rothe diagram of a permutation is found by looking at the permutation matrix and crossing
out all of the cells weakly below, and the cells weakly to the right of, each cell containing a 1. The
remaining empty boxes form the Rothe diagram. The essential boxes [Ful92] of a permutation are
those boxes in the Rothe diagram that do not have any boxes of the diagram immediately south
or east of them. The Rothe diagrams for 2143 and 15432 are given in Figure 1.1. In both cases the
essential boxes are marked with the letter e.
FIGURE 1.1. The Rothe diagrams and essential sets of 2143 (left) and 15432 (right).
The rank matrix of a permutation π, denoted r(π), gives in each cell r(π)ij the rank of the i × j
northwest-justified sub-matrix of the permutation matrix for π. For example, the rank matrix of
15432 is
1 1 1 1 1
1 1 1 1 2
1 1 1 2 3 .
1 1 2 3 4
1 2 3 4 5
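For readers who want to experiment, the rank matrix is immediate to compute from the permutation matrix, since the rank of a northwest sub-matrix of a permutation matrix is just the number of 1s it contains. The following short Python sketch (ours, not part of the original text; it assumes numpy is available) reproduces the matrix above for 15432.

import numpy as np

def rank_matrix(perm):
    # r(pi): entry (i, j) is the rank of the northwest i x j sub-matrix of
    # the permutation matrix of pi, given in one-line notation (1-indexed values).
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    for i, v in enumerate(perm):
        P[i, v - 1] = 1
    return np.array([[P[:i + 1, :j + 1].sum() for j in range(n)]
                     for i in range(n)])

print(rank_matrix([1, 5, 4, 3, 2]))   # reproduces the rank matrix of 15432 shown above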
Theorem 1.1 ( [Ful92]). Matrix Schubert varieties have radical ideal I(Xπ ) = Iπ given by determinants
representing conditions given in the rank matrix r(π), that is, the (r(π)ij + 1) × (r(π)ij + 1) determinants
of the northwest i × j sub-matrix of a matrix of variables. In fact, it is sufficient to impose only those rank
conditions r(π)ij such that (i, j) is an essential box for π.
Hereafter we call the determinants corresponding to the essential rank conditions, or the analogous determinants for any ideal generated by northwest rank conditions, the Fulton generators.
One special form of ideal generating set is a Gröbner basis. To define a Gröbner basis we set
a total ordering on the monomials in a polynomial ring such that 1 ≤ m and m < n implies
mp < np for all monomials m, n and p. Let init f denote the largest monomial that appears in the
polynomial f. A Gröbner basis for the ideal I is a set {f1 , . . . , fr } ⊆ I such that init I := ⟨init f : f ∈
I⟩ = ⟨init f1 , . . . , init fr ⟩. Notice that a Gröbner basis for I is necessarily a generating set for I.
The antidiagonal of a matrix is the diagonal series of cells in the matrix running from the most
northeast to the most southwest cell. The antidiagonal term (or antidiagonal) of a determinant
is the product of the entries in the antidiagonal. For example, the antidiagonal of the 2 × 2 matrix
with rows (a, b) and (c, d) is the pair of cells occupied by b and c, and correspondingly, in the determinant ad − bc the antidiagonal term
is bc. Term orders that select antidiagonal terms from a determinant, called antidiagonal term
orders have proven especially useful in understanding ideals of matrix Schubert varieties. There
are several possible implementations of an antidiagonal term order on an n×n matrix of variables,
any of which would suit the purposes of this paper. One example is weighting the top right entry
highest and decreasing along the top row before starting to decrease again at the right end of the next
row; monomials are then ordered by their total weight.
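As a concrete illustration (ours, not taken from the paper), this weighting does select antidiagonal terms provided the weights drop off geometrically rather than by a constant step; with an arithmetic decrease the diagonal and antidiagonal terms of a 2 × 2 minor would have equal total weight. A small Python check of one such choice on a single minor:

from itertools import permutations

def weight(i, j, n):
    # Row-major reading order, right to left within each row, with
    # geometrically decreasing weights so no two terms of a minor tie.
    return 2 ** ((n - i) * n + (j - 1))

def heaviest_term(rows, cols, n):
    # Cells of the heaviest monomial of the minor on rows x cols (1-indexed);
    # under this weighting it is the antidiagonal of that minor.
    terms = [list(zip(rows, p)) for p in permutations(cols)]
    return max(terms, key=lambda t: sum(weight(i, j, n) for i, j in t))

print(heaviest_term([1, 2, 3], [1, 2, 4], 4))
# [(1, 4), (2, 2), (3, 1)] -- the antidiagonal cells of that minor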
Theorem 1.2 ( [KM05]). The Fulton generators for Iπ form a Gröbner basis under any antidiagonal term
order.
Typically we will denote the cells of a matrix that form antidiagonals by A or B. In what follows if A is the antidiagonal of a sub-matrix of M we will use the notation det(A) to denote the
determinant of this sub-matrix. We shall be fairly liberal in exchanging antidiagonal cells and the
corresponding antidiagonal terms, thus, for any antidiagonal term order, A = init det(A).
1.2. Statement of Result. Let I1 , . . . Ir be ideals defined by northwest rank conditions. We will
produce a Gröbner basis, and hence ideal generating set, for I1 ∩ · · · ∩ Ir . For each list of antidiagonals A1 , . . . , Ar , where Ai is the antidiagonal of a Fulton generator of Ii , we will produce a
Gröbner basis element gA1 ,...,Ar for ∩Ii . The generators gA1 ,...,Ar will be products of determinants,
though not simply the product of the r determinants corresponding to the Ai . For a fixed list of
antidiagonals A1 , . . . , Ar , build the generator gA1 ,...,Ar by the following steps (a schematic Python sketch follows the list):
(1) Begin with gA1 ,...,Ar = 1
(2) Draw a diagram with a dot of color i in each box of Ai and connect the consecutive dots of
color i with a line segment of color i.
(3) Break the diagram into connected components. Two dots are connected if they are either
connected by lines or are connected by lines to dots that occupy the same box.
(4) For each connected component, remove the longest series of boxes B such that there is
exactly one box in each row and column and the boxes are all in the same connected component. If there is a tie use the most northwest of the longest series of boxes. Note that B
need not be any of A1 , . . . , Ar . Multiply gA1 ,...,Ar by det(B). Remove this antidiagonal from
the diagram of the connected component, break the remaining diagram into components
and repeat.
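The following Python sketch (ours) implements one reading of this procedure. It treats a connected component simply as the union of the cells of antidiagonals that are chained together by shared boxes, and it repeatedly strips a longest antidiagonal off the remaining cells; the most-northwest tie-break and the re-splitting of a component after each removal are deliberately simplified here.

from typing import List, Set, Tuple

Cell = Tuple[int, int]   # (row, column), 1-indexed as in the paper

def components(antidiagonals: List[List[Cell]]) -> List[Set[Cell]]:
    # Antidiagonals that share a box, directly or through a chain of other
    # antidiagonals, are merged into one component of cells.
    comps: List[Set[Cell]] = []
    for cells in (set(a) for a in antidiagonals):
        touching = [c for c in comps if c & cells]
        for c in touching:
            comps.remove(c)
            cells |= c
        comps.append(cells)
    return comps

def longest_antidiagonal(cells: Set[Cell]) -> List[Cell]:
    # Longest chain of boxes with strictly increasing row and strictly
    # decreasing column, found by an O(n^2) dynamic programme.
    order = sorted(cells)
    best = [[c] for c in order]
    for j, cj in enumerate(order):
        for i in range(j):
            ci = order[i]
            if cj[0] > ci[0] and cj[1] < ci[1] and len(best[i]) + 1 > len(best[j]):
                best[j] = best[i] + [cj]
    return max(best, key=len) if best else []

def generator_factors(antidiagonals: List[List[Cell]]) -> List[List[Cell]]:
    # The antidiagonals B whose determinants det(B) are multiplied together
    # to form g_{A_1,...,A_r}.
    factors = []
    for comp in components(antidiagonals):
        remaining = set(comp)
        while remaining:
            B = longest_antidiagonal(remaining)
            factors.append(B)
            remaining -= set(B)
    return factors

print(generator_factors([[(1, 2), (2, 1)], [(3, 4), (4, 3)]]))   # two disjoint factors
print(generator_factors([[(1, 3), (2, 2)], [(2, 2), (3, 1)]]))   # one merged antidiagonal

Consistent with the examples in Section 2, disjoint antidiagonals come back as separate factors, while overlapping antidiagonals whose union is a single antidiagonal X come back as the single factor X.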
Theorem 1.3. {gA1 ...Ar : Ai is an antidiagonal of a Fulton generator of Ii , 1 ≤ i ≤ r} form a Gröbner
basis, and hence a generating set, for ∩ri=1 Ii .
1.3. Acknowledgements. This work constitutes a portion of my PhD thesis completed at Cornell
University under the direction of Allen Knutson. I wish to thank Allen for his help, advice and
encouragement in completing this project. Thanks also go to Jenna Rajchgot for helpful discussions in the early stages of this work. I’d also like to thank the authors of the computer algebra system
Macaulay2 [GS], which powered the computational experiments necessary to do this work. I’m especially grateful to Mike Stillman who patiently answered many of my Macaulay2 questions over
the course of this work. Kevin Purbhoo gave very helpful comments on drafts of this manuscript
for which I cannot thank him enough.
2. E XAMPLES
We delay the proof of Theorem 1.3 to Section 3 and first give some examples of the generators
produced for given sets of antidiagonals. These examples are given by pictures of the antidiagonals on the left and corresponding determinantal equations on the right. Note that we only give
particular generators, rather than entire generating sets, which might be quite large. We then give
entire ideal generating sets for two smaller intersections.
If r = 1 then for each Fulton generator with antidiagonal A the algorithm produces the generator gA = det(A). Therefore, if we intersect only one ideal the algorithm returns the original set of
Fulton generators. The generator for the antidiagonal shown is exactly the determinant of the one
antidiagonal pictured:
m1,1 m1,2 m1,4
m3,1 m3,2 m3,4
m4,1 m4,2 m4,4
.
The generator for two disjoint antidiagonals is the product of the determinants corresponding
to the two disjoint antidiagonals:
m1,1 m1,2
m2,1 m3,2
m1,1 m1,2 m1,4
m3,1 m3,2 m3,4
m4,1 m4,2 m4,4
.
In general, if A1 , . . . , Ar are disjoint antidiagonals then the algorithm looks at each Ai separately, as they are part of separate components, and the result is that gA1 ,...,Ar = det(A1 ) · · · det(Ar ).
If A1 , . . . Ar overlap to form one antidiagonal X then the last step of the algorithm will occur
only once and will produce gA1 ,...Ar = det(X). For example,
m1,1 m1,2 m1,3 m1,4
m2,1 m2,2 m2,3 m2,4
m3,1 m3,2 m3,3 m3,4
m4,1 m4,2 m4,3 m4,4
.
In this example, there are two longest possible antidiagonals, the three cells occupied by the
green dots and the three cells occupied by the red dots. The ones occupied by the green dots are
more northwest, hence the generator for the three antidiagonals shown below is
m1,1 m1,2
m2,1 m2,2
m1,2 m1,3 m1,3
m2,2 m2,3 m2,3
m3,2 m3,3 m3,4
m4,2 m4,3
m5,2 m2,2
.
In the picture below, the longest possible antidiagonal uses all of the cells in the green antidiagonal
but only some of the cells in the red antidiagonal; however, there is only one possible
longest antidiagonal. Thus the generator is
m1,1 m1,2
m2,1 m2,2
m1,1
m2,1
m3,1
m4,1
m1,2
m2,2
m3,2
m4,2
m1,4
m2,4
m3,4
m4,4
m1,5
m2,5
m3,5
m4,5
m5,1
.
We now give two examples where the complete ideals are comparatively small. Firstly, we
calculate I(X231 ∪ X312 ) = I(X231 ) ∩ I(X312 ). I(X231 ) = ⟨m1,1 , m2,1 ⟩ and I(X312 ) = ⟨m1,1 , m1,2 ⟩. The
antidiagonals and corresponding generators are shown below with antidiagonals from generators
of I(X231 ) shown in red and antidiagonals of generators of I(X312 ) shown in blue. Note that the
antidiagonals are only one cell each in this case.
m1,1
m1,1 m1,2
m1,1 m2,1
m1,2 m2,1
Theorem 1.3 results in
I(X231 ∪ X312 ) = I(X231 ) ∩ I(X312 ) = ⟨m1,1 , m1,1 m1,2 , m1,1 m2,1 , m1,2 m2,1 ⟩.
As a slightly larger example, consider I(X1423 ∪ X1342 ) = I(X1423 ) ∩ I(X1342 ). These generators
are given below in the order that the antidiagonals are displayed reading left to right and top to
bottom. The antidiagonals for I(X1423 ) are shown in red while the antidiagonals for I(X1342 ) are shown
in blue. Note that the full 4 × 4 grid is not displayed, but only the northwest 3 × 3 portion
where antidiagonals for these two ideals may lie.
Here Theorem 1.3 produces
m1,1
m1,1 m1,2
,
m2,1
m2,1 m2,2
m1,1 m1,2
m1,1
+
m3,1 m2,2
m1,1 m1,2
m1,2
+
m2,1 m3,2
m2,2
m1,2
m2,2
m1,1
m1,1 m1,3
,
m2,1 m2,3
m1,3
m2,2
m2,1 m2,2
m3,1 m2,2
m3,1 m1,2
,
m2,1 m2,2
m1,1 m1,3
m3,1 m3,2
m1,1 m1,2
,
m3,1 m2,3
m1,1 m1,3
,
m3,1 m2,3
m1,2 m1,3
m2,2 m2,2
m2,1 m2,2
m3,1 m2,3
m1,1 m1,2 m1,3
, m2,1 m2,2 m2,3
m3,1 m3,2 m3,4
3. P ROOF OF T HEOREM 1.3
We now prove the main result of this paper, Theorem 1.3, which states that the gA1 ,...,Ar generate
I1 ∩ · · · ∩ Ir .
We begin with a few fairly general statements:
Theorem 3.1 ( [Knu]). If {Ii : i ∈ S} are ideals generated by northwest rank conditions then init(∩i∈S Ii ) =
∩i∈S (init Ii ).
Lemma 3.2 ( [KM05]). If J ⊆ K are homogeneous ideals in a polynomial ring such that init J = init K
then J = K.
Lemma 3.3. Let IA and IB be ideals that define schemes of northwest rank conditions and let det(A) ∈ IA
and det(B) ∈ IB be determinants with antidiagonals A and B respectively such that A ∪ B = X and
A ∩ B 6= ∅. Then det(X) is in IA ∩ IB .
Proof. Let VX = V(det(X)), VA = V(IA ) and VB = V(IB ) be the varieties corresponding to the ideals
hdet(X)i, IA and IB . It is enough to show that VA ⊆ VX and VB ⊆ VX .
We will show that, given a matrix with antidiagonal X and a sub-matrix with antidiagonal
A ⊆ X such that the sub-matrix northwest of the cells occupied by A has rank at most length(A) − 1,
the full matrix has rank at most length(X) − 1. The corresponding statement for the sub-matrix
with antidiagonal B can be proven by replacing A with B everywhere.
The basic idea of this proof is that we know the rank conditions on the rows and columns northwest of those occupied by A. The rank conditions given by A then imply other rank conditions as
adding either a row or a column to a sub-matrix can increase its rank by at most one.
FIGURE 3.1. The proof of Lemma 3.3. The antidiagonal cells in A are marked in
black and the antidiagonal cells in X − A ⊆ B are marked in white.
Let k be the number of rows, also the number of columns in the antidiagonal X. Let the length
of A be l + 1, so the rank condition on all rows and columns northwest of those occupied by A
is at most l. Assume that the rightmost column of A is c and the leftmost column of A is t + 1.
Notice that this implies that the bottom row occupied by A is k − t, as the antidiagonal element in
column t + 1 is in row k − t. Thus, the northwest (k − t) × c sub-matrix of matrices in VA has rank at most l.
Notice c ≥ (t+1)+(l+1), with equality if A occupies a continuous set of columns, so matrices in
VA have rank at most l in the northwest (k−t)×(t+l+2) sub-matrix. Adding k−c ≤ k−(t+l+2) columns to
this sub-matrix gives a (k−t)×k sub-matrix with rank at most l+k−c ≤ l+(k−t−l−2) = k−t−2.
Further, by the same principle, moving down t rows, the northwest k × k, i.e. the whole matrix
with antidiagonal X, has rank at most k − t − 2 + t = k − 2, hence has rank at most k − 1 and so is
in VX .
For a visual explanation of the proof of Lemma 3.3 see Figure 3.1.
Lemma 3.4. gA1 ,...,Ar ∈ Ii for 1 ≤ i ≤ r and hence
⟨gA1 ,...,Ar : Ai ranges over all antidiagonals for Fulton generators of Ii ⟩ ⊆ ∩ri=1 Ii .
Proof. Fix i. Let S be the first antidiagonal added to gA1 ,...,Ar that contains a box occupied by Ai .
We shall show that det(S) is in Ii and hence gA1 ,...,Ar ∈ Ii , as it is a multiple of
det(S). If Ai ⊆ S then det(S) ∈ Ii , either because S = Ai or because Ai ⊊ S, in which case we apply Lemma
3.3. Otherwise, |S| ≥ |Ai | and S is weakly to the northwest of Ai . Therefore, there is a subset B
of S such that |B| = |Ai |, and B is weakly northwest of Ai . Hence, B is an antidiagonal for some
determinant in Ii , and again by Lemma 3.3 det(S) ∈ Ii .
Lemma 3.5. init gA1 ,...,Ar = A1 ∪ · · · ∪ Ar under any antidiagonal term order.
Proof. init gA1 ,...,Ar is a product of determinants, with collective antidiagonals A1 ∪ · · · ∪ Ar .
When we combine Lemma 3.5 and Theorem 3.1 we see that init⟨gA1 ,...,Ar ⟩ = init(∩Ii ). Then,
Lemmas 3.2 and 3.4 combine to complete the proof of Theorem 1.3.
Note that Theorem 1.3 may produce an oversupply of generators. For example, if I1 = I2 , then
inputting the same set of p Fulton generators twice results in a Gröbner basis of p2 polynomials
for I1 ∩ I2 = I1 = I2 .
R EFERENCES
[BB93] Nantel Bergeron and Sara Billey, RC-graphs and Schubert polynomials, Experiment. Math. 2 (1993), no. 4, 257–
269. MR 1281474 (95g:05107)
[Ful92] William Fulton, Flags, Schubert polynomials, degeneracy loci, and determinantal formulas, Duke Math. J. 65 (1992),
no. 3, 381–420. MR 1154177 (93e:14007)
[GS]
Daniel R. Grayson and Michael E. Stillman, Macaulay2, a software system for research in algebraic geometry,
Available at http://www.math.uiuc.edu/Macaulay2/.
[KM05] Allen Knutson and Ezra Miller, Gröbner geometry of Schubert polynomials, Ann. of Math. (2) 161 (2005), no. 3,
1245–1318. MR 2180402 (2006i:05177)
[KMS06] Allen Knutson, Ezra Miller, and Mark Shimozono, Four positive formulae for type A quiver polynomials, Invent.
Math. 166 (2006), no. 2, 229–325. MR 2249801 (2007k:14098)
[Knu]
Allen Knutson, Frobenius splitting, point-counting and degeneration, Preprint, arXiv:0911.4941v1.
[MS05] Ezra Miller and Bernd Sturmfels, Combinatorial commutative algebra, Graduate Texts in Mathematics, vol. 227,
Springer-Verlag, New York, 2005. MR 2110098 (2006d:13001)
[Zel85] A. V. Zelevinskiı̆, Two remarks on graded nilpotent classes, Uspekhi Mat. Nauk 40 (1985), no. 1(241), 199–200.
MR 783619 (86e:14027)
| 0 |
arXiv:1302.0312v2 [] 20 Sep 2013
ON THE INTERSECTION ALGEBRA OF PRINCIPAL IDEALS
SARA MALEC
Abstract. We study the finite generation of the intersection algebra of two principal
ideals I and J in a unique factorization domain R. We provide an algorithm that produces
a list of generators of this algebra over R. In the special case that R is a polynomial
ring, this algorithm has been implemented in the commutative algebra software system
Macaulay2. A new class of algebras, called fan algebras, is introduced.
1. Introduction
In this paper, we study the intersection of powers of two ideals in a commutative Noetherian ring. This is achieved by looking at the structure called the intersection algebra, a
recent concept, which is associated to the two ideals.
The purpose of the paper is to study the finite generation of this algebra, and to show
that it holds in the particular important case of principal ideals in a unique factorization
domain (UFD). In the general case, not much is known about the intersection algebra, and
there are many questions that can be asked. Various aspects of the intersection algebra
have been studied by J. B. Fields in [3, 4]. There, he proved several interesting things, including the finite generation of the intersection algebra of two monomial ideals in the power
series ring over a field. He also studied the relationship between the finite generation of the
intersection algebra and the polynomial behavior of a certain function involving lengths of
Tors. It is interesting to note that this algebra is not always finitely generated, as shown
by Fields. The finite generation of the intersection algebra has also appeared in the work
of Ciupercă, Enescu, and Spiroff in [1] in the context of asymptotic growth powers of ideals.
We will start with the definition of the intersection algebra. Throughout this paper, R
will be a commutative Noetherian ring.
Definition 1.1. Let R be a ring with two ideals I and J. Then the intersection algebra
of I and J is B = ⊕_{r,s∈N} I^r ∩ J^s . If we introduce two indexing variables u and v, then
BR (I, J) = Σ_{r,s∈N} (I^r ∩ J^s ) u^r v^s ⊆ R[u, v]. When R, I and J are clear from context, we will
simply denote this as B. We will often think of B as a subring of R[u, v], where there is a
natural N^2 -grading on monomials b ∈ B given by deg(b) = (r, s) ∈ N^2 . If this algebra is
finitely generated over R, we say that I and J have finite intersection algebra.
Example 1.2. If R = k[x, y], I = (x^2 y) and J = (xy^3 ), then an example of an element in
B is 2 + 3x^5 y^9 u^2 v^3 + x^10 y^15 u^4 v, since 2 ∈ I^0 ∩ J^0 = k, x^5 y^9 u^2 v^3 ∈ (I^2 ∩ J^3 )u^2 v^3 = ((x^4 y^2 ) ∩
(x^3 y^9 ))u^2 v^3 = (x^4 y^9 )u^2 v^3 , and x^10 y^15 u^4 v ∈ (I^4 ∩ J)u^4 v = ((x^8 y^4 ) ∩ (xy^3 ))u^4 v = (x^8 y^4 )u^4 v.
We remark that the intersection algebra has connections to the double Rees algebra
R[Iu, Jv], although in practice they can be very different. This relationship is significant
due to the importance of the Rees algebra, but the two objects behave differently. The
source for the different behaviour lies in the obvious fact that the intersection I r ∩ J s is
harder to predict than I r J s as r and s vary. These differences in behavior are of great
interest and should be further explored.
In this paper, we produce an algorithm that gives a set of generators for the algebra for
two principal ideals in a UFD, and we implement the algorithm in Macaulay2 for the case
of principal monomial ideals in a polynomial ring over a field.
The finite generation of an N2 -graded algebra, such as the intersection algebra, can be
rephrased in the following way:
Proposition 1.3. Let B = ⊕_{r,s∈N} Br,s be an N^2 -graded algebra over a ring R. Then B is
finitely generated if and only if there exists an N ∈ N such that for every r, s ∈ N with
r, s > N,
Br,s = Σ_I Π_{j,k=0}^{N} Bj,k^{ijk} ,
where I is the set of all N × N matrices I = (ijk ) such that
r = Σ_{j,k=0}^{N} j·ijk and s = Σ_{j,k=0}^{N} k·ijk .
Proof. If B is finitely generated over R, there exists an N ∈ N such that all the generators
for B come from components Br,s with r, s ≤ N. Let b ∈ Br,s with r, s > N. So b can be
written as a polynomial in the generators with coefficients in R, in other words
b ∈ Σ_{finite} Π_{j,k=0}^{N} Bj,k^{ijk} .
Also, the right hand side must be in the (r, s)-graded component of B: i.e. the sum can
only run over all j, k such that r = Σ j·ijk and s = Σ k·ijk , and j and k both go from 0 to
N. So b ∈ Br,s must be in the right hand side above as claimed. For the other inclusion,
note again that the degrees match up:
Π_{j,k=0}^{N} Bj,k^{ijk} ⊂ Br,s
as long as r = Σ j·ijk and s = Σ k·ijk , so obviously sums of such expressions are also
included in Br,s . The other direction is obvious.
2. The UFD Case
Theorem 2.1. If R is a UFD and I and J are principal ideals, then B is finitely generated
as an algebra over R.
The proof for the finite generation of B relies heavily on semigroup theory. The following
definitions and theorems provide the necessary framework for the proof.
Definition 2.2. A semigroup is a set together with a closed associative binary operation.
A semigroup generalizes a monoid in that it need not contain an identity element. We call
a semigroup an affine semigroup if it is isomorphic to a subsemigroup of Zd for some d. An affine
semigroup is called pointed if it contains the identity, which is the only invertible element
of the semigroup.
Definition 2.3. A polyhedral cone in Rd is the intersection of finitely many closed linear
half-spaces in Rd , each of whose bounding hyperplanes contains the origin. Every polyhedral cone C is finitely generated, i.e. there exist c1 , . . . , cr ∈ Rd with
C = {λ1 c1 + · · · + λr cr |λ1 , . . . , λr ∈ R≥0 }.
We call the cone C rational if c1 , . . . , cr can be chosen to have rational coordinates, and
C is pointed if C ∩ (−C) = {0}.
We present the following two important results without proof: the full proofs are contained in [2] as Theorem 7.16 and 7.15, respectively.
Theorem 2.4. (Gordan’s Lemma) If C is a rational cone in Rd , then C ∩ A is an affine
semigroup for any subgroup A of Zd .
Theorem 2.5. Any pointed affine semigroup Q has a unique finite minimal generating set
HQ .
Definition 2.6. Let C be a rational pointed cone in Rd , and let Q = C ∩ Zd . Then HQ is
called the Hilbert Basis of the cone C.
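Since all the cones used below are two-dimensional and simplicial, their Hilbert bases can be computed by brute force: every Hilbert basis element lies in the parallelepiped spanned by the two ray generators, so it suffices to list the lattice points there and discard those that split as a sum of two nonzero cone points. The Python sketch below (ours; Section 3 instead uses the hilbertBasis command of Macaulay2's Polyhedra package) reproduces the bases computed later in Example 3.7.

from fractions import Fraction
from itertools import product

def hilbert_basis_2d(v1, v2):
    # Hilbert basis of the pointed cone spanned by integer vectors v1, v2.
    (a, c), (b, d) = v1, v2                  # columns of the matrix [v1 v2]
    det = a * d - b * c
    assert det != 0, "cone must be full-dimensional"
    def coords(z):                           # solve [v1 v2] * (l1, l2) = z
        x, y = z
        return (Fraction(d * x - b * y, det), Fraction(-c * x + a * y, det))
    xs = [0, v1[0], v2[0], v1[0] + v2[0]]
    ys = [0, v1[1], v2[1], v1[1] + v2[1]]
    candidates = set()
    for z in product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1)):
        l1, l2 = coords(z)
        if z != (0, 0) and 0 <= l1 <= 1 and 0 <= l2 <= 1:
            candidates.add(z)
    # keep only the points that are not a sum of two nonzero cone points
    return sorted(z for z in candidates
                  if not any((z[0] - p[0], z[1] - p[1]) in candidates
                             for p in candidates if p != z))

print(hilbert_basis_2d((0, 1), (2, 5)))   # [(0, 1), (1, 3), (2, 5)]
print(hilbert_basis_2d((2, 5), (3, 2)))   # [(1, 1), (1, 2), (2, 5), (3, 2)]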
We will use these results to provide a list of generators for B. Notice that for any two
strings of numbers
a = {a1 , . . . , an }, b = {b1 , . . . , bn } with ai , bi ∈ N,
we can associate to them a fan of pointed, rational cones in N2 .
Definition 2.7. We will call two such strings of numbers fan ordered if
ai /bi ≥ ai+1 /bi+1 for all i = 1, . . . , n.
Assume a and b are fan ordered. Additionally, let an+1 = b0 = 0 and a0 = bn+1 = 1. Then
for all i = 0, . . . , n, let
Ci = {λ1 (bi , ai ) + λ2 (bi+1 , ai+1 )|λ1 , λ2 ∈ R≥0 }.
Let Σa,b be the fan formed by these cones and their faces, and call it the fan of a and b in
N2 . Hence
Σa,b = {Ci |i = 0, . . . , n}.
Then, since each Ci is a pointed rational cone, Qi = Ci ∩ Z2 has a Hilbert Basis, say
HQi = {(ri1 , si1 ), . . . , (rini , sini )}.
Note that any Σa,b partitions all of the first quadrant of R2 into cones, so the collection
{Qi |i = 0, . . . , n + 1} partitions all of N2 as well, so for any (r, s) ∈ N2 , (r, s) ∈ Qi for some
i = 0, . . . , n + 1.
In this paper, we are studying the intersection algebra when I and J are principal, so
the order of the exponents in their exponent vectors does not matter. In general, for any
two strings of numbers a and b, there is a unique way to rearrange them so that they are
fan ordered. So a unique fan can be associated to any two vectors. For the purposes of this
paper, we will assume without loss of generality that the exponent vectors are fan ordered.
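Computationally, fan ordering is just a sort of the exponent pairs by the ratios ai /bi . A small Python helper (ours; it assumes all entries of a and b are positive, mirroring what the Macaulay2 code in Section 3 does after discarding zero exponents) returns the reordered strings together with the ray generators (bi , ai ) of the fan, padded with the two coordinate axes:

from fractions import Fraction

def fan_order(a, b):
    pairs = sorted(zip(a, b), key=lambda p: Fraction(p[0], p[1]), reverse=True)
    a_sorted = [p[0] for p in pairs]
    b_sorted = [p[1] for p in pairs]
    # consecutive rays span the cones C_0, ..., C_n of Definition 2.7
    rays = [(0, 1)] + [(bi, ai) for ai, bi in pairs] + [(1, 0)]
    return a_sorted, b_sorted, rays

print(fan_order([2, 5], [3, 2]))
# ([5, 2], [2, 3], [(0, 1), (2, 5), (3, 2), (1, 0)])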
Theorem 2.8. Let R be a UFD with principal ideals I = (p1^{a1} · · · pn^{an}) and J = (p1^{b1} · · · pn^{bn}),
where the pi , i = 1, . . . , n, are distinct irreducible elements of R, and let Σa,b be the fan associated to a = (a1 , . . . , an ) and b =
(b1 , . . . , bn ). Then B is generated over R by the set
{p1^{a1 rij} · · · pi^{ai rij} pi+1^{bi+1 sij} · · · pn^{bn sij} u^{rij} v^{sij} | i = 0, . . . , n, j = 1, . . . , ni },
where (rij , sij ) run over the Hilbert basis for each Qi = Ci ∩ Z2 for every Ci ∈ Σa,b.
Proof. Since B has a natural N2 grading, it is enough to consider only homogeneous monomials b ∈ B with deg(b) = (r, s). Then (r, s) ∈ Qi = Ci ∩ Z2 for some Ci ∈ Σa,b . In other
words, (r, s) ∈ N^2 and
ai /bi ≥ s/r ≥ ai+1 /bi+1 .
So ai r ≥ bi s, and by the ordering on the ai and the bi , aj r ≥ bj s for all j < i. Also,
ai+1 r ≤ bi+1 s, and again by the ordering, aj r ≤ bj s for all j > i. So
b ∈ (I^r ∩ J^s )u^r v^s = ((p1^{a1} · · · pn^{an})^r ∩ (p1^{b1} · · · pn^{bn})^s )u^r v^s
= (p1^{a1 r} · · · pi^{ai r} · pi+1^{bi+1 s} · · · pn^{bn s})u^r v^s .
So b = f · p1^{a1 r} · · · pi^{ai r} · pi+1^{bi+1 s} · · · pn^{bn s} u^r v^s for some monomial f ∈ R.
Since (r, s) ∈ Qi , the pair has a decomposition into a sum of Hilbert basis elements.
So we have (r, s) = Σ_{j=1}^{ni} mj (rij , sij ) with mj ∈ N, and r = Σ_{j=1}^{ni} mj rij , s = Σ_{j=1}^{ni} mj sij .
Therefore
b = f (p1^{a1 r} · · · pi^{ai r} pi+1^{bi+1 s} · · · pn^{bn s} u^r v^s )
= f Π_{j=1}^{ni} p1^{mj (a1 rij)} · · · pi^{mj (ai rij)} pi+1^{mj (bi+1 sij)} · · · pn^{mj (bn sij)} u^{mj rij} v^{mj sij}
= f Π_{j=1}^{ni} (p1^{a1 rij} · · · pi^{ai rij} pi+1^{bi+1 sij} · · · pn^{bn sij} u^{rij} v^{sij})^{mj} .
So b is generated over R by the given finite set as claimed.
Remark 2.9. This theorem extends and refines the main result in [5]
√
Remark 2.10. For any two ideals I and J in R with J ⊂ I, where I is not nilpotent and
∩k I k = (0), define vI (J, m) to be the largest integer n such that J m ⊆ I n and wJ (I, n) to be
the smallest m such that J m ⊆ I n . The two sequences {vI (J, m)/m}m and {wJ (I, n), n}n
have limits lI (J) and LJ (I), respectively. See [6, 7] for related work.
Given two principal ideals I and J in a UFD R whose radicals are equal (i.e. the factorizations of their generators use the same irreducible elements), our procedure to determine
generators also shows that the vectors (b1 , a1 ) and (bn , an ) are related to the pairs of points
(r, s) where I r ⊆ J s (respectively J s ⊆ I r ): notice that C0 is the cone between the y-axis
and the line through the origin with slope a0 /b0 , and for all (r, s) ∈ C0 ∩ N2 , I r ⊆ J s .
Therefore lJ (I) = a0 /b0 . Similarly, Cn , the cone between the x-axis and the line through
the origin with slope an /bn , contains all (r, s) ∈ N2 where J s ⊆ I r , so lJ (I) = an /bn . Then,
since lI (J)LJ (I) = 1, this gives that LJ (I) = a1 /b1 and LI (J) = bn /an as well. This agrees
with the observations of Samuel and Nagata as mentioned in [1].
3. The Polynomial Case
In this section, we will show that in the special case where R is a polynomial ring in
finitely many variables over a field, then the intersection algebra of two principal monomial
ideals is a semigroup ring whose generators can be algorithmically computed.
Definition 3.1. Let k be a field. The semigroup ring k[Q] of a semigroup Q is the k-algebra
with k-basis {ta |a ∈ Q} and multiplication defined by ta · tb = ta+b .
Note that when F = {f1 , . . . , fq } is a collection of monomials in R, k[F ] is equal to
the semigroup ring k[Q], where Q = Nlog(f1 ) + · · · + Nlog(fq ) is the subsemigroup of Nq
generated by log(F ). It is easy to see that multiplying monomials in the semigroup ring
amounts to adding exponent vectors in the semigroup, as in the following example:
We can consider B both as an R-algebra and as a k-algebra, and it is important to keep
in mind which structure one is considering when proving results. While there are important
distinctions between the two, finite generation as an algebra over R is equivalent to finite
generation as an algebra over k.
Theorem 3.2. Let R be a ring that is finitely generated as an algebra over a field k. Then
B is finitely generated as an algebra over R if and only if it is finitely generated as an
algebra over k.
Proof. Let B be finitely generated over k. Then since k ⊂ R, B is automatically finitely
generated over R. Now let
generated over R, say by elements b1 , . . . , bn ∈ B.
PqB be finitely
αi
Then for any b ∈ B, b = i=1 ri bi with ri ∈ R. But R is finitely generated over k, say by
P
P P
β
β
elements k1 , . . . , km , so ri = pj=1 aij kj ij , with aij ∈ k. So b = qi ( pj aij kj ij )bαi i , and B is
finitely generated as an algebra over k by {b1 , . . . , bn , k1 , . . . , km }.
A few definitions are required before stating the main results of this section.
Definition 3.3. Let R = k[x] = k[x1 , . . . , xn ] be the polynomial ring over a field k in n
variables. Let F = {f1 , . . . , fq } be a finite set of distinct monomials in R such that fi 6= 1
for all i. The monomial subring spanned by F is the k-subalgebra
k[F ] = k[f1 , . . . , fq ] ⊂ R.
Definition 3.4. For c ∈ Nn , we set xc = xc11 · · · xcnn . Let f be a monomial in R. The
exponent vector of f = xα is denoted by log(f ) = α ∈ Nn . If F is a collection of monomials
in R, log(F ) denotes the set of exponent vectors of the monomials in F .
Theorem 3.5. If R is a polynomial ring in n variables over k, and I and J are ideals
generated by monomials (i.e. monic products of variables) in R, then B is a semigroup
ring.
Proof. Since I and J are monomial ideals, I r ∩ J s is as well for all r and s. So each (r, s)
component of B is generated by monomials, therefore B is a subring of k[x1 , . . . , xn , u, v]
generated over k by a list of monomials {bi |i ∈ Λ}. Let Q be the semigroup generated by
{log(bi )|i ∈ Λ}. Then B = k[Q], and B is a semigroup ring over k.
Theorem 3.6. Let I = (x1^{a1} · · · xn^{an}) and J = (x1^{b1} · · · xn^{bn}) be principal ideals in R =
k[x1 , . . . , xn ], and let Σa,b be the fan associated to a = (a1 , . . . , an ) and b = (b1 , . . . , bn ).
Let Qi = Ci ∩ Z^2 for every Ci ∈ Σa,b
and HQi be its Hilbert basis of cardinality ni for all i = 0, . . . , n. Further, let Q be the
subsemigroup in N^{n+2} generated by
{(a1 rij , . . . , ai rij , bi+1 sij , . . . , bn sij , rij , sij )|i = 0, . . . , n, j = 1, . . . , ni } ∪ log(x1 , . . . , xn ),
where (rij , sij ) ∈ HQi for every i = 0, . . . , n, j = 1, . . . , ni . Then B = k[Q].
Proof. Since R is a UFD, by Thm 2.8, B is generated over R by
{x1^{a1 rij} · · · xi^{ai rij} xi+1^{bi+1 sij} · · · xn^{bn sij} u^{rij} v^{sij} |i = 0, . . . , n, j = 1, . . . , ni }.
Then, since R is generated as an algebra over k by x1 , . . . , xn , it follows that B ⊂ k[x1 , . . . , xn , u, v]
is generated as an algebra over k by the set
P = {x1 , . . . , xn , x1^{a1 rij} · · · xi^{ai rij} xi+1^{bi+1 sij} · · · xn^{bn sij} u^{rij} v^{sij} |i = 0, . . . , n, j = 1, . . . , ni }.
This is a set of monomials in k[x1 , . . . , xn , u, v]. Now note that therefore
log(P ) ={(a1 rij , . . . , ai rij , bi+1 sij , . . . , bn sij , rij , sij )|i = 0, . . . , n, j = 1, . . . , ni }
∪ log(x1 , . . . , xn ).
In conclusion, log(P ) = Q and hence B = k[Q].
Example 3.7. Let I = (x5 y 2 ) and J = (x2 y 3). Then a1 = 5, a2 = 2 and b1 = 2, b2 = 3,
and 5/2 ≥ 2/3. Then we have the following cones:
C0 = {λ1 (0, 1) + λ2 (2, 5)|λi ∈ R≥0 }
C1 = {λ1 (2, 5) + λ2 (3, 2)|λi ∈ R≥0 }
C2 = {λ1 (3, 2) + λ2 (1, 0)|λi ∈ R≥0 }
C0 is the wedge of the first quadrant between the y-axis and the vector (2, 5), C1 is the
wedge between (2, 5) and (3, 2), and C2 is the wedge between (3, 2) and (1, 0). It is easy to
see that this fan fills the entire first quadrant. Intersecting these cones with Z2 is equivalent
to only considering the integer lattice points in these cones.
The Hilbert Basis of Q0 = C0 ∩ Z^2 is {(0, 1), (1, 3), (2, 5)}, and their corresponding
monomials in B are given by the generators of Br,s for each (r, s):
(0, 1) : (I^0 ∩ J^1 )v = (x^2 y^3 )v − generator is x^2 y^3 v
(1, 3) : (I^1 ∩ J^3 )uv^3 = ((x^5 y^2 ) ∩ (x^6 y^9 ))uv^3 = (x^6 y^9 )uv^3 − generator is x^6 y^9 uv^3
(2, 5) : (I^2 ∩ J^5 )u^2 v^5 = ((x^10 y^4 ) ∩ (x^10 y^15 ))u^2 v^5 = (x^10 y^15 )u^2 v^5 − generator is x^10 y^15 u^2 v^5
Notice that all the generator monomials are of the form x^{b1 s} y^{b2 s} u^r v^s , with b1 = 2, b2 = 3,
and (r, s) is a Hilbert Basis element, as shown earlier.
The Hilbert Basis of Q1 is {(1, 1), (1, 2), (3, 2), (2, 5)}. In the same way as above, their
monomials are x^5 y^3 uv, x^5 y^6 uv^2 , x^15 y^6 u^3 v^2 , x^10 y^15 u^2 v^5 , all of which have the form x^{a1 r} y^{b2 s} u^r v^s
with a1 = 5, b2 = 3 and (r, s) a basis element.
Lastly, the Hilbert Basis of Q2 is {(1, 0), (2, 1), (3, 2)}, which gives rise to generators
x^5 y^2 u, x^10 y^4 u^2 v, x^15 y^6 u^3 v^2 , all of which look like x^{a1 r} y^{a2 r} u^r v^s with a1 = 5, a2 = 2.
Notice there are a few redundant generators in this list: those arise from lattice points
that lie on the boundaries of the cones. So B is generated over R by
{x^5 y^2 u, x^10 y^4 u^2 v, x^15 y^6 u^3 v^2 , x^5 y^3 uv, x^5 y^6 uv^2 , x^2 y^3 v, x^6 y^9 uv^3 , x^10 y^15 u^2 v^5 }.
Using this technique, we have written a program in Macaulay2 that will provide the list
of generators of B for any I and J. First it fan orders the exponent vectors, then finds
the Hilbert Basis for each cone that arises from those vectors. Finally, it computes the
corresponding monomial for each basis element. The code is below:
loadPackage "Polyhedra"
--function to get a list of exponent vectors from an ideal I
expList=(I) ->(
flatten exponents first flatten entries gens I
)
algGens=(I,J)->(
B:=(expList(J))_(positions(expList(J),i->i!=0));
A:=(expList(I))_(positions(expList(J),i->i!=0));
L:=sort apply(A,B,(i,j)->i/j);
C:=flatten {0,apply(L,i->numerator i),1};
D:=flatten {1, apply(L,i->denominator i),0};
M:=matrix{C,D};
G:=unique flatten apply (#C-1, i-> hilbertBasis
(posHull submatrix(M,{i,i+1})));
S:=ring I[u,v];
flatten apply(#G,i->((first flatten entries gens
intersect(I^(G#i_(1,0)),J^(G#i_(0,0)))))*u^(G#i_(1,0))*v^(G#i_(0,0)))
)
4. Fan Algebras
The intersection algebra is in fact a specific case of a more general class of algebras that
can be naturally associated to a fan of cones. We will call such objects fan algebras, and
the first result in this section shows that they are finitely generated. First a definition:
Definition 4.1. Given a fan of cones Σa,b , a function f : N2 → N is called fan-linear if it is
nonnegative and linear on each subgroup Qi = Ci ∩ Z2 for each Ci ∈ Σa,b , and subadditive
on all of N2 , i.e.
f (r, s) + f (r ′ , s′ ) ≥ f (r + r ′ , s + s′ ) for all (r, s), (r ′, s′ ) ∈ N2 .
In other words, f (r, s) is a piecewise linear function where
f (r, s) = gi (r, s) when (r, s) ∈ Ci ∩N2 for each i = 0, . . . n, and each gi is linear on Ci ∩N2 .
Note that each piece of f agrees on the faces of the cones, that is gi = gj for every
(r, s) ∈ Ci ∩ Cj ∩ N2 .
Example 4.2. Let a = {1} = b, so Σa,b is the fan defined by
C0 = {λ1 (0, 1) + λ2 (1, 1)|λi ∈ R≥0 }
C1 = {λ1 (1, 1) + λ2 (1, 0)|λi ∈ R≥0 },
and set Qi = Ci ∩ Z2 . Also let
f (r, s) = g0 (r, s) = r + 2s if (r, s) ∈ Q0 , and f (r, s) = g1 (r, s) = 2r + s if (r, s) ∈ Q1 .
Then f is a fan-linear function. It is clearly nonnegative and linear on both Q0 and Q1 .
The function is also subadditive on all of N2 : Let (r, s) ∈ Q0 and (r ′ , s′ ) ∈ Q1 , and say that
(r + r ′ , s + s′ ) ∈ Q0 . Then
f (r, s) + f (r ′ , s′ ) = g0 (r, s) + g1 (r ′ , s′ ) = r + 2s + 2r ′ + s′
f (r + r ′ , s + s′ ) = g0 (r + r ′ , s + s′ ) = r + r ′ + 2(s + s′ )
Comparing the two, we see that
f (r, s) + f (r ′, s′ ) ≥ f (r + r ′ , s + s′ ) whenever r + 2s + 2r ′ + s′ ≥ r + r ′ + 2(s + s′ ),
or equivalently when r ′ ≥ s′ . But that is true, since (r ′ , s′ ) ∈ Q1 . The proof for (r + r ′ , s +
s′ ) ∈ Q1 is similar. The two pieces of f also agree on the boundary between Q0 and Q1 ,
since the intersection of Q0 and Q1 is the ray in N2 where r = s, and
g0 (r, r) = 3r = g1 (r, r).
So f is a fan-linear function.
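A quick numerical sanity check of this example (ours, not part of the original) confirms subadditivity on a small grid of lattice points:

def f(r, s):
    # the fan-linear function of Example 4.2 (Q0 is the region s >= r)
    return r + 2 * s if s >= r else 2 * r + s

assert all(f(r, s) + f(rp, sp) >= f(r + rp, s + sp)
           for r in range(6) for s in range(6)
           for rp in range(6) for sp in range(6))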
Theorem 4.3. Let I1 . . . , In be ideals in a domain R and Σa,b be a fan of cones in N2 . Let
f1 , . . . , fn be fan-linear functions. Then the algebra
B = ⊕_{r,s} I1^{f1 (r,s)} · · · In^{fn (r,s)} u^r v^s
is finitely generated.
Proof. First notice that the subadditivity of the functions fi guarantees that B is a subalgebra of R[u, v] with the natural grading. Since B has a natural N2 -grading, it is
enough to consider only homogeneous monomials b ∈ B with deg(b) = (r, s). Then
(r, s) ∈ Qi = Ci ∩ Z2 for some Ci ∈ Σa,b . Since Qi is a pointed rational cone, it has
a Hilbert basis
HQi = {(ri1 , si1 ), . . . , (rini , sini )}.
So we can write
(r, s) = Σ_{j=1}^{ni} mj (rij , sij ).
Then, since each fk is nonnegative and linear on Qi , we have
fk (r, s) = Σ_{j=1}^{ni} mj fk (rij , sij )
for each k = 1, . . . , n. Since R is Noetherian, for each i, there exists a finite set Λi,j,k ⊂ R
such that
Ik^{fk (rij ,sij )} = (xk |xk ∈ Λi,j,k ).
So
b ∈ Br,s = I1^{f1 (r,s)} · · · In^{fn (r,s)} u^r v^s
= I1^{Σ_j mj f1 (rij ,sij )} · · · In^{Σ_j mj fn (rij ,sij )} u^{Σ_j mj rij} v^{Σ_j mj sij}
= I1^{m1 f1 (ri1 ,si1 )} · · · I1^{mni f1 (rini ,sini )} · · · In^{m1 fn (ri1 ,si1 )} · · · In^{mni fn (rini ,sini )} u^{m1 ri1} · · · u^{mni rini} v^{m1 si1} · · · v^{mni sini}
= (I1^{f1 (ri1 ,si1 )} · · · In^{fn (ri1 ,si1 )} u^{ri1} v^{si1})^{m1} · · · (I1^{f1 (rini ,sini )} · · · In^{fn (rini ,sini )} u^{rini} v^{sini})^{mni} .
So B is generated as an algebra over R by the set
{x1 · · · xn u^{rij} v^{sij} |(rij , sij ) ∈ HQi , xk ∈ Λi,j,k }.
This result justifies the following definition.
Definition 4.4. Given ideals I1 , . . . , In in a domain R, Σa,b a fan of cones in N2 , and
f1 , . . . , fn are fan-linear functions, we define
B(Σa,b , f ) = ⊕_{r,s} I1^{f1 (r,s)} · · · In^{fn (r,s)} u^r v^s
to be the fan algebra of f on Σa,b , where f = (f1 , . . . , fn ).
Remark 4.5. The intersection algebra of two principal ideals I = (p1^{a1} · · · pn^{an}) and J =
(p1^{b1} · · · pn^{bn}) in a UFD is a special case of a fan algebra. Let Ii = (pi ) and fi = max(rai , sbi )
for each i = 1, . . . , n, and define the fan Σa,b to be the fan associated to a = (a1 , . . . , an )
and b = (b1 , . . . , bn ). Then
B(I, J) = ⊕_{r,s} (p1 )^{max(ra1 ,sb1 )} · · · (pn )^{max(ran ,sbn )} u^r v^s .
This is a fan algebra since the max function is fan-linear: it’s subadditive on all of N2 ,
and linear and nonnegative on each cone, since the faces of each cone in Σa,b are defined
by lines through the origin with slopes ai /bi for each i = 0, . . . , n. So, as in the proof of
Theorem 2.8, for any pair (r, s) ∈ Qi = Ci ∩ Z2 for every Ci ∈ Σa,b , we have that
ai /bi ≥ s/r ≥ ai+1 /bi+1 .
So ai r ≥ bi s, and by the ordering on the ai and the bi , aj r ≥ bj s for all j < i. Also,
ai+1 r ≤ bi+1 s, and again by the ordering, aj r ≤ bj s for all j > i. Since fk = max(rak , sbk )
for all k = 1, . . . , n, we have that
fk = rak for all k ≤ i and fk = sbk for all k > i.
So each fk is linear on each cone, and the above theorem applies.
It is important to note that the intersection algebra is not always Noetherian. One such
example, given in [3], is constructed by taking an ideal P in R such that the algebra
R ⊕ P (1) ⊕ P (2) ⊕ · · ·
is not finitely generated. Then, it is shown that there exists an f ∈ R such that (P a : f a ) =
P (a) for all a. It follows that the intersection algebra of P and (f ) is not finitely generated.
One important question that should be considered is what conditions on f and P are
necessary to ensure that the intersection algebra of f and P is Noetherian.
Proposition 4.6. Let R be a standard N2 -graded ring with maximal ideal m and f a
homogeneous element in m. Then
B = BR (f, m) = ⊕_{(r,s)∈N^2} ((f )^r ∩ m^s ) u^r v^s
is finitely generated as an R-algebra.
Proof. Say deg f = a, and let x ∈ (f )r ∩ ms . Then x = f r · y ∈ ms , so y ∈ (ms : f r ).
Decompose y into its homogeneous pieces, y = y0 + · · · + ym . Then f r (y0 + · · · + ym ) ∈ ms ,
so
ra + deg yi ≥ s for all i = 0, . . . , n,
or, equivalently deg yi ≥ s − ra. Therefore y ∈ ms−ra , and so
(f )r ∩ (ms ) = f r · ms−ra .
Then
B=
X
r,s∈N2
f r · ms−ra ur v s =
Let
B̃ =
X
X
ms−ra (f u)r v s .
r,s∈N2
ms−ra w r v s ,
r,s∈N2
and ϕ : B̃ → B be the map that sends w to f u and is the identity on R and v. This map
is obviously surjective, therefore, if B̃ is finitely generated, so is B. But B̃ is a fan algebra
with I1 = m and
f1 (r, s) = s − ra if s/r ≥ a, and f1 (r, s) = 0 if s/r < a,
which is certainly fan-linear on the fan formed by two cones
C0 = {λ1 (0, 1) + λ2 (1, a)|λi ∈ R≥0 } and C1 = {λ1 (1, a) + λ2 (1, 0)|λi ∈ R≥0 },
since C0 contains the collection of all (r, s) ∈ N^2 where s/r ≥ a and C1 contains all (r, s) ∈ N^2
where s/r < a.
So by Theorem 4.3, if we define Λij ⊂ R to be a finite subset where m^{sij −rij a} = (x|x ∈ Λij ),
then B̃ is generated over R by the set
{x u^{r0j} v^{s0j} |(r0j , s0j ) ∈ HQ0 , x ∈ Λij } ∪ {u^{r1j} v^{s1j} |(r1j , s1j ) ∈ HQ1 },
and therefore B is finitely generated as an R-algebra.
Remark 4.7. We are grateful to Mel Hochster, who noticed that the above proof works
the same in the case where R is regular local by replacing the degree of f with its order.
When (R, m) is a regular local ring, the order defines a valuation on R because grm (R) is a
polynomial ring over R/m (and hence a domain). Therefore, when (R, m) is a regular local
ring and f ∈ R, B(f, m) is a finitely generated R-algebra.
Acknowledgements. The author thanks her advisor, Florian Enescu, for many fruitful
discussions, as well as Yongwei Yao for his useful suggestions.
References
[1] C. Ciupercă, F. Enescu, and S. Spiroff. Asymptotic growth of powers of ideals. Illinois Journal of
Mathematics, 51(1):29–39, 2007.
[2] E. Miller and B. Sturmfels. Combinatorial Commutative Algebra. Springer, 2005.
[3] J. B. Fields. Length functions determined by killing powers of several ideals in a local ring. Ph.D.
Dissertation, University of Michigan, Ann Arbor, Michigan, 2000.
[4] J. B. Fields. Lengths of Tors determined by killing powers of ideals in a local ring. Journal of Algebra,
247(1):104–133, 2002.
[5] Sara Malec. Noetherian filtrations and finite intersection algebras. Master’s thesis, Georgia State University, 2008.
[6] M. Nagata. Note on a paper of Samuel concerning asymptotic properties of ideals. Mem. College Sci.
Univ. Kyoto Ser. A Math, 30(2):165–175, 1957.
[7] P. Samuel. Some asymptotic properties of powers of ideals. Annals of Mathematics (2), 56(1):11–21,
1952.
Department of Mathematics, University of the Pacific, Stockton, CA 95207
E-mail address: [email protected]
| 0 |
Improved Adaptive Resolution Molecular Dynamics
Simulation
Iuliana Marin1, Virgil Tudose2, Anton Hadar2, Nicolae Goga1,3, Andrei Doncescu4
Affiliation 1: Faculty of Engineering in Foreign Languages
University POLITEHNICA of Bucharest, Bucharest, Romania, [email protected]
Affiliation 2: Department of Strength of Materials, University POLITEHNICA of Bucharest, Bucharest, Romania
Affiliation 3: Molecular Dynamics Group, University of Groningen, Groningen, Netherlands
Affiliation 4: Laboratoire d'analyse et d'architecture des systemes, Université Paul Sabatier de Toulouse, Toulouse, France
Abstract—Molecular simulations allow the study of properties
and interactions of molecular systems. This article presents an
improved version of the Adaptive Resolution Scheme that links two
systems having atomistic (also called fine-grained) and coarsegrained resolutions using a force interpolation scheme. Interactions
forces are obtained based on the Hamiltonian derivation for a given
molecular system. The new algorithm was implemented in
GROMACS molecular dynamics software package and tested on a
butane system. The MARTINI coarse-grained force field is applied
between the coarse-grained particles of the butane system. The
molecular dynamics package GROMACS and the Message Passing
Interface allow the simulation of such a system in a reasonable
amount of time.
Keywords—adaptive resolution scheme; molecular dynamics;
MPI; stochastic dynamics; coarse-grained; fine-grained.
INTRODUCTION
The traditional computational molecular modeling
indicates the general process of describing complex chemical
systems in terms of a realistic atomic model, with the goal of
understanding and predicting macroscopic properties based on
detailed knowledge at an atomic scale [1]. In literature, the
study of a system at atomistic level can be defined as finegrained (FG) modeling [2].
I.
Coarse-graining is a systematic way of reducing the
number of degrees of freedom for a system of interest.
Typically whole groups of atoms are represented by single
beads and the coarse-grained (CG) force fields describe their
effective interactions. Coarse-grained models are designed to
reproduce certain properties of a reference system. This can be
either a full atomistic model or a set of experimental data. The
coarse-grained potentials are state dependent and should be reparameterized depending on the system of interest and the
simulation conditions [1].
The main disadvantage of a coarse-grained model is that
the precise atomistic details are lost. In many applications it is
important to preserve atomistic details for some region of
special interest [2]. To combine the two systems, fine-grained
and coarse-grained, it was developed a so-called multiscale
simulation technique [3, 4, 5, 6, 7]. This method combines the
two resolutions by coupling them with a mixing parameter .
The multiscale interaction forces are computed as a weighted
sum of the interactions on fine-grained and coarse-grained
levels [2].
The adaptive resolution scheme (AdResS) couples two
systems with different resolutions by a force interpolation
scheme. The two resolutions are called atomistic and coarsegrained [3, 4]. The atomistic representation of system
describes the phenomena produced at the atomic level (noted
fine-grained or FG) and the other one, coarse-grained,
describes the system at molecular level (noted coarse-grained
or CG).
The AdResS scheme works in the following conditions:
•
the forces between molecules are scaled and the
potential energy is not scaled. Consequently, the Hamiltonian
cannot be used to obtain the energies;
•
just non-bonded interactions between molecules are
scaled. In this way, some computational resources are saved;
•
particles are kept together via bonded atomistic
forces that are computed everywhere, including the coarsegrained part (no pure coarse-grained in that part);
•
in the transition regions there is a potential which is
added (that takes into account the difference in chemical
potentials between the two representations). In this paper it is
denoted by .
In the current paper, it is proposed a method for obtaining
the interaction forces by applying the Hamiltonian derivation,
where potentials can be scaled. The advantage of this method
is that, derivation is based on the sound Hamiltonian model and
energies can be reported correctly. The improved algorithm
was implemented in GROMACS, a molecular dynamics
software, that runs in parallel using the Message Passing
Interface (MPI). Simulations have been done on butane. The
MARTINI coarse-grained force field which was implemented
by the Molecular Dynamics Group from the University of
Groningen was also applied.
The paper is organized as follows. In the next section the
improved AdResS multiscaling resolution scheme is presented.
Section 3 describes the implementation of AdResS in
GROMACS using MPI and its testing on a butane molecular
system. Section 4 outlines the obtained results. The last section
presents the final conclusions.
II. RELATION TO EXISTING THEORIES AND WORK
In this section it is presented the relevant theory for the
algorithm implemented in GROMACS – the improved
AdResS multiscaling resolution scheme [3,4].
A convention is made: for a coarse-grained molecule are
used capital letters for coordinates and for a fine-grained
molecule, small letters. For simplifying the presentation, it is
considered that the space factor is function only on the
coordinate of the reference system. On the and coordinate
axis, is constant.
It is denoted by
the
coordinate of a fine-grained
particle with the number
in the coarse-grained molecule
with number . The coarse-grained molecule is centered in
the center of mass ( ) of the ensemble of corresponding finegrained particles .
The relation between the two parameters is:
X = ( Σi mi xi ) / ( Σi mi )                                    (1)
Fig. 2. The coordinates for a coarse-grained particle
A. Multiscaling force
Starting from the Hamiltonian that scales the non-bonded
forces, it is added an extra potential in the transition region
that adjusts for the difference in chemical potentials between
the fine-grained and coarse-grained regions.
As explained above, fine-grained bonded interactions are
computed everywhere, including the coarse-grained region.
The potential energy in the multiscaling model
is a
function of coordinates only and it is chosen as: The
multiscaling factor is defined as follows:
"/
-.
4,
(2)
where
is the mass of the fine-grained particle and
the mass of the coarse-grained particle.
and
"∙
.
+,
-.
+,
/
.
4,
/ 01 2
" /
"3 ∙
(4)
The multiscaling factor is defined as follows:
#
!" $
In Figure 1 more coarse-grained molecules with
corresponding fine-grained particles and the parameters
and are represented.
%
∙ (
'
(5)
Using (4) in (5), it is obtained:
25
6
6
89
:7
∙
/
/
89
57
6 957
6
Because is function of
variable. Then
Fig. 1. The coordinates
and
<=
In AdResS, the space factor is function of the coordinate
of a coarse-grained particle ( ):
#
!" $
%
∙ (
'
(3)
where
is the coordinate in the reference system for the
center of fine-grained region, ) is half of the length of the
fine-grained region and * is the length of transition (hybrid
fine-grained - coarse-grained) region. In Figure 2 are
displayed these parameters.
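A minimal sketch (ours, not a transcription of the paper's Eq. (3)) of the standard AdResS weighting function built from exactly these three parameters, equal to one in the fine-grained region, zero in the coarse-grained region, and a smooth cos² interpolation across the hybrid region:

import math

def adress_weight(x, x0, d_at, d_hy):
    # Standard AdResS space weighting: 1 in the atomistic region,
    # 0 in the coarse-grained region, cos^2 interpolation in between.
    d = abs(x - x0)
    if d < d_at:
        return 1.0
    if d < d_at + d_hy:
        return math.cos(math.pi * (d - d_at) / (2.0 * d_hy)) ** 2
    return 0.0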
6 89
57
6
6 9:7
6
/ 12
/
6 ;
6
∙
6 89
:7
6
2
6
6
∙
(6)
, it is necessary to change the
<=
<
for a coarse-grained particle
/
∙
<
∙
<
<
(7)
According to (1), it is obtained
<
<
(8)
Using (8) in (7), results
<=
<
∙ ′
where ′ is the derivative of (3) with respect
computed at the end of section (see formula (18)).
(9)
which is
Taking that into account
BC
<?@A
<
25 -.
(10)
25 .
(11)
C
<?@A
<
are the non-bonded, respectively bonded, forces in the atom
and
BC
<?DA
<
,
25 -.
(12)
25 .
(13)
C
<?DA
<
are the non-bonded, respectively bonded, force in the coarsegrained molecule , formula (6) becomes:
6
6
25
2
∙ ′∙
89
:7
∙
∙
′
89
57
2
6 9:7
2 59 /
6
By making the notation
∙
6
6
∙ 589 / 1 2
/
∙
6 ;
6
6 89
:7
6
The implementation of the new algorithm is based on MPI
parallelization (see Figure 3). MPI is used by dividing the
simulation box into several boxes. The number of processors
involved in computation is equal to the number of the smaller
simulation boxes. Data is passed between the neighboring
simulation boxes through the use of messages which is
constant. The MARTINI coarse-grained force field which is
applied on the particles of the system lowers the initial energy
of atoms until the reference temperature is reached and it is
maintained at that state [8].
The parallelization and the improved AdResS algorithm are
depicted in Figure 3.
∙
(14)
25
<?E
<
III. TECHNOLOGY APPROACH
The implementation of the improved AdResS is done in
GROMACS, a package specialized for running molecular
dynamics simulations. GROMACS was firstly developed at the
University of Groningen. Because the simulation time might
last for a long time, even in terms of months, the Message
Passing Interface (MPI) is used for parallelizing the
computations within the molecular system. The thermostat that
was used is stochastic dynamics.
(15)
it is obtained:
25
2
∙
′
∙
89
:7
∙
′
∙
89
57
2 59 2 59 ∙
2
∙ 589 2 1 2
2 5;
∙ 589 ∙
(16)
Then it results:
5
5
89
:7
2
/ 12
I ′∙
89
:7
∙
′
∙ 589 ∙
G2
/ 12
∙
89
57
∙
′
<=
K
'
%
∙ cos O
P
!
P
!
/ 59 ∙
89
57
/
/ 5;
%
# $
∙ Q ∙ sin O
'
∙
′
⇒
∙ 589 / 59 H /
with respect
!
#
∙ 589 / 59 /
∙ 589 / 59 J / 5;
Finally, the derivative of
formula (3):
<
∙
/
∙
(17)
is obtained from
P
!
%
# $
∙ Q∙
'
(18)
Fig. 3. The simulation box and its processes
Parallelization is done through MPI. The computation is divided between the involved processors. Each processor follows the algorithm that starts by first computing the bonded fine-grained and coarse-grained interactions. The next step is the computation of the non-bonded fine-grained and coarse-grained interactions. The multiscaling forces are then computed according to formula (17). The velocities and the positions of the particles are updated after that.
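A schematic of this per-step work is sketched below in Python (ours; the array-based interface is hypothetical and GROMACS does not expose the algorithm this way). It uses the conventional weighted-sum interpolation of the non-bonded forces described in Sections I–II; the additional Hamiltonian-derived terms of formula (17) are not reproduced, and a crude integrator stands in for the stochastic-dynamics thermostat.

import numpy as np

def adress_step(x, v, w, forces, mass, dt, thermostat):
    # forces(x) returns (F_fg_bonded, F_fg_nonbonded, F_cg_nonbonded);
    # w holds the per-particle resolution weights; bonded atomistic forces
    # act everywhere, only the non-bonded part is blended.
    f_fg_b, f_fg_nb, f_cg_nb = forces(x)
    f = f_fg_b + w[:, None] * f_fg_nb + (1.0 - w[:, None]) * f_cg_nb
    v = thermostat(v + dt * f / mass[:, None])
    x = x + dt * v
    return x, v            # each MPI rank then exchanges boundary data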
IV.
FINDINGS
The simulation was done on an Asus ZenBook Pro
UX501VW computer having an Intel Core i7-6700HQ
processor with 8 threads and a frequency of 2.6 GHz. The
RAM has a capacity of 16 GB. The simulation box comprises
butane with 36900 atoms. The dimension of the simulation box is equal to
10 nm in each direction (x, y, z). The reference temperature is
set at 323 K. The system of atoms has been simulated on
different numbers of processors for 10,000 simulation steps. For
each processor count, eight executions were made, each
of the reported values being an average of the eight execution
times.
The execution time expressed in ns/day and in hour/ns are
presented in Table 1.
TABLE I. EXECUTION TIMES AND TEMPERATURE FOR A DIFFERENT NUMBER OF PROCESSORS

Test No.   Performance [ns/day]   Performance [hour/ns]   Temperature [K]
1          7.421                  3.234                   326.487
2          15.326                 1.566                   326.497
3          20.146                 1.191                   326.440
4          26.127                 0.919                   326.481
5          19.328                 1.242                   326.543
6          28.193                 0.851                   326.498
7          23.697                 1.013                   326.525
8          35.591                 0.674                   326.466
In the table above the standard deviations are no larger than
0.036 for the execution time and 0.110 for the temperature.
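From the table, the speedup quoted in the conclusions can be reproduced directly (a small Python check, ours):

hours_per_ns = {1: 3.234, 2: 1.566, 3: 1.191, 4: 0.919,
                5: 1.242, 6: 0.851, 7: 1.013, 8: 0.674}
speedup = {n: hours_per_ns[1] / t for n, t in hours_per_ns.items()}
print(round(speedup[6], 1))   # about 3.8, i.e. roughly four times faster on six processors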
A different number of processors was used to analyze how
parallelization evolves at run time for the butane system. The
execution time according to the number of processors is
presented in Fig. 4.
The reference temperature for the system was 323 K. The obtained value, differing by around ± 3 K, is within the statistically accepted errors for such simulations.
V. CONCLUSIONS
The article presents the improved AdResS through the MPI
parallelization implemented in the GROMACS molecular
dynamics software and its testing on a butane molecular
system, along with the experimental results. Usually the
simulation time is large. For this reason parallelization is used
for obtaining the results in a shorter period of time.
The algorithm depends on the hardware used and on the
number of processors on which it is run. There is a significant
difference in time for the simulation where only one processor
is involved as compared to the case when several processors
interact through the use of MPI - a speedup of about four times
more was obtained when six processors were used as compared
with the case of one processor. The temperature is kept within
the accepted statistical ranges for such simulations.
Further work includes testing this implementation on more systems, generalizing the theory to cylindrical coordinates, and further improving the AdResS algorithm.
Fig. 5. Temperature variation in time
Fig. 4. Execution time expressed in hour/ns as a function of the processor
number used
It can be observed that as the number of processors
increased, the execution time linearly decreased, after which a
plateau was obtained for the considered butane system with
36900 atoms.
There is a significant difference in time for the simulation
where only one processor is involved compared to the case
when several processors interact through the use of MPI - a
speedup of about four times more was obtained when six
processors were used as compared with the case of one
processor.
In Figure 5 is represented the variation of temperature in
time. By using the stochastic dynamics thermostat and the
improved AdResS, the temperature of the system is kept
around the value of 326 K as it can be observed in Table 1.
REFERENCES
[1] M. J. Abraham, D. Van Der Spoel, E. Lindahl, B. Hess and the Gromacs Development Team, “GROMACS User Manual version 4.6.7”, www.gromacs.org, p. 110, 2014.
[2] N. Goga et al., “Benchmark of Schemes for Multiscale Molecular Dynamics Simulations”, J. Chem. Theory Comp., vol. 11, 2015, pp. 1389-1398.
[3] M. Praprotnik, L. Delle Site, K. Kremer, “Adaptive resolution molecular-dynamics simulation: Changing the degrees of freedom on the fly”, J. Chem. Phys., 2005.
[4] M. Praprotnik, L. Delle Site, K. Kremer, “Multiscale simulation of soft matter: From scale bridging to adaptive resolution”, Annu. Rev. Phys. Chem., vol. 59, 2008, pp. 545-571.
[5] M. Christen, W. F. van Gunsteren, “Multigraining: An algorithm for simultaneous fine-grained and coarse-grained simulation of molecular systems”, J. Chem. Phys., vol. 124, 2006.
[6] B. Ensing, S. O. Nielsen, P. B. Moore, M. L. Klein, M. Parrinello, “Energy conservation in adaptive hybrid atomistic/coarse-grain molecular dynamics”, J. Chem. Theory Comp., vol. 3, 2007, pp. 1100-1105.
[7] L. Delle Site, “What is a Multiscale Problem in Molecular Dynamics?”, Entropy, vol. 15, 2014, pp. 23-40.
[8] S. J. Marrink, H. J. Risselada, S. Yefimov, D. P. Tieleman, A. H. de Vries, “The MARTINI Force Field: Coarse Grained Model for Biomolecular Simulations”, J. Phys. Chem., 111, pp. 7812-7824, 2007.
| 5 |
arXiv:1611.02660v2 [] 5 Feb 2018
Tradeoff Caching Strategy of Outage Probability
and Fronthaul Usage in Cloud-RAN
Zhun Ye, Member, IEEE, Cunhua Pan, Member, IEEE,
Huiling Zhu, Senior Member, IEEE, and Jiangzhou Wang, Fellow, IEEE
Abstract—In this paper, a tradeoff content caching strategy is
proposed to jointly minimize the cell average outage probability
and fronthaul usage in cloud radio access network (Cloud-RAN). First, an accurate closed form expression of the outage
probability conditioned on the user’s location is presented, and
the cell average outage probability is obtained through the
composite Simpson’s integration. The caching strategy for jointly
optimizing the cell average outage probability and fronthaul
usage is then formulated as a weighted sum minimization
problem, which is a nonlinear 0-1 integer problem. Two heuristic
algorithms are proposed to solve the problem. Firstly, a genetic
algorithm (GA) based approach is proposed. Numerical results
show that the performance of the proposed GA-based approach
with significantly reduced computational complexity is close to
the optimal performance achieved by exhaustive search based
caching strategy, and the GA-based approach can improve
the performance by 47.5% on average compared with the typical
probabilistic caching strategy. Secondly, in order to further
reduce the computational complexity, a mode selection approach
is proposed. Numerical results show that this approach can
achieve near-optimal performance over a wide range of the
weighting factors through a single computation.
Index Terms—Caching strategy, Cloud-RAN, joint optimization, outage probability, fronthaul usage.
I. INTRODUCTION
THE combination of network densification and coordinated multipoint transmission is a major technical trend
in the fifth generation (5G) wireless mobile systems to improve
the overall system performance [2]–[5]. In the traditional radio
access network (RAN) architecture, each cell has its own base
station (BS), where the radio functionality is statically assigned
Copyright (c) 2015 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
obtained from the IEEE by sending a request to [email protected].
Part of this work was presented in IEEE International Conference on
Communications (ICC), Paris, 2017 [1].
This work was supported by the China Scholarship Council (CSC), the
Fundamental Research Funds of Shandong University (No. 2015ZQXM008),
a Marie Curie International Outgoing Fellowship within the 7th European
Community Framework Programme under the Grant PIOFGA-2013-630058
(CODEC), the UK Engineering and Physical Sciences Research Council under
the Project EP/L026031/1 (NIRVANA), and the Framework of EU Horizon
2020 Programme under the Grant 644526 (iCIRRUS).
Zhun Ye is with the School of Mechanical, Electrical and Information
Engineering, Shandong University, Weihai, Shandong Province, 264209, P.
R. China, e-mail: [email protected]
Cunhua Pan is with the School of Electronic Engineering and Computer
Science, Queen Mary University of London, London, E1 4NS, United
Kingdom, Email: [email protected]
Huiling Zhu and Jiangzhou Wang are with the School of Engineering
and Digital Arts, University of Kent, Canterbury, Kent, CT2 7NT, United
Kingdom, e-mail: {h.zhu, j.z.wang}@kent.ac.uk
to the base band processing module. Adding more base stations and introducing multiple input multiple output (MIMO)
technology will increase the complexity of the network and
result in higher total cost of ownership (TCO) for the mobile
operators [6], [7].
Consisting of centralized base band processing resources,
known as base band unit (BBU) pool, and distributed remote
radio heads (RRHs) or remote antenna units [8]–[10], cloud
radio access network (Cloud-RAN) becomes a new type
of RAN architecture to support multipoint transmission and
access point densification required by 5G systems [6], [7],
[11], [12]. The scalable, virtualized, and centralized BBU pool
is shared among cell sites, and its computing resources can
be dynamically allocated to different cells according to their
traffic. The RRHs are responsible for the radio processing task,
and they are connected with the BBU pool through fronthauls,
while the BBU pool performs the base band processing task
and it is connected to the core network through backhauls.
Thanks to the novel architecture, Cloud-RAN has many advantages such as cost effectiveness, lower energy consumption,
higher spectral efficiency, scalability, and flexibility, which
makes it a promising candidate for 5G deployment.
Particularly, Cloud-RAN is also a competitive solution for the
heterogeneous vehicular networks, which can provide better
quality of service (QoS) to intense vehicular users in an urban
environment [13], [14]. However, existing fronthaul/backhaul
of Cloud-RAN cannot meet the requirements of the emerging
huge data and signaling traffic in terms of transmission bandwidth requirements, stringent latency constraints and energy
consumption etc. [15]–[17], which has become the bottleneck
of the evolution towards 5G.
Statistics showed that a large amount of the data traffic is
generated from a small amount of most popular content files.
These popular files are requested by a large amount of users,
which results in duplicated transmissions of the same content
files on the fronthaul and backhaul links. Therefore, content
caching in RAN can be a promising solution to significantly
reduce the fronthaul/backhaul traffic [18]–[20]. During offpeak times, popular content files can be transferred to the
cache-enabled access points (macro base station, small cell,
relay node etc.). If the files requested by mobile users are
cached in the access points of the RAN, the files will be transmitted directly from the RAN’s cache without being fetched
from the core network, which can significantly reduce the
fronthaul/backhaul traffic and meanwhile shorten the access
latency of the files, thus improve users’ quality of experience
(QoE). In Cloud-RAN, thanks to the ongoing evolution of
fronthaul technology and function splitting between the BBU
and RRHs [16], [21], there comes possibility to realize content
caching in RRHs, which allows users fetching required content
files directly from RRHs and thus can further reduce fronthaul
traffic.
There are two stages related to caching: the caching placement stage and the caching delivery stage [20]. Caching placement, also known as the caching strategy, is the stage that determines
which files should be stored in which cache-enabled access
points, and delivery stage refers to stage of transmitting the
requested files from access points to mobile users through
wireless channels. Among these two stages, caching placement
is performed for a relative long-timescale. Once a caching
placement is carried out, it will not change very frequently.
The reason is that the popularity of the content files will
remain the same for a relative long period such as several
hours, one day, or even longer time. On the other hand,
delivery stage runs in a short-timescale. In a delivery stage,
the wireless transmission scheme should be able to adapt to
the instantaneous channel state information (CSI) which varies
very rapidly.
There are many researches investigating the delivery stage
with the target of data association or/and energy consumption
optimization under a given caching strategy, such as [22]–
[24]. On the other hand, caching strategy is of importance
because it is the initial step to perform caching and obviously
it will have an impact on the performance of the delivery stage.
The researches investigating caching strategies generally focus
on reducing the file access latency [25]–[27], or minimizing
the transmission cost of the backhaul [28], [29], or both of
them [30]. However, the wireless transmission characteristics
such as fading were not considered in the aforementioned
researches, i.e., it was assumed that the wireless transmission
is error-free. The caching strategy will affect the wireless
transmission performance such as outage probability, which
is an important metric of the system’s performance. For
the fronthaul/backhaul traffic or average file access delay
reduction, caching different files in the RAN will be optimal,
however there is no transmit diversity to combat fading in the
file delivery stage, which may decrease the reliability of the
wireless transmission. Hence, caching strategy should be optimized by taking into consideration the wireless transmission
performance.
There are some papers considering wireless fading characteristics when designing caching strategy [31], [32]. The authors only considered small scale Rayleigh fading by assuming
that the user has the same large scale fading at any location.
However, in reality, several RRHs will jointly serve the user
in Cloud-RAN, and obviously the distance between each RRH
and the user will not be the same, so it is important to consider
large scale fading in wireless transmission. In addition, they
focused on single-objective optimization without considering
the fronthaul/backhaul usage.
The aim of caching in RRHs of Cloud-RAN is to significantly reduce the fronthaul traffic. Fronthaul usage, i.e.,
whether the fronthaul is used, is a metric which can reflect not
only the file delivery latency but also the energy consumption
of the fronthaul. For example, lower fronthaul usage implies
there are more possibilities that mobile user can access the
content files in near RRHs, which will shorten the file access
latency, meanwhile the fronthaul cost (i.e., the energy consumption) will be lower. On the other hand, outage probability
is an important performance metric of the system, which
reflects the reliability of the wireless transmission, i.e., whether
the requested content files can be successfully transferred to
the user, and it also reflects the utility of the wireless resources.
If replicas of certain content files are cached in several RRHs,
the outage probability will be reduced due to the transmit
diversity in wireless transmissions, while the fronthaul usage
will become higher because the total number of different files
cached in the RRHs are reduced and there is a high possibility
to fetch files from the BBU pool. On the other hand, caching
different files in the RRHs will reduce the fronthaul usage,
while the outage probability will become relatively higher due
to the decrease of wireless diversity. Therefore, there exists
tradeoff between fronthaul usage and outage probability.
In this paper, we investigate downlink transmission in a
virtual cell in Cloud-RAN, such as a hot spot area, shopping
mall, or an area covered by the Cloud-RAN based vehicular
network etc. The tradeoff caching strategy is proposed to
jointly minimize the cell average outage probability and the
fronthaul usage. A realistic fading channel is adopted, which
includes path loss and small scale Rayleigh fading. The
caching strategy is designed based on the long-term statistics
about the users’ locations and content file request profiles. The
major contributions of this paper are:
1) Closed form expression of outage probability conditioned
on the user’s location is derived, and the cell average outage
probability is obtained through the composite Simpson’s
integration. Simulation results show that the analysis is
highly accurate.
2) The joint optimization problem is formulated as a weighted
sum minimization of cell average outage probability and
fronthaul usage, which is a 0-1 integer problem. Two
heuristic algorithms are proposed to solve the problem:
a) An effective genetic algorithm (GA) based approach
is proposed, which can achieve nearly the same performance as the optimal exhaustive search, while the
computational complexity is significantly reduced.
b) In order to further reduce the computational complexity,
a mode selection approach is proposed. Simulation results show that it can achieve near-optimal performance
over a wide range of weighting factors through a single
computation.
The remainder of this paper is organized as follows. Section
II reviews the related works. System model is described in
Section III. The optimization problem is formulated in Section
IV and the cell average outage probability and fronthaul
usage are analyzed. The proposed GA-based approach and the
mode selection schemes are described in Section V. Numerical
results are given in Section VI and the conclusion is given in
Section VII.
Notations: E(·) denotes statistical expectation, and Re(·)
denotes the real part of a complex number. AL×N = {al,n }
denotes L × N matrix, al,n or A(l, n) represents the (l, n)-
th entry of the matrix. R+ denotes the set of positive real
numbers, and Z+ denotes positive integer set. CN (µ, σ 2 )
represents complex Normal distribution with mean µ and
variance σ 2 , and χ2 (k) is the central Chi-squared distribution
with k degrees of freedom.
II. RELATED WORKS
From the caching point of view, there exist significant similarities between Cloud-RAN, small cell networks, macrocell
networks, and some vehicular networks. There are many
researches investigating the delivery stage under a certain
caching strategy, and the main target was to optimize the data
association (e.g., RRH clustering and transmit beamforming),
such as [22]–[24]. In [22] and [23], optimal base station
clustering and beamforming were investigated to reduce the
backhaul cost and transmit power cost under certain caching
strategy. In addition, the performances of different commonly
used caching strategies, such as popularity-aware caching,
random caching, and probabilistic caching, were compared in
[23]. In [24], assuming there are several small base stations
in an orthogonal frequency division multiple access (OFDMA)
macro cell, the optimal association of the users and small base
stations was investigated to reduce the long-term backhaul
bandwidth allocation. In these researches, the caching strategy
was assumed to be fixed when designing the delivery schemes,
which is because the delivery stage runs in a much shorter
timescale than the caching placement stage.
On the other hand, caching strategy has attracted widely
concern recently, the related researches mainly focused on the
reduction of file access latency [25]–[27], fronthaul/backhaul
transmissions [28], [29], or both of them [30]. In [25], a collaborative strategy of simultaneously caching in BS and mobile
devices was proposed to reduce the latency for requesting
content files. The proposed optimal strategy was to fill the BS’s
cache with the most popular files and then cache the remaining
files of higher popularity in the mobile devices. In [26], a
distributed algorithm with polynomial-time and linear-space
complexity was proposed to minimize the expected overall
access delay in a cooperative cell caching scenario. The delay
from different sources to the user was modeled as uniformly
distributed random variables within a certain range. In [27],
the probabilistic caching strategy was optimized in clustered
cellular networks, where the limited storage capacity of the
small cells and the amount of transferred contents within the
cluster were considered as two constraints to minimize the
average latency. The optimized caching probability of each
content file was obtained.
In [28], a coded caching placement was proposed to minimize the backhaul load in a small-cell network, where multicast was adopted. The file and cache sizes were assumed to
be heterogeneous. In [29], to minimize the total transmission
cost among the BSs and from the core network, each BS’s
cache storage was divided into two parts, the first part of
all the BSs cached same files with higher popularity ranks,
while the second part of all the BSs stored different files.
The cache size ratio of the two parts was optimized through
particle swarm optimization (PSO) algorithm. In [30], caching
strategy was investigated in a Cloud-RAN architecture based
networks, and the average content provisioning cost (e.g., latency, bandwidth etc.) was analyzed and optimized subjecting
to the sum storage capacity constraint. Analytical results of
the optimal storage allocation (how to partition the storage
capacity between the control BS and traffic BS) and cache
placement (decision on which file to cache) were obtained.
However, the aforementioned researches did not take wireless
transmission characteristics into consideration. In practice, the
caching strategy will have an impact on the performance of the
delivery stage, so wireless transmission performance should be
considered in order to optimize the caching strategy.
There are some papers considering wireless fading characteristics when designing/investigating caching strategy [31]–
[34]. Stochastic geometry was used to analyze large scale
networks in [33], [34]. In [33], considering a cache-enabled
two-tier heterogeneous network with one macrocell BS and
several small-cell BSs, outage probability, throughput, and
energy efficiency (EE) were analyzed. Each of the BSs caches
the most popular content files until the storage is full filled.
Numerical results showed that larger small-cell cache capacity
may lead to lower network energy efficiency when the density
of the small cells is low. In [34], the performance of probabilistic caching strategy was analyzed and optimized in a smallcell environment, and the aim was to maximize the successful
download probability of the content files. However, only
probabilistic content placement can be obtained through using
the tool of stochastic geometry [35], that is, the probability of
a certain file should be cached in the access points. In [31],
optimal caching placement was obtained through a greedy
algorithm to minimize the average bit error rate (BER) in a
macro cell with many cache-enabled helpers and each helper
can cache only one file. The user selects one helper with
the highest instantaneous received signal to noise ratio (SNR)
among the helpers which cache the requested file. If none of
the helpers cache the requested file, the user will fetch the file
from the BS. In [32], cache-enabled BSs are connected to a
central controller via backhaul links. The aim was to minimize
the average download delay. Similar to [31], the user selects
the BS with the highest SNR in the candidate BSs caching the
requested files. In [31] and [32], the authors only considered
small scale Rayleigh fading by assuming that the user has the
same large scale fading at any location, which is impractical. In
addition, they focused on single-objective optimization without
considering the fronthaul/backhaul usage.
Inspired by the aforementioned researches, in this paper,
outage probability is used to reflect the wireless transmission performance, and fronthaul usage is used to reflect
the transmission latency and power consumption etc. Outage
probability and fronthaul usage are jointly considered when
designing the caching strategy, which leverages the tradeoff
between caching the same content files to obtain lower outage probability or caching different content files to reduce
fronthaul usage. Considering the distances from each RRH
to the user are different in a Cloud-RAN environment, a more
practical fading channel model which includes both large and
small scale fading is adopted.
III. SYSTEM MODEL
It is assumed that there are N cache-enabled RRHs in a
circular cell with radius R, and the set of RRH cluster is
denoted as N = {1, 2, · · · , N }. The file library with a total
of L content files is denoted as F = {F1 , F2 , · · · , FL }, where
Fl is the l-th ranked file in terms of popularity, i.e., F1 is the
most popular content file. The popularity distribution of the
files follows the Zipf’s law [36], and the request probability
of the l-th ranked content file is
P_l = \frac{l^{-\beta}}{\sum_{n=1}^{L} n^{-\beta}},    (1)
where β ∈ [0, +∞) is the skewness factor. The popularity is
uniformly distributed over content files when β = 0 (Pl =
1/L, ∀l) and becomes more skewed towards the most popular
files as β grows, while large popularity skewness is usually
observed in wireless applications.
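As a quick illustration of (1), the short Python sketch below (an illustrative helper, not part of the paper) computes the Zipf request probabilities for a given skewness factor.

import numpy as np

def zipf_popularity(L, beta):
    """Request probabilities P_l of the l-th ranked file under Zipf's law, eq. (1)."""
    ranks = np.arange(1, L + 1)
    weights = ranks ** (-beta)
    return weights / weights.sum()

P = zipf_popularity(L=50, beta=1.5)
print(P[:5], P.sum())   # the first few most popular files dominate; probabilities sum to 1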
For simplicity, it is assumed that all content files have the
same size, and the file size is normalized to 1. Even though
the file size will not be equal in practice, each file can be
segmented into equal-sized chunks for placement and delivery
[19], [37]. Considering the BBU pool can be equipped with
sufficient storage space, it is assumed that all the L content
files are cached in the BBU pool 1 . Some of the content files
can be further cached in the RRHs in order to improve the
system’s performance, and a file can be cached in one or more
RRHs depending on the caching strategy. The n-th RRH can
cache M_n files, and generally \sum_{n=1}^{N} M_n < L. That is, the
total caching storage space in all the RRHs is smaller than
that in the BBU pool. The caching placement of the content
files in the RRHs can be denoted by a binary placement matrix
A_{L×N}, with the (l, n)-th entry

a_{l,n} = \begin{cases} 1, & \text{the } n\text{-th RRH caches the } l\text{-th file} \\ 0, & \text{otherwise} \end{cases}    (2)

indicating whether the l-th content file is cached in the n-th RRH, and \sum_{l=1}^{L} a_{l,n} = M_n, ∀n.
Single user scenario is considered in this paper. However,
the proposed algorithms can be applied in practical multiuser
systems with orthogonal multiple access technique such as
OFDMA system, in which each user is allocated with different
subcarriers and there is no interference [38]–[40]. It is assumed
that the user can only request for one file at one time, and
all the RRHs caching the requested file will serve the user.
If none of the RRHs caches the requested file, the file will
be transferred to all the RRHs from the BBU pool through
fronthauls, and then to the user from all the RRHs through
1 Generally speaking, the backhaul connecting the BBU pool and the core
network will have larger transmission bandwidth than the fronthaul, so only
the fronthaul usage reduction is considered in this paper. In practice, the BBU
pool can not cache all the content files originated in the Internet, however, if
the requested file is not cached in the BBU pool, it can be fetched from the
core network through using backhaul, then it is the same as the file is already
cached in the BBU pool as we only focus on the fronthaul usage rather than
backhaul usage.
Fig. 1. System model and file delivery scheme. Red dashed and green solid
lines represent the file fetching routes when user requests for the l1 -th and
l2 -th content file, respectively.
wireless channels. The service RRH set for the user with
respect to (w.r.t.) the l-th file is denoted as
\Phi_l = \begin{cases} \{n \,|\, a_{l,n} = 1, n \in \mathcal{N}\}, & \exists n \ \text{such that} \ a_{l,n} = 1 \\ \mathcal{N}, & a_{l,n} = 0 \ \text{for} \ \forall n \end{cases}    (3)
with cardinality |Φl |∈ {1, 2, · · · , N }, (l ∈ {1, 2, · · · , L}). The
system model and file delivery scheme are illustrated in Fig.1.
For example, when the user requests for the l1 -th file which
is not cached in any of the RRHs, the file will be transferred
from the BBU pool to all the RRHs through fronthauls and
then transmitted to the user. Then the user’s service RRH set
is Φl1 = {1, 2, 3, 4}. When the user requests for the l2 -th file
which is already cached in RRH 2 and RRH 3 via caching
placement, the service RRH set is Φl2 = {2, 3}.
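A small Python sketch of the placement matrix in (2) and the service RRH set in (3) is given below for the example in Fig. 1; the variable names and the file index used are illustrative assumptions.

import numpy as np

L, N = 9, 4                              # files and RRHs in the Fig. 1 example (L chosen arbitrarily)
A = np.zeros((L, N), dtype=int)          # binary placement matrix A_{LxN}, eq. (2)
l2 = 3                                   # suppose the l2-th file (0-based index 3) ...
A[l2, 1] = A[l2, 2] = 1                  # ... is cached in RRH 2 and RRH 3

def service_set(A, l):
    """Service RRH set Phi_l of eq. (3): the caching RRHs if any, otherwise all RRHs."""
    cached = np.flatnonzero(A[l, :])
    return set(cached + 1) if cached.size else set(range(1, A.shape[1] + 1))

print(service_set(A, l2))   # {2, 3}: only the caching RRHs serve the user
print(service_set(A, 0))    # {1, 2, 3, 4}: an uncached file is sent from all RRHs via fronthaul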
The wireless channel is assumed to be block-fading, i.e.,
the channel’s gain is kept as constant within the duration
of a block, and different blocks experience independent and
identically distributed (i.i.d.) fading. When being requested,
a file would be transmitted through different blocks of the
wireless channel. Assuming that both the RRH and the user’s
device are equipped with single antenna, the user’s received
signal from the service RRH set when requesting for the l-th
file can be expressed as
y = \sum_{n \in \Phi_l} \sqrt{p_T K d_n^{-\alpha}} \, h_n s + \text{noise},    (4)
where pT is the transmit power of each RRH, K is a constant
depending on the antenna characteristics and the average
channel attenuation, dn is the distance between the n-th RRH
and the user, α is the path loss exponent, hn ∼ CN (0, 1)
represents complex Gaussian small scale fading, s represents
the transmitted symbol with E[|s|^2] = 1, and noise denotes
complex additive white Gaussian noise (AWGN) with zero
mean and variance σ 2 .
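Under the assumptions of this model, a Monte-Carlo sketch of the received SNR implied by (4) could look as follows in Python; the parameter values are placeholders, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def received_snr(d, p_T=1.0, K=1.0, alpha=3.0, sigma2=1e-2, rng=rng):
    """One realization of the received SNR for a user served by RRHs at distances d.

    Each RRH contributes p_T * K * d_n^(-alpha) * |h_n|^2 / sigma^2 with h_n ~ CN(0, 1).
    """
    d = np.asarray(d, dtype=float)
    h = (rng.standard_normal(d.size) + 1j * rng.standard_normal(d.size)) / np.sqrt(2)
    return np.sum(p_T * K * d ** (-alpha) * np.abs(h) ** 2) / sigma2

samples = [received_snr([0.5, 0.6, 0.7]) for _ in range(10000)]
print(np.mean(samples))   # empirical mean SNR for a user served by three RRHs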
The main modeling parameters and notations are summarized in Table I.
TABLE I
MODELING PARAMETERS AND NOTATIONS
Symbol  | Definition
N       | Number of RRHs
L       | Number of content files
P_l     | Request probability of the l-th file
β       | Skewness factor of the Zipf's distribution
M_n     | The number of files that the n-th RRH can cache
A_{L×N} | Caching placement matrix
a_{l,n} | Binary variable indicating whether the l-th file is cached in the n-th RRH, (l, n)-th entry of A_{L×N}
Φ_l     | The service RRH set for the user w.r.t. the l-th file
p_T     | Transmit power of each RRH
d_n     | Distance between the n-th RRH and the user
IV. PROBLEM FORMULATION AND ANALYSIS

A. Problem Formulation

Define the normalized fronthaul usage w.r.t. the l-th file as

T_l(\mathbf{A}) = \prod_{n=1}^{N} (1 - a_{l,n}) = \begin{cases} 1, & a_{l,n} = 0 \ \text{for} \ \forall n \\ 0, & \exists n \ \text{such that} \ a_{l,n} = 1 \end{cases},    (5)

which indicates that if there is at least one copy of the requested file cached in the RRHs, there will be no fronthaul usage, i.e., T_l = 0, while if the requested file is not cached in any of the RRHs, there will be fronthaul usage, i.e., T_l = 1. Note that T_l does not depend on the user's location.
The caching strategy should be designed according to the long-term statistics over the user's locations and content file requests. The joint optimization problem can be formulated through a weighted sum of the objectives [41],

\min f_{obj}(\mathbf{A}) = \eta \underbrace{\sum_{l=1}^{L} P_l \, E_{x_0}\big[P_{out}^{(l)}(x_0)\big]}_{\text{cell average outage probability}} + (1-\eta) \underbrace{\sum_{l=1}^{L} P_l T_l}_{\text{fronthaul usage}},    (6a)

s.t. \quad \sum_{l=1}^{L} a_{l,n} = M_n,    (6b)

\qquad a_{l,n} \in \{0, 1\},    (6c)

where η ∈ [0, 1] is a weighting factor to balance the tradeoff between outage probability and fronthaul usage, E_{x_0} denotes expectation in terms of the user's location x_0, and P_{out}^{(l)}(x_0) is the outage probability when the user requests for the l-th file at location x_0. Constraint (6b) describes the caching limit of each RRH, and constraint (6c) indicates the joint optimization as a 0-1 integer problem.
Different values of η will lead to different balances between outage probability and fronthaul usage. Given η, the caching strategy can be determined through solving the optimization problem in (6). In practice, η is chosen by the decision maker (e.g., the RAN's operator) according to the system's long-term statistics of outage probability and fronthaul usage. For example, when the fronthauls' average payload is heavy, a small value of η should be chosen to reduce the fronthaul usage, and the price is to increase the outage probability. On the other hand, when the cell average outage probability is high, a large value of η should be chosen to reduce the outage probability, and the price is to increase the fronthaul usage.

B. Outage Probability Analysis

When the user requests for the l-th file at location x_0, the SNR of the received signal is given by

\gamma_l(x_0) = \sum_{n \in \Phi_l} \frac{p_T}{\sigma^2} K d_n^{-\alpha} |h_n|^2 = \sum_{n \in \Phi_l} \gamma_0 S_n |h_n|^2 = \sum_{n \in \Phi_l} \gamma_n,    (7)

where γ_0 = p_T/σ^2 is the SNR at the transmitter of each RRH, S_n = K d_n^{-α} is the large scale fading, and γ_n = γ_0 S_n |h_n|^2 represents the received SNR from the n-th RRH. For a specific file, without ambiguity, we omit the subscript of the file index l and the user's location x_0 in the following analysis.
In the service RRH set Φ with cardinality |Φ|, the RRHs with the same distance to the user are grouped together. Assuming there are I (I ≤ |Φ|) groups, the number of RRHs in the i-th group is denoted by J_i, and \sum_{i=1}^{I} J_i = |Φ|. The distance between the user and the RRHs in the i-th group is denoted by d_i (i ∈ {1, 2, 3, ..., I}). Letting λ_i = \frac{1}{\gamma_0 K d_i^{-\alpha}}, the probability density function (PDF) of the received SNR can be obtained as

f_\gamma(\gamma) = \sum_{i=1}^{I} \sum_{j=1}^{J_i} \frac{\lambda_i^{j} A_{ij}}{(j-1)!} \, \gamma^{j-1} e^{-\lambda_i \gamma},    (8)

and the cumulative distribution function (CDF) is given by

F_\gamma(\gamma) = \sum_{i=1}^{I} \sum_{j=1}^{J_i} \frac{\lambda_i^{j-1} A_{ij}}{(j-1)!} \left[ \frac{(j-1)!}{\lambda_i^{j-1}} - e^{-\lambda_i \gamma} \sum_{k=0}^{j-1} \frac{(j-1)!}{(j-1-k)!} \frac{\gamma^{j-1-k}}{\lambda_i^{k}} \right],    (9)

where

A_{ij} = \frac{(-\lambda_i)^{J_i - j}}{(J_i - j)!} \, \frac{d^{J_i - j}}{ds^{J_i - j}} \left[ M_\gamma(s) \left(1 - \frac{1}{\lambda_i} s\right)^{J_i} \right] \Bigg|_{s = \lambda_i},    (10)

and

M_\gamma(s) = \prod_{n \in \Phi} \frac{1}{1 - \gamma_0 S_n \cdot s}.    (11)

The derivations of (8) and (9) are given in Appendix A.
When the distance between any service RRH and the user is distinct, i.e., d_n ≠ d_m, ∀n ≠ m ∈ Φ, (8) and (9) are written as

f_\gamma(\gamma) = \sum_{n \in \Phi} \frac{1}{\gamma_0 S_n} \Bigg( \prod_{\substack{m \in \Phi \\ m \neq n}} \frac{S_n}{S_n - S_m} \Bigg) \exp\!\left(-\frac{\gamma}{\gamma_0 S_n}\right)    (12)

and

F_\gamma(\gamma) = \sum_{n \in \Phi} \prod_{\substack{m \in \Phi \\ m \neq n}} \frac{S_n}{S_n - S_m} \left[ 1 - \exp\!\left(-\frac{\gamma}{\gamma_0 S_n}\right) \right],    (13)
respectively.
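A compact Python sketch of the closed-form CDF in (13), together with a Monte-Carlo cross-check of the resulting outage probability, is shown below under the distinct-distance assumption; the helper name and numerical values are illustrative, not the paper's settings.

import numpy as np

rng = np.random.default_rng(1)

def outage_cdf(gamma, S, gamma0):
    """F_gamma(gamma) of eq. (13) for distinct large scale fading gains S_n."""
    S = np.asarray(S, dtype=float)
    total = 0.0
    for n, Sn in enumerate(S):
        others = np.delete(S, n)
        coeff = np.prod(Sn / (Sn - others))          # partial-fraction coefficient
        total += coeff * (1.0 - np.exp(-gamma / (gamma0 * Sn)))
    return total

# Cross-check against Monte-Carlo of gamma = sum_n gamma0 * S_n * |h_n|^2, |h_n|^2 ~ Exp(1).
S, gamma0, gamma_th = np.array([1.0, 0.6, 0.3]), 10.0, 2.0
h2 = rng.exponential(1.0, size=(200000, S.size))
snr = (gamma0 * S * h2).sum(axis=1)
print(outage_cdf(gamma_th, S, gamma0), np.mean(snr < gamma_th))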
Fig. 2. CDF of the user's received SNR at a fixed location.
The accuracy of the derived CDF of (9) (written as (13) in
special case) is illustrated in Fig. 2 through three scenarios.
Assuming there are 6 service RRHs for the user, and the
distances between the service RRHs and the user are denoted
by a vector D. The three different scenarios are (1) scenario 1:
D1 = [0.8R, 0.8R, 0.8R, 0.8R, 0.8R, 0.8R], (R is the cell radius), i.e., all the RRHs are with the same distance to the user;
(2) scenario 2: D2 = [0.6R, 0.7R, 0.7R, 0.8R, 0.8R, 0.8R],
i.e., some of the RRHs have same distance with the user;
(3) scenario 3: D3 = [0.5R, 0.6R, 0.7R, 0.8R, 0.9R, 1.0R],
i.e., all the RRHs are with different distances to the user.
It can be seen from Fig. 2 that the analytical results match
the simulation results, which demonstrates the accuracy of the
derived expression of (9).
The outage probability according to a certain SNR threshold γ_th is

P_{out}(\gamma_{th}) = F_\gamma(\gamma_{th}).    (14)

It is difficult to find a closed form solution of the cell average outage probability w.r.t. the l-th file, i.e., E_{x_0}[P_{out}^{(l)}(x_0)]. However, we can use the composite Simpson's integration in forms of polar coordinates, where the user's location is denoted by (ρ, θ) and x_0 = ρ e^{jθ}:

E_{x_0}\big[P_{out}^{(l)}(x_0)\big] = \int_{0}^{2\pi}\!\!\int_{0}^{R} P_{out}^{(l)}(\rho, \theta) f_{x_0}(\rho, \theta) \, \rho \, d\rho \, d\theta
\approx \frac{\Delta h \Delta k}{9} \sum_{u=0}^{U} \sum_{v=0}^{V} w_{u,v} \, \rho_u \, P_{out}^{(l)}(\rho_u, \theta_v) f_{x_0}(\rho_u, \theta_v),    (15)
where R is the cell radius, even integers U and V are
chosen such that ∆h = R/U and ∆k = 2π/V meeting the
requirement of calculation accuracy, ρu = u∆h, θv = v∆k,
fx0 (ρ, θ) is the probability density function of the user’s
location, which is 1/πR2 when the user’s location is uniformly
distributed in the cell, and {wu,v } are constant coefficients
(please refer to [42] and Chapter 4 in [43]).
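The composite Simpson's rule in (15) can be sketched in Python as below; the weight construction follows the standard composite rule, the outage function passed in is a placeholder, and a uniform user location is assumed.

import numpy as np

def simpson_weights(n):
    """1-D composite Simpson weights for n subintervals (n must be even): 1, 4, 2, ..., 4, 1."""
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w

def cell_average_outage(p_out, R, U=6, V=6):
    """Approximate E_{x0}[P_out(x0)] over a disk of radius R via eq. (15).

    p_out(rho, theta) returns the conditional outage probability at polar location (rho, theta);
    the user location is assumed uniform, i.e., f_{x0} = 1 / (pi * R**2).
    """
    dh, dk = R / U, 2.0 * np.pi / V
    rho = np.arange(U + 1) * dh
    theta = np.arange(V + 1) * dk
    w = np.outer(simpson_weights(U), simpson_weights(V))     # w_{u,v}
    f = 1.0 / (np.pi * R ** 2)
    vals = np.array([[p_out(r, t) for t in theta] for r in rho])
    return dh * dk / 9.0 * np.sum(w * rho[:, None] * vals * f)

# Sanity check with a constant outage probability of 0.3: the cell average should be ~0.3.
print(cell_average_outage(lambda r, t: 0.3, R=1.0))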
Substituting (5), (9), (14) and (15) into (6a), the optimization problem is formulated as a function of the caching placement matrix A_{L×N} = {a_{l,n}}. However, the problem is a 0-1 integer nonlinear problem, and it is difficult to obtain a closed form solution. The following section will focus on how to solve this problem.

V. CACHING PLACEMENT SCHEME

In this section, two efficient approaches are proposed to solve the joint optimization problem: one is the GA-based approach and the other is the mode selection approach.

A. Genetic Algorithm Based Approach
Genetic algorithm is inherently suitable for solving optimization problems with binary variables [44]. The algorithm
structure is shown in Fig. 3. Firstly, Np candidate caching
placement matrices are generated, known as the initial population (with population size Np ), and each matrix is called
an individual. Then the objective value of each individual
is evaluated through (6a). Ne individuals with best objective
values are chosen as elites and passed into next generation (children of current generation population) directly. The
rest of the next generation population are generated through
crossover and mutation operations. The crossover function
operates on two individuals (known as parents) and generates
a crossover child, and the mutation function operates on a
single individual and generates a mutation child. The number of individuals generated through crossover and mutation
operations are denoted as Nc and Nm , respectively, where
Ne + Nc + Nm = Np , and the crossover fraction is defined
c
as fc = NcN
+Nm . The selection function selects 2Nc and
Nm individuals from the current generation for the crossover
and mutation function, respectively, where some individuals
will be selected more than once. Stochastic uniform sampling
selection [45] is adopted, and individuals with lower objective
values in current generations will have a higher probability to
generate offsprings. Repeat the evaluation-selection-generation
procedures until termination criterion is reached. Finally, the
best individual in the current population is chosen as the output
of the algorithm. The initial population, crossover function and
mutation function of the proposed GA approach are described
as follows.
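The evaluation-selection-generation loop just described can be summarized by the following Python skeleton; it is a schematic outline under assumed helper functions (objective, select, crossover, mutate) and a simple generation-count stopping rule, not the authors' implementation.

def genetic_algorithm(init_population, objective, select, crossover, mutate,
                      Ne, Nc, Nm, max_generations):
    """Skeleton of the GA loop in Fig. 3: keep elites, fill the rest by crossover/mutation."""
    population = init_population                      # list of candidate placement matrices
    for _ in range(max_generations):
        scored = sorted(population, key=objective)    # lower objective value (6a) is better
        elites = scored[:Ne]                          # elites pass directly to the next generation
        children = [crossover(*select(scored, 2)) for _ in range(Nc)]
        mutants = [mutate(select(scored, 1)[0]) for _ in range(Nm)]
        population = elites + children + mutants      # Ne + Nc + Nm = Np individuals
    return min(population, key=objective)             # best individual of the final population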
1) Initial Population: The initial population is created as a
set of {AL×N }. For each column in each individual, Mn out
of the first L0 entries (i.e., {a1,n , a2,n , · · · , aL0 ,n }) are set to
be one randomly, and all the remaining entries are set to be
zero, where
L_0 = \sum_{n=1}^{N} M_n < L    (16)
is based on the fact that the total different files with higher popularity can be cached in the RRHs are {Fl |l = 1, 2, · · · , L0 }.
There is no benefit to cache files {Fl |l > L0 } with lower
popularity.
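A possible Python sketch of this initialization (randomly choosing M_n of the first L_0 popularity ranks per column) is shown below; it is an assumption-level illustration rather than the authors' code.

import numpy as np

def random_individual(L, M, rng):
    """One candidate placement matrix: column n caches M[n] files drawn from the top L_0 ranks."""
    N = len(M)
    L0 = sum(M)                                   # only the L_0 most popular files are candidates
    A = np.zeros((L, N), dtype=int)
    for n in range(N):
        chosen = rng.choice(L0, size=M[n], replace=False)
        A[chosen, n] = 1
    return A

rng = np.random.default_rng(2)
population = [random_individual(L=50, M=[5] * 7, rng=rng) for _ in range(50)]   # Np = 50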
2) Crossover Function: The crossover function generates
a child Ac from parents A1 and A2 . A two-point crossover
function is used, which is described in Algorithm 1, in which
steps 9 to 14 are heuristic operations to meet constraint (6b).
Fig. 3. Genetic algorithm structure.
Algorithm 1: Crossover function
1: Get parents A_1 = {a_{l,n}^{(1)}} and A_2 = {a_{l,n}^{(2)}} from the selection function, and initialize their child A_c = {a_{l,n}^{(c)}} = 0_{L×N}.
2: for n = 1, 2, ..., N do
3:   Generate random integers l_1, l_2 ∈ [1, L_0], l_1 ≠ l_2.
4:   if l_1 < l_2 then
5:     Replace a_{l,n}^{(1)}, l = {l_1, l_1+1, ..., l_2} of A_1 with a_{l,n}^{(2)}, l = {l_1, l_1+1, ..., l_2} of A_2, and then set a_{l,n}^{(c)} = a_{l,n}^{(1)}, ∀l ∈ {1, 2, ..., L}.
6:   else
7:     Replace a_{l,n}^{(2)}, l = {l_2, l_2+1, ..., l_1} of A_2 with a_{l,n}^{(1)}, l = {l_2, l_2+1, ..., l_1} of A_1, and then set a_{l,n}^{(c)} = a_{l,n}^{(2)}, ∀l ∈ {1, 2, ..., L}.
8:   end
9:   while \sum_{l=1}^{L} a_{l,n}^{(c)} > M_n do
10:    Set a nonzero a_{l,n}^{(c)} to 0 in descending order of l.
11:  end
12:  while \sum_{l=1}^{L} a_{l,n}^{(c)} < M_n do
13:    Set a zero a_{l,n}^{(c)} to 1 in ascending order of l.
14:  end
15: end
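As a rough illustration, the following Python sketch mirrors the column-wise two-point crossover and the repair steps (9 to 14) of Algorithm 1; the function name and array layout are assumptions for illustration, not the authors' MATLAB implementation.

import numpy as np

def crossover(A1, A2, M, L0, rng):
    """Column-wise two-point crossover of two L x N placement matrices (Algorithm 1 sketch)."""
    L, N = A1.shape
    Ac = np.zeros_like(A1)
    for n in range(N):
        l1, l2 = rng.choice(L0, size=2, replace=False)
        if l1 < l2:
            col = A1[:, n].copy()
            col[l1:l2 + 1] = A2[l1:l2 + 1, n]       # splice the segment of parent 2 into parent 1
        else:
            col = A2[:, n].copy()
            col[l2:l1 + 1] = A1[l2:l1 + 1, n]       # splice the segment of parent 1 into parent 2
        # Repair (steps 9 to 14): enforce exactly M[n] cached files in this column.
        for l in np.flatnonzero(col)[::-1]:          # drop surplus ones, highest rank index first
            if col.sum() <= M[n]:
                break
            col[l] = 0
        for l in np.flatnonzero(col == 0):           # add missing ones, lowest rank index first
            if col.sum() >= M[n]:
                break
            col[l] = 1
        Ac[:, n] = col
    return Ac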
3) Mutation Function: The mutation function operates on a single individual and generates its mutation child. For each column of the individual, one of the first L_0 entries is randomly selected and its value is flipped (0 to 1 and vice versa); then steps 9 to 14 described in Algorithm 1 are executed to meet constraint (6b). The mutation operation reduces the probability that the algorithm converges to local minima.
If N_g generations are evaluated, there is a total of N_p N_g calculations of the objective values. In order to further reduce the computational complexity of the caching strategy, a mode selection approach is proposed in the next subsection.

B. Mode Selection Approach

There are two particular caching placement schemes: one is the most popular content (MPC) caching, and the other one is the largest content diversity (LCD) caching [32], [46], [47]. In MPC, each RRH caches the most popular files, i.e., the n-th RRH caches {F_l | l = 1, 2, ..., M_n}, which will have low outage probability but high fronthaul usage. In the LCD scheme, a total of L_0 = \sum_{n=1}^{N} M_n (< L) different most popular content files are cached in the RRHs, which can achieve the lowest fronthaul usage but relatively high outage probability. If the LCD scheme is adopted in Cloud-RAN, the impact of the locations of the cached content files on the cell average outage probability needs to be considered. Assuming the locations of the user are uniformly distributed in the cell, caching the most popular files in the RRH nearest to the cell center will achieve better outage probability performance, which is similar to the RRH placement problem [48]. Therefore, for Cloud-RAN, we improve the LCD scheme and propose a location-based LCD (LB-LCD) scheme, which is described in Algorithm 2.

Algorithm 2: Proposed LB-LCD caching strategy
1: Sort the RRH set as N_s = {n_i | i = 1, 2, ..., N, D_{n_1} ≤ D_{n_2} ≤ ... ≤ D_{n_N}}, where D_{n_i} denotes the distance between the n_i-th RRH and the cell center.
2: Fill the caches of the RRH set N_s in sequence from n_1 to n_N with content files {F_l | l = 1, 2, ..., \sum_{n=1}^{N} M_n} in ascending order of l.

For example, there are 3 RRHs {1, 2, 3}, and each RRH can cache 3 files, so that all the RRHs together can cache 9 different content files. The distance between RRH i and the cell center is D_i, assuming D_1 < D_2 < D_3. The LB-LCD caching strategy is illustrated in Table II.
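For concreteness, a minimal Python sketch of the LB-LCD placement in Algorithm 2 is given below; the helper name and inputs are assumptions for illustration, not part of the paper.

import numpy as np

def lb_lcd_placement(distances_to_center, M, L):
    """Location-based LCD placement (Algorithm 2 sketch).

    distances_to_center : length-N array, distance of each RRH to the cell center
    M                   : length-N list of cache sizes M_n
    L                   : number of content files (popularity ranks 1..L)
    Returns a binary L x N placement matrix A.
    """
    N = len(distances_to_center)
    A = np.zeros((L, N), dtype=int)
    order = np.argsort(distances_to_center)   # nearest RRH to the center first
    next_file = 0                              # 0-based index of file F_1
    for n in order:
        for _ in range(M[n]):
            if next_file < L:
                A[next_file, n] = 1            # cache the next most popular file
                next_file += 1
    return A

# Example of Table II: 3 RRHs, each caching 3 files, D1 < D2 < D3.
A = lb_lcd_placement([0.2, 0.5, 0.8], [3, 3, 3], L=9)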
TABLE II
EXAMPLE OF THE LB-LCD CACHING STRATEGY
RRH 1 caches: F1, F2, F3
RRH 2 caches: F4, F5, F6
RRH 3 caches: F7, F8, F9

Proposition 1. The objective value of both the MPC scheme and the LCD scheme is linear with η. When η is small, i.e., minimization of the fronthaul usage is weighted more, the LCD
scheme is superior to the MPC scheme. When η is large, i.e.,
minimization of the cell average outage probability is weighted
higher, the MPC scheme is superior to the LCD scheme. There
exists a crossover point of the two schemes, the weighting
factor of the crossover point is
\eta_0 = \frac{1}{1 + \dfrac{\sum_{l=1}^{L} P_l E_{x_0}\big[P_{out,MPC}^{(l)}(x_0) - P_{out,LCD}^{(l)}(x_0)\big]}{\sum_{l=1}^{L} P_l (T_{l,LCD} - T_{l,MPC})}}.    (17)

When M_n = M, ∀n, η_0 can be further expressed as

\eta_0 = \frac{1}{1 + \dfrac{\sum_{l=1}^{NM} P_l E_{x_0}\big[P_{out,LCD}^{(l)}(x_0) - P_{out,MPC}^{(l)}(x_0)\big]}{\sum_{l=M+1}^{NM} P_l}}.    (18)
Proof. Please refer to Appendix B.
Based on proposition 1, we propose a mode selection
caching strategy. The RAN can make a decision of the tradeoff
according to the statistics of cell average outage probability
and fronthaul usage in the cell, and a tradeoff weighting
factor η is chosen. When η ≤ η0 , select the LB-LCD caching
scheme, while when η > η0 , select the MPC caching scheme.
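A minimal Python sketch of this mode selection rule under the equal-cache-size assumption of (18) is shown below; the function names and inputs (precomputed per-file outage expectations, e.g., via (9) and (15)) are illustrative assumptions.

def crossover_weight(P, out_lcd, out_mpc, M, N):
    """Compute the crossover weighting factor eta_0 of (18).

    P        : request probabilities P_l, l = 1..L (Zipf), 0-based list
    out_lcd  : E_{x0}[P_out,LCD^(l)(x0)] for l = 1..N*M (precomputed)
    out_mpc  : E_{x0}[P_out,MPC^(l)(x0)] for the same files
    M, N     : per-RRH cache size and number of RRHs (M_n = M for all n)
    """
    num = sum(P[l] * (out_lcd[l] - out_mpc[l]) for l in range(N * M))
    den = sum(P[l] for l in range(M, N * M))
    return 1.0 / (1.0 + num / den)

def select_mode(eta, eta0):
    """Pick the caching scheme for a given tradeoff weight eta."""
    return "LB-LCD" if eta <= eta0 else "MPC"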
C. Computational Complexity Analysis
The number of objective function calculations w.r.t. a certain value of η is evaluated to measure the complexities of the exhaustive search method, the proposed GA approach and the proposed mode selection approach. The complexity of exhaustive search is \prod_{n=1}^{N} \binom{L}{M_n}. When M_n = M, ∀n, it is clear that the complexity of exhaustive search is exponential w.r.t. the number of RRHs, i.e., \binom{L}{M}^{N}. The complexity of the
proposed GA is Np Ng , where Np and Ng are the population
size and the number of generations evaluated, respectively. Ng
is determined by the convergence behavior of the GA, while the complexity of the proposed mode selection scheme is only 2. The reason is that, once the value of η_0 is solved from (17), the RAN can choose a mode between MPC and LCD based on whether η > η_0, and only 2 objective function calculations are involved in solving the equation. Furthermore, once η_0 is obtained, the caching schemes for all values of η are obtained.
The computational complexities of the three approaches are
summarized in Table III.
TABLE III
COMPUTATIONAL COMPLEXITY
Scheme                           | Objective function calculations
Exhaustive search                | \prod_{n=1}^{N} \binom{L}{M_n}
Proposed GA-based approach       | N_p N_g
Proposed mode selection approach | 2
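To make the gap concrete, a small Python check of these counts for the simulation setting used later (L = 50, M = 5, N = 7, and, purely as an assumption here, N_p = 50 with N_g = 100 generations) might look as follows.

from math import comb

L, M, N = 50, 5, 7          # file library size, per-RRH cache size, number of RRHs
Np, Ng = 50, 100            # GA population size and an assumed number of generations

exhaustive = comb(L, M) ** N      # objective evaluations for exhaustive search
ga = Np * Ng                      # objective evaluations for the GA-based approach
mode_selection = 2                # objective evaluations for mode selection

print(f"exhaustive search: {exhaustive:.3e}")   # roughly (2.1e6)^7, far beyond practical reach
print(f"GA-based approach: {ga}")
print(f"mode selection:    {mode_selection}")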
VI. NUMERICAL RESULTS
In this section, the performances of the proposed two
caching strategies are investigated through some representative
numerical results. Firstly, the accuracy of the cell average
outage probability and fronthaul usage analysis are verified
by evaluating two typical caching schemes, i.e., the MPC and
the LB-LCD schemes. Then the effectiveness of the proposed
GA approach is verified by comparing its performance with
exhaustive search, where the Pareto optimal solutions [41] of
the joint optimization problem are presented. In the proposed
GA approach, placement matrices of the MPC and the LBLCD schemes are added into the initial population to further
improve the performance. Finally, performances of different
caching strategies are compared and the convergence behavior
of the proposed GA is presented.
The MATLAB software is used for the Monte-Carlo simulations and numerical calculations. Throughout the simulations, it is assumed that each RRH has the same cache size M_n = M. The transmit power of each RRH is p_T = P/N, where P is the total transmit power in the cell and P/σ² = 23 dB. The constant
K in (4) is chosen such that the received power attenuates
20 dB when the distance between the RRH and the user is R
[49]. In such setting, the outage probability does not depend
on the absolute value of R, that is, R can be regarded as
the normalized radius. The main simulation parameters are
summarized in Table IV.
TABLE IV
SIMULATION PARAMETERS
Parameter                        | Value
Path loss exponent α             | 3
P/σ²                             | 23 dB
SNR threshold γ_th               | 3 dB
User location distribution       | uniform
U and V in Simpson's integration | 6, 6
Population size N_p in GA        | 50
Selection function               | stochastic universal sampling
Number of elites N_e             | 10
Crossover fraction f_c           | 0.85
A. MPC and LB-LCD Caching Placements
Cell average outage probability (\sum_{l=1}^{L} P_l E_{x_0}[P_{out}^{(l)}(x_0)]) and average fronthaul usage (\sum_{l=1}^{L} P_l T_l) of the MPC and LB-LCD schemes are shown in Fig. 4 and Fig. 5, respectively.
There are L = 50 files, N = 7 RRHs with one RRH located
at the cell center and the other 6 RRHs evenly distributed on
the circle with radius 2R/3 [23], [50], and each RRH can
cache M = 5 files. Both the simulation and numerical results
are shown in this subsection. In the Monte-Carlo simulations,
there are 10^4 realizations of the user's different locations and
content file requests.
Cell average outage probability with different SNR threshold γth and popularity skewness factor β is illustrated in Fig.
4. It can be seen that the outage probability of both the MPC
and the LB-LCD schemes increases with the increase of γth ,
and the outage probability of the MPC scheme is lower than
that of the LB-LCD scheme. The MPC curves of different
values of β coincide. The reason is as follows: according to
the file delivery scheme and the MPC caching strategy, no
matter whether the requested file is cached in the RRHs or
not, the file will be transmitted to the user from all the RRHs,
Fig. 4. Cell average outage probability versus SNR threshold γ_th (dB). L = 50, M = 5, N = 7.
Fig. 5. Cell average fronthaul usage versus popularity skewness factor β. L = 50, M = 5, N = 7.
thus the cell average outage probability w.r.t. any l-th file is the same, denoted as E_{x_0}[P_{out}^{(l)}(x_0)] = P_{cell,out}; then the cell average outage probability expected over all the file requests is

\sum_{l=1}^{L} P_l E_{x_0}\big[P_{out}^{(l)}(x_0)\big] = P_{cell,out} \cdot \sum_{l=1}^{L} P_l = P_{cell,out},    (19)

which is not related to β, i.e., no matter how the popularity is distributed over the files, the cell average outage probability is kept as a constant.
For the LB-LCD scheme, the cell average outage probability reaches the minimum value when β = 0, and it increases as β increases and approaches the maximum value when β is large enough, e.g., β = 2, 2.5, 3. The reason is explained as follows. According to the file delivery scheme and the LB-LCD caching strategy, if the requested file is not cached in any of the RRHs, the file will be fetched from the BBU pool, and then transmitted to the user from all the RRHs. The outage probability will then achieve the minimum value due to wireless diversity. If the requested file is cached in the RRHs (only in one of the RRHs), the file will be transmitted to the user from only one RRH, and the outage probability will be relatively higher. When β = 0, P_l = 1/L, ∀l, i.e., the request probability is the same for all the content files, which means that the cell average outage probability depends evenly on the outage probability of each file, and the outage probability of the files which are not cached in the RRHs is lower than that of the files cached in the RRHs. As β increases, the popularity becomes more skewed towards the first few files with high ranks, i.e., the cell average outage probability depends more on these files, and there is a higher probability that there is only one copy of each of these files cached in one certain RRH, whose outage probability is high; therefore the outage probability increases as β increases and the curve with β = 0 is the lower bound. Note that

\sum_{l=1}^{5} P_l = \begin{cases} 0.90, & \beta = 2.0 \\ 0.96, & \beta = 2.5 \\ 0.99, & \beta = 3.0 \end{cases}    (20)

which means that when β is large enough (β > 2.0), the cell average outage probability depends mainly on the first 5 most popular files. These 5 files are cached in the RRH located at the cell center, and the cell average outage probability w.r.t. any one of the 5 files is the same, so the cell average outage probability is nearly the same for different values of β (> 2.0), which approaches the maximum value.
Fig. 5 shows the fronthaul usage of the two caching schemes. Because the fronthaul usage is independent of γ_th, the curve versus different values of the skewness factor β is evaluated. The LB-LCD scheme has lower fronthaul usage than the MPC scheme, because the LB-LCD scheme caches a total of MN = 5 × 7 = 35 different files in the RRHs while the MPC scheme caches only M = 5 different files. The average fronthaul usage of both the MPC and LB-LCD schemes decreases with the increase of β for the same reason: as β increases, the popularity becomes more skewed towards the first few files with higher ranks, and there is a higher probability that these few files are cached in the RRHs. As shown in (20), when β = 3, the fronthaul usage almost entirely depends on the first 5 popular files; since they are all cached in the RRHs under both the MPC and the LB-LCD caching strategies, the fronthaul usages of both schemes approach zero.
It is seen from Fig. 4 and Fig. 5 that the analytical results are highly consistent with the simulation results. Therefore, analytical results will be used instead of time-consuming simulations in the following evaluations.
B. Tradeoffs between Cell Average Outage Probability and
Fronthaul Usage
Tradeoffs between cell average outage probability and fronthaul usage obtained by exhaustive search and the proposed GA-based approach are shown in this subsection. There are three RRHs, whose polar coordinates are (R/4, 0), (R/3, 2π/3), and (R/2, 4π/3), respectively. There are L = 9 content files, and the popularity skewness factor β = 1.5.
Fig. 6. Cell average outage probability and fronthaul usage tradeoff region. L = 9, M = 2, N = 3, β = 1.5. Each red point corresponds to a caching placement, and the 5 points emphasized by small blue circles are the Pareto optimal solutions of the joint optimization problem.
Fig. 6 is focused on the scenario when the cache size
M is equal to 2. All tradeoffs between cell average outage
probability and fronthaul usage are given by exhaustive search.
Since the popularities of the content files {Pl }, the fronthaul
(l)
usage {Tl } and the outage probability {Pout } are all discontinuous values w.r.t. integer l, the cell average outage probability
and average fronthaul usage region of all caching placements
is a set of discrete points as shown in the figure, where
each red point corresponds to a caching placement. The 5
points emphasized by small blue circles are the Pareto optimal
solutions (nondominated set [41]) of the joint optimization
problem, i.e., there is no other point dominating with the
Pareto optimal solutions in terms of both the cell average
outage probability and fronthaul usage.
The cell average outage probability is minimized when the
files cached in each RRH are the same, and the popularity of
these cached files will have an impact on the average fronthaul
usage. The corresponding points of these caching placements
lie on the line segment AD, i.e., line segment AD represents
the lower bound of the cell average outage probability. The
MPC scheme represented by point A achieves the minimum
fronthaul usage among these caching placements. The reason
is that the MPC scheme caches the most popular files which
can reduce the fronthaul usage to a minimum value among
these caching placements.
The fronthaul usage is minimized when all RRHs cache
different files with higher popularity ranks, and the cache
locations of these files will have an impact on the cell average
outage probability. The corresponding points of these caching
placements lie on the line segment BC, i.e., line segment BC
represents the lower bound of the average fronthaul usage.
The LB-LCD scheme represented by point B achieves the
minimum cell average outage probability among these caching
placements. The reason is that the LB-LCD scheme caches
the files with higher ranks in the RRHs near to the cell center,
which has the minimum cell average outage probability among these caching placements.

Fig. 7. Cell average outage probability and fronthaul usage tradeoff. L = 9, M = {1, 2, 3}, N = 3, β = 1.5. Unless otherwise specified in the figure, η is evaluated from 0 to 1 with a step of 0.1 in the proposed GA-based approach, i.e., η = {0.1, 0.2, ..., 1}.
Fig. 7 shows the Pareto optimal tradeoffs between the
cell average outage probability and the fronthaul usage with
different cache size M . The results obtained through the
proposed GA approach are almost the same as exhaustive
search, which means that the proposed GA approach can
achieve near-optimal performance. The minimum cell average
outage probability is achieved at point A1 when M = 1,
A2 when M = 2, and A3 when M = 3, respectively. The
minimum cell average outage probability represented by the
three points are the same, and the corresponding caching
placements of the three points are the MPC scheme. The
reason is that according to the file delivery scheme and the
MPC caching placement, all the RRHs will serve the user no
matter how many files the RRHs can cache. It is also seen that
the corresponding fronthaul usage of the three points decrease
as M increases, which is obvious because larger cache size
can cache more files thus the fronthaul usage can be reduced.
On the other hand, the minimum fronthaul usage is achieved
at point B1 when M = 1, B2 when M = 2, and B3 when
M = 3, respectively. The corresponding caching placements
of the three points are the LB-LCD scheme. Obviously, the
corresponding fronthaul usage of the three points decreases as
M increases. The fronthaul usage is zero at point B3 when
M = 3, the reason is that all the RRHs can cache a total
of M N = 3 × 3 = 9 files, which is equal to the number
of files in the file library, i.e., all the files are cached in the
RRHs. The corresponding cell average outage probability of
the three points increases as M increases. The reason is that
according to the file delivery scheme and the LB-LCD caching
strategy, more different files can be cached in the RRHs as M
increases, however, there is only one copy of each file and
the outage probability w.r.t. these cached files will be higher, i.e., the more different files are cached in the RRHs, the higher the cell average outage probability is.
TABLE V
OPTIMAL CACHING STRATEGY OBTAINED BY THE PROPOSED GA (L = 9, M = 2, N = 3, β = 1.5)
η = 0:   f_obj = 0.0689, placement [1 3 5; 2 4 6]
η = 0.1: f_obj = 0.1186, placement [1 3 5; 2 4 6]
η = 0.2: f_obj = 0.1651, placement [1 1 4; 2 3 5]
η = 0.3: f_obj = 0.1938, placement [1 1 1; 2 3 4]
η = 0.4: f_obj = 0.2087, placement [1 1 1; 2 3 4]
η = 0.5: f_obj = 0.2144, placement [1 1 1; 3 2 2]
η = 0.6: f_obj = 0.2077, placement [1 1 1; 2 2 2]
η = 0.7: f_obj = 0.1905, placement [1 1 1; 2 2 2]
η = 0.8: f_obj = 0.1733, placement [1 1 1; 2 2 2]
η = 0.9: f_obj = 0.1561, placement [1 1 1; 2 2 2]
η = 1.0: f_obj = 0.1390, placement [1 1 1; 2 2 2]
According to the above evaluations, the MPC and LB-LCD caching schemes are two special solutions of the joint
optimization problem when η = 1 and η = 0, respectively. The
former can achieve the lowest cell average outage probability
while the latter can achieve the minimum fronthaul usage. The
proposed GA-based approach can achieve different tradeoffs
between the cell average outage probability and fronthaul usage according to different weighting factors, which can achieve
better performance than the MPC and LB-LCD schemes.
C. Performances of the GA-based Approach and Mode Selection Approach
The performances of the proposed GA-based approach and
the mode selection approach are analyzed in this subsection.
Besides the MPC and the LB-LCD caching schemes, two other
widely used caching strategies are evaluated for comparison,
one is random caching, where each RRH caches the content
files independently and randomly regardless of the files’
popularity distribution, the other one is probabilistic caching,
where each RRH caches the files independently and randomly
according to the files’ popularity distribution, i.e., high-ranked
files have higher probability to be cached [23], [51]. There are
L = 50 files², N = 7 RRHs with one RRH located at the cell center and the other 6 RRHs evenly distributed on the circle with radius 2R/3, and β = 1.5.
Note that as the cache size M increases, the GA-based
approach should evaluate more values of η in order to obtain
all the Pareto optimal solutions of the joint optimization
problem. For example, when M = 3, additional values of
η = 0.15 and η = 0.45 are evaluated to obtain the Pareto
optimal solutions represented by point C and D.
Table V shows all the optimal caching placements obtained
by the proposed GA-based approach when M = 2. For
illustration, we use a M × N matrix to represent the caching
placement, with the (m, n)-th entry bm,n ∈ {1, 2, 3, · · · , L}
denotes the file index cached in the m-th cache space of
the n-th RRH. From (18), η0 = 0.3312. It can be seen
that the LB-LCD scheme is the optimal placement when
η = 0, 0.1 < η0 , while the MPC scheme is the optimal solution
when η = 0.6 ∼ 1.0 > η0 , and some files are duplicately
cached in the RRHs when η = 0.2 ∼ 0.5.
Fig. 8. Objective function value versus weighting factor η (the crossover points of the MPC and the LB-LCD schemes are marked for β = 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0). L = 50, M = 5, N = 7.
Fig. 8 shows the objective value of different caching strategies with M = 5. It can be seen from the figure that as the
weighting factor η increases, i.e., more focus on minimization
of outage probability, the objective value of MPC decreases
linearly, while the objective value of the LB-LCD scheme
increases linearly. The horizontal coordinate of the crossover
point of the MPC and LB-LCD scheme (η0 ) approaches zero
as the popularity skewness factor β increases. This is because when β increases, the request probabilities P_l of the first few popular files increase significantly, and then \sum_{l=M+1}^{NM} P_l → 0 in (18), thus η_0 → 0. That is, as β increases, the MPC
scheme will dominate with most values of η. This can also be
explained as follows. When β increases, the average fronthaul
usage will depend more and more on the few files with higher
ranks. These files can be cached in the RRHs under both
of the MPC and the LB-LCD schemes, thus the MPC and
the LB-LCD schemes are equivalent in terms of fronthaul
usage, while the MPC can achieve lower outage probability.
Therefore the MPC scheme is superior to the LB-LCD scheme.
The crossover point η0 = 0.23 when β = 1.5 calculated
through (18) exactly matches the simulation results. The above
mentioned results are consistent with Proposition 1.
The random caching strategy has a relative poor performance for all values of η, which is because the files cached
in the RRHs are selected randomly, there is neither a high
probability to cache the same file for reducing the outage probability nor to cache different high-ranked files for reducing the
fronthaul usage. While the probabilistic caching strategy can
achieve better performance than the proposed mode selection
approach in the middle range of η, e.g., for η = 0.2 ∼ 0.4,
2 Alougth there is a huge amount of content files in practice, they can
be classified into different categories [46], and the number of files in each
category (or subcategory) is relatively limited, so the proposed algorithms can
be performed on each category, the number of files evaluated in the simulation
will not lose meaningful insights of the tradeoff caching optimization.
12
1
1
0.7
proposed GA
proposed mode selection
0.9
0.6
0.8
0.8
0.7
0.7
0.6
0.6
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1
Objective function value
Cell average outage probability
proposed GA
proposed mode selection
Average fronthaul usage
proposed GA
proposed mode selection
0.9
Crossover points of the MPC
and the LB-LCD schemes with
M = 1, 2, 3, 4, 5, 6, 7
0.5
LB-LCD
0.4
MPC
0.3
0.2
0.1
0
0
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0
1
0
0.1
0.2
0.3
0.4
Weighting factor η
where both the cell average outage probability minimization
and the fronthaul usage reduction are treated approximate
equally. Which is because that, in probabilistic caching, each
RRH will cache the high-ranked files with a higher probability,
so there is a high probability for different RRHs to cache
the same high-ranked files, which can reduce the cell average
outage probability, and meanwhile the inherent randomness
in the placement makes it possible to cache different files to
reduce the fronthaul usage. It is also seen that the proposed
GA-based approach can achieve better performance than the
other caching strategies, for instance, the objective function
value of the proposed GA algorithm is 18.25% lower than
a typical probabilistic caching scheme when η = 0.4, and
this improvement goes up to 87.9% when η = 1, the average
improvement over all values of η is 47.5%.
Cell average outage probability and fronthaul usage of
the proposed GA and the proposed mode selection approach
versus weighting factor are shown in Fig. 9. Note that the
mode selection scheme is actually the LB-LCD scheme when
η ≤ η0 and the MPC scheme when η > η0 , respectively.
For the proposed GA approach, the solution is exactly the
LB-LCD scheme when η = 0, as η increases, the cell
average outage probability decreases and the fronthaul usage
increases, and they reach the lower and upper bounds when
η > 0.6, respectively, where the solution is the MPC scheme.
The proposed GA approach can adjust the caching placement
according to different weighting factors η while the mode
selection scheme only chooses a caching placement between
the MPC and the LB-LCD schemes based on whether η > η0 ,
so the proposed GA approach can achieve better performance
than the mode selection scheme. However, the computational
complexity of the mode selection scheme is extremely low.
Fig. 10 shows the performance of the proposed GA and the
mode selection scheme with different cache size M . It can
be seen from the figure that the mode selection scheme can
achieve near-optimal performance over a wide range of the
weighting factor η. The vertex of the mode selection scheme,
i.e., the crossover point of the MPC and the LB-LCD schemes
0.6
0.7
0.8
0.9
1
Fig. 10. Objective function value versus weighting factor η. L = 50, N =
7, β = 1.5.
0.6
Mean objective function value of the population
Fig. 9. Cell average outage probability and fronthaul usage versus weighting
factor η. L = 50, M = 5, N = 7, β = 1.5.
0.5
Weighting factor η
η
η
η
η
0.5
=
=
=
=
0.2
0.4
0.6
0.8
0.4
0.3
0.2
0.1
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Generation index
Fig. 11. Convergence behavior of the proposed GA approach. L = 50, M =
5, N = 7, β = 1.5.
moves toward the origin as the cache size M increases, i.e.,
the MPC scheme will dominate with most values of η as
M increases. The reason is explained as follows. When M
increases, more content files can be cached in the RRHs. The
fronthaul usage depends mostly on the first few popular files
cached in the RRHs, so the fronthaul usage will tend to be
the same between the two schemes as M increases. The MPC
scheme can achieve lower outage probability, further more,
the cell average outage probability of the LB-LCD scheme
increases as M increases, so the objective value of the MPC
scheme will be much lower than that of the LB-LCD scheme,
and the MPC scheme is superior to the LB-LCD scheme with
most values of η.
Fig. 11 shows the convergence behavior of the proposed GA
approach. It can be seen from the figure that the mean objective
value of the population converges within average 8 generations. The computational complexity is Ng Np = 8×50 = 400.
While the computational complexity of the exhaustive search
13
7
is 50
= 1.92 × 1044 , which is not feasible in practice. As
5
stated earlier, the popularity of the content files will remain the
same for a relative long period, so the convergence behavior
of the caching placement algorithm is not time-critical. Unlike
the delivery stage, which needs to make an instant decision
for coping with the dynamics of mobile networking systems,
e.g., the rapid change of channel state information, it is not
necessary for the caching strategy to make an instant decision
due to the slow change of the statistics data (e.g., the request
probabilities of the files). In addition, if parallel computing is
adopted, the population size of the proposed GA approach can
be increased without introducing additional execution time,
while the converging speed of the GA will be accelerated.
Thus, the GA approach can perform well with a satisfying
converging speed in practice.
VII. C ONCLUSION
In this paper, we have investigated tradeoff caching strategy
in Cloud-RAN for future mobile communications. In order
to jointly minimize the cell average outage probability and
fronthaul usage, the optimization problem is formulated as a
weighted sum of the two objectives, with weighting factor
η (and 1 − η). Analytical expressions of cell average outage
probability and fronthaul usage have been presented and
verified through simulations. Performances of two particular
caching strategies have been analyzed, namely the MPC and
the LB-LCD schemes. When the minimization of the cell
average outage probability is more focused on, the MPC
scheme is superior to the LB-LCD scheme, while the latter
is superior to the former in the opposite situation, i.e., where
the reduction of average fronthaul usage is more focused on.
When the content files’ popularity skewness factor β is larger,
or the cache size of each RRH increase, the MPC scheme
will dominate in a wide range of η. Two heuristic approaches
have been proposed to solve the joint optimization problem:
one is the GA based approach which can achieve nearly the
same optimal performance of exhaustive search, while the
computational complexity is significantly reduced; the other
is the mode selection approach with extremely low computational complexity, which can obtain near-optimal performance
within a wide range of η. Compared with a typical probabilistic
caching scheme, the proposed GA approach can reduce the
objective function value by up to 45.7% on average and the
proposed two mode selection caching strategy can provide an
average improvement of 36.9%. In practice, the RAN can
make a decision of the tradeoff according to the system’s
statistics of fronthaul traffic and outage probability, and then
adopt caching strategy through the proposed schemes.
(A.2)
The moment generation function (MGF) [53] of the random
variable γn is
Z ∞
fγn (γ)esγ dγ
Mγn (s) =
0
Z ∞
1
γ
(A.3)
exp −
esγ dγ
=
γ0 Sn
γ0 Sn
0
1
,
=
1 − γ0 Sn · s
and the range of convergence (ROC) is Re (s) < γ01Sn . Since
the RRHs are distributed at different locations, {γn , n ∈ Φ}
is
Pindependent of each other, the MGF of received SNR γ =
n∈Φ γn is given by
Mγ (s) =
Y
Mγn (s) =
n∈Φ
Y
n∈Φ
1
,
1 − γ0 Sn · s
(A.4)
T
and the ROC is n∈Φ Re (s) < γ01Sn .
Since there are I distinct distances d1 6= d2 6= · · · 6= di 6=
· · · 6= dI between the service RRHs and the user, and the i-th
distance has multiplicity of Ji , (A.4) can be rewritten as
1
J1
J2
JI ,
1
1
1
1− s
1− s
··· 1 −
s
λ1
λ2
λI
(A.5)
1
,
i
∈
{1,
2,
·
·
·
,
I}
is
the
i-th
pole
of
where λi = γ Kd
−α
0
i
multiplicity Ji of Mγ (s), using partial fraction expansion,
Mγ (s) can be expressed as
Mγ (s) =
Mγ (s) =
Ji
I X
X
i=1 j=1
Aij
j ,
1
1− s
λi
(A.6)
where {Aij } are the undetermined coefficients. Multiplying
(1− λ1i s)Ji to both sides of (A.6), then calculating the (Ji −j)th order derivate for both sides and let s = λi , we have
"
Ji #
dJi −j
1
Mγ (s) 1 − s
dsJi −j
λi
s=λi
=
dJi −j
dsJi −j
Ji
I X
X
i=1 j=1
Aij
j
1
1−
·s
λi
Ji
1
1− s
λi
s=λi
Ji −j
1
=(Ji − j)! −
Aij .
λi
A PPENDIX A
D ERIVATIONS OF (8) AND (9)
For a specific file Fl , the subscript of file index l and
the user’s location x0 are omitted without ambiguity. In (7),
|hn |2 ∼ χ2 (2), and the PDF is given by [52]
f|hn |2 (x) = exp(−x), x > 0.
Then the PDF of γn = γ0 Sn |hn |2 is
γ
1
exp −
, γ > 0, n ∈ Φ.
fγn (γ) =
γ0 Sn
γ0 Sn
(A.1)
(A.7)
Thus Aij is obtained as (10).
The PDF of γ can be obtained by inversely transforming
the MGF in (A.6). Considering a general form of the PDF,
f (γ) = γ n e−aγ ,
γ ≥ 0,
(A.8)
14
where n ∈ {0} ∪ Z+ , a ∈ R+ . The MGF of f (γ) can be
obtained by continuously using the method of integration by
parts.
=
=
=
=
M (s)
Z ∞
γ n e−aγ esγ dγ
0
Z ∞
1
−
γ n de−(a−s)γ
a−s 0
Z ∞
∞
1
−(a−s)γ n−1
n −(a−s)γ
e
γ
dγ
−
−n
γ e
a−s
0
0
..
.
n!
,
(A.9)
(a − s)n+1
and the ROC is Re (s) < a. Denote the pair of the PDF and
its corresponding MGF as
f (γ) = γ n e−aγ ⇐⇒ M (s) =
n!
.
(a − s)n+1
(A.10)
The CDF can be calculated in the same manner,
Z γ
F (γ) =
f (γ)dγ
Z0 γ
=
γ n e−aγ dγ
(A.11)
0"
!#
n
X
n!
1 n!
− e−aγ
γ n−k
.
=
n
a a
(n − k)! ak
k=0
According to (A.6) and (A.10), the PDF of the received
SNR is obtained, as shown in (8). According to (A.11), the
CDF of the received SNR is obtained as shown in (9).
A PPENDIX B
P ROOF OF P ROPOSITION 1
Without loss of generality, it is assumed that Mn =
M, ∀n ∈ N , |N |> 1. According to (6a), it is obvious that
MP C
the objective functions of the MPC and LCD schemes fobj
LCD
and fobj are linearly (thus monotonic) continuous function
of η on closed interval [0, 1].
PL
When η = 0, fobj = l=1 Pl Tl . The objective values of
the two schemes are
MP C
fobj
=
M
X
l=1
LCD
fobj
=
N
M
X
l=1
Pl Tl
↑
0
Pl Tl
↑
0
L
X
+
Pl Tl
↑
1
l=M +1
L
X
+
=
Pl ,
l=M +1
Pl Tl
l=N M +1
L
X
=
↑
1
L
X
Pl .
l=N M +1
(B.1)
PL
Note that
P
=
1
and
P
>
P
>
·
·
·
>
P
,
where
l
1
2
L
l=1
equality holds if and only if β = 0. Thus
MP C
fobj
η=0
LCD
> fobj
.
η=0
(B.2)
h
i
L
P
(l)
When η = 1, fobj =
Pl Ex0 Pout (x0 ) . Denoting
l=1
h
i
(l)
Ex0 Pout (x0 ) as Pcell,out (l), then
MP C
fobj
=
N
M
X
LCD
fobj
=
N
M
X
{z
}
C
LCD
Pl Pcell,out
(l)
l=1
|
MP C
Pl Pcell,out
(l) ,
l=N M +1
l=1
|
L
X
MP C
Pl Pcell,out
(l) +
{z
|
D
L
X
+
}
LCD
Pl Pcell,out
(l) .
l=N M +1
{z
}
E
{z
|
F
}
(B.3)
According to the wireless transmission strategy, D = F , where
D and F correspond to the scenario that all the RRHs serve
the user, while E > C because E denotes there is only one
RRH serving the user, while C corresponds to all the RRHs
serving the user. Thus
MP C
fobj
η=1
LCD
< fobj
.
η=1
(B.4)
MP C
According to (B.2), (B.4) and the linearity of fobj
and
LCD
fobj
, there exists a crossover point η0 ∈ [0, 1] of the two
objective functions. When η < η0 , the LCD scheme is superior
to the MPC scheme, while when η > η0 , the MPC scheme is
superior to the LCD scheme.
Substituting {al,n } of the MPC and LCD schemes into (6a),
respectively, a linear equation of η is formulated, and the
solution is shown as in (17). Because M = Mn , ∀n, (17)
can be further written as (18).
The proof can be extended to the case that Mn is different
with n.
R EFERENCES
[1] Z. Ye, C. Pan, H. Zhu, and J. Wang, “Outage probability and fronthaul
usage tradeoff caching strategy in Cloud-RAN,” in 2017 IEEE International Conference on Communications (ICC), May 2017, pp. 1–6.
[2] S. Buzzi, C. L. I, T. E. Klein, H. V. Poor, C. Yang, and A. Zappone, “A
survey of energy-efficient techniques for 5G networks and challenges
ahead,” IEEE Journal on Selected Areas in Communications, vol. 34,
no. 4, pp. 697–709, April 2016.
[3] A. Gupta and R. K. Jha, “A survey of 5G network: architecture and
emerging technologies,” IEEE Access, vol. 3, pp. 1206–1232, 2015.
[4] O. Galinina, A. Pyattaev, S. Andreev, M. Dohler, and Y. Koucheryavy,
“5G multi-RAT LTE-WiFi ultra-dense small cells: performance dynamics, architecture, and trends,” IEEE Journal on Selected Areas in
Communications, vol. 33, no. 6, pp. 1224–1240, June 2015.
[5] K. M. S. Huq, S. Mumtaz, J. Bachmatiuk, J. Rodriguez, X. Wang,
and R. L. Aguiar, “Green HetNet CoMP: energy efficiency analysis
and optimization,” IEEE Transactions on Vehicular Technology, vol. 64,
no. 10, pp. 4670–4683, October 2015.
[6] A. Checko, H. L. Christiansen, Y. Yan, L. Scolari, G. Kardaras, M. S.
Berger, and L. Dittmann, “Cloud RAN for mobile networks —a technology overview,” IEEE Communications Surveys & Tutorials, vol. 17,
no. 1, pp. 405–426, 2015.
[7] C. L. I, J. Huang, R. Duan, C. Cui, J. Jiang, and L. Li, “Recent progress
on C-RAN centralization and cloudification,” IEEE Access, vol. 2, pp.
1030–1039, 2014.
[8] H. Zhu, “Performance comparison between distributed antenna and
microcellular systems,” IEEE Journal on Selected Areas in Communications, vol. 29, no. 6, pp. 1151–1163, June 2011.
[9] J. Wang, H. Zhu, and N. J. Gomes, “Distributed antenna systems for
mobile communications in high speed trains,” IEEE Journal on Selected
Areas in Communications, vol. 30, no. 4, pp. 675–683, May 2012.
15
[10] H. Zhu and J. Wang, “Radio resource allocation in multiuser distributed
antenna systems,” IEEE Journal on Selected Areas in Communications,
vol. 31, no. 10, pp. 2058–2066, October 2013.
[11] C. Pan, H. Zhu, N. J. Gomes, and J. Wang, “Joint precoding and RRH
selection for user-centric green MIMO C-RAN,” IEEE Transactions on
Wireless Communications, vol. 16, no. 5, pp. 2891–2906, May 2017.
[12] ——, “Joint user selection and energy minimization for ultra-dense
multi-channel C-RAN with incomplete CSI,” IEEE Journal on Selected
Areas in Communications, vol. 35, no. 8, pp. 1809–1824, Aug 2017.
[13] S. C. Zhan and D. Niyato, “A coalition formation game for remote radio
head cooperation in cloud radio access network,” IEEE Transactions on
Vehicular Technology, vol. 66, no. 2, pp. 1723–1738, Feb 2017.
[14] R. Yu, J. Ding, X. Huang, M. T. Zhou, S. Gjessing, and Y. Zhang,
“Optimal resource sharing in 5G-enabled vehicular networks: A matrix
game approach,” IEEE Transactions on Vehicular Technology, vol. 65,
no. 10, pp. 7844–7856, Oct 2016.
[15] M. Jaber, M. A. Imran, R. Tafazolli, and A. Tukmanov, “5G backhaul
challenges and emerging research directions: a survey,” IEEE Access,
vol. 4, pp. 1743–1766, 2016.
[16] J. Liu, S. Xu, S. Zhou, and Z. Niu, “Redesigning fronthaul for nextgeneration networks: beyond baseband samples and point-to-point links,”
IEEE Wireless Communications, vol. 22, no. 5, pp. 90–97, October 2015.
[17] U. Siddique, H. Tabassum, E. Hossain, and D. I. Kim, “Wireless
backhauling of 5G small cells: challenges and solution approaches,”
IEEE Wireless Communications, vol. 22, no. 5, pp. 22–31, October 2015.
[18] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. M. Leung, “Cache
in the air: exploiting content caching and delivery techniques for 5G
systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139,
February 2014.
[19] K. Poularakis, G. Iosifidis, V. Sourlas, and L. Tassiulas, “Exploiting
caching and multicast for 5G wireless networks,” IEEE Transactions on
Wireless Communications, vol. 15, no. 4, pp. 2995–3007, April 2016.
[20] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,”
IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2856–
2867, May 2014.
[21] J. Duan, X. Lagrange, and F. Guilloud, “Performance analysis of several
functional splits in C-RAN,” in 2016 IEEE 83rd Vehicular Technology
Conference (VTC Spring), May 2016, pp. 1–5.
[22] X. Peng, J. C. Shen, J. Zhang, and K. B. Letaief, “Joint data assignment
and beamforming for backhaul limited caching networks,” in 2014 IEEE
25th Annual International Symposium on Personal, Indoor, and Mobile
Radio Communication (PIMRC), September 2014, pp. 1370–1374.
[23] M. Tao, E. Chen, H. Zhou, and W. Yu, “Content-centric sparse multicast
beamforming for cache-enabled Cloud RAN,” IEEE Transactions on
Wireless Communications, vol. 15, no. 9, pp. 6118–6131, Sept. 2016.
[24] F. Pantisano, M. Bennis, W. Saad, and M. Debbah, “Match to cache:
joint user association and backhaul allocation in cache-aware small cell
networks,” in 2015 IEEE International Conference on Communications
(ICC), June 2015, pp. 3082–3087.
[25] H. Hsu and K. C. Chen, “A resource allocation perspective on caching
to achieve low latency,” IEEE Communications Letters, vol. 20, no. 1,
pp. 145–148, January 2016.
[26] X. Li, X. Wang, S. Xiao, and V. C. M. Leung, “Delay performance
analysis of cooperative cell caching in future mobile networks,” in 2015
IEEE International Conference on Communications (ICC), June 2015,
pp. 5652–5657.
[27] Y. Zhou, Z. Zhao, R. Li, H. Zhang, and Y. Louet, “Cooperation-based
probabilistic caching strategy in clustered cellular networks,” IEEE
Communications Letters, vol. 21, no. 9, pp. 2029–2032, Sept 2017.
[28] J. Liao, K. K. Wong, M. R. A. Khandaker, and Z. Zheng, “Optimizing
cache placement for heterogeneous small cell networks,” IEEE Communications Letters, vol. 21, no. 1, pp. 120–123, Jan 2017.
[29] S. Wang, X. Zhang, K. Yang, L. Wang, and W. Wang, “Distributed
edge caching scheme considering the tradeoff between the diversity
and redundancy of cached content,” in 2015 IEEE/CIC International
Conference on Communications in China (ICCC), November 2015, pp.
1–5.
[30] Q. Li, W. Shi, X. Ge, and Z. Niu, “Cooperative edge caching in softwaredefined hyper-cellular networks,” IEEE Journal on Selected Areas in
Communications, vol. 35, no. 11, pp. 2596–2605, Nov 2017.
[31] J. Song, H. Song, and W. Choi, “Optimal caching placement of caching
system with helpers,” in 2015 IEEE International Conference on Communications (ICC), June 2015, pp. 1825–1830.
[32] X. Peng, J. C. Shen, J. Zhang, and K. B. Letaief, “Backhaul-aware
caching placement for wireless networks,” in 2015 IEEE Global Communications Conference (GLOBECOM), December 2015, pp. 1–6.
[33] Z. Yan, S. Chen, Y. Ou, and H. Liu, “Energy efficiency analysis of
cache-enabled two-tier HetNets under different spectrum deployment
strategies,” IEEE Access, vol. 5, pp. 6791–6800, 2017.
[34] Y. Chen, M. Ding, J. Li, Z. Lin, G. Mao, and L. Hanzo, “Probabilistic small-cell caching: Performance analysis and optimization,” IEEE
Transactions on Vehicular Technology, vol. 66, no. 5, pp. 4341–4354,
May 2017.
[35] J. Wen, K. Huang, S. Yang, and V. O. K. Li, “Cache-enabled heterogeneous cellular networks: Optimal tier-level content placement,” IEEE
Transactions on Wireless Communications, vol. 16, no. 9, pp. 5939–
5952, Sept 2017.
[36] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, “Web caching
and Zipf-like distributions: evidence and implications,” in Eighteenth
Annual Joint Conference of the IEEE Computer and Communications
Societies (INFOCOM ’99), vol. 1, March 1999, pp. 126–134.
[37] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and
G. Caire, “Femtocaching: Wireless content delivery through distributed
caching helpers,” IEEE Transactions on Information Theory, vol. 59,
no. 12, pp. 8402–8413, Dec 2013.
[38] H. Zhu and J. Wang, “Chunk-based resource allocation in OFDMA
systems —part I: chunk allocation,” IEEE Transactions on Communications, vol. 57, no. 9, pp. 2734–2744, September 2009.
[39] ——, “Chunk-based resource allocation in OFDMA systems —part II:
joint chunk, power and bit allocation,” IEEE Transactions on Communications, vol. 60, no. 2, pp. 499–509, February 2012.
[40] H. Zhu, “Radio resource allocation for OFDMA systems in high speed
environments,” IEEE Journal on Selected Areas in Communications,
vol. 30, no. 4, pp. 748–759, May 2012.
[41] M. Ehrgott, Multicriteria Optimazation, 2nd ed. Springer, 2005.
[42] J. Y. Wang, J. B. Wang, M. Chen, H. M. Chen, X. Dang, and H. Y.
Li, “System outage probability analysis of uplink distributed antenna
systems over a composite channel,” in 2011 IEEE 73rd Vehicular
Technology Conference (VTC Spring), May 2011, pp. 1–5.
[43] R. L. Burden and J. D. Faires, Numerical Analysis, 9th ed. Brooks/Cole,
Cengage Learning, 2011.
[44] M. Srinivas and L. M. Patnaik, “Genetic algorithms: a survey,” Computer, vol. 27, no. 6, pp. 17–26, June 1994.
[45] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA,
USA: MIT Press, 1998.
[46] H. Ahlehagh and S. Dey, “Video-aware scheduling and caching in the
radio access network,” IEEE/ACM Transactions on Networking, vol. 22,
no. 5, pp. 1444–1462, Oct 2014.
[47] N. Golrezaei, P. Mansourifard, A. F. Molisch, and A. G. Dimakis, “Basestation assisted device-to-device communications for high-throughput
wireless video networks,” IEEE Transactions on Wireless Communications, vol. 13, no. 7, pp. 3665–3676, July 2014.
[48] E. Park, S. R. Lee, and I. Lee, “Antenna placement optimization for
distributed antenna systems,” IEEE Transactions on Wireless Communications, vol. 11, no. 7, pp. 2468–2477, July 2012.
[49] H. Kim, S. R. Lee, K. J. Lee, and I. Lee, “Transmission schemes based
on sum rate analysis in distributed antenna systems,” IEEE Transactions
on Wireless Communications, vol. 11, no. 3, pp. 1201–1209, March
2012.
[50] J. Zhang and J. G. Andrews, “Distributed antenna systems with randomness,” IEEE Transactions on Wireless Communications, vol. 7, no. 9, pp.
3636–3646, September 2008.
[51] A. Alameer and A. Sezgin, “Resource cost balancing with caching
in C-RAN,” in 2017 IEEE Wireless Communications and Networking
Conference (WCNC), March 2017, pp. 1–6.
[52] J. G. Proakis and M. Salehi, Digital Communications, 5th ed. London:
McGraw-Hill, 2008.
[53] M. K. Simon and M.-S. Alouini, Digital Communication over Fading
Channels. Hoboken: John Wiley & Sons, 2005.
| 7 |
VIBRATO AND AUTOMATIC DIFFERENTIATION FOR HIGH
ORDER DERIVATIVES AND SENSITIVITIES OF FINANCIAL
OPTIONS
arXiv:1606.06143v1 [q-fin.CP] 20 Jun 2016
GILLES PAGÈS
∗,
OLIVIER PIRONNEAU
† , AND
GUILLAUME SALL
‡
Abstract. This paper deals with the computation of second or higher order greeks of financial
securities. It combines two methods, Vibrato and automatic differentiation and compares with other
methods. We show that this combined technique is faster than standard finite difference, more
stable than automatic differentiation of second order derivatives and more general than Malliavin
Calculus. We present a generic framework to compute any greeks and present several applications
on different types of financial contracts: European and American options, multidimensional Basket
Call and stochastic volatility models such as Heston’s model. We give also an algorithm to compute
derivatives for the Longstaff-Schwartz Monte Carlo method for American options. We also extend
automatic differentiation for second order derivatives of options with non-twice differentiable payoff.
Key words. Financial securities, risk assessment, greeks, Monte-Carlo, automatic differentiation, vibrato.
AMS subject classifications. 37M25, 65N99
1. Introduction. Due to BASEL III regulations, banks are requested to evaluate the sensitivities of their portfolios every day (risk assessment). Some of these
portfolios are huge and sensitivities are time consuming to compute accurately. Faced
with the problem of building a software for this task and distrusting automatic differentiation for non-differentiable functions, we turned to an idea developed by Mike
Giles called Vibrato.
Vibrato at core is a differentiation of a combination of likelihood ratio method
and pathwise evaluation. In Giles [12], [13], it is shown that the computing time,
stability and precision are enhanced compared with numerical differentiation of the
full Monte Carlo path.
In many cases, double sensitivities, i.e. second derivatives with respect to parameters, are needed (e.g. gamma hedging).
Finite difference approximation of sensitivities is a very simple method but its
precision is hard to control because it relies on the appropriate choice of the increment. Automatic differentiation of computer programs bypass the difficulty and its
computing cost is similar to finite difference, if not cheaper. But in finance the payoff
is never twice differentiable and so generalized derivatives have to be used requiring
approximations of Dirac functions of which the precision is also doubtful.
The purpose of this paper is to investigate the feasibility of Vibrato for second
and higher derivatives. We will first compare Vibrato applied twice with the analytic
differentiation of Vibrato and show that it is equivalent; as the second is easier we
propose the best compromise for second derivatives: Automatic Differentiation of
Vibrato.
In [8], Capriotti has recently investigated the coupling of different mathematical
methods – namely pathwise and likelihood ratio methods – with an Automatic differ∗ Laboratoire de Probabilités et Modèles Aléatoires, UMR 7599, UPMC, Case 188, 4 pl. de Jussieu,
F-75252 Paris Cedex 5, France, [email protected].
† Laboratoire Jacques Louis Lions, UMR 7598, Case 187, 4 pl. de Jussieu, F-75252 Paris Cedex
5, France, [email protected].
‡ Laboratoire de Probabilités et Modèles Aléatoires, UMR 7599, UPMC, Case 188, 4 pl. de Jussieu,
F-75252 Paris Cedex 5, France, [email protected].
1
2
entiation technique for the computation of the second order greeks; here we follow the
same idea but with Vibrato and also for the computation of higher order derivatives.
Automatic Differentiation (AD) of computer program as described by Greiwank
in [19], [20], Naumann in [33] and Hascoet in [22] can be used in direct or reverse mode.
In direct mode the computing cost is similar to finite difference but with no roundoff
errors on the results: the method is exact because every line of the computer program
which implements the financial option is differentiated exactly. The computing cost
of a first derivative is similar to running the program twice.
Unfortunately, for many financial products the first or the second sensitivities do
not exist at some point, such is the case for the standard Digital option at x = K;
even the payofff of the a plain vanilla European option is not twice differentiatble at
x = K, yet the Gamma is well defined due to the regularizing effect of the Brownian
motion (or the heat kernel) which gives sense to the expectation of a Dirac as a
pointwise value of a probability density; in short the end result is well defined but the
intermediate steps of AD are not.
We tested ADOL-C [21] and tried to compute the Hessian matrix for a standard
European Call option in the Black-Scholes model but the results were wrong. So we
adapted our AD library based on operator overloading by including approximations
of Dirac functions and obtained decent results; this is the second conclusion of the
paper: AD for second sensitivities can be made to work; it is simpler than Vibrato+AD
(VAD) but it is risky and slightly more computer intensive.
More details on AD can be found in Giles et al. [11], Pironneau [35], Capriotti
[7], Homescu [26] and the references therein.
An important constraint when designing costly software for risk assessment is to
be compatible with the history of the company which contracts the software; most of
the time, this rules out the use of partial differential equations (see [1]) as most quant
companies use Monte Carlo algorithms for pricing their portfolios.
For security derivatives computed by a Monte Carlo method, the computation
of their sensitivities with respect to a parameter is most easily approximated by
finite difference (also known as the shock method) thus requiring the reevaluation
of the security with an incremented parameter. There are two problems with this
method: it is imprecise when generalized to higher order derivatives and expensive
for multidimensional problems with multiple parameters. The nth derivative of a
security with p parameters requires (n + 1)p evaluations; furthermore the choice of
the perturbation parameter is tricky.
From a semi-analytical standpoint the most natural way to compute a sensitivity
is the pathwise method described in Glasserman [15] which amounts to compute
the derivative of the payoff for each simulation path. Unfortunately, this technique
happens to be inefficient for certain types of payoffs including some often used in
quantitative finance like Digitals or Barrier options. For instance, as it is not possible
to obtain the Delta of a Digital Call that way (the derivative of the expectation of a
Digital payoff is not equal to the expectation of the derivative of the Digital payoff,
which in fact does not exist as a function), the pathwise method cannot evaluate the
Gamma of a Call option in a standard Black-Scholes model. The pathwise derivative
estimation is also called infinitesimal perturbation and there is a extensive literature
on this subject; see for example Ho et al. [24], in Suri et al. [39] and in L’Ecuyer [28].
A general framework for some applications to option pricing is given in Glasserman
[14].
There are also two well known mathematical methods to obtain sensibilities, the
3
so-called log-likelihood ratio method and the Malliavin calculus. However, like the
pathwise method, both have their own advantage and drawback. For the former,
the method consists in differentiating the probability density of the underlying and
clearly, it is not possible to compute greeks if the probability density of the underlying
is not known. Yet, the method has a great advantage in that the probability densities
are generally smooth functions of their parameters, even when payoff functions are
not. This method has been developed primarily in Glynn [17], Reiman et al. [36],
Rubinstein [37] and some financial applications in Broadie et al. [5] and Glasserman
et al. [16].
As for the Malliavin calculus, the computation of the greeks consists in writing
the expectation of the orignal payoff function times a specific factor i.e. the Malliavin
weight which is a Skorohod integral, the adjoint operator of the Malliavin derivative.
The main problem of this method is that the computation of the Malliavin weight can
be complex and/or computationally costly for a high dimensional problem. Several
articles deal with the computation of greeks via Malliavin calculus, Fournié et al.
[10], Benhamou [2] and Gobet et al. [18] to cite a few. The precision of the Malliavin
formulae also degenerates for short maturities, especially for the ∆-hedge.
Both the likelihood ratio and the Malliavin calculus are generally faster than the
pathwise or finite difference method because, once the terms in front of the payoff
function (the weight is computed analytically), the approximation of a greek in a
one-dimensional case is almost equivalent to the cost of the evaluation of the pricing
function. One systematic drawback is the implementation of these method in the
financial industry is limited by the specific analysis required by each new payoff.
The paper is organized as follows; in section 2 we begin by recalling the Vibrato
method for first order derivatives as in Giles [12] for the univariate and the multivariate
case. We then generalize the method for the second and higher order derivatives with
respect to one or several parameters and we describe the coupling to an analytical or
Automatic differentiation method to obtain an additional order of differentiation.
In section 3, we recall briefly the different methods of Automatic differentiation.
We describe the direct and the adjoint or reverse mode to differentiate a computer
program. We also explain how it can be extended to some non differentiable functions.
Section 4 deals with several applications to different derivative securities. We
show some results of second order derivatives (Gamma and Vanna) and third order
derivatives in the case of a standard European Call option: the sensitivity of the
Gamma with respect to changes in the underlying asset and a cross-derivatives with
respect to the underlying asset, the volatility and the interest rate. Also, we compare
different technique of Automatic differentiation and we give some details about our
computer implementations.
In section 5 we study some path-dependent products; we apply the combined
Vibrato plus Automatic differentiation method to the computation of the Gamma for
an American Put option computed with the Longstaff Schwartz algorithm [31]. We
also illustrate the method on a multidimensional Basket option (section 4) and on a
European Call with Heston’s model in section 6. In section 7, we study the computing
time for the evaluation of the Hessian matrix of a standard European Call Option in
the Black-Scholes model. Finally, in section 8 we compare VADs to Malliavin’s and
to the likelihood ratio method in the context of short maturities.
2. Vibrato. Vibrato was introduced by Giles in [12]; it is based on a reformulation of the payoff which is better suited to differentiation. The Monte Carlo path is
4
split into the last time step and its past. Let us explain the method on a plain vanilla
multi-dimensional option.
First, let us recall the likelihood ratio method for derivatives.
Let the parameter set Θ be a subset of Rp . Let b : Θ×Rd → Rd , σ : Θ×Rd → Rd×q be
continuous functions, locally Lipstchitz in the space variable, with linear growth, both
uniformly in θ ∈ Θ. We omit time as variable in both b and σ only for simplicity. And
let (Wt )t≥0 be a q-dimensional standard Brownian motion defined on a probability
space (Ω, A, P).
Lemma 2.1. (Log-likelihood ratio)
Let p(θ, ·) be the probability density of a random variable X(θ), which is function
of θ; consider
Z
V (y)p(θ, y)dy.
(2.1)
E[V (X(θ))] =
Rd
If θ 7→ p(θ, ·) is differentiable at θ0 ∈ Θ for all y, then, under a standard domination or
a uniform integrability assumption one can interchange differentiation and integration
: for i = 1, .., p,
i
∂ h
E[V (X(θ))] 0 =
∂θi
|θ
∂ log p
∂ log p 0
0
(θ , y)p(θ , y)dy = E V (X(θ))
(θ, X(θ))
V (y)
.
∂θi
∂θi
Rd
|θ 0
(2.2)
Z
2.1. Vibrato for a European Contract. Let X = (Xt )t∈[0,T ] be a diffusion
process, the strong solution of the following Stochastic Differential Equation (SDE)
dXt = b (θ, Xt ) dt + σ(θ, Xt )dWt , X0 = x.
(2.3)
For simplicity and without loss of generality, we assume that q = d; so σ is a square
matrix. Obviously, Xt depends on θ; for clarity, we write Xt (θ) when the context
requires it.
Given an integer n > 0, the Euler scheme with constant step h = Tn , defined below in (2.3), approximates Xt at time tnk = kh , i.e. X̄kn ≈ Xkh , and it is recursively
defined by
√
n
n
n
X̄kn = X̄k−1
+ b(θ, X̄k−1
)h + σ(θ, X̄k−1
) hZk , X̄0n = x, k = 1, . . . , n,
(2.4)
where {Zk }k=1,..,n are independent random Gaussian N (0, Id ) vectors. The relation
between W and Z is
√
Wtnk − Wtnk−1 = hZk .
(2.5)
√
Note that X̄nn = µn−1 (θ) + σn−1 (θ) hZn with
√
n
n
n
(2.6)
µn−1 (θ) = X̄n−1
(θ) + b(θ, X̄n−1
(θ))h and σn−1 (θ) = σ(θ, X̄n−1
(θ)) h.
Then, for any Borel function V : Rd → R such that E|V (X̄nn (θ))| < +∞,
n
.
E V (X̄nn (θ)) = E E V (X̄nn (θ)) | (Wtnk )k=0,...,n−1 = E E V (X̄nn (θ)) | X̄n−1
(2.7)
5
This follows from the obvious fact that the Euler scheme defines a Markov chain X̄
with respect to the filtration Fk = σ(Wtnℓ , ℓ = 0, . . . , k).
Furthermore, by homogeneity of the chain,
n
o
√
n
= Ex V (X̄1n (x, θ)) |x=X̄ n = E[V (µ + σ hZ)] µ = µn−1 (θ) .
E V (X̄nn (θ)) | X̄n−1
n−1
σ = σn−1 (θ)
(2.8)
Where X̄1n (x, θ) denotes the value at time tn1 of the Euler scheme with k = 1, starting
at x and where the last expectation is with respect to Z.
h
i
√
2.2. First Order Vibrato. We denote ϕ(µ, σ) = E V (µ + σ hZ) . From
(2.7) and (2.8), for any i ∈ (1, . . . , p)
#
"
o
√
∂
∂ϕ
∂ n
n
E[V (X̄n (θ))] = E
(µn−1 (θ), σn−1 (θ))
E[V (µ + σ hZ)] µ = µn−1 (θ) = E
∂θi
∂θi
∂θi
σ = σn−1 (θ)
(2.9)
and
∂µn−1 ∂ϕ
∂σn−1 ∂ϕ
∂ϕ
(µn−1 , σn−1 ) =
·
:
(µn−1 , σn−1 ) +
(µn−1 , σn−1 ) (2.10)
∂θi
∂θi
∂µ
∂θi
∂σ
where · denotes the scalar product and : denotes the trace of the product of the
matrices.
Lemma 2.2.
∂Xt
The θi -tangent process to X, Yt =
, is defined as the solution of the following
∂θi
SDE (see Kunita[27] for a proof )
∂X0
dYt = b′θi (θ, Xt ) + b′x (θ, Xt )Yt dt + σθ′ i (θ, Xt ) + σx′ (θ, Xt )Yt dWt , Y0 =
∂θi
(2.11)
where the primes denote standard derivatives. As for X̄kn in (2.3), we may discretize
(2.11) by
√
n
Ȳk+1
= Ȳkn + b′θi (θ, X̄kn ) + b′x (θ, X̄kn )Ȳkn h + σθ′ i (θ, X̄kn ) + σx′ (θ, X̄kn )Ȳkn hZ(2.12)
k+1 .
Then from (2.6),
∂µn−1
n
n
n
n
(θ)) + b′x (θ, X̄n−1
(θ))Ȳn−1
(θ)
= Ȳn−1
(θ) + h b′θi (θ, X̄n−1
∂θi
√
∂σn−1
n
n
n
(θ)) + σx′ (θ, X̄n−1
(θ))Ȳn−1
(θ) .
= h σθ′ i (θ, X̄n−1
∂θi
So far we have shown the following lemma.
Lemma 2.3. When Xnn (θ) is given by (2.3), then
(2.13)
∂
E[V (X̄nn (θ))] is given by
∂θi
(2.9) with (2.10), (2.13) and (2.12).
In (2.3) b and σ are constant in the time interval (kh, (k + 1)h), therefore the
n
conditional probability of X̄nn given X̄n−1
given by
T −1
1
1
p e− 2 (x−µ) Σ (x−µ)
p(x) = √
d
( 2π) |Σ|
(2.14)
6
where µ and Σ = hσσ T are evaluated at time (n − 1)h and given by (2.6). As in
Dwyer et al. [9],
∂
∂
1
1
log p(x) = Σ−1 (x − µ),
log p(x) = − Σ−1 + Σ−1 (x − µ)(x − µ)T Σ−1 ⇒
∂µ
∂Σ
2
2
Z
1
∂
∂
log p(x)|x=Xnn = σ −T √ ,
log p(x)|x=Xnn =
σ −T (ZZ T − I)σ −1 .
∂µ
2h
h ∂Σ
Finally, applying Lemma 2.3 and Lemma 2.1 yields the following proposition
Theorem 2.4. (Vibrato, multidimensional first order case)
"
#
o
√
∂ n
∂
n
E[V (µ + σ hZ)] µ = µn−1 (θ)
E[V (X̄n (θ))] = E
∂θi
∂θi
σ = σn−1 (θ)
i
h
√
1 ∂µ
· E V (µ + σ hZ)σ −T Z µ = µn−1 (θ)
=E √
σ = σn−1 (θ)
h ∂θi
+
i
h
√
1 ∂Σ
: E V (µ + σ hZ)σ −T (ZZ T − I)σ −1
2h ∂θi
µ = µn−1 (θ)
σ = σn−1 (θ)
.
(2.15)
2.3. Antithetic Vibrato. One can expect to improve the above formula – that
is, reducing its variance – by the means of antithetic transform (see section 2.6 below
for a short discussion) The following holds:
i 1 h
i
h
√
√
√
E V (µ + σ hZ)σ −T Z = E V (µ + σ hZ) − V (µ − σ hZ) σ −T Z .
2
(2.16)
similarly, using E[ZZ T − I] = 0,
h
i
√
E V (µ + σ hZ)σ −T (ZZ T − I)σ −1
i
√
√
1 h
= E V (µ + σ hZ) − 2V (µ) + V (µ − σ hZ) σ −T (ZZ T − I)σ −1 . (2.17)
2
Corollary 2.5. (One dimensional case, d=1)
Z
√
√
1
∂µ
∂
n
E[V (X̄n (θ))] = E
E V (µ + σ hZ) − V (µ − σ hZ) √
µ = µn−1 (θ)
∂θi
2
∂θi
σ = σn−1 (θ)
σ h
2
√
√
∂σ
Z −1
(2.18)
√
E V (µ + σ hZ) − 2V (µ) + V (µ − σ hZ)
+
µ = µn−1 (θ)
∂θi
σ h
σ = σn−1 (θ)
Conceptual Algorithm. In figure 1 we have illustrated the Vibrato decomposition
at the path level. To implement the above one must perform the following steps:
1. Choose the number of time step n, the number of Monte-Carlo path M for
the n − 1 first time steps, the number MZ of replication variable Z for the
last time step.
2. For each Monte-Carlo path j = 1..M
• Compute {Xkn }k=1:n−1 , µn−1 , σn−1 by (2.3), (2.6).
7
1.5
1.4
1.3
1.2
1.1
1
0.9
0.8
0.7
0.6
First part of the path (n-1 th terms of the Euler scheme)
Last term
0.5
0
0.2
0.4
0.6
0.8
1
Figure 1. Scheme of simulation path of the Vibrato decomposition.
• Compute V (µn−1 )
∂σn−1
∂µn−1
and
by (2.11), (2.13) and (2.12)
• Compute
∂θi
∂θi
• Replicate MZ times the last time step, i.e.
For mZ ∈ (1, . . . , MZ )
√
√
– Compute V (µn−1 + σn−1 hZ (mZ ) ) and V (µn−1 − σn−1 hZ (mZ ) )
3. In (2.18) compute the inner expected value by averaging over all MZ results,
∂µ
∂σ
and ∂θ
and then average over the M paths.
then multiply by ∂θ
i
i
Remark 1. For simple cases such as of the sensibilities of European options, a
small MZ suffices; this is because there is another average with respect to M in the
outer loop.
Remark 2. For European options one may also use the Black-Scholes formula
for the expected value in (2.15).
2.4. Second Derivatives. Assume that X0 , b and σ depend on two parameters
(θ1 , θ2 ) ∈ Θ2 . There are two ways to compute second order derivatives. Either by
differentiating the Vibrato (2.15) while using Lemma 2.1 or by applying the Vibrato
idea to the second derivative.
2.4.1. Second Derivatives by Differentiation of Vibrato. Let us differentiate (2.15) with respect to a second parameter θj :
h
i
√
1 ∂2µ
∂2
E[V (XT )] = E √
· E V (µ + σ hZ)σ −T Z
∂θi ∂θj
h ∂θi ∂θj
i
√
∂µ ∂ h
·
E V (µ + σ hZ)σ −T Z
+
µ = µn−1 (θ)
∂θi ∂θj
σ = σn−1 (θ)
i
h
2
√
∂ Σ
1
: E V (µ + σ hZ)σ −T (ZZ T − I)σ −1
+
2h ∂θi ∂θj
i
√
∂Σ
∂ h
+
:
E V (µ + σ hZ)σ −T (ZZ T − I)σ −1
∂θi ∂θj
µ = µn−1 (θ)
σ = σn−1 (θ)
(2.19)
The derivatives can be expanded further; for instance in the one dimensional case and
after a tedious algebra one obtains:
8
Theorem 2.6. (Second Order by Differentiation of Vibrato)
"
2
√
√
∂2µ
∂2
∂µ
Z
Z2 − 1
√
E[V
(X
)]
=
E
E
V
(µ
+
σ
E
V
(µ
+
σ
+
hZ)
hZ)
T
∂θ2
∂θ2
∂θ
σ2h
σ h
2
4
2
√
∂σ
Z − 5Z + 2
+
E V (µ + σ hZ)
2
∂θ
σ h
√
√
∂2σ
Z2 − 1
Z 3 − 3Z
∂µ ∂σ
+ 2 E V (µ + σ hZ) √
E V (µ + σ hZ)
(2.20)
+2
∂θ
∂θ ∂θ
σ2h
σ h
2.4.2. Second Derivatives by Second Order Vibrato. The same Vibrato
strategy can be applied also directly to second derivatives.
As before the derivatives are transfered to the PDF p of XT :
Z
Z
V (x) ∂ 2 p
∂ 2 ln p
∂ ln p ∂ ln p
∂2
V (x)[
E[V (XT )]=
p(x)dx =
+
]p(x)dx
∂θi ∂θj
∂θi ∂θj
Rd p(x)∂θi ∂θj
Rd ∂θi ∂θj
∂ 2 ln p
∂ ln p ∂ ln p
= E V (x)
(2.21)
+
∂θi ∂θj
∂θi ∂θj
Then
∂2ϕ
∂2
E[V (X̄Tn (θ1 , θ2 ))] =
(µ, σ)
∂θ1 ∂θ2
∂θ1 ∂θ2
∂σ ∂σ ∂ 2 ϕ
∂ 2 µ ∂ϕ
∂µ ∂µ ∂ 2 ϕ
(µ,
σ)
+
(µ,
σ)
+
(µ, σ)
=
∂θ1 ∂θ2 ∂µ2
∂θ1 ∂θ2 ∂σ 2
∂θ1 ∂θ2 ∂µ
2
2
∂σ ∂µ
∂ σ ∂ϕ
∂ ϕ
∂µ ∂σ
+
(µ, σ) +
(µ, σ).
+
∂θ1 ∂θ2 ∂σ
∂θ1 ∂θ2
∂θ1 ∂θ2 ∂µ∂σ
∂2
∂2
µn−1 (θ1 , θ2 ) and
σn−1 (θ1 , θ2 ).
∂θ1 ∂θ2
∂θ1 ∂θ2
It requires the computation of the first derivative with respect to θi of the tangent
(2)
process Yt , that we denote Yt (θ1 , θ2 ).
Then (2.13) is differentiated and an elementary though tedious computations
yields the following proposition:
Proposition 2.7.
(i)
The θi -tangent process Yt defined above in Lemma 2.11 has a θj -tangent process
(ij)
Yt
defined by
h
(j)
(i)
(ij)
dYt
= b′′θi θj (θ1 , θ2 , Xt ) + b′′θi ,x (θ1 , θ2 , Xt )Yt + b′′θj ,x (θ1 , θ2 , Xt )Yt
i
(i) (j)
(ij)
dt
+b′′x2 (θ1 , θ2 , Xt )Yt Yt + b′x (θ1 , θ2 , Xt )Yt
h
(j)
(i)
+ σθ′′i θj (θ1 , θ2 , Xt ) + σθ′′i ,x (θ1 , θ2 , Xt )Yt + σθ′′j ,x (θ1 , θ2 , Xt )Yt
i
(ij)
(i) (j)
dWt .
+σx′′2 (θ1 , θ2 , Xt )Yt Yt + σx′ (θ1 , θ2 , Xt )Yt
We need to calculate the two new terms
Finally in the univariate case θ = θ1 = θ2 this gives
Proposition 2.8. (Second Order Vibrato)
∂2
E[V (XT )] =
∂θ2
9
"
2 h
2
√
√
√
∂2µ
Z
Z2 − 1
∂σ
∂µ
√
E
E V (µ + σ hZ)
+
E V (µ + σ hZ) 2
E V (µ + σ hZ)
+
2
∂θ
∂θ
σ h
∂θ
σ h
√
√
∂2σ
Z2 − 1
Z 3 − 3Z
Z 4 − 5Z 2 + 2
∂µ ∂σ
√
+
E
V
(µ
+
σ
hZ)
E
V
(µ
+
σ
hZ)
(2.22)
+
2
σ2h
∂θ2
∂θ ∂θ
σ2 h
σ h
Remark 3. It is equivalent to Proposition 2.6 hence to the direct differentiation
of Vibrato.
2.5. Higher Order Vibrato. The Vibrato-AD method can be generalized to
higher order of differentiation of Vibrato with respect to the parameter θ with the
help of the Faà di Bruno formula and its generalization to a composite function with
a vector argument, as given in Mishkov [32].
2.6. Antithetic Transform, Regularity and Variance. In this section, we
assume d = q = 1 for simplicity.
√
Starting from Vibrato ϕ(µ, σ) = E[f (µ + σ hZ)] and assuming f Lipschitz continuous with Lipschitz coefficients [f ]Lip , we have
√
√ Z
√
∂ϕ
Z
√
√
=
E
f
(µ
+
σ
. (2.23)
hZ) − f (µ − σZ h)
(µ, σ) = E f (µ + σ hZ)
∂µ
σ h
2σ h
Therefore the variance satisfies
Var
√
f (µ + σ hZ) − f (µ − σ hZ)
√
"
#
Z 2
√
√
Z
√
√
f (µ + σ hZ) − f (µ − σ hZ)
≤E
2σ h
2σ h
"
#
√
(2σ hZ)2 2
≤ [f ]2Lip E
Z = [f ]2Lip E[Z 4 ] = 3[f ]2Lip .
4σ 2 h
As E[Z] = 0, we also have
Z
√
∂ϕ
√ .
(µ, σ) = E f (µ + σ hZ) − f (µ)
∂µ
σ h
(2.24)
Then,
Var
#
"
Z
Z 2
√
√
√
√
f (µ + σ hZ) − f (µ)
≤E
f (µ + σ hZ) − f (µ)
σ h
σ h
h √
i
1
2
2 2
≤ 2 [f ]Lip E (σ hZ) Z = [f ]2Lip E[Z 4 ] = 3[f ]2Lip
σ h
Remark 4. The variances of formulae (2.23) and (2.24) are equivalent but the
latter is less expensive to compute.
If f is differentiable and f ′ has polynomial
growth, we also have
√
∂ϕ
(µ, σ) = E[f ′ (µ + σ hZ)].
∂µ
(2.25)
Thus,
h
i
2
√
√
Var f ′ (µ + σ hZ) ≤ E f ′ (µ + σ hZ)
≤kf ′ k2∞ .
Remark 5. Let f ]Lip denote the Lipschitz constant of f . If f ′ is bounded, we
have [f ]Lip = kf ′ k∞ then the expression in (2.25) has a smaller variance than (2.23)
10
and (2.24). Assume that f ′ is Lipschitz continuous with Lipschitz coefficients [f ′ ]Lip .
We can improve the efficiency of (2.25) because
h
i
h
i
√
√
Var f ′ (µ + σ hZ) = Var f ′ (µ + σ hZ) − f ′ (µ)
2
√
′
′
≤ [f ′ ]2Lip hσ 2 E[Z 2 ] ≤ [f ′ ]Lip hσ 2
≤ E f (µ + σ hZ) − f (µ)
Remark 6. Assuming that f (x) = 1{x≤K} , clearly we cannot differentiate inside
the expectation and the estimation of the variance seen previously can not be applied.
2.6.1. Indicator Function. Let us assume that f (x) = 1{x≤K} . To simplify
assume that K ≤ µ, we have
√
√
h
o − 1n
o = 1n
io ,
f (µ + σ hZ) − f (µ − σ hZ) = 1nZ≤ K−µ
µ−K
√
√
√ , √
Z∈
/ K−µ
Z≥ µ−K
σ
h
σ
h
σ
h
σ
h
hence
Z
√
√
1
io .
f (µ + σ hZ) − f (µ − σ hZ) √ = √ |Z|1nZ ∈/ h K−µ
µ−K
√ , √
σ h σ h
σ h
σ h
For the variance, we have
√
Z
√
√
Var f (µ + σ hZ) − f (µ − σ hZ)
σ h #
"
2
√
√
Z
√
.
≤E
f (µ + σ hZ) − f (µ − σ hZ)
σ h
By Cauchy-Schwarz we can write
#
"
Z 2
√
√
√
E
f (µ + σ hZ) − f (µ − σ hZ)
σ h
√
√
2
1
1
2
2 n
io
h
=
hZ)
−
f
(µ
−
σ
hZ)
=
E
Z
E
Z
1
f
(µ
+
σ
√ , µ−K
√
Z∈
/ K−µ
2σ 2 h
2σ 2 h
σ h σ h
√
1
1
2
2
1
1
µ−K
3
K −µ µ−K
4 2
√ , √
≤
2P Z ≥ √
E[Z ]
≤
.
P Z∈
/
2h
2σ 2 h
2σ
σ h σ h
σ h
Then
√
3
2σ 2 h
√
1
2
µ−K
6
=
2P Z ≥ √
2h
2σ
σ h
Z
+∞
µ−K
√
σ h
2
e
− u2
du
√
2π
!1
2
.
a2
e− 2
, so when a → +∞,
Now, ∀ a > 0, P(Z ≥ a) ≤ √
a 2π
Var
≤
r
(µ−K)2
−
Z
√
1
3 e 4σ2 h
q
√
≤ 2
f (µ + σ hZ) − f (µ − σ hZ)
σ h 2 (2π) 41 µ−K
σ h
√
√
σ
1
1
4
3
2
(2π) σ h
3
4
r
−
(µ−K)2
4σ2 h
3e
√
−→
2 µ − K σ→0
0
+∞
h
if µ 6= K
otherwise.
The fact that such estimate can be obtained with non differentiable f demonstrates
the power of the Vibrato technique.
11
3. Second Derivatives by Vibrato plus Automatic Differentiation (VAD).
The differentiation that leads to formula (2.22) can be derived automatically by AD;
then one has just to write a computer program that implements the formula of proposition 2.19 and apply automatic differentiation to the computer program. We recall
here the basis of AD.
3.1. Automatic Differentiation. Consider a function z = f (u) implemented
in C or C++ by
double f(double u){...}
To find an approximation of zu′ , one could call in C
double dxdu= (f(u + du)-f(u))/du
because
zu′ = f ′ (u) =
f (u + du) − f (u)
+ O(|du|).
du
A good precision ought to be reached by choosing du small. However arithmetic truncation limits the accuracy and shows that it is not easy to choose du appropriately
because beyond a certain threshold, the accuracy of the finite difference formula degenerates due to an almost zero over almost zero ratio. As described in Squire et al.
Figure 2. Precision (log-log plot of |dzdu −
cos(1.)| computed with the forward finite difference formula to evaluate sin′ (u) at u = 1.
Figure 3. Same as Fig. 2 but with the finite
difference which uses complex increments; both
test have been done with Maple-14
[38], one simple remedy is to use complex imaginary increments because
Re
′′′
f (u + idu) − f (u)
f (u + idu)
du2
= Re
= f ′ (u) − Ref (u + iθdu)
idu
idu
6
leads to f ′ (u) = Re[f (u + idu)/(idu)] where the numerator is no longer the result of
a difference of two terms. Indeed tests show that the error does not detoriate when
du → 0 (figure 3). Hence one can choose du = 10−8 to render the last term with a
O(10−16 ) accuracy thus obtaining an essentially exact result.
The cost for using this formula is two evaluations of f (), and the programming
requires to redefine all double as std::complex of the Standard Template Library in
C++.
12
3.2. AD in Direct Mode. A conceptually better idea is based on the fact
that each line of a computer program is differentiable except at switching points of
branching statements like if and at zeros of the sqrt functions etc.
Denoting by dx the differential of a variable x, the differential of a*b is da*b+a*db,
the differential of sin(x) is cos(x)dx, etc. . . By operator overloading, this algebra
can be built into a C++ class, called ddouble here:
class ddouble {
public: double val[2];
ddouble(double a=0, double b=0){ val[1]=b; val[0]=a; }
ddouble operator=(const ddouble& a)
{ val[1] = a.val[1]; val[0]=a.val[0]; return *this; }
ddouble operator - (const ddouble& a, const ddouble& b)
{ return ddouble(a.val[0] - b.val[0],a.val[1] - b.val[1]); }
ddouble operator * (const ddouble& a, const ddouble& b)
{ return ddouble(a.val[0] * b.val[0], a.val[1]*b.val[0]
+ a.val[0] * b.val[1]); }
... };
So all ddouble variables have a 2-array of data: val[0] contains the value of the variable and val[1] the value of its differential. Notice that the constructor of ddouble
assigns zero by default to val[1].
To understand how it works, consider the C++ example of figure 4 which calls a
function f (u, ud ) = (u−ud )2 for u = 2 and ud = 0.1. Figure 5 shows the same program
where double has been changed to ddouble and the initialization of u implies that
its differential is equal to 1. The printing statement displays now the differential of f
which is also its derivative with respect to u if all parameters have their differential
initialized to 0 except u for which has du = 1.
Writing the class double with all
double f(double u, double u_d)
{ double z = u-u_d;
return z*(u-u_d); }
int main() {
double u=2., u_d =0.1;
cout << f(u,u_d)<< endl;
return 0;
}
ddouble f(ddouble u, ddouble u_d)
{ ddouble z = u-u_d;
return z*(u-u_d); }
int main() {
ddouble u=ddouble(2.,1.), u_d = 0.1;
cout << f(u,u_d).val[1] << endl;
return 0;
}
Figure 4. A tiny C++ program to compute (u − ud )2 at u = 2, ud = 0.1.
Figure 5. The same program now comd
putes du
(u − ud )2 at u = 2, ud = 0.1.
functions and common arithmetic operators is a little tedious but not difficult. An
example can be downloaded from www.ann.jussieu.fr/pironneau.
The method can be extended to higher order derivatives easily. For second derivatives, for instance, a.val[4] will store a, its differentials with respected to the first and
second parameter, d1 a, d2 a and the second differential d12 a where the two parameters
can be the same. The second differential of a*b is a∗d12 b+d1a∗d2 b+d2 a∗d1 b+b∗d12a,
and so on.
df
Notice that du
can also be computed by the same program provided the first
d
line in the main() is replaced by ddouble u=2., u d=ddouble(0.1,1.);. However
df df
if both derivatives
are needed, then, either the program must be run twice
,
du dud
or the class ddouble must be modified to handle partial derivatives. In either case
13
the cost of computing n partial derivatives will be approximately n times that of the
original program; the reverse mode does not have this numerical complexity and must
be used when, say, n > 5 if expression templates with traits are used in the direct
mode and n > 5 otherwise [35].
3.3. AD in Reverse Mode. Consider finding Fθ′ where (u, θ) → F (u, θ) ∈ R
and u ∈ Rd and θ ∈ Rn . Assume that u is the solution of a well posed linear system
Au = Bθ + c.
The direct differentiation mode applied to the C++ program which implements
F will solve the linear system n times at the cost of d2 n operations at least.
The mathematical solution by calculus of variations starts with
Fθ′ dθ = (∂θ F )dθ + (∂u F )du with Adu = Bdθ,
then introduces p ∈ Rd solution of AT p = (∂u F )T and writes
(∂u F )du = (AT p)T du = pT Bdθ ⇒ Fθ′ dθ = (∂θ F + pT B)dθ.
The linear system for p is solved only once, i.e. performing O(d2 ) operations at least.
Thus, as the linear system is usually the costliest operation, this second method is
the most advantageous when n is large.
A C program only made of assignments can be seen as a triangular linear system
for the variables. Loops can be unrolled and seen as assignments and tests, etc. Then,
by the above method, the ith line of the program is multiplied by pi and p is computed
from the last line up; but the biggest difficulty is the book-keeping of the values of
the variables, at the time p is computed.
For instance, for the derivative of f=u+ud with respect to ud with u given by
{u=2*ud+4; u=3*u+ud;},u in the second line is not the same as u in the third line
and the program should be rewritten as u1=2*ud+4; u=3*u1+ud;. Then the system
for p is p2=1; p1=3*p2; and the derivative is 2*p1+p2+1=8.
In this study we have used the library adept 1.0 by R.J. Hogan described in
Hogan [25]. The nice part of this library is that the programming for the reverse
mode is quite similar to the direct mode presented above; all differentiable variables
have to be declared as ddouble and the variable with respect to which things are
differentiated is indicated at initialization, as above.
3.4. Non-Differentiable Functions. In finance, non-differentiability is everywhere. For instance, the second derivative Rin K of (x − K)+ does not exist at x = K
∞
as a function, yet the second derivative of 0 f (x)(x − K)+ dx is f (K). Distribution
theory extends the notion of derivative: the Heavyside function H(x) = 1{x≥0} has
the Dirac mass at zero δ(x) for derivative.
Automatic differentiation can be extended to handle this difficulty to some degree
by approximating the Dirac mass at 0 by the functions δ a (x) defined by
x2
1
δ a (x) = √ e− a .
aπ
Now, suppose f is discontinuous at x = z and smooth elsewhere; then
f (x) = f + (x)H(x − z) + f − (x)(1 − H(x − z))
hence
fz′ (x) = (f + )′z (x)H(x − z) + (f − )′z (x)(1 − H(x − z)) − (f + (z) − f − (z))δ(x − z)
14
Unless this last term is added, the computation of the second order sensitivities will
not be right.
If in the AD library the ramp function x+ is defined as xH(x) with its derivative
to be H(x), if H is defined with its derivative equal to δ a and if in the program which
computes the financial asset it is written that (x − K)+ = ramp(x − K), then the
second derivative in K computed by the AD library will be δ a (x − K). Moreover, it
will also compute
Z
0
∞
f (x)(x − K)+ dx ≈
N
1 X
f (ξi )δ a (ξi − K)
N i=1
where ξi are the N quadrature points of the integral or the Monte-Carlo points used
by the programmer to approximate the integral.
However, this trick does not solve all problems and one must be cautious; for
instance writing that (x − K)+ = (x − K)H(x − K) will not yield the right result.
Moreover, the precision is rather sensitive to the value of a.
Remark 7. Notice that finite difference (FD) is not plagued by this problem,
which means that FD with complex increment is quite a decent method for first order
sensitivities. For second order sensitivities the “very small over very small” problem
is still persistent.
4. VAD and the Black-Scholes Model. In this section, we implement and
test VAD and give a conceptual algorithm that describes the implementation of this
method (done automatically). We focus on indicators which depend on the solution
of an SDE, instead of the solution of the SDE itself. Let us take the example of a
standard European Call option in the Black-Scholes model.
4.1. Conceptual algorithm for VAD.
1. Generate M simulation paths with time step h = Tn of the underlying asset
∂X
X and its tangent process Y =
with respect to a parameter θ for k =
∂θ
0, . . . , n − 2:
√
∂X0
n
n
n
n
n
n
X̄k+1 = X̄k + rhX̄k + X̄k σ hZk+1 , X̄0 = X0 , Ȳ0 =
∂θ
√
√
∂
∂
n
n
n
n
n
n
σ h X̄k Zk+1 , .
(rh) X̄k + Ȳk σ h +
Ȳk+1 = Ȳk + rhȲk +
∂θ
∂θ
(4.1)
2. For each simulation path:
(a) Generate M_Z last time steps ($\bar X_T = \bar X^n_n$):
$$\bar X^n_n = \bar X^n_{n-1}\bigl(1 + rh + \sigma\sqrt{h}\,Z_n\bigr). \tag{4.2}$$
(b) Compute the first derivative with respect to θ by Vibrato using the antithetic technique (formula (2.19) with σ(X_t) equal to X_t σ):
$$\frac{\partial V_T}{\partial\theta} = \frac{\partial\mu_{n-1}}{\partial\theta}\,\frac{1}{2}\,(V_T^+ - V_T^-)\,\frac{Z_n}{\bar X^n_{n-1}\,\sigma\sqrt{h}} + \frac{\partial\sigma_{n-1}}{\partial\theta}\,\frac{1}{2}\,(V_T^+ - 2V_T^\bullet + V_T^-)\,\frac{Z_n^2 - 1}{\bar X^n_{n-1}\,\sigma\sqrt{h}}, \tag{4.3}$$
with $V_T^{\pm,\bullet} = (\bar X_T^{\pm,\bullet} - K)^+$,
$$\bar X_T^{\pm} = \bar X^n_{n-1} + rh\,\bar X^n_{n-1} \pm \sigma\,\bar X^n_{n-1}\sqrt{h}\,Z_n,\qquad \bar X_T^{\bullet} = \bar X^n_{n-1} + rh\,\bar X^n_{n-1}, \tag{4.4}$$
and
$$\frac{\partial\mu_{n-1}}{\partial\theta} = \bar Y^n_{n-1}(1 + rh) + \bar X^n_{n-1}\,\frac{\partial}{\partial\theta}(rh),\qquad \frac{\partial\sigma_{n-1}}{\partial\theta} = \bar Y^n_{n-1}\,\sigma\sqrt{h} + \bar X^n_{n-1}\,\frac{\partial}{\partial\theta}\bigl(\sigma\sqrt{h}\bigr). \tag{4.5}$$
If θ = T or θ = r, we have to add ∂/∂θ(e^{−rT}) V_T to the result above.
(c) Apply an Automatic Differentiation method to the computer program that implements step (4.3) to compute the second derivative with respect to θ at some θ*.
(d) Compute the mean per path, i.e. over M_Z.
3. Compute the mean of the resulting vector (over the M simulation paths) and discount it.
A schematic sketch of steps 1–2(b) for θ = X_0 is given after this list.
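The following is a minimal C++ sketch of steps 1–2(b) above for θ = X_0 (so ∂X_0/∂θ = 1 and r, σ do not depend on θ); the discounting of step 3 and the AD pass of step (c) are omitted, and the function name and the strike K = 100 are illustrative rather than taken from the paper's code.

```cpp
// Sketch of one path of the Vibrato estimator (4.3)-(4.5) for theta = X_0.
#include <algorithm>
#include <cmath>
#include <random>

double vibrato_delta_one_path(double X0, double r, double sigma, double T,
                              int n, std::mt19937_64& gen) {
    std::normal_distribution<double> N01(0.0, 1.0);
    const double h = T / n, sqh = std::sqrt(h);

    double X = X0, Y = 1.0;                       // path and tangent process, step 1
    for (int k = 0; k < n - 1; ++k) {
        double Z  = N01(gen);
        double Xn = X + r * h * X + sigma * sqh * X * Z;
        double Yn = Y + r * h * Y + sigma * sqh * Y * Z;  // d(rh)/dtheta = d(sigma sqrt(h))/dtheta = 0
        X = Xn; Y = Yn;
    }

    double Zn = N01(gen);                         // last time step, step 2
    auto payoff = [](double x) { return std::max(x - 100.0, 0.0); };  // K = 100 assumed
    double Vp = payoff(X + r * h * X + sigma * X * sqh * Zn);
    double Vm = payoff(X + r * h * X - sigma * X * sqh * Zn);
    double Vo = payoff(X + r * h * X);

    double dmu = Y * (1.0 + r * h);               // formula (4.5) with theta = X_0
    double dsg = Y * sigma * sqh;
    return dmu * 0.5 * (Vp - Vm) * Zn / (X * sigma * sqh)
         + dsg * 0.5 * (Vp - 2.0 * Vo + Vm) * (Zn * Zn - 1.0) / (X * sigma * sqh);
}
```

Averaging this quantity over M paths and discounting reproduces step 3; applying AD to this function with respect to X0 corresponds to step (c).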
4.2. Greeks. The Delta measures the rate of change of the premium E[V(X_T)] with respect to changes in the spot price X_0. The Gamma measures the rate of change of the Delta with respect to changes in the spot price; Gamma can be important for Delta-hedging a portfolio. The Vanna is the second derivative of the premium with respect to σ and X_0; it measures the rate of change of the Delta with respect to changes in the volatility.
4.3. Numerical Test. For the generation of the random numbers, we chose the standard Mersenne-Twister generator available in the C++11 STL. We take M_Z = 1, i.e. we simulate only one last time step per path, for all the test cases except the European Call contract in the Black-Scholes model; for that contract we used multiple time steps with the Euler scheme, with or without a Brownian bridge.
The parameters considered in the following numerical experiments are K = 100, σ = 20%, r = 5% and T = 1 year. The initial price of the risky asset varies from 1 to 200. The Monte Carlo parameters are set to 100,000 simulation paths and 25 time steps.
4.3.1. Preliminary Numerical Test. Here we focus on the numerical precision of VAD on the Gamma of a standard European Call contract with constant volatility and drift, for which there is an analytical Black-Scholes formula. Since Vibrato of Vibrato is similar to Vibrato plus AD (VAD), it is pointless to compare the two: recall (Propositions 2.6 and 2.8) that it is equivalent to apply Vibrato to Vibrato or to apply automatic differentiation to Vibrato. However, the computation times are different and naturally double Vibrato is faster.
We compare the analytical solution to the one obtained with VAD; for each new set of parameters we reuse the same sample of the random variables.
On figure 6, the Gammas are compared at X_0 = 120; the true value of the Gamma is Γ_0 = 0.0075003. The convergence with respect to the number of paths is also displayed for two values of M_Z. The method shows good precision and fast convergence when the number of paths for the final time step is increased.
Figure 6. On the left the Gamma versus Price is displayed when computed by VAD; the
analytical exact Gamma is also displayed; both curves overlap. On the right, the convergence history
at one point X0 = 120 is displayed with respect to the number of Monte Carlo samples MW . This
is done for two values of MZ (the number of the final time step), MZ = 1 (low curve) and MZ = 2
(upper curve).
The L²-error, denoted by ε_{L²}, is defined by
$$\varepsilon_{L^2} = \frac{1}{P}\sum_{i=1}^{P}\bigl(\bar\Gamma^i - \Gamma_0\bigr)^2. \tag{4.6}$$
On figure 7, we compare the results with and without variance reduction on Vibrato at the final time step, i.e. antithetic variables. The convergence history against the number of simulation paths is displayed; the results show that variance reduction is efficient on this test case. The standard error against the number of simulation paths is also displayed. It is clear that variance reduction is needed: without it, almost ten times the number of simulation paths is required to obtain the same precision. The Gamma is computed for the same set of parameters as given above.
On figure 8 we display the Vanna of a European Call option, computed with VAD. Again, the convergence with respect to the number of simulation paths is accelerated by more sampling of the final time step. Note that the Vanna requires double the number of time steps.
4.3.2. Third Order Derivatives. For third order derivatives, we compute second derivatives by Vibrato of Vibrato (2.6) and differentiate by AD (VVAD). The sensitivity of the Gamma with respect to changes in X_0 is ∂³V/∂X_0³; the sensitivity of the Vanna with respect to changes in the interest rate is ∂³V/∂X_0∂σ∂r. The parameters of the European Call are the same, but the Monte Carlo path number is 1,000,000 with 50 time steps for the discretization. The results are displayed on figure 9. The convergence is slow; we could not eliminate the small difference between the analytical solution and the approximation by increasing the number of paths.
Figure 7. On the left the Gamma versus the number of simulation paths is displayed when
computed by VAD with and without the variance reduction method on Z, the straight line is the
analytical solution at one point X0 = 120; On the right, the standard error of the two methods
versus the number of simulation paths with and without variance reduction.
Figure 8. On the left the Vanna versus Price is displayed when computed by VAD; the analytical
exact Vanna is also displayed; both curves overlap. On the right, the convergence history at one
point X0 = 120 is displayed with respect to the number of Monte Carlo samples MW . This is done
for two values of MZ , MZ = 1 (lower curve) and MZ = 2 (upper curve).
Figure 9. On the left ∂ 3 V /∂X03 versus Price is displayed when computed by VVAD; the analytical exact curve is also displayed; both curves practically overlap. On the right, the same for the
Vanna with respect to changes in interest rate (∂ 3 V /∂X0 ∂σ∂r).
4.3.3. Ramp Function and High Order Derivatives. As mentioned in Section 3.4, it is possible to handle the non-differentiability of the function (x − K)+ at x = K by using distribution theory and programming the ramp function explicitly with a second derivative equal to an approximate Dirac function at K. We illustrate this technique with a standard European Call option in the Black-Scholes model.
We computed the Gamma and the sixth derivative with respect to X_0. For the first derivative the parameter a does not play an important role but, as we evaluate higher derivatives, the choice of a becomes crucial for the quality of the approximation, and it requires more points to capture the Dirac approximation when a is small. Currently the choice of a is experimental.
We took the same parameters as previously for the standard European Call option, but the maturity is now set at T = 5 years for the Gamma and T = 0.2 year for the sixth derivative with respect to X_0. The initial asset price varies from 1 to 200. The Monte Carlo parameters are again 100,000 simulation paths and 25 time steps.
The results are displayed on figure 10.
Figure 10. On the left the Gamma versus Price is displayed when computed by AD with the
ramp function (with a = 1); the analytical exact Gamma is also displayed; both curves overlap. On
the right, the sixth derivative with respect to the parameter X0 is displayed when computed via the
same method; the analytical solution is also displayed. We computed the approximation with local
parameter a and with a = 5.
For the Gamma the curves overlap, but for the sixth derivative with respect to X_0 we cannot take a constant parameter a anymore. When we choose a locally adapted parameter a, the curves are practically overlapping.
4.4. Baskets. A Basket option is a multidimensional derivative security whose
payoff depends on the value of a weighted sum of several risky underlying assets.
As before, Xt is given by (2.3). But now (Wt )t∈[0,T ] is a d-dimensional correlated
Brownian motion with E[dWti dWtj ] = ρi,j dt.
To simplify the presentation, we assume that r and σi are real constants and the
payoff is given by
$$V_T = e^{-rT}\,\mathbb E\Bigl[\Bigl(\sum_{i=1}^{d}\omega_i X_{i\,T} - K\Bigr)^{+}\Bigr], \tag{4.7}$$
where $(\omega_i)_{i=1,\dots,d}$ are positive weights with $\sum_{i=1}^d\omega_i = 1$. Here we compare three different methods: reference values obtained from an approximated moment-matching dynamics (Levy [30], Brigo et al. [4]), VAD, and second order finite differences (FD).
4.4.1. Algorithm to compute the Gamma of a Basket option. We make
use of the fact that r and σ are constant.
1. Generate M simulation paths using one time step of the Euler scheme:
$$\bar X^i_{T^\pm} = X^i_{T^\bullet}\,\exp\Bigl(-\frac{1}{2}\sum_{j=1}^{d}|\Sigma_{ij}|^2\,T \pm \sum_{j=1}^{d}\Sigma_{ij}\sqrt{T}\,Z_j\Bigr),\qquad i = 1,\dots,d,$$
with $X_{T^\bullet} = X_0\exp(rT)$, where Z denotes an N(0; I_d) random vector.
2. For each simulation path, with C = ΣΣ^T, compute (Vibrato)
$$\Delta = \Bigl(\frac{\partial\mu}{\partial X_{i0}}\Bigr)^{T}\frac{1}{2\sqrt{h}}\,(V_T^+ - V_T^-)\,C^{-T}Z + \frac{\partial\Sigma}{\partial X_{i0}} : \frac{1}{4h}\,(V_T^+ - 2V_T^\bullet + V_T^-)\,C^{-T}(ZZ^T - I_d)\,C^{-1}, \tag{4.8}$$
with $V_T^{\cdot} = (\omega\cdot\bar X_T^{\cdot} - K)^{+}$.
3. Compute the mean of the resulting vector and discount the result.
4. Apply Automatic Differentiation to what precedes.
4.4.2. Numerical Test. In this numerical test d = 7 and the underlying asset prices are
$$X_0^T = (1840,\ 1160,\ 3120,\ 4330.71,\ 9659.78,\ 14843.24,\ 10045.40). \tag{4.9}$$
The volatility vector is
$$\sigma^T = (0.146,\ 0.1925,\ 0.1712,\ 0.1679,\ 0.1688,\ 0.2192,\ 0.2068). \tag{4.10}$$
The correlation matrix is
$$\begin{pmatrix}
1.0    & 0.9477 & 0.8494 & 0.8548 & 0.8719 & 0.6169 & 0.7886\\
0.9477 & 1.0    & 0.7558 & 0.7919 & 0.8209 & 0.6277 & 0.7354\\
0.8494 & 0.7558 & 1.0    & 0.9820 & 0.9505 & 0.6131 & 0.9303\\
0.8548 & 0.7919 & 0.9820 & 1.0    & 0.9378 & 0.6400 & 0.8902\\
0.8719 & 0.8209 & 0.9505 & 0.9378 & 1.0    & 0.6417 & 0.8424\\
0.6169 & 0.6277 & 0.6131 & 0.6400 & 0.6417 & 1.0    & 0.5927\\
0.7886 & 0.7354 & 0.9303 & 0.8902 & 0.8424 & 0.5927 & 1.0
\end{pmatrix}. \tag{4.11}$$
(A sketch of how correlated Gaussian samples can be drawn from this matrix via a Cholesky factorisation is given after this display.)
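The correlated Gaussian vector used in the algorithm of Section 4.4.1 can, for instance, be produced from a Cholesky factor L of this correlation matrix (so that Lz, with z ~ N(0, I_7), has the prescribed correlations). The sketch below is illustrative, not the paper's code; how L is combined with the volatilities σ_i to form the matrix Σ of the algorithm is left implicit and is an assumption on our part.

```cpp
// Cholesky factorisation of a 7x7 correlation matrix rho: returns lower
// triangular L with L L^T = rho.  L z with z ~ N(0, I_7) then has
// correlation matrix rho.
#include <array>
#include <cmath>

using Mat7 = std::array<std::array<double, 7>, 7>;

Mat7 cholesky(const Mat7& rho) {
    Mat7 L{};                                   // lower triangular factor
    for (int i = 0; i < 7; ++i) {
        for (int j = 0; j <= i; ++j) {
            double s = rho[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    }
    return L;
}
```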
The number of Monte Carlo paths varies from 1 to 10^6 with only one time step for the time integration. Errors are calculated with reference to a solution computed by approximate moment matching.
On figures 11 and 12, the convergence of the computation of the Gamma of a Basket made of the first 4 and of all 7 assets is displayed versus the number of simulation paths, for Vibrato plus AD (direct mode) and for finite differences applied to a brute force Monte Carlo algorithm. The convergence speed of the two methods is almost the same (with a slight advantage for the finite differences).
Table 3 displays results for a Basket with the 7 assets; in addition, Table 4 displays the CPU time for Vibrato plus AD (direct mode); the finite difference method is one third more expensive. Again, the method is very accurate.
Figure 11. d=4.
Figure 12. d=7.
Convergence of the computation of the Gamma of a Basket option when d = 4 and 7
via Vibrato plus Automatic Differentiation on Monte Carlo and via Finite differences,
versus the number of simulation paths. The parameters are for T = 0.1.
5. American Option. Recall that an American option is like a European option
which can be exercised at any time before maturity. The value Vt of an American
option requires the best exercise strategy. Let ϕ be the payoff, then
$$V_t := \operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\ \mathbb E\bigl[e^{-r(\tau-t)}\varphi(X_\tau)\,\big|\,X_t\bigr] \tag{5.1}$$
where Tt denotes the set of [t, T ]-valued stopping times (with respect to the (augmented) filtration of the process (Xs )s∈[0,T ] ).
Consider a time grid 0 < t_1 < · · · < t_n = T with time step h, i.e. t_k = kh. To discretize the problem we begin by assuming that the option can be exercised only at t_k, k = 0, . . . , n; its value is defined recursively by
$$\bar V_{t_n} = e^{-rT}\varphi(\bar X_T),\qquad \bar V_{t_k} = \max\Bigl(e^{-rt_k}\varphi(\bar X_{t_k}),\ \mathbb E[\bar V_{t_{k+1}}\mid \bar X_{t_k}]\Bigr),\quad 0\le k\le n-1. \tag{5.2}$$
5.1. Longstaff-Schwartz Algorithm. Following Longstaff et al. [31], let the continuation value be $C_{t_k} = \mathbb E[e^{-rh}\bar V_{t_{k+1}}\mid \bar X_{t_k}]$ (X is a Markov process). The holder of the contract exercises only if the payoff at t_k is higher than the continuation value $C_{t_k}$. The continuation value is approximated by a linear combination of a finite set of R real basis functions:
$$C_k \simeq \sum_{i=1}^{R}\alpha_{k,i}\,\psi_{k,i}(\bar X_{t_k}). \tag{5.3}$$
Typically, the $(\alpha_{k,i})_{i=1,\dots,R}$ are computed by least squares,
$$\min_{\alpha}\ \mathbb E\Bigl[\Bigl(\mathbb E[e^{-rh}\bar V_{t_{k+1}}\mid \bar X_{t_k}] - \sum_{i=1}^{R}\alpha_{k,i}\,\psi_{k,i}(\bar X_{t_k})\Bigr)^{2}\Bigr]. \tag{5.4}$$
This leads to a Gram linear system
$$\sum_{j=1}^{R}\alpha_{k,j}\,\mathrm{Gram}\bigl(\psi_{k,i}(\bar X_{t_k}),\,\psi_{k,j}(\bar X_{t_k})\bigr) = \mathbb E\bigl[\mathbb E[e^{-rh}V_{k+1}\mid X_{t_k}]\,\psi_{k,i}(\bar X_{t_k})\bigr],\qquad i = 1,\dots,R. \tag{5.5}$$
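A possible implementation of this regression step with the monomial basis 1, x, x² (the basis used in the numerical test of Section 5.2.1) is sketched below; it forms the empirical Gram matrix and right-hand side of (5.5) and solves the 3×3 system by Gaussian elimination. This is an illustrative sketch, not the paper's code; restricting to in-the-money paths, as is common in Longstaff-Schwartz implementations, is left to the caller.

```cpp
// Regression of discounted future values y on the basis (1, x, x^2) of the
// current asset values x: returns the coefficients alpha of (5.3)-(5.5).
#include <array>
#include <vector>

std::array<double, 3> regress_continuation(const std::vector<double>& x,   // X_{t_k} per path
                                           const std::vector<double>& y) { // e^{-rh} V_{t_{k+1}} per path
    std::array<std::array<double, 4>, 3> A{};   // augmented system [Gram | rhs]
    for (std::size_t p = 0; p < x.size(); ++p) {
        const double phi[3] = {1.0, x[p], x[p] * x[p]};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += phi[i] * phi[j];
            A[i][3] += phi[i] * y[p];
        }
    }
    for (int i = 0; i < 3; ++i) {               // Gaussian elimination (no pivoting)
        for (int k = i + 1; k < 3; ++k) {
            double m = A[k][i] / A[i][i];
            for (int j = i; j < 4; ++j) A[k][j] -= m * A[i][j];
        }
    }
    std::array<double, 3> alpha{};
    for (int i = 2; i >= 0; --i) {              // back substitution
        double s = A[i][3];
        for (int j = i + 1; j < 3; ++j) s -= A[i][j] * alpha[j];
        alpha[i] = s / A[i][i];
    }
    return alpha;                                // C_k(x) ~ alpha[0] + alpha[1] x + alpha[2] x^2
}
```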
Remark 8. Once the optimal stopping time is known, the differentiation with
respect to θ of (5.2) can be done as for a European contract. The dependency of the
τ ∗ on θ is neglected; arguably this dependency is second order but this point needs to
be validated. Hence, the following algorithm is proposed.
5.2. Algorithm to compute the Gamma of an American option.
1. Generate M simulation paths of an Euler scheme with n time steps of size h = T/n.
2. Compute the terminal value of each simulation path:
$$V_T = (K - \bar X_T)^{+}. \tag{5.6}$$
3. Compute the Gamma of the terminal condition using (4.3) in Section 4.1 for each simulation path.
4. Iterate from n − 1 to 1 and perform the following at the k-th time step.
(a) Solve the Gram linear system (5.5).
(b) Calculate the continuation value of each path:
$$C_{k+1}(\bar X_{t_k}) = \sum_{i=1}^{R}\alpha_{k,i}\,\psi_i(\bar X^n_k). \tag{5.7}$$
(c) Compute the Gamma by differentiating the Vibrato formula from the time step k − 1 with respect to X_0:
$$\tilde\Gamma_k = \frac{1}{N}\sum_{i=1}^{N}\frac{\partial}{\partial X_0}\Bigl[\bar Y^n_{k-1}(1 + rh)\,\frac{1}{2}\,(\tilde V^{i}_{k+} - \tilde V^{i}_{k-})\,\frac{Z^i_k}{X_0\,\sigma\sqrt{h}} + \bar Y^n_{k-1}\,\sigma\sqrt{h}\,\frac{1}{2}\,(\tilde V^{i}_{k+} - 2\tilde V^{i}_{k\bullet} + \tilde V^{i}_{k-})\,\frac{(Z^i_k)^2 - 1}{\bar X_0\,\sigma\sqrt{h}}\Bigr]. \tag{5.8}$$
(d) For i = 1, . . . , M:
$$\begin{cases} V^i_k = \tilde V^i_k,\ \ \Gamma^i_k = \tilde\Gamma^i_k & \text{if } \tilde V^i_k \ge C_{k+1}(\bar X^{n,i}_k),\\[2pt] V^i_k = e^{-rh}V^i_{k+1},\ \ \Gamma^i_k = e^{-rh}\Gamma^i_{k+1} & \text{otherwise,} \end{cases} \tag{5.9}$$
with
$$\tilde V_{k+1} = (K - \bar X^n_{k+1})^{+} \tag{5.10}$$
and
$$\bar X_{k\pm} = \bar X_{k-1} + rh\,\bar X_{k-1} \pm \sigma\,\bar X_{k-1}\sqrt{h}\,Z_k,\qquad \bar X_{k\bullet} = \bar X_{k-1} + rh\,\bar X_{k-1}. \tag{5.11}$$
5. Compute the mean of the vectors V and Γ.
Remark 9. The differentiation with respect to X0 is implemented by automatic
differentiation of the computer program.
22
5.2.1. Numerical Test. We consider the following values: σ = 20% or σ = 40%, X_0 varying from 36 to 44, T = 1 or T = 2 years, K = 40 and r = 6%. The Monte Carlo parameters are 50,000 simulation paths and 50 time steps for the time grid. The basis in the Longstaff-Schwartz algorithm is (x^n)_{n=0,1,2}.
We compare with the solution of the Black-Scholes partial differential equation discretized by an implicit Euler scheme in time, finite elements in space and a semi-smooth Newton method for the inequalities [1]. A second order finite difference approximation is used to compute the Gamma. A large number of grid points is used to make it a reference solution; the parameters of the method are 10,000 grid points and 50 time steps per year. The convergence history for Longstaff-Schwartz plus Vibrato plus AD is shown on figure 13 with respect to the number of Monte Carlo paths (finite difference on Monte Carlo is also displayed).
On figure 13, we display the history of convergence for the approximation of
the Gamma of an American Put option versus the number of simulation paths for Vibrato plus Automatic differentiation and for Finite Difference applied to the American
Monte Carlo, the straight line is the reference value computed by PDE+ semi-smooth
Newton. The convergence is faster for VAD than with second order Finite Difference
(the perturbation parameter is taken as 1% of the underlying asset price).
On table 5, the results are shown for different sets of parameters taken from Longstaff et al. [31]. The method provides a good precision when variance reduction (??) is used, for the different parameters, except when the underlying asset price is low with a small volatility. As for the computation time, the method is faster than finite difference applied to the American Monte Carlo, which requires three evaluations of the pricing function whereas VAD is equivalent to two evaluations (in direct mode).
Figure 13. Convergence of the Gamma of an American option via Vibrato plus Automatic
Differentiation on the Longstaff-Schwartz algorithm and via Finite Difference, versus the number of
simulation paths. The parameters are σ = 40% and X0 = 40.
6. Second Derivatives of a Stochastic Volatility Model. The Heston model
[23] describes the evolution of an underlying asset (Xt )t∈[0,T ] with a stochastic volatility (Vt )t∈[0,T ] :
$$dX_t = rX_t\,dt + \sqrt{V_t}\,X_t\,dW^1_t,\qquad dV_t = \kappa(\eta - V_t)\,dt + \xi\sqrt{V_t}\,dW^2_t,\qquad t\in[0,T];\quad V_0,\ X_0\ \text{given}. \tag{6.1}$$
Here ξ is the volatility of the volatility, η denotes the long-run mean of V_t and κ the mean reversion velocity. The standard Brownian motions (W^1_t)_{t∈[0,T]} and (W^2_t)_{t∈[0,T]} are correlated: E[dW^1_t dW^2_t] = ρ dt, ρ ∈ (−1, 1). If 2κη > ξ², it can be shown that V_t > 0 for every t ∈ [0, T]. We consider the evaluation of a standard European Call with payoff
$$V_T = \mathbb E[(X_T - K)^{+}]. \tag{6.2}$$
6.1. Algorithm to Compute second derivatives in the Heston Model. To compute the Gamma by the Vibrato method for the first derivative coupled with automatic differentiation for the second derivative, one must do the following:
1. Generate M simulation paths for the underlying asset price (X̄, V̄) and its tangent process (Ȳ, Ū) = ∂(X̄, V̄)/∂X_0, using an Euler scheme with n time steps of size h = T/n:
$$\bar X^n_{k+1} = \bar X^n_k + rh\,\bar X^n_k + \sqrt{\bar V^n_k}\,\bar X^n_k\,\sqrt{h}\,\tilde Z^1_{k+1},\qquad \bar X^n_0 = X_0,$$
$$\bar Y^n_{k+1} = \bar Y^n_k + rh\,\bar Y^n_k + \sqrt{\bar V^n_k}\,\bar Y^n_k\,\sqrt{h}\,\tilde Z^1_{k+1},\qquad \bar Y^n_0 = 1,$$
$$\bar V^n_{k+1} = \bar V^n_k + \kappa(\eta - \bar V^n_k)h + \xi\sqrt{\bar V^n_k}\,\sqrt{h}\,\tilde Z^2_{k+1},\qquad \bar V^n_0 = V_0, \tag{6.3}$$
with
$$\begin{pmatrix}\tilde Z^1\\ \tilde Z^2\end{pmatrix} = \begin{pmatrix}1 & 0\\ \rho & \sqrt{1-\rho^2}\end{pmatrix}\begin{pmatrix}Z^1\\ Z^2\end{pmatrix}, \tag{6.4}$$
where $(Z^1_k, Z^2_k)_{1\le k\le n}$ denotes a sequence of N(0; I_2)-distributed random variables.
2. For each simulation path:
(a) Compute the payoff
$$V_T = (\bar X^n_n - K)^{+}. \tag{6.5}$$
(b) Compute the Delta using Vibrato at maturity with the n − 1 time steps and the following formula:
$$\bar\Delta_n = \bar Y^n_{n-1}(1 + rh)\,\frac{1}{2}\,(V_T^+ - V_T^-)\,\frac{Z^1_n}{\bar X^n_{n-1}\sqrt{\bar V^n_{n-1}}\,\sqrt{h}} + \bar Y^n_{n-1}\sqrt{\bar V^n_{n-1}}\,\sqrt{h}\,\frac{1}{2}\,(V_T^+ - 2V_T^\bullet + V_T^-)\,\frac{(Z^1_n)^2 - 1}{\bar X^n_{n-1}\sqrt{\bar V^n_{n-1}}\,\sqrt{h}}, \tag{6.6}$$
with
$$\bar X_{T^\pm} = \bar X^n_{n-1} + rh\,\bar X^n_{n-1} \pm \sqrt{\bar V^n_{n-1}}\,\bar X^n_{n-1}\,\sqrt{h}\,\tilde Z^1_n, \tag{6.7}$$
$$\bar X_{T^\bullet} = \bar X^n_{n-1} + rh\,\bar X^n_{n-1}. \tag{6.8}$$
(c) Apply an Automatic Differentiation method on step (2b) to compute the Gamma.
3. Compute the mean of the result and discount it.
A sketch of one Euler step (6.3)-(6.4) is given after this list.
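For illustration, one Euler step (6.3)-(6.4) might be coded as follows; the flooring of the variance at zero is a common fix for the discretised square-root dynamics and is our assumption, not something prescribed above.

```cpp
// One Euler step of the Heston model (6.3) with correlated Gaussians (6.4).
#include <algorithm>
#include <cmath>
#include <random>

struct HestonState { double X, Y, V; };          // asset, tangent dX/dX0, variance

HestonState heston_euler_step(HestonState s, double r, double kappa, double eta,
                              double xi, double rho, double h, std::mt19937_64& gen) {
    std::normal_distribution<double> N01(0.0, 1.0);
    double Z1  = N01(gen), Z2 = N01(gen);
    double Zt1 = Z1;                              // correlated increments, eq. (6.4)
    double Zt2 = rho * Z1 + std::sqrt(1.0 - rho * rho) * Z2;

    double sqV  = std::sqrt(std::max(s.V, 0.0));  // variance floored at zero (assumption)
    double sqh  = std::sqrt(h);
    HestonState n;
    n.X = s.X + r * h * s.X + sqV * s.X * sqh * Zt1;
    n.Y = s.Y + r * h * s.Y + sqV * s.Y * sqh * Zt1;
    n.V = s.V + kappa * (eta - s.V) * h + xi * sqV * sqh * Zt2;
    return n;
}
```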
6.1.1. Numerical Test. We have taken the following values: the underlying asset price X_0 ∈ [60, 130], the strike K = 90, the risk-free rate r = 0.135% and the maturity T = 1.
The initial volatility is V_0 = 2.8087%, the volatility of volatility is ξ = 1%, the mean reversion is κ = 2.931465 and the long-run mean is η = 0.101. The correlation between the two standard Brownian motions is ρ = 50%.
The number of Monte Carlo paths is 500,000 with 100 time steps each.
The results are displayed on figures 14, 15.
On figure 14 we compare the results obtained by Vibrato plus Automatic Differentiation (direct mode) with the second order Finite Difference method applied to a standard Monte Carlo simulation.
Figure 14. On the left the Gamma versus Price is displayed when computed by VAD; the
approximated Gamma via Finite Difference is also displayed; both curves overlap. On the right, the
convergence history at one point (X0 , V0 ) = (85, 2.8087) is displayed with respect to the number of
Monte Carlo samples.
Figure 15. On the left the Vanna versus Price is displayed when computed by VAD; the
approximated Vanna via Finite Difference is also displayed; both curves overlap. On the right, the
convergence history at one point (X0 , V0 ) = (85, 2.8087) is displayed with respect to the number of
Monte Carlo samples.
On figure 15 we display the Vanna of a European Call option in the Heston model, and again the convergence with respect to the number of simulation paths. As for the Gamma, the method is quite precise and provides a good approximation of the Vomma and the Vanna. Both are computed at one point (X_0, V_0) = (85, 2.8087) with the same set of parameters as given above.
Figure 16. On the left the Vomma versus Price is displayed when computed by VAD; the
approximated Vomma via Finite Difference is also displayed; both curves overlap. On the right, the
convergence history at one point (X0 , V0 ) = (85, 2.8087) is displayed with respect to the number of
Monte Carlo samples.
In the case of the Vomma and the Gamma, the computation by VAD is about 30% faster than finite difference. For the Vanna, finite difference requires four evaluations of the pricing function, so VAD is about twice as fast.
7. Vibrato plus Reverse AD (VRAD). If several greeks are requested at
once then it is better to use AD in reverse mode. To illustrate this point, we proceed
to compute all second and cross derivatives, i.e. the following Hessian matrix for a standard European Call option:
$$\begin{pmatrix}
\dfrac{\partial^2V}{\partial X_0^2} & \dfrac{\partial^2V}{\partial\sigma\,\partial X_0} & \dfrac{\partial^2V}{\partial r\,\partial X_0} & \dfrac{\partial^2V}{\partial T\,\partial X_0}\\[8pt]
\dfrac{\partial^2V}{\partial X_0\,\partial\sigma} & \dfrac{\partial^2V}{\partial\sigma^2} & \dfrac{\partial^2V}{\partial r\,\partial\sigma} & \dfrac{\partial^2V}{\partial T\,\partial\sigma}\\[8pt]
\dfrac{\partial^2V}{\partial X_0\,\partial r} & \dfrac{\partial^2V}{\partial\sigma\,\partial r} & \dfrac{\partial^2V}{\partial r^2} & \dfrac{\partial^2V}{\partial T\,\partial r}\\[8pt]
\dfrac{\partial^2V}{\partial X_0\,\partial T} & \dfrac{\partial^2V}{\partial\sigma\,\partial T} & \dfrac{\partial^2V}{\partial r\,\partial T} & \dfrac{\partial^2V}{\partial T^2}
\end{pmatrix}. \tag{7.1}$$
It is easily seen that a Finite Difference procedure will require 36 (at least 33)
evaluations of the original pricing function whereas we only call this function once if
AD is used in reverse mode. Furthermore, we have to handle 4 different perturbation
parameters.
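To make the evaluation count explicit, here is a hedged sketch of a second order central finite difference Hessian in the four parameters: one base evaluation, two extra evaluations per diagonal entry and four per off-diagonal entry, i.e. 1 + 8 + 24 = 33 calls of the pricer (36 if the base point is re-evaluated for each pair). The function names are illustrative only.

```cpp
// Central finite difference Hessian of a pricer V(X0, sigma, r, T).
#include <array>
#include <functional>

using Params = std::array<double, 4>;
using Pricer = std::function<double(const Params&)>;

std::array<std::array<double, 4>, 4> fd_hessian(const Pricer& price, Params p,
                                                const Params& eps) {
    std::array<std::array<double, 4>, 4> H{};
    const double f0 = price(p);                               // 1 evaluation
    for (int i = 0; i < 4; ++i) {
        Params pp = p, pm = p;
        pp[i] += eps[i]; pm[i] -= eps[i];
        H[i][i] = (price(pp) - 2.0 * f0 + price(pm)) / (eps[i] * eps[i]);   // +2 per i
        for (int j = i + 1; j < 4; ++j) {                     // +4 per pair (i, j)
            Params a = p, b = p, c = p, d = p;
            a[i] += eps[i]; a[j] += eps[j];
            b[i] += eps[i]; b[j] -= eps[j];
            c[i] -= eps[i]; c[j] += eps[j];
            d[i] -= eps[i]; d[j] -= eps[j];
            H[i][j] = H[j][i] = (price(a) - price(b) - price(c) + price(d))
                              / (4.0 * eps[i] * eps[j]);
        }
    }
    return H;
}
```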
The parameters are X_0 = 90, K = 100, σ = 0.2, r = 0.05 and T = 1 year. The Monte Carlo parameters are set to 200,000 simulation paths and 50 time steps. We used the library Adept 1.0 for the reverse mode. One great aspect here is that there is only one formula in the computer program to compute all the Greeks; one just has to specify which parameters are taken as variables for differentiation.
The results are shown in Table 1: the reverse automatic differentiation combined with Vibrato is almost 4 times faster than the finite difference procedure.
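For completeness, the reverse sweep itself might look as follows with the Adept library [25]; the calls shown follow Adept's documented interface but should be read as an assumption rather than as the code used for Table 1, and the one-draw "pricer" is only a toy.

```cpp
// Minimal reverse-mode sketch with Adept (interface assumed from [25]).
#include <adept.h>
using adept::adouble;

double toy_price_and_gradient(double X0v, double sigmav, double rv, double Tv,
                              double grad[4]) {
    adept::Stack stack;                     // the tape recording the operations
    adouble X0 = X0v, sigma = sigmav, r = rv, T = Tv;
    stack.new_recording();
    adouble Z  = 0.3;                       // a single fixed normal draw, for illustration only
    adouble XT = X0 * (1.0 + r * T + sigma * sqrt(T) * Z);
    adouble V  = XT - 100.0;                // payoff (X_T - K)^+ with K = 100 assumed
    if (V < 0.0) V = 0.0;
    V = exp(-r * T) * V;
    V.set_gradient(1.0);                    // seed dV/dV = 1
    stack.compute_adjoint();                // one reverse sweep gives all four derivatives
    grad[0] = X0.get_gradient(); grad[1] = sigma.get_gradient();
    grad[2] = r.get_gradient();  grad[3] = T.get_gradient();
    return adept::value(V);
}
```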
8. Malliavin Calculus and Likelihood Ratio Method. Here we want to point out that Malliavin calculus and LRM are excellent methods, but they have their own numerical issues, especially with short maturities, which may make VAD more attractive for a general purpose software.
26
Mode         Time (sec)
FD (MC)      2.01
VRAD (MC)    0.47
Table 1. CPU time (in seconds) to compute the Hessian matrix of a standard European Call option (considering X_0, σ, r, T as variables) in the Black-Scholes model.
Let us start by recalling briefly the foundations of Malliavin calculus (further
details are available in Nualart [34], Fournié et al.[10] and in Gobet et al. [18], for
instance). We recall the Bismut-Elworthy-Li formula (see [3], for example):
Proposition 8.1 (Bismut-Elworthy-Li formula). Let X be a diffusion process given by (2.3) with d = 1, b and σ in C¹. Let f : R → R be C¹ with E[f(X_T)²] and E[f′(X_T)²] bounded. Let (H_t)_{t∈[0,T]} be an F-progressively measurable process in L²([0,T] × Ω, dt ⊗ dP) such that $\mathbb E\bigl[\int_0^T H_s^2\,ds\bigr]$ is finite. Then
$$\mathbb E\Bigl[f(X_T)\int_0^T H_s\,dW_s\Bigr] = \mathbb E\Bigl[f'(X_T)\,Y_T\int_0^T \frac{\sigma(X_s)H_s}{Y_s}\,ds\Bigr], \tag{8.1}$$
where $Y_t = \frac{dX_t}{dx}$ is the tangent process defined in (2.11). By choosing $H_t = Y_t/\sigma(X_t)$ the above yields
$$\frac{\partial}{\partial x}\,\mathbb E[f(X_T^x)] = \mathbb E\Bigl[f(X_T^x)\,\underbrace{\frac{1}{T}\int_0^T \frac{Y_s}{\sigma(X_s^x)}\,dW_s}_{\text{Malliavin weight}}\Bigr], \tag{8.2}$$
provided f has polynomial growth and $\mathbb E\bigl[\bigl(\tfrac{Y_t}{\sigma(X_t^x)}\bigr)^2\bigr]$ is finite.
Second Derivative. In the context of the Black-Scholes model, the Malliavin weight π_Γ for the Gamma is (see [2]):
$$\pi_\Gamma = \frac{1}{X_0^2\,\sigma T}\Bigl(\frac{W_T^2}{\sigma T} - \frac{1}{\sigma} - W_T\Bigr). \tag{8.3}$$
Hence
$$\Gamma_{\mathrm{Mal}} = e^{-rT}\,\mathbb E\Bigl[(X_T - K)^{+}\,\frac{1}{X_0^2\,\sigma T}\Bigl(\frac{W_T^2}{\sigma T} - \frac{1}{\sigma} - W_T\Bigr)\Bigr]. \tag{8.4}$$
The pure likelihood ratio method gives a similar formula (see Lemma 2.1):
$$\Gamma_{\mathrm{LR}} = e^{-rT}\,\mathbb E\Bigl[(X_T - K)^{+}\Bigl(\frac{Z^2 - 1}{X_0^2\,\sigma^2 T} - \frac{Z}{X_0^2\,\sigma\sqrt{T}}\Bigr)\Bigr]. \tag{8.5}$$
LRPW is an improvement of LRM obtained by combining it with a pathwise method [15]:
$$\Gamma_{\mathrm{LRPW}} = \frac{\partial}{\partial X_0}\Bigl(e^{-rT}\,\mathbb E\Bigl[(X_T - K)^{+}\frac{Z}{X_0\,\sigma\sqrt{T}}\Bigr]\Bigr) = e^{-rT}\,\frac{K}{X_0^2\,\sigma\sqrt{T}}\,\mathbb E\bigl[Z\,\mathbf 1_{\{X_T > K\}}\bigr]. \tag{8.6}$$
LRPW is much cheaper than VAD, Malliavin or LRM and it is also less singular at T = 0. However, all these methods require new analytical derivations for each new problem.
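As an illustration, the LRPW estimator (8.6) for the Black-Scholes Gamma can be coded directly; the sketch below samples X_T exactly rather than through an Euler scheme, which is an assumption made here for brevity.

```cpp
// Monte Carlo LRPW Gamma (8.6) in the Black-Scholes model, with exact sampling
// X_T = X0 exp((r - sigma^2/2) T + sigma sqrt(T) Z).
#include <cmath>
#include <random>

double gamma_lrpw(double X0, double K, double r, double sigma, double T,
                  long M, std::mt19937_64& gen) {
    std::normal_distribution<double> N01(0.0, 1.0);
    double acc = 0.0;
    for (long m = 0; m < M; ++m) {
        double Z  = N01(gen);
        double XT = X0 * std::exp((r - 0.5 * sigma * sigma) * T + sigma * std::sqrt(T) * Z);
        if (XT > K) acc += Z;               // accumulates E[ Z 1_{X_T > K} ]
    }
    return std::exp(-r * T) * K / (X0 * X0 * sigma * std::sqrt(T)) * (acc / M);
}
```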
27
8.1. Numerical Tests. We compared VAD with LRPW and Malliavin calculus. The results are shown in Table 2.
T        VAD (MC)   FD (MC)    LRPW (MC)  Malliavin (MC)
1.00e+0  3.63e-5    1.76e-4    3.40e-4    9.19e-3
5.00e-1  8.55e-5    3.11e-4    7.79e-4    1.62e-2
1.00e-1  6.64e-4    1.50e-3    4.00e-3    6.54e-2
5.00e-2  1.49e-3    2.80e-3    7.51e-3    1.21e-1
1.00e-2  8.78e-3    1.84e-2    3.76e-2    5.44e-1
5.00e-3  1.86e-2    3.95e-2    7.55e-2    1.10e+0
1.00e-3  9.62e-2    1.77e-1    3.76e-1    5.74e+0
5.00e-4  1.85e-1    3.34e-1    7.56e-1    1.07e+1
1.00e-4  1.01e+0    1.63e+0    3.77e+0    5.26e+1
5.00e-5  1.98e+0    3.46e+0    7.54e+0    1.09e+2
1.00e-5  1.03e+1    1.78e+1    3.79e+1    5.40e+2
Table 2. Variance of the Gamma of a standard European Call with short maturities in the Black-Scholes model. Gamma is computed with VAD, FD, LRPW and Malliavin. The computations are done on the same samples.
The Gamma is computed with the same parameters as in the section 4.3. The
maturity is varying from T = 1 to 10−5 year. The Monte Carlo parameters are also
set to 100, 000 simulation paths and 25 time steps.
Notice the inefficiency of LRPW, Malliavin Calculus and to a lesser degree of
VAD and Finite Difference when T is small.
Note on CPU. Tests have been done on an Intel(R) Core(TM) i5-3210M processor @ 2.50 GHz. The processor has a turbo speed of 3.1 GHz and two cores. We did not use parallelization in the code.
9. Conclusion. This article extends the work of Mike Giles and investigates the
Vibrato method for higher order derivatives in quantitative finance.
For a general purpose software, Vibrato of Vibrato is too complex, but we showed that it is essentially similar to the analytical differentiation of Vibrato. Thus AD of Vibrato is general, simple and essentially equivalent to Vibrato of Vibrato for second derivatives. We have also shown that automatic differentiation can be enhanced to handle the singularities of the payoff functions of finance. While AD for second derivatives is certainly the easiest solution, it is not the safest, and it requires an appropriate choice for the approximation of the Dirac mass.
Finally we compared with Malliavin calculus and LRPW.
The framework proposed is easy to implement, efficient, faster and more stable
than its competitors and does not require analytical derivations if local volatilities or
payoffs are changed.
Further developments are in progress around nested Monte Carlo and Multilevel-Multistep Richardson-Romberg extrapolation [29] (hence an extension of [6]).
Acknowledgment. This work has been done with the support of ANRT and
Global Market Solution inc. with special encouragements from Youssef Allaoui and
Laurent Marcoux.
REFERENCES
[1] Y. Achdou and O. Pironneau. Computation methods for option pricing. Frontiers in Applied
Mathematics. SIAM, Philadelphia, 2005. xviii+297 pp., ISBN 0-89871-573-3.
[2] E. Benhamou. Optimal Malliavin weighting function for the computation of the greeks. Mathematical Finance, 13:37–53, 2003.
[3] J. M. Bismut, K. D. Elworthy, and X. M. Li. Bismut type formulae for differential forms.
Probability Theory, 327:87–92, 1998.
[4] D. Brigo, F. Mercurio, F. Rapisarda, and R. Scotti. Approximated moment-matching dynamics
for basket options simulation. Product and Business Development Group, Banca IMI, 2002.
Working paper.
[5] M. Broadie and P. Glasserman. Estimating security price derivatives using simulation. Management Science, 42(2):269–285, 1996.
[6] S. Burgos and M. B. Giles. The computation of greeks with multilevel monte carlo. 2011.
arXiv:1102.1348.
[7] L. Capriotti. Fast greeks by algorithmic differentiation. Journal of Computational Finance,
14(3):3–35, 2011.
[8] L. Capriotti. Likelihood ratio method and algorithmic differentiation: fast second order greeks.
Preprint SSRN:1828503, 2014.
[9] P. S. Dwyer and M. S. Macphail. Symbolic matrix derivatives. The Annals of Mathematical Statistics, 19(4):517–534, 1948.
[10] E. Fournié, J. M. Lasry, J. Lebuchoux, and P. L. Lions. Application of Malliavin calculus to
Monte Carlo methods in finance. Finance and Stochastics, 2(5):201–236, 2001.
[11] M. Giles and P. Glasserman. Smoking adjoints: fast evaluation of greeks in Monte Carlo
calculations. NA-05/15, Numerical Analysis Group, Oxford University, July 2005.
[12] M. B. Giles. Vibrato Monte Carlo sensitivities. In P. L’Ecuyer and A. Owen, editors, Monte
Carlo and Quasi-Monte Carlo Methods 2008, pages 369–382. New York, Springer edition,
2009.
[13] M. B. Giles. Monte Carlo evaluation of sensitivities in computational finance. September 20–22, 2007.
[14] P. Glasserman. Gradient estimation via perturbation analysis. Kluwer Academic Publishers, Norwell, Mass, 1991.
[15] P. Glasserman. Monte Carlo methods in financial engineering, volume 53 of Application of
Mathematics. Springer, New York, 2003. xiii+598 pp., ISBN 0-387-00451-3.
[16] P. Glasserman and X. Zhao. Fast greeks by simulation in forward LIBOR models. Journal of
Computational Finance, 3(1):5–39, 1999.
[17] P. W. Glynn. Likelihood ratio gradient estimation: an overview. In Proceedings of the Winter
Simulation Conference, pages 366–374, New York, 1987. IEEE Press.
[18] E. Gobet and R. Munos. Sensitivity analysis using Itô-Malliavin calculus and martingales: applications to stochastic optimal control. SIAM Journal on Control and Optimization, 43(5):1676–1713, 2005.
[19] A. Griewank. On automatic differentiation. In M. Iri and K. Tanabe, editors, Mathematical Programming: Recent Developments and Applications, pages 83–108. Kluwer Academic Publishers, Dordrecht, 1989.
[20] A. Griewank and A. Walther. Evaluating derivatives: principles and techniques of algorithmic
differentiation. Frontiers in Applied Mathematics. SIAM, Philadelphia, 2008. xxi+426
pp., ISBN 978-0-89871-659-7.
[21] A. Griewank and A. Walther. ADOL-C: A Package for the Automatic differentiation of algorithm written in C/C++. University of Paderborn, Germany, 2010.
[22] L. Hascoët and V. Pascual. The Tapenade automatic differentiation tool: principles, model,
and specification. ACM Transactions On Mathematical Software, 39(3), 2013.
[23] S. L. Heston. A closed-form solution for options with stochastic volatility with applications to
bond and currency options. Review of Financial Studies, 6:327–343, 1993.
[24] Y. C. Ho and X. R. Cao. Optimization and perturbation analysis of queuing networks. Journal
of Optimization Theory and Applications, 40:559–582, 1983.
[25] R. J. Hogan. Fast reverse-mode automatic differentiation using expression templates in C++.
Transactions on Mathematical Software, 40(26):1–26, 2014.
[26] C. Homescu. Adjoints and automatic (algorithmic) differentiation in computational finance. arXiv:1107.1831, 2011.
[27] H. Kunita. Stochastic Flows and Stochastic Differential Equations. Cambridge studies in
advanced mathematics. Cambridge University Press, Cambridge, 1990. xiv+361 pp., ISBN
0-521-35050-6.
[28] P. L'Ecuyer. A unified view of the IPA, SF and LR gradient estimation techniques. Management Science, 36(11):1364–1383, 1990.
[29] V. Lemaire and G. Pagès. Multistep Richardson-Romberg extrapolation. Preprint arXiv:1401.1177, 2014.
[30] E. Levy. Pricing European average rate and currency options. Journal of International Money
and Finance, 11:474–491, 1992.
[31] F. A. Longstaff and E. S. Schwartz. Valuing American options by simulation: a simple least
squares approach. Review of Financial Studies, 14:113–148, 2001.
[32] R. L. Mishkov. Generalization of the formula of Faà di Bruno for a composite function with a vector argument. International Journal of Mathematics and Mathematical Sciences, 24(7):481–491, 2000.
[33] U. Naumann. The art of differentiating computer programs: an introduction to algorithmic differentiation. Software, Environments and Tools. SIAM, RWTH Aachen University, Aachen, Germany, 2012. xviii+333 pp., ISBN 978-1-61197-206-1.
[34] D. Nualart. The Malliavin calculus and related topics. Probability and its Applications.
Springer-Verlag, Berlin, 2006. x+390 pp., ISBN 978-3-540-28328-7.
[35] O. Pironneau. Automatic differentiation for financial engineering. Université Pierre et Marie
Curie, Paris VI, 2008.
[36] M. Reiman and A. Weiss. Sensitivity analysis for simulations via likelihood ratios. Operations
Research, 37:830–844, 1989.
[37] R. Rubinstein. Sensitivity analysis and performance extrapolation for computer simulation
models. Operations Research, 37:72–81, 1989.
[38] W. Squire and G. Trapp. Using complex variables to estimate derivatives of real functions.
SIAM Review, 40(1):110–112, 1998.
[39] R. Suri and M. Zazanis. Perturbation analysis gives strongly consistent sensitivity estimates for the M/G/1 queue. Management Science, 34:39–64, 1988.
Table 3. Results for the price, the Delta and the Gamma of a Basket Option priced with the moment-matching approximation (reference values), Finite Difference on Monte Carlo and Vibrato plus Automatic Differentiation on Monte Carlo. The settings of the Monte Carlo simulation are 1 time step and 1,000,000 simulation paths.

d  T    Price AMM  Price (MC)  Delta AMM  Delta Vibrato (MC)  Delta FD (MC)  Gamma AMM   Gamma VAD (MC)  Gamma FD (MC)
1  0.1  38.4285    37.3823     0.55226    0.55146             0.55423        4.65557e-3  4.66167e-3      4.64998e-3
2  0.1  34.4401    34.1232     0.27452    0.27275             0.28467        1.28903e-3  1.34918e-3      1.28193e-3
3  0.1  46.0780    45.9829     0.18319    0.18220             0.18608        4.29144e-4  4.28572e-4      4.21012e-4
4  0.1  59.6741    58.7849     0.13750    0.13639             0.14147        1.86107e-4  1.93238e-4      1.79094e-4
5  0.1  92.8481    90.9001     0.10974    0.10889             0.10956        7.64516e-5  7.79678e-5      7.59901e-5
6  0.1  139.235    141.766     0.09128    0.09017             0.09048        3.54213e-5  3.71834e-5      3.41114e-5
7  0.1  155.492    153.392     0.07820    0.07744             0.07766        2.31624e-5  2.09012e-5      2.18123e-5
1  1    155.389    154.797     0.66111    0.66039             0.67277        1.30807e-3  1.30033e-3      1.32812e-3
2  1    135.441    133.101     0.32583    0.32186             0.32547        3.80685e-4  3.86998e-4      3.83823e-4
3  1    181.935    182.642     0.21775    0.21497             0.21619        1.26546e-4  1.34423e-4      1.24927e-4
4  1    234.985    232.018     0.16304    0.16055             0.01610        5.49161e-5  5.62931e-5      5.50990e-5
5  1    364.651    363.363     0.13023    0.12780             0.12804        2.25892e-5  2.38273e-5      2.19203e-5
6  1    543.629    540.870     0.10794    0.10477             0.10489        1.04115e-5  8.99834e-6      1.13878e-5
7  1    603.818    607.231     0.92420    0.08995             0.89945        6.87063e-6  7.70388e-6      7.22849e-6
Table 4. Computing time (in seconds) for the Gamma with Finite Difference on Monte Carlo and with Vibrato plus Automatic Differentiation on Monte Carlo simulation, for varying dimension of the problem. The settings of the Monte Carlo algorithm are the same as above.

Method (Computing Gamma)  d=1   d=2   d=3   d=4   d=5   d=6   d=7
FD (MC)                   0.49  0.95  1.33  1.82  2.26  2.91  3.36
VAD (MC)                  0.54  0.77  0.92  1.21  1.50  1.86  2.31
Table 5. Results of the price, the Delta and the Gamma of an American option. The reference values are obtained via the semi-smooth Newton method plus Finite Difference; they are compared to Vibrato plus Automatic Differentiation on the Longstaff-Schwartz algorithm. We compute the standard error for each American Monte Carlo result. The settings of the American Monte Carlo are 50 time steps and 50,000 simulation paths.

S   sigma  T  Price Ref.  Price (AMC)  Std Error  Delta Ref.  Delta Vibrato (AMC)  Std Error  Gamma Ref.  Gamma VAD (AMC)  Std Error
36  0.2    1  4.47919     4.46289      0.013      0.68559     0.68123              1.820e-3   0.08732     0.06745          6.947e-5
36  0.2    2  4.83852     4.81523      0.016      0.61860     0.59934              1.813e-3   0.07381     0.06398          6.846e-5
36  0.4    1  7.07132     7.07985      0.016      0.51019     0.51187              1.674e-3   0.03305     0.03546          4.852e-5
36  0.4    2  8.44139     8.45612      0.024      0.44528     0.44102              1.488e-3   0.02510     0.02591          5.023e-5
38  0.2    1  3.24164     3.23324      0.013      0.53781     0.53063              1.821e-3   0.07349     0.07219          1.198e-4
38  0.2    2  3.74004     3.72705      0.015      0.48612     0.46732              1.669e-3   0.05907     0.05789          1.111e-4
38  0.4    1  6.11553     6.11209      0.016      0.44726     0.45079              1.453e-3   0.02989     0.03081          5.465e-5
38  0.4    2  7.59964     7.61031      0.025      0.39786     0.39503              1.922e-3   0.02233     0.02342          4.827e-5
40  0.2    1  2.31021     2.30565      0.012      0.41106     0.40780              1.880e-3   0.06014     0.05954          1.213e-4
40  0.2    2  2.87877     2.86072      0.014      0.38017     0.39266              1.747e-3   0.04717     0.04567          5.175e-4
40  0.4    1  5.27933     5.28741      0.015      0.39051     0.39485              1.629e-3   0.02689     0.02798          1.249e-5
40  0.4    2  6.84733     6.85873      0.026      0.35568     0.35446              1.416e-3   0.01987     0.02050          3.989e-5
42  0.2    1  1.61364     1.60788      0.011      0.30614     0.29712              1.734e-3   0.04764     0.04563          4.797e-5
42  0.2    2  2.20694     2.19079      0.014      0.29575     0.28175              1.601e-3   0.03749     0.03601          5.560e-5
42  0.4    1  4.55055     4.57191      0.015      0.33973     0.34385              1.517e-3   0.02391     0.02426          3.194e-5
42  0.4    2  6.17459     6.18424      0.023      0.31815     0.29943              1.347e-3   0.01768     0.01748          2.961e-5
44  0.2    1  1.10813     1.09648      0.009      0.21302     0.20571              1.503e-3   0.03653     0.03438          1.486e-4
44  0.2    2  1.68566     1.66903      0.012      0.22883     0.21972              1.487e-3   0.02960     0.02765          2.363e-4
44  0.4    1  3.91751     3.90838      0.015      0.29466     0.29764              1.403e-3   0.02116     0.02086          1.274e-4
44  0.4    2  5.57268     5.58252      0.028      0.28474     0.28447              1.325e-3   0.01574     0.01520          2.162e-4
| 5 |
LATTICE POINTS IN POLYTOPES, BOX SPLINES, AND TODD OPERATORS
MATTHIAS LENZ
arXiv:1305.2784v1 [math.CO] 13 May 2013
Abstract. Let X be a list of vectors that is totally unimodular. In a previous
article the author proved that every real-valued function on the set of interior
lattice points of the zonotope defined by X can be extended to a function
on the whole zonotope of the form p(D)BX in a unique way, where p(D) is
a differential operator that is contained in the so-called internal P-space. In
this paper we construct an explicit solution to this interpolation problem in
terms of Todd operators. As a corollary we obtain a slight generalisation of
the Khovanskii-Pukhlikov formula that relates the volume and the number of
integer points in a smooth lattice polytope.
1. Introduction
Box splines and multivariate splines measure the volume of certain variable polytopes. The vector partition function that measures the number of integral points
in polytopes can be seen as a discrete version of these spline functions. Splines and
vector partition functions have recently received a lot of attention by researchers
in various fields including approximation theory, algebra, combinatorics, and representation theory. A standard reference from the approximation theory point of
view is the book [10] by de Boor, Höllig, and Riemenschneider. The combinatorial
and algebraic aspects are stressed in the book [11] by De Concini and Procesi.
Khovanskii and Pukhlikov proved a remarkable formula that relates the volume and the number of integer points in a smooth polytope [18]. The connection is made via Todd operators, i.e. differential operators of type $\frac{\partial_x}{1 - e^{-\partial_x}}$. The formula
is closely related to the Hirzebruch-Riemann-Roch Theorem for smooth projective
toric varieties (see [7, Chapter 13]). De Concini, Procesi, and Vergne have shown
that the Todd operator is in a certain sense inverse to convolution with the box
spline [13]. This implies the Khovanskii-Pukhlikov formula and more generally the
formula of Brion-Vergne [6].
In this paper we will prove a slight generalisation of the deconvolution formula by De Concini, Procesi, and Vergne. The operator that we use is obtained from the Todd operator, but it is simpler, i.e. it is a polynomial contained in the so-called internal P-space. Our proof uses deletion-contraction, so in some sense we provide
a matroid-theoretic proof of the Khovanskii-Pukhlikov formula.
Furthermore, we will construct bases for the spaces P−(X) and P(X) that were studied by Ardila and Postnikov in connection with power ideals [2] and by Holtz and Ron within the framework of zonotopal algebra [15]. Up to now, no general construction for a basis of the space P−(X) was known.
Date: 14th May 2013.
2010 Mathematics Subject Classification. Primary: 05B35, 19L10, 52B20, 52B40; Secondary:
13B25, 14M25, 16S32, 41A15, 47F05.
Key words and phrases. lattice polytope, box spline, vector partition function, KhovanskiiPukhlikov formula, matroid, zonotopal algebra.
The author was supported by a Junior Research Fellowship of Merton College (University of
Oxford).
Let us introduce our notation. It is similar to the one used in [11]. We fix a
d-dimensional real vector space U and a lattice Λ ⊆ U . Let X = (x1 , . . . , xN ) ⊆ Λ
be a finite list of vectors that spans U . We assume that X is totally unimodular
with respect to Λ, i. e. every basis for U that can be selected from X is also a lattice
basis for Λ. Note that X can be identified with a linear map X : RN → U . Let
u ∈ U . We define the variable polytopes
$$\Pi_X(u) := \{w\in\mathbb R^N_{\ge0} : Xw = u\}\qquad\text{and}\qquad \Pi^1_X(u) := \Pi_X(u)\cap[0,1]^N. \tag{1}$$
Note that any convex polytope can be written in the form ΠX (u) for suitable X
and u. The dimension of these two polytopes is at most N − d. We define the
vector partition function $\mathcal T_X(u) := \bigl|\Pi_X(u)\cap\mathbb Z^N\bigr|$, (2)
the box spline $B_X(u) := \det(XX^T)^{-1/2}\operatorname{vol}_{N-d}\Pi^1_X(u)$, (3)
and the multivariate spline $T_X(u) := \det(XX^T)^{-1/2}\operatorname{vol}_{N-d}\Pi_X(u)$. (4)
Note that we have to assume that 0 is not contained in the convex hull of X in order for $T_X$ and $\mathcal T_X$ to be well-defined. Otherwise, $\Pi_X(u)$ is an unbounded polyhedron.
The zonotope Z(X) is defined as
$$Z(X) := \Bigl\{\sum_{i=1}^{N}\lambda_i x_i : 0\le\lambda_i\le 1\Bigr\} = X\cdot[0,1]^N. \tag{5}$$
We denote its set of interior lattice points by Z− (X) := int(Z(X)) ∩ Λ. The
symmetric algebra over U is denoted by Sym(U ). We fix a basis s1 , . . . , sd for the
lattice Λ. This makes it possible to identify Λ with Zd , U with Rd , Sym(U ) with
the polynomial ring R[s1 , . . . , sd ], and X with a (d × N )-matrix. Then X is totally
unimodular if and only if every non-singular square submatrix of this matrix has
determinant 1 or −1. The base-free setup is more convenient when working with
quotient vector spaces.
We denote the dual vector space by V = U ∗ and we fix a basis t1 , . . . , td that is
dual to the basis for U . An element of Sym(U ) can be seen as a differential operator
on Sym(V ), i. e. Sym(U ) ∼
= R[s1 , . . . , sd ] ∼
= R[ ∂t∂1 , . . . , ∂t∂d ]. For f ∈ Sym(U ) and
p ∈ Sym(V ) we write f (D)p to denote the polynomial in Sym(V ) that is obtained
when f acts on p as a differential operator. It is known that the box spline is
piecewise polynomial and its local pieces are contained in Sym(V ). We will mostly
use elements of Sym(U ) as differential operators on its local pieces.
Note that a vector u ∈ U defines a linear form u ∈ Sym(U). For a sublist Y ⊆ X, we define $p_Y := \prod_{y\in Y} y$. For example, if Y = ((1, 0), (1, 2)), then $p_Y = s_1^2 + 2s_1s_2$. Furthermore, $p_\emptyset := 1$. Now we define the
central P-space $P(X) := \operatorname{span}\{p_Y : \operatorname{rk}(X\setminus Y) = \operatorname{rk}(X)\}$ (6)
and the internal P-space $P_-(X) := \bigcap_{x\in X} P(X\setminus x)$. (7)
The space P− (X) was introduced in [15] where it was also shown that the dimension
of this space is equal to |Z− (X)|. The space P(X) first appeared in approximation
theory [1, 9, 14]. These two P-spaces and generalisations were later studied by
various authors, including [2, 5, 16, 19, 21, 22].
In [20], the author proved the following theorem, which will be made more explicit
in the present paper.
Theorem 1. Let X ⊆ Λ ⊆ U ≅ R^d be a list of vectors that is totally unimodular and spans U. Let f be a real valued function on Z_−(X), the set of interior lattice
points of the zonotope defined by X.
Then there exists a unique polynomial p ∈ P− (X) ⊆ R[s1 , . . . , sd ], s. t. p(D)BX
is a continuous function and its restriction to Z− (X) is equal to f .
Here, p(D) denotes the differential operator obtained from p by replacing the variable s_i by ∂/∂s_i, and B_X denotes the box spline defined by X.
Let z ∈ U. As usual, the exponential is defined as $e^z := \sum_{k\ge0}\frac{z^k}{k!}\in\mathbb R[[s_1,\dots,s_d]]$.
We define the (z-shifted) Todd operator
$$\operatorname{Todd}(X, z) := e^{-z}\prod_{x\in X}\frac{x}{1 - e^{-x}}\ \in\ \mathbb R[[s_1,\dots,s_d]]. \tag{8}$$
The Todd operator can be expressed in terms of the Bernoulli numbers B_0 = 1, B_1 = −1/2, B_2 = 1/6, . . . Recall that they are defined by the equation
$$\frac{s}{e^s - 1} = \sum_{k\ge0}\frac{B_k}{k!}\,s^k. \tag{9}$$
One should note that $\frac{z\,e^z}{e^z - 1} = \frac{z}{1 - e^{-z}} = \sum_{k\ge0}\frac{B_k}{k!}(-z)^k$. For $z\in Z_-(X)$ we can fix a list $S\subseteq X$ s.t. $z = \sum_{x\in S}x$. Let $T := X\setminus S$. Then we can write the Todd operator as
$$\operatorname{Todd}(X, z) = \prod_{x\in S}\sum_{k\ge0}\frac{B_k}{k!}\,x^k\ \prod_{x\in T}\sum_{k\ge0}\frac{B_k}{k!}\,(-x)^k = 1 + \sum_{x\in T}\frac{x}{2} - \sum_{x\in S}\frac{x}{2} + \dots$$
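Truncating these series in practice only requires the Bernoulli numbers. The following sketch computes them from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j = 0$ (m ≥ 1), B_0 = 1, which is equivalent to the generating function (9); this is an illustrative helper, not part of the computations carried out in [23].

```cpp
// Bernoulli numbers B_0, ..., B_n (convention B_1 = -1/2) from the recurrence
// sum_{j=0}^{m} binom(m+1, j) B_j = 0 for m >= 1.
#include <vector>

std::vector<double> bernoulli(int n) {
    std::vector<double> B(n + 1, 0.0);
    B[0] = 1.0;
    for (int m = 1; m <= n; ++m) {
        double binom = 1.0, s = 0.0;            // binom(m+1, j), starting at j = 0
        for (int j = 0; j < m; ++j) {
            s += binom * B[j];
            binom = binom * (m + 1 - j) / (j + 1);
        }
        B[m] = -s / binom;                      // binom now equals binom(m+1, m) = m+1
    }
    return B;                                    // {1, -1/2, 1/6, 0, -1/30, ...}
}
```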
A sublist C ⊆ X is called a cocircuit if rk(X \ C) < rk(X) and C is inclusion
minimal with this property. We consider the cocircuit ideal J (X) := ideal{pC :
C cocircuit} ⊆ Sym(U ). It is known [14, 15] that Sym(U ) = P(X) ⊕ J (X). Let
$$\psi_X : P(X)\oplus\mathcal J(X)\to P(X) \tag{10}$$
denote the projection. Note that this is a graded linear map and that ψ_X maps to zero any homogeneous polynomial whose degree is at least N − d + 1. This implies that there is a canonical extension $\psi_X : \mathbb R[[s_1,\dots,s_d]]\to P(X)$ given by $\psi_X(\sum_i g_i) := \sum_i\psi_X(g_i)$, where $g_i$ denotes a homogeneous polynomial of degree i. Let
$$f_z = f_z^X := \psi_X(\operatorname{Todd}(X, z)). \tag{11}$$
Theorem 2 (Main Theorem). Let X ⊆ Λ ⊆ U ≅ R^d be a list of vectors that is totally unimodular and spans U. Let z be a lattice point in the interior of the zonotope Z(X). Then $f_z\in P_-(X)$, $\operatorname{Todd}(X,z)(D)B_X$ extends continuously on U, and
$$f_z(D)B_X|_\Lambda = \operatorname{Todd}(X,z)(D)B_X|_\Lambda = \delta_z. \tag{12}$$
Here, fz and Todd(X, z) act on the box spline BX as differential operators.
Dahmen and Micchelli observed that
$$T_X = B_X *_d \mathcal T_X := \sum_{\lambda\in\Lambda} B_X(\cdot - \lambda)\,\mathcal T_X(\lambda) \tag{13}$$
(cf. [11, Proposition 17.17]). Using this result, the following variant of the Khovanskii-Pukhlikov formula [18] follows immediately.
Corollary 3. Let X ⊆ Λ ⊆ U ≅ R^d be a list of vectors that is totally unimodular and spans U, let u ∈ Λ and z ∈ Z_−(X). Then
$$|\Pi_X(u - z)\cap\Lambda| = \mathcal T_X(u - z) = \operatorname{Todd}(X,z)(D)T_X(u) = f_z(D)T_X(u). \tag{14}$$
Remark 4. The box spline BX is piecewise polynomial. Hence each of its local
pieces is smooth but the whole function is not smooth where two different regions
of polynomiality intersect. De Concini, Procesi, and Vergne [13] proved the following deconvolution formula, where BX is replaced by a suitable local piece pc :
Todd(X, 0)(D)pc |Λ = δ0 . In Section 2 we will explain the choice of the local piece.
One can deduce from [13, Remark 3.15] that Todd(X, z)(D)BX can be extended
continuously if z ∈ Z− (X). It is also not difficult to show that Todd(X, z)(D)BX =
fz (D)BX (see Lemma 27) and that multiplying the Todd operator by e−x corresponds to translating Todd(X, z)(D)BX by x. The novelty of the Main Theorem
is that the operator f_z for z ∈ Z_−(X) is shorter than the original Todd operator (cf. Example 15), i.e. it is contained in P_−(X). Furthermore, we provide
a new proof for De Concini, Procesi, and Vergne’s deconvolution formula.
We will also prove a slightly different version of the Main Theorem (Theorem 12)
that only holds for local pieces of the box spline but where lattice points in the
boundary of the zonotope are permitted as well. This theorem implies the following
result.
Corollary 5. Let X ⊆ Λ ⊆ U ∼
= Rd be a list of vectors that is totally unimodular
and spans U and let z be a lattice point in the zonotope Z(X). Let u ∈ Λ and
let Ω ⊆ cone(X) be a chamber s. t. u is contained in its closure. Let pΩ be the
polynomial that agrees with T_X on Ω. Then
$$|\Pi_X(u - z)\cap\Lambda| = \mathcal T_X(u - z) = \operatorname{Todd}(X,z)(D)p_\Omega(u) = f_z(D)p_\Omega(u). \tag{15}$$
The original Khovanskii-Pukhlikov formula is the case z = 0 in Corollary 5. For
more information on this formula, see Vergne’s survey article on integral points in
polytopes [24]. An explanation of the Khovanskii-Pukhlikov formula that is easy
to read is contained in the book by Beck and Robins [4, Chapter 10].
Corollary 6. Let X ⊆ Λ ⊆ U ≅ R^d be a list of vectors that is totally unimodular and spans U. Then
$$\sum_{z\in Z_-(X)} B_X(z)\,f_z = 1. \tag{16}$$
This implies formula (13).
The central P-space and various other generalised P-spaces have a canonical
basis [15, 21]. Up to now, no general construction for a basis of the internal space
P− (X) was known (cf. [3, 15, 19]). The polynomials fz form such a basis.
Corollary 7. Let X ⊆ Λ ⊆ U ∼
= Rd be a list of vectors that is totally unimodular
and spans U . Then {fz : z ∈ Z− (X)} is a basis for P− (X).
We also obtain a new basis for the central space P(X). Let w ∈ U be a short affine regular vector, i.e. a vector whose Euclidean length is close to zero and that is not contained in any hyperplane generated by sublists of X. Let Z(X, w) := (Z(X) − w) ∩ Λ (see Figure 4 for an example). It is known that dim P(X) = |Z(X, w)| = vol(Z(X)) [15].
Corollary 8. Let X ⊆ Λ ⊆ U ∼
= Rd be a list of vectors that is totally unimodular
and spans U . Then {fz : z ∈ Z(X, w)} is a basis for P(X).
This corollary will be used to prove the following new characterisation of the
internal space P− (X).
Corollary 9. Let X ⊆ Λ ⊆ U ∼
= Rd be a list of vectors that is totally unimodular
and spans U . Then
P− (X) = {f ∈ P(X) : f (D)BX is a continuous function}.
(17)
Remark 10. There is a related result due to Dahmen-Micchelli ([8] or [11, Theorem 13.21]): for every function f on Z(X, w) there exists a unique function in
DM(X) that agrees with f on Z(X, w). Here, DM(X) denotes the so-called discrete Dahmen-Micchelli space. The proof of the deconvolution formula in [13] relies
on this result.
Organisation of the article. The remainder of this article is organised as follows:
in Section 2, we will first define deletion and contraction. Then we will describe
a method to make sense of derivatives of piecewise polynomial functions via limits
and state a different version of the Main Theorem.
In Section 3 we will give some examples. In Section 4 we will recall some facts
about zonotopal algebra, i. e. about the space P(X), the dual space D(X) and their
connection with splines. These will be needed in the proof of the Main Theorem
and its corollaries in Section 5. In the appendix we will give an alternative proof
of the Main Theorem in the univariate case that uses residues.
Acknowledgements. The author would like to thank Michèle Vergne for her comments on [20] which led to the present article. Some examples were calculated using the Sage Mathematics Software System [23].
2. Deletion-contraction, limits, and the extended Main Theorem
In the first subsection, we will first define deletion and contraction and discuss the
idea of the deletion-contraction proof of the Main Theorem. In the second subsection, we will describe a method to make sense of derivatives of piecewise polynomial
functions via limits and then state a different version of the Main Theorem.
2.1. Deletion and contraction. Let x ∈ X. We call the list X \ x the deletion of
x. The image of X \ x under the canonical projection πx : U → U/ span(x) =: U/x
is called the contraction of x. It is denoted by X/x.
The projection πx induces a map Sym(U ) → Sym(U/x) that we will also denote
by πx . If we identify Sym(U ) with the polynomial ring R[s1 , . . . , sd ] and x = sd ,
then πx is the map from R[s1 , . . . , sd ] to R[s1 , . . . , sd−1 ] that sends sd to zero and
s1 , . . . , sd−1 to themselves. The space P(X/x) is contained in the symmetric algebra
Sym(U/x).
Note that since X is totally unimodular, Λ/x ⊆ U/x is a lattice for every x ∈ X
and X/x is totally unimodular with respect to this lattice.
Let x ∈ X. Using matroid theory terminology, we call x a loop if x = 0 and we
call x a coloop if rk(X \ x) < rk(X).
Recall that we defined fz = ψX (Todd(X, z)) for z ∈ Z− (X). By Theorem 1,
there is a unique polynomial qzX = qz ∈ P− (X) s. t. qz (D)BX |Λ = δz for every
z ∈ Z− (X). In order to prove the Main Theorem it is sufficient to show that fz = qz .
In fact, qz and fz behave in the same way under deletion and contraction: they
both satisfy the equalities $x\,q_z^{X\setminus x} = q_z^X - q_{z+x}^X$ and $\pi_x(q_z^X) = q_{\bar z}^{X/x}$. Unfortunately,
it is not obvious that fz ∈ P− (X).
Therefore, we have to make a detour. Since P− (X) is in general not spanned by
polynomials of type pY for some Y ⊆ X (cf. [3]), it is quite difficult to handle this
space. The space P(X) on the other hand has a basis which is very convenient for
deletion-contraction (cf. Proposition 20). Therefore, we will work with the larger
space P(X) and do a deletion-contraction proof there. An extended version of
the Main Theorem will be stated in the next subsection. This will require some
adjustments since for f ∈ P(X), f (D)BX |Λ might not be well-defined.
[Figure 1 displays the local pieces of the box spline (t_1, t_2, 2 − t_1, 2 − t_2, 1 + t_1 − t_2, 1 − t_1 + t_2) together with the polynomials f_z (1, 1 ± s_1, 1 ± s_2, 1 ± (s_1 + s_2)) attached to the lattice points of the zonotope.]
Figure 1. The box spline and the polynomials fz corresponding
to Example 14.
2.2. Differentiating piecewise polynomial functions and limits.
Definition 11. Let H by a hyperplane spanned by a sublist Y ⊆ X. A shift of such
a hyperplane by a vector in the lattice Λ is called an affine admissible hyperplane.
An alcove is a connected component of the complement of the union of all affine
admissible hyperplanes
A vector w ∈ U is called affine regular, if it is not contained in any affine
admissible hyperplane. We call w short affine regular if its Euclidian length is close
to zero.
Note that on the closure of each alcove c, BX agrees with a polynomial pc . For
example, the six triangles in Figure 1 are the alcoves where BX agrees with a
non-zero polynomial.
Fix a short affine regular vector w ∈ U . Let u ∈ U . Let c ⊆ U be an alcove s. t.
u and u + εw are contained in its closure for some small ε > 0 and let pc be the
polynomial that agrees with BX on the closure of c. We define limw BX (z) := pc (z)
and for f ∈ Sym(U )
lim f (Dpw )BX (u) := f (D)pc (u)
w
(18)
(pw stands for piecewise). Note that the limit can be dropped if f (D)BX is continuous at u. Otherwise, the limit is important: note for example that limw B(1) (0)
is either 1 or 0 depending on whether w is positive or negative. More information
on this construction can be found in [13] where it was introduced. We will later
see that fz (D)BX (D) is continuous if z ∈ Z(X) ∩ Λ is in the interior of Z(X) and
discontinuous if it is on the boundary.
Recall that Z(X, w) := (Z(X) − w) ∩ Λ.
Theorem 12. Let X ⊆ Λ ⊆ U ∼
= Rd be a list of vectors that is totally unimodular
and spans U . Let w be a short affine regular and let z ∈ Z(X, w). Then
lim fz (Dpw )BX |Λ = lim Todd(X, z)(Dpw )BX |Λ = δz .
w
w
(19)
3. Examples
Example 13. We consider the three smallest one-dimensional examples. For X = (1, 1) we obtain Todd((1, 1), 1) = (1 + B_1 s + . . .)(1 − B_1 s + . . .) = 1 + 0·s + . . . Hence $f_1^{(1,1)} = 1$. Furthermore,
$$f_1^{(1,1,1)} = 1 + \frac{s}{2},\qquad f_2^{(1,1,1)} = 1 - \frac{s}{2},$$
$$f_1^{(1,1,1,1)} = 1 + s + \frac{s^2}{3},\qquad f_2^{(1,1,1,1)} = 1 - \frac{s^2}{6},\qquad f_3^{(1,1,1,1)} = 1 - s + \frac{s^2}{3}.$$
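As a check on the first of these values, one can expand the Todd operator as above (here S = (1), T = (1, 1), and ψ_X kills all terms of degree at least N − d + 1 = 3):

```latex
% Worked verification of f_1^{(1,1,1)} = 1 + s/2 from the Bernoulli expansion.
\begin{align*}
\operatorname{Todd}((1,1,1),1)
  &= \Bigl(\sum_{k\ge 0}\tfrac{B_k}{k!}s^k\Bigr)\Bigl(\sum_{k\ge 0}\tfrac{B_k}{k!}(-s)^k\Bigr)^{2} \\
  &= \bigl(1-\tfrac{s}{2}+\tfrac{s^2}{12}\bigr)\bigl(1+\tfrac{s}{2}+\tfrac{s^2}{12}\bigr)^{2}+O(s^3)
   = 1+\tfrac{s}{2}+0\cdot s^2+O(s^3),
\end{align*}
```

so $\psi_X(\operatorname{Todd}((1,1,1),1)) = 1 + \tfrac{s}{2} = f_1^{(1,1,1)}$, in agreement with the value listed above.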
Figure 2. The pentagon D, square , and triangle ∆ that we
discuss in Example 15.
Example 14. Let X = ((1, 0), (0, 1), (1, 1)) ⊆ Z². Then P_−(X) = R, P(X) = span{1, s_1, s_2}, Z_−(X) = {(1, 1)}, and f_{(1,1)} = 1. Π_X(u_1, u_2) ≅ [0, min(u_1, u_2)] ⊆ R¹. The multivariate spline and the vector partition function are
$$T_X(u_1, u_2) = \begin{cases} u_2 & \text{for } 0\le u_2\le u_1\\ u_1 & \text{for } 0\le u_1\le u_2 \end{cases}\qquad\text{and}\qquad \mathcal T_X(u_1, u_2) = \begin{cases} u_2 + 1 & \text{for } 0\le u_2\le u_1\\ u_1 + 1 & \text{for } 0\le u_1\le u_2. \end{cases} \tag{20}$$
Corollary 3 correctly predicts that $T_X(u)|_{\mathbb Z^2} = \mathcal T_X(u - (1, 1))$. Figure 1 shows the six non-zero local pieces of B_X and the seven polynomials f_z attached to the lattice points of the zonotope Z(X).
Example 15. We consider the polygons in Figure 2, which we will denote by D,
, and ∆. These polytopes are defined by the matrix
$$X = \begin{pmatrix} 1 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 1 & 1 & 1 \end{pmatrix}, \tag{21}$$
where the first three columns correspond to slack variables and the rows of the
last two are the normal vectors of the movable facets. The corresponding zonotope
has two interior lattice points: (1, 1, 1) corresponds to and (1, 1, 2) to ∆. The
projections of the Todd operators are f = 1 + s3 /2 and f∆ = 1 − s3 /2, where s3
corresponds to shifting the diagonal face of the pentagon. Elementary calculations
show that shifting this face outwards1 by ε ∈ [−1, 1] increases the volume of the
pentagon by (ε − 21 ε2 ). This implies ∂s∂ 3 vol(pentagon) = 1.
The volume of the pentagon is 3.5. Corollary 3 correctly predicts that
    |□ ∩ Z²| = vol(D) + (1/2) ∂/∂s₃ vol(D) = 4    (22)
and
    |∆ ∩ Z²| = vol(D) − (1/2) ∂/∂s₃ vol(D) = 3.    (23)
The projection to P(X) of the unshifted Todd operator Todd(X, 0) is a lot more
complicated: f_D = 1 + s₁ + s₂ + (3/2)s₃ + s₁s₂ + s₁s₃ + s₂s₃ + s₃².
On the other hand, since Z_−(X) contains only two points, the box spline B_X
must assume the value 1/2 at both points. Then (13) correctly predicts that the
volume of the pentagon is the arithmetic mean of the number of integer points in
the square and the triangle.
Remark 16. If X is not totally unimodular, then ψ_X(Todd(X, z)) is in general
not contained in P_−(X). Consider for example X = (2, 1) and Todd(X, z) =
(2x/(e^{2x} − 1)) · (x/(1 − e^{−x})). Then
    ψ_X(f_z) = ψ_X(2(1 + 2B₁x)(1 − B₁x)) = 2 − x ∉ P_−(X) = R.    (24)
¹This means replacing the inequality a + b ≤ 3 by a + b ≤ 3 + ε.
Figure 3. The zonotope corresponding to Example 17. It has six
interior lattice points.
Example 17. Let X be a reduced oriented incidence matrix of the complete graph
on 4 vertices / the set of positive roots of the root system A₃ (cf. Figure 3), i. e.
    X = ( 1 0 0  1  1  0
          0 1 0 −1  0  1
          0 0 1  0 −1 −1 ).    (25)
The polynomials f_z attached to the six interior lattice points z of the corresponding
zonotope (cf. Figure 3) are
    f_(1,1,0)  = −(1/6)t₁t₂ − (1/6)t₁t₃ + (1/3)t₂t₃ + (1/2)t₁ − (1/2)t₂ − (1/2)t₃ + 1    (26)
    f_(1,1,−1) = −(1/6)t₁t₂ + (1/3)t₁t₃ − (1/6)t₂t₃ + (1/2)t₁ − (1/2)t₂ + (1/2)t₃ + 1    (27)
    f_(2,0,−1) = −(1/6)t₁t₂ − (1/6)t₁t₃ + (1/3)t₂t₃ − (1/2)t₁ + (1/2)t₂ + (1/2)t₃ + 1    (28)
    f_(1,0,0)  =  (1/3)t₁t₂ − (1/6)t₁t₃ − (1/6)t₂t₃ + (1/2)t₁ + (1/2)t₂ − (1/2)t₃ + 1    (29)
    f_(2,1,−1) =  (1/3)t₁t₂ − (1/6)t₁t₃ − (1/6)t₂t₃ − (1/2)t₁ − (1/2)t₂ + (1/2)t₃ + 1    (30)
    f_(2,0,0)  = −(1/6)t₁t₂ + (1/3)t₁t₃ − (1/6)t₂t₃ − (1/2)t₁ + (1/2)t₂ − (1/2)t₃ + 1    (31)
4. Zonotopal Algebra
In this section we will recall a few facts about the space P(X) and its dual, the
space D(X) that is spanned by the local pieces of the box spline. The theory around
these spaces was named zonotopal algebra by Holtz and Ron in [15]. This theory
allows us to explicitly describe the map ψX . This description will then be used to
describe the behaviour of the polynomials fz under deletion and contraction. This
section is an extremely condensed version of [21].
Recall that the list of vectors X is contained in a vector space U ≅ R^d and
that we denote the dual space by V. We start by defining a pairing between the
symmetric algebras Sym(U) ≅ R[s₁, . . . , s_d] and Sym(V) ≅ R[t₁, . . . , t_d]:
    ⟨·, ·⟩ : R[s₁, . . . , s_d] × R[t₁, . . . , t_d] → R,
    ⟨p, f⟩ := ( p(∂/∂t₁, . . . , ∂/∂t_d) f )(0),    (32)
i. e. we let p act on f as a differential operator and take the degree zero part
of the result. Note that this pairing extends to a pairing ⟨·, ·⟩ : R[[s₁, . . . , s_d]] ×
R[t₁, . . . , t_d] → R.
It is known that the box spline B_X agrees with a polynomial on the closure
of each alcove. These local pieces and their partial derivatives span the Dahmen–
Micchelli space D(X). This space can be described as the kernel of the cocircuit
ideal J(X), namely
    D(X) = {f ∈ Sym(V) : ⟨p, f⟩ = 0 for all p ∈ J(X)}.    (33)
We will now explain a construction of certain polynomials that is essentially due
to De Concini, Procesi, and Vergne [12]. Let Z ⊆ U be a finite list of vectors
and let B = (b₁, . . . , b_d) ⊆ Z be a basis. It is important that the basis is ordered
and that this order is the order obtained by restricting the order on Z to B. For
i ∈ {0, . . . , d}, we define S_i = S_i^B := span{b₁, . . . , b_i}. Hence
    {0} = S₀^B ⊊ S₁^B ⊊ S₂^B ⊊ . . . ⊊ S_d^B = U ≅ R^d    (34)
is a flag of subspaces. Let u ∈ S_i \ S_{i−1}. The vector u can be written as
u = Σ_{ν=1}^{i} λ_ν b_ν in a unique way. Note that λ_i ≠ 0. If λ_i > 0, we call u positive
and if λ_i < 0, we call u negative. We partition Z ∩ (S_i \ S_{i−1}) as follows:
    P_i^B := {u ∈ Z ∩ (S_i \ S_{i−1}) : u positive}    (35)
and
    N_i^B := {u ∈ Z ∩ (S_i \ S_{i−1}) : u negative}.    (36)
We define
    T_i^{B+} := (−1)^{|N_i|} · T_{P_i} ∗ T_{−N_i}    and    T_i^{B−} := (−1)^{|P_i|} · T_{−P_i} ∗ T_{N_i}.    (37)
Note that T_i^{B+} is supported in cone(P_i, −N_i) and that
    T_i^{B−}(x) = (−1)^{|P_i ∪ N_i|} T_i^{B+}(−x).    (38)
Now define
    R_i^B := T_i^{B+} − T_i^{B−}    and    R_Z^B = R^B := R₁^B ∗ · · · ∗ R_d^B.    (39)
We denote the set of all sublists B ⊆ X that are bases for U by B(X). Fix a basis
B ∈ B(X). A vector x ∈ X \B is called externally active if x ∈ span{b ∈ B : b ≤ x},
i. e. x is the maximal element of the unique circuit contained in B ∪ x. The set of
all externally active elements is denoted E(B).
Definition 18 (Basis for D(X)). Let X ⊆ U ≅ R^d be a finite list of vectors that
spans U. We define
    Б(X) := {|det(B)| R^B_{X\E(B)} : B ∈ B(X)}.    (40)
Proposition 19 ([14, 17]). Let X ⊆ U ≅ R^d be a finite list of vectors that spans
U. Then the spaces P(X) and D(X) are dual under the pairing ⟨·, ·⟩, i. e.
    D(X) → P(X)*,  f ↦ ⟨·, f⟩    (41)
is an isomorphism.
Proposition 20 ([14]). Let X ⊆ U ≅ R^d be a finite list of vectors that spans U. A
basis for P(X) is given by
    B(X) := {Q_B : B ∈ B(X)},    (42)
where Q_B := p_{X\(B∪E(B))}.
Theorem 21 ([21]). Let X ⊆ U ≅ R^d be a finite list of vectors that spans U. Then
Б(X) is a basis for D(X) and this basis is dual to the basis B(X) for the central
P-space P(X).
Remark 22. Theorem 21 yields an explicit formula for the projection map ψ_X :
R[[s₁, . . . , s_d]] → P(X) that we have defined on page 3:
    f_z := ψ_X(Todd(X, z)) = Σ_{B∈B(X)} ⟨Todd(X, z), R_B⟩ Q_B.    (43)
5. Proofs
In this section we will prove the Main Theorem and its corollaries. The proof
uses a deletion-contraction argument. Deletion-contraction identities for the polynomials f_z will be obtained based on the following idea: one can write ψ_X(f) as
    ψ_X(f) = Σ_{B∈B(X)} ⟨f, R_B⟩ Q_B = Σ_{x∉B∈B(X)} ⟨f, R_B⟩ Q_B + Σ_{x∈B∈B(X)} ⟨f, R_B⟩ Q_B    (44)
(cf. Remark 22). We will see that the first sum on the right-hand side of (44)
corresponds to P_−(X \ x) and the second to P_−(X/x). Note that ψ_X(f) is by
definition independent of the order imposed on the list X, while each of the two
sums depends on this order.
It is an important observation that for x ∈ X and z ∈ Λ
    x · Todd(X \ x, z) = Todd(X, z) − Todd(X, z + x)    (45)
holds because (x/(1 − e^{−x}))(1 − e^{−x}) = x.
Lemma 23. Let x ∈ X and let f ∈ Sym(U). Then
    x ψ_{X\x}(f) = ψ_X(xf).    (46)
Furthermore, for all z ∈ Λ,
    x f_z^{X\x} = f_z^X − f_{z+x}^X.    (47)
Proof. Note that the statement is trivial for x = 0, so from now on we assume that
x is not a loop. Since ψ_X(f) is independent of the order imposed on X, we may
rearrange the list elements s. t. x is minimal.
Let x ∈ B ∈ B(X). Since x is minimal, span(x) ∩ (X \ E(B)) = {x}. By
Lemma 4.5 in [21], this implies that ⟨xf, R_B⟩ = ⟨f, D_x R_B⟩ = ⟨f, 0⟩ = 0.
Let x ∉ B ∈ B(X). Since x is minimal, this implies that x ∉ E(B). Then
    ⟨xf, R^B_{X\E(B)}⟩ = ⟨f, D_x R^B_{X\E(B)}⟩ = ⟨f, R^B_{X\(E(B)∪x)}⟩.    (48)
Using the two previous observations we obtain:
    ψ_X(xf) = Σ_{x∈B∈B(X)} ⟨xf, R^B_{X\E(B)}⟩ Q_B + Σ_{x∉B∈B(X)} ⟨xf, R^B_{X\E(B)}⟩ Q_B    (49)
            = Σ_{B∈B(X\x)} ⟨f, R^B_{X\(E(B)∪x)}⟩ x Q_B^{X\x} = x ψ_{X\x}(f),    (50)
where the first sum in (49) vanishes. The second to last equality follows from the
fact that Q^X_B = p_{X\(B∪E(B))} = x p_{X\(B∪E(B)∪x)} = x Q_B^{X\x} if x ∉ B ∪ E(B).
Figure 4. Deletion and contraction for the set Z(X, w).
We can deduce the second claim using (45):
    x f_z^{X\x} = x ψ_{X\x}(Todd(X \ x, z)) = ψ_X(x Todd(X \ x, z))
              = ψ_X(Todd(X, z) − Todd(X, z + x)) = f_z^X − f_{z+x}^X.    (51)
Recall that πx : Sym(U ) → Sym(U/x) denotes the canonical projection.
Lemma 24. Let x ∈ X be neither a loop nor a coloop. Let z ∈ Λ and let z̄ =
πx (z) ∈ Λ/x. Then
πx (fz ) = fz̄ .
(52)
Proof. Since the maps π_x and ψ_X are independent of the order imposed on X, we
may rearrange the list elements s. t. x is minimal.
Let x ∈ B ∈ B(X). Given that x is minimal, R_B = (T_x − T_{−x}) ∗ R^{B\x}_{X\(E(B)∪x)}
follows. Hence R_B is constant in direction x, so we can interpret it as a function on
U/x and identify it with R_{B̄}. In Todd(X, z), the factor x/(1 − e^{−x}) becomes 1 if
we set x to 0. Note that x divides Q_B if x ∉ B, since x is minimal. Then
    π_x(f_z) = π_x( Σ_{x∉B∈B(X)} ⟨Todd(X, z), R_B⟩ Q_B ) + π_x( Σ_{x∈B∈B(X)} ⟨Todd(X, z), R_B⟩ Q_B )    (53)
            = Σ_{B̄∈B(X/x)} ⟨Todd(X/x, z̄), R_{B̄}⟩ Q_{B̄} = f_{z̄},
where the first term vanishes since x divides Q_B for every B not containing x.
In [20], the author showed that the function
    γ_X : P_−(X) → Ξ(X) := {f : Λ → R : supp(f) ⊆ Z_−(X)}    (54)
that maps p to p(D)B_X|_Λ is an isomorphism. We will now extend γ_X to a map
that is an isomorphism between P(X) and a superspace of Ξ(X).
For a short affine regular vector w ∈ U we define
    γ_X^w : P(X) → Ξ_w(X) := {f : Λ → R : supp(f) ⊆ Z(X, w)},
    p ↦ lim_w p(D_pw) B_X|_Λ.    (55)
Proposition 25. Let X ⊆ Λ ⊆ U ≅ R^d be a list of vectors that is totally unimodular
and spans U. Let x ∈ X be neither a loop nor a coloop.
Then the following diagram of real vector spaces is commutative, the rows are
exact and the vertical maps are isomorphisms:
    0 → P(X \ x)  --(·x)-->  P(X)  --(π_x)-->  P(X/x)  → 0
          | γ_{X\x}^w           | γ_X^w            | γ_{X/x}^{w̄}            (56)
    0 → Ξ_w(X \ x) --(∇_x)--> Ξ_w(X) --(Σ_x)--> Ξ_{w̄}(X/x) → 0
where ∇_x(f)(z) := f(z) − f(z − x) and
    Σ_x(f)(z̄) := Σ_{x∈z̄∩Λ} f(x) = Σ_{λ∈Z} f(λx + z) for some z ∈ z̄.
Proof. First note that the map Z(X, w) \ Z(X \ x, w) → Z(X/x, w̄) that sends z
to z̄ is a bijection. This is a variant of [20, Lemma 17] and it can be proved in
the same way, using the fact that dim P(X) = vol(Z(X)) = |Z(X, w)|, which was
established in [15]. See also Figure 4. This implies that the second row is exact.
Exactness of the first row is known (e. g. [2, Proposition 4.4]).
If X contains only loops and coloops, then P(X) = R and B_X is the indicator
function of the parallelepiped spanned by the coloops. For every short affine regular
w, Z(X, w) contains a unique point z_w and lim_w B_X(z_w) = 1. Hence γ_X^w is an
isomorphism in this case. Note that this also holds for U = Λ = {0}. In this case
w = 0, Z(X) = {0} = Z(X, w), and B_X = lim_w B_X = 1.
The rest of the proof is analogous to the proof of [20, Proposition] and therefore
omitted.
For z ∈ Z(X, w), let q_z^{X,w} = q_z^w := (γ_X^w)^{−1}(δ_z) ∈ P(X). By construction, this
polynomial satisfies lim_w q_z^w(D_pw) B_X|_Λ = δ_z.
Lemma 26. Let w ∈ U be a short affine regular vector and let z ∈ Z(X, w). Then
q_z^w = f_z.
In particular, q_z^w = q_z^{w'} for short affine regular vectors w and w' s. t.
z ∈ Z(X, w) ∩ Z(X, w').
Proof. Fix a short affine regular w. We will show by induction that q_z^w = f_z for all
z ∈ Z(X, w).
If X contains only loops and coloops, then P(X) = R and B_X is the indicator
function of the parallelepiped spanned by the coloops. Hence f_z = ψ_X(1 + . . .) =
1 = q_z^w. Note that this also holds for U = Λ = {0}. In this case w = 0,
Z(X) = {0} = Z(X, w), and B_X = lim_w B_X = 1.
Now suppose that there is an element x ∈ X that is neither a loop nor a coloop
and suppose that the statement is already true for X/x and X \ x.
Let g_z^w = g_z := f_z − q_z^w ∈ P(X).
Step 1. If z' = z + λx for some λ ∈ Z, then g_z = g_{z'}: by Lemma 23,
x f_z^{X\x} = f_z^X − f_{z+x}^X. Because of the commutativity of (56), γ_X^w(x q_z^{X\x,w}) = ∇_x(δ_z).
Since γ_X^w is an isomorphism this implies x q_z^{X\x,w} = q_z^{X,w} − q_{z+x}^{X,w}. By assumption,
q_z^{X\x,w} = f_z^{X\x}. Hence we can deduce that f_z^X − f_{z+x}^X = q_z^{X,w} − q_{z+x}^{X,w} and the claim follows.
Step 2. π_x(g_z) = 0: it follows from Proposition 25 that π_x(q_z^w) = q_{z̄}^{w̄}. Taking
into account Lemma 24 this implies π_x(g_z) = f_{z̄} − q_{z̄}^{w̄}. This is zero by assumption.
Step 3. g_z = 0: using the commutativity of (56) again and Steps 1 and 2, we
obtain
    |Z(X, w) ∩ (span(x) + z)| · γ_X^w(g_z) = Σ_x(γ_X^w(g_z)) = γ_{X/x}^{w̄}(π_x(g_z)) = 0.    (57)
Hence γ_X^w(g_z) = 0 and the claim follows since γ_X^w is an isomorphism.
Lemma 27. Let f ∈ R[[s₁, . . . , s_d]] and let w be a short affine regular vector.
(1) Then lim_w f(D_pw)B_X = lim_w ψ_X(f)(D_pw)B_X.
(2) If ψ_X(f)(D)B_X is continuous, then f(D)B_X can be extended continuously
s. t. ψ_X(f)(D)B_X = f(D)B_X.
Proof. Let u ∈ U and let c be the alcove s. t. u + εw ∈ c for all sufficiently small
ε > 0. Let p_c be the polynomial that agrees with B_X on the closure of c. By
definition of the map ψ_X, for all i the degree i part of j = f − ψ_X(f) is contained
in J(X). By definition of D(X), p_c ∈ D(X). Then (33) implies j(D)p_c = 0. Hence
lim_w f(D_pw)B_X(u) = f(D)p_c(u) = ψ_X(f)(D)p_c(u) = lim_w ψ_X(f)(D_pw)B_X(u).
If ψ_X(f)(D)B_X is continuous, then
    ψ_X(f)(D)B_X(u) = lim_w ψ_X(f)(D_pw)B_X(u) = lim_w f(D_pw)B_X(u)    (58)
for all short affine regular vectors w, so if we define f(D)B_X(u) := ψ_X(f)(D)B_X(u)
for all u ∈ U, we obtain a continuous extension of f(D)B_X to U.
Proof of Theorem 2 (Main Theorem) and of Theorem 12. If w is a short affine
regular vector and z ∈ Z(X, w), then lim_w f_z(D_pw)B_X|_Λ = δ_z by Proposition 25 and
Lemma 26.
Now let z ∈ Z_−(X). Then by Theorem 1, there exists q_z^− ∈ P_−(X) s. t.
q_z^−(D)B_X|_{Z_−(X)} = δ_z and q_z^−(D)B_X is continuous. Continuity implies that q_z^−(D)B_X
vanishes on the boundary of Z(X), so q_z^−(D)B_X|_Λ = δ_z. Furthermore, for any short
affine regular w, lim_w q_z^−(D_pw)B_X|_Λ = δ_z. Since γ_X^w is injective, q_z^− = q_z^w = f_z must
hold. Hence, f_z ∈ P_−(X).
To finish the proof, note that Lemma 27 implies that lim_w f_z(D_pw)B_X|_Λ =
lim_w Todd(X, z)(D_pw)B_X|_Λ and if f_z(D)B_X is continuous it is possible to extend
Todd(X, z)(D)B_X continuously s. t. f_z(D)B_X = Todd(X, z)(D)B_X.
Proof of Corollary 6. Let p := Σ_{z∈Z_−(X)} B_X(z) f_z. By the Main Theorem, p ∈
P_−(X) and p(D)B_X and B_X = 1(D)B_X agree on Z_−(X). By the uniqueness part
of Theorem 1 this implies p = 1.
Using the previous observation and Corollary 3 we can deduce (13):
    B_X ∗_d T_X(u) = Σ_{λ∈Λ} B_X(u − λ) T_X(λ) = Σ_{z∈Λ} B_X(z) T_X(u − z)    (59)
                  = Σ_{z∈Λ} B_X(z) f_z(D) T_X(u) = T_X(u).
Proof of Corollary 7. By Theorem 2, {fz : z ∈ Z− (X)} is a linearly independent
subset of P− (X). This set is actually a basis since |Z− (X)| = dim P− (X). This
equality follows from Theorem 1. It was also proven in [15].
Proof of Corollary 8. By construction, {fz : z ∈ Z(X, w)} ⊆ P(X). Proposition 25
and Lemma 26 imply that the set is actually a basis.
Proof of Corollary 9. “⊆” is part of Theorem 1.
“⊇”: Let p ∈ P(X) and suppose that p(D)B_X is continuous. Let w be a short
affine regular vector. By Corollary 8, there exist uniquely determined λ_z ∈ R s. t.
p = Σ_{z∈Z(X,w)} λ_z f_z.
Let z ∈ Z(X, w) \ Z_−(X). Since z − w ∉ Z(X), lim_{−w} p(D_pw)B_X(z) = 0 holds.
As we assumed that p(D)B_X is continuous, this implies that
    0 = lim_w p(D_pw)B_X(z) = Σ_{y∈Z(X,w)} λ_y δ_y(z) = λ_z.    (60)
Hence p = Σ_{z∈Z_−(X)} λ_z f_z, which is in P_−(X) by the Main Theorem.
Appendix A. The univariate case and residues
In this appendix we will give an alternative proof of (a part of) the Main Theorem
in the univariate case. This proof was provided by Michèle Vergne.
A totally unimodular list of vectors X ⊆ Z1 ⊆ R1 contains only entries in
{−1, 0, 1}. Suppose that it contains a times −1 and b times 1. The zonotope Z(X)
is then the interval [−a, b]. We may assume that X does not contain any zeroes and
we choose N s. t. a + b = N + 1. Then P(X) = R[s]≤N and P− (X) = R[s]≤N −1 .
Let z ∈ Z_−(X) = {−a + 1, . . . , b − 1}. Then there exist positive integers α and
β s. t.
    Todd(X, z) := e^{−zs} (s/(e^s − 1))^a (s/(1 − e^{−s}))^b = (s/(e^s − 1))^α (s/(1 − e^{−s}))^β.    (61)
In this case, ψ_X : R[s] → R[s]_{≤N} is the map that forgets all monomials of degree
greater or equal N + 1.
Lemma 28. Let X ⊆ Z1 ⊆ R1 be a list of vectors that is totally unimodular and let
z be an interior lattice point of the zonotope Z(X). Then fz = ψX (Todd(X, z)) ∈
P− (X).
Proof. Suppose that X contains N + 1 non-zero entries. Todd(X, z) agrees with its
Taylor expansion
Todd(X, z) = 1 + c1 s + . . . + cN sN + . . .
(62)
The coefficients ci depend on z and X and can be expressed in terms of Bernoulli
numbers. It is sufficient to show that cN = 0. This can be done by calculating the
residue at the origin. Let γ ⊆ C be a circle around the origin. Using the residue
theorem and the considerations at the beginning of this section we obtain:
    c_N = Res₀( s^{−(N+1)} Todd(X, z) ) = Res₀( 1/((e^s − 1)^α (1 − e^{−s})^β) )    (63)
        = (1/(2πi)) ∮_γ 1/((1 − e^{−s})^α (e^s − 1)^β) ds
        = (1/(2πi)) ∮_{exp(γ)} 1/((1 − σ^{−1})^α (σ − 1)^β σ) dσ    (64)
        = (1/(2πi)) ∮_{exp(γ)} σ^{α−1}/(σ − 1)^{α+β} dσ
        = (1/(2πi)) ∮_{exp(γ)} Σ_{i=0}^{α−1} binom(α−1, i) (σ − 1)^{−(α+β)+i} dσ.
The last equality can easily be seen by substituting y = σ − 1, expanding, and
resubstituting. Note that exp(γ) is a curve around 1. Since α and β are positive,
the residue of the integrand of the last integral at one is 0. Hence c_N = 0.
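The vanishing of c_N is also easy to check numerically. The following SymPy sketch (our own; the choice a = 2, b = 3 is merely an example) computes c_N for the interior lattice points of Z(X) = [−a, b] and confirms that it vanishes there, as Lemma 28 predicts.

```python
import sympy as sp

s = sp.symbols('s')
a, b = 2, 3                       # X contains a copies of -1 and b copies of +1
N = a + b - 1

def c_N(z):
    todd = sp.exp(-z * s) * (s / (sp.exp(s) - 1))**a * (s / (1 - sp.exp(-s)))**b
    return sp.series(todd, s, 0, N + 1).removeO().coeff(s, N)

for z in range(-a + 1, b):        # interior lattice points of Z(X) = [-a, b]
    print(z, sp.simplify(c_N(z)))   # prints 0 for every interior z
```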
References
[1] A. A. Akopyan and A. A. Saakyan, A system of differential equations that is related to the
polynomial class of translates of a box spline, Mat. Zametki 44 (1988), no. 6, 705–724, 861.
[2] Federico Ardila and Alexander Postnikov, Combinatorics and geometry of power ideals,
Trans. Amer. Math. Soc. 362 (2010), no. 8, 4357–4384.
[3]
, Two counterexamples for power ideals of hyperplane arrangements, 2012, arXiv:
1211.1368, submitted as a corrigendum for [2].
[4] Matthias Beck and Sinai Robins, Computing the continuous discretely, Undergraduate Texts
in Mathematics, Springer, New York, 2007, Integer-point enumeration in polyhedra.
[5] Andrew Berget, Products of linear forms and Tutte polynomials, European J. Combin. 31
(2010), no. 7, 1924–1935.
[6] Michel Brion and Michèle Vergne, Residue formulae, vector partition functions and lattice
points in rational polytopes, J. Amer. Math. Soc. 10 (1997), no. 4, 797–833.
[7] David A. Cox, John B. Little, and Henry K. Schenck, Toric varieties, Graduate Studies in
Mathematics, vol. 124, American Mathematical Society, Providence, RI, 2011.
[8] Wolfgang Dahmen and Charles A. Micchelli, On the local linear independence of translates
of a box spline, Studia Math. 82 (1985), no. 3, 243–263.
[9] Carl de Boor, Nira Dyn, and Amos Ron, On two polynomial spaces associated with a box
spline, Pacific J. Math. 147 (1991), no. 2, 249–267.
[10] Carl de Boor, Klaus Höllig, and Sherman D. Riemenschneider, Box splines, Applied Mathematical Sciences, vol. 98, Springer-Verlag, New York, 1993.
[11] Corrado De Concini and Claudio Procesi, Topics in hyperplane arrangements, polytopes and
box-splines, Universitext, Springer, New York, 2011.
[12] Corrado De Concini, Claudio Procesi, and Michèle Vergne, Vector partition functions and
index of transversally elliptic operators, Transform. Groups 15 (2010), no. 4, 775–811.
[13]
, Box splines and the equivariant index theorem, Journal of the Institute of Mathematics of Jussieu (2012), 1–42, published online.
[14] Nira Dyn and Amos Ron, Local approximation by certain spaces of exponential polynomials,
approximation order of exponential box splines, and related interpolation problems, Trans.
Amer. Math. Soc. 319 (1990), no. 1, 381–403.
[15] Olga Holtz and Amos Ron, Zonotopal algebra, Advances in Mathematics 227 (2011), no. 2,
847–894.
[16] Olga Holtz, Amos Ron, and Zhiqiang Xu, Hierarchical zonotopal spaces, Trans. Amer. Math.
Soc. 364 (2012), 745–766.
[17] Rong-Qing Jia, Subspaces invariant under translations and dual bases for box splines., Chin.
Ann. Math., Ser. A 11 (1990), no. 6, 733–743 (Chinese).
[18] A. G. Khovanskiı̆ and A. V. Pukhlikov, The Riemann-Roch theorem for integrals and sums
of quasipolynomials on virtual polytopes, Algebra i Analiz 4 (1992), no. 4, 188–216.
[19] Matthias Lenz, Hierarchical zonotopal power ideals, European J. Combin. 33 (2012), no. 6,
1120–1141.
[20]
, Interpolation, box splines, and lattice points in zonotopes, 2012, arXiv:1211.1187.
[21]
, Zonotopal algebra and forward exchange matroids, 2012, arXiv:1204.3869v2.
[22] Nan Li and Amos Ron, External zonotopal algebra, 2011, arXiv:1104.2072v1.
[23] W. A. Stein et al., Sage Mathematics Software (Version 5.6), The Sage Development Team,
2013, http://www.sagemath.org.
[24] Michèle Vergne, Residue formulae for Verlinde sums, and for number of integral points in
convex rational polytopes, European women in mathematics (Malta, 2001), World Sci. Publ.,
River Edge, NJ, 2003, pp. 225–285.
E-mail address: [email protected]
Mathematical Institute, 24-29 St Giles’, Oxford, OX1 3LB, United Kingdom
PhaseMax: Convex Phase Retrieval via Basis Pursuit
Tom Goldstein and Christoph Studer
Abstract—We consider the recovery of a (real- or complexvalued) signal from magnitude-only measurements, known as
phase retrieval. We formulate phase retrieval as a convex
optimization problem, which we call PhaseMax. Unlike other
convex methods that use semidefinite relaxation and lift the
phase retrieval problem to a higher dimension, PhaseMax is
a “non-lifting” relaxation that operates in the original signal
dimension. We show that the dual problem to PhaseMax is Basis
Pursuit, which implies that phase retrieval can be performed
using algorithms initially designed for sparse signal recovery.
We develop sharp lower bounds on the success probability of
PhaseMax for a broad range of random measurement ensembles,
and we analyze the impact of measurement noise on the solution
accuracy. We use numerical results to demonstrate the accuracy
of our recovery guarantees, and we showcase the efficacy and
limits of PhaseMax in practice.
I. I NTRODUCTION
Phase retrieval is concerned with the recovery of an ndimensional signal x0 ∈ Hn , with H either R or C, from
m ≥ n squared-magnitude, noisy measurements [2]
    b_i² = |⟨a_i, x0⟩|² + η_i,    i = 1, 2, . . . , m,    (1)
where ai ∈ Hn , i = 1, 2, . . . , m, are the (known) measurement
vectors and ηi ∈ R, i = 1, 2, . . . , m, models measurement
noise. Let x̂ ∈ Hn be an approximation vector1 to the true
signal x0 . We recover the signal x0 by solving the following
convex problem we call PhaseMax:
    maximize_{x ∈ H^n}   ⟨x, x̂⟩_<
    subject to   |⟨a_i, x⟩| ≤ b_i,   i = 1, 2, . . . , m.        (PM)
Here, hx, x̂i< denotes the real-part of the inner product between
the vectors x and x̂. The main idea behind PhaseMax is to
find the vector x that is most aligned with the approximation
vector x̂ and satisfies a convex relaxation of the measurement
constraints in (1).
Our main goal is to develop sharp lower bounds on the
probability with which PhaseMax succeeds in recovering the
true signal x0 , up to an arbitrary phase ambiguity that does
not affect the measurement constraints in (1). By assuming
noiseless measurements, one of our main results is as follows.
T. Goldstein is with the Department of Computer Science, University of
Maryland, College Park, MD (e-mail: [email protected]).
C. Studer is with the School of Electrical and Computer Engineering, Cornell
University, Ithaca, NY (e-mail: [email protected]).
This paper was presented in part at the 32th International Conference on
Machine Learning (ICML) [1].
The work of T. Goldstein was supported in part by the US National Science
Foundation (NSF) under grant CCF-1535902, the US Office of Naval Research
under grant N00014-17-1-2078, and by the Sloan Foundation. The work of C.
Studer was supported in part by Xilinx, Inc. and by the US NSF under grants
ECCS-1408006, CCF-1535897, CNS-1717559, and CAREER CCF-1652065.
1 Approximation vectors can be obtained via a variety of algorithms, or can
even be chosen at random. See Section VI.
Theorem 1. Consider the case of recovering a complex-valued
signal x ∈ Cn from m noiseless measurements of the form
(1) with measurement vectors ai , i = 1, 2, . . . , m, sampled
independently and uniformly from the unit sphere. Let
    angle(x0, x̂) = arccos( ⟨x0, x̂⟩_< / (‖x0‖₂ ‖x̂‖₂) )
be the angle between the true vector x0 and the approximation x̂, and define the constant
    α = 1 − (2/π) angle(x0, x̂)
that measures the approximation accuracy. Then, the probability that PhaseMax recovers the true signal x0, denoted by pC(m, n), is bounded from below as follows:
    pC(m, n) ≥ 1 − exp( −(αm − 4n)² / (2m) )    (2)
whenever αm > 4n.
In words, if m > 4n/α and α > 0, then PhaseMax will
succeed with non-zero probability. Furthermore, for a fixed
signal dimension n and an arbitrary approximation vector x̂
that satisfies angle(x0 , x̂) < π/2, i.e., one that is not orthogonal
to the vector x0 , we can make the success probability of
PhaseMax arbitrarily close to one by increasing the number
of measurements m. As we shall see, our recovery guarantees
are sharp and accurately predict the performance of PhaseMax
in practice.
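The lower bound (2) is simple to evaluate. The following small Python helper (ours, for illustration only) reports the bound for a few oversampling ratios m/n at a fixed approximation angle.

```python
import numpy as np

def phasemax_bound(m, n, angle):
    """Lower bound (2) on the success probability of complex-valued PhaseMax."""
    alpha = 1 - 2 * angle / np.pi
    if alpha <= 0 or alpha * m <= 4 * n:
        return 0.0                 # the bound is vacuous in this regime
    return 1 - np.exp(-(alpha * m - 4 * n) ** 2 / (2 * m))

n = 100
for ratio in (4, 6, 8, 12):
    print(ratio, phasemax_bound(ratio * n, n, angle=np.pi / 8))
```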
We emphasize that the convex formulation (PM) has been
studied by several other authors, including Bahmani and
Romberg [3], whose work appeared shortly before our own.
Related work will be discussed in detail in Section I-C.
A. Convex Phase Retrieval via Basis Pursuit
It is quite intriguing that the following Basis Pursuit problem
[4], [5]
    minimize_{z ∈ H^m}   ‖z‖₁
    subject to   x̂ = A B⁻¹ z,        (BP)
with B = diag(b1 , b2 , . . . , bm ) and A = [a1 , a2 , . . . , am ]
is the dual problem to (PM); see, e.g., [6, Lem. 1]. As a
consequence, if PhaseMax succeeds, then the phases of the
solution vector z ∈ Hm to (BP) are exactly the phases that
were lost in the measurement process in (1), i.e., we have
    y_i = phase(z_i) b_i = ⟨a_i, x0⟩,    i = 1, 2, . . . , m,
with phase(z) = z/|z| for z 6= 0 and phase(0) = 1. This
observation not only reveals a fundamental connection between
phase retrieval and sparse signal recovery, but also implies that
Basis Pursuit solvers can be used to recover the signal from
the phase-less measurements in (1).
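As a concrete illustration of this connection, the convex program (PM) can also be handed directly to a generic convex solver. The sketch below (our own, using CVXPY rather than a dedicated Basis Pursuit solver; the problem sizes and the way the approximation vector x̂ is generated are arbitrary choices made for the example) recovers a complex signal from magnitude-only measurements up to a global phase.

```python
import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m = 20, 200
x0 = np.random.randn(n) + 1j * np.random.randn(n)             # true signal
A = np.random.randn(m, n) + 1j * np.random.randn(m, n)         # rows are the measurement vectors
b = np.abs(A @ x0)                                             # magnitude-only measurements
xhat = x0 + 0.2 * (np.random.randn(n) + 1j * np.random.randn(n))  # rough approximation vector

x = cp.Variable(n, complex=True)
objective = cp.Maximize(cp.real(np.conj(xhat) @ x))            # <x, xhat>_Re
constraints = [cp.abs(A @ x) <= b]                             # |<a_i, x>| <= b_i
cp.Problem(objective, constraints).solve()

# compare with x0 after removing the global phase ambiguity
xstar = x.value
xstar = xstar * np.exp(-1j * np.angle(np.vdot(x0, xstar)))
print(np.linalg.norm(xstar - x0) / np.linalg.norm(x0))         # small if recovery succeeded
```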
B. A Brief History of Phase Retrieval
where β = m/n is constant and m → ∞. The authors of
[41] derive an exact asymptotic bound of the performance of
PhaseMax, and a related non-rigorous analysis was given in
[42]. It was also shown that better signal recovery guarantees
can be obtained by iteratively applying PhaseMax. The resulting
method, called PhaseLamp, was analyzed in [41].
Phase retrieval is a well-studied problem with a long
history [7], [8] and enjoys widespread use in applications
such as X-ray crystallography [9]–[11], microscopy [12], [13],
imaging [14], and many more [15]–[18]. Early algorithms, D. Contributions and Paper Outline
such as the Gerchberg-Saxton [7] or Fienup [8] algorithms,
In contrast to algorithms relying on semidefinite relaxation
rely on alternating projection to recover complex-valued signals or non-convex problem formulations, we propose PhaseMax, a
from magnitude-only measurements. The papers [2], [19], [20] novel, convex method for phase retrieval that directly operates
sparked new interest in the phase retrieval problem by showing in the original signal dimension. In Section II, we establish
that it can be relaxed to a semidefinite program. Prominent a deterministic condition that guarantees uniqueness of the
convex relaxations include PhaseLift [2] and PhaseCut [21]. solution to the (PM) problem. We borrow methods from
These methods come with recovery guarantees, but require geometric probability to derive sharp lower bounds on the
the problem to be lifted to a higher dimensional space, which success probability for real- and complex-valued systems in
prevents their use for large-scale problems. Several authors Section III. Section V generalizes our results to a broader
recently addressed this problem by presenting efficient methods range of random measurement ensembles and to systems with
that can solve large-scale semidefinite programs without the measurement noise. We show in Section VI that randomly
complexity of representing the full-scale (or lifted) matrix of chosen approximation vectors are sufficient to ensure faithful
unknowns. These approaches include gauge duality methods recovery, given a sufficiently large number of measurements.
[22], [23] and sketching methods [24].
We numerically demonstrate the sharpness of our recovery
More recently, a number of non-convex algorithms have guarantees and showcase the practical limits of PhaseMax in
been proposed (see e.g., [25]–[31]) that directly operate in Section VII. We conclude in Section VIII.
the original signal dimension and exhibit excellent empirical
performance. The algorithms in [25], [27]–[29], [32] come with E. Notation
recovery guarantees that mainly rely on accurate initializers,
Lowercase and uppercase boldface letters stand for column
such as the (truncated) spectral initializer [25], [28], the Null
vectors and matrices, respectively. For a complex-valued
initializer [33], the orthogonality-promoting method [29], or
matrix A, we denote its transpose and Hermitian transpose by
more recent initializers that guarantee optimal performance [34],
AT and A∗ , respectively; the real and imaginary parts are A<
[35] (see Section VI for additional details). These initializers
and A= . The ith column of the matrix A is denoted by ai and
enable non-convex phase retrieval algorithms to succeed, given
the kth entry of the ith vector ai is [ai ]k ; for a vector a without
a sufficiently large number of measurements; see [36] for more
index, we simply denote the kth entry by ak . We define the
details on the geometry of such non-convex problems.
inner product between two complex-valued vectors a and b
as ha, bi = a∗ b. We use j to denote the imaginary unit.
C. Related Work on Non-Lifting Convex Methods
The `2 -norm and `1 -norm of the vector a are kak2 and kak1 ,
Because of the extreme computational burden of traditional respectively.
convex relaxations (which require “lifting” to a higher dimension), a number of authors have recently been interested in
II. U NIQUENESS C ONDITION
non-lifting convex relaxations for phase retrieval. Shortly before
The measurement constraints in (1) do not uniquely define
the appearance of this work, the formulation (PM) was studied a vector. If x is a vector that satisfies (1), then any vector
by Bahmani and Romberg [3]. Using methods from machine x0 = ejφ x for φ ∈ [0, 2π) also satisfies the constraints. In
learning theory, the authors derived bounds on the recovery of contrast, if x is a solution to (PM), then ejφ x with φ 6= 0
signals with and without noise. The results in [3] are stronger will not be another solution. In fact, consider any vector x
than those presented here in that they are uniform with respect in the feasible set of (PM) with hx, x̂i 6= 0. By choosing
=
to the approximation vector x̂, but weaker in the sense that ω = phase(hx, x̂i), we have
they require significantly more measurements.
hωx, x̂i< = |hx, x̂i| > hx, x̂i< ,
Several authors have studied PhaseMax after the initial
appearance of this work. An alternative proof of accurate signal
which implies that given such a vector x, one can always
recovery was derived using measure concentration bounds in
increase the objective function of (PM) simply by aligning
[37]. Compressed sensing methods for sparse signals [38], and
x with the approximation x̂ (i.e., modifying its phase so that
corruption-robust methods for noisy signals [39] have also been
hx, x̂i is real valued). The following definition makes this
presented. The closely related non-lifting relaxation BranchHull
observation rigorous.
has also been proposed for recovering signals from entry-wise
Definition 1. A vector x is said to be aligned with another
product measurements [40].
Finally, we note that recent works have proved tight vector x̂, if the inner product hx, x̂i is real-valued and nonasymptotic bounds for PhaseMax in the asymptotic limit, i.e., negative.
3
From all the vectors that satisfy the measurement constraints
in (1), there is only one that is a candidate solution to the convex
problem (PM), which is also the solution that is aligned with x̂.
For this reason, we adopt the following important convention
throughout the rest of this paper.
The true vector x0 denotes a solution to (1) that is
aligned with the approximation vector x̂.
x0 = x0 .
Theorem 2 has an intuitive geometrical interpretation. If
x0 is an optimal point and δ is an ascent direction, then one
cannot move in the direction of δ starting at x0 without leaving
the feasible set. This condition is met if there is an ai such
that x0 and δ both lie on the same side of the plane through
the origin orthogonal to the measurement vector ai .
III. P RELIMINARIES : CLASSICAL SPHERE COVERING
PROBLEMS AND GEOMETRIC PROBABILITY
Remark 1. There is an interesting relation between the convex
formulation of PhaseMax and the semidefinite relaxation
To derive sharp conditions on the success probability of
method PhaseLift [2], [19], [20]. Recall that the set of solutions PhaseMax, we require a set of tools from geometric probability.
to any convex problem is always convex. However, the solution Many classical problems in geometric probability involve
set of the measurement constraints (1) is invariant under phase calculating the likelihood of a sphere being covered by random
rotations, and thus non-convex (provided it is non-zero). It is “caps,” or semi-spheres, which we define below.
therefore impossible to design a convex problem that yields
n−1
n
this set of solutions. PhaseMax and PhaseLift differ in how Definition 2. Consider the set SH = {x ∈ H | kxk2 = 1},
n
n
they remove the phase ambiguity from the problem to enable the unit sphere embedded in H . Given a vector a ∈ H , the
a convex formulation. Rather than trying to identify the true cap centered at a with central angle θ is defined as
n−1
vector x0 , PhaseLift reformulates the problem in terms of the
CH (a, θ) = {δ ∈ SH
| ha, δi< > cos(θ)}.
(3)
quantity x0 (x0 )H , which is unaffected by phase rotations in
x0 . Hence, PhaseLift removes the rotation symmetry from the This cap contains all vectors that form an angle with a of less
solution set, yielding a problem with a convex set of solutions. than θ radians. When θ = π/2, we have a semisphere centered
PhaseMax does something much simpler: it pins down the at a, which is simply denoted by
phase of the solution to an arbitrary quantity, thus removing
n−1
CH (a) = CH (a, π/2) = {δ ∈ SH
| ha, δi< > 0}. (4)
the phase ambiguity and restoring convexity to the solution set.
This arbitrary phase choice is made when selecting the phase
We say that a collection of caps covers the entire sphere
of the approximation x̂.
if the sphere is contained in the union of the caps. Before
we can say anything useful about when a collection of caps
We are now ready to state a deterministic condition under
covers the sphere, we will need the following classical result,
which PhaseMax succeeds in recovering the true vector x0 . The
which is often attributed to Schläfli [43]. Proofs that use simple
result applies to the noiseless case, i.e., ηi = 0, i = 1, 2, . . . , m.
induction methods can be found in [44]–[46]. For completeness,
In this case, all inequality constraints in (PM) are active at x0 .
we briefly include a short proof in Appendix A.
The noisy case will be discussed in Section V-B.
Lemma 1. Consider a sphere S n−1 ⊂ Rn . Suppose we slice
Theorem 2. The true vector x0 is the unique maximizer of the sphere with k planes through Rthe origin. Suppose also that
(PM) if, for any unit vector δ ∈ Hn that is aligned with the every subset of n planes have linearly independent normal
approximation x̂,
vectors. These planes divide the sphere into
∃i, [hai , x0 i∗ hai , δi]< > 0.
n−1
X k − 1
r(n, k) = 2
(5)
i
Proof. Suppose the conditions of this theorem hold, and
i=0
consider some candidate solution x0 in the feasible set for
regions.
(PM) with hx0 , x̂i ≥ hx0 , x̂i. Without loss of generality, we
0
0
0
assume x to be aligned with x̂. Then, the vector ∆ = x − x
Classical results in geometric probability study the likelihood
is also aligned with x̂, and satisfies
of a sphere being covered by random caps with centers chosen
independently and uniformly from the sphere’s surface. For our
h∆, x̂i = hx0 , x̂i − hx0 , x̂i ≥ 0.
purposes, we need to study the more specific case in which caps
0
are only chosen from a subset of the sphere. While calculating
Since x is a feasible solution for (PM), we have
this probability is hard in general, it is quite simple when the
|hai , x0 + ∆i|2 =
set obeys the following symmetry condition.
|hai , x0 i|2 + 2[hai , x0 i∗ hai , ∆i]< + |hai , ∆i|2 ≤ b2i , ∀i.
Definition 3. We say that the set A is symmetric if, for all
But |aTi x0 |2 = b2i , and so
x ∈ A, we also have −x ∈ A.
[hai , x0 i∗ hai , ∆i]< ≤ − 12 |hai , ∆i|2 ≤ 0, ∀i.
Now, if k∆k2 > 0, then the unit-length vector δ = ∆/k∆k2
satisfies [hai , x0 i∗ hai , δi]< ≤ 0 for all i, which contradicts
the hypothesis of the theorem. It follows that k∆k2 = 0 and
We are now ready to prove a general result that states when
the sphere is covered by random caps.
Lemma 2. Consider some symmetric set A ⊂ SRn−1 of
positive (n − 1 dimensional) measure. Choose some set of
4
A
mA measurements {ai }m
i=1 uniformly from A. Then, the caps
{CR (ai )} cover the sphere SRn−1 with probability
n−1
X mA − 1
1
pcover (mA , n) = 1 − mA −1
.
2
k
k=0
This is the probability of turning up n or more heads when
flipping mA − 1 fair coins.
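This coin-flipping probability is straightforward to compute. The short helper below (ours, purely illustrative) evaluates the covering probability of Lemma 2 for given mA and n.

```python
from math import comb

def p_cover(m_A, n):
    """Probability of n or more heads in m_A - 1 fair coin flips (Lemma 2)."""
    return 1 - sum(comb(m_A - 1, k) for k in range(n)) / 2 ** (m_A - 1)

print(p_cover(20, 3))   # caps drawn from a symmetric set A, m_A = 20, dimension n = 3
```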
Proof. Consider the following two-step process for constructing
the set {ai }. First, we sample mA vectors {a0i } independently
and uniformly from A. Note that, because A has positive
measure, it holds with probability 1 that any subset of n
vectors will be linearly independent.
Second, we define ai = ci a0i , where {ci } are i.i.d. Bernoulli
variables that take value +1 or −1 with probability 1/2. We
can think of this second step as randomly “flipping” a subset
of uniform random vectors. Since A is symmetric and {a0i } is
sampled independently and uniformly, the random vectors {ai }
also have an independent and uniform distribution over A. This
construction may seem superfluous since both {ai } and {a0i }
have the same distribution, but we will see below that this
becomes useful.
Given a particular set of coin flips {ci }, we can write the
set of points that are not covered by the caps {CR (ai )} as
\
\
CR (−ai ) =
CR (−ci a0i ).
(6)
i
i
mA
Fig. 1. (left) A diagram showing the construction of x, y, and x̃ in the proof
of Lemma 3. (right) The reflections defined in (7-9) map vectors ai lying in
T
the half-sphere aT
i x < 0 onto the half-sphere a x > 0.
Lemma 3. Consider two vectors x, y ⊂ SRn−1 , and the caps
CR (x) and CR (y). Let α = 1 − π2 angle(x, y) be a measure
of the similarity between the vectors x and y. Draw some
collection {ai ∈ CR (x)}m
i=1 of m vectors uniformly from CR (x)
so that m > 2n/α. Then
[
CR (y) ⊂
CR (ai )
i
with probability at least
Note that there are 2
such intersections that can be formed,
(αm − 2n)2
pcover (m, n; x, y) ≥ 1 − exp −
.
one for each choice of the sequence {ci }. The caps {CR (ai )}
2m
cover the sphere whenever
the intersection (6) is empty.
Consider the set of planes {x | hai , xi = 0} . From Lemma 1, Proof. Due to rotational symmetry, we assume y =
we know that mA planes with a common intersection point [1, 0, . . . , 0]T without loss of generality. Consider the point
divide the sphere into r(n, mA ) non-empty regions. Each of x̃ = [x1 , −x2 , . . . , −xn ]T . This is the reflection of x over
these regions corresponds to the intersection (6) for one possible y (see Figure 1). Suppose we have some collection {ai }
choice of {ci }. Therefore, of the 2mA possible intersections, independently and uniformly distributed on the entire sphere.
only r(n, mA ) of them are non-empty. Since the sequence {ci } Consider the collection of vectors
is random, each intersection is equally likely to be chosen, and
if hai , xi ≥ 0
(7)
ai ,
so the probability of covering the sphere is
0
ai =
ai − 2hai , yiy
if hai , xi < 0, hai , x̃i < 0 (8)
r(n, mA )
−a
if hai , xi < 0, hai , x̃i ≥ 0. (9)
i
pcover (mA , n) = 1 −
.
2mA
The mapping ai → a0i maps the lower half sphere {a | ha, xi <
0} onto the upper half sphere {a | ha, xi > 0} using a combination of reflections and translations (depicted in Figure 1).
Remark 2. Several papers have studied the probability of
Indeed, for all i we have ha0i , xi ≥ 0. This is clearly true in
covering the sphere using points independently and uniformly
case (7) and (9). In case (8), we use the definition of y and x̃
chosen over the entire sphere. The only aspect that is unusual
to write
about Lemma 2 is the observation that this probability remains
the same if we restrict our choices to the set A, provided
ha0i , xi = hai , xi − 2hai , yihy, xi
A is symmetric. We note that this result was observed by
= hai , xi − 2[ai ]1 x1 = −hai , x̃i ≥ 0.
Gilbert [45] in the case n = 3, and we generalize it to any
Because the mapping ai → a0i is onto and (piecewise)
n > 1 using a similar argument.
isometric, {a0i } will be uniformly distributed over the half
We now present a somewhat more complicated covering sphere {a | ha, x0 i > 0} whenever {ai } are independently and
theorem. The next result considers the case where the measure- uniformly distributed over the entire sphere.
ment vectors are drawn only from a semisphere. We consider
Consider the “hourglass” shaped, symmetric set
the question of whether these vectors cover enough area to
contain not only their home semisphere, but another nearby A = {a | ha, xi > 0, ha, x̃i > 0}∪{a | ha, xi < 0, ha, x̃i < 0}.
semisphere as well.
S
0
We now make the following claim: CR (y) ⊂
i CR (ai )
5
which is only valid for αm > 2n.
whenever
SRn−1
⊂
[
ai ∈A
CR (ai ).
(10)
In words, if the caps defined by the subset of {ai } in A cover
the entire sphere, then the caps {CR (a0i )} (which have centers
in CR (x)) not only cover CR (x), but also cover its neighbor cap
CR (y). To justify this claim, suppose that (10) holds. Choose
some δ ∈ CR (y). This point is covered by some cap CR (ai )
with ai ∈ A. If hai , xi > 0 and hai , x̃i > 0, then ai = a0i and
δ is covered by CR (a0i ). Otherwise, we have hai , xi < 0 and
hai , x̃i < 0, then
hδ, a0i i = hδ, ai − 2hai , yiyi
= hδ, ai i − 2hai , yihδ, yi ≥ hδ, ai i ≥ 0.
Note we have used the fact that hδ, yi is real and non-negative
because δ ∈ CR (y). We have also used hai , yi = [ai ]1 =
1
2 (hai , xi + hai , x̃i) < 0, which follows from the definition of
x̃ and the definition of A. In either case, hδ, a0i i > 0, and we
have δ ∈ CR (a0i ), which proves our claim.
S
We can now see that the probability that CR (y) ⊂ i CR (a0i )
is at least as high as the probability that (10) holds. Let
pcover (m, n; x, y | mA ) denote the probability of covering C(y)
conditioned on the number mA of points lying in A. From
Lemma 2, we know that
pcover (m, n; x, y | mA ) ≥ pcover (mA , n) > pcover (mA +1, n+1).
As noted in Lemma 2, the expression on the right is the chance
of turning up more than n heads when flipping mA fair coins.
The probability pcover (m, n; x, y) is then given by
pcover (m, n; x, y) = EmA [pcover (m, n; x, y | mA )]
≥ EmA [pcover (mA , n)]
Remark 3. In the proof of Lemma 3, we obtained a bound on
EmA [pcover (mA + 1, n + 1)] using an intuitive argument about
coin flipping probabilities. This expectation could have been
obtained more rigorously (but with considerably more pain)
using the method of probability generating functions.
Lemma 3 contains most of the machinery needed for the
proofs that follow. In the sequel, we prove a number of exact
reconstruction theorems for (PM). Most of the results rely on
short arguments followed by the invocation of Lemma 3.
We finally state a result that bounds from below the
probability of covering the sphere with caps of small central
angle. The following Lemma is a direct corollary of the results
of Burgisser, Cucker, and Lotz in [47]. A derivation that uses
their results is given in Appendix B.
Lemma 4. Let n ≥ 9, and m > 2n. Then the probability of
covering the sphere SRn−1 with independent uniformly sampled
caps of central angle φ ≤ π/2 is lower bounded by
pcover (m, n, φ) ≥
√
(em)n n − 1
sinn−1 (φ)(m − n)
√
1−
cos(φ)
exp −
(2n)n−1
8n
(m − 2n + 1)2
− exp −
.
2m − 2
IV. R ECOVERY G UARANTEES
Using the uniqueness condition provided by Theorem 2 and
the tools derived in Section III, we now develop sharp lower
bounds on the success probability of PhaseMax for noiseless
real- and complex-valued systems. The noisy case will be
discussed in Section V-B.
> EmA [pcover (mA + 1, n + 1)].
The expression on the right hand side is the probability of
getting more than n heads when one fair coin is flipped for
every measurement ai that lies in A.
Let’s evaluate how often this coin-flipping event occurs. The
region A is defined by two planes that intersect at an angle of
β = angle(x, x̃) = 2 angle(x, y). The probability of a random
point ai lying in A is given by α = 2π−2β
= 1 − πβ , which is
2π
the fraction of the unit sphere that lies either above or below
both planes. The probability of a measurement ai contributing
to the heads count is half the probability of it lying in A,
or 12 α. The probability of turning up more than n heads is
therefore given by
k
m−k
n
X
1
1
m
1−
α
1− α
.
2
2
k
k=0
Using Hoeffding’s inequality, we obtain the following lower
bound
k
m−k
n
X
1
1
m
pcover (m, n; x, y) ≥ 1 −
α
1− α
2
2
k
k=0
2
−(αm − 2n)
≥ 1 − exp
,
2m
A. The Real Case
We now study problem (PM) in the case that the unknown
signal and measurement vectors are real valued. Consider some
collection of measurement vectors {ai } drawn independently
and uniformly from SRn−1 . For simplicity, we also consider the
collection {ãi } = {phase(hai , x0 i)ai } of aligned vectors that
satisfy hãi , x0 i ≥ 0 for all i. Using this notation, Theorem 2
can be rephrased as a simple geometric condition.
Corollary 1. Consider the set {ãi } = {phase(hai , x0 i)ai } of
aligned measurement vectors. Define the half sphere of aligned
ascent directions
DR = CR (x̂) = {δ ∈ SRn−1 | hδ, x̂i ∈ R ≥ 0}.
The true vector x0 will be the unique maximizer of (PM) if
[
DR ⊂
CR (ãi ).
i
Proof. Choose some ascent direction δ ∈ DR . If the assumptions of this Corollary hold, then there is some i with
δ ∈ CR (ãi ), and so hãi , δi ≥ 0. Since this is true for any
δ ∈ DR , the conditions of Theorem 2 are satisfied and exact
reconstruction holds.
6
Using this observation, we can develop the following lower
bound on the success probability of PhaseMax for real-valued
systems.
Form the vector δ 0 = δ + jhδ, x0 i= x0 , which is the projection
of δ onto B. We can verify that δ 0 ∈ B by writing
hδ 0 , x0 i = hδ, x0 i + hjhδ, x0 i= x0 , x0 i
Theorem 3. Consider the case of recovering a real-valued
= hδ, x0 i − jhδ, x0 i= hx0 , x0 i
signal x0 ∈ Rn from m noiseless measurements of the form
= hδ, x0 i − jhδ, x0 i= = hδ, x0 i< ,
(1) with measurement vectors ai , i = 1, 2, . . . , m, sampled
independently and uniformly from the unit sphere SRn−1 . Then, which is real valued. Furthermore, δ 0 ∈ CC (x̂) because
the probability that PhaseMax recovers the true signal x0 ,
hδ 0 , x̂i = hδ, x̂i+hjhδ, x0 i= x0 , x̂i = hδ, x̂i−jhδ, x0 i= hx0 , x̂i.
denoted by pR (m, n), is bounded from below as follows:
The first term on the right is real-valued and non-negative
−(αm − 2n)2
pR (m, n) ≥ 1 − exp
,
(because
δ ∈ DC ), and the second term is complex valued
2m
(because x0 is assumed to be aligned with x̂). It follows that
where α = 1 − π2 angle(x0 , x̂) and m > 2n/α.
hδ 0 , x̂i< ≥ 0 and δ 0 ∈ CC (x̂). Since we already showed that
δ 0 ∈ B, we have δ 0 ∈ CC (x̂) ∩ B. Suppose now that (12) holds.
Proof. Consider the set of m independent and uniformly
The claim will be proved if we can show that δ ∈ D is covered
sampled measurements {ai ∈ SRn−1 }m
i=1 . The aligned vectors
by one of the CC (ãi ). Since δ 0 ∈ CC (x̂) ∩ B, there is some i
0
{ãi = phase(hai , x i)ai } are uniformly distributed over the
with δ 0 ∈ CC (ãi ). But then
half sphere CR (x0 ). Exact reconstruction happens when the
condition in Corollary 1 holds. To obtain a lower bound on the
0 ≤ hδ 0 , ãi i< = hδ, ãi i< + hjhδ, x0 i= x0 , ãi i< = hδ, ãi i< .
probability of this occurrence, we can simply invoke Lemma 3
(13)
with x = x0 and y = x̂.
We see that δ ∈ CC (ãi ), and the claim is proved.
We now know that exact reconstruction happens whenever
condition (12) holds. We can put a bound on the frequency of
B. The Complex Case
this using Lemma 3. Note that the sphere SCn−1 is isomorphic
2n−1
, and the set B is isomorphic to the sphere SR2n−2 .
We now prove Theorem 1 given in Section I, which charac- to SR
{ãi } are uniformly distributed over a half
terizes the success probability of PhaseMax for phase retrieval The aligned vectors
0
sphere
in
C
(x
)
∩
B,
which is isomorphic to the upper half
C
in complex-valued systems. For clarity, we restate our result
2n−2
sphere
in
S
.
The
probability
of these vectors covering the
R
in shorter form.
cap CC (x̂) ∩ B is thus given by pcover (m, 2n − 1; x0 , x̂) from
0
Theorem 1. Consider the case of recovering a complex-valued Lemma 3. We instead use the bound for pcover (m, 2n; x , x̂),
0
n
signal x ∈ C from m noiseless measurements of the form which is slightly weaker.
(1), with {ai }m
i=1 sampled independently and uniformly from
Remark 4. Theorems 1 and 3 guarantee exact recovery for
the unit sphere SCn−1 . Then, the probability that PhaseMax a sufficiently large number of measurements m provided that
recovers the true signal x0 is bounded from below as follows: angle(x0 , x̂) < π . In the case angle(x0 , x̂) > π , our theorems
2
2
!
2
guarantee convergence to −x0 (which is also a valid solution)
(αm − 4n)
,
pC (m, n) ≥ 1 − exp −
for sufficiently large m. Our theorems only fail for large m
2m
if arccos(x0 , x̂) = π/2, which happens with probability zero
when the approximation vector x̂ is generated at random. See
where α = 1 − π2 angle(x0 , x̂) and m > 4n/α.
Section VI for more details.
Proof. Consider the set {ãi } = {phase(hai , x0 i)ai } of aligned
measurement vectors. Define the half sphere of aligned ascent
directions
DC = {δ ∈ SCn−1 | hδ, x̂i< ∈ R+
0 }.
By Theorem 2, the true signal x0 will be the unique maximizer
of (PM) if
[
DC ⊂
CC (ãi ).
(11)
i
Let us bound the probability of this event. Consider the set B =
{δ | hδ, x0 i= = 0}. We now claim that (11) holds whenever
[
CC (x̂) ∩ B ⊂
CC (ãi ).
(12)
i
To prove this claim, consider some δ ∈ DC . To keep notation
light, we will assume without loss of generality that kx0 k2 = 1.
V. G ENERALIZATIONS
Our theory thus far addressed the idealistic case in which
the measurement vectors are independently and uniformly
sampled from a unit sphere and for noiseless measurements. We
now extend our results to more general random measurement
ensembles and to noisy measurements.
A. Generalized Measurement Ensembles
The theorems of Section IV require the measurement vectors
{ai } to be drawn independently and uniformly from the surface
of the unit sphere. This condition can easily be generalized to
other sampling ensembles. In particular, our results still hold
for all rotationally symmetric distributions. A distribution D is
rotationally symmetric if the distribution of a/kak2 is uniform
over the sphere when a ∼ D. For such a distribution, one
can make the change of variables a ← a/kak2 , and then
7
apply Theorems 1 and 3 to the resulting problem. Note that
this change of variables does not change the feasible set for
(PM), and thus does not change the solution. Consequently,
the same recovery guarantees apply to the original problem
without explicitly implementing this change of variables. We
thus have the following simple corollary.
Corollary 2. The results of Theorem 1 and Theorem 3 still hold
if the samples {ai } are drawn from a multivariate Gaussian
distribution with independent and identically distributed (i.i.d.)
entries.
Proof. A multivariate Gaussian distribution with i.i.d. entries
is rotationally symmetric, and thus the change of variables
a ← a/kak2 yields an equivalent problem with measurements
sampled uniformly from the unit sphere.
What happens when the distribution is not spherically
symmetric? In this case, we can still guarantee recovery, but
we require a larger number of measurements. The following
result is, analogous to Theorem 1, for the noiseless complex
case.
D
Theorem 4. Suppose that mD measurement vectors {ai }m
i=1
are drawn from the unit sphere with (possibly non-uniform)
probability density function D : SCn−1 → R. Let `D ≤
inf x∈S n−1 D(x) be a lower bound on D over the unit sphere
C
2π n
and let α = 1 − π2 angle(x0 , x̂) as above. We use sn = Γ(n)
to
n−1
denote the “surface area” of the complex sphere SC , and set
mU = bmD sn `D c. Then, exact reconstruction is guaranteed
with probability at least
(αmU − 4n)2
1 − exp −
2mU
whenever αmU > 4n and `D > 0. In other words, exact
recovery with mD non-uniform measurements happens at least
as often as with mU uniform measurements.
Proof. We compare two measurement models, a uniform
measurement model in which mU measurements are drawn
uniformly from a unit sphere, and a non-uniform measurement
model in which mD measurements are drawn from the
distribution D. Note that the sphere SCn−1 has surface area
2π n
sn = Γ(n)
, and the uniform density function U on this
sphere has constant value s−1
n . Consider some collection of
mU
measurements {aU
i }i=1 drawn from the uniform model. This
ensemble of measurements lies in Rn×mU , and the probability
density of sampling this measurement ensemble (in the space
Rn×mU ) is
U
mU !s−m
.
n
(14)
mD
Now consider a random ensemble {aD
i }i=1 drawn with density
U
U
D. Given {ai }, the event that {ai } ⊂ {aD
i } has density (in
the space Rn×mU )
m
U
Y
mD !
D(ai ).
(mD − mU )! i=1
(15)
The ratio of the non-uniform density (15) to the uniform density
(14) is
mU
mU
mD Y
mD
mD sn `D
mU
sn D(ai ) ≥
(sn `D )
≥
,
mU
mU i=1
mU
(16)
k
D
where we have used the bound m
mU > (mD /mU ) to obtain
the estimate on the right hand side. The probability of exact
reconstruction using the non-uniform model will always be
at least as large as the probability under the uniform model,
provided the ratio (16) is one or higher. This holds whenever
mU ≤ mD sn `D . It follows that the probability of exact
recovery using the non-uniform measurements is at least the
probability of exact recovery from a uniform model with
mU = bmD sn `D c measurements. This probability is what
is given by Theorem 1.
B. Noisy Measurements
We now analyze the sensitivity of PhaseLift to the measurement noise {ηi }. For brevity, we focus only on the case of
complex-valued signals. To analyze the impact of noise, we
re-write the problem (PM) in the following equivalent form:
(
maximize
hx, x̂i<
n
x∈H
subject to |hai , xi|2 ≤ b̂2i + ηi , i = 1, 2, . . . , m.
(17)
Here, b̂2i = |hai , x0 i|2 is the (unknown) true magnitude
measurement and b2i = b̂2i + ηi . We are interested in bounding
the impact that these measurement errors have on the solution
to (PM). Note that the severity of a noise perturbation of size ηi
depends on the (arbitrary) magnitude of the measurement
vector ai . For this reason, we assume the vectors {ai } have
unit norm throughout this section.
We will begin by proving results only for the case of nonnegative noise. We will then generalize our analysis to the
case of arbitrary bounded noise. The following result gives a
geometric characterization of the reconstruction error.
Theorem 5. Suppose the vectors {ai ∈ Cn } in (17) are
normalized to have unit length, and the noise vector η is
non-negative. Let r be the maximum relative noise, defined by
ηi
r = max
,
(18)
i=1,2,...,m
b̂i
and let DC = {δ ∈ SCn−1 | hδ, x̂i< ≥ 0, hδ, x̂i= = 0} be the
set of aligned descent directions. Choose some error bound
ε > r/2, and define the angle θ = arccos(r/2ε). If the caps
{CC (ãi , θ)} cover DC , then the solution x? of (PM), and
equivalently of the problem in (17), satisfies the bound
kx? − x0 k2 ≤ ε.
Proof. We first reformulate the problem (17) as
hx0 + ∆, x̂i<
maximize
n
∆∈C
subject to |hãi , (x0 + ∆)i|2 ≤ b̂2i + ηi ,
i = 1, 2, . . . , m,
(19)
8
where ∆ = x? − x0 is the recovery error vector and {ãi } =
{phase(hai , x0 i)ai } are aligned measurement vectors. In this
form, the recovery error vector ∆ appears explicitly. Because
we assume the errors {ηi } to be non-negative, the true signal
x0 is feasible for (17). It follows that the optimal objective
of the perturbed problem (17) must be at least as large as the
optimal value achieved by x0 , i.e., h∆, x̂i< ≥ 0. Furthermore,
the solution x0 + ∆ must be aligned with x̂, as is the true
signal x0 , and so h∆, x̂i ∈ R. For the reasons just described,
we know that the unit vector δ = ∆/k∆k2 ∈ DC .
Our goal is to put a bound on the magnitude of the recovery
error ∆. We start by reformulating the constraints in (19) to
get
|hãi , (x0 + ∆)i|2 =
|hãi , x0 i|2 + 2[hãi , x0 i∗ hãi , ∆i]< + |hãi , ∆i|2 ≤ b̂2i + ηi .
Subtracting $|\langle\tilde{a}_i, x_0\rangle|^2 = \hat{b}_i^2$ from both sides yields
$$2[\langle\tilde{a}_i, x_0\rangle^* \langle\tilde{a}_i, \Delta\rangle]_\Re + |\langle\tilde{a}_i, \Delta\rangle|^2 \le \eta_i.$$
Since $\eta_i$ is non-negative, we have
$$\eta_i \ge 2[\langle\tilde{a}_i, x_0\rangle^* \langle\tilde{a}_i, \Delta\rangle]_\Re + |\langle\tilde{a}_i, \Delta\rangle|^2 = 2\langle\tilde{a}_i, x_0\rangle \langle\tilde{a}_i, \Delta\rangle_\Re + |\langle\tilde{a}_i, \Delta\rangle|^2 \ge 2\langle\tilde{a}_i, x_0\rangle \langle\tilde{a}_i, \Delta\rangle_\Re + |\langle\tilde{a}_i, \Delta\rangle_\Re|^2. \qquad (20)$$
This inequality can hold only if $\langle\tilde{a}_i, \Delta\rangle_\Re$ is sufficiently small. In particular we know
$$\langle\tilde{a}_i, \Delta\rangle_\Re \le -\langle\tilde{a}_i, x_0\rangle + \sqrt{(\langle\tilde{a}_i, x_0\rangle)^2 + \eta_i} \le \frac{\eta_i}{2\langle\tilde{a}_i, x_0\rangle} = \frac{\eta_i}{2\hat{b}_i} \le \frac{r}{2}. \qquad (21)$$
Now suppose that $\|\Delta\|_2 > \epsilon$. From (21) we have
$$\langle\tilde{a}_i, \delta\rangle_\Re \le \langle\tilde{a}_i, \Delta/\|\Delta\|_2\rangle_\Re < \frac{r}{2\epsilon}.$$
Therefore $\delta \notin \mathcal{C}_\mathbb{C}(\tilde{a}_i, \theta)$ where $\theta = \arccos(r/(2\epsilon))$. This is a contradiction because the caps $\{\mathcal{C}_\mathbb{C}(\tilde{a}_i, \theta)\}$ cover $\mathcal{D}_\mathbb{C}$. It follows that $\|\Delta\|_2 \le \epsilon$.

Using this result, we can bound the reconstruction error in the noisy case. For brevity, we present results only for the complex-valued case.

Theorem 6. Suppose the vectors $\{a_i\}$ in (17) are independently and uniformly distributed in $\mathcal{S}_\mathbb{C}^{n-1}$, and the noise vector $\eta$ is non-negative. Let $r$ be the maximum relative error defined in (18). Choose some error bound $\epsilon > r/2$, and define the angle $\varphi = \arccos(r/(2\epsilon)) - \mathrm{angle}(x_0, \hat{x})$. Then, the solution $x^\star$ to (PM) satisfies
$$\|x^\star - x_0\|_2 \le \epsilon$$
with probability at least
$$p_{\mathrm{cover}}(m, 2n, \varphi) \ge 1 - \frac{(em)^{2n}\sqrt{2n-1}}{(4n)^{2n-1}}\,\cos(\varphi)\, \exp\!\left(-\frac{\sin^{2n-1}(\varphi)\,(m-n)}{\sqrt{16n}}\right) - \exp\!\left(-\frac{(m-4n+1)^2}{2m-2}\right)$$
when $n \ge 5$ and $m > 4n$.

Proof. Define the following two sets:
$$\mathcal{D} = \{\delta \in \mathcal{S}_\mathbb{C}^{n-1} \mid \langle\delta, \hat{x}\rangle \in \mathbb{R}_0^+\}, \qquad \mathcal{D}' = \{\delta \in \mathcal{S}_\mathbb{C}^{n-1} \mid \langle\delta, x_0\rangle \in \mathbb{R}_0^+\}.$$
We now claim that the conditions of Theorem 5 hold whenever
$$\mathcal{D}' \subset \bigcup_i \mathcal{C}_\mathbb{C}(\tilde{a}_i, \varphi), \qquad (22)$$
where $\{\tilde{a}_i = \mathrm{phase}(\langle a_i, x_0\rangle)a_i\}$ is the set of aligned measurement vectors. To prove this claim, choose some $\delta \in \mathcal{D}$ and assume that (22) holds. Since the half-sphere $\mathcal{D}'$ can be obtained by rotating $\mathcal{D}$ by a principal angle of $\mathrm{angle}(\hat{x}, x_0)$, there is some point $\delta' \in \mathcal{D}'$ with $\mathrm{angle}(\delta, \delta') \le \mathrm{angle}(\hat{x}, x_0)$. By property (22), there is some cap $\mathcal{C}_\mathbb{C}(\tilde{a}_i, \varphi)$ that contains $\delta'$. By the triangle inequality for spherical geometry it follows that
$$\mathrm{angle}(\delta, \tilde{a}_i) \le \mathrm{angle}(\delta, \delta') + \mathrm{angle}(\delta', \tilde{a}_i) \le \mathrm{angle}(x_0, \hat{x}) + \varphi \le \theta.$$
Therefore, $\delta \in \mathcal{C}_\mathbb{C}(\tilde{a}_i, \theta)$, and the claim is proved.

It only remains to put a bound on the probability that (22) occurs. Note that the aligned vectors $\{\tilde{a}_i\}$ are uniformly distributed in $\mathcal{D}'$, which is isomorphic to a half-sphere in $\mathcal{S}_\mathbb{R}^{2n-2}$. The probability of covering the half sphere $\mathcal{S}_\mathbb{R}^{2n-2}$ with uniformly distributed caps drawn from that half sphere is at least as great as the probability of covering the whole sphere $\mathcal{S}_\mathbb{R}^{2n-2}$ with caps drawn uniformly from the entire sphere. This probability is given by Lemma 4 as $p_{\mathrm{cover}}(m, 2n-1, \varphi)$, and is lower bounded by $p_{\mathrm{cover}}(m, 2n, \varphi)$.

We now consider the case of noise that takes on both positive and negative values. In this case, we bound the error by converting the problem into an equivalent problem with non-negative noise, and then apply Theorem 6.

Theorem 7. Suppose the vectors $\{a_i\}$ in (17) are normalized to have unit length, and that $\eta_i > -\hat{b}_i^2$ for all $i$.² Define the following measures of the noise
$$s^2 = \min_{i=1,2,\ldots,m}\left\{\frac{\hat{b}_i^2 + \eta_i}{\hat{b}_i^2}\right\} = \min_{i=1,2,\ldots,m}\left\{\frac{b_i^2}{\hat{b}_i^2}\right\}$$
and
$$r = \frac{1}{s}\max_{i=1,2,\ldots,m}\frac{\hat{b}_i^2 - s^2\hat{b}_i^2 + \eta_i}{\hat{b}_i}.$$
Choose some error bound $\epsilon > r/2$, and define the angle $\varphi = \arccos(r/(2\epsilon)) - \mathrm{angle}(x_0, \hat{x})$. Then, we have the bound
$$\|x^\star - x_0\|_2 \le \epsilon + (1-s)\|x_0\|_2$$
with probability at least
$$p_{\mathrm{cover}}(m, 2n, \varphi) \ge 1 - \frac{(em)^{2n}\sqrt{2n-1}}{(4n)^{2n-1}}\,\cos(\varphi)\, \exp\!\left(-\frac{\sin^{2n-1}(\varphi)\,(m-n)}{\sqrt{16n}}\right) - \exp\!\left(-\frac{(m-4n+1)^2}{2m-2}\right)$$
when $n \ge 5$ and $m > 4n$.

² This assumption is required so that $b_i^2 = \hat{b}_i^2 + \eta_i > 0$.
Proof. Consider the “shrunk” version of problem (17)
$$\underset{x\in\mathbb{H}^n}{\text{maximize}}\;\; \langle x, \hat{x}\rangle_\Re \quad\text{subject to}\quad |\langle a_i, x\rangle|^2 \le s^2\hat{b}_i^2 + \zeta_i,\;\; i = 1, 2, \ldots, m, \qquad (23)$$
for some real-valued “shrink factor” $s > 0$. Clearly, if $x_0$ is aligned with $\hat{x}$ and satisfies $|\langle a_i, x_0\rangle| = b_i$ for all $i$, then $sx_0$ is aligned with $\hat{x}$ and satisfies $|\langle a_i, sx_0\rangle| = sb_i$. We can now transform the noisy problem (17) into an equivalent problem with non-negative noise by choosing
$$s^2 = \min_{i=1,2,\ldots,m}\left\{\frac{\hat{b}_i^2 + \eta_i}{\hat{b}_i^2}\right\} \quad\text{and}\quad \zeta_i = \hat{b}_i^2 - s^2\hat{b}_i^2 + \eta_i \ge 0.$$
We then have $(s\hat{b}_i)^2 + \zeta_i = \hat{b}_i^2 + \eta_i = b_i^2$, and so problem (23) is equivalent to problem (17). However, the noise $\zeta_i$ in problem (23) is non-negative, and thus we can apply Theorem 6. This theorem requires the constant $r$ for the shrunken problem, which is now
$$r_{\mathrm{shrunk}} = \max_{i=1,2,\ldots,m} \frac{\zeta_i}{s\hat{b}_i} = \frac{1}{s}\max_{i=1,2,\ldots,m}\frac{\hat{b}_i^2 - s^2\hat{b}_i^2 + \eta_i}{\hat{b}_i}.$$
The solution to the shrunk problem (23) satisfies $\|x^\star - sx_0\|_2 \le \epsilon$, with probability $p_{\mathrm{cover}}(m, 2n, \varphi)$, where $\varphi = \arccos(r_{\mathrm{shrunk}}/(2\epsilon)) - \mathrm{angle}(x_0, \hat{x})$. If this condition is fulfilled, then we have
$$\|x^\star - x_0\|_2 \le \|x^\star - sx_0 + sx_0 - x_0\|_2 \le \|x^\star - sx_0\|_2 + \|sx_0 - x_0\|_2 \le \epsilon + (1-s)\|x_0\|_2,$$
which concludes the proof.

Remark 5. Theorem 7 requires the noise to be sufficiently small so that the measurements are non-negative, and the PhaseMax formulation is feasible. A natural extension that avoids this caveat is to enforce constraints with a hinge penalty rather than a hard constraint. In addition, it is possible to achieve better noise robustness in this case. This direction was studied in [39].

VI. HOW TO COMPUTE APPROXIMATION VECTORS?

There exist a variety of algorithms that compute approximation vectors³, such as the (truncated) spectral initializer [25], [28] or corresponding optimized variants [34], [35], the Null initializer [33], the orthogonality-promoting method [29], or least-squares methods [48]. We now show that even randomly generated approximation vectors guarantee the success of PhaseMax with high probability given a sufficiently large number of measurements. We then show that more sophisticated methods guarantee success with high probability if the number of measurements depends linearly on $n$.

³ Approximation vectors are also known as initialization or anchor vectors.

A. Random Initialization

Consider the use of approximation vectors $\hat{x}$ drawn randomly from the unit sphere $\mathcal{S}_\mathbb{R}^{n-1}$. Do we expect such approximation vectors to be accurate enough to recover the unknown signal? To find out, we analyze the inner product between two real-valued random vectors on the unit sphere. Note that we only care about the magnitude of this inner product. If the inner product is negative, then PhaseMax simply recovers $-x_0$ rather than $x_0$. Our analysis will make use of the following result.

Lemma 5. Consider the angle $\beta = \mathrm{angle}(x, y)$ between two random vectors $x, y \in \mathcal{S}_\mathbb{H}^{n-1}$ sampled independently and uniformly from the unit sphere. Then, the expected magnitude of the cosine distance between the two random vectors satisfies
$$\sqrt{\frac{2}{\pi n}} \le \mathbb{E}[|\cos(\beta)|] \le \sqrt{\frac{2}{\pi(n-\tfrac{1}{2})}}, \quad\text{for } \mathbb{H} = \mathbb{R}, \qquad (24)$$
$$\sqrt{\frac{1}{\pi n}} \le \mathbb{E}[|\cos(\beta)|] \le \sqrt{\frac{4}{\pi(4n-1)}}, \quad\text{for } \mathbb{H} = \mathbb{C}. \qquad (25)$$

Proof. We first consider the real case. The quantity $\cos(\beta) = \langle x, y\rangle/(\|x\|_2\|y\|_2)$ is simply the sample correlation between two random vectors, whose distribution function is given by [49]
$$f(z) = \frac{(1-z^2)^{\frac{n-3}{2}}}{2^{n-2}\,B\!\left(\frac{n-1}{2}, \frac{n-1}{2}\right)}, \qquad (26)$$
where $B$ is the beta function and $z \in [-1, +1]$. Hence, the expectation of the magnitude of the inner product is given by
$$\mathbb{E}[|\cos(\beta)|] = \frac{2\int_0^1 z(1-z^2)^{\frac{n-3}{2}}\,dz}{2^{n-2}\,B\!\left(\frac{n-1}{2}, \frac{n-1}{2}\right)}.$$
The integral in the numerator was studied in [50, Eq. 31] and evaluates to $\frac{1}{2}B\!\left(1, \frac{n-1}{2}\right)$. Plugging this expression into (26), and using the identity $\Gamma(a)/\Gamma(a/2) = 2^{a-1}\Gamma\!\left(\frac{a+1}{2}\right)/\sqrt{\pi}$, we get
$$\mathbb{E}[|\cos(\beta)|] = \frac{\Gamma(\frac{n}{2})}{\sqrt{\pi}\,\Gamma(\frac{n+1}{2})}.$$
Finally, by using bounds on ratios of Gamma functions [51], we obtain the bounds in (24) for real-valued vectors. The bounds in (25) for complex-valued vectors are obtained by noting that $\mathcal{S}_\mathbb{C}^{n-1}$ is isomorphic to $\mathcal{S}_\mathbb{R}^{2n-1}$ and by simply replacing $n \leftarrow 2n$ in the bounds for the real-valued case.

For such randomly-generated approximation vectors, we now consider the approximation accuracy $\alpha$ that appears in Theorems 1 and 3. Note that $\mathbb{E}[|\beta|] \le \frac{\pi}{2} - \mathbb{E}[|\cos(\beta)|]$, and, thus
$$\mathbb{E}[\alpha] = 1 - \frac{2}{\pi}\mathbb{E}[|\beta|] \ge \frac{2}{\pi}\mathbb{E}[|\cos(\beta)|] \ge \sqrt{\frac{8}{\pi^3 n}}$$
for the real case. Plugging this bound on the expected value for $\alpha$ into Theorem 3, we see that, for an average randomly-generated approximation vector (one with $\alpha > \sqrt{8/(\pi^3 n)}$), the probability of exact reconstruction goes to 1 rapidly as $n$ goes to infinity, provided that the number of measurements satisfies $m > cn^{3/2}$ for any $c > \sqrt{\pi^3/2}$. For complex-valued signals and measurement matrices, this becomes $c > 2\sqrt{\pi^3}$.

Our results indicate that the use of random approximation vectors requires $O(n^{3/2})$ measurements rather than the $O(n)$ required by other phase retrieval algorithms with other initialization methods, e.g., the ones proposed in [19], [29] (see also Section VII). Hence, it may be more practical for PhaseMax to use approximation vectors obtained from more sophisticated initialization algorithms.
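As a quick numerical sanity check of Lemma 5 (not part of the original analysis), the expectation E[|cos(β)|] for two uniform points on the real unit sphere can be estimated by Monte Carlo and compared against the bounds in (24). The dimensions and trial count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_abs_cos(n, trials=50_000):
    """Monte Carlo estimate of E[|cos(beta)|] for two uniform points on S^{n-1} (real case)."""
    x = rng.standard_normal((trials, n))
    y = rng.standard_normal((trials, n))
    cos_beta = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    return np.mean(np.abs(cos_beta))

for n in (10, 50, 200):
    est = mean_abs_cos(n)
    lower = np.sqrt(2 / (np.pi * n))           # lower bound in (24)
    upper = np.sqrt(2 / (np.pi * (n - 0.5)))   # upper bound in (24)
    # The estimate should fall between (or very close to) the bounds, up to Monte Carlo error.
    print(f"n={n:4d}  estimate={est:.5f}  bounds=({lower:.5f}, {upper:.5f})")
```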
B. Spectral Initializers

Spectral initializers were first used in phase retrieval by [25], and enable the computation of an approximation vector $\hat{x}$ that exhibits strong theoretical properties [28]. This class of initializers was analyzed in detail in [34], and [35] developed an optimal spectral initializer for a class of problems including phase retrieval.

Fix some $\delta > 1$, and consider a measurement process with $m = \delta n$ measurement vectors sampled from a Gaussian distribution. Then, the spectral initializers in [34], [35] are known to deliver an initializer $\hat{x}$ with $\lim_{n\to\infty} \mathrm{angle}(\hat{x}, x_0) < \pi/2 - \epsilon$ for some positive $\epsilon$. Equivalently, there is some accuracy parameter $\alpha^\star = \lim_{n\to\infty} 1 - \frac{2}{\pi}\mathrm{angle}(\hat{x}, x_0) > 0$. By combining this result with Theorem 1, we see that spectral initializers enable PhaseMax to succeed for large $n$ with high probability provided that $m > n/\alpha^\star$ measurements are used for signal recovery⁴; a number of measurements that is linear in $n$. Note that similar non-asymptotic guarantees can be achieved for finite $n$ using other results on spectral initializers (e.g., Prop. 8 in [28]), although with less sharp bounds.

In Section VII-B, we investigate the numerical value of $\alpha^\star$ and make quantitative statements about how many measurements are required when using a spectral initializer. As we will see there, PhaseMax requires roughly $m = 5.5n$ noiseless Gaussian measurements in theory⁵, and somewhat fewer measurements in practice.

⁴ Our results assume the initial vector is independent of the measurement vectors. The total number of measurements needed is thus $m > n/\alpha^\star + \delta n$ for initialization and recovery combined.
⁵ The constants that appear in the bounds [34], [35] were approximated using stochastic numerical methods, and were not obtained exactly using analysis.

VII. DISCUSSION

This section briefly compares our theoretical results to that of existing algorithms. We furthermore demonstrate the sharpness of our recovery guarantees and show the practical limits of PhaseMax.

A. Comparison with Existing Recovery Guarantees

Table I compares our noiseless recovery guarantees in a complex system to that of PhaseLift [19], truncated Wirtinger flow (TWF) [19], and truncated amplitude flow (TAF) [29]. We also compare to the recovery guarantee provided for PhaseMax using classical machine learning methods in [3].⁶ We see that PhaseMax requires the same sample complexity (number of required measurements) as compared to PhaseLift, TWF, and TAF, when used together with the truncated spectral initializer [19]. While the constants $c_0$, $c_1$, and $c_2$ in the recovery guarantees for all of the other methods are generally very large, our recovery guarantees contain no unspecified constants, explicitly depend on the approximation factor $\alpha$, and are surprisingly sharp. We next demonstrate the accuracy of our results via numerical simulations.

⁶ Since AltMinPhase [25] requires an online measurement model that differs significantly from the other algorithms considered here, we omit a comparison.

TABLE I
COMPARISON OF THEORETICAL RECOVERY GUARANTEES FOR NOISELESS PHASE RETRIEVAL

| Algorithm | Sample complexity | Lower bound on $p_\mathbb{C}(m, n)$ |
|---|---|---|
| PhaseMax | $m > 4n/\alpha$ | $1 - e^{-(\alpha m - 4n)^2/(2m)}$ |
| PhaseLift [19] | $m \ge c_0 n$ | $1 - c_1 e^{-c_2 m}$ |
| TWF [19] | $m \ge c_0 n$ | $1 - c_1 e^{-c_2 m}$ |
| TAF [29] | $m \ge c_0 n$ | $1 - (m+5)e^{-n/2} - c_1 e^{-c_2 m} - 1/n$ |
| Bahmani and Romberg [3] | $m > \frac{32}{\sin^4(\alpha)}\log\frac{8e}{\sin^4(\alpha)}$ | $1 - 8e^{-\left(\sin^4(\alpha)\, m - \frac{32}{\sin^4(\alpha)}\log\frac{8e}{\sin^4(\alpha)}\, n\right)/16}$ |

B. Tightness of Recovery Guarantees

We now investigate the tightness of the recovery guarantees of PhaseMax using both experiments and theory. First, we compare the empirical success probability of PhaseMax in a noiseless and complex-valued scenario with measurement vectors taken independently and uniformly from the unit sphere. All experiments were obtained by using the implementations provided in the software library PhasePack [52]. This software library provides efficient implementations of phase retrieval methods using fast adaptive gradient solvers [53]. We declare signal recovery a success whenever the relative reconstruction error satisfies
$$\mathrm{RRE} = \frac{\|x_0 - x\|_2^2}{\|x_0\|_2^2} < 10^{-5}. \qquad (27)$$
We compare empirical rates of success to the theoretical lower bound in Theorem 1. Figure 2 shows results for $n = 100$ and $n = 500$ measurements, where we artificially generate an approximation $\hat{x}$ for different angles $\beta = \mathrm{angle}(\hat{x}, x_0)$ measured in degrees. Clearly, our theoretical lower bound accurately predicts the real-world performance of PhaseMax. For large $n$ and large $\beta$, the gap between theory and practice becomes extremely tight. We furthermore observe a sharp phase transition between failure and success, with the transition getting progressively sharper for larger dimensions $n$.

[Fig. 2 plots the success probability $p_\mathbb{C}(m,n)$ versus the oversampling ratio $m/n$ for $\beta \in \{45°, 36°, 25°\}$; panels (a) $n = 100$ and (b) $n = 500$.] Fig. 2. Comparison between the empirical success probability (solid lines) and our theoretical lower bound (dashed lines) for varying angles $\beta$ between the true signal and the approximation vector. Our theoretical results accurately characterize the empirical success probability of PhaseMax. Furthermore, PhaseMax exhibits a sharp phase transition for larger dimensions.

Next, we investigate the tightness of the recovery guarantees when an initial vector is chosen using the spectral initialization method [34], [35]. Using analytical formulas, the authors of [35] calculate the accuracy of their proposed spectral initializer for real-valued signal estimation from Gaussian measurements with large $n$. They find that, when $2n$ measurements are used for spectral estimation, the initializer and signal have a squared cosine similarity of over 0.6. This corresponds to an accuracy parameter of $\alpha > 0.55$, and PhaseMax can recover the signal exactly with $m = n/\alpha \approx 3.5n$ measurements. This is within a factor of 2 of the information-theoretic lower bound for real-valued signal recovery, which is $2n - 2$ measurements.

Note that our analysis of PhaseMax assumes that the measurements used for recovery are statistically independent from those used by the initializer. For this reason, our theory does not allow us to perform signal recovery by “recycling” the measurements used for initialization. While we do not empirically observe any change in behavior of the method when the initialization measurements are used for recovery, the above reconstruction bounds formally require $5.5n$ measurements ($2n$ for initialization, plus $3.5n$ for recovery). The bounds in [3] are uniform with respect to the initializer, and thus enable measurement recycling (although this does not result in tighter bounds for the overall number of measurements because the required constants are larger).
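Using the PhaseMax row of Table I as reconstructed above, a short script can tabulate the predicted lower bound on the success probability as a function of the oversampling ratio; the values of n and β below are illustrative and the clamping of vacuous values to zero is an implementation choice, not part of the stated guarantee.

```python
import numpy as np

def phasemax_success_lower_bound(m, n, alpha):
    """Lower bound on the success probability from the PhaseMax row of Table I
    (valid for m > 4n/alpha); returns 0 where the bound does not apply."""
    m = np.asarray(m, dtype=float)
    bound = 1.0 - np.exp(-(alpha * m - 4 * n) ** 2 / (2 * m))
    return np.where(m > 4 * n / alpha, bound, 0.0)

n = 100
for beta_deg in (25, 36, 45):                 # angle between x0 and x_hat, in degrees
    alpha = 1 - 2 * beta_deg / 180            # accuracy parameter alpha = 1 - (2/pi)*angle
    ratios = np.array([4, 6, 8, 10, 12])
    bounds = phasemax_success_lower_bound(ratios * n, n, alpha)
    print(f"beta={beta_deg:2d} deg  " +
          "  ".join(f"m/n={r}: {b:.3f}" for r, b in zip(ratios, bounds)))
```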
Finally, we would like to mention that asymptotically exact
performance bounds for PhaseMax have recently been derived
in [41], and tighter bounds for non-lifting phase retrieval have
been shown for the method PhaseLamp, which repeatedly uses
PhaseMax within an iterative process.
C. Experimental Results
We compare PhaseMax to other algorithms using synthetic
and empirical datasets. The implementations used here are
publicly available as part of the PhasePack software library [52].
This software library contains scripts for comparing different
phase retrieval methods, including scripts for reproducing the
experiments shown here.
1) Comparisons Using Synthetic Data: We briefly compare
PhaseMax to a select set of phase retrieval algorithms in terms
of relative reconstruction error with Gaussian measurements.
We emphasize that this comparison is by no means intended to
be exhaustive and serves the sole purpose of demonstrating the
efficacy and limits of PhaseMax (see, e.g., [17], [21] for more
extensive phase retrieval algorithm comparisons). We compare
the Gerchberg-Saxton algorithm [7], the Fienup algorithm [8],
truncated Wirtinger flow [28], and PhaseMax—all of these
methods use the truncated spectral initializer [28]. We also
run simulations using the semidefinite relaxation (SDR)-based
method PhaseLift [2] implemented via FASTA [53]; this is,
together with PhaseCut [21], the only convex alternative to
PhaseMax, but lifts the problem to a higher dimension.
Figure 3 reveals that PhaseMax requires larger oversampling
ratios m/n to enable faithful signal recovery compared to
non-convex phase-retrieval algorithms that operate in the
original signal dimension. This is because the truncated spectral
initializer requires oversampling ratios of about six or higher to
yield sufficiently accurate approximation vectors x̂ that enable
PhaseMax to succeed.
2) Comparisons Using Empirical Data: The PhasePack
library contains scripts for comparing phase retrieval algorithms
using synthetic measurements as well as publicly available real
datasets. The datasets contain measurements obtained using
the experimental setup described in [54], in which a binary
mask is imaged through a diffusive medium.
Figure 4 shows reconstructions from measurements obtained
using two different test images. Both images were acquired
using phaseless measurements from the same measurement
[Fig. 3 plots the relative reconstruction error [dB] versus the oversampling ratio $m/n$ for GS, Fienup, TWF, TAF, PhaseLift, and PhaseMax; panels (a) $n = 100$ and (b) $n = 500$.] Fig. 3. Comparison of the relative reconstruction error. We use the truncated spectral initializer for Gerchberg-Saxton (GS), Fienup, truncated Wirtinger flow (TWF), truncated amplitude flow (TAF), and PhaseMax. PhaseMax does not achieve exact recovery for the lowest number of measurements among the considered methods, but is convex, operates in the original dimension, and comes with sharp performance guarantees. PhaseLift only terminates in reasonable computation time for n = 100.
[Fig. 4 shows the original test images and the reconstructions by Gerchberg-Saxton, Wirtinger Flow, truncated Wirtinger flow, PhaseMax, and PhaseLift; the relative measurement errors reported above the recovered images are 0.13019, 0.13930, 0.18017, 0.23459, and 0.35452 for the first image and 0.12468, 0.13386, 0.17836, 0.24947, and 0.35282 for the second image.] Fig. 4. Reconstruction of two 64 × 64 masks from empirical phaseless measurements obtained through a diffusive medium [54]. The numbers on top of each of the recovered images denote the relative measurement error $\||Ax| - b\|_2/\|b\|_2$ achieved by each method.
operator A. Reconstructions are shown for the Gerchberg-Saxton [7], Wirtinger flow [27], and truncated Wirtinger
flow [28] methods. We also show results of the two convex
methods PhaseLift (which lifts the problem dimension) [2]
and PhaseMax (the proposed non-lifting relaxation). For each
algorithm, Figure 4 shows the recovered images and reports
the relative measurement error $\||Ax| - b\|_2/\|b\|_2$. All phase
retrieval algorithms were initialized using a variant of the
spectral method proposed in [35].
We find that the Gerchberg-Saxton algorithm outperforms
all other considered methods by a small margin (in terms
of measurement error) followed by the Wirtinger flow. On
this dataset, measurements seem to be quite valuable, and
the truncated Wirtinger flow method (with default truncation
parameters as described in [28]) performs slightly worse
than the original Wirtinger flow, which uses the full dataset.
PhaseMax produces results that are visually comparable to that
of Wirtinger flow, but with slightly higher relative measurement
error.
D. Advantages of PhaseMax
While PhaseMax does not achieve exact reconstruction with
the lowest number of measurements, it is convex, operates
in the original signal dimension, can be implemented via
efficient solvers for Basis Pursuit, and comes with sharp
performance guarantees that do not sweep constants under
the rug (cf. Figure 2). The convexity of PhaseMax enables
a natural extension to sparse phase retrieval [55], [56] or
other signal priors (e.g., total variation or bounded infinity
norm) that can be formulated with convex functions. Such
13
non-differentiable priors cannot be efficiently minimized using
simple gradient descent methods (which form the basis of
Wirtinger or amplitude flow, and many other methods), but
can potentially be solved using standard convex solvers when
combined with the PhaseMax formulation.
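As an illustration of this flexibility (a sketch under assumed variable names, not code from the paper), a convex prior such as a bounded infinity norm can be added to the real-valued PhaseMax sketch given earlier with a single extra constraint:

```python
import cvxpy as cp

def phasemax_with_linf_prior(A, b_sq, x_hat, linf_bound):
    """Real-valued PhaseMax with an additional convex prior ||x||_inf <= linf_bound.

    A:          (m, n) measurement matrix with unit-norm rows
    b_sq:       length-m vector of (possibly noisy) squared magnitudes
    x_hat:      length-n approximation vector
    linf_bound: scalar bound on the entries of the recovered signal
    """
    n = A.shape[1]
    x = cp.Variable(n)
    constraints = [cp.square(A @ x) <= b_sq,
                   cp.norm(x, "inf") <= linf_bound]
    cp.Problem(cp.Maximize(x_hat @ x), constraints).solve()
    return x.value
```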
VIII. CONCLUSIONS
We have proposed a novel, convex phase retrieval algorithm,
which we call PhaseMax. We have provided accurate bounds
on the success probability that depend on the signal dimension,
the number of measurements, and the angle between the approximation vector and the true vector. Our analysis covers a broad
range of random measurement ensembles and characterizes the
impact of general measurement noise on the solution accuracy.
We have demonstrated the sharpness of our recovery guarantees
and studied the practical limits of PhaseMax via simulations.
There are many avenues for future work. Developing a
computationally efficient algorithm for solving PhaseMax (or
the related PhaseLamp procedure) accurately and for large
problems is a challenging problem. An accurate analysis of
more general noise models is left for future work.
APPENDIX A
PROOF OF LEMMA 1
The proof is by induction. As a base case, we note that
r(n, 1) = 2 for n ≥ 1 and r(2, k) = 2k for k ≥ 1. Now
suppose we have a sphere SRn−1 in n dimensions sliced by
k − 1 planes into r(n, k − 1) “original” regions. Consider the
effect of adding a kth plane, Pk . Every original region that is
intersected by Pk will split in two, and increase the number
of total regions by 1. The increase in the number of regions is
then the number of original regions that are intersected by Pk .
Equivalently, this is the number of regions formed inside Pk
by the original k − 1 planes. Any subset of k − 1 planes will
have normal vectors that remain linearly independent when
projected into Pk . By the induction hypothesis, the number
of regions formed inside Pk is then given by r(n − 1, k − 1).
Adding this to the number of original regions yields
r(n, k) = r(n, k − 1) + r(n − 1, k − 1).
We leave it to the reader to verify that (5) satisfies this
recurrence relation and base case.
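As a small check of this argument (added here, not from the original text), the following computes r(n, k) from the recurrence and base cases above and compares it with the standard closed-form count 2·Σ_{i=0}^{n−1} C(k−1, i) of regions cut by hyperplanes in general position through the origin; whether this expression matches the exact typesetting of (5) in the paper is an assumption.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def r(n, k):
    """Number of regions created by k hyperplanes in general position through
    the origin of R^n, via the recurrence in Appendix A."""
    if k == 1:
        return 2          # one hyperplane always creates two regions (n >= 1)
    if n == 2:
        return 2 * k      # k lines through the origin in the plane
    return r(n, k - 1) + r(n - 1, k - 1)

def closed_form(n, k):
    return 2 * sum(comb(k - 1, i) for i in range(n))

assert all(r(n, k) == closed_form(n, k) for n in range(2, 8) for k in range(1, 8))
print("recurrence matches the closed form on the tested range")
```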
APPENDIX B
PROOF OF LEMMA 4

In this section, we prove Lemma 4. This Lemma is a direct corollary of the following result of Bürgisser, Cucker, and Lotz [47]. For a complete proof of this result, see Theorem 1.1 of [47], and the upper bound on the constant “C” given in Proposition 5.5.

Theorem 8. Let $m > n \ge 2$. Then the probability of covering the sphere $\mathcal{S}_\mathbb{R}^{n-1}$ with independent and uniform random caps of central angle $\varphi \le \pi/2$ is bounded by
$$p_{\mathrm{cover}}(m, n, \varphi) \ge 1 - C\binom{m}{n}\int_0^{\epsilon} (1-t^2)^{(n^2-2n-1)/2}\,(1-\lambda(t))^{m-n}\,dt - \frac{1}{2^{m-1}}\sum_{k=0}^{n-1}\binom{m-1}{k},$$
where
$$\lambda(t) = \frac{V_{n-1}}{V_n}\int_0^{\arccos(t)} \sin^{n-2}(\varphi)\,d\varphi, \qquad V_n = \mathrm{Vol}(\mathcal{S}_\mathbb{R}^{n-1}) = \frac{2\pi^{n/2}}{\Gamma(n/2)}, \qquad C = \frac{n\sqrt{n-1}}{2^{n-1}},$$
and $\epsilon = \cos(\varphi)$.

While Theorem 8 provides a bound on $p_{\mathrm{cover}}(m, n, \varphi)$, the formulation of this bound does not provide any intuition of the scaling of $p_{\mathrm{cover}}(m, n, \varphi)$ or its dependence on $m$ and $n$. For this reason, we derive Lemma 4, which is a weaker but more intuitive result. We restate Lemma 4 here for clarity.
Lemma 4. Let $n \ge 9$, and $m > 2n$. Then, the probability of covering the sphere $\mathcal{S}_\mathbb{R}^{n-1}$ with caps of central angle $\varphi \le \pi/2$ is lower bounded by
$$p_{\mathrm{cover}}(m, n, \varphi) \ge 1 - \frac{(em)^n\sqrt{n-1}}{(2n)^{n-1}}\,\cos(\varphi)\,\exp\!\left(-\frac{\sin^{n-1}(\varphi)\,(m-n)}{\sqrt{8n}}\right) - \exp\!\left(-\frac{(m-2n+1)^2}{2m-2}\right).$$
Proof. Let us simplify the result of Theorem 8. If we assume $m > 2n$, then Hoeffding's inequality yields
$$\frac{1}{2^{m-1}}\sum_{k=0}^{n-1}\binom{m-1}{k} \le \exp\!\left(-\frac{(m-2n+1)^2}{2m-2}\right).$$
Next, we derive a lower bound as follows:
$$\lambda(t) = \frac{\Gamma(n/2)}{\Gamma((n-1)/2)\sqrt{\pi}}\int_0^{\arccos(t)} \sin^{n-2}(\varphi)\,d\varphi \ge \sqrt{\frac{n/2-1}{\pi}}\int_0^{\arccos(t)} \sin^{n-2}(\varphi)\cos(\varphi)\,d\varphi = \sqrt{\frac{n/2-1}{\pi}}\;\frac{1}{n-1}\sin^{n-1}\!\big(\arccos(t)\big) \ge \frac{1}{\sqrt{8n}}(1-t^2)^{(n-1)/2}.$$
We have used the fact that $\sqrt{(n/2-1)/\pi}\,\frac{1}{n-1} \ge \frac{1}{\sqrt{8n}}$ for $n \ge 4$, and also the “Wallis ratio” bound $\frac{\Gamma(n/2)}{\Gamma((n-1)/2)} \ge \sqrt{n/2-1}$ [57], [58]. Finally, we plug in the inequality $\binom{m}{n} \le \frac{(em)^n}{n^n}$. We now have
$$C\binom{m}{n}\int_0^{\epsilon} (1-t^2)^{(n^2-2n-1)/2}(1-\lambda(t))^{m-n}\,dt \le \frac{(em)^n\sqrt{n-1}}{(2n)^{n-1}}\int_0^{\epsilon} (1-t^2)^{(n^2-2n-1)/2}\left(1 - \frac{1}{\sqrt{8n}}(1-t^2)^{(n-1)/2}\right)^{m-n} dt.$$
Now we simplify the integral. Using the identity $(1-x)^a < e^{-ax}$, which holds for $x \le 1$, we can convert each term in the integrand into an exponential. We do this first with $x = t^2$ and then with $x = \frac{1}{\sqrt{8n}}(1-t^2)^{(n-1)/2}$ to obtain
$$(1-t^2)^{(n^2-2n-1)/2}\left(1 - \frac{1}{\sqrt{8n}}(1-t^2)^{(n-1)/2}\right)^{m-n} \le \exp\!\left(-\frac{t^2(n^2-2n-1)}{2} - \frac{(1-t^2)^{(n-1)/2}(m-n)}{\sqrt{8n}}\right). \qquad (28)$$
We then apply the Cauchy-Schwarz inequality to get
$$\int_0^{\epsilon} \exp\!\left(-\frac{t^2(n^2-2n-1)}{2} - \frac{(1-t^2)^{(n-1)/2}(m-n)}{\sqrt{8n}}\right) dt \le \left(\int_0^{\epsilon} \exp\!\left(-t^2(n^2-2n-1)\right) dt\right)^{1/2}\left(\int_0^{\epsilon} \exp\!\left(-\frac{(1-t^2)^{(n-1)/2}(m-n)}{\sqrt{2n}}\right) dt\right)^{1/2}$$
$$\le [\epsilon]^{1/2}\left[\epsilon\,\exp\!\left(-\frac{(1-\epsilon^2)^{(n-1)/2}(m-n)}{\sqrt{2n}}\right)\right]^{1/2} = \epsilon\,\exp\!\left(-\frac{(1-\epsilon^2)^{(n-1)/2}(m-n)}{\sqrt{8n}}\right).$$
Replacing the integral with this bound and using the definition $\epsilon = \cos(\varphi)$ yields the result.
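For intuition about how the resulting bound scales, the following sketch (added here, with illustrative parameter values) evaluates the Lemma 4 expression, computing the first subtracted term in log space so that the $(em)^n$ prefactor does not overflow.

```python
import math

def pcover_lower_bound(m, n, phi):
    """Evaluate the Lemma 4 lower bound on p_cover(m, n, phi); valid for n >= 9, m > 2n.
    Non-positive (vacuous) values are clamped to zero."""
    log_term1 = (n * (1 + math.log(m)) + 0.5 * math.log(n - 1) - (n - 1) * math.log(2 * n)
                 + math.log(math.cos(phi))
                 - (math.sin(phi) ** (n - 1)) * (m - n) / math.sqrt(8 * n))
    term1 = math.exp(log_term1)
    term2 = math.exp(-((m - 2 * n + 1) ** 2) / (2 * m - 2))
    return max(0.0, 1.0 - term1 - term2)

n, phi = 20, math.radians(85)
for ratio in (10, 30, 100, 300):
    m = ratio * n
    print(f"m = {ratio}n: lower bound = {pcover_lower_bound(m, n, phi):.4f}")
```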
REFERENCES
[1] T. Goldstein and C. Studer, “Convex phase retrieval without lifting via
PhaseMax,” in Proceedings of the 34th International Conference on
Machine Learning, ser. Proceedings of Machine Learning Research,
D. Precup and Y. W. Teh, Eds., vol. 70. International Convention
Centre, Sydney, Australia: PMLR, 06–11 Aug 2017, pp. 1273–1281.
[Online]. Available: http://proceedings.mlr.press/v70/goldstein17a.html
[2] E. J. Candès, T. Strohmer, and V. Voroninski, “PhaseLift: Exact and stable
signal recovery from magnitude measurements via convex programming,”
Commun. Pure Appl. Math., vol. 66, no. 8, pp. 1241–1274, 2013.
[3] S. Bahmani and J. Romberg, “Phase retrieval meets statistical learning
theory: A flexible convex relaxation,” in Intl. Conf. on Artificial
Intelligence and Statistics (AISTATS), May 2017, pp. 252–260.
[4] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition
by basis pursuit,” SIAM Rev., vol. 43, no. 1, pp. 129–159, Jul. 2001.
[5] S. Chen and D. Donoho, “Basis pursuit,” in Proc. Asilomar Conf. Signals,
Syst., Comput., vol. 1, Oct. 1994, pp. 41–44.
[6] C. Studer, W. Yin, and R. G. Baraniuk, “Signal representations with
minimum `∞ -norm,” in Proc. Allerton Conf. Commun., Contr., Comput.,
Oct. 2012, pp. 1270–1277.
[7] R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the
determination of phase from image and diffraction plane pictures,” Optik,
vol. 35, pp. 237–246, Aug. 1972.
[8] J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt.,
vol. 21, no. 15, pp. 2758–2769, Aug. 1982.
[9] R. W. Harrison, “Phase problem in crystallography,” J. Opt. Soc. Am. A,
vol. 10, no. 5, pp. 1046–1055, May 1993.
[10] J. Miao, T. Ishikawa, Q. Shen, and T. Earnest, “Extending X-ray
crystallography to allow the imaging of noncrystalline materials, cells,
and single protein complexes,” Ann. Rev. Phys. Chem., vol. 59, pp.
387–410, Nov. 2008.
[11] F. Pfeiffer, T. Weitkamp, O. Bunk, and C. David, “Phase retrieval and
differential phase-contrast imaging with low-brilliance X-ray sources,”
Nat. Phys., vol. 2, no. 4, pp. 258–261, Apr. 2006.
[12] S. S. Kou, L. Waller, G. Barbastathis, and C. J. Sheppard, “Transportof-intensity approach to differential interference contrast (TI-DIC)
microscopy for quantitative phase imaging,” Opt. Lett., vol. 35, no. 3,
pp. 447–449, Feb. 2010.
[13] H. Faulkner and J. Rodenburg, “Movable aperture lensless transmission
microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett., vol. 93,
no. 2, Jul. 2004.
[14] J. Holloway, M. S. Asif, M. K. Sharma, N. Matsuda, R. Horstmeyer,
O. Cossairt, and A. Veeraraghavan, “Toward long-distance subdiffraction
imaging using coherent camera arrays,” IEEE Trans. Comput. Imag.,
vol. 2, no. 3, pp. 251–265, Sept. 2016.
[15] F. Fogel, I. Waldspurger, and A. d’Aspremont, “Phase retrieval for
imaging problems,” Math. Prog. Comp., vol. 8, no. 3, pp. 311–335, Sept.
2016.
[16] E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal., vol. 39, no. 2, pp. 277–299, Sept. 2015.
[17] K. Jaganathan, Y. C. Eldar, and B. Hassibi, “Phase retrieval: An overview
of recent developments,” arXiv:1510.07713, Oct. 2015.
[18] L. Tian and L. Waller, “3D intensity and phase imaging from light field
measurements in an LED array microscope,” Optica, vol. 2, no. 2, pp.
104–111, Feb. 2015.
[19] E. J. Candès and X. Li, “Solving quadratic equations via phaselift when
there are about as many equations as unknowns,” Found. Comput. Math.,
vol. 14, no. 5, pp. 1017–1026, Oct. 2014.
[20] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval
via matrix completion,” SIAM Rev., vol. 57, no. 2, pp. 225–251, Nov
2015.
[21] I. Waldspurger, A. d’Aspremont, and S. Mallat, “Phase recovery, maxcut
and complex semidefinite programming,” Math. Prog., vol. 149, no. 1-2,
pp. 47–81, Feb. 2015.
[22] M. P. Friedlander, I. Macedo, and T. K. Pong, “Gauge optimization and
duality,” SIAM Journal on Optimization, vol. 24, no. 4, pp. 1999–2022,
2014.
[23] A. Y. Aravkin, J. V. Burke, D. Drusvyatskiy, M. P. Friedlander, and
K. MacPhee, “Foundations of gauge and perspective duality,” arXiv
preprint arXiv:1702.08649, 2017.
[24] A. Yurtsever, M. Udell, J. A. Tropp, and V. Cevher, “Sketchy decisions:
Convex low-rank matrix optimization with optimal storage,” in Intl. Conf.
on Artificial Intelligence and Statistics (AISTATS), May 2017.
[25] P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating
minimization,” in Adv. Neural Inf. Process. Syst., 2013, pp. 2796–2804.
[26] P. Schniter and S. Rangan, “Compressive phase retrieval via generalized
approximate message passing,” IEEE Trans. Sig. Process., vol. 63, no. 4,
pp. 1043–1055, Feb. 2015.
[27] E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger
flow: Theory and algorithms,” IEEE Trans. Inf. Theory, vol. 61, no. 4,
pp. 1985–2007, Feb. 2015.
[28] Y. Chen and E. Candès, “Solving random quadratic systems of equations
is nearly as easy as solving linear systems,” in Adv. Neural Inf. Process.
Syst., 2015, pp. 739–747.
[29] G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random
quadratic equations via truncated amplitude flow,” arXiv: 1605.08285,
Jul. 2016.
[30] K. Wei, “Solving systems of phaseless equations via kaczmarz methods:
A proof of concept study,” Inverse Problems, vol. 31, no. 12, p. 125008,
2015.
[31] W.-J. Zeng and H. So, “Coordinate descent algorithms for phase retrieval,”
arXiv preprint arXiv:1706.03474, 2017.
[32] Z. Yuan and H. Wang, “Phase retrieval via reweighted Wirtinger flow,”
Applied optics, vol. 56, no. 9, p. 2418, 2017.
[33] P. Chen, A. Fannjiang, and G.-R. Liu, “Phase retrieval with one or
two diffraction patterns by alternating projections of the null vector,”
arXiv:1510.07379, Apr. 2015.
[34] Y. M. Lu and G. Li, “Phase transitions of spectral initialization for highdimensional nonconvex estimation,” arXiv preprint arXiv:1702.06435,
2017.
[35] M. Mondelli and A. Montanari, “Fundamental limits of weak recovery
with applications to phase retrieval,” arXiv preprint: 1708.05932, 2017.
[36] J. Sun, Q. Qu, and J. Wright, “A geometric analysis of phase retrieval,”
arXiv:1602.06664, Mar. 2016.
[37] P. Hand and V. Voroninski, “An elementary proof of convex phase
retrieval in the natural parameter space via the linear program phasemax,”
arXiv preprint: 1611.03935, 2016.
[38] ——, “Compressed sensing from phaseless Gaussian measurements via
linear programming in the natural parameter space,” arXiv preprint:
1611.05985, 2016.
[39] ——, “Corruption robust phase retrieval via linear programming,” arXiv
preprint: 1612.03547, 2016.
[40] A. Aghasi, A. Ahmed, and P. Hand, “BranchHull: convex bilinear
inversion from the entrywise product of signals with known signs,” arXiv
preprint: 1702.04342, 2017.
[41] O. Dhifallah, C. Thrampoulidis, and Y. M. Lu, “Phase retrieval via linear
programming: Fundamental limits and algorithmic improvements,” arXiv
preprint: 1710.05234, 2017.
[42] O. Dhifallah and Y. M. Lu, “Fundamental limits of phasemax for phase
retrieval: A replica analysis,” arXiv preprint arXiv:1708.03355, 2017.
[43] L. Schläfli, Gesammelte Mathematische Abhandlungen I. Springer Basel,
1953.
[44] J. G. Wendel, “A problem in geometric probability,” Math. Scand., vol. 11,
pp. 109–111, 1962.
15
[45] E. Gilbert, “The probability of covering a sphere with n circular caps,”
Biometrika, vol. 52, no. 3/4, pp. 323–330, Dec. 1965.
[46] Z. Füredi, “Random polytopes in the d-dimensional cube,” Disc. Comput.
Geom., vol. 1, no. 4, pp. 315–319, Dec. 1986.
[47] P. Bürgisser, F. Cucker, and M. Lotz, “Coverage processes on spheres
and condition numbers for linear programming,” Ann. Probab., vol. 38,
no. 2, pp. 570–604, 2010.
[48] T. Bendory and Y. C. Eldar, “Non-convex phase retrieval from STFT
measurements,” IEEE Trans. Inf. Theory, Aug. 2017.
[49] J. F. Kenney and E. Keeping, Mathematics of Statistics, Part 2. D. Van
Nostrand, 1951.
[50] L. Jacques, “A quantized Johnson–Lindenstrauss lemma: The finding of
Buffon’s needle,” IEEE Trans. Inf. Theory, vol. 61, no. 9, pp. 5012–5027,
Sept. 2015.
[51] F. Qi and Q.-M. Luo, “Bounds for the ratio of two gamma functions—
from Wendel’s and related inequalities to logarithmically completely
monotonic functions,” Banach J. Math. Anal, vol. 6, no. 2, pp. 132–158,
May. 2012.
[52] R. Chandra, Z. Zhong, J. Hontz, V. McCulloch, C. Studer, and
T. Goldstein, “PhasePack: A phase retrieval library,” arXiv preprint,
2017.
[53] T. Goldstein, C. Studer, and R. Baraniuk, “A field guide to forwardbackward splitting with a FASTA implementation,” arXiv:1411.3406,
Feb. 2014.
[54] C. A. Metzler, M. K. Sharma, S. Nagesh, R. G. Baraniuk, O. Cossairt,
and A. Veeraraghavan, “Coherent inverse scattering via transmission
matrices: Efficient phase retrieval algorithms and a public dataset,” in
Computational Photography (ICCP), 2017 IEEE International Conference
on. IEEE, 2017, pp. 1–16.
[55] K. Jaganathan, S. Oymak, and B. Hassibi, “Sparse phase retrieval: Convex
algorithms and limitations,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT),
Jul. 2013, pp. 1022–1026.
[56] Y. Shechtman, A. Beck, and Y. C. Eldar, “GESPAR: efficient phase
retrieval of sparse signals,” IEEE Trans. Sig. Process., vol. 62, no. 4, pp.
928–938, Jan. 2014.
[57] C. Mortici, “New approximation formulas for evaluating the ratio of
gamma functions,” Math. Comput. Model., vol. 52, no. 1, pp. 425–433,
Jul. 2010.
[58] W. Gautschi, “Some elementary inequalities relating to the gamma and
incomplete gamma function,” J. Math. Phys., vol. 38, no. 1, pp. 77–81,
Apr. 1959.
Tom Goldstein is an Assistant Professor at the University of Maryland
in the Department of Computer Science. Before joining the faculty at
UMD, Tom completed his PhD at UCLA, and held research positions at
Stanford University and Rice University. Tom’s research focuses on efficient,
low complexity optimization routines. His work ranges from large-scale
computing on distributed architectures to inexpensive power-aware algorithms
for small-scale embedded systems. Applications of his work include scalable
machine learning, computer vision, and signal processing methods for wireless
communications. Tom has been the recipient of several awards, including
SIAM’s DiPrima Prize, and a Sloan Fellowship.
Christoph Studer (S’06–M’10–SM’14) received his Ph.D. degree in Electrical
Engineering from ETH Zurich in 2009. In 2005, he was a Visiting Researcher
with the Smart Antennas Research Group at Stanford University. From 2006 to
2009, he was a Research Assistant in both the Integrated Systems Laboratory
and the Communication Technology Laboratory (CTL) at ETH Zurich. From
2009 to 2012, Dr. Studer was a Postdoctoral Researcher at CTL, ETH Zurich,
and the Digital Signal Processing Group at Rice University. In 2013, he held the position of Research Scientist at Rice University. Since 2014, Dr. Studer has been an Assistant Professor at Cornell University and an Adjunct Assistant Professor at Rice University.
Dr. Studer’s research interests include signal and information processing
as well as the design of digital very large-scale integration (VLSI) circuits.
His current research areas include applications in wireless communications,
nonlinear signal processing, optimization, and machine learning.
Dr. Studer received an ETH Medal for his M.S. thesis in 2006 and for his
Ph.D. thesis in 2009. He received a two-year Swiss National Science Foundation
fellowship for Advanced Researchers in 2011 and a US National Science
Foundation CAREER Award in 2017. In 2016, Dr. Studer won a Michael Tien
’72 Excellence in Teaching Award from the College of Engineering, Cornell
University. He shared the Swisscom/ICTnet Innovations Award in both 2010
and 2013. Dr. Studer was the winner of the Student Paper Contest of the 2007
Asilomar Conf. on Signals, Systems, and Computers, received a Best Student
Paper Award of the 2008 IEEE Int. Symp. on Circuits and Systems (ISCAS),
and shared the best Live Demonstration Award at the IEEE ISCAS in 2013.
| 7 |
arXiv:1510.00697v1 [] 2 Oct 2015
Towards a Fully Abstract Compiler Using Micro-Policies
— Secure Compilation for Mutually Distrustful Components —
Yannis Juglaret¹,²   Cătălin Hriţcu¹   Arthur Azevedo de Amorim³   Benjamin C. Pierce³   Antal Spector-Zabusky³   Andrew Tolmach⁴
¹ Inria Paris   ² Université Paris Diderot (Paris 7)   ³ University of Pennsylvania   ⁴ Portland State University
Technical Report
initial version: August 21, 2015
last revised: October 5, 2015
Abstract
Secure compilation prevents all low-level attacks on compiled code and allows for sound reasoning about security in the source language. In this
work we propose a new attacker model for secure compilation that extends
the well-known notion of full abstraction to ensure protection for mutually distrustful components. We devise a compiler chain (compiler, linker,
and loader) and a novel security monitor that together defend against this
strong attacker model. The monitor is implemented using a recently proposed, generic tag-based protection framework called micro-policies, which
comes with hardware support for efficient caching and with a formal verification methodology. Our monitor protects the abstractions of a simple
object-oriented language—class isolation, the method call discipline, and
type safety—against arbitrary low-level attackers.
Contents

1 Introduction
  1.1 General Context
  1.2 Research Problem
  1.3 Our Contribution
  1.4 Other Insights

2 Stronger Attacker Model for Secure Compilation of Mutually Distrustful Components
  2.1 Full Abstraction
  2.2 Limitations of Full Abstraction
  2.3 Mutual Distrust Attacker Model

3 Micro-Policies and the PUMP: Efficient Tag-Based Security Monitors

4 Compilation Chain Overview

5 Languages and Machines
  5.1 Source Level: An Object-Oriented Language
    5.1.1 Interfacing: A Specification Language for Communicating Classes
    5.1.2 Source Syntax and Semantics
  5.2 Intermediate Level: An Object-Oriented Stack Machine
  5.3 Target Level: An Extended Symbolic Micro-Policy Machine
    5.3.1 Symbolic Micro-Policy Machine
    5.3.2 Extensions to Monitoring Mechanism
    5.3.3 Easing Component-Oriented Reasoning: Segmented Memory

6 Our Solution: Protecting Compiled Code with a Micro-Policy
  6.1 Two-Step Compilation
    6.1.1 From Source to Intermediate
    6.1.2 From Intermediate to Target
  6.2 Micro-Policy Protecting Abstractions
    6.2.1 Enforcing Class Isolation via Compartmentalization
    6.2.2 Enforcing Method Call Discipline using Method Entry Points, Linear Return Capabilities, and Register Cleaning
    6.2.3 Enforcing Type Safety Dynamically
    6.2.4 Micro-Policy in Detail

7 Related Work
  7.1 Secure Compilation
  7.2 Multi-Language Approaches

8 Discussion and Future Work
  8.1 Finite Memory and Full Abstraction
  8.2 Efficiency and Transparency
  8.3 Future Work
  8.4 Scaling to Real-World Languages

A Appendix
  A.1 Encoding Usual Types
  A.2 Source Semantics
1
1.1
Introduction
without sacrificing efficiency or transparency. Because updating hardware is expensive and hardware
adoption takes decades, the need for generic protection mechanisms that can fit with ever-evolving security requirements has emerged. Active research in the
domain includes capability machines [11, 42, 43] and
tag-based architectures [8, 9, 14, 39]. In this work, we
use a generic tag-based protection mechanism called
micro-policies [9, 14] as the target of a secure compiler.
Micro-policies provide instruction-level monitoring
based on fine-grained metadata tags. In a micropolicy machine, every word of data is augmented with
a word-sized tag, and a hardware-accelerated monitor
propagates these tags every time a machine instruction gets executed. Micro-policies can be described
as a combination of software-defined rules and monitor services. The rules define how the monitor will
perform tag propagation instruction-wise, while the
services allow for direct interaction between the running code and the monitor. This mechanism comes
with an efficient hardware implementation built on
top of a RISC processor [14] as well as a mechanized
metatheory [9], and has already been used to enforce
a variety of security policies [9, 14].
General Context
In this work we study compiled partial programs
evolving within a low-level environment, with which
they can interact. Such interaction is useful — think
of high-level programs performing low-level library
calls, or of a browser interacting with native code that
was sent over the internet [12,45] — but also dangerous: parts of the environment could be malicious or
compromised and try to compromise the program as
well [12, 15, 45]. Low-level libraries written in C or
in C++ can be vulnerable to control hijacking attacks [15,41] and be taken over by a remote attacker.
When the environment can’t be trusted, it is a major
concern to ensure the security of running programs.
With today’s compilers, low-level attackers [15] can
circumvent high-level abstractions [1,25] and are thus
much more powerful than high-level attackers, which
means that the security reasoning has to be done at
the lowest level, which it is extremely difficult. An
alternative is to build a secure compiler that ensures
that low- and high-level attackers have exactly the
same power, allowing for easier, source-level security
reasoning [4, 19, 22, 32]. Formally, the notion of secure compilation is usually expressed as full abstraction of the translation [1]. Full abstraction is a much
stronger property than just compiler correctness [27].
Secure compilation is, however, very hard to
achieve in practice. Efficiency, which is crucial for
broad adoption [41], is the main challenge. Another
concern is transparency. While we want to constrain
the power of low-level attackers, the constraints we
set should be relaxed enough that there is a way for
all benign low-level environments to respect them. If
we are not transparent enough, the partial program
might be prevented from properly interacting with its
environment (e.g. the low-level libraries it requires).
For a compiler targeting machine code, which lacks
structure and checks, a typical low-level attacker has
write access to the whole memory, and can redirect
control flow to any location in memory [15]. Techniques have been developed to deal with such powerful attackers, in particular involving randomization [4] and binary code rewriting [16, 29]. The first
ones only offer weak probabilistic guarantees; as a
consequence, address space layout randomization [4]
is routinely circumvented in practical attacks [17,37].
The second ones add extra software checks which often come at a high performance cost.
Using additional protection in the hardware can result in secure compilers with strong guarantees [32],
1.2
Research Problem
Recent work [6, 32] has illustrated how protected
module architectures — a class of hardware architectures featuring coarse-grained isolation mechanisms [20, 28, 38] — can help in devising a fully abstract compilation scheme for a Java-like language.
This scheme assumes the compiler knows which components in the program can be trusted and which ones
cannot, and protects the trusted components from
the distrusted ones by isolating them in a protected
module.
This kind of protection is only appropriate when all
the components we want to protect can be trusted,
for example because they have been verified [5]. Accounting for the cases in which this is not possible,
we present and adopt a stronger attacker model of
mutual distrust: in this setting a secure compiler
should protect each component from every other component, so that whatever the compromise scenario
may be, uncompromised components always get protected from the compromised ones.
The main questions we address in this work are:
(1) can we build a fully abstract compiler to a micropolicy machine? and (2) can we support a stronger
attacker model by protecting mutually distrustful
4
allowing an easier specification for complex micropolicies, which can then still run on the current hardware. The second ones require actual hardware extensions. Both of these extensions keep the spirit of
micro-policies unchanged: Rules, in particular, are
still specified as a mapping from tags to tags.
Finally, as we mention in §6.1.2, we were able to
provide almost all security at the micro-policy level
rather than the compiler level. This is very encouraging because it means that we might be able to provide
full abstraction for complex compilers that already
exist, using micro-policies while providing very little
change to the compiler itself.
components against each other?
We are the first to work on question 1, and among
the first to study question 2: Micro-policies are a recent hardware mechanism [9, 14] that is flexible and
fine-grained enough to allow building a secure compiler against this strong attacker model. In independent parallel work [31,34], Patrignani et al. are trying
to extend their previous results [32] to answer question 2 using different mechanisms (e.g. multiple protected modules and randomization). Related work is
further discussed in §7.
1.3
Our Contribution
In this work we propose a new attacker model for
secure compilation that extends the well-known notion of full abstraction to ensure protection for mutually distrustful components (§2). We devise a secure
compilation solution (§4) for a simple object-oriented
language (§5.1) that defends against this strong attacker model. Our solution includes a simple compiler
chain (compiler, linker, and loader; §6.1) and a novel
micro-policy (§6.2) that protects the abstractions of
our simple language—class isolation, the method call
discipline, and type safety—against arbitrary lowlevel attackers. Enforcing a method call discipline
and type safety using a micro-policy is novel and constitutes a contribution of independent interest.
We have started proving that our compiler is secure, but since that proof is not yet finished, we do
not present it in the report. Section 8.2 explains
why we have good hopes in the efficiency and transparency of our solution for the protection of realistic
programs. We also discuss ideas for mitigation when
our mechanism is not transparent enough. However,
in both cases gathering evidence through experiments
to confirm our hopes is left for future work.
1.4
2
Stronger Attacker Model for Secure Compilation of Mutually
Distrustful Components
Previous work on secure compilation [4,19,22,32] targets a property called full abstraction [1]. This section presents full abstraction (§2.1), motivates why
it is not enough in the context of mutually distrustful components (§2.2), and introduces a stronger attacker model for this purpose (§2.3).
2.1
Full Abstraction
Full abstraction is a property of compilers that talks
about the observable behaviors of partial programs
evolving in a context. When we use full abstraction
for security purposes, we will think of contexts as attackers trying to learn the partial program’s secrets,
or to break its internal invariants. Full abstraction
relates the observational power of low-level contexts
to that of high-level contexts. Hence, when a compiler achieves full abstraction, low-level attackers can
be modeled as high-level ones, which makes reasoning about the security of programs much easier: Because they are built using source-level abstractions,
high-level attackers have more structure and their interaction with the program is limited to that allowed
by the semantics of the source language.
In order to state full abstraction formally, one first
has to provide definitions for partial programs, contexts, and observable behaviors both in the highand the low-level. Partial programs are similar to
usual programs; but they could still be missing some
elements—e.g. external libraries—before they can be
executed. The usual, complete programs can be seen
as a particular case of partial programs which have no
missing elements, and are thus ready for execution.
Other Insights
Throughout this work, we reasoned a lot about abstractions. One insight we gained is that even very
simple high-level languages are much more abstract
than one would naively expect. Moreover, we learned
that some abstractions — such as functional purity
— are impossible to efficiently enforce dynamically.
We also needed to extend the current hardware and
formalism of micro-policies (§5.3) in order to achieve
our challenging security goal. We needed two kinds
of extensions: some only ease micro-policy writing,
while the others increase the power to the monitoring
mechanism. The first ones require a policy compiler,
5
and communicated to the compiler chain, which can
insert a single protection barrier between between the
two. In particular, in the definition of full abstraction
the compiler is only ever invoked for the protected
program (P↓ or Q↓), and can use this fact to its advantage, e.g. to add dynamic checks. Moreover, the
definition of a[p] (low-level linking) can insert a dynamic protection barrier between the trusted p and
the untrusted a. For instance, Patrignani et al. [32]
built a fully abstract compilation scheme targeting
protected module architectures by putting the compiled program into a protected part of the memory
(the protected module) and giving only unprotected
memory to the context. A single dynamic protection
barrier is actually enough to enforce the full abstraction attacker model.
A context is usually defined as a partial program with
a hole; this hole can later be filled with a partial program in order to yield a new partial program. Finally,
observable behaviors of complete programs can vary
depending on the language and may include, termination, I/O during execution, final result value, or
final memory state.
The chosen definition for contexts will set the granularity at which the attacker can operate. Similarly,
defining the observable behaviors of complete programs can affect the observational power of the attacker in our formal model. The attacker we want to
protect the program against is the context itself: The
definition we choose for observable behaviors should
allow the context to produce an observable action every time it has control, thus letting it convert its
knowledge into observable behaviors. In our case,
our source and target languages feature immediate
program termination constructs. We can thus choose
program termination as an observable behavior which
models such strong observational power.
We denote high-level partial programs by P , Q,
and high-level contexts by A. We denote by A[P ]
the partial program obtained by inserting a high-level
partial program P in a high-level context A. We denote low-level partial programs by p, q, and high-level
contexts by a. We denote by a[p] the insertion of a
low-level partial program p in a low-level context a.
Given a high-level partial program P , we denote by
P ↓ the low-level program obtained by compiling P .
We denote the fact that two complete high-level programs P and Q have the same observable behavior by
P ∼H Q. For two complete low-level programs p and
q, we denote this by p ∼L q. With these notations,
full abstraction of the compiler is stated as

(∀ A, A[P] ∼H A[Q]) ⇐⇒ (∀ a, a[P↓] ∼L a[Q↓])

for all P and Q. Put into words, a compiler is fully abstract when any two high-level partial programs P and Q behave the same in every high-level context if and only if the compiled partial programs P↓ and Q↓ behave the same in every low-level context. In other words, a compiler is fully abstract when a low-level attacker is able to distinguish between exactly the same programs as a high-level attacker.

Intuitively, in the definition of full abstraction the trusted compiled program (P↓ or Q↓) is protected from the untrusted low-level context (a) in such a way that the context cannot cause more harm to the program than a high-level context (A) already could. This static separation between the trusted compiled program and the context is in practice chosen by the user.

2.2 Limitations of Full Abstraction

We study languages for which programs can be decomposed into components. Real-world languages have good candidates for such a notion of component: depending on the granularity we target, components could be packages, modules, or classes. Our compiler is such that source components are separately compilable program units, and compilation maps source-level components to target-level components.

When using a fully abstract compiler in the presence of multiple components, the user must choose whether each component written in the high-level language is trusted, in which case it is considered part of the program, or untrusted, in which case it is considered part of the context. If a component is untrusted, it might as well be compiled with an insecure compiler, since the fully abstract compiler only provides security to components that are on the good side of the protection barrier. If the program includes components written in the low-level language, e.g. for efficiency reasons, then the user generally has no choice but to consider these components untrusted. Because of the way full abstraction is stated, low-level components that are not produced by the compiler cannot be part of the trusted high-level program, unless they have at least a high-level equivalent (we discuss this idea in §2.3).

Figure 1 graphically illustrates how full abstraction could be applied in a multi-component setting. Components C1, C2, and C3 are written in the high-level language, while c4 and c5 are written in the low-level one. Suppose the user chooses to trust C1 and C2 and not to trust C3; then the compiler will introduce a single barrier protecting C1 and C2 from all the
other components.

[Figure 1: Full abstraction for multiple components — the user-chosen separation places the trusted components C1 and C2 behind the barrier produced by fully abstract compilation (C1↓, C2↓), while the distrusted components C3 (compiled with an optimizing compiler, C3↓opt), c4, and c5 sit on the context side.]

[Figure 2: Secure compilation for mutually distrustful components — every component (C1↓, C2↓, C3↓, c4, c5) is compiled and protected individually, and attacks may come from any compromised component.]
There are two assumptions on the attacker model
when we take full abstraction as a characterization
of secure compilation: the user correctly identifies
trusted and untrusted components so that (1) trusted
components need not be protected from each other,
and (2) untrusted components need no protection
whatsoever. We argue that there are common cases
in which the absolute, binary trust notion implied by
full abstraction is too limiting (e.g. there is no way to
achieve all the user’s security goals), and for which a
stronger attacker model protecting mutually distrustful components is needed.
Assumption (1) is only realistic if all trusted components are memory safe [13] and do not exhibit C-style undefined behaviors. Only when all trusted
components have a well-defined semantics in the
high-level language is a fully abstract compiler required to preserve this semantics at the low level.
Memory safety for the trusted components may follow either from the high-level language being memory safe as a whole, or from the components having been verified to be memory safe [5]. In the typical case of unverified C code, however, assumption (1) can be unrealistically strong: the user cannot realistically be expected to decide which components are memory safe. If he makes the wrong choice, all bets are off for security; a fully abstract compiler can produce code in which a control-hijacking attack [15, 41] in one trusted component can take over all the rest.
While we are not aware of any fully abstract compiler
for unverified C, we argue that if one focuses solely on
achieving the full abstraction property, such a compiler could potentially be as insecure in practice as
standard compilers.
Even in cases where assumption (1) is acceptable,
assumption (2) is still a very strong one. In particular, since components written in the low-level language cannot get protection, every security-critical
component would have to be written in the high-level
source language, which is often not realistic. Compiler correctness would be sufficient on its own if all
components could be written in a safe high-level language. The point in moving from compiler correctness to full abstraction, which is stronger, is precisely
to account for the fact that some components have
to be written in the low-level language, e.g. for performance reasons.
Assumption (2) breaks as soon as we consider that
it makes a difference whether the attacker owns one
or all the untrusted components. As an example, assume that an attacker succeeds in taking over an untrusted component that was used by the program to
render the picture of a cat. Would one care whether
this allows the attacker to also take over the low-level
cryptographic library that manages private keys? We
believe that the cryptographic library, which is a
security-critical component, should get the same level
of protection as a compiled component, even if for efficiency it is implemented in the low-level language.
When assumption (1) breaks, trusted components
need to be protected from each other, or at least from
the potentially memory unsafe ones. When assumption (2) breaks, untrusted security-critical components need to be protected from the other untrusted
components. In this work, we propose a stronger attacker model that removes both these assumptions by
requiring all components to be protected from each
other.
2.3
Mutual Distrust Attacker Model
We propose a new attacker model that overcomes the
previously highlighted limitations of full abstraction.
In this attacker model, we assume that each component could be compromised and protect all the other
components from it: we call it an attacker model for
mutually distrustful components. This model can provide security even in C-like unsafe languages when
some of the high-level components are memory unsafe or have undefined behaviors. This is possible if
the high-level semantics treats undefined behavior as
arbitrary changes in the state of the component that
triggered it, rather than in the global state of the program. In the following we will assume the high-level
language is secure.
All compiled high-level components get security
unconditionally: the secure compiler and the dynamic barriers protect them from all low-level attacks, which allows reasoning about their security in
the high-level language. For low-level components to
get security, they have to satisfy additional conditions, since the protection barriers alone are often not enough: the compiler might insert boundary checks, clean registers, etc., and the low-level code still needs to do these things on its own in order to get full protection. Slightly more
formally, in order for a low-level component c to get
security it must behave in all low-level contexts like
some compiled high-level component C↓. In this case
we can reason about its security at the high level by
modelling it as C. This captures the scenario in which
c is written in the low-level language for efficiency
reasons.
We illustrate our stronger attacker model in figure 2. The protected program is the same as in the
previous full abstraction diagram of figure 1. This
time, however, the user doesn’t choose a trust barrier: all components are considered mutually distrustful instead. Each of them gets protected from the
others thanks to barriers inserted by the compiler.
While components C3, c4, and c5 were distrusted
and thus not protected in the previous diagram, here
all of them can get the same amount of protection
as other components. To get security, C3 is compiled using the secure compiler, while for c4 and c5
security is conditioned on equivalence to high-level
components; in the figure we assume this only for c5.
The attacker can compromise arbitrary components
(including high-level compiled components), e.g. C2↓
and c4 in the diagram. In this compromise scenario,
we ensure that the uncompromised components C1↓,
C3↓, and c5 are protected from all low-level attacks
coming from the compromised components. In general, our attacker model defends against all such compromise scenarios.
To sum up, our attacker model can be stated as follows: (a) the attacker compromises with component granularity, (b) the attacker may compromise any set of components, (c) in every compromise scenario, each uncompromised compiled high-level component is secure against low-level attacks from all compromised components, and (d) in every compromise scenario, each uncompromised low-level component that has a high-level equivalent is secure against low-level attacks from all compromised components.
3 Micro-Policies and the PUMP: Efficient Tag-Based Security Monitors
We present micro-policies [9, 14], the mechanism we
use to monitor low-level code so as to enforce that
our compiler is secure. Micro-policies [9, 14] are
a tag-based dynamic protection mechanism for machine code. The reference implementation on which
micro-policies are based is called the PUMP [14] (Programmable Unit for Metadata Processing).
The PUMP architecture associates each piece of
data in the system with a metadata tag describing its
provenance or purpose (e.g. “this is an instruction,”
“this came from the network,” “this is secret,” “this is
sealed with key k”), propagates this metadata as instructions are executed, and checks that policy rules
are obeyed throughout the computation. It provides
great flexibility for defining policies and puts no arbitrary limitations on the size of the metadata or the
number of policies supported. Hardware simulations
show [14] that an Alpha processor extended with
PUMP hardware achieves performance comparable
to dedicated hardware on a standard benchmark suite
when enforcing either memory safety, control-flow integrity, taint tracking, or code and data separation.
When enforcing these four policies simultaneously,
monitoring imposes modest impact on runtime (typically under 10%) and power ceiling (less than 10%),
in return for some increase in energy usage (typically
under 60%) and chip area (110%).
The reference paper on micro-policies [9] generalizes previously used methodology [8] to provide a
generic framework for formalizing and verifying arbitrary policies enforceable by the PUMP architecture. In particular, it defines a generic symbolic machine, which abstracts away from low-level hardware
details and serves as an intermediate step in correctness proofs. This machine is parameterized by a symbolic micro-policy, provided by the micro-policy designer, that expresses tag propagation and checking
in terms of structured mathematical objects rather
than bit-level concrete representations. The micro-policies paper also defines a concrete machine which
is a model of PUMP-like hardware, this time including implementation details.
The proposed approach to micro-policy design and
verification is presented as follows. First, one designs a reference abstract machine, which will serve
as a micro-policy specification. Then, one instantiates the generic symbolic machine with a symbolic
micro-policy and proves that the resulting symbolic
machine refines the abstract machine: the observable
behaviors of the symbolic machine are also legal behaviors of the abstract machine, and in particular the
symbolic machine fail-stops whenever the abstract
machine does. Finally, the symbolic micro-policy is
implemented in low-level terms, and one proves that
the concrete machine running the micro-policy implementation refines the symbolic machine.
In this work, we use a slightly modified symbolic
machine as the target of our secure compiler. Our
symbolic machine differs from the previous one [9] in
two ways: First, its memory is separated into regions,
which are addressed by symbolic pointers. Note, however, that our protection does not rely on this separation but only on the tagging mechanism itself: in
particular, all components can refer to all symbolic
pointers, without restriction. Mapping memory regions to concrete memory locations before executing
the program on a concrete machine would be a main
task of a loader — we leave a complete formalization
of loading for future work. Second, we extend the
micro-policy mechanism itself, allowing rules to read
and write the tags on more components of the machine state. We detail these extensions in section 5.3,
which is dedicated to our target machine. We also
leave for future work the implementation of these additional tags in the PUMP rules, their formalization
in an extended concrete machine, and the extension
of our results to the concrete level.
4 Compilation Chain Overview

In this section, we present our compiler and give intuition about the connections between the different parts that play a role in our solution.

Our compilation chain, which we present in figure 3, splits into a two-step compiler, a linker and a loader. It produces a program to execute on the symbolic micro-policy machine. Our dedicated protection micro-policy will be loaded into the machine, allowing proper runtime monitoring of the program.

[Figure 3: Overview of the compilation chain — high-level components and type interfaces pass through compiler phase I and compiler phase II; together with low-level components, type interfaces, and the protection micro-policy they are linked into a complete program with type interfaces, which the loader turns into a tagged complete program running on the symbolic micro-policy machine to produce the result.]

The compilation chain takes source components (e.g. a main program and standard libraries) and target components (e.g. low-level libraries) as input, and outputs a target-executable program. Components must all come with interface specifications, written in a common interface language. These interfaces specify the definitions that each component provides, and the definitions it expects other components to provide.

In the compilation phase, the compiler first translates source components to an intermediate representation, which then gets translated to target components.

In the linking phase, the linker checks that the interfaces of the components being linked are compatible. It also makes sure that all components only give definitions under names that follow from their interfaces, and symmetrically that they do provide a definition for each of these names. If so, the linker puts them together to form a partial program, and makes sure that this partial program is actually complete (i.e. no definition is missing).

In the loading phase, the loader builds an initial machine state out of this complete program by tagging its memory using type information that was gathered by the linker. The result is thus a target-level executable program — i.e. a tagged symbolic machine program. The loader's tagging will force all components to correctly implement the interfaces that were provided for them when we later run and monitor them using our protection micro-policy: the machine will failstop as soon as a component violates its interface upon interaction with another component (violations on internal computational steps are not harmful and hence allowed).

Because we required that every component should have an interface, low-level libraries that were compiled using other compilers — e.g. C compilers — are only usable in our system if somebody writes interfaces for them. In section 8 we discuss more generally the need for manual or automated wrapping of low-level code; once we have a way to wrap low-level code, providing a matching interface will be easy.
5 Languages and Machines

In this section, we present and formalize the languages that are used in the compilation scheme. We introduce a simple object-oriented language, an abstract stack machine, and an extended symbolic micro-policy machine with segmented memory. The first will be our source language and the last our target machine, while the intermediate abstract machine offers a different view of the source language, one which makes the connection with the low level more direct. The source language includes constructs for specifying the interfaces of components; these get reused as-is at all levels.

5.1 Source Level: An Object-Oriented Language

We first formalize our source language, beginning with the interface constructs that it offers and then presenting its syntax and semantics. The source language we consider is an imperative class-based object-oriented language with static objects, private fields, and public methods. It is inspired by previous formalizations of Java core calculi [10, 21] and Java subsets [23], with the aim of keeping things as simple as possible. As a consequence, we do not handle inheritance nor dynamic allocation, which are available in all these works.

We start with the simplicity of Featherweight Java [21], and add imperative features in the way Middleweight Java [10] does. However, we do not add as many imperative features: just branching, field update and early termination (the latter is not a feature of Middleweight Java). The resulting language is similar to Java Jr. [23] with early termination, but without packages: our granularity for components is that of classes instead.

Example components which encode some usual types are provided in appendix section A.1, and could help in getting familiar with this language.

5.1.1 Interfacing: A Specification Language for Communicating Classes

The notion of component in this work is that of a class c together with all its object instances' definitions. Because we have no dynamic allocation, for the moment these instances are simply all the static objects defined with type c. To allow interaction between components while being able to separately compile them, we have a simple interface syntax based on import and export declarations. This interface language gives the external view on source components and is presented in figure 4.

IDT ::= DT                              import declaration tables
EDT ::= DT                              export declaration tables
DT  ::= (CDT, ODT)                      declaration tables
CDT ::= [] | {c ↦ CD} :: CDT            class declaration table
CD  ::= class decl {MD1, ..., MDk}      class declaration
MD  ::= cr (ca)                         method declaration
ODT ::= [] | {o ↦ OD} :: ODT            object declaration table
OD  ::= obj decl c                      object declaration

Figure 4: Interface language syntax

Syntax and Naming Conventions   Object names o and class names c are global and assumed to be arbitrary natural numbers. The two main syntactic constructs in the interface language are class declarations and static object declarations. Class declarations specify public methods with argument and result types without providing their bodies; no fields are declared in an interface because we only consider private fields in our languages. Static object declarations specify an object to be an instance of a given class, without providing the values of its fields.

The interface of a partial program at all levels is composed of an import declaration table IDT specifying the class and object definitions it expects other
components to provide, and an export declaration table EDT which declares the definitions that this partial program provides. Export and import declaration tables share common syntax and are composed
of class and object declarations. The type of the declared objects must come from the classes specified
by the partial program: defining object instances of
classes coming from other components is not allowed.
Intuitively, our object constructors (and fields) are
private and the only way to interact with objects is
via method calls.
In contrast with objects and classes to which we
refer using global names, methods are class-wise indexed: the methods m of a class c are referred to
as 1, . . . , k following the order of their declarations.
(The same goes for fields f , below.) The syntax we
consider for names can be thought of as coming out
of a parser, that would take a user-friendly Java-like
syntax and perform simple transformations so that
the names match our conventions.
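To make the interface constructs concrete, consider a hypothetical component (all numeric names below are made up for illustration): a class 1 implementing a mutable cell that stores objects of some other class 2, exporting one static instance o and importing class 2 together with an object o′ that it uses as the initial field value. Its interface could be written as

EDT = ({1 ↦ class decl {2 (2), 2 (2)}}, {o ↦ obj decl 1})
IDT = ({2 ↦ class decl {...}}, {o′ ↦ obj decl 2})

where the export table declares the two methods of class 1 (both of signature 2 (2), i.e. taking and returning objects of class 2) and the instance o, while the import table records the expectations on the other component; the method declarations of the imported class 2 are elided here.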
Use at Linking and Loading   We have presented the roles of the linker and the loader when we introduced the compilation chain in section 4: we can model linking as an operation that takes two target partial programs and their interfaces, and yields a new partial program which contains all definitions from both partial programs, with a new matching interface. Loading then takes a complete target program and tags it, yielding a machine state that can be reduced using the semantics of our symbolic micro-policy machine. Let us now explain how interfaces are used at linking and loading.

A class import declaration gives the precise type signatures that the partial program expects from the methods of the corresponding class: when linking against a partial program that defines this class, the class export declaration should exactly match the import one. Similarly, an import object declaration gives the expected type for this object, and should match the corresponding export declaration when linking against a partial program that defines it.

Two partial programs have compatible interfaces when (1) they don't both have export declarations for the same class or the same object, and (2) for every name in an import declaration of either program, if the other program happens to have an export declaration for this name, then the import and export declarations are syntactically equal. Linking two partial programs with compatible interfaces yields a new partial program with updated import/export declarations: export declarations are combined, and import declarations that found matching export declarations are removed. When all partial programs have been linked together, the linker can check that there are no remaining import declarations to make sure that the program is complete.

Finally, the loader makes use of the export declarations to ensure that programs comply with the export declarations they made: in the untyped target language, the loader sets the initial memory tags in accordance with the export declarations, which will allow our micro-policy to perform dynamic type checking. This will be further explained in section 6.2.3.

P, Q ::= (IDT, T, EDT)                        source program
T    ::= (CT, OT)                             definition tables
CT   ::= [] | {c ↦ C} :: CT                   class definition table
C    ::= class {c1, ..., ck ; M1, ..., Ml}    class definition
M    ::= cr (ca){e}                           method definition
e    ::= this | arg | o | e.f | e.f := e′ | e.m(e′)       expressions
       | e == e′ ? e″ : e‴ | exit e | e; e′
OT   ::= [] | {o ↦ O} :: OT                   object definition table
O    ::= obj c{o1, ..., ok}                   object definition

Figure 5: Source language syntax

5.1.2 Source Syntax and Semantics

The syntax of our source language is presented in figure 5. The two main syntactic constructs in this language are class definitions and static object definitions. Class definitions declare typed private fields and define public methods with argument and result types as well as an expression which serves as a method body. Static object definitions define instances of defined classes by providing well-typed values for the fields. For simplicity, methods only take
one argument: this does not affect expressiveness because our language is expressive enough to encode tuple types (appendix section A.1 shows examples that
encode more complex types than tuple types).
Most expressions are not surprising for an objectoriented language: apart from static object references o and variables (this for the current object
and arg for the method’s argument), we have support for selecting private fields, calling public methods, and testing object identities for equality. The
language also features field update and early termination. Both are crucial for modeling realistic low-level
attackers in our high-level language: Low-level attackers can indeed keep information between calls using the memory and stop the machine whenever they
have control. We thus add primitives that can model
this in the high-level: field updates enable methods
to have state (methods are not pure functions anymore), and early termination allows an attacker to
prematurely end the program.
As we already mentioned, fields are private and
methods are public. This means that in the method
body of a given class, the only objects whose fields
may be accessed are the instances of that specific
class. The only way to interact with object instances
of other classes is to perform a method call.
The only values in the language are object names,
and the only types are class names. The language
comes with a type system that ensures that object
and method definitions match with the types that
were declared for them. Our language does not feature dynamic allocation, inheritance, or exceptions.
We hope to add some of these features in the future.
Loops are simulated using recursive method calls and
branching is done via object identity tests.
The semantics of the source language is standard
and is given in appendix A.2.
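As a small illustrative example in this abstract syntax (the numeric names are made up; appendix A.1 contains the actual example components), the cell-like class sketched in §5.1.1 could be defined as follows, together with one static instance whose field is initialized to an imported object o′ of class 2:

CT = {1 ↦ class {2 ; 2 (2){this.1}, 2 (2){this.1 := arg}}}
OT = {o ↦ obj 1{o′}}

Method 1 ignores its argument and returns the stored field, while method 2 overwrites the field with the method argument; assuming, as the compilation scheme of §6.1.1 suggests, that a field update evaluates to the assigned value, both bodies are expressions of the declared result type 2.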
5.2 Intermediate Level: An Object-Oriented Stack Machine

Our intermediate machine is an object-oriented stack machine with one local stack per class. The syntax for intermediate machine programs is presented in figure 6. The main syntactic construct is that of a compartment, which is the notion of component at this level. A compartment combines a class definition with all the object instances of this class and with a local stack.

IP, IQ ::= (IDT, ICT, EDT)                             intermediate program
ICT    ::= [] | {c ↦ IC} :: ICT                        compartment table
IC     ::= {IM1, ..., IMk ; LOT ; LS}                  compartment declaration
IM     ::= Icode                                       method body
Icode  ::= [] | Iinstr ; Icode                         intermediate machine code
Iinstr ::= Nop | This | Arg | Ref o | Sel f | Upd f    machine instruction
         | Call c m | Ret | Skip n | Skeq n | Drop | Halt
LOT    ::= [] | {o ↦ LO} :: LOT                        compartment local object table
LO     ::= (o1, ..., ol)                               local instance definition
LS     ::= [] | o :: LS                                local stack

Figure 6: Intermediate language syntax

The main difference with respect to the source language is that instead of expressions, method bodies
are expressed as sequences of instructions in abstract
machine code. These instructions manipulate values
stored on the local stack associated with each compartment.
Nop does nothing. This, Arg and Ref o put an
object on the local stack — the current object for
This, the current method argument for Arg, and
object o for Ref o. Sel f pops an object from the
stack, selects field f of this object and pushes the selected value back to the stack. Upd f pops a value
and an object, sets the f field of the object to the
popped value, then pushes back this value on the
stack. Call c m pops an argument value and a target
object o, performs a call of the object o’s m method
with the popped argument if o has type c (otherwise,
the machine failstops). The callee can then use the
Ret instruction to give control back to the caller: this
instruction pops a value from the callee’s stack and
pushes it on the caller’s stack, as a result value for the
call. Skip n skips the n next instructions. Skeq n
pops two values from the local stack and either skips
the n next instructions if the values are equal, or does
nothing more if they are different. Drop removes the
top value from the stack. Halt halts the machine
immediately, the result of the computation being the
current top value on the stack.
The purpose of this intermediate language is to
ease going low-level. In particular, its syntax with the
Call instruction being annotated with a class makes
explicit the fact that method calls are statically resolved by the source to intermediate compiler. This
is possible in our source language, because we have
no inheritance.
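For instance (continuing the hypothetical cell component of §5.1, with made-up names), the two method bodies of its compartment could be written as the following intermediate code, where 1 is the cell's single field:

IM1 = This; Sel 1; Ret          (* return the stored field *)
IM2 = This; Arg; Upd 1; Ret     (* overwrite the field with the argument and return it *)

In the first body, This pushes the current object, Sel 1 replaces it on the stack with the value of its field 1, and Ret hands that value back to the caller; the second body additionally pushes the argument so that Upd 1 can perform the update before the new value is returned.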
5.3 Target Level: An Extended Symbolic Micro-Policy Machine

Here, we present the target of our compiler: an extended symbolic micro-policy machine with segmented memory. We first recall the common basis shared by our machine and the symbolic machine presented in the original micro-policies paper [9], then present and comment on the differences between them.

5.3.1 Symbolic Micro-Policy Machine

A symbolic micro-policy machine [9] is an executable model of micro-policies that abstracts away from some of the implementation details (e.g. the implementation of the micro-policy monitor in machine code). The definition of a symbolic micro-policy machine is abstracted over a symbolic micro-policy.

In our case, a symbolic micro-policy is defined as a collection of symbolic tags, which are used to label instructions and data, and a transfer function, which is invoked on each step of the machine to propagate tags from the current machine state to the next one. We ignore the monitor services of Azevedo de Amorim et al. [9] and the extra pieces of state which are only used by monitor services, because we don't need them: we successfully avoid monitor services, which in the context of micro-policies are much more expensive than rules. The transfer function is a mathematical function that defines the micro-policy rules; in the mechanized metatheory of the original micro-policies paper this function is written in Gallina, the purely functional language at the heart of the Coq proof assistant.

A machine state of our symbolic micro-policy machine is composed of a memory, a register file of general-purpose registers, and a program counter register pc. The program counter register points to a location in memory which contains the next instruction to execute.

mem   ::= [] | {loc ↦ R} :: mem                              memory
loc   ::= objl o | methl c m | stackl c                      region symbolic location
R     ::= [] | (word @ tmem) :: R                            tagged memory region
word  ::= n | loc + n | encode instr                         symbolic machine word
instr ::= Nop | Const i rd | Mov rs rd | Binopop r1 r2 rd    machine instruction
        | Load rp rd | Store rp rs | Jump r | Jal r | Bnz r i
        | Halt
i     ::= n | loc + n                                        immediate value

Figure 7: Symbolic machine memory

We present the list of instructions in figure 7, together with other memory-related definitions on which we will focus when we later explain the segmented memory. These instructions are those of the machine code of the original micro-policies paper [9]: Nop does nothing. Const i rd puts an immediate constant i into register rd. Mov rs rd copies the contents of rs into rd. Binopop r1 r2 rd performs a binary operation op (e.g. addition, subtraction, equality test) on the contents of registers r1 and r2, and puts the result in register rd. Load rp rd copies the content of the memory cell at the memory location stored in rp to rd. Store rp rs copies the content of rs to the memory cell at the memory location stored in rp. Jump and Jal (jump-and-link) are unconditional indirect jumps, while Bnz r i branches to the fixed offset i (relative to the current pc) if register r is nonzero. Halt halts the machine immediately. In the following, we denote Binop+ r1 r2 rd (addition) by Add r1 r2 rd, and Binop− r1 r2 rd (subtraction) by Sub r1 r2 rd.

5.3.2 Extensions to Monitoring Mechanism

We consider a more powerful symbolic micro-policy machine that allows the transfer function to inspect and update more tags.

First, we assume that the transfer function produces new tags for the input arguments of the instructions, not only for the output one. This is required, for instance, to transfer a linear capability from an input to an output: one wants not only to copy the capability in the output tag, but also to erase it from the input tag.

Second, we assume that there are some fixed registers whose tags can always be checked and updated by the transfer function, even if the registers are neither input nor output to the current instruction. This allows us to efficiently clean these fixed registers upon calls and returns.

Third, we assume that the transfer function receives as an argument not only the tag of the current instruction, but also the tag on the next instruction. For instance, when monitoring a Jump instruction, we assume the tag on the targeted location can be checked. This extension allows us to write and explain our micro-policy in a much simpler way.

The first two extensions require extra hardware support. For the last extension, however, we conjecture that our micro-policy — and probably other similar micro-policies — can be transformed into a policy which doesn't have to perform next-instruction checks. This kind of translation has already been done by hand, for example in a previous compartmentalization micro-policy [9]: the next-instruction checks are replaced by current-instruction checks happening on the next step, making the machine failstop one step later in the case of an error. We leave for future work the writing of a policy compiler doing such a transformation automatically.

5.3.3 Easing Component-Oriented Reasoning: Segmented Memory

Instead of having a monolithic word-addressed memory, the machine we consider has a segmented memory which consists of several memory regions addressed by symbolic locations. Targeting such a machine allows for easy separate compilation, and is a pragmatic intermediate step on the way towards real machine code, which we plan to take in the future.

As presented in figure 7, the definition of memory directly mentions symbolic locations. A generic symbolic machine definition would be abstracted over the definition of symbolic locations, but in our case we define them to be either method, object, or stack locations, for reasons that will become clear when we present our compiler in section 6.1.2. Our instantiation of memory tags tmem will be studied with the other definitions related to the symbolic micro-policy, in section 6.2.

Immediate constants and words in the symbolic machine are extended to include symbolic locations with an offset, which are memory addresses: the k memory cells of a region located at loc are addressed from loc + 0 to loc + (k − 1). In the following, we use the simpler notation loc for loc + 0.

Words are also extended to include a new encode instr construct: decoding instructions in this machine is a symbolic operation of deconstructing such a word. Now that instructions feature symbolic locations with an offset as immediate values, it would indeed have no practical meaning to try to extend the encoding of instructions to them. When we use the PUMP in future work, some of the symbolic-level instructions could have to be compiled to sequences of PUMP instructions: PUMP memory addresses, the PUMP equivalent of symbolic locations with an offset, are word-sized and thus too big to fit in a PUMP immediate value. Another solution would be to restrict addressable memory to less than a full word; the symbolic encoding allows us to delay the choice between these solutions.

Tags are not affected by any of the changes that we listed, hence the monitoring mechanism isn't affected either. The semantics, however, is affected in the following way: trying to decode an instruction out of a regular word or of a symbolic location with an offset failstops the machine. All instructions failstop when one of their operands has the form encode instr. A Jump, Jal, Load or Store instruction used with a regular word for the pointer failstops the machine. These instructions keep their usual behavior when provided with a symbolic location and a valid offset; if the offset does not correspond to a valid memory address, however, the machine failstops. Most binary operations failstop when the first or second operand is a symbolic location with an offset: exceptions are 1) addition and subtraction with regular words, when the first operand is the location with an offset, which respectively increment and decrement the offset accordingly; and 2) equality tests between symbolic locations with offsets. Finally, the Bnz instruction failstops when the provided register or immediate value is a symbolic location with an offset.

The syntax for symbolic machine programs is presented in figure 8. Symbolic machine programs define a memory which is like the one of the symbolic machine, except that cells are not tagged: the tags for the memory will be provided by the low-level loader, which will be detailed in the next section.

LP, LQ ::= (IDT, LPmem, EDT)                low-level program
LPmem  ::= [] | {loc ↦ LPR} :: LPmem        program memory
LPR    ::= [] | word :: LPR                 program memory region
i      ::= n | loc + n                      immediate value

Figure 8: Symbolic machine program syntax

6 Our Solution: Protecting Compiled Code with a Micro-Policy

In this section, we present our solution for the secure compilation of mutually distrustful components: first we describe our simple two-step compiler, then we present our micro-policy dynamically protecting components from each other.

6.1 Two-Step Compilation

We start with our two-step compilation: first the compilation of source programs to intermediate machine programs, then the one of intermediate machine programs to target machine programs.

6.1.1 From Source to Intermediate

Our type-preserving source to intermediate compiler is a mostly direct translation of source expressions to abstract machine instructions, which gives a lower-level view on source components. Nothing in the translation is related to security: we provide security at this level by giving appropriate semantics to intermediate-level instructions, which makes them manipulate local stacks and local object tables rather than a single global stack and a single object table. In this translation, we statically resolve method calls, which is possible because our language doesn't feature inheritance.

A high-level component is easily mapped to an intermediate compartment: method bodies are compiled one by one to become intermediate-level method bodies. Object definitions corresponding to that component are taken from the global object table OT and put in the local object table LOT of the compartment. Finally, the stack is initialized to an empty stack.

Compiling Source Expressions to Stack Machine Code   Assuming that method m of class c in program P has definition cr (ca){e}, compilation is defined as follows:

C (P, c, m) = A (e); Ret

The A function is recursively defined as presented in figure 9; we allow ourselves to refer to P, c and m in this definition. We denote by P; c, m ⊢ e : c′ the predicate indicating that expression e has type c′ when typed within method m of class c from program P. In the compilation of method calls, we assume a type inference algorithm which, given P, c and m, finds the unique type c′ such that P; c, m ⊢ e : c′. We use it to statically resolve method calls by annotating intermediate-level call instructions. In this document, we do not present the type system nor the type inference algorithm for our source language, which are standard.
A (this) = This
A (arg) = Arg
A (o) = Ref o
A (e.f) = A (e); Sel f
A (e.f := e′) = A (e); A (e′); Upd f
A (e.m′(e′)) = A (e); A (e′); Call c′ m′
    where c′ satisfying P; c, m ⊢ e : c′ is found by type inference
A (e1 == e2 ? e3 : e4) = A (e1); A (e2);
    Skeq (|A (e4)| + 1);
    A (e4); Skip |A (e3)|;
    A (e3);
    Nop
A (e; e′) = A (e); Drop; A (e′)
A (exit e) = A (e); Halt

Figure 9: Compiling source expressions to intermediate machine code

The invariant used by the compilation is that executing A (e) at the intermediate level will either diverge — when evaluating e in the high level diverges — or terminate with exactly one extra object on the local stack — in which case this object is exactly the result of evaluating e. In a method body, this object on top of the stack can then be returned as the result of the method call, which is why A (e) is followed by a Ret instruction in the main compilation function C.

With this invariant in mind, the translation is rather straightforward, which is not surprising since our abstract stack machine was designed with this goal in mind. An important point is that we keep the evaluation order of the source language: left to right. It matters because side effects and early termination are available in our language.

Let us explain the only non-trivial expression to compile: the object identity test (e1 == e2 ? e3 : e4), for which we use two branching instructions, Skeq (|A (e4)| + 1) and Skip |A (e3)|. Here, we denote by |A (e)| the length of the sequence of instructions A (e). With equal objects, executing Skeq (|A (e4)| + 1) will skip the code corresponding to e4 and the Skip |A (e3)| instruction, hence branching directly to the code corresponding to e3 to execute it. With different objects, executing Skeq (|A (e4)| + 1) will do nothing and execution will proceed with the code corresponding to e4, followed by a Skip |A (e3)| instruction which will unconditionally skip the code corresponding to e3, hence branching to the Nop instruction. The effect of all this is that in the end, we have executed e1 and e2 in this order, popped the resulting objects from the stack, and either executed e3 if they were equal or e4 if they were different: we execute the appropriate code in both cases.
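As a quick made-up example, if all of e1, ..., e4 are plain object references o1, ..., o4 (so that each A (ei) is the single instruction Ref oi and has length 1), the scheme above yields

A (o1 == o2 ? o3 : o4) = Ref o1; Ref o2; Skeq 2; Ref o4; Skip 1; Ref o3; Nop

If the two popped objects are equal, Skeq 2 skips Ref o4 and Skip 1 so that execution resumes at Ref o3; otherwise o4 is pushed and Skip 1 jumps over Ref o3. In both cases exactly one of the two results ends up on top of the stack before the trailing Nop.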
6.1.2 From Intermediate to Target

We now present our unoptimizing, type-preserving translation from intermediate-machine compartments to target-level components. Target-level compartments are defined as sets of untagged symbolic machine memory regions. As in the source to intermediate compilation, the translation itself is rather standard. The exception is that components cannot trust other components to follow conventions such as not messing with a global call stack, or not modifying some registers. As a consequence, components use local stacks, and all relevant registers need to be (re)initialized when the compiled component takes control. Other than that, all security is enforced by means of instruction-level monitoring (§6.2).

Object Compilation   Each object o that was assigned a definition (o1, ..., ol) now gets its own region in target memory. This region is assigned symbolic location objl o and spans l memory cells. These cells are filled with the objl o1, ..., objl ol symbolic locations — which are the addresses of these objects in memory.

Local Stack Compilation   Each local stack also gets its own memory region, under symbolic location stackl c, where c is the name of the compartment being compiled. Components will maintain the following invariant during the computation: the first cell holds a symbolic pointer to the top of the stack, which is (stackl c) + l where l is the length of the stack. The following cells are used for storing actual values on the stack.

Here, we only care about compiling intermediate-level components that come from the source to intermediate compiler. For these components, the initial stack is always empty. Hence, we just initialize the first cell to stackl c, and the initial content of the extra cells can be arbitrary constants: their only purpose is to be available for filling during the computation.
Method Compilation: From Coarse- to Fine-Grained Instructions   A method with index m
gets its own memory region under symbolic location methl c m where c is the name of the compartment being compiled. The length of these memory
regions is that of the corresponding compiled code,
which is what they are filled with. The compilation
is a translation of the method bodies, mapping each
intermediate-level instruction to a low-level instruction sequence.
The compilation uses ten distinct general-purpose registers. Register ra is automatically filled upon low-level call instructions Jal — following the semantics
of the machine studied in the original micro-policies
paper [9] — for the callees to get the address to
which they should return. Registers rtgt , rarg and
rret are used for value communication: rtgt stores the
object on which we’re calling a method—the target
object—and rarg the argument for the method, while
rret stores the result of a call on a return. Registers
raux1 , raux2 , raux3 are used for storing temporary results. Register rsp holds a pointer to the current top
value of the local stack — we call this register the
stack pointer register. Register rspp always holds a
pointer to a fixed location where the stack pointer
can be stored and restored – this location is the first
cell in the memory region dedicated to the local stack.
Finally, register rone always stores the value 1 so that
this particular value is always easily available.
The compilation of method m of class c with
method body Icode is defined as follows:
C (c, m, Icode) =
    Const 1 rone ;
    (* load stack pointer *)
    Const (stackl c) rspp ; Load rspp rsp ;
    (* push return address *)
    Add rsp rone rsp ; Store rsp ra ; A (c, Icode)

where A (c, Icode) is an auxiliary, recursively defined function having A (c, []) = [] as its base case. As shown in the code snippet, the first instructions initialize the registers so that they match the invariant we just explained informally.

Compilation is most interesting for call and return instructions, which we present in figure 10. We also present the compilation of stack-related instructions in figure 11 and that of control-related instructions in figure 12. In all these figures, inline comments are provided so that the interested reader can get a quick understanding of what is happening.

More standard compilers typically use a global call stack and compile code under the assumption that other components will not break invariants (such as rone always holding value 1) nor mess with the call stack. In our case, however, other components may be controlled by an attacker, which is incompatible with such assumptions. As a consequence, we use local stacks not only for intermediate results, but also to spill registers rtgt and rarg before performing a call. After the call, we restore the values of registers rone, rspp and rsp that could have been overwritten by the callee, then fill registers rtgt and rarg from the local stack.

There is a lot of room for improvement in terms of compiler efficiency; adding a code optimization pass would be very interesting in the future.

A (c, Call c′ m; Icode) =
    (* pop call argument and object *)
    Load rsp raux2 ; Sub rsp rone rsp ; Load rsp raux1 ;
    (* push current object and argument *)
    Store rsp rtgt ; Add rsp rone rsp ; Store rsp rarg ;
    (* save stack pointer *)
    Store rspp rsp ;
    (* set call object and argument *)
    Mov raux1 rtgt ; Mov raux2 rarg ;
    (* perform call *)
    Const (methl c′ m) raux3 ; Jal raux3 ;
    (* reinitialize environment *)
    Const 1 rone ; Const (stackl c) rspp ; Load rspp rsp ;
    (* restore current object and argument *)
    Load rsp rarg ; Sub rsp rone rsp ; Load rsp rtgt ;
    (* push call result *)
    Store rsp rret ; A (c, Icode)

A (c, Ret; Icode) =
    (* pop return value *)
    Load rsp rret ; Sub rsp rone rsp ;
    (* pop return address *)
    Load rsp ra ; Sub rsp rone rsp ;
    (* save stack pointer *)
    Store rspp rsp ;
    (* perform return *)
    Jump ra ; A (c, Icode)

Figure 10: Compilation of communication-related instructions of the intermediate machine

A (c, This; Icode) = (* push object *)
    Add rsp rone rsp ; Store rsp rtgt ; A (c, Icode)
A (c, Arg; Icode) = (* push argument *)
    Add rsp rone rsp ; Store rsp rarg ; A (c, Icode)
A (c, Ref o; Icode) = (* push object o *)
    Const (objl o) raux1 ;
    Add rsp rone rsp ; Store rsp raux1 ; A (c, Icode)
A (c, Drop; Icode) = Sub rsp rone rsp ; A (c, Icode)
A (c, Sel f; Icode) =
    Const (f − 1) raux2 ;
    (* pop object to select from *)
    Load rsp raux1 ; Add raux1 raux2 raux1 ;
    (* load and push field value *)
    Load raux1 raux1 ; Store rsp raux1 ; A (c, Icode)
A (c, Upd f; Icode) =
    Const (f − 1) raux2 ;
    (* pop new field value and object *)
    Load rsp raux3 ; Sub rsp rone rsp ; Load rsp raux1 ;
    (* perform update on object *)
    Add raux1 raux2 raux1 ; Store raux1 raux3 ;
    (* push new field value *)
    Store rsp raux3 ; A (c, Icode)

Figure 11: Compilation of stack-related instructions of the intermediate machine

A (c, Nop; Icode) = Nop; A (c, Icode)
A (c, Halt; Icode) = Halt; A (c, Icode)
A (c, Skip k; Iinstr1 ; ... ; Iinstrk ; Icode) =
    Bnz rone L (Iinstr1 ; ... ; Iinstrk );
    A (c, Iinstr1 ; ... ; Iinstrk ; Icode)
A (c, Skeq k; Iinstr1 ; ... ; Iinstrk ; Icode) =
    (* pop and compare objects *)
    Load rsp raux2 ; Sub rsp rone rsp ; Load rsp raux1 ;
    Sub rsp rone rsp ; Binop= raux1 raux2 raux1 ;
    (* branch according to result *)
    Bnz raux1 L (Iinstr1 ; ... ; Iinstrk );
    A (c, Iinstr1 ; ... ; Iinstrk ; Icode)

where L (Iinstr1 ; ... ; Iinstrk ) is the length of the sequence of compiled instructions corresponding to instructions Iinstr1 ; ... ; Iinstrk , defined as:

L ([]) = 0
L (Drop; Icode) = L (Nop; Icode) = 1 + L (Icode)
L (Halt; Icode) = L (Skip n; Icode) = 1 + L (Icode)
L (This; Icode) = L (Arg; Icode) = 2 + L (Icode)
L (Ref o; Icode) = 3 + L (Icode)
L (Sel f; Icode) = 5 + L (Icode)
L (Ret; Icode) = L (Skeq n; Icode) = 6 + L (Icode)
L (Upd f; Icode) = 7 + L (Icode)
L (Call c′ m; Icode) = 18 + L (Icode)

Figure 12: Compilation of control-related instructions of the intermediate machine
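To see these definitions at work on a concrete (made-up) case, compiling the getter body This; Sel 1; Ret of the hypothetical cell compartment c from §5.2 yields the following sequence — 18 instructions, matching the 5-instruction preamble of C plus L (This; Sel 1; Ret) = 2 + 5 + 6 = 13 for the body:

C (c, m, This; Sel 1; Ret) =
    Const 1 rone ;
    Const (stackl c) rspp ; Load rspp rsp ;
    Add rsp rone rsp ; Store rsp ra ;       (* preamble: push return address *)
    Add rsp rone rsp ; Store rsp rtgt ;     (* This *)
    Const 0 raux2 ;                         (* Sel 1: field offset 1 − 1 = 0 *)
    Load rsp raux1 ; Add raux1 raux2 raux1 ;
    Load raux1 raux1 ; Store rsp raux1 ;
    Load rsp rret ; Sub rsp rone rsp ;      (* Ret *)
    Load rsp ra ; Sub rsp rone rsp ;
    Store rspp rsp ; Jump ra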
6.2 Micro-Policy Protecting Abstractions

Here, we first present the abstractions that our source language provides with respect to low-level machine code, and give intuition about how we can protect them using a micro-policy. Then, we present our actual micro-policy and explain how it effectively implements this intuition. We end with a description of the low-level loader, which sets the initial memory tags for the program.

6.2.1 Enforcing Class Isolation via Compartmentalization

Abstraction   Because in the source language fields are private, classes have no way to read or to write the data of other classes. Moreover, classes cannot read or write code, which is fixed. Finally, the only way to communicate between classes is to perform a method call.

In machine code, however, Load, Store and Jump operations can target any address in memory, including those of other components. This interaction must be restricted to preserve the class isolation abstraction.

Protection Mechanism   To enforce class isolation, memory cells and the program counter get tagged with a class name, which can be seen as a color. The code and data belonging to class c get tagged with color c. Load and Store instructions are restricted to locations having the same color as the current instruction. Moreover, the rules compare the next instruction's compartment to that of the current instruction: switching compartments is only allowed for Jump and Jal. However, we need more protection on these instructions, because switching compartments should be further restricted: this is what we present next.

6.2.2 Enforcing Method Call Discipline using Method Entry Points, Linear Return Capabilities, and Register Cleaning

Abstraction   In the source language, callers and callees obey a strict call discipline: a caller performs a method call, which leads to the execution of the method body, and once this execution ends evaluation proceeds with the next operation of the caller. In machine code, though, the Jal and Jump instructions can target any address.

Moreover, in the high-level language callers and callees give no information to each other except for the arguments and the result of the call. In the low-level machine, however, registers may carry extra information about the caller or the callee and their intermediate computational results. This suggests a need for register cleaning upon calls and returns.

Protection Mechanism   On our machine, calls are done via Jal instructions, which store the return address in the ra register, and returns by executing a Jump to the value stored in ra. The first goal here is to ensure that a caller can only perform calls at method entry points, and that a callee can only return to the return address it was given in register ra on the corresponding call.

To this end, we extend the memory tags to allow tagging a memory location as a method entry point. Then, we monitor Jal instructions so that they can only target such locations. This is enough to protect method calls. For returns, however, the problem is not that simple. In contrast with calls, for which all method entry points are equally valid, only one return address is the good one at any given point in time: it is the one that was transferred to the callee in the ra register by the corresponding call. More precisely, because there can be nested calls, there is exactly one valid return address for each call depth.

We reflect this by tracking the current call depth in the PC tag: it starts at zero, and we increment it on Jal instructions and decrement it on Jump instructions that correspond to returns. With such tracking, we can now, upon a Jal instruction, tag register ra to mark its content as the only valid return address for the current call depth. It is however crucial to make sure that when we go back to call depth n, there isn't any return capability for call depth n + 1 idling in the system. We do this by enforcing that the tag on the return address is never duplicated, which makes it a linear return capability. The return capability gets created upon Jal, moved from register to memory upon Store and from memory to register upon Load, moved between registers upon Mov, and is finally destroyed upon returning via Jump. When it gets destroyed, we can infer that the capability has disappeared from the system, since there was always only one.

Now that our first goal is met, we can think about the second one: ensuring that compiled components do not transmit more information upon call and return than they do within the high-level semantics. Upon return, the distrusted caller potentially receives more information than in the high level: uncleaned registers could hold information from the compiled callee. Upon calls, the distrusted callee similarly receives information through registers, but also has an extra piece of information to use: the content of register ra, which holds the return address. This content leaks the identity of the compiled caller to the distrusted callee, while there is no way for a callee to know the identity of its caller in the high level. Fortunately, the content of register ra is already a specifically tagged value, and we already prevent the use of linear return capabilities for any other means than returning through them or moving them around.
Let us now review the general purpose registers
which could leak information about our compiled partial programs. rtgt and rarg could leak the current
object and argument to the distrusted caller upon return, but this is fine: the caller was the one who set
them before the call, so he already knows them. Upon
call, these registers do not leak information either
since, according to the compilation scheme, they are
already overwritten with call parameters. rret could
leak a previous result value of the compiled caller to
the distrusted callee upon call: it has to be cleaned.
Upon return, however, and according to the compilation scheme, this register is already overwritten with
the return value for the call. raux1 , raux2 , raux3 could
leak intermediate results of the computation upon return and have to be cleaned accordingly. Upon call,
however, following the compilation scheme, they are
already overwritten with information that the distrusted callee either will get or already has. rspp could
leak the identity of the compiled caller to the distrusted callee upon call, since it contains a pointer to
the caller’s local stack’s memory region: it has to be
cleaned. In the case of a return however, the identity
of the compiled callee is already known by the distrusted caller so no new information will be leaked.
rsp could leak information about the current state of
the local stack, as well as the identity of the compiled
caller, and should accordingly be cleaned in both call
and return cases. rone will be known by the distrusted
caller or the distrusted callee to always hold value 1,
and thus won’t leak any information. ra is already
protected from being leaked: it is overwritten both
at call and return time. In the case of a call, it is
overwritten upon the execution of the Jal instruction to hold the new return address. In the case of a
return, according to the compilation scheme, it will
be overwritten before performing the return to hold
the address to which we are returning. This description concerns the current unoptimizing compiler; for
an optimizing compiler the situation would be a little different: more information could be leaked, and
accordingly more registers would have to be cleaned.
A first solution would be to have the compiler produce register reset instructions Const 0 r for every
register r that could leak information, before any external call or return instruction. However, this would
be very expensive. This is one of the reasons why we
have made the following assumption about our target symbolic micro-policy machine: The tags of some
fixed registers (here, rret , rspp and rsp upon Jal, and
raux1 , raux2 , raux3 and rsp upon Jump) can be updated in our symbolic micro-policy rules. We are thus
by assumption able to clean the registers that might
leak information, by using a special tag to mark these
registers as cleared when we execute a Jump or a Jal
instruction.
6.2.3 Enforcing Type Safety Dynamically

Abstraction   Finally, in the source language callees
and callers expect arguments or return values that are
well-typed with respect to method signatures: We
only consider high-level components that are statically well-typed and thus have to comply with the
type interface they declare. At the machine code
level, however, compromised components are untyped
and can feed arbitrary machine words to uncompromised ones, without any a priori typing restrictions.
Protection Mechanism We use micro-policy
rules to ensure that method arguments and return
values always have the type mentioned in type signatures. Fortunately, our type system is simple enough
that we can encode types (which are just class names)
as tags. Hence, we can build upon the call discipline mechanism above and add the expected argument type to the entry point tag, and the expected
return type to the linear return capability tag.
Our dynamic typing mechanism relies on the loader
to initialize memory tags appropriately. This initial
tagging will be presented in detail after the micro-policy itself: The main idea is that a register or memory location holding an object pointer will be tagged
as such, together with the type of the object. This
dynamic type information is moved around with the
value when it is stored, loaded or copied. One key for
making this possible is that the Const (objl o) r instructions which put an object reference in a register
are blessed with the type of this object according to
the type declared for o: executing a blessed Const
instruction will put the corresponding type information on the target register’s tag.
Remember that we assume that we can check the
next instruction tag in micro-policy rules. With dynamic type information available, we can then do
type checking by looking at the tags of registers rtgt
and rarg upon call, and that of register rret upon return. Upon call, we will compare with type information from the next instruction tag, which is the tag
for the method entry point. Upon return, we will
compare with type information from the return capability’s tag. For these checks to be possible, we
use the following assumption we made about our target symbolic micro-policy machine: The tags of some
fixed registers (here, rtgt and rarg upon Jal and rret
upon Jump) can be checked in our symbolic micro-policy rules.
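To illustrate the check just described, here is a minimal Python sketch — not the actual micro-policy implementation, and with hypothetical helper names — of how the comparison of argument and return-value tags against the entry-point tag and the linear return capability could look. The tag constructors mirror the EP, O and Ret tags introduced below in figure 13.

from dataclasses import dataclass

# Hypothetical encodings of the tags used by the dynamic type check.
@dataclass(frozen=True)
class O:        # object pointer of class c
    c: str

@dataclass(frozen=True)
class Ret:      # linear return capability: call depth n, expected return type cr
    n: int
    cr: str

@dataclass(frozen=True)
class EP:       # method entry point: expected argument type ca, return type cr
    ca: str
    cr: str

def check_call(entry_point_tag, rarg_tag):
    # Upon a cross-compartment call, the next-instruction tag must be an entry
    # point and the argument register must hold an object of the declared type.
    return (isinstance(entry_point_tag, EP)
            and isinstance(rarg_tag, O)
            and rarg_tag.c == entry_point_tag.ca)

def check_return(capability_tag, rret_tag):
    # Upon a cross-compartment return, the return register must hold an object
    # of the type promised by the linear return capability.
    return (isinstance(capability_tag, Ret)
            and isinstance(rret_tag, O)
            and rret_tag.c == capability_tag.cr)

In the micro-policy itself these comparisons appear as rule guards, so a violating call or return simply has no matching rule and the machine stops.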
6.2.4  Micro-Policy in Detail

As presented in section 5.3, a symbolic micro-policy is the combination of a collection of symbolic tags and a transfer function. We first detail our tag syntax. Then, we give the rules of our symbolic micro-policy, which define its transfer function [9]. Finally, we explain how the loader initially tags program memory following the program's export declarations.

Symbolic Micro-Policy Tags  Our collection of symbolic micro-policy tags is presented in figure 13. The tag on the program counter register is a natural number n which represents the current call depth.

tpc  ::=  n                        program counter tag
tmem ::=  (bt, c, et, vt)          memory tag
treg ::=  vt                       register tag
bt   ::=  B c | NB                 blessed tag
et   ::=  EP ca → cr | NEP         entry point tag
vt   ::=  rt | ⊥                   regular or cleared value tag
rt   ::=  Ret n cr | O c | W       regular value tag

Figure 13: Symbolic micro-policy tags syntax

Memory tags (bt, c, et, vt) combine the various pieces of information we need about memory cells. First, a memory cell belongs to a compartment c. Its bt tag can be either B c′, meaning the cell is blessed with type c′ — that is, it holds a Const instruction which puts an object of type c′ in its target register — or NB when it shouldn't be blessed. Similarly, its et tag can be either EP ca → cr when the cell is the entry point of a method of type signature ca (cr), or NEP when it is not. Finally, the vt tag is for the content of the memory cell, which can be either: a cleared value, tagged ⊥; a return capability for going back to call depth n by providing a return value of type cr, tagged Ret n cr; an object pointer of type c′, tagged O c′; or a regular word, tagged W.

Tags on registers are the same vt tags as the ones used for the content of memory cells: the content of a register is likewise tagged as a cleared value, a return capability, an object pointer, or a regular word.

Micro-Policy Rules  Our micro-policy is presented in figure 14, where we use the meta notation clear vt for clearing return capabilities: it is equal to vt, unless vt is a return capability, in which case it is equal to ⊥.

This micro-policy combines all the informal intuition we previously gave in sections 6.2.1, 6.2.2 and 6.2.3 into one transfer function. A notable optimization is that we use the tag on the next instruction to distinguish internal from cross-compartment calls and returns. Indeed, we don't need to enforce method call discipline nor to perform type checking on internal calls and returns. This is a crucial move with respect to both transparency and efficiency: it means that low-level components don't have to comply with the source language's abstractions for their internal computations, and moreover this lax monitoring results in a lower overhead on execution time for internal steps, since there will be fewer cache misses.
Nop : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′)} =⇒ {tpc′ = n}
Const i rd : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′)} =⇒ {tpc′ = n, trd′ = W}
Const i rd : {tpc = n, tci = (B c′, c, et, W), tni = (bt′, c, et′, rt′)} =⇒ {tpc′ = n, trd′ = O c′}
Mov rs rd : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), trs = vt, trd = vt′} =⇒ {tpc′ = n, trd′ = vt, trs′ = clear vt}
Binopop r1 r2 rd : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), tr1 = W, tr2 = W} =⇒ {tpc′ = n, trd′ = W}
Load rp rd : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), trp = W, tmem = (bt, c, et″, vt), trd = vt′} =⇒ {tpc′ = n, trd′ = vt, tmem′ = (bt, c, et″, clear vt)}
Store rp rs : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), trp = W, trs = vt, tmem = (bt, c, et″, vt′)} =⇒ {tpc′ = n, tmem′ = (NB, c, et″, vt), trs′ = clear vt}
Jump r : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), tr = W} =⇒ {tpc′ = n}
Jump r : {tpc = n + 1, tci = (NB, c, et, W), tni = (bt′, c′, et′, rt′), tr = Ret n c, trret = O c} =⇒ {tpc′ = n, tr′ = ⊥, traux1′ = ⊥, traux2′ = ⊥, traux3′ = ⊥, trsp′ = ⊥}  when c ≠ c′
Jal r : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), tr = W, tra = vt′} =⇒ {tpc′ = n, tra′ = W}
Jal r : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c′, EP c1 → c2, rt′), tr = W, tra = vt′, trarg1 = O c, trarg2 = O c1} =⇒ {tpc′ = n + 1, tra′ = Ret n c2, trret′ = ⊥, trspp′ = ⊥, trsp′ = ⊥}  when c ≠ c′
Bnz r i : {tpc = n, tci = (NB, c, et, W), tni = (bt′, c, et′, rt′), tr = W} =⇒ {tpc′ = n}

Figure 14: The rules of our symbolic micro-policy
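As a reading aid for figure 14 — and only as a sketch under simplifying assumptions, not the verified rule format — the following Python fragment spells out the clear vt meta notation and the effect of the two cross-compartment rules (Jal and Jump) on the program counter tag and on the registers that are cleared by assumption. All function and constant names here are ours.

from collections import namedtuple

Ret = namedtuple("Ret", "n cr")   # linear return capability tag
EP  = namedtuple("EP", "ca cr")   # entry point tag
CLEARED = "bottom"                # stands for the ⊥ tag
W = "W"                           # regular word tag

def clear(vt):
    # The 'clear vt' meta notation: a return capability is erased,
    # anything else is left untouched.
    return CLEARED if isinstance(vt, Ret) else vt

def jal_cross_compartment(depth, entry_point):
    # Cross-compartment Jal: go one call deeper, store a fresh linear return
    # capability in ra, and clear the registers that could leak (rret, rspp, rsp).
    assert isinstance(entry_point, EP)
    return {"tpc": depth + 1,
            "tra": Ret(depth, entry_point.cr),
            "trret": CLEARED, "trspp": CLEARED, "trsp": CLEARED}

def jump_cross_compartment(depth, capability):
    # Cross-compartment return: only a capability for depth - 1 is accepted;
    # the capability register and the auxiliary/stack registers are cleared.
    assert isinstance(capability, Ret) and capability.n == depth - 1
    return {"tpc": depth - 1,
            "tr": CLEARED, "traux1": CLEARED, "traux2": CLEARED,
            "traux3": CLEARED, "trsp": CLEARED}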
Loader Initializing Memory Tags  The loader first performs simple static checks: it must make sure that (1) no import declaration is left (the program is complete); (2) there exists a methl c m region for each method m of each exported class c; (3) there exists a stackl c region for each exported class c; (4) there exists an objl o region for each exported object o; (5) all memory regions have a matching counterpart in the export declarations.

If all these checks succeed, the loader proceeds and tags all program memory, following the export declarations: every memory region is tagged uniformly with the tag of its compartment, which is c for methl c m and stackl c memory regions, and the exported type of o for objl o memory regions. The first memory cell of each method region methl c m gets tagged as an entry point with the exported type signature for method m of class c, while all other memory cells in the program get tagged as not being entry points. All locations holding an encoded Const (objl o) r instruction are tagged as blessed instructions storing a c object pointer in register r (B c), where c is the exported type of object o. All locations that hold a symbolic pointer objl o are tagged as storing a pointer to an object of class c (O c), where c is the exported type of object o. This applies to cells in both object and stack memory regions. The other stack memory cells are tagged as cleared values ⊥, and all the remaining memory cells as regular words W.
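The following is a rough sketch, in Python and with illustrative parameter names, of how the per-cell tagging decision just described could be written; it is not the actual loader and glosses over the concrete encoding of regions and export declarations.

def initial_memory_tag(compartment, is_method_entry=False, signature=None,
                       blessed_class=None, pointer_class=None, is_stack_cell=False):
    # Build one (bt, c, et, vt) memory tag.
    #   blessed_class is set for cells holding an encoded Const (objl o) r
    #   instruction; pointer_class is set for cells holding a symbolic pointer
    #   objl o; stack cells with no pointer start out cleared.
    bt = ("B", blessed_class) if blessed_class is not None else "NB"
    et = ("EP", signature) if is_method_entry else "NEP"
    if pointer_class is not None:
        vt = ("O", pointer_class)
    elif is_stack_cell:
        vt = "bottom"          # cleared value, ⊥
    else:
        vt = "W"               # regular word
    return (bt, compartment, et, vt)

# Example: the first cell of a method of exported class c with signature ca -> cr.
entry_tag = initial_memory_tag("c", is_method_entry=True, signature=("ca", "cr"))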
7  Related Work

7.1  Secure Compilation
Secure compilation has been the topic of many
works [4, 19, 22, 32], but only recently has the problem of targeting machine code been considered [6,32].
Moreover, all of these works focus on protecting a program from its context, rather than protecting mutually distrustful components like we do.
Abadi and Plotkin [4] formalized address space
layout randomization as a secure implementation
scheme for private locations, with probabilistic guarantees: They expressed it as a fully-abstract compilation between a source language featuring public and
private locations, and a lower-level language in which
multi-language programs and building verified compilers that can provide guarantees for them is a hot
topic, studied in particular by Ahmed et al. [7, 35]
and Ramananandro et al. [36].
memory addresses are natural numbers. Follow-ups
were presented by Jagadeesan et al. [22] and Abadi,
Planul and Plotkin [2,3], with extended programming
languages.
Fournet et al. [19] constructed a fully-abstract compiler from a variant of ML [40] to JavaScript. Their
protection scheme protects one generated script from
its context. A key point is that the protected script
must have first-starter privilege: The first script
which gets executed can overwrite objects in the
global namespace, on which other scripts may rely.
Hence, this scheme can’t protect mutually distrustful
components from each other, since only one component can be the first to execute.
The closest work to ours is recent [6,31,32] and ongoing [31,34] work by Agten, Patrignani, Piessens, et
al. They target a set of hardware extensions, which
they call protected module architectures [20, 28, 38].
Micro-policies could be used to define a protected
module architecture, but they offer finer-grained protection: In our work, this finer-grained protection
allows us to manage linear return capabilities and
perform dynamic type-checking. Micro-policies also
allow us to support a stronger attacker model of dynamic corruption, by means of a secure compilation
of mutually distrustful components. As we discovered recently [24, 34], Patrignani et al. are currently
trying to extend their previous work to ensure a secure compilation of mutually distrustful components.
Our hope is that this parallel work can lead to interesting comparison and exchange, because the mechanisms we use are different: We believe that exploiting
the full power of micro-policies can provide stronger
guarantees and better performance than using micropolicies just as an instance of protected module architectures.
8  Discussion and Future Work
In this section, we discuss the limitations of our work
and the generality of our approach, as well as future
work.
8.1  Finite Memory and Full Abstraction
While memory is infinite in our high-level language,
memory is finite in any target low-level machine. Our
symbolic micro-policy machine is no exception: memory regions have a fixed finite size. This means that
memory exhaustion or exposing the size of regions
can break full abstraction.
Let us first recall how memory regions are used in
this work. Our compiler translates method bodies
from high-level expressions to machine code: Each
method gets a dedicated memory region in the process, to store its compiled code. This code manipulates a stack that is local to the method’s compartment; and this stack also gets its own memory region.
Finally, each object gets a dedicated memory region,
storing low-level values for its fields.
The first problem is potential exhaustion of the local stack’s memory: When the stack is full and the
program tries to put a new value on top, the machine
will stop. This already breaks compiler correctness:
Executing the compiled code for (this; o) will for example first try to put rtgt on top of the full stack
and hence stop the machine, when the high-level expression would simply return o to the caller. Full
abstraction, which typically relies on a compiler correctness lemma, is broken as well: The low-level attacker can now distinguish between method bodies
(this; o) and o, even though they have the same behavior in the high-level. One workaround would be
to add one more intermediate step in the compilation chain, where the symbolic machine would have
infinite memory regions: Full abstraction would be
preserved with respect to this machine, and weakened when we move to finite memory regions. This
workaround is the one taken by CompCert [27], which
until very recently [30] only formalized and proved
something about infinite memory models. A better but probably more difficult to implement solution
would be to keep the current finite-memory machine,
but make it explicit in the property (e.g. compiler
7.2  Multi-Language Approaches
In contrast with previous fully-abstract compilers
where a single source component gets protected from
its context, we protect linked mutually distrustful
low-level components from each other.
One benefit of this approach is that components
need not share a common source language. While
our current protection mechanism is still deeply connected to our source language, in principle each component could have been written in a specific source
language and compiled using a specific compiler.
It is actually common in real-life that the final program comes from a mix of components that were all
written in different languages. Giving semantics to
with compiled components.
A first, good step in this direction is that we
don’t enforce method call discipline nor type safety
on internal calls and returns, but only on crosscompartment calls and returns. This is a good
idea for both efficiency and transparency: Checks
are lighter, leading to better caching and thus better performance; and low-level programs are less
constrained, while still being prevented from taking
harmful actions.
However, the constraints we set may still be too restrictive: For example, we enforce an object-oriented
view on function calls and on data, we limit the number of arguments a function can pass through registers, and we force the programs to comply with our
type system. This suggests the need for wrappers.
Since internal calls and returns are not heavily monitored, we can define methods that respect our constraints and internally call the non-compliant benign
low-level code: This low-level code can then take its
non-harmful, internal actions without constraints —
hence with good performance — until it internally
returns to the wrapper, which will appropriately convert the result of the call before returning to its caller.
correctness, full abstraction) that in cases such as resource exhaustion all bets are off.
The second problem is that the size of compiled
method bodies, as well as the number of private fields
of compiled objects, exactly match the size of their
dedicated memory regions. This does not cause problems with the current memory model in which region
locations exist in isolation of other regions. In future
work, switching to a more concrete view of memory
could lead to the exposure of information to the attacker: If a program region happens to be surrounded
by attacker memory regions, then the attacker could
infer the size of the program region and hence get size
information about a method body or an object. Because there is no similar information available in the
high-level, this will likely break full abstraction. The
concrete loader could mitigate this problem, for example by padding memory regions so that all implementations of the same component interface get the
same size for each dedicated memory region. This
would be, however, wasteful in practice. Alternatively, we could weaken the property to allow leaking
this information. For instance we could weaken full
abstraction to say that one can only replace a compartment with an equivalent one that has the same
sizes when compiled. This would weaken the property
quite a lot, but it would not waste memory. There
could also be interesting compromises in terms of security vs practicality, in which we pad to fixed size
blocks and we only leak the number of blocks.
These problems are not specific to our compiler.
Fournet et al. [19] target JavaScript and view stack
and heap memory exhaustion as a side channel of concrete JavaScript implementations: It is not modeled
by the semantics they give to JavaScript. Similarly,
the key to the full abstraction result of Patrignani et
al. [32, 33] (the soundness and completeness of their
trace semantics) is given under the assumption that
there is no overflow of the program’s stack [33]. Patrignani et al. [32] also pad the protected program so
that all implementations use the same amount of memory.
8.3  Future Work
The first crucial next step is to finish the full abstraction proof. As we explain in section 2, however,
full abstraction does not capture the exact notion
of secure compilation we claim to provide. We will
thus formalize a suitable characterization and prove
it, hopefully reusing lemmas from the full abstraction
proof. Afterwards, we will implement the compiler
and conduct experiments that could confirm or deny
our hopes regarding efficiency and transparency.
There are several ways to extend this work. The
most obvious would be to support more features that
are common in object-oriented languages, such as
dynamic allocation, inheritance, packages or exceptions. Another way would be to move to functional
languages, which provide different, interesting challenges. Taking as source language a lambda-calculus
with references and simple modules, would be a first
step in this direction, before moving to larger ML
subsets.
Finally, the micro-policy used in this work was built
progressively, out of distinct micro-policies which
we designed somewhat independently. Composing
micro-policies in a systematic and correct way, without breaking the guarantees of any of the composed
policies, is still an open problem that would be very
8.2  Efficiency and Transparency
Our micro-policy constrains low-level programs so as
to prevent them from taking potentially harmful actions. However, we should make sure 1) that this
monitoring has reasonable impact on the performance
of these programs; and 2) that these programs are not
constrained too much, in particular that benign lowlevel components are not prevented from interacting
interesting to study on its own.
For better understanding, we use a syntax with
strings for names which is easily mapped to our source
language syntax. We present the three encoded types
as distinct components, resulting in quite verbose
programs: Linking them together in the high-level
would result in one partial program with three classes
and no import declarations.
8.4  Scaling to Real-World Languages
Our micro-policy seems to scale up easily to more
complicated languages, except for dynamic type
checking which will be trickier.
Sub-typing, which arises with inheritance, would
bring the first new challenges in this respect. Our
dynamic type checking mechanism moreover requires
encoding types in tags: When we move to languages
with richer type systems, we will have to explore in
more detail the whole field of research on dynamic
type and contract checking [18].
Compartmentalization could easily be extended to
deal with public fields, by distinguishing memory locations that hold public field values from other locations.
Dynamic allocation seems possible and would be
handled by monitor services, setting appropriate tags
on the allocated memory region. However, such tag
initialization is expensive for large memory regions
in the current state of the PUMP, and could benefit
from additional hardware acceleration [44].
Finally, functional languages bring interesting challenges that have little to do with the work presented
in this document, such as closure protection and polymorphism. We plan to study these languages and
discover how micro-policies can help in securely compiling them.
A.2  Source Semantics
The semantics we propose for the source language is
a small-step continuation-based semantics. It is particularly interesting to present this variant because
it is very close to our intermediate machine and can
help understanding how the source to intermediate
compilation works.
After loading, source programs become a pair of a
class table CT and an initial configuration Cfg. The
syntax for configurations is presented on figure 18.
A configuration (OT , CS , ot , oa , K , e) can be
thought of as a machine state: OT is the object table, from which we fetch field values and which gets
updated when we perform field updates. CS is the
call stack, on top of which we store the current environment upon call. ot is the current object and oa
the current argument. e is the current expression to
execute and K the current continuation, which defines what we should do with the result of evaluating
e.
Configurations can be reduced: The rules for the
reduction CT ` Cfg −→ Cfg 0 are detailed in figure 19. The class table CT is on the left of the
turnstile because it does not change throughout the
computation.
The initial configuration (OT , [] , ot , oa , [] , e)
features the program’s object table OT , an empty
call stack, and an empty continuation. The current expression to execute, e, is the body of
the main method of the program, executing with
appropriately-typed current object ot and argument
oa . Since object and class names are natural numbers,
an example choice which we take in our formal study
is to say that the main method is method 0 of class
0, and that it should initially be called with object 0
of type 0 as both current object and argument.
Our reduction is deterministic. A program terminates with result or when there is a possibly
empty reduction sequence from its initial configuration to a final configuration (OT′, [] , ot , oa , [] , or )
or (OT′, CS , ot , oa , (exit ) :: K , or ). The type system ensures that everywhere an exit e expression is
encountered, expression e has the same type as the
A  Appendix

A.1  Encoding Usual Types
export obj decl tt : Unit
export class decl Unit { }
obj tt : Unit { }
class Unit { }
Figure 15: Encoding the unit type
Here we give a flavor of what programming looks
like with our source language by encoding some familiar types using our class mechanism: the unit type in
figure 15, booleans in figure 16, and bounded natural
numbers in figure 17. Encoding unbounded natural
numbers would be possible with dynamic allocation,
which is not part of our source language at the moment.
expected return type for the main method. Hence,
when a program terminates with a value, the value
necessarily has this particular type.
import obj decl tt : Unit
import class decl Unit { }
export obj decl t, f : Bool
export class decl Bool {
  Bool not(Unit),
  Bool and(Bool),
  Bool or(Bool)
}
obj t : Bool { }
obj f : Bool { }
class Bool {
  Bool not(Unit) { this == t ? f : t }
  Bool and(Bool) { this == t ? arg : f }
  Bool or(Bool) { this == t ? t : arg }
}
Figure 16: Encoding booleans
export obj decl zero, one, two, three : BNat4
export class decl BNat4 {
BNat4 add(BNat4),
BNat4 mul(BNat4 arg)
}
obj zero  : BNat4 { zero, one }
obj one   : BNat4 { zero, two }
obj two   : BNat4 { one, three }
obj three : BNat4 { two, three }
class BNat4 {
  BNat4 pred, succ;
BNat4 add(BNat4) {
arg == zero ?
this : this.succ.add(arg.pred)
}
BNat4 mul(BNat4) {
arg == zero ?
zero : this.mul(arg.pred).add(this)
}
}
Figure 17: Encoding bounded natural numbers
Cfg ::=  (OT , CS , ot , oa , K , e)                          reduction configurations
CS  ::=  [] | (ot , oa , K ) :: CS                            call stack
K   ::=  [] | E :: K                                          continuations
E   ::=  .f | .f := e′ | o.f :=  | .m(e′) | o.m()             flat evaluation contexts
       | == e′ ? e″ : e‴ | o ==  ? e″ : e‴ | ; e
       | exit

Figure 18: Configuration syntax for the source language
CT ⊢ Cfg −→ Cfg′

this:
CT ⊢ (OT , CS , ot , oa , K , this) −→ (OT , CS , ot , oa , K , ot )

arg:
CT ⊢ (OT , CS , ot , oa , K , arg) −→ (OT , CS , ot , oa , K , oa )

sel_push:
CT ⊢ (OT , CS , ot , oa , K , e.f ) −→ (OT , CS , ot , oa , .f :: K , e)

sel_pop:
OT (ot ) = obj ct {ot1 , ... , otk }    OT (o) = obj ct {o1 , ... , ol }    1 ≤ f ≤ l
CT ⊢ (OT , CS , ot , oa , .f :: K , o) −→ (OT , CS , ot , oa , K , of )

upd_push:
CT ⊢ (OT , CS , ot , oa , K , e.f := e′) −→ (OT , CS , ot , oa , (.f := e′) :: K , e)

upd_switch:
CT ⊢ (OT , CS , ot , oa , (.f := e) :: K , o) −→ (OT , CS , ot , oa , (o.f := ) :: K , e)

upd_pop:
OT (ot ) = obj ct {ot1 , ... , otk }    OT (o) = obj ct {o1 , ... , of−1 , of , o′j , ... , o′l }    OT′ = OT [o ↦ obj c{o1 , ... , of−1 , o″ , o′j , ... , o′l }]
CT ⊢ (OT , CS , ot , oa , (o.f := ) :: K , o″) −→ (OT′, CS , ot , oa , K , o″)

call_push:
CT ⊢ (OT , CS , ot , oa , K , e.m(e′)) −→ (OT , CS , ot , oa , .m(e′) :: K , e)

call_switch:
CT ⊢ (OT , CS , ot , oa , .m(e) :: K , o) −→ (OT , CS , ot , oa , o.m() :: K , e)

call_pop:
OT (ot′) = obj ct′ {ot′1 , ... , ot′k }    CT (ct′) = class {c1 , ... , ci ; M1 , ... , Mj }    1 ≤ m ≤ j    Mm = cr (ca ){e}
CT ⊢ (OT , CS , ot , oa , ot′.m() :: K , oa′ ) −→ (OT , (ot , oa , K ) :: CS , ot′ , oa′ , K , e)

return:
CT ⊢ (OT , (ot′ , oa′ , K ) :: CS , ot , oa , [] , or ) −→ (OT , CS , ot′ , oa′ , K , or )

test_push:
CT ⊢ (OT , CS , ot , oa , K , e1 == e2 ? e3 : e4 ) −→ (OT , CS , ot , oa , ( == e2 ? e3 : e4 ) :: K , e1 )

test_switch:
CT ⊢ (OT , CS , ot , oa , ( == e2 ? e3 : e4 ) :: K , o1 ) −→ (OT , CS , ot , oa , (o1 == ? e3 : e4 ) :: K , e2 )

test_pop_eq:
CT ⊢ (OT , CS , ot , oa , (o1 == ? e3 : e4 ) :: K , o1 ) −→ (OT , CS , ot , oa , K , e3 )

test_pop_neq:
o1 ≠ o2
CT ⊢ (OT , CS , ot , oa , (o1 == ? e3 : e4 ) :: K , o2 ) −→ (OT , CS , ot , oa , K , e4 )

seq_push:
CT ⊢ (OT , CS , ot , oa , K , (e; e′)) −→ (OT , CS , ot , oa , (; e′) :: K , e)

seq_pop:
CT ⊢ (OT , CS , ot , oa , (; e) :: K , o) −→ (OT , CS , ot , oa , K , e)

exit_push:
CT ⊢ (OT , CS , ot , oa , K , exit e) −→ (OT , CS , ot , oa , (exit ) :: K , e)

Figure 19: Continuation-based semantics for the source language
References
[10] G. M. Bierman and M. J. Parkinson. Effects and
effect inference for a core java calculus. Electr.
Notes Theor. Comput. Sci., 82(7):82–107, 2003.
[1] M. Abadi. Protection in programming-language
translations. Research Report 154, SRC, 1998.
[11] N. P. Carter, S. W. Keckler, and W. J. Dally.
Hardware support for fast capability-based addressing. In F. Baskett and D. W. Clark, editors, ASPLOS-VI Proceedings - Sixth International Conference on Architectural Support for
Programming Languages and Operating Systems,
San Jose, California, USA, October 4-7, 1994.
1994.
[2] M. Abadi and J. Planul. On layout randomization for arrays and functions. In D. A. Basin
and J. C. Mitchell, editors, Principles of Security and Trust - Second International Conference,
POST 2013, Held as Part of the European Joint
Conferences on Theory and Practice of Software,
ETAPS 2013, Rome, Italy, March 16-24, 2013.
Proceedings. 2013.
[12] T. Dai, S. Sathyanarayan, R. H. C. Yap, and
Z. Liang. Detecting and preventing activex apimisuse vulnerabilities in internet explorer. In
T. W. Chim and T. H. Yuen, editors, Information and Communications Security - 14th International Conference, ICICS 2012, Hong Kong,
China, October 29-31, 2012. Proceedings. 2012.
[3] M. Abadi, J. Planul, and G. D. Plotkin. Layout randomization and nondeterminism. Electr.
Notes Theor. Comput. Sci., 298:29–50, 2013.
[4] M. Abadi and G. D. Plotkin. On protection by
layout randomization. ACM Trans. Inf. Syst. Secur., 15(2):8, 2012.
[13] A. A. de Amorim and B. C. Pierce. Alpha is
for address! the essence of memory safety. Draft,
2015.
[5] P. Agten, B. Jacobs, and F. Piessens. Sound modular verification of C code executing in an unverified context. In Proceedings of the 42nd Annual
ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2015,
Mumbai, India, January 15-17, 2015, 2015.
[14] U. Dhawan, C. Hriţcu, R. Rubin, N. Vasilakis,
S. Chiricescu, J. M. Smith, T. F. Knight, Jr.,
B. C. Pierce, and A. DeHon. Architectural support for software-defined metadata processing.
ASPLOS, 2015.
[6] P. Agten, R. Strackx, B. Jacobs, and F. Piessens.
Secure compilation to modern processors. In
S. Chong, editor, 25th IEEE Computer Security
Foundations Symposium, CSF 2012, Cambridge,
MA, USA, June 25-27, 2012. 2012.
[15] Ú. Erlingsson. Low-level software security: Attacks and defenses. In Foundations of Security
Analysis and Design IV, FOSAD 2006/2007 Tutorial Lectures, 2007.
[16] Ú. Erlingsson, M. Abadi, M. Vrable, M. Budiu,
and G. C. Necula. XFI: Software guards for system address spaces. OSDI. 2006.
[7] A. Ahmed.
Verified compilers for a multilanguage world. In T. Ball, R. Bodík, S. Krishnamurthi, B. S. Lerner, and G. Morrisett, editors,
1st Summit on Advances in Programming Languages, SNAPL 2015, May 3-6, 2015, Asilomar,
California, USA. 2015.
[17] I. Evans, S. Fingeret, J. Gonzalez, U. Otgonbaatar, T. Tang, H. Shrobe, S. SidiroglouDouskos, M. Rinard, and H. Okhravi. Missing
the point(er): On the effectiveness of code pointer
integrity. In 2015 IEEE Symposium on Security
and Privacy, SP 2015, San Jose, CA, USA, May
17-21, 2015, 2015.
[8] A. Azevedo de Amorim, N. Collins, A. DeHon,
D. Demange, C. Hriţcu, D. Pichardie, B. C.
Pierce, R. Pollack, and A. Tolmach. A verified
information-flow architecture. POPL. 2014.
[18] R. B. Findler and M. Felleisen. Contracts for
higher-order functions. In Proceedings of the 7th
International Conference on Functional Programming. 2002.
[9] A. Azevedo de Amorim, M. Dénès, N. Giannarakis, C. Hriţcu, B. C. Pierce, A. SpectorZabusky, and A. Tolmach. Micro-policies: Formally verified, tag-based security monitors. In
36th IEEE Symposium on Security and Privacy
(Oakland S&P). 2015.
[19] C. Fournet, N. Swamy, J. Chen, P. Dagand,
P. Strub, and B. Livshits. Fully abstract compilation to javascript. In R. Giacobazzi and
[30] E. Mullen, Z. Tatlock, and D. Grossman. Peek:
A formally verified peephole optimization framework for x86. CoqPL Workshop, 2015.
R. Cousot, editors, The 40th Annual ACM
SIGPLAN-SIGACT Symposium on Principles of
Programming Languages, POPL ’13, Rome, Italy
- January 23 - 25, 2013. 2013.
[31] M. Patrignani. The Tome of Secure Compilation:
Fully Abstract Compilation to Protected Modules
Architectures. PhD thesis, KU Leuven, Leuven,
Belgium, 2015.
[20] M. Hoekstra, R. Lal, P. Pappachan, V. Phegade,
and J. del Cuvillo. Using innovative instructions
to create trustworthy software solutions. In Lee
and Shi [26].
[32] M. Patrignani, P. Agten, R. Strackx, B. Jacobs,
D. Clarke, and F. Piessens. Secure compilation
to protected module architectures. ACM Transactions on Programming Languages and Systems,
2015.
[21] A. Igarashi, B. C. Pierce, and P. Wadler. Featherweight java: a minimal core calculus for java
and GJ. ACM Trans. Program. Lang. Syst.,
23(3):396–450, 2001.
[22] R. Jagadeesan, C. Pitcher, J. Rathke, and
J. Riely. Local memory via layout randomization.
In Proceedings of the 24th IEEE Computer Security Foundations Symposium, CSF 2011, Cernayla-Ville, France, 27-29 June, 2011. 2011.
[33] M. Patrignani and D. Clarke. Fully abstract
trace semantics for protected module architectures. Computer Languages, Systems & Structures, 42(0):22 – 45, 2015. Special issue on
the Programming Languages track at the 29th
{ACM} Symposium on Applied Computing.
[23] A. Jeffrey and J. Rathke. Java jr: Fully abstract trace semantics for a core java language.
In S. Sagiv, editor, Programming Languages and
Systems, 14th European Symposium on Programming,ESOP 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005, Edinburgh, UK, April 4-8,
2005, Proceedings. 2005.
[34] M. Patrignani, D. Devriese, and F. Piessens.
Multi-module fully abstract compilation (extended abstract). Workshop on Foundations of
Computer Security, 2015.
[35] J. T. Perconti and A. Ahmed. Verifying an
open compiler using multi-language semantics. In
Z. Shao, editor, Programming Languages and Systems - 23rd European Symposium on Programming, ESOP 2014, Held as Part of the European
Joint Conferences on Theory and Practice of Software, ETAPS 2014, Grenoble, France, April 5-13,
2014, Proceedings. 2014.
[24] Y. Juglaret and C. Hriţcu. Secure compilation
using micro-policies (extended abstract). Workshop on Foundations of Computer Security, 2015.
[25] A. Kennedy. Securing the .net programming
model. Theor. Comput. Sci., 364(3):311–317,
2006.
[36] T. Ramananandro, Z. Shao, S. Weng, J. Koenig,
and Y. Fu. A compositional semantics for verified separate compilation and linking. In X. Leroy
and A. Tiu, editors, Proceedings of the 2015 Conference on Certified Programs and Proofs, CPP
2015, Mumbai, India, January 15-17, 2015. 2015.
[26] R. B. Lee and W. Shi, editors. HASP 2013, The
Second Workshop on Hardware and Architectural
Support for Security and Privacy, Tel-Aviv, Israel, June 23-24, 2013. ACM, 2013.
[27] X. Leroy. Formal verification of a realistic compiler. CACM, 52(7):107–115, 2009.
[37] K. Z. Snow, F. Monrose, L. Davi, A. Dmitrienko,
C. Liebchen, and A. Sadeghi. Just-in-time code
reuse: On the effectiveness of fine-grained address
space layout randomization. In 2013 IEEE Symposium on Security and Privacy, SP 2013, Berkeley, CA, USA, May 19-22, 2013, 2013.
[28] F. McKeen, I. Alexandrovich, A. Berenzon,
C. V. Rozas, H. Shafi, V. Shanbhogue, and U. R.
Savagaonkar. Innovative instructions and software model for isolated execution. In Lee and
Shi [26].
[38] R. Strackx and F. Piessens. Fides: selectively hardening software application components
against kernel-level or process-level malware. In
T. Yu, G. Danezis, and V. D. Gligor, editors, the
[29] G. Morrisett, G. Tan, J. Tassarotti, J.-B. Tristan, and E. Gan. RockSalt: better, faster,
stronger SFI for the x86. PLDI. 2012.
ACM Conference on Computer and Communications Security, CCS’12, Raleigh, NC, USA, October 16-18, 2012. 2012.
[39] G. T. Sullivan, S. Chiricescu, A. DeHon, D. Demange, S. Iyer, A. Kliger, G. Morrisett, B. C.
Pierce, H. Reubenstein, J. M. Smith, A. Thomas,
J. Tov, C. M. White, and D. Wittenberg. SAFE:
A clean-slate architecture for secure systems. In
Proceedings of the IEEE International Conference
on Technologies for Homeland Security, 2013.
[40] N. Swamy, J. Chen, C. Fournet, P. Strub,
K. Bhargavan, and J. Yang. Secure distributed
programming with value-dependent types. J.
Funct. Program., 23(4):402–451, 2013.
[41] L. Szekeres, M. Payer, T. Wei, and D. Song.
SoK: Eternal war in memory. IEEE S&P. 2013.
[42] R. N. M. Watson, P. G. Neumann, J. Woodruff,
J. Anderson, R. Anderson, N. Dave, B. Laurie,
S. W. Moore, S. J. Murdoch, P. Paeps, M. Roe,
and H. Saidi. CHERI: a research platform deconflating hardware virtualization and protection. In
Proc. RESoLVE, 2012.
[43] R. N. M. Watson, J. Woodruff, P. G. Neumann,
S. W. Moore, J. Anderson, D. Chisnall, N. H.
Dave, B. Davis, K. Gudka, B. Laurie, S. J. Murdoch, R. Norton, M. Roe, S. Son, and M. Vadera.
CHERI: A hybrid capability-system architecture
for scalable software compartmentalization. In
2015 IEEE Symposium on Security and Privacy,
SP 2015, San Jose, CA, USA, May 17-21, 2015,
2015.
[44] E. Witchel, J. Cates, and K. Asanović. Mondrian
memory protection. ASPLOS. 2002.
[45] B. Yee, D. Sehr, G. Dardyk, J. B. Chen,
R. Muth, T. Ormandy, S. Okasaka, N. Narula,
and N. Fullagar. Native client: a sandbox for
portable, untrusted x86 native code. Commun.
ACM, 53(1):91–99, 2010.
| 6 |
POD-based reduced-order model of an
eddy-current levitation problem
arXiv:1710.08180v1 [] 23 Oct 2017
MD Rokibul Hasan, Laurent Montier, Thomas Henneron, and Ruth V. Sabariego
Abstract The accurate and efficient treatment of eddy-current problems with movement is still a challenge. Very few works applying reduced-order models are available in the literature. In this paper, we propose a proper-orthogonal-decomposition
reduced-order model to handle this kind of motional problem. A classical magnetodynamic finite element formulation based on the magnetic vector potential is used
as reference and to build up the reduced models. Two approaches are proposed. The
TEAM workshop problem 28 is chosen as a test case for validation. Results are
compared in terms of accuracy and computational cost.
1 Introduction
The finite element (FE) method is widely used and versatile for accurately modelling
electromagnetic devices accounting for eddy current effects, non-linearities, movement,... However, the FE discretization may result in a large number of unknowns,
which may be extremely expensive in terms of computational time and memory. Furthermore, the modelling of movement requires either remeshing or ad-hoc techniques. Without being exhaustive, it is worth mentioning: the hybrid finite-element
boundary-element (FE-BE) approaches [1], the sliding mesh techniques (rotating
machines) [2] or the mortar FE approaches [3].
Physically-based reduced models are the most popular approaches for efficiently
handling these issues. They extract physical parameters (inductances, flux linkages, ...) either from simulations or measurements and construct look-up tables cov-
MD Rokibul Hasan · Ruth V. Sabariego
KU Leuven, Dept. Electrical Engineering (ESAT), Leuven & EnergyVille, Genk, Belgium, e-mail:
[email protected], [email protected]
Laurent Montier · Thomas Henneron
Laboratoire d’Electrotechnique et d’Electronique de Puissance, Arts et Metiers ParisTech, Lille,
France, e-mail: [email protected], [email protected]
ering the operating range of the device at hand [4, 5]. Future simulations are performed by simple interpolation, thus drastically reducing the computational cost.
However, these methods depend highly on the expert’s knowledge to choose and
extract the most suitable parameters.
Mathematically-based reduced-order (RO) techniques are a feasible alternative,
which are gaining interest in electromagnetism [6]. RO modelling of static coupled systems has already been implemented in [7, 8]. Few RO works have addressed problems with movement (actuators, electrical machines, etc.) [9–11].
In [9], the authors consider a POD-based FE-BE model of an electromagnetic device comprising nonlinear materials and movement. Meshing issues are avoided, but the system matrix is not sparse any more, considerably increasing the cost of generating
the RO model. In [10], a magnetostatic POD-RO model of a permanent magnet
synchronous machine is studied. A locked step approach is used, so the mesh and
associated number of unknowns remains constant. A POD-based block-RO model
is proposed in [11,12], where the domain is split in linear and nonlinear regions and
the ROM is applied only to the linear part.
In this paper, we consider a POD-based FE model of a levitation problem, namely
the Team Workshop problem 28 (TWP28) [5, 13] (a conducting plate above two
concentric coils, see Fig. 1). The movement is modelled with two RO models based
on: 1) FE with automatic remeshing of the complete domain; 2) FE with constraint
remeshing, i.e., localized deformation of the mesh around the moving plate, hereafter referred to as mesh deformation. Both models are validated in the time domain
and compared in terms of computational efficiency.
Fig. 1: 2D axisymmetric mesh of TWP28: aluminium plate above two concentric
coils (12.8 mm clearance). Real part of the magnetic flux density. Left: automatic
remeshing of the full domain, Right: mesh deformation of sub-domain around plate
with nodes fixed at it’s boundaries (except axes).
2 Magnetodynamic levitation model
Let us consider a bounded domain Ω = Ωc ∪ ΩcC ∈ R3 with boundary Γ . The conducting and non-conducting parts of Ω are denoted by Ωc and ΩcC , respectively.
The (modified) magnetic-vector-potential (a−) magnetodynamic formulation (weak
form of Ampère’s law) reads: find a, such that
(ν curl a, curl a′ )Ω + (σ ∂t a, a′ )Ωc + ⟨n̂ × h, a′⟩Γ = ( js , a′ )Ωs , ∀a′
(1)
with a′ test functions in a suitable function space; b(t) = curla(t), the magnetic
flux density; js (t) a prescribed current density and n̂ the outward unit normal vector
on Γ . Volume integrals in Ω and surface integrals on Γ of the scalar product of
their arguments are denoted by (·, ·)Ω and ⟨·, ·⟩Γ . The derivative with respect to
time is denoted by ∂t . We further assume linear isotropic and time independent
materials: the magnetic constitutive law is h(t) = ν b(t) (reluctivity ν ) and the electric constitutive law gives the induced eddy current density j(t) = σ e(t) (conductivity σ ), where the electric field is e(t) = −∂t a(t). Assuming a rigid
Ωc (no deformation) and a purely translational movement (no rotation, no tilting),
the electromagnetic force appearing due to the eddy currents in Ωc can be modelled
as a global quantity with only one component (vertical to the plate). If Ωc is nonmagnetic, Lorentz force can be used:
Fem (t) =
Z
Ωc
j(t) × b(t) dΩc =
Z
Ωc
−σ ∂t a(t) × curla(t) dΩc .
(2)
The 1D mechanical equation governing the above described levitation problem
reads:
m ∂t v(t) + ξ v(t) + ky(t) + mg = Fem (t)
(3)
where unknown y(t) is the center position of the moving body in the vertical direction, v(t) = ∂t y(t) is the velocity of the moving body, m is the mass of the moving
body, g is the acceleration of gravity, ξ is the scalar viscous friction coefficient,
k is the elastic constant. We apply the backward Euler method to solve (3). The
moving body displacement of system (3) results from the ensuing electromagnetic
force generated by system (1) and thus affects the geometry. Given that the dynamics of the mechanical equation is much slower than that of the electromagnetic equation, one can decouple the equations if the time-step is taken sufficiently small. Under
this condition, the electromagnetic and mechanical equations can be solved alternatively rather than simultaneously by the weak electromechanical coupling algorithm
of [14]. We adopt this approach.
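As an illustration of the weak coupling, a possible backward Euler update of the mechanical equation (3) — written here as a small Python function under the assumption that the electromagnetic force Fem has already been evaluated at the current time step; the function name is ours — reads:

def mechanical_step(v_prev, y_prev, F_em, dt, m, xi, k, g=9.81):
    # Backward Euler for  m dv/dt + xi v + k y + m g = F_em,  with dy/dt = v.
    # Substituting y_k = y_{k-1} + dt * v_k gives a scalar equation for v_k.
    denom = m + dt * xi + dt**2 * k
    v_new = (m * v_prev + dt * (F_em - m * g - k * y_prev)) / denom
    y_new = y_prev + dt * v_new
    return v_new, y_new

The velocity and position obtained this way are then used to move (or deform) the mesh before the next magnetic solve, following the alternating scheme of [14].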
3 POD-based model order reduction
The proper orthogonal decomposition (POD) is applied to reduce the matrix system
resulting from the FE discretisation of (1):
A∂t x(t) + Bx(t) = C(t) .
(4)
where x(t) ∈ RN×1 is the time-dependent column vector of N unknowns, A, B ∈
RN×N are the matrices of coefficients and C(t) ∈ RN×1 is the source column vector.
Furthermore, the system (4) is discretized in time by means of the backward Euler
scheme. A system of algebraic equation is obtained for each time step from tk−1 to
tk = tk−1 + ∆ t, ∆ t the step size. The discretized system reads:
[A∆ t + B]xk = A∆ t xk−1 + Ck
(5)
with A∆t = A/∆t , xk = x(tk ) the solution at instant tk , xk−1 = x(tk−1 ) the solution at
instant tk−1 , Ck the right-hand side at instant tk .
In RO techniques, the solution vector x(t) is approximated by a vector xr (t) ∈ RM×1 within a reduced subspace spanned by Ψ ∈ RN×M , M ≪ N,
x(t) ≈ Ψ xr (t) ,
(6)
with Ψ an orthonormal projection operator generated from the time-domain full
solution x(t) via snapshot techniques [15].
Let us consider the snapshot matrix, S = [x1 , x2 , . . . , xM ] ∈ RN×M from the set
of solution xk for the selected number of time steps. Applying the singular value
decomposition (SVD) to S as,
S = U Σ V T .    (7)
where Σ contains the singular values, ordered as σ1 > σ2 > . . . > 0. We consider Ψ = U r ∈ RN×r , which corresponds to the truncation to the r first columns (those whose singular values are larger than a pre-defined error tolerance ε ), with orthogonal matrices U ∈ RN×r and V ∈ RM×r . Therefore, the RO system of (5) reads
[A∆r t + Br ] xrk = Ar∆ t xrk−1 + Ckr ,
(8)
with Ar∆ t = Ψ T A∆ t Ψ , Br = Ψ T BΨ and Cr = Ψ T C [16].
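As a concrete illustration, the offline POD stage and one reduced time step can be sketched with NumPy as follows; this is a generic sketch of equations (6)–(8), not the authors' code, and the function names are ours.

import numpy as np

def build_pod_basis(S, eps):
    # S is the N x M snapshot matrix; keep the r left singular vectors whose
    # singular values satisfy sigma_i / sigma_1 > eps.
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    r = int(np.sum(sigma / sigma[0] > eps))
    return U[:, :r]                                  # Psi, of size N x r

def reduce_system(Psi, A_dt, B, C):
    # Galerkin projection of the time-discretized system (5).
    return Psi.T @ A_dt @ Psi, Psi.T @ B @ Psi, Psi.T @ C

def reduced_step(Psi, A_r, B_r, C_r, x_r_prev):
    # One step of the reduced system (8); the full-order field is recovered
    # as x ≈ Psi @ x_r.
    x_r = np.linalg.solve(A_r + B_r, C_r + A_r @ x_r_prev)
    return x_r, Psi @ x_r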
3.1 Application to an electro-mechanical problem with movement
3.1.1 RO modelling with automatic remeshing technique
In case of automatic remeshing, we transfer results from the source meshk−1 to the
new target meshk by means of a Galerkin projection, which is optimal in the L2 -
norm sense [17]. Note that this projection is limited to the conducting domain, i.e.
the plate, as it is only there that we need to compute the time derivative. The number
of unknowns per time step tk varies and the construction of the snapshot matrix S is
not straightforward. As the solution at tk is supported on its own mesh, the snapshot
vectors xk have a different size. They have to be projected to a common basis using
a simple linear interpolation technique before being assembled in S and getting the
projection operator Ψ . The procedure thus becomes extremely inefficient.
3.1.2 RO modelling with mesh deformation technique
The automatic remeshing task is replaced by a mesh deformation technique, limited
to a region around the moving body (see, e.g., the box in Fig. 2). Therefore, in this
case, the remeshing is done by deforming the initial mesh, which is generated with
the conducting plate placed at, e.g., y0 (avoiding bad quality elements), see Fig. 2.
Only the mesh elements inside the sub-domain can be deformed (shrink/expand), see Fig. 3, and the nodes at the boundary of the sub-domain are fixed. The surrounding mesh does not vary. In our test case, we assume a vertical force (neglecting the other two components) in (2); therefore, the mesh elements only deform in the vertical direction and the nodes are fixed at the boundary of the sub-domain (not at the axes
due to the axisymmetry). The size of the sub-domain (a×b) is determined by the
extreme positions of the moving body. In our validation example, the minimum position (3.8 mm) is given by the upper borders of the coils and the maximum position
(22.3 mm) could be estimated by means of a circuital model, e.g. [5]. The number
of unknowns per time step remains now constant so the construction of matrix S is
direct.
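A simplified sketch of the vertical-only deformation of the sub-domain mesh is given below (Python/NumPy, with illustrative variable names, assuming the plate stays strictly inside the box): nodes below the plate are rescaled between the fixed box bottom and the plate's lower face, nodes above it between the plate's upper face and the fixed box top, and nodes on the plate translate rigidly.

import numpy as np

def deform_nodes_y(y_nodes, y_plate_old, y_plate_new, plate_h, y_box_min, y_box_max):
    # y_nodes: y-coordinates of the nodes inside the box (boundary nodes at
    # y_box_min / y_box_max map to themselves); the plate occupies
    # [y_plate, y_plate + plate_h].
    y_new = np.array(y_nodes, dtype=float)
    below = y_new <= y_plate_old                       # region under the plate
    above = y_new >= y_plate_old + plate_h             # region above the plate
    # linear rescaling keeps the box boundaries and the plate faces conforming
    y_new[below] = y_box_min + (y_new[below] - y_box_min) * \
        (y_plate_new - y_box_min) / (y_plate_old - y_box_min)
    y_new[above] = y_box_max - (y_box_max - y_new[above]) * \
        (y_box_max - (y_plate_new + plate_h)) / (y_box_max - (y_plate_old + plate_h))
    # nodes on the plate itself translate rigidly with it
    on_plate = ~(below | above)
    y_new[on_plate] += (y_plate_new - y_plate_old)
    return y_new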
Fig. 2: Sub-domain for deformation: plate position at y0 = 12.8 mm (initial mesh).
Fig. 3: Sub-domain for deformation: plate position at y = 20 mm. Mesh elements
under the plate are expanded and above the plate are shrunk.
Algorithm 1: Automatic remeshing

Input : snapshot vectors {Sc} ← {xk} ∈ R n(k)×1
        time steps {tk}, k ∈ [1, ..., K]
        A∆t, B, C, tolerance ε
        m ≤ n(k) snapshot vectors
Output: displacement yk

y0 = initial position, ∆y0 = 0
// Time resolution
for k ← 1 to K do
    // Magnetics
    generate matrices A∆tk, Bk, Ck
    find length of Ck ∈ R n(k)×1
    Sp = 0 ∈ R n(k)×m
    Sp ← projection of {Sc} to the n(k)-rank subspace
    SVD of Sp = U Σ V T
    Ψk = U(:, 1 . . . r) with r such that σ(i)/σ(1) > ε, ∀i ∈ [1 . . . r]
    Ar∆tk = ΨkT A∆tk Ψk,  Brk = ΨkT Bk Ψk,  Ckr = ΨkT Ck
    solve [Ar∆tk + Brk] xrk = Ckr + Ar∆tk xrk−1
    xk ≈ Ψk xrk
    compute force Fk
    // Mechanics
    compute displacement yk
    update ∆yk = yk − yk−1
    remesh with yk
end

Algorithm 2: Mesh deformation

Input : snapshot matrix S = [x1, . . . , xm] ∈ R n×m, xk ∈ R n×1
        time steps {tk}, k ∈ [1, ..., K]
        A∆t, B, C, tolerance ε
        m ≤ n snapshot vectors
Output: displacement yk

y0 = initial position, ∆y0 = 0
get initial mesh
SVD of S = U Σ V T
Ψ = U(:, 1 . . . r) with r such that σ(i)/σ(1) > ε, ∀i ∈ [1 . . . r]
// Time resolution
for k ← 1 to K do
    // Magnetics
    generate matrices A∆tk, Bk, Ck
    Ar∆tk = ΨT A∆tk Ψ,  Brk = ΨT Bk Ψ,  Ckr = ΨT Ck
    solve [Ar∆tk + Brk] xrk = Ckr + Ar∆tk xrk−1
    xk ≈ Ψ xrk
    compute force Fk
    // Mechanics
    compute displacement yk
    update ∆yk = yk − yk−1
    deform mesh with yk
end
4 Application example
We consider TWP28: an electrodynamic levitation device consisting of a conducting
cylindrical aluminium plate (σ = 3.47 · 10^7 S/m, m = 0.107 kg, ξ = 1) above two
coaxial exciting coils. The inner and outer coils have 960 and 576 turns respectively.
Note that, if we neglect the elastic force, the equilibrium is reached when the Fem is
1 N. At t = 0, the plate rests above the coils at a distance of 3.8 mm. For t ≥ 0, a time-varying sinusoidal current (20 A, f = 50 Hz) is imposed in the two coils, with the same amplitude and opposite directions [13]. Assuming a translational movement (no rotation and no tilting), we can
use an axisymmetric model. An FE model is generated as the reference and as the origin of the RO models. We have time-stepped 50 periods (100 time steps per period, step size 0.2 ms), a discretization that ensures accuracy and avoids degenerate mesh elements during deformation.
4.1 RO modelling with automatic remeshing full domain
In case of full domain remeshing, the first 1500 time steps (300 ms) of the simulation, that correspond to the first two peaks (2P) in Fig. 4, are included in the snapshot
matrix.
Three POD-based RO models are constructed by retaining the r first singular modes whose singular values are greater than a prescribed error tolerance ε, which is set manually by inspecting the singular-value decay curve of the snapshot matrix (see Table 1). The smaller the prescribed ε, the bigger the size of the RO model (size of RO3 > RO2 > RO1).
Table 1: L2 -relative errors of RO models on levitation height for 2P (automatic remeshing).

RO models    M       ε        rel. error
RO1          1085    10−6     1.25 · 10−1
RO2          1403    10−11    1.03 · 10−2
RO3          1411    10−15    2.45 · 10−6
The displacement and relative error of the full and RO models are shown in Fig. 4.
Accurate results have been achieved with the truncated basis models: RO2 and RO3,
with fixed size per time step M = 1403 and 1411, respectively. This approach is nevertheless completely inefficient, as the maximum number of unknowns in the full model is only 1552.
4.2 RO modelling with mesh deformation of a sub-domain
The choice of the sub-domain to deform the mesh is a non-trivial task: it should
be as small as possible while ensuring a high accuracy. From our reference FE
solution [13], by observing the minimum and maximum levitation height of the
plate, we fixed the sub-domain size along the y−axis between ymin = 1.3 mm and
ymax = 29.3 mm, distances measured from the upper border of the coils. The size
along the x−axis has a minimum equal to the radius of the plate, i.e. r = 65 mm.
This value is however not enough due to fringing effects. We have taken different
[Plot: levitation height (mm) and relative error versus time (ms), 0–500 ms, for the full model and RO1–RO3.]
Fig. 4: Displacement (up) and relative error (down) between full and RO models.
size along the x−axis: 1.5r, 2r, 3r (97.5, 130 and 195 mm), measured from the axis
(Fig. 2). The meshed boxes yield 1921, 1836 and 1780 number of unknowns.
The relative errors in time shown in Table 2 decrease with the increasing subdomain lengths/box sizes considered. We have therefore chosen to further analyse
the RO results obtained with a box length along x of 195 mm (3r). The discretization
is kept constant for all RO models computation.
Table 2: L2 -relative errors of RO models on levitation height for 1P (mesh deform).

sub-domain length (mm)    M = 7          M = 35
97.5                      8.24 · 10−2    6.14 · 10−4
130                       5.71 · 10−2    1.90 · 10−4
195                       4.53 · 10−3    3.73 · 10−5
The first 800 time steps (160 ms) of the simulation, which correspond to the first peak (1P), are taken into the snapshot matrix in order to generate the projection basis Ψ. The snapshot matrix thus includes the most important time-step solutions, which we found to be an adequate selection for approximating the full solution. The basis is then truncated as Ψ = U r (r first columns) by means of a prescribed error tolerance (ε = 10−5, 10−8), yielding RO models of size M = 7 and 35 for 1P.
From Fig. 5, it can be observed that the RO model already shows very good agreement with only M = 7 basis vectors, generated from the snapshot matrix that incorporates the first peak (1P). The accuracy of the RO models does not improve significantly when the following transient peaks (2P) are added to the snapshot matrix, but it certainly improves with M. Hence, with M = 35 the full and RO curves are indistinguishable. The accuracy of the RO models can also be assessed from the L2 -relative error curves.
[Plot: levitation height (mm) and relative error versus time (ms), 0–500 ms, for the full model and the RO models with M = 7 and M = 35 built from 1P and 2P snapshots.]
Fig. 5: Displacement (up) and relative error (down) between full and RO models for
195 mm sub-domain length.
With regard to the computation time (5000 time steps), the RO model with M = 7 can be solved in less than an hour, which is 3.5 times faster than the full-domain automatic remeshing approach; there, the major time-consuming part is projecting Ψ onto a basis of the same dimension as the system coefficient matrices in order to reduce the system at each time step. Note that the computation is not optimized and was performed on a laptop (Intel Core i7-4600U CPU at 2.10 GHz) without any parallelization.
5 Conclusion
In this paper, we have proposed two approaches for POD-based RO models to treat
a magnetodynamic levitation problem: automatic remeshing and mesh deformation
of a sub-domain around a moving body. The RO model is completely inefficient with the automatic remeshing technique, as its computational cost is nearly as expensive as that of the classical approach. The approach with sub-domain deformation, which limits the influence of the movement on the RO model construction, has proved accurate and efficient (low computational cost). We have shown results for three different sub-domain sizes: the bigger the sub-domain, the higher the accuracy. Computationally efficient RO modelling of such parametric models is ongoing research.
References
1. Sabariego, R., Gyselinck, J., Dular, P., Geuzaine, C., Legros W.: Fast multipole acceleration
of the hybrid finite-element/boundary-element analysis of 3-D eddy-current problems. IEEE
Trans. Magn. vol. 40, no. 2, pp. 1278–1281, (2004).
2. Boualem, B., Piriou, F.: Numerical models for rotor cage induction machines using finite
element method. IEEE Trans. on Magn., vol. 34, no. 5, pp. 3202–3205, (1998).
3. Rapetti, F.: An overlapping mortar element approach to coupled magneto-mechanical problems. Math. and Comput. in Sim., vol. 88, no. 8, pp. 1647–1656, (2010).
4. Liu, Z., Liu, S., Mohammed, O. A.: A Practical Method for Building the FE-Based Phase
Variable Model of Single Phase Transformers for Dynamic Simulations. IEEE Trans. Magn.,
vol. 43, no. 4, pp. 1761–1764, (2007).
5. Lee, S. M., Lee, S. H., Choi, H. S., Park, I. H.: Reduced Modeling of Eddy Current-Driven
Electromechanical System Using Conductor Segmentation and Circuit Parameters Extracted
by FEA. IEEE Trans. Mag., vol. 41, no. 5, pp. 1448–1451, (2005).
6. Schilders, W.H., Van der Vorst, H.A. and Rommes, J.: Model order reduction: theory, research
aspects and applications. Springer-Verlag, (2008).
7. Yue, Y., Feng, L., Meuris, P., Schoenmaker, W., Benner, P.; Application of Krylov-type Parametric Model Order Reduction in Efficient Uncertainty Quantification of Electro-thermal Circuit Models. In Proc. of the Prog. In Electromag. Res. Symp. (PIERS), pp. 379–384, (2015).
8. Banagaaya, N., Feng, L., Meuris, P., Schoenmaker, W., Benner, P.; Model order reduction of
an electro-thermal package model. IFAC-PapersOnLine, vol. 48, no. 1, pp. 934–935, (2015).
9. Albunni, M.N., Rischmuller, V., Fritzsche, T., Lohmann, B.: Model-order reduction of moving
nonlinear electromagnetic devices. IEEE Trans. Mag., vol. 44, no. 7, pp. 1822–1829, (2008).
10. Henneron, T., Clénet, S., Model order reduction applied to the numerical study of electrical
motor based on POD method taking into account rotation movement. Int. J. Numer. Model.,
vol. 27, no. 3, pp. 485–494, (2014).
11. Sato, T., Sato, Y., Igarashi, H.: Model order reduction for moving objects: fast simulation of
vibration energy harvesters. COMPEL, vol. 34, no. 5 pp. 1623–1636, (2015).
12. Schmidthäusler, D., Schöps and Clemens, M.: Linear subspace reduction for quasistatic field
simulations to accelerate repeated computations. IEEE Trans. Magn., vol. 50, no. 2, article
# 7010304, (2014).
13. Karl, H., Fetzer, J., Kurz, S., Lehner, G., Rucker, W. M.: Description of TEAM workshop
problem 28: An electrodynamic levitation device. In Proc. of the TEAM Workshop, Graz,
Austria, pp. 48–51, (1997).
14. Henrotte, F., Nicolet, A., Hedia, H., Genon, A., Legros, W.; Modelling of electromechanical
relays taking into account movement and electric circuits. IEEE Trans. Magn. vol. 30, no. 5,
pp. 3236–3239, (1994).
15. Sato, Y., Igarashi, H.: Model reduction of three-dimensional eddy current problems based on
the method of snapshots. IEEE Trans. Magn., vol. 49, no. 5, pp. 1697–1700, (2013).
16. Hasan, MD R., Sabariego, R. V., Geuzaine, C., Paquay, Y.; Proper orthogonal decomposition
versus Krylov subspace methods in reduced-order energy-converter models. In Proc. of IEEE
Inter. Ener. Conf., pp. 1–6, (2016).
17. Parent, G., Dular, P., Ducreux, J.P. and Piriou, F.; Using a Galerkin projection method for
coupled problems. IEEE Trans. Magn., vol. 44, no. 6, pp. 830–833, (2008).
| 5 |
Global parameter identification of stochastic
reaction networks from single trajectories
arXiv:1111.4785v1 [q-bio.MN] 21 Nov 2011
Christian L. Müller† , Rajesh Ramaswamy† , and Ivo F. Sbalzarini
Abstract We consider the problem of inferring the unknown parameters of a
stochastic biochemical network model from a single measured time-course of the
concentration of some of the involved species. Such measurements are available,
e.g., from live-cell fluorescence microscopy in image-based systems biology. In
addition, fluctuation time-courses from, e.g., fluorescence correlation spectroscopy
provide additional information about the system dynamics that can be used to more
robustly infer parameters than when considering only mean concentrations. Estimating model parameters from a single experimental trajectory enables single-cell
measurements and quantification of cell–cell variability. We propose a novel combination of an adaptive Monte Carlo sampler, called Gaussian Adaptation, and efficient exact stochastic simulation algorithms that allows parameter identification
from single stochastic trajectories. We benchmark the proposed method on a linear
and a non-linear reaction network at steady state and during transient phases. In addition, we demonstrate that the present method also provides an ellipsoidal volume
estimate of the viable part of parameter space and is able to estimate the physical
volume of the compartment in which the observed reactions take place.
1 Introduction
Systems biology implies a holistic research paradigm, complementing the reductionist approach to biological organization [16, 15]. This frequently has the goal of
mechanistically understanding the function of biological entities and processes in
interaction with the other entities and processes they are linked to or communicate
with. A formalism to express these links and connections is provided by network
Christian L. Müller, Rajesh Ramaswamy, and Ivo F. Sbalzarini:
Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, CH–
8092 Zurich, Switzerland, e-mail: [email protected], [email protected], [email protected].
† These authors contributed equally to this work.
models of biological processes [4, 1]. Using concepts from graph theory [26] and
dynamic systems theory [44], the organization, dynamics, and plasticity of these
networks can then be studied.
Systems biology models of molecular reaction networks contain a number of
parameters. These are the rate constants of the involved reactions and, if spatiotemporal processes are considered, the transport rates, e.g. diffusion constants, of the
chemical species. In order for the models to be predictive, these parameters need to
be inferred. The process of inferring them from experimental data is called parameter identification. If in addition also the network structure is to be inferred from
data, the problem is called systems identification. Here, we consider the problem of
identifying the parameters of a biochemical reaction network from a single, noisy
measurement of the concentration time-course of some of the involved species.
While this time series can be long, ensemble replicas are not possible, either because the measurements are destructive or one is interested in variations between
different specimens or cells. This is particularly important in molecular systems biology, where cell–cell variations are of interest or large numbers of experimental
replica are otherwise not feasible [45].
This problem is particularly challenging and traditional genomic and proteomic
techniques do not provide single-cell resolution. Moreover, in individual cells the
molecules and chemical reactions can only be observed indirectly. Frequently, fluorescence microscopy is used to observe biochemical processes in single cells. Fluorescently tagging some of the species in the network of interest allows measuring the
spatiotemporal evolution of their concentrations from video microscopy and fluorescence photometry. In addition, fluorescence correlation spectroscopy (FCS) allows
measuring fluctuation time-courses of molecule numbers [23].
Using only a single trajectory of the mean concentrations would hardly allow
identification of network parameters. There could be several combinations of network parameters that lead to the same mean dynamics. A stochastic network model,
however, additionally provides information about the fluctuations of the molecular
abundances. The hope is that there is then only a small region of parameter space that
produces the correct behavior of the mean and the correct spectrum of fluctuations
[31]. Experimentally, fluctuation spectra can be measured at single-cell resolution
using FCS.
The stochastic behavior of biochemical reaction networks can be due to low copy
numbers of the reacting molecules [39, 10]. In addition, biochemical networks may
exhibit stochasticity due to extrinsic noise. This can persist even at the continuum
scale, leading to continuous–stochastic models. Extrinsic noise can, e.g., arise from
environmental variations or variations in how the reactants are delivered into the system. Also measurement uncertainties can be accounted for in the model as extrinsic
noise, modeling our inability to precisely quantify the experimental observables.
We model stochastic chemical kinetics using the chemical master equation
(CME). Using a CME forward model in biological parameter identification amounts
to tracking the evolution of a probability distribution, rather than just a single
function. This prohibits predicting the state of the system and only allows statements about the probability for the system to be in a certain state, hence requiring
sampling-based parameter identification methods. In the stochastic–discrete context, a number of different approaches have been suggested. Boys et al. proposed
a fully Bayesian approach for parameter estimation using an explicit likelihood for
data/model comparison and a Markov Chain Monte Carlo (MCMC) scheme for
sampling [5]. Zechner et al. developed a recursive Bayesian estimation technique
[46, 45] to cope with cell–cell variability in experimental ensembles. Toni and coworkers used an approximate Bayesian computation (ABC) ansatz, as introduced
by Marjoram and co-workers [25], that does not require an explicit likelihood [43].
Instead, sampling is done in a sequential Monte Carlo (or particle filter) framework.
Reinker et al. used a hidden Markov model where the hidden states are the actual
molecule abundances and state transitions model chemical reactions [40]. Inspired
by Prediction Error Methods [24], Cinquemani et al. identified the parameters of a
hybrid deterministic–stochastic model of gene expression from multiple experimental time courses [7]. Randomized optimization algorithms have been used, e.g., by
Koutroumpas et al. who applied a Genetic Algorithm to a hybrid deterministic–
stochastic network model [21]. More recently, Poovathingal and Gunawan used
another global optimization heuristic, the Differential Evolution algorithm [32]. A
variational approach for stochastic two-state systems is proposed by Stock and coworkers based on Maximum Caliber [41], an extension of Jaynes’ Maximum Entropy principle [14] to non-equilibrium systems. If estimates are to be made based
on a single trajectory, the stochasticity of the measurements and of the model leads
to very noisy similarity measures, requiring optimization and sampling schemes that
are robust against noise in the data.
Here, we propose a novel combination of exact stochastic simulations for a
CME forward model and an adaptive Monte Carlo sampling technique, called
Gaussian Adaptation, to address the single-trajectory parameter estimation problem
for monostable stochastic biochemical reaction networks. Evaluations of the CME
model are done using exact partial-propensity stochastic simulation algorithms [35].
Parameter optimization uses Gaussian Adaptation. The method iteratively samples
model parameters from a multivariate normal distribution and evaluates a suitable
objective function that measures the distance between the dynamics of the forward
model output and the experimental measurements. In addition to estimates of the
kinetic parameters in the network, the present method also provides an ellipsoidal
volume estimate of the viable part of parameter space and is able to estimate the
physical volume of the intra-cellular compartment in which the reactions take place.
We assume that quantitative experimental time series of either a transient or the
steady state of the concentrations of some of the molecular species in the network
are available. This can, for example, be obtained from single-cell fluorescence microscopy by translating fluorescence intensities to estimated chemical concentrations. Accurate methods that account for the microscope’s point-spread function
and the camera noise model are available to this end [12, 13, 6]. Additionally, FCS
spectra can be analyzed in order to quantify molecule populations, their intrinsic
fluctuations, and lifetimes [23, 34, 39]. The present approach requires only a single
stochastic trajectory from each cell. Since the forward model is stochastic and only a
single experimental trajectory is used, the objective function needs to robustly mea-
sure closeness between the experimental and the simulated trajectories. We review
previously considered measures and present a new distance function in Sec. 5. First,
however, we set out the formal stochastic framework and problem description below.
We then describe Gaussian Adaptation and its applicability to the current estimation
task. The evaluation of the forward model is outlined in Sec. 4. We consider a cyclic
linear chain as well as a non-linear colloidal aggregation model as benchmark test
cases in Sec. 6 and conclude in Sec. 7.
2 Background and problem statement
We consider a network model of a biochemical system given by M coupled chemical
reactions
∑_{i=1}^{N} ν⁻_{i,j} S_i  --k_j-->  ∑_{i=1}^{N} ν⁺_{i,j} S_i ,    ∀ j = 1, . . . , M     (1)
between N species, where ν⁻ = [ν⁻_{i,j}] and ν⁺ = [ν⁺_{i,j}] are the stoichiometry matrices
of the reactants and products, respectively, and Si is the ith species in the reaction
network. Let ni be the population (molecular copy number) of species Si . The reactions occur in a physical volume Ω and the macroscopic reaction rate of reaction j
is k j . This defines a dynamic system with state n(t) = [ni (t)] and M + 1 parameters
θ = [k1 , . . . , kM , Ω ].
The state of such a system can be interpreted as a realization of a random variable
n(t) that changes over time t. All one can know about the system is the probability
for it to be in a certain state at a certain time t j given the system’s state history, hence
P(n(t_j) | n(t_{j−1}), . . . , n(t_1), n(t_0)) d^N n = Prob{ n(t_j) ∈ [n(t_j), n(t_j) + dn) | n(t_i), i = 0, . . . , j − 1 } .    (2)
A frequently made model assumption, substantiated by physical reasoning, is
that the probability of the current state depends solely on the previous state, i.e.,
P(n(t_j) | n(t_{j−1}), . . . , n(t_1), n(t_0)) = P(n(t_j) | n(t_{j−1})) .    (3)
The system is then modeled as a first-order Markov chain where the state n evolves
as
n(t + ∆t) = n(t) + Ξ(∆t; n,t)    (4)
This is the equation of motion of the system. If n is real-valued, it defines a
continuous–stochastic model in the form of a continuous-state Markov chain. Discrete n, as is the case in chemical kinetics, amount to discrete–stochastic models expressed as discrete-state Markov chains. The Markov propagator Ξ is itself a random variable, distributed with probability distribution Π (ξ | ∆t; n,t) =
P(n + ξ ,t + ∆t | n,t) for the state change ξ . For continuous-state Markov chains,
Π is a continuous probability density function (PDF), for discrete-state Markov
chains a discrete probability distribution. If Π (ξ ) = δ (ξ − ξ 0 ), with δ the Dirac
delta distribution, then the system’s state evolution becomes deterministic with predictable discrete or continuous increments ξ 0 . Deterministic models can hence be
interpreted as a limit case of stochastic models [22].
In chemical kinetics, the probability distribution Π of the Markov propagator
is a linear combination of Poisson distributions with weights given by the reaction
stoichiometry. This leads to the equation of motion for the population n given by
n(t + ∆t) = n(t) + (ν⁺ − ν⁻) (ψ_1, . . . , ψ_M)^T ,    (5)
where ψi ∼ P(ai (n(t))∆t) is a random variable from the Poisson distribution with
rate λ = ai (n(t))∆t. The second term on the right-hand side of Eq. 5 follows a
probability distribution Π (ξ | ∆t; n,t) whose explicit form is analytically intractable
in the general case. The rates a j , j = 1, . . . M, are called the reaction propensities
and are defined as
a_j = [ k_j / Ω^( −1 + ∑_{i'=1}^{N} ν⁻_{i',j} ) ] ∏_{i=1}^{N} ( n_i choose ν⁻_{i,j} ) .    (6)
They depend on the macroscopic reaction rates and the reaction volume and can be
interpreted as the probability rates of the respective reactions. Advancing Eq. 5 with
a ∆t such that more than one reaction event happens per time step yields an approximate simulation of the biochemical network as done in approximate stochastic
simulation algorithms [9, 3].
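A minimal Python sketch of this Poisson update (Eq. 5) is given below; the example network, rate value, and step size in the usage lines are illustrative assumptions, and no protection against negative populations is included.

```python
import numpy as np

def poisson_leap(n, nu_minus, nu_plus, propensities, dt, rng):
    """One step of the update in Eq. 5: every reaction j fires a
    Poisson(a_j(n)*dt) number of times during the interval dt."""
    a = propensities(n)                       # propensity vector a(n), length M
    psi = rng.poisson(a * dt)                 # psi_j ~ Poisson(a_j(n) dt)
    return n + (nu_plus - nu_minus) @ psi     # n(t+dt) = n(t) + (nu+ - nu-) psi

# Illustrative usage: a single reaction S1 -> S2 with propensity a = k*n1 (k = 2.0 assumed)
rng = np.random.default_rng(0)
nu_minus = np.array([[1], [0]]); nu_plus = np.array([[0], [1]])
n_next = poisson_leap(np.array([50, 0]), nu_minus, nu_plus,
                      lambda n: np.array([2.0 * n[0]]), dt=0.01, rng=rng)
```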
An alternative approach consists in considering the evolution of the state probability distribution P(n,t | n0 ,t0 ) of the Markov chain described by Eq. 5, hence:
∂P/∂t = ∑_{j=1}^{M} [ ∏_{i=1}^{N} E_i^( ν⁻_{i,j} − ν⁺_{i,j} ) − 1 ] a_j(n(t)) P(n,t)    (7)
with the step operator E_i^p f(n) = f(n + p î) for any function f, where î is the N-dimensional unit vector along the i-th dimension. This equation is called the chemical
master equation (CME). Directly solving it for P is analytically intractable, but
trajectories of the Markov chain governed by the unknown state probability P can
be sampled using exact stochastic simulation algorithms (SSA) [8]. Exact SSAs
are exact in the sense that they sample Markov chain realizations from the exact
solution P of the CME, without ever explicitly computing this solution. Since SSAs
are Monte Carlo algorithms, however, a sampling error remains.
Assuming that the population n increases with the volume Ω , n can be approximated as a continuous random variable in the limit of large volumes, and Eq. 5
becomes
n(t + ∆t) = n(t) + (ν⁺ − ν⁻) (η_1, . . . , η_M)^T ,    (8)
where ηi ∼ N (ai (n(t))∆t, ai (n(t))∆t) are normally distributed random variables.
The second term on the right-hand side of this equation is a random variable that is
distributed according to the corresponding Markov propagator Π (ξ | ∆t; n,t), which
is a Gaussian. Equation 8 is called the chemical Langevin equation with Π given by
Π(ξ | ∆t; n,t) = (2π)^(−N/2) |Σ|^(−1/2) exp( −(1/2) (ξ − µ)^T Σ^(−1) (ξ − µ) ) ,    (9)
where
µ = ∆t (ν⁺ − ν⁻) (a_1(n(t)), . . . , a_M(n(t)))^T    and    Σ = ∆t (ν⁺ − ν⁻) diag(a(n(t))) (ν⁺ − ν⁻)^T .
The corresponding equation for the evolution of the state PDF is the non-linear
Fokker-Planck equation, given by
∂P/∂t = ∇^T [ (1/2) D ∇ − F ] P(n,t) ,    (10)
where
∇ = ( ∂/∂n_1 , . . . , ∂/∂n_N )^T ,    (11)
F_i = lim_{∆t→0} (1/∆t) ∫_{−∞}^{+∞} dξ_i ξ_i Π(ξ | ∆t; n,t) ,    (12)
and
D_{ij} = lim_{∆t→0} (1/∆t) ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} dξ_i dξ_j ξ_i ξ_j Π(ξ | ∆t; n,t) .    (13)
At much larger Ω , when the population n is on the order of Avogadro’s number,
Eq. 8 can be further approximated as
n(t + ∆t) = n(t) + (ν⁺ − ν⁻) (φ_1(n(t))∆t, . . . , φ_M(n(t))∆t)^T ,    (14)
where φ_j(n) = k_j Ω^( 1 − ∑_{i'=1}^{N} ν⁻_{i',j} ) ∏_{i=1}^{N} n_i^(ν⁻_{i,j}) (ν⁻_{i,j}!)^(−1) . Note that the second term on the
right-hand side of this equation is a random variable whose probability distribution
is the Dirac delta
Π(ξ | ∆t; n,t) = δ( ξ − (ν⁺ − ν⁻) (φ_1(n(t))∆t, . . . , φ_M(n(t))∆t)^T ) .    (15)
Equation 14 hence is a deterministic equation of motion. In the limit ∆t → 0 this
equation can be written as the ordinary differential equation
dx/dt = (ν⁺ − ν⁻) (φ_1(x(t)), . . . , φ_M(x(t)))^T    (16)
for the concentration x = n Ω^(−1) . This is the classical reaction rate equation for the
system in Eq. 1.
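As a minimal illustration of this deterministic limit, the sketch below integrates Eq. 16 with an explicit Euler scheme for an assumed two-species interconversion network; the network, rate values, and step size are illustrative only and are not taken from the text.

```python
import numpy as np

def rre_euler(x0, nu_minus, nu_plus, phi, t_end, dt):
    """Explicit-Euler integration of the reaction rate equation dx/dt = (nu+ - nu-) phi(x), Eq. 16."""
    x = np.array(x0, dtype=float)
    S = nu_plus - nu_minus                    # N x M stoichiometry change matrix
    for _ in range(int(t_end / dt)):
        x = x + dt * S @ phi(x)               # one Euler step
    return x

# Illustrative two-species interconversion S1 <-> S2 with fluxes phi = [k1*x1, k2*x2]
nu_minus = np.array([[1, 0], [0, 1]]); nu_plus = np.array([[0, 1], [1, 0]])
x_end = rre_euler([1.0, 0.0], nu_minus, nu_plus,
                  lambda x: np.array([2.0 * x[0], 1.5 * x[1]]), t_end=5.0, dt=1e-3)
```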
By choosing the appropriate probability distribution Π of the Markov propagator,
one can model reaction networks in different regimes: small population n (small Ω )
using SSA over Eq. 7, intermediate population (intermediate Ω ) using Eq. 8, and
large population (large Ω ) using Eq. 16. The complete model definition therefore is
M(θ) = {ν⁻, ν⁺, Π}.
The problem considered here can then be formalized as follows: Given a forward
model M (θ ) and a single noisy trajectory of the population of the chemical species
n̂(t0 + (q − 1)∆texp ) at K discrete time points t = t0 + (q − 1)∆texp , q = 1, . . . , K, we
wish to infer θ = [k1 , . . . , kM , Ω ]. The time between two consecutive measurements
∆texp and the number of measurements K are given by the experimental technique
used. As a forward model we use the full CME as given in Eq. 7 and sample trajectories from it using the partial-propensity formulation of Gillespie’s exact SSA as
described in Sec. 4.
3 Gaussian Adaptation for global parameter optimization,
approximate Bayesian computation, and volume estimation
Gaussian Adaptation (GaA), introduced in the late 1960’s by Gregor Kjellström
[17, 19], is a Monte Carlo technique that has originally been developed to solve
design-centering and optimization problems in analog electric circuit design. Design centering solves the problem of determining the nominal values (resistances,
capacitances, etc.) of the components of a circuit such that the circuit output is
within specified design bounds and is maximally robust against random variations
in the circuit components with respect to a suitable criterion or objective function.
This problem is a superset of general optimization, where one is interested in finding a parameter vector that minimizes (or maximizes) the objective function without
any additional robustness criterion. GaA has been specifically designed for scenarios where the objective function f (θ ) is only available in a black-box (or oracle)
model that is defined on a real-valued domain A ⊆ R^n and returns real-valued output in R. The black-box model assumes that gradients or higher-order derivatives of
the objective function may not exist or may not be available, hence including the
class of discontinuous and noisy functions. The specific black-box function used
here is presented in Sec. 5.
The principal idea behind GaA is the following: Starting from a user-defined
point in parameter space, GaA explores the space by iteratively sampling single
parameter vectors from a multivariate Gaussian distribution N (m, Σ ) whose mean
m ∈ Rn and covariance matrix Σ ∈ Rn×n are dynamically adapted based on the information from previously accepted samples. The acceptance criterion depends on the
specific mode of operation, i.e., whether GaA is used as an optimizer or as a sampler [28, 27]. Adaptation is performed such as to maximize the entropy of the search
distribution under the constraint that acceptable search points are found with a predefined, fixed hitting (success) probability p < 1 [19]. Using the
definition of the entropy of a multivariate Gaussian distribution, H(N) = log √( (2πe)^n det(Σ) ),
shows that this is equivalent to maximizing the determinant of the covariance matrix Σ. GaA thus follows Jaynes’ Maximum Entropy principle [14].
GaA starts by setting the mean m(0) of the multivariate Gaussian to an initial
acceptable point θ (0) and the Cholesky factor Q(0) of the covariance matrix to the
identity matrix I. At each iteration g > 0, the covariance Σ (g) is decomposed as:
Σ^(g) = ( r · Q^(g) ) ( r · Q^(g) )^T = r² Q^(g) (Q^(g))^T , where r is the scalar step size
that controls the scale of the search. The matrix Q(g) is the normalized square root
of Σ (g) , found by eigen- or Cholesky decomposition of Σ (g) . The candidate parameter vector in iteration g + 1 is sampled from a multivariate Gaussian according to
θ (g+1) = m(g) + r(g) Q(g) η (g) , where η (g) ∼ N (0, I). The parameter vector is then
evaluated by the objective function f (θ (g+1) ).
Only if the parameter vector is accepted, the following adaptation rules are applied: The step size r is increased as r(g+1) = fe · r(g) , where fe > 1 is termed the
expansion factor. The mean of the proposal distribution is updated as
m^(g+1) = (1 − 1/N_m) m^(g) + (1/N_m) θ^(g+1) .    (17)
N_m is a weighting factor that controls the learning rate of the method. The successful
search direction d^(g+1) = θ^(g+1) − m^(g) is used to perform a rank-one update of
the covariance matrix: Σ^(g+1) = (1 − 1/N_C) Σ^(g) + (1/N_C) d^(g+1) (d^(g+1))^T . N_C weights the
influence of the accepted parameter vector on the covariance matrix. In order to
decouple the volume of the covariance (controlled by r(g+1) ) from its orientation,
Q(g+1) is normalized such that det(Q(g+1) ) = 1.
In case θ (g+1) is not accepted at the current iteration, only the step size is adapted
as r(g+1) = fc · r(g) , where fc < 1 is the contraction factor.
The behavior of GaA is controlled by several strategy parameters. Kjellström
analyzed the information-theoretic optimality of the acceptance probability p for
GaA in general regions [19]. He concluded that the efficiency E of the process and
p are related as E ∝ −p log p, leading to an optimal p = 1/e ≈ 0.3679, where e is
Euler’s number. A proof is provided in [18]. Maintaining this optimal hitting probability corresponds to leaving the volume of the distribution, measured by det(Σ),
constant under stationary conditions. Since det(Σ) = r^(2n) det(Q Q^T), the expansion
and contraction factors f_e and f_c expand or contract the volume by a factor of f_e^(2n)
and f_c^(2n), respectively. After S accepted and F rejected samples, a necessary condition for constant volume thus is ∏_{i=1}^{S} (f_e)^(2n) ∏_{i=1}^{F} (f_c)^(2n) = 1. Using p = S/(S + F), and
introducing a small β > 0, the choice f_e = 1 + β(1 − p) and f_c = 1 − βp satisfies
the constant-volume condition to first order. The scalar rate β is coupled to N_C. N_C
influences the update of Σ ∈ R^(n×n), which contains n² entries. Hence, N_C should be
related to n². We suggested using N_C = (n + 1)²/log(n + 1) as a standard value, and
coupling β = 1/N_C [29]. A similar reasoning is also applied to N_m. Since N_m influences the update of m ∈ R^n, it is reasonable to set N_m ∝ n. We propose N_m = e·n as
a standard value.
Depending on the specific acceptance rule used, GaA can be turned into a global
optimizer [29], an adaptive MCMC sampler [28, 27], or a volume estimation method
[30], as described next.
3.1 GaA for global black-box optimization
In a minimization scenario, GaA uses an adaptive-threshold acceptance mechanism.
Given an initial scalar cutoff threshold c_T^(0), we accept a parameter vector θ^(g+1) at
iteration g + 1 if f(θ^(g+1)) < c_T^(g). Upon acceptance, the threshold c_T is lowered as
c_T^(g+1) = (1 − 1/N_T) c_T^(g) + (1/N_T) f(θ^(g+1)), where N_T controls the weighting between the
old threshold and the objective-function value of the accepted sample. This sample-dependent threshold update renders the algorithm invariant to linear transformations
of the objective function. The standard strategy parameter value is N_T = e·n [28].
We refer to [28] for further information about convergence criteria and constraint
handling techniques in GaA.
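To make the sampling and adaptation rules of this section concrete, the following Python sketch implements one GaA minimization run with the adaptive-threshold acceptance of Sec. 3.1. It is a minimal sketch using the standard strategy-parameter values quoted in the text; the fixed evaluation budget, the choice of the initial threshold, the eigendecomposition-based matrix square root, and the test function in the final comment are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def gaussian_adaptation_min(f, theta0, max_fes=3000, r0=1.0, seed=0):
    """Minimal GaA minimizer with adaptive-threshold acceptance (Secs. 3 and 3.1)."""
    rng = np.random.default_rng(seed)
    n = len(theta0)
    p = 1.0 / np.e                                    # target hitting probability
    N_T = N_m = np.e * n                              # threshold / mean learning rates
    N_C = (n + 1) ** 2 / np.log(n + 1)                # covariance learning rate
    beta = 1.0 / N_C
    f_e, f_c = 1 + beta * (1 - p), 1 - beta * p       # expansion / contraction factors

    m, r, Q = np.array(theta0, dtype=float), r0, np.eye(n)
    c_T = f(m)                                        # initial threshold: f at start point
    best, best_f = m.copy(), c_T
    for _ in range(max_fes):
        theta = m + r * Q @ rng.standard_normal(n)    # sample from N(m, r^2 Q Q^T)
        f_val = f(theta)
        if f_val < c_T:                               # threshold acceptance
            r *= f_e
            d = theta - m                             # successful search direction
            m = (1 - 1 / N_m) * m + theta / N_m       # mean update, Eq. 17
            c_T = (1 - 1 / N_T) * c_T + f_val / N_T   # threshold update
            Sigma = (1 - 1 / N_C) * (r ** 2) * Q @ Q.T + np.outer(d, d) / N_C
            w, V = np.linalg.eigh(Sigma)              # rank-one updated covariance
            Q = V @ np.diag(np.sqrt(np.maximum(w, 1e-12))) @ V.T
            Q /= np.linalg.det(Q) ** (1.0 / n)        # normalize so that det(Q) = 1
            if f_val < best_f:
                best, best_f = theta.copy(), f_val
        else:
            r *= f_c                                  # contract step size on rejection
    return best, best_f

# e.g. best, val = gaussian_adaptation_min(lambda th: float(np.sum(th ** 2)), np.ones(3))
```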
3.2 GaA for approximate Bayesian computation and viable volume
estimation
Replacing the threshold acceptance-criterion by a probabilistic Metropolis criterion,
and setting Nm = 1, turns GaA into an adaptive MCMC sampler with global adaptive scaling [2]. We termed this method Metropolis-GaA [28, 27]. Its strength is that
GaA can automatically adapt to the covariance of the target probability distribution
while maintaining the fixed hitting probability. For standard MCMC, this cannot be
achieved without fine-tuning the proposal using multiple MCMC runs. We hypothesize that GaA might also be an effective tool for approximate Bayesian computation
(ABC) [43]. In essence, the ABC ansatz is MCMC without an explicit likelihood
function [25]. The likelihood is replaced by a distance function — which plays the
same role as our objective function — that measures closeness between a parameterized model simulation and empirical data D, or summary statistics thereof. When a
uniform prior over the parameters and a symmetric proposal are assumed, a parameter vector in ABC is unconditionally accepted if its corresponding distance function
value f (θ (g+1) ) < cT [25]. The threshold cT is a problem-dependent constant that
is fixed prior to the actual computation. Marjoram and co-workers have shown that
samples obtained in this manner are approximately drawn from the posterior parameter distribution given the data D. While Pritchard et al. used a simple rejection
sampler [33], Marjoram and co-workers proposed a standard MCMC scheme [25].
Toni and co-workers used sequential MC for sample generation [43]. To the best of
our knowledge, however, the present work presents the first application of an adaptive MCMC scheme for ABC in biochemical network parameter inference. Finally,
we emphasize that when GaA’s mean, covariance matrix, and hitting probability p
stabilize during ABC, they provide direct access to an ellipsoidal estimation of the
volume of the viable parameter space as defined by the threshold cT [30]. Hafner
and co-workers have shown how to use such viable volume estimates for model
discrimination [11].
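Relative to the optimizer sketch given after Sec. 3.1, only the acceptance rule and the mean update change in this mode. A stripped-down sketch is shown below; covariance adaptation is omitted for brevity, and the expansion and contraction factors reuse the standard values from above. This is an illustration of the described modifications, not the authors' sampler.

```python
import numpy as np

def gaa_abc(f, theta0, c_T, max_fes=3000, r0=0.1, seed=0):
    """ABC mode of GaA: fixed threshold c_T and N_m = 1, so the proposal mean
    jumps to each accepted sample; accepted samples are returned."""
    rng = np.random.default_rng(seed)
    n = len(theta0)
    p = 1.0 / np.e
    beta = np.log(n + 1) / (n + 1) ** 2               # beta = 1/N_C
    f_e, f_c = 1 + beta * (1 - p), 1 - beta * p
    m, r = np.array(theta0, dtype=float), r0
    samples = []
    for _ in range(max_fes):
        theta = m + r * rng.standard_normal(n)
        if f(theta) < c_T:                            # unconditional acceptance below c_T
            samples.append(theta)
            m, r = theta, r * f_e                     # N_m = 1: mean follows the sample
        else:
            r *= f_c
    return np.array(samples)
```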
4 Evaluation of the forward model
In each iteration of the GaA algorithm, the forward model of the network needs
to be evaluated for the proposed parameter vector θ . This requires an efficient and
exact SSA for the chemical kinetics of the reaction network, used to generate trajectories n(t) from M (θ ). Since GaA could well propose parameter vectors that lead
to low copy numbers for some species, it is important that the SSA be exact since
approximate algorithms are not appropriate at low copy number.
In its original formulation, Gillespie’s SSA has a computational cost that is
linearly proportional to the total number M of reactions in the network. If many
model evaluations are required, as in the present application, this computational cost
quickly becomes prohibitive. While more efficient formulations of SSA have been
developed for weakly coupled reaction networks, their computational cost remains
proportional to M for strongly coupled reaction networks [35]. A reaction network is
weakly coupled if the number of reactions that are influenced by any other reaction
is bounded by a constant. If a network contains at least one reaction whose firing
influences the propensities of a fixed proportion (in the worst case all) of the other
reactions, then the network is strongly coupled [35]. Scale-free networks, which seem to
be characteristic for systems biology models [1, 42], are by definition strongly coupled. This is due to the existence of hubs that have a higher connection probability
than other nodes. These hubs frequently correspond to chemical reactions that produce or consume species that also participate in the majority of the other reactions,
such as water, ATP, or CO2 in metabolic networks.
We use partial-propensity methods [35, 36] to simulate trajectories according to
the solution of the chemical master Eq. 7 of the forward model. Partial-propensity
methods are exact SSAs whose computational cost scales at most linearly with the
number N of species in the network [35]. For large networks, this number is usually much smaller than the number of reactions. Depending on the network model
at hand, different partial-propensity methods are available for its efficient simulation. Strongly coupled networks where the rate constants span only a limited spectrum of values are best simulated with the partial-propensity direct method (PDM)
[35]. Multi-scale networks where the rate constants span many orders of magnitude are most efficiently simulated using the sorting partial-propensity direct method
(SPDM) [35]. Weakly coupled reaction networks can be simulated at constant computational cost using the partial-propensity SSA with composition-rejection sampling (PSSA-CR) [37]. Lastly, reaction networks that include time delays can be
exactly simulated using the delay partial-propensity direct method (dPDM) [38].
Different combinations of the algorithmic modules of partial-propensity methods
can be used to constitute all members of this family of SSAs [36]. We refer to the
original publications for algorithmic details, benchmarks of the computational cost,
and a proof of exactness of partial-propensity methods.
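The partial-propensity algorithms themselves are specified in the cited publications [35–38]; as a self-contained reference point, the sketch below implements the plain direct-method SSA (Gillespie), which also samples exact CME trajectories but at O(M) cost per reaction event. The function and variable names are our own and do not appear in the original text.

```python
import numpy as np

def gillespie_direct(n0, nu_minus, nu_plus, propensities, t_end, rng):
    """Plain direct-method SSA: samples one exact CME trajectory. Cost per
    reaction event is O(M); the partial-propensity methods cited in the text
    reduce this to O(N) or O(1) but are not reproduced here."""
    t, n = 0.0, np.array(n0, dtype=int)
    S = nu_plus - nu_minus                     # N x M stoichiometry change matrix
    times, states = [t], [n.copy()]
    while t < t_end:
        a = propensities(n)
        a0 = a.sum()
        if a0 <= 0:                            # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)         # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)       # index of the firing reaction
        n = n + S[:, j]
        times.append(t)
        states.append(n.copy())
    return np.array(times), np.array(states)
```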
5 Objective function
In the context of parameter identification of stochastic biochemical networks, a
number of distance or objective functions have previously been suggested. Reinker
et al. proposed an approximate maximum-likelihood measure under the assumption
that only a small number of reactions fire between two experimental measurement
points, and a likelihood based on singular value decomposition that works when
many reactions occur per time interval [40]. Koutroumpas et al. compared objective functions based on least squares, normalized cross-correlations, and conditional
probabilities using a Genetic Algorithm [21]. Koeppl and co-workers proposed the
Kantorovich distance to compare experimental and model-based probability distributions [20]. Alternative distance measures include the Earth Mover’s distance or
the Kolmogorov-Smirnov distance [32]. These distance measures, however, can
only be used when many experimental trajectories are available. In order to measure the distance between a single experimental trajectory n̂(t) and a single model
output n(t), we propose a novel cost function f (θ ) = f (M (θ ), n̂) that reasonably
captures the kinetics of a monostable system. We define a compound objective function f (θ ) = f1 (θ ) + f2 (θ ) with
f_1(θ) = ∑_{i=1}^{4} γ_i ,    f_2(θ) = ∑_{i=1}^{N} [ ∑_{l=0}^{z_x} |ACF_l(n̂_i) − ACF_l(n_i)| ] / [ ∑_{l=0}^{z_x} ACF_l(n̂_i) ] ,    (18)
where
γ_i = √( ∑_{j=1}^{N} [ (µ_i(n_j) − µ_i(n̂_j)) / µ_i(n̂_j) ]² )    (19)
with the central moments given by
µ_i(n_j) = ⟨ n_j ⟩  if i = 1 ,   and   µ_i(n_j) = | ⟨ (n_j − µ_1(n_j))^i ⟩ |^(1/i)  otherwise ,    (20)
where ⟨·⟩ denotes the average over the K sampled time points t_0 + (q − 1)∆t_exp , q = 1, . . . , K,
and the time-autocorrelation function (ACF) at lag l given by
ACF_l(n_i) = [ ⟨ n_i(t) n_i(t + l ∆t_exp) ⟩ − (µ_1(n_i))² ] / µ_2(n_i) .
The variable z_x is the lag at which the experimental ACF crosses 0 for the first time.
The function f1 (θ ) measures the difference between the first four moments of n and
n̂. This function alone would, however, not be enough to capture the kinetics since
it lacks information about correlations in time. This is taken into account by f2 (θ ),
measuring the difference in the lifetimes of all chemical species. These lifetimes are
systematically modulated by the volume Ω [39], hence enabling volumetric measurements of intra-cellular reaction compartments along with the identification of
the rate constants.
The present objective function allows inclusion of experimental readouts from
image-based systems biology. The moment-matching part is a typical readout from
fluorescence photometry, whereas the autocorrelation of the fluctuations can directly
be measured using, e.g., FCS.
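A sketch of how the two terms of this objective could be evaluated is given below, specialized to a single observed species for brevity; the moment and ACF estimators are straightforward sample versions of Eqs. 18–20 as reconstructed above and are meant as an illustration, not as the authors' reference implementation.

```python
import numpy as np

def central_moment(x, i):
    """mu_i of Eq. 20 for one species: time average for i = 1,
    |i-th central moment|^(1/i) otherwise."""
    if i == 1:
        return x.mean()
    return np.abs(np.mean((x - x.mean()) ** i)) ** (1.0 / i)

def acf(x, lag):
    """Sample time-autocorrelation at the given lag, normalized by mu_2 as in the text."""
    mu1, mu2 = central_moment(x, 1), central_moment(x, 2)
    return (np.mean(x[:len(x) - lag] * x[lag:]) - mu1 ** 2) / mu2

def objective_single_species(n_sim, n_exp):
    """f = f1 + f2 (Eq. 18) specialized to one observed species;
    n_sim and n_exp are 1-D arrays of equal length."""
    f1 = sum(abs((central_moment(n_sim, i) - central_moment(n_exp, i))
                 / central_moment(n_exp, i)) for i in range(1, 5))
    zx = next((l for l in range(len(n_exp)) if acf(n_exp, l) <= 0), len(n_exp) - 1)
    num = sum(abs(acf(n_exp, l) - acf(n_sim, l)) for l in range(zx + 1))
    den = sum(acf(n_exp, l) for l in range(zx + 1))
    return f1 + num / den
```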
[Figure 1 appears here: panels a–c plot the species populations n̂_1, n̂_2 (and n̂_3 in panel a) against time [s]; see the caption below.]
Fig. 1 In silico data for all test cases. a. Time evolution of the populations of three species in the
cyclic chain model at steady state (starting at t0 = 2000). b. Time evolution of the populations of
two species in the aggregation model at steady state (starting at t0 = 5000). c. Same as b, but during
the transient phase (starting at t0 = 0).
6 Results
We estimate the unknown parameters θ for two reaction networks: a weakly coupled
cyclic chain and a strongly coupled non-linear colloidal aggregation network. For
the cyclic chain we estimate θ at steady state. For the aggregation model we estimate
θ both at steady state and in the transient phase. Every kinetic parameter is allowed
to vary in the interval [10^(−3), 10^3] and the reaction volume Ω in [1, 500]. Each GaA
run starts from a point selected uniformly at random in logarithmic parameter space.
6.1 Weakly coupled reaction network: cyclic chain
The cyclic chain network is given by:
S_i --k_i--> S_{i+1} ,    i = 1, . . . , N − 1 ,
S_i --k_N--> S_1 ,        i = N .    (21)
In this linear network, the number of reactions M is equal to the number of species
N. The maximum degree of coupling of this reaction network is 2, irrespective of the
size of the system (length of the chain), rendering it weakly coupled [35]. We hence
use PSSA-CR to evaluate the forward model with a computational complexity in
O(1) [37]. In the present test case, we limit ourselves to 3 species and 3 reactions,
i.e., N = M = 3. The parameter vector for this case is given by θ = [k1 , k2 , k3 ], since
the kinetics of linear reactions is independent of the volume Ω [39].
We simulate steady-state “experimental” data n̂ using PSSA-CR with ground
truth k1 = 2, k2 = 1.5, k3 = 3.2 (see Fig. 1a). We set the initial population of the
species to n1 (t = 0) = 50, n2 (t = 0) = 50, and n3 (t = 0) = 50 and sample a single
CME trajectory at equi-spaced time points with ∆texp = 0.1 between t = t0 and
t = t0 + (K − 1)∆texp with t0 = 2000 and K = 1001 for each of the 3 species S1 , S2 ,
and S3 . For the generated data we find zx = 7.
We generate trajectories from the forward model for every parameter vector θ
proposed by GaA using PSSA-CR between t = 0 and t = (K − 1)∆texp = 100, starting from the initial population ni (t = 0) = n̂i (t = t0 ).
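As a concrete instance, the cyclic chain (Eq. 21) with the ground-truth rates above can be encoded for the generic direct-method SSA sketch of Sec. 4 as follows; the species-by-reactions array layout matches that earlier sketch, and the function name gillespie_direct refers to it (both are our own conventions, not the authors').

```python
import numpy as np

# Cyclic chain S1 -> S2 -> S3 -> S1 (Eq. 21), N = M = 3, ground truth k = (2, 1.5, 3.2)
k = np.array([2.0, 1.5, 3.2])
nu_minus = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]])              # reactant stoichiometry
nu_plus  = np.array([[0, 0, 1],
                     [1, 0, 0],
                     [0, 1, 0]])              # product stoichiometry
propensities = lambda n: k * n                # first-order propensities a_j = k_j n_j

rng = np.random.default_rng(1)
times, states = gillespie_direct([50, 50, 50], nu_minus, nu_plus, propensities, 100.0, rng)
```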
Before turning to the actual parameter identification, we illustrate the topography
of the objective function landscape for the present example. We fix k3 = 3.2 to its
optimal value and perform a two-dimensional grid sampling for k1 and k2 over the
full search domain. We use 40 logarithmically spaced sample points per parameter,
resulting in 40² = 1600 parameter combinations. For each combination we evaluate the objective function. The resulting landscapes of f_1(θ), f_2(θ), and f(θ) are depicted in
Fig. 2a. Figure 2b shows refined versions around the global optimum. We see that
the moment-matching term f1 (θ ) is largely responsible for the global single-funnel
topology of the landscape. The autocorrelation term f2 (θ ) sharpens the objective
function near the global optimum and renders it locally more isotropic.
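The grid scan described here is straightforward to reproduce. In the sketch below, objective_for_params is a hypothetical stand-in for "simulate the forward model at (k_1, k_2, k_3) and evaluate f(θ)"; it is replaced by a cheap placeholder so the snippet runs on its own, and in practice it would combine the SSA sketch of Sec. 4 with the objective of Sec. 5.

```python
import numpy as np

# Hypothetical stand-in for the simulate-and-score pipeline (placeholder only)
def objective_for_params(k1, k2, k3):
    return (np.log10(k1) - np.log10(2.0)) ** 2 + (np.log10(k2) - np.log10(1.5)) ** 2

k1_grid = np.logspace(-3, 3, 40)              # 40 logarithmically spaced values per axis
k2_grid = np.logspace(-3, 3, 40)
landscape = np.array([[objective_for_params(k1, k2, 3.2) for k1 in k1_grid]
                      for k2 in k2_grid])     # 40 x 40 grid with k3 fixed at 3.2
```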
We perform both global optimization and ABC runs using GaA. In each of the 15
independent optimization runs, the number of function evaluations (FES) is limited
to MAX FES= 1000M = 3000. We set the initial step size to r(0) = 1 and perform all
searches in logarithmic scale of the parameters. Independent restarts from uniformly
random points are performed when the step size r drops below 10^(−4) [29]. For each
of the 15 independent runs, the 30 parameter vectors with the smallest objective
function value are collected and displayed in the box plot shown in the left panel of
Fig. 3a. All 450 collected parameter vectors have objective function values smaller
than 1.6. The resulting data suggest that the present method is able to accurately
determine the correct scale of the kinetic parameters from a single experimental
trajectory, although an overestimation of the rates is apparent.
We use the obtained optimization results for subsequent ABC runs. We conduct
15 independent ABC runs using cT = 2. The starting points for the ABC runs are
selected uniformly at random from the 450 collected parameter vectors in order to
ensure stable initialization. For each run, we again set MAX FES= 1000M = 3000.
The initial step size r(0) is set to 0.1, and the parameters are again explored in logarithmic scale. For all runs we observe rapid convergence of the empirical hitting
[Figure 2 appears here: panels a and b show colour-map landscapes of f_1(θ), f_2(θ), and f(θ) over log10 k_1 and log10 k_2; see the caption below.]
Fig. 2 a. Global objective function landscape for the cyclic chain over the complete search domain
for optimal k3 = 3.2. The three panels from left to right show f1 (θ ), f2 (θ ), and f (θ ), respectively.
b. A refined view of the global objective function landscape near the global optimum. The three
panels from left to right show f1 (θ ), f2 (θ ), and f (θ ), respectively. The white dots mark the true,
optimal parameters.
probability p_emp to the optimal p = 1/e (see Sec. 3). We collect the ABC samples
along with the means and covariances of GaA as soon as |p_emp − p| < 0.05. As an
example we show the histograms of the posterior samples for a randomly selected
run in Fig. 3b. The means of the posterior distributions are again larger than the true
kinetic parameters. Using GaA’s means, covariance matrices, and the corresponding hitting probabilities that generated the posterior samples, we can construct an
ellipsoidal volume estimation [30]. This is done by multiplying each eigenvalue of
the average of the collected covariance matrices with c_{p_emp} = invχ²_n(p_emp), the n-dimensional inverse Chi-square distribution evaluated at the empirical hitting probability. The product of these scaled eigenvalues and the volume of the n-dimensional
unit sphere, |S(n)| = π^(n/2) / Γ(n/2 + 1), then yields the ellipsoid volume with respect to a uniform distribution (see [30] for details). The resulting ellipsoid contains the optimal
kinetic parameter vector and is depicted in the right panel of Fig. 3a. It has a volume
of 0.045 in log-parameter space. This constitutes only 0.0208% of the initial search
space volume, indicating that GaA significantly narrows down the viable parameter
space around the true optimal parameters.
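The volume computation described in this paragraph can be written compactly as follows. The sketch assumes SciPy's chi-squared quantile for the invχ²_n term and takes the ellipsoid semi-axes to be the square roots of the scaled eigenvalues, which is the usual way an ellipsoid volume is obtained from a covariance matrix; this is our reading of the text, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def viable_volume(mean_cov, p_emp):
    """Ellipsoidal volume estimate: scale the eigenvalues of the averaged GaA
    covariance by the chi-squared quantile at the empirical hitting probability,
    then multiply the product of the semi-axes by the unit-ball volume."""
    n = mean_cov.shape[0]
    c = chi2.ppf(p_emp, df=n)                             # inv chi^2_n(p_emp)
    eigvals = np.linalg.eigvalsh(mean_cov)
    unit_ball = np.exp(0.5 * n * np.log(np.pi) - gammaln(0.5 * n + 1))  # pi^(n/2)/Gamma(n/2+1)
    return unit_ball * np.prod(np.sqrt(c * eigvals))
```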
[Figure 3 appears here: panel a shows a box plot over log10 k_i and the estimated ellipsoid in (log10 k_1, log10 k_2, log10 k_3); panel b shows histograms of the posterior samples of log10 k_1, log10 k_2, and log10 k_3; see the caption below.]
Fig. 3 a. Left panel: Box plot of the 30 best parameter vectors from each of the 15 independent
optimization runs. The blue dots mark the true parameter values. Right panel: Ellipsoidal volume
estimation of the parameter space below an objective-function threshold cT = 2 from a single ABC
run. b. Empirical posterior distributions of the kinetic parameters from the same single ABC run
with cT = 2. The red lines indicate the true parameters.
6.2 Strongly coupled reaction network: colloidal aggregation
The colloidal aggregation network is given by:
∅ --k_1^on--> S_1 ,
S_i + S_j --k_ij--> S_{i+j} ,      i + j = 1, . . . , N ,
S_{i+j} --k̄_ij--> S_i + S_j ,      i + j = 1, . . . , N ,
S_i --k_i^off--> ∅ ,               i = 1, . . . , N .    (22)
For this network of N species, the number of reactions is M = ⌊N²/2⌋ + N + 1. The
maximum degree of coupling of this reaction network is proportional to N, rendering the network strongly coupled [35]. We hence use SPDM to evaluate the
forward model with a computational complexity in O(N) [35]. We use SPDM instead of PDM since the search path of GaA is unpredictable and could well generate parameters that lead to multi-scale networks. For this test case, we limit ourselves to two species, i.e., N = 2 and M = 5. The parameter vector for this case is
θ = [k11 , k̄11 , k1on , k1off , k2off , Ω ].
We perform GaA global optimization runs following the same protocol as for the
cyclic chain network with MAX FES = 1000(M + 1) = 6000.
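For concreteness, the two-species aggregation network (Eq. 22) can be encoded for the direct-method SSA sketch of Sec. 4 as below, with propensities written out according to Eq. 6 and the rate values and volume used later in Sec. 6.2.1; the array layout and the function name gillespie_direct follow that earlier sketch and are otherwise our own assumptions.

```python
import numpy as np

# Aggregation network for N = 2 (Eq. 22), M = 5 reactions;
# columns: influx, S1+S1->S2, S2->S1+S1, S1->0, S2->0.
k11, kbar11, k1on, k1off, k2off, Omega = 0.1, 1.0, 2.1, 0.01, 0.1, 15.0
nu_minus = np.array([[0, 2, 0, 1, 0],
                     [0, 0, 1, 0, 1]])
nu_plus  = np.array([[1, 0, 2, 0, 0],
                     [0, 1, 0, 0, 0]])

def propensities(n):
    n1, n2 = n
    return np.array([k1on * Omega,                        # 0 -> S1
                     k11 / Omega * n1 * (n1 - 1) / 2,     # S1 + S1 -> S2
                     kbar11 * n2,                         # S2 -> S1 + S1
                     k1off * n1,                          # S1 -> 0
                     k2off * n2])                         # S2 -> 0

rng = np.random.default_rng(2)
times, states = gillespie_direct([0, 0], nu_minus, nu_plus, propensities, 100.0, rng)
```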
6.2.1 At steady state
We simulate “experimental” data n̂ using SPDM with ground truth k11 = 0.1, k̄11 =
1.0, k1on = 2.1, k1off = 0.01, k2off = 0.1, and Ω = 15 (see Fig. 1b). We set the initial
population of the species to n_1(t = 0) = 0 and n_2(t = 0) = 0, and
sample K = 1001 equi-spaced data points between t = t0 and t = t0 + (K − 1)∆texp
with t0 = 5000 and ∆texp = 0.1.
We generate trajectories from the forward model for every parameter vector θ
proposed by GaA using SPDM between t = 0 and t = (K − 1)∆texp = 100, starting
from the initial population ni (t = 0) = n̂i (t = t0 ).
The optimization results are summarized in the left panel of Fig. 4a. For each
of 15 independent runs, the 30 lowest-objective parameters are collected and shown
in the box plot. We observe that the true parameters corresponding to θ2 = k̄11 ,
θ3 = k1on , θ4 = k1off , and θ5 = k2off are between the 25th and 75th percentiles of the
identified parameters. Both the first parameter and the reaction volume are, on average, overestimated. Upon rescaling the kinetic rate constants with the estimated
volume, we find θ norm = [θ1 /θ6 , θ2 , θ3 θ6 , θ4 , θ5 ], which are the specific probability
rates of the reactions. The identified values are shown in the right panel of Fig. 4a.
The median of the identified θ3norm coincides with the true specific probability rate.
Likewise, θ1norm is closer to the 25th percentile of the parameter distribution. This
suggests a better estimation performance of GaA in the space of specific probability
rates, at the expense of not obtaining an estimate of the reaction volume.
6.2.2 In the transient phase
We simulate “experimental” data in the transient phase of the network dynamics
using the same parameters as above between t = t0 and t = (K − 1)∆texp with t0 =
0, ∆texp = 0.1, and K = 1001 (see Fig. 1c). We evaluate the forward model with
ni (t = 0) = n̂i (t = t0 ) to obtain trajectories between t = 0 and t = (K − 1)∆texp for
every proposed parameter vector θ .
The optimization results for the transient case are summarized in Fig. 4b. We
observe that the true parameters corresponding to θ3 = k1on , θ5 = k2off , and θ6 = Ω
are between the 25th and 75th percentiles of the identified parameters. The remaining
parameters are, on average, overestimated. In the space of rescaled parameters θ norm
we do not observe a significant improvement of the estimation.
[Figure 4 appears here: box plots of log10 θ_i and of the normalized log10 θ_i^norm for the steady-state (a) and transient (b) data sets; see the caption below.]
Fig. 4 a. Left panel: Box plot of the 30 best parameter vectors from each of the 15 independent
optimization runs for the steady-state data set. Right panel: Box plots of the normalized parameters
(see main text for details). b. Left panel: Box plot of the 30 best parameter vectors from each of the
15 independent optimization runs for the transient data set. Right panel: Box plot of the normalized
parameters (see main text for details). The blue dots indicate the true parameter values.
7 Conclusions and Discussion
We have considered parameter estimation of stochastic biochemical networks from
single experimental trajectories. Parameter identification from single time series is
desirable in image-based systems biology, where per-cell estimates of the fluorescence evolution and its fluctuations are available. This enables quantifying cell–cell
variability on the level of network parameters. The histogram of the parameters
identified for different cells provides a biologically meaningful way of assessing
phenotypic variability beyond simple differences in the fluorescence levels.
We have proposed a novel combination of a flexible Monte Carlo method, the
Gaussian Adaptation (GaA) algorithm, and efficient exact stochastic simulation algorithms, the partial-propensity methods. The presented method can be used for
global parameter optimization, approximate Bayesian inference under a uniform
prior, and volume estimation of the viable parameter space. We have introduced an
objective function that measures closeness between a single experimental trajectory
and a single trajectory generated by the forward model. The objective function comprises a moment-matching and a time-autocorrelation part. This allows including
experimental readouts from, e.g., fluorescence photometry and fluorescence correlation spectroscopy.
We have applied the method to estimate the parameters of two monostable reaction networks from a single simulated temporal trajectory each, both at steady
state and during transient phases. We considered the linear cyclic chain network and
a non-linear colloidal aggregation network. For the linear model we were able to
robustly identify a small region of parameter space containing the true kinetic parameters. In the non-linear aggregation model, we could identify several parameter
vectors that fit the simulated experimental data well. There are two possible reasons
for this reduced parameter identifiability: either GaA cannot find the globally optimal region of parameter space due to high ruggedness and noise in the objective
function, or the non-linearity of the aggregation network modulates the kinetics in
a non-trivial way [39, 10]. Both cases are not accounted for in the current objective
function, thus leading to reduced performance for non-linear reaction networks.
We also used GaA as an adaptive MCMC method for approximate Bayesian inference of the posterior parameter distributions in the linear chain network. This enabled estimating the volume of the viable parameter space below a given objectivefunction value threshold. We found these volume estimates to be stable across independent runs. We thus believe that GaA might be a useful tool for exploring the
parameter spaces of stochastic systems.
Future work will include (i) alternative objective functions that include temporal cross-correlations between species and the derivative of the autocorrelation; (ii)
longer experimental trajectories; (iii) multi-stable and oscillatory systems; and (iv)
alternative global optimization schemes. Moreover, the applicability of the present
method to large-scale, non-linear biochemical networks and real-world experimental data will be tested in future work.
Acknowledgements RR was financed by a grant from the Swiss SystemsX.ch initiative (grant
WingX), evaluated by the Swiss National Science Foundation. This project was also supported
with a grant from the Swiss SystemsX.ch initiative, grant LipidX-2008/011, to IFS.
References
1. Albert, R.: Scale-free networks in cell biology. J. Cell Sci. 118(21), 4947–4957 (2005)
2. Andrieu, C., Thoms, J.: A tutorial on adaptive MCMC. Statistics and Computing 18(4), 343–
373 (2008)
3. Auger, A., Chatelain, P., Koumoutsakos, P.: R-leaping: Accelerating the stochastic simulation
algorithm by reaction leaps. J. Chem. Phys. 125, 084,103 (2006)
4. Barabási, A.L., Oltvai, Z.N.: Network biology: understanding the cell’s functional organization. Nat. Rev. Genet. 5(2), 101–113 (2004)
5. Boys, R.J., Wilkinson, D.J., Kirkwood, T.B.L.: Bayesian inference for a discretely observed
stochastic kinetic model. Statistics and Computing 18(2), 125–135 (2008)
6. Cardinale, J., Rauch, A., Barral, Y., Székely, G., Sbalzarini, I.F.: Bayesian image analysis with
on-line confidence estimates and its application to microtubule tracking. In: Proc. IEEE Intl.
Symposium Biomedical Imaging (ISBI), pp. 1091–1094. IEEE, Boston, USA (2009)
7. Cinquemani, E., Milias-Argeitis, A., Summers, S., Lygeros, J.: Stochastic dynamics of genetic
networks: modelling and parameter identification. Bioinformatics 24(23), 2748–2754 (2008)
8. Gillespie, D.T.: A rigorous derivation of the chemical master equation. Physica A 188, 404–
425 (1992)
9. Gillespie, D.T.: Approximate accelerated stochastic simulation of chemically reacting systems. J. Chem. Phys. 115(4), 1716–1733 (2001)
10. Grima, R.: Noise-induced breakdown of the Michaelis-Menten equation in steady-state conditions. Phys. Rev. Lett. 102(21), 218103 (2009). DOI 10.1103/PhysRevLett.102.218103. URL
http://link.aps.org/abstract/PRL/v102/e218103
11. Hafner, M., Koeppl, H., Hasler, M., Wagner, A.: ‘Glocal’ robustness analysis and model discrimination for circadian oscillators. PLoS Comput. Biol. 5(10), e1000,534 (2009)
12. Helmuth, J.A., Burckhardt, C.J., Greber, U.F., Sbalzarini, I.F.: Shape reconstruction of subcellular structures from live cell fluorescence microscopy images. J. Struct. Biol. 167, 1–10
(2009)
13. Helmuth, J.A., Sbalzarini, I.F.: Deconvolving active contours for fluorescence microscopy images. In: Proc. Intl. Symp. Visual Computing (ISVC), Lecture Notes in Computer Science,
vol. 5875, pp. 544–553. Springer, Las Vegas, USA (2009)
14. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106(4), 620–630 (1957).
DOI 10.1103/PhysRev.106.620
15. Kitano, H.: Computational systems biology. Nature 420(6912), 206–210 (2002)
16. Kitano, H.: Systems biology: A brief overview. Science 295(5560), 1662–1664 (2002)
17. Kjellström, G.: Network optimization by random variation of component values. Ericsson
Technics 25(3), 133–151 (1969)
18. Kjellström, G.: On the efficiency of Gaussian Adaptation. J. Optim. Theory Appl. 71(3),
589–597 (1991)
19. Kjellström, G., Taxen, L.: Stochastic optimization in system design. IEEE Trans. Circ. and
Syst. 28(7), 702–715 (1981)
20. Koeppl, H., Setti, G., Pelet, S., Mangia, M., Petrov, T., Peter, M.: Probability metrics to calibrate stochastic chemical kinetics. In: Proc. IEEE Intl. Symp. Circuits and Systems, pp.
541–544. Paris, France (2010)
21. Koutroumpas, K., Cinquemani, E., Kouretas, P., Lygeros, J.: Parameter identification for
stochastic hybrid systems using randomized optimization: A case study on subtilin production by Bacillus subtilis. Nonlinear Anal.: Hybrid Syst. 2(3), 786–802 (2008)
22. Kurtz, T.G.: Relationship between stochastic and deterministic models for chemical reactions.
J. Chem. Phys. 57(7), 2976–2978 (1972)
23. Lakowicz, J.R.: Principles of Fluorescence Spectroscopy. Springer US (2006). DOI
10.1007/978-0-387-46312-4
24. Ljung, L.: Prediction error estimation methods. Circuits, systems, and signal processing 21(1),
11–21 (2002)
25. Marjoram, P., Molitor, J., Plagnol, V., Tavare, S.: Markov chain Monte Carlo without likelihoods. Proc. Natl. Acad. Sci. USA 100(26), 15,324–15,328 (2003)
26. Mason, O., Verwoerd, M.: Graph theory and networks in biology. Systems Biology, IET 1(2),
89–119 (2007)
27. Müller, C.L.: Exploring the common concepts of adaptive MCMC and Covariance Matrix Adaptation schemes. In: A. Auger, J.L. Shapiro, D. Whitley, C. Witt (eds.) Theory of Evolutionary Algorithms, no. 10361 in Dagstuhl Seminar Proceedings. Schloss
Dagstuhl – Leibniz-Zentrum für Informatik, Germany, Dagstuhl, Germany (2010). URL
http://drops.dagstuhl.de/opus/volltexte/2010/2813
28. Müller, C.L., Sbalzarini, I.F.: Gaussian Adaptation as a unifying framework for continuous
black-box optimization and adaptive Monte Carlo sampling. In: Proc. IEEE Congress on
Evolutionary Computation (CEC), pp. 2594–2601. Barcelona, Spain (2010)
29. Müller, C.L., Sbalzarini, I.F.: Gaussian Adaptation revisited — an entropic view on covariance matrix adaptation. In: Proc. EvoStar, Lect. Notes Comput. Sci., vol. 6024, pp. 432–441.
Springer, Istanbul, Turkey (2010)
30. Müller, C.L., Sbalzarini, I.F.: Gaussian Adaptation for robust design centering. In: Proc. EuroGen, Intl. Conf. Evolutionary and Deterministic Methods for Design, Optimization and Control. Capua, Italy (2011)
31. Munsky, B., Trinh, B., Khammash, M.: Listening to the noise: random fluctuations reveal gene
network parameters. Mol. Sys. Biol. 5(1), 318 (2009)
32. Poovathingal, S.K., Gunawan, R.: Global parameter estimation methods for stochastic biochemical systems. BMC Bioinformatics 11(1), 414 (2010)
33. Pritchard, J.K., Seielstad, M.T., Perez-Lezaun, A., Feldman, M.W.: Population growth of human Y chromosomes: A study of Y chromosome microsatellites. Molecular Biology and
Evolution 16(12), 1791–1798 (1999)
34. Qian, H., Elson, E.L.: Fluorescence correlation spectroscopy with high-order and dual-color
correlation to probe nonequilibrium steady states. Proc. Natl. Acad. Sci. USA 101(9), 2828–
2833 (2004)
35. Ramaswamy, R., González-Segredo, N., Sbalzarini, I.F.: A new class of highly efficient exact
stochastic simulation algorithms for chemical reaction networks. J. Chem. Phys. 130(24),
244,104 (2009)
36. Ramaswamy, R., Sbalzarini, I.F.: Fast exact stochastic simulation algorithms using partial
propensities. In: Proc. ICNAAM, Numerical Analysis and Applied Mathematics, International Conference, pp. 1338–1341. AIP (2010)
37. Ramaswamy, R., Sbalzarini, I.F.: A partial-propensity variant of the composition-rejection
stochastic simulation algorithm for chemical reaction networks. J. Chem. Phys. 132(4),
044,102 (2010)
38. Ramaswamy, R., Sbalzarini, I.F.: A partial-propensity formulation of the stochastic simulation
algorithm for chemical reaction networks with delays. J. Chem. Phys. 134, 014,106 (2011)
39. Ramaswamy, R., Sbalzarini, I.F., González-Segredo, N.: Noise-induced modulation of the relaxation kinetics around a non-equilibrium steady state of non-linear chemical reaction networks. PLoS ONE 6(1), e16,045 (2011)
40. Reinker, S., Altman, R.M., Timmer, J.: Parameter estimation in stochastic biochemical reactions. IEE Proc.-Syst. Biol 153(4), 168 (2006)
41. Stock, G., Ghosh, K., Dill, K.A.: Maximum Caliber: A variational approach applied to twostate dynamics. J. Chem. Phys. 128(19), 194,102 (2008)
42. Strogatz, S.H.: Exploring complex networks. Nature 410, 268–276 (2001)
43. Toni, T., Welch, D., Strelkowa, N., Ipsen, A., Stumpf, M.P.H.: Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J. R. Soc.
Interface 6(31), 187–202 (2009)
44. Wolkenhauer, O.: Systems biology: The reincarnation of systems theory applied in biology?
Briefings in Bioinformatics 2(3), 258 (2001)
45. Zechner, C., Ganguly, A., Pelet, S., Peter, M., Koeppl, H.: Accounting for extrinsic variability in the estimation of stochastic rate constants. Intl. J. Robust Nonlinear Control (2011,
submitted)
46. Zechner, C., Pelet, S., Peter, M., Koeppl, H.: Recursive Bayesian estimation of stochastic rate
constants from heterogeneous cell populations. In: Proc. IEEE CDC, Conference on Decision
and Control (2011, submitted)
arXiv:1208.5934v1 [] 29 Aug 2012
ON THE LOCAL COHOMOLOGY MODULES DEFINED BY A
PAIR OF IDEALS AND SERRE SUBCATEGORIES
KH. AHMADI-AMOLI AND M.Y. SADEGHI
Abstract. This paper is concerned with the relation between local cohomology modules defined by a pair of ideals and Serre classes of R-modules, as
a generalization of results of J. Azami, R. Naghipour and B. Vakili (2009)
and M. Asgharzadeh and M.Tousi (2010). Let R be a commutative Noetherian ring, I , J be two ideals of R and M be an R-module. Let a ∈ W̃ (I, J)
and t ∈ N_0 be such that Ext^t_R(R/a, M) ∈ S and Ext^j_R(R/a, H^i_{I,J}(M)) ∈ S
for all i < t and all j ≥ 0. Then for any submodule N of H^t_{I,J}(M) such that
Ext^1_R(R/a, N) ∈ S, we obtain Hom_R(R/a, H^t_{I,J}(M)/N) ∈ S.
1. Introduction
Throughout this paper, R denotes a commutative Noetherian ring, I, J denote
two ideals of R, and M denotes an arbitrary R-module. By N_0, we shall
mean the set of non-negative integers. For basic results, notations and terminologies
not given in this paper, the reader is referred to [7] and [22], if necessary.
As a generalization of the usual local cohomology modules, Takahashi, Yoshino and
Yoshizawa [22] introduced the local cohomology modules with respect to a pair of
ideals (I, J). To be more precise, let W(I, J) = { p ∈ Spec(R) | I^n ⊆ p + J for
some positive integer n } and W̃(I, J) denote the set of ideals a of R such that
I^n ⊆ a + J for some integer n. In general, W(I, J) is closed under specialization,
but not necessarily a closed subset of Spec(R). For an R-module M , we consider
the (I, J)-torsion submodule ΓI,J (M ) of M which consists of all elements x of M
with Supp(Rx) in W (I, J). Furthermore, for an integer i, the local cohomology
functor H^i_{I,J} with respect to (I, J) is defined to be the i-th right derived functor of
Γ_{I,J}. Also H^i_{I,J}(M) is called the i-th local cohomology module of M with respect
to (I, J). If J = 0, then H^i_{I,J} coincides with the ordinary local cohomology functor
H^i_I with the support in the closed subset V(I).
Recently, some authors approached the study of properties of these extended
modules, see for example [9], [10], [19] and [23].
It is well known that an important problem in commutative algebra is to determine when the R-module Hom_R(R/I, H^i_I(M)) is finite. Grothendieck in [14]
conjectured the following:
If R is a Noetherian ring, then for any ideal I of R and any finite R-module M,
the modules Hom_R(R/I, H^i_I(M)) are finite for all i ≥ 0.
2010 Mathematics Subject Classification. Primary 13D45, 13E05.
Key words and phrases. Local cohomology modules defined by a pair of ideals, local cohomology, Goldie dimension, (I, J)-minimax modules, Serre subcategory, (S, I, J)-cominimax modules,
associated primes.
In [15], Hartshorne gave a counterexample to Grothendieck's conjecture and defined the concept of I-cofinite modules to generalize the conjecture. In [6], Brodmann and Lashgari showed that if, for a finite R-module M and an integer t, the local cohomology modules H^0_I(M), H^1_I(M), ..., H^{t−1}_I(M) are finite, then the R-module Hom_R(R/I, H^t_I(M)) is finite and so Ass(H^t_I(M)/N) is a finite set for any finite submodule N of H^t_I(M). A refinement of this result for I-minimax R-modules is as follows, see [4].
Theorem 1.1. Let M be an I-minimax R-module and t be a non-negative integer such that H^i_I(M) is I-minimax for all i < t. Then for any I-minimax submodule N of H^t_I(M), the R-module Hom_R(R/I, H^t_I(M)/N) is I-minimax.
Also, the authors of [1] and [3] studied local cohomology modules by means of Serre subcategories. As a consequence, for an arbitrary Serre subcategory S, the authors of [3] proved the following result.
Theorem 1.2. Let s ∈ N_0 be such that Ext^s_R(R/I, M) ∈ S and Ext^j_R(R/I, H^i_I(M)) ∈ S for all i < s and all j ≥ 0. Let N be a submodule of H^s_I(M) such that Ext^1_R(R/I, N) ∈ S. Then Hom_R(R/I, H^s_I(M)/N) ∈ S.
The aim of the present paper is to generalize the concept of I-cominimax R-modules, introduced in [4], to an arbitrary Serre subcategory S, in order to determine situations in which the R-module Hom_R(R/I, H^i_{I,J}(M)) belongs to S. To this end, we use the methods of [3] and [4]. Our paper consists of four sections, as follows.
In Section 2, by using the concept of (I, J)-relative Goldie dimension, we introduce the (I, J)-minimax R-modules and study some of their properties, see Proposition 2.7.
In Section 3, for an arbitrary Serre subcategory S, we define (S, I, J)-cominimax R-modules. This concept can be considered as a generalization of I-cofinite R-modules [15], I-cominimax R-modules [4], and (I, J)-cofinite R-modules [23]. Also, as a main result of our paper, we prove the following (see Theorem 3.14).
Theorem 1.3. Let a ∈ W̃(I, J). Let t ∈ N_0 be such that Ext^t_R(R/a, M) ∈ S and Ext^j_R(R/a, H^i_{I,J}(M)) ∈ S for all i < t and all j ≥ 0. Then for any submodule N of H^t_{I,J}(M) such that Ext^1_R(R/a, N) ∈ S, we have Hom_R(R/a, H^t_{I,J}(M)/N) ∈ S.
One can see that, by replacing various Serre classes with S and using Theorem 1.3, the main results of [2, Theorem 1.2], [3, Theorem 2.2], [4, Theorem 4.2], [5, Lemma 2.2], [6], [12, Corollary 2.7], [16], [17, Corollary 2.3], and [23, Theorem 3.2] are obtained (see Theorem 3.14 and Corollary 3.15).
At last, in Section 4, as an application of the results of the previous sections, we give the following consequence about the finiteness of associated primes of local cohomology modules (see Proposition 4.1 and Corollary 4.2).
Proposition 1.4. Let t ∈ N_0 be such that Ext^t_R(R/I, M) ∈ S_{I,J} and H^i_{I,J}(M) ∈ C(S_{I,J}, I, J) for all i < t. Let N be a submodule of H^t_{I,J}(M) such that Ext^1_R(R/I, N) belongs to S_{I,J}. If Supp(H^t_{I,J}(M)/N) ⊆ V(I), then Gdim H^t_{I,J}(M)/N < ∞ and so H^t_{I,J}(M)/N has finitely many associated primes; in particular, for N = J H^t_{I,J}(M).
2. Serre Classes And (I, J)-Minimax Modules
Recall that for an R-module H, the Goldie dimension of H is defined as the cardinality of the set of indecomposable submodules of E(H) which appear in a decomposition of E(H) into a direct sum of indecomposable submodules. Therefore, H is said to have finite Goldie dimension if H does not contain an infinite direct sum of non-zero submodules, or equivalently, if the injective hull E(H) of H decomposes as a finite direct sum of indecomposable (injective) submodules. We shall use Gdim H to denote the Goldie dimension of H. For a prime ideal p, let µ^0(p, H) denote the 0-th Bass number of H with respect to p. It is known that µ^0(p, H) > 0 iff p ∈ Ass(H). It is clear from the definition of the Goldie dimension that Gdim H = Σ_{p ∈ Spec(R)} µ^0(p, H) = Σ_{p ∈ Ass(H)} µ^0(p, H). Also, the (I, J)-relative Goldie dimension of H is defined as Gdim_{I,J} H := Σ_{p ∈ W(I,J)} µ^0(p, H) (see [19, Definition 3.1]). If J = 0, then Gdim_{I,J} H = Gdim_I H (see [11, Definition 2.5]); moreover, if I = 0, we obtain Gdim_{I,J} H = Gdim H. It is known that when R is a Noetherian ring, an R-module H is minimax if and only if any homomorphic image of H has finite Goldie dimension (see [13], [25], or [26]). This motivates the definition of (I, J)-minimax modules.
Definition 2.1. An R-module M is said to be minimax with respect to (I, J) or
(I, J)-minimax if the (I, J)-relative Goldie dimension of any quotient module of M
is finite.
Remarks 2.2. By Definition 2.1, it is clear that:
(i) Gdim_I M ≤ Gdim_{I,J} M ≤ Gdim M. These inequalities may be strict (see [19, Definition 3.1]).
(ii) For a Noetherian ring R, an R-module M is minimax iff Gdim M/N < ∞ for any submodule N of M. Therefore, by (i), in the Noetherian case, the class of I-minimax R-modules contains the class of (I, J)-minimax R-modules, which in turn contains the class of minimax R-modules.
Example and Remarks 2.3. It is easy to see that:
(i) Every quotient of a finite module, an Artinian module, a Matlis reflexive module, or a linearly compact module has finite (I, J)-relative Goldie dimension, and so all of these modules are (I, J)-minimax.
(ii) If I = 0, then W(I, J) = Spec(R) = V(I), and so an R-module M is minimax iff it is (I, J)-minimax iff it is I-minimax.
(iii) If J = 0, then W(I, J) = V(I), so that an R-module M is (I, J)-minimax iff it is I-minimax.
(iv) Let M be an I-torsion module. Then, by [11, Lemma 2.6] and [19, Lemma 3.3], M is minimax iff it is (I, J)-minimax iff it is I-minimax.
(v) If M is an (I, J)-torsion module, then M is minimax iff it is (I, J)-minimax (by the definition and [19, Lemma 3.3]); especially when (0) ∈ W̃(I, J) or (0) ∈ W(I, J).
(vi) By [22, Corollary (1.8)(2)], the class of (I, J)-torsion modules is a Serre subcategory of the category of R-modules. Therefore, by part (v), in this category the concept of minimax modules coincides with the concept of (I, J)-minimax modules; especially for the (I, J)-torsion modules H^i_{I,J}(M) (i ≥ 0).
(vii) If Min(M) ⊆ W(I, J) and Gdim_{I,J} M < ∞, then, by [19, Lemma 3.3] and the definition, Gdim M < ∞, and so |Ass(M)| < ∞.
4
KH. AHMADI-AMOLI AND M.Y. SADEGHI
The following proposition shows that the class of (I, J)-minimax R-modules is
a Serre subcategory.
Proposition 2.4. Let 0 → M′ → M → M′′ → 0 be an exact sequence of R-modules. Then M is (I, J)-minimax if and only if M′ and M′′ are both (I, J)-minimax.
Proof. One can obtain the result by a modification of the proof of Proposition 2.3 of [4], with V(I) replaced by W(I, J).
Remark 2.5. Recall that a class S of R-modules is a "Serre subcategory" or "Serre class" of the category of R-modules when it is closed under taking submodules, quotients and extensions. For example, the following classes of R-modules are Serre subcategories:
(a) The class of zero modules.
(b) The class of Noetherian modules.
(c) The class of Artinian modules.
(d) The class of R-modules with finite support.
(e) The class of all R-modules M with dim_R M ≤ n, where n is a non-negative integer.
(f) The class of minimax modules and the class of I-cofinite minimax R-modules (see [18, Corollary 4.4]).
(g) The class of I-minimax R-modules (see [4, Proposition 2.3]).
(h) The class of I-torsion R-modules and the class of (I, J)-torsion R-modules (see [22, Corollary 1.8]).
(i) The class of (I, J)-minimax R-modules (Proposition 2.4).
Notations 2.6. In this paper, the following notations are used for the following Serre subcategories:
"S" for an arbitrary Serre class of R-modules;
"S_0" for the class of minimax R-modules;
"S_I" for the class of I-minimax R-modules;
"S_{I,J}" for the class of (I, J)-minimax R-modules.
Using the above notations and Remarks 2.2, we have S_0 ⊆ S_{I,J} ⊆ S_I.
Now, we exhibit some of the properties of S_{I,J}.
Proposition 2.7. Let I, J, I′, J′ be ideals of R and M be an R-module. Then:
(i) S_{I,J} = S_{√I,J} = S_{√I,√J} = S_{I,√J}.
(ii) If I^n ⊆ √J for some n ∈ N (or equivalently, if R/J is an I-torsion R-module), then S_0 = S_{I,J}.
(iii) If I^n ⊆ √(I′) for some n ∈ N, then S_{I,J} ⊆ S_{I′,J}.
(iv) If J^n ⊆ √(J′) for some n ∈ N, then S_{I,J′} ⊆ S_{I,J}.
(v) If I^n ⊆ √(I′) for some n ∈ N and M is (I′, J)-torsion, then M ∈ S_{I,J} iff M ∈ S_0 iff M ∈ S_{I′,J}.
(vi) If J^n ⊆ √(J′) for some n ∈ N and M is (I, J)-torsion, then M ∈ S_{I,J} iff M ∈ S_0 iff M ∈ S_{I,J′}.
Proof. All these statements follow easily from [22, Propositions 1.4 and 1.6] and Remarks 2.3. As an illustration, we just prove statement (iii).
Let H ∈ S_{I,J}. Since I^n ⊆ √(I′), we have W(√(I′), J) ⊆ W(I, J) by [22, Proposition 1.6]. Now, since H is (I, J)-minimax, the assertion follows from the definition.
Lemma 2.8. (i) If N ∈ S and M is a finite R-module, then for any submodule H of Ext^i_R(M, N) and any submodule T of Tor^R_i(M, N), we have Ext^i_R(M, N)/H ∈ S and Tor^R_i(M, N)/T ∈ S for all i ≥ 0.
(ii) For all i ≥ 0, we have Ext^i_R(R/I, M) ∈ S_{I,J} iff Ext^i_R(R/I, M) ∈ S_0 iff Ext^i_R(R/I, M) ∈ S_I.
Proof. (i) The result follows from [3, Lemma 2.1].
(ii) Since, for all i, Ext^i_R(R/I, M) and Tor^R_i(R/I, M) are (I, J)-torsion R-modules, the assertion holds by Remarks 2.3 (iv).
The following proposition can be thought of as a generalization of Proposition 2.6 of [4], which is the case J = 0 and S = S_I.
Proposition 2.9. Let Min(M) ⊆ W(I, J). If M ∈ S, then H^i_{I,J}(M) ∈ S for all i ≥ 0.
Proof. By the hypothesis and [21, Corollary 1.7], M is an (I, J)-torsion R-module and so H^0_{I,J}(M) = Γ_{I,J}(M) = M. Therefore, H^i_{I,J}(M) = 0 for all i ≥ 1, by [22, Corollary 1.13]. Thus the assertion holds.
Now, we are in a position to prove the main result of this section, which is a generalization of Theorem 2.7 of [4] for S = S_I. Some applications of these results appear in Section 3.
Theorem 2.10. Let M be a finite R-module and N an arbitrary R-module. Let t ∈ N_0. Then the following conditions are equivalent:
(i) Ext^i_R(M, N) ∈ S for all i ≤ t.
(ii) For any finite R-module H with Supp(H) ⊆ Supp(M), Ext^i_R(H, N) ∈ S for all i ≤ t.
Proof. (i)⇒(ii) Since Supp(H) ⊆ Supp(M), according to Gruson's Theorem [24, Theorem 4.1], there exists a chain of submodules of H,
0 = H_0 ⊂ H_1 ⊂ · · · ⊂ H_k = H,
such that the factors H_j/H_{j−1} are homomorphic images of a direct sum of finitely many copies of M. Now, consider the exact sequences
0 → K → M^n → H_1 → 0
0 → H_1 → H_2 → H_2/H_1 → 0
...
0 → H_{k−1} → H_k → H_k/H_{k−1} → 0,
for some positive integer n. Considering the long exact sequences
· · · → Ext^{i−1}_R(H_{j−1}, N) → Ext^i_R(H_j/H_{j−1}, N) → Ext^i_R(H_j, N) → Ext^i_R(H_{j−1}, N) → · · ·
and using an easy induction on k, the assertion follows. So, it suffices to prove the case k = 1. From the exact sequence
0 → K → M^n → H → 0,
where n ∈ N and K is a finite R-module, and the induced long exact sequence, using induction on i, we show that Ext^i_R(H, N) ∈ S for all i. For i = 0, we have the exact sequence
0 → Hom_R(H, N) → Hom_R(M^n, N) → Hom_R(K, N).
Since Hom_R(M^n, N) ≅ ⊕^n Hom_R(M, N), in view of the assumption and Lemma 2.8, Ext^0_R(H, N) ∈ S. Now, let i > 0. We have, for any R-module H with Supp(H) ⊆ Supp(M), that Ext^{i−1}_R(H, N) ∈ S, in particular for K. Now, from the long exact sequence
· · · → Ext^{i−1}_R(K, N) → Ext^i_R(H, N) → Ext^i_R(M^n, N) → · · ·
and Lemma 2.8, we can conclude that Ext^i_R(H, N) ∈ S.
(ii)⇒(i) It is trivial.
Corollary 2.11. Let r ∈ N_0. Then, for any R-module M, the following conditions are equivalent:
(i) Ext^i_R(R/I, M) ∈ S for all i ≤ r.
(ii) For any ideal a of R with a ⊇ I, Ext^i_R(R/a, M) ∈ S for all i ≤ r.
(iii) For any finite R-module N with Supp(N) ⊆ V(I), Ext^i_R(N, M) ∈ S for all i ≤ r.
(iv) For any p ∈ Min(I), Ext^i_R(R/p, M) ∈ S for all i ≤ r.
Proof. In view of Theorem 2.10, it is enough to show that (iv) implies (i). To do this, let p_1, p_2, ..., p_n be the minimal elements of V(I). Then, by assumption, the R-modules Ext^i_R(R/p_j, M) ∈ S for all j = 1, 2, ..., n. Hence, by Lemma 2.8, Ext^i_R(⊕^n_{j=1} R/p_j, M) ≅ ⊕^n_{j=1} Ext^i_R(R/p_j, M) ∈ S. Since Supp(⊕^n_{j=1} R/p_j) = Supp(R/I), it follows from Theorem 2.10 that Ext^i_R(R/I, M) ∈ S, as required.
3. (S, I, J)-Cominimax Modules And H^i_{I,J}(M)
Recall that M is said to be (I, J)-cofinite if M has support in W(I, J) and Ext^i_R(R/I, M) is a finite R-module for each i ≥ 0 (see [23, Definition 2.1]). In fact, this definition is a generalization of I-cofinite modules, which were introduced by Hartshorne in [15]. Considering an arbitrary Serre subcategory of R-modules instead of the class of finitely generated ones, we can give a generalization of (I, J)-cofinite modules as follows.
Definition 3.1. Let R be a Noetherian ring and I, J be two ideals of R. For a Serre subcategory S of the category of R-modules, an R-module M is called (S, I, J)-cominimax precisely when Supp(M) ⊆ W(I, J) and Ext^i_R(R/I, M) ∈ S for all i ≥ 0.
Remark 3.2. By applying various Serre classes of R-modules in Definition 3.1, we may obtain different concepts. In view of [22, Proposition 1.7], the class of (S, I, J)-cominimax R-modules is contained in the class of (I, J)-torsion R-modules. Moreover, for every R-module M and all i ≥ 0, Ext^i_R(R/I, M) is I-torsion, so by Lemma 2.8 (ii), we have Ext^i_R(R/I, M) ∈ S_{I,J} iff Ext^i_R(R/I, M) ∈ S_0. In other words, the class of (S_0, I, J)-cominimax R-modules and the class of (S_{I,J}, I, J)-cominimax R-modules are the same. Also, since Supp(M) ⊆ V(I) implies that Supp(M) ⊆ W(I, J), the class of (S_{I,J}, I, J)-cominimax R-modules contains the class of (S_I, I, J)-cominimax R-modules.
Notation 3.3. For a Serre class S of R-modules and two ideals I, J of R, we use C(S, I, J) to denote the class of all (S, I, J)-cominimax R-modules.
Example and Remark 3.4. (i) Let N ∈ S be such that Supp(N) ⊆ W(I, J). Then it follows from Lemma 2.8 (i) that N ∈ C(S, I, J).
(ii) Let N be a pure submodule of an R-module M. By using the exact sequence 0 → Ext^i_R(R/I, N) → Ext^i_R(R/I, M) → Ext^i_R(R/I, M/N) → 0, which holds for all i ≥ 0 by [20, Theorem 3.65], M ∈ C(S, I, J) iff N, M/N ∈ C(S, I, J); in particular, when S = S_{I,J}.
Proposition 3.5. Let 0 → M′ → M → M′′ → 0 be an exact sequence of R-modules such that two of the modules belong to S. Then the third one is (S, I, J)-cominimax if its support is in W(I, J).
Proof. The assertion follows from the induced long exact sequence
· · · → Ext^i_R(R/I, M) → Ext^i_R(R/I, M′′) → Ext^{i+1}_R(R/I, M′) → Ext^{i+1}_R(R/I, M) → · · ·
and Lemma 2.8 (i).
An immediate consequence of Proposition 3.5 and Lemma 2.8 is as follows.
Corollary 3.6. Let f : M → N be a homomorphism of R-modules such that M, N ∈ S. Let one of the three modules Ker f, Im f and Coker f be in S. Then the other two belong to C(S, I, J) if their supports are in W(I, J).
Proposition 3.7. Let I, J, I′, J′ be ideals of R. Then:
(i) M ∈ C(S, I, J) iff M ∈ C(S, √I, J) iff M ∈ C(S, √I, √J) iff M ∈ C(S, I, √J).
(ii) If M is I-cominimax, then M ∈ C(S_{I,J}, I, J).
(iii) If Min(M) ⊆ W(I′, J), Ext^i_R(R/I, M) ∈ S_{I,J} for all i ≥ 0, and I^n ⊆ √(I′) for some n ∈ N, then M ∈ C(S_{I,J}, I, J) and M ∈ C(S_{I′,J}, I′, J). In particular, if M ∈ C(S_{I,J}, I, J), then we have M ∈ C(S_{I′,J}, I′, J).
(iv) If Min(M) ⊆ W(I, J) and J^n ⊆ √(J′) for some n ∈ N, then M ∈ C(S_{I,J}, I, J) iff M ∈ C(S_{I,J′}, I, J′).
Proof. (i) Since V(I) = V(√I), the assertions follow from [22, Proposition 1.6], Corollary 2.11, and Definition 3.1.
(ii) By assumption and Lemma 2.8 (ii), Supp(M) ⊆ V(I) ⊆ W(I, J) and Ext^i_R(R/I, M) ∈ S_{I,J} for all i ≥ 0.
(iii), (iv) Apply [22, Propositions 1.6 and 1.7], Corollary 2.11 and Proposition 2.7 (iii), (iv).
The following remark plays an important role in the proofs of our main theorems in this section.
Remark 3.8. In view of the proof of [22, Theorem 3.2], Γ_a(M) ⊆ Γ_{I,J}(M) for any a ∈ W̃(I, J). Thus Γ_{I,J}(M) = 0 implies that Γ_a(M) = 0 for all a ∈ W̃(I, J). Now, let M̄ = M/Γ_{I,J}(M) and let E = E_R(M̄) be the injective hull of the R-module M̄. Put L = E/M̄. Since Γ_{I,J}(M̄) = 0, we have Γ_{I,J}(E) = 0, and also Γ_a(M̄) = 0 = Γ_a(E) for any a ∈ W̃(I, J). In particular, the R-module Hom_R(R/a, E) is zero. Now, from the exact sequence 0 → M̄ → E → L → 0, applying Hom_R(R/a, −) and Γ_{I,J}(−), we obtain the isomorphisms
Ext^i_R(R/a, L) ≅ Ext^{i+1}_R(R/a, M̄) and H^i_{I,J}(L) ≅ H^{i+1}_{I,J}(M),
for any a ∈ W̃(I, J) and all i ≥ 0. In particular, Ext^i_R(R/I, L) ≅ Ext^{i+1}_R(R/I, M̄).
Proposition 3.9. Let t ∈ N_0 be such that H^i_{I,J}(M) ∈ C(S, I, J) for all i < t. Then Ext^i_R(R/I, M) ∈ S for all i < t.
Proof. We use induction on t. When t = 0, there is nothing to prove. For t = 1, since Hom_R(R/I, Γ_{I,J}(M)) = Hom_R(R/I, M) and Γ_{I,J}(M) is (S, I, J)-cominimax, the result is true. Now, suppose that t ≥ 2 and the case t − 1 is settled. The exact sequence 0 → Γ_{I,J}(M) → M → M̄ → 0 induces the long exact sequence
· · · → Ext^i_R(R/I, Γ_{I,J}(M)) → Ext^i_R(R/I, M) → Ext^i_R(R/I, M̄) → · · · .
Since Γ_{I,J}(M) ∈ C(S, I, J), we have Ext^i_R(R/I, Γ_{I,J}(M)) ∈ S for all i ≥ 0. Therefore, it is enough to show that Ext^i_R(R/I, M̄) ∈ S for all i < t. For this purpose, let E = E_R(M̄) and L = E/M̄. Now, by Remark 3.8, for all i ≥ 0 we get the isomorphisms H^i_{I,J}(L) ≅ H^{i+1}_{I,J}(M) and Ext^i_R(R/I, L) ≅ Ext^{i+1}_R(R/I, M̄). Now, by assumption, H^{i+1}_{I,J}(M) ∈ C(S, I, J) for all i < t − 1, and so H^i_{I,J}(L) ∈ C(S, I, J) for all i < t − 1. Thus, by the inductive hypothesis, Ext^i_R(R/I, L) ∈ S and so Ext^{i+1}_R(R/I, M̄) ∈ S.
The next corollary generalizes Proposition 3.7 of [4].
Corollary 3.10. Let H^i_{I,J}(M) ∈ C(S, I, J) for all i ≥ 0. Then Ext^i_R(R/I, M) ∈ S for all i ≥ 0; in particular, when S is the class of I-minimax modules or the class of (I, J)-minimax modules.
Proposition 3.8 of [4] can be obtained from the following theorem by taking J = 0 and S = S_I.
Theorem 3.11. Let Ext^i_R(R/I, M) ∈ S for all i ≥ 0 and let t ∈ N_0. If H^i_{I,J}(M) ∈ C(S, I, J) for all i ≠ t, then H^t_{I,J}(M) ∈ C(S, I, J).
Proof. We use induction on t. If t = 0, we must prove that Ext^i_R(R/I, Γ_{I,J}(M)) ∈ S for all i ≥ 0. By the exact sequence
· · · → Ext^{i−1}_R(R/I, M̄) → Ext^i_R(R/I, Γ_{I,J}(M)) → Ext^i_R(R/I, M) → · · ·
and the hypothesis, it is enough to show that Ext^i_R(R/I, M̄) ∈ S for all i ≥ 0. Now, by Remark 3.8 and our assumption, we obtain H^i_{I,J}(L) ∈ C(S, I, J). Therefore, Corollary 3.10 implies that Ext^i_R(R/I, M̄) ∈ S for all i ≥ 0 (note that Ext^0_R(R/I, M̄) = 0). Now suppose, inductively, that t > 0 and the result has been proved for t − 1. By Remark 3.8, it is easy to show that L satisfies our inductive hypothesis. Therefore, the assertion follows from H^t_{I,J}(M) ≅ H^{t−1}_{I,J}(L).
Corollary 3.12. Let M ∈ S and t ∈ N_0 be such that H^i_{I,J}(M) is (S, I, J)-cominimax for all i ≠ t. Then H^t_{I,J}(M) is (S, I, J)-cominimax.
Proof. This is an immediate consequence of Lemma 2.8 (i) and Theorem 3.11.
Corollary 3.13. Let I be a principal ideal and J be an arbitrary ideal of R. Let M ∈ S. Then H^i_{I,J}(M) is (S, I, J)-cominimax for all i ≥ 0.
Proof. For i = 0, since H^0_{I,J}(M) is a submodule of M and M ∈ S, it turns out that H^0_{I,J}(M) is (S, I, J)-cominimax, by Remark 3.4 (i). Now, let I = aR. By [22, Definition 2.2 and Theorem 2.4], we have H^i_{I,J}(M) ≅ H^i(C^•_{I,J} ⊗_R M) = 0 for all i > 1. Therefore, the result follows from Theorem 3.11.
Now we are prepared to prove the main theorem of this section, which is a generalization of one of the main results of [3, Theorem 2.2] and also of [23, Theorem 2.3].
Theorem 3.14. Let a ∈ W̃(I, J). Let t ∈ N_0 be such that Ext^t_R(R/a, M) ∈ S and Ext^j_R(R/a, H^i_{I,J}(M)) ∈ S for all i < t and all j ≥ 0. Then for any submodule N of H^t_{I,J}(M) such that Ext^1_R(R/a, N) ∈ S, we have Hom_R(R/a, H^t_{I,J}(M)/N) ∈ S; in particular, for a = I.
Proof. Considering the long exact sequence
· · · → Hom_R(R/a, H^t_{I,J}(M)) → Hom_R(R/a, H^t_{I,J}(M)/N) → Ext^1_R(R/a, N) → · · · ,
and since Ext^1_R(R/a, N) ∈ S, it is enough to show that Hom_R(R/a, H^t_{I,J}(M)) ∈ S. To do this, we use induction on t. When t = 0, since Hom_R(R/a, Γ_{I,J}(M)) = Hom_R(R/a, M) ∈ S, the result is obtained. Next, we assume that t > 0 and that the claim is true for t − 1. Let M̄ = M/Γ_{I,J}(M). Then, by the long exact sequence
· · · → Ext^j_R(R/a, M) → Ext^j_R(R/a, M̄) → Ext^{j+1}_R(R/a, Γ_{I,J}(M)) → · · ·
and the assumption, we conclude that Ext^j_R(R/a, M̄) ∈ S. Now, using the notation of Remark 3.8, it is easy to see that L satisfies the inductive hypothesis. So we get Hom_R(R/a, H^{t−1}_{I,J}(L)) ∈ S and therefore Hom_R(R/a, H^t_{I,J}(M)) ∈ S, as required.
The main results of [4, Theorem 4.2], [5, Lemma 2.2], [2, Theorem 1.2], [16], [12, Corollary 2.7], and [17, Corollary 2.3] are all special cases of the next corollary, obtained by replacing various Serre classes with S and taking J = 0.
Corollary 3.15. Let t ∈ N_0 be such that Ext^t_R(R/I, M) ∈ S and H^i_{I,J}(M) ∈ C(S, I, J) for all i < t. Then for any submodule N of H^t_{I,J}(M) and any finite R-module M′ with Supp(M′) ⊆ V(I) and Ext^1_R(M′, N) ∈ S, we have Hom_R(M′, H^t_{I,J}(M)/N) ∈ S.
Proof. Apply Theorem 3.14 and Corollary 2.11.
Proposition 3.16. Let t ∈ N_0 be such that H^i_{I,J}(M) ∈ C(S, I, J) for all i < t. Then the following statements hold:
(i) If Ext^t_R(R/I, M) ∈ S, then Hom_R(R/I, H^t_{I,J}(M)) ∈ S.
(ii) If Ext^{t+1}_R(R/I, M) ∈ S, then Ext^1_R(R/I, H^t_{I,J}(M)) ∈ S.
(iii) If Ext^i_R(R/I, M) ∈ S for all i ≥ 0, then Hom_R(R/I, H^{t+1}_{I,J}(M)) ∈ S iff Ext^2_R(R/I, H^t_{I,J}(M)) ∈ S.
Proof. (i) Apply Corollary 3.15 or Theorem 3.14.
(ii) We proceed by induction on t. If t = 0, then by the long exact sequence
(∗)  0 → Ext^1_R(R/I, Γ_{I,J}(M)) → Ext^1_R(R/I, M) → Ext^1_R(R/I, M̄)
       → Ext^2_R(R/I, Γ_{I,J}(M)) → Ext^2_R(R/I, M) → Ext^2_R(R/I, M̄)
       → · · ·
       → Ext^i_R(R/I, Γ_{I,J}(M)) → Ext^i_R(R/I, M) → Ext^i_R(R/I, M̄)
       → Ext^{i+1}_R(R/I, Γ_{I,J}(M)) → · · ·
and Ext^1_R(R/I, M̄) ∈ S, the result follows. Suppose that t > 0 and the assertion is true for t − 1. Since Γ_{I,J}(M) ∈ C(S, I, J), we have Ext^i_R(R/I, Γ_{I,J}(M)) ∈ S for all i ≥ 0, and so, by (∗), Ext^{t+1}_R(R/I, M̄) ∈ S. Now, with the notation of Remark 3.8, it is easy to see that the R-module L satisfies the inductive hypothesis, and so Ext^1_R(R/I, H^{t−1}_{I,J}(L)) ∈ S. Now, the result follows from H^{t−1}_{I,J}(L) ≅ H^t_{I,J}(M).
(iii) (⇒) We use induction on t. Let t = 0. Then, considering the long exact sequence (∗), it is enough to show that Ext^1_R(R/I, M̄) ∈ S. By Remark 3.8, we have
Ext^1_R(R/I, M̄) ≅ Hom_R(R/I, L) ≅ Hom_R(R/I, Γ_{I,J}(L)) ≅ Hom_R(R/I, H^1_{I,J}(M)),
as required. Suppose t > 0 and the assertion is true for t − 1. Since Γ_{I,J}(M) ∈ C(S, I, J), we have Ext^i_R(R/I, Γ_{I,J}(M)) ∈ S for all i ≥ 0. Therefore, the exactness of the sequence (∗) implies that Ext^i_R(R/I, M̄) ∈ S for all i ≥ 0. Again, using the notation of Remark 3.8, we get Ext^i_R(R/I, L) ∈ S for all i ≥ 0, and also Hom_R(R/I, H^t_{I,J}(L)) ≅ Hom_R(R/I, H^{t+1}_{I,J}(M)) ∈ S. Now, by the inductive hypothesis, Ext^2_R(R/I, H^{t−1}_{I,J}(L)) ∈ S and hence Ext^2_R(R/I, H^t_{I,J}(M)) ∈ S, as required.
(⇐) This part can be proved by the same method as (⇒), using induction on t, the exact sequence
Ext^1_R(R/I, M) → Ext^1_R(R/I, M̄) → Ext^2_R(R/I, Γ_{I,J}(M)),
and Remark 3.8.
4. Finiteness Properties Of Associated Primes
In this short section, as applications of the previous sections, we obtain some results about associated prime ideals of local cohomology modules and their finiteness properties.
Proposition 4.1. Let t ∈ N_0 be such that Ext^t_R(R/I, M) ∈ S_{I,J} and H^i_{I,J}(M) ∈ C(S_{I,J}, I, J) for all i < t. Let N be a submodule of H^t_{I,J}(M) such that Ext^1_R(R/I, N) ∈ S_{I,J}. If Supp(H^t_{I,J}(M)/N) ⊆ V(I), then Gdim(H^t_{I,J}(M)/N) < ∞ and so H^t_{I,J}(M)/N has finitely many associated primes.
Proof. By using Theorem 3.14 for the Serre class S_{I,J}, we have Hom_R(R/I, H^t_{I,J}(M)/N) ∈ S_{I,J}. Hence, by Lemma 2.8 (ii), Hom_R(R/I, H^t_{I,J}(M)/N) ∈ S_0, as required.
Corollary 4.2. Let t ∈ N_0 be such that Ext^t_R(R/I, M) and H^i_{I,J}(M) are (I, J)-minimax R-modules for all i < t. Let N be a submodule of H^t_{I,J}(M) such that Supp(H^t_{I,J}(M)/N) ⊆ V(I) and Ext^1_R(R/I, N) is (I, J)-minimax. Then H^t_{I,J}(M)/N has finite Goldie dimension and so Ass(H^t_{I,J}(M)/N) is a finite set; in particular, for N = J H^t_{I,J}(M).
Proof. For the first part, apply Remark 3.4 and Proposition 4.1. Since, by [22, Corollary 1.9], H^t_{I,J}(M)/J H^t_{I,J}(M) is I-torsion, the last part immediately follows from the first.
Corollary 4.3. (See [21, Theorem 4].) Let t ∈ N_0 be such that Ext^t_R(R/I, M) is a finite R-module. If H^i_I(M) is I-cofinite for all i < t and H^t_I(M) is minimax, then H^t_I(M) is I-cofinite and so Ass(H^t_I(M)) is a finite set.
Proof. In Proposition 3.16 (i), take S to be the class of finite R-modules and J = 0. Therefore, Hom_R(R/I, H^t_I(M)) is a finite R-module. Now, use [18, Proposition 3.4].
Corollary 4.4. Let the situation be as in Corollary 4.3. Then the following statements hold:
(i) If Ext^{t+1}_R(R/I, M) is finite, then Hom_R(R/I, H^{t+1}_I(M)) and Ext^1_R(R/I, H^t_I(M)) are finite and so Ass(H^{t+1}_I(M)) is a finite set.
(ii) If Ext^i_R(R/I, M) is finite for all i ≥ 0, then Ext^2_R(R/I, H^t_I(M)) is finite.
Proof. (i) By Corollary 4.3, we conclude that H^t_I(M) is I-cofinite. So H^i_I(M) is I-cofinite for all i < t + 1. Now, use Proposition 3.16 (i), (ii).
(ii) The result follows from (i) and Proposition 3.16 (iii).
References
1. M. Aghapournahr and L. Melkersson, Local cohomology and Serre subcategories, J. Algebra
320 (2008), 1275–1287.
2. J. Asadollahi, K. Khashyarmanesh and S. Salarian, On the finiteness properties of the generalized local cohomology modules, Comm. Alg. 30 (2002), 859–867.
3. M. Asgharzadeh and M. Tousi, A unified approach to local cohomology modules using Serre
classes, Canad. Math. Bull. 53 (2010), 577–586.
4. J. Azami, R. Naghipour and B.Vakili, Finiteness properties of local cohomology modules for
a-minimax modules, Proc. Amer. Math. Soc. 137 (2009), 439–448.
5. K. Bahmanpour and R. Naghipour, On the cofiniteness of local cohomology modules, Proc.
Amer. Math. Soc. 136 (2008), 2359–2363.
6. M.P. Brodmann and A. Lashgari Faghani, A finiteness result for associated primes of local
cohomology modules, Proc. Amer. Math. Soc. 128 (2000), 2851–2853.
7. M.P. Brodmann and R.Y. Sharp, Local cohomology: An algebraic introduction with geometric
applications, Cambridge University Press, (1998).
8. W. Bruns and J. Herzog, Cohen-Macaulay Rings, Cambridge Studies in Advanced Mathematics, Vol.39, Cambridge Univ. Press, Cambridge, UK, (1998).
9. L. Chu, Top local cohomology modules with respect to a pair of ideals, Proc. Amer. Math.
Soc. 139 (2011), 777–782.
10. L. Chu and Q. Wang, Some results on local cohomology modules defined by a pair of ideals,
J. Math. Kyoto Univ. 49 (2009), 193–200.
11. K. Divaani-Aazar and M. A. Esmkhani, Artinianness of local cohomology modules of ZD-modules, Comm. Alg. 33 (2005), 2857–2863.
12. K. Divaani-Aazar and A. Mafi, Associated primes of local cohomology modules, Proc. Amer.
Math. Soc. 133 (2005), 655–660.
13. C. Faith and M. D. Herbera, Endomorphism rings and tensor products of linearly compact
modules, Comm. Alg. 54 (1997), 1215–1255.
14. A. Grothendieck, Cohomologie Locale des Faisceaux Cohérents et Théorèmes de Lefschetz Locaux et Globaux (SGA 2), North-Holland, Amsterdam, (1968).
15. R. Hartshorne, Affine duality and cofiniteness, Invent. Math. 9 (1970), 154–164.
16. K. Khashyarmanesh and S.Salarian, On the associated primes of local cohomology modules,
Comm. Alg. 27 (1999), 6191–6198.
17. B. Lorestani, P. Sahandi and S. Yassemi, Artinian local cohomology modules, to appear in Canad. Math. Bull.
18. L. Melkersson, Modules cofinite with respect to an ideal, J. Algebra. 285 (2005), 649–668.
19. S. Payrovi and M. L. Parsa, Artinianness of local cohomology modules defined by a pair of ideals, to appear in Bull. Malays. Math. Sci. Soc.
12
KH. AHMADI-AMOLI AND M.Y. SADEGHI
20. J.J. Rotman, An Introduction to homological algebra, Academic Press, San Diego, (1979).
21. S. S. Laleh, M.Y. Sadeghi and M. H. Mostaghim, Some results on the cofiniteness of local
cohomology modules, Czechoslovak Mathematical Journal. 62 (2012), 105–110.
22. R. Takahashi, Y. Yoshino and T. Yoshizawa, Local cohomology based on a nonclosed support
defined by a pair of ideals, J. Pure. Appl. Algebra. 213 (2009), 582–600.
23. A. Tehranian and A. P. Talemi, Cofiniteness of local cohomology based on a nonclosed support
defined by a pair of ideals, Bull. Iranian Math. Soc. 36 (2010), 145–155.
24. W. Vasconcelos, Divisor Theory in Module Categories, North-Holland Publishing Company, Amsterdam, (1974).
25. T. Zink, Endlichkeitsbedingungen für Moduln über einem Noetherschen Ring, Math. Nachr. 164 (1974), 239–252.
26. H. Zöschinger, Minimax-moduln, J. Algebra. 102 (1986), 1–32.
Department of Mathematics, Payame Noor university, Tehran, 19585-83133, Iran
Current address: Department of Mathematics, Payame Noor university, Tehran, 19585-83133,
Iran
E-mail address: [email protected]
Department of Mathematics, Payame Noor university, Tehran, 19585-83133, Iran
E-mail address: [email protected]
Using Simulation and Domain Adaptation to Improve
Efficiency of Deep Robotic Grasping
arXiv:1709.07857v2 [cs.LG] 25 Sep 2017
Konstantinos Bousmalis∗,1 , Alex Irpan∗,1 , Paul Wohlhart∗,2 , Yunfei Bai2 , Matthew Kelcey1 , Mrinal Kalakrishnan2 ,
Laura Downs1 , Julian Ibarz1 , Peter Pastor2 , Kurt Konolige2 , Sergey Levine1 , Vincent Vanhoucke1
Abstract— Instrumenting and collecting annotated visual
grasping datasets to train modern machine learning algorithms
can be extremely time-consuming and expensive. An appealing
alternative is to use off-the-shelf simulators to render synthetic
data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated
data often fail to generalize to the real world. We study how
randomized simulated environments and domain adaptation
methods can be extended to train a grasping system to grasp
novel objects from raw monocular RGB images. We extensively
evaluate our approaches with a total of more than 25,000
physical test grasps, studying a range of simulation conditions
and domain adaptation methods, including a novel extension
of pixel-level domain adaptation that we term the GraspGAN.
We show that, by using synthetic data and domain adaptation,
we are able to reduce the number of real-world samples
needed to achieve a given level of performance by up to 50
times, using only randomly generated simulated objects. We
also show that by using only unlabeled real-world data and
our GraspGAN methodology, we obtain real-world grasping
performance without any real-world labels that is similar to
that achieved with 939,777 labeled real-world samples.
I. INTRODUCTION
Grasping is one of the most fundamental robotic manipulation problems. For virtually any prehensile manipulation
behavior, the first step is to grasp the object(s) in question.
Grasping has therefore emerged as one of the central areas
of study in robotics, with a range of methods and techniques
from the earliest years of robotics research to the present day.
A central challenge in robotic manipulation is generalization:
can a grasping system successfully pick up diverse new
objects that were not seen during the design or training of the
system? Analytic or model-based grasping methods [1] can
achieve excellent generalization to situations that satisfy their
assumptions. However, the complexity and unpredictability
of unstructured real-world scenes has a tendency to confound these assumptions, and learning-based methods have
emerged as a powerful complement [2], [3], [4], [5], [6].
Learning a robotic grasping system has the benefit of
generalization to objects with real-world statistics, and can
benefit from the advances in computer vision and deep
learning. Indeed, many of the grasping systems that have
shown the best generalization in recent years incorporate
convolutional neural networks into the grasp selection process [2], [5], [4], [7]. However, learning-based approaches
also introduce a major challenge: the need for large labeled
datasets. These labels might consist of human-provided grasp
*Authors contributed equally, 1 Google Brain, 2 X
Fig. 1: Bridging the reality gap: our proposed pixel-level
domain adaptation model takes as input (a) synthetic images
produced by our simulator and produces (b) adapted images
that look similar to (c) real-world ones produced by the
camera over the physical robot’s shoulder. We then train a
deep vision-based grasping network with adapted and real
images, which we further refine with feature-level adaptation.
points [8], or they might be collected autonomously [5], [6].
In both cases, there is considerable cost in both time and
money, and recent studies suggest that the performance of
grasping systems might be strongly influenced by the amount
of data available [6].
A natural avenue to overcome these data requirements
is to look back at the success of analytic, model-based
grasping methods [1], which incorporate our prior knowledge
of physics and geometry. We can incorporate this prior
knowledge into a learning-based grasping system in two
ways. First, we could modify the design of the system
to use a model-based grasping method, for example as a
scoring function for learning-based grasping [7]. Second, we
could use our prior knowledge to construct a simulator, and
generate synthetic experience that can be used in much the
same way as real experience. The second avenue, which we
explore in this work, is particularly appealing because we
can use essentially the same learning system. However, incorporating simulated images presents challenges: simulated
data differs in systematic ways from real-world data, and
simulation must have sufficiently general objects. Addressing
these two challenges is the principal subject of our work.
Our work has three main contributions. (a) Substantial
improvement in grasping performance from monocular RGB
images by incorporating synthetic data: We propose approaches for incorporating synthetic data into end-to-end
training of vision-based robotic grasping that we show
achieves substantial improvement in performance, particularly in the lower-data and no-data regimes. (b) Detailed
experimentation for simulation-to-real world transfer: Our
experiments involved 25,704 real grasps of 36 diverse test
objects and consider a number of dimensions: the nature of
the simulated objects, the kind of randomization used in simulation, and the domain adaptation technique used to adapt
simulated images to the real world. (c) The first demonstration of effective simulation-to-real-world transfer for purely
monocular vision-based grasping: To our knowledge, our
work is the first to demonstrate successful simulation-to-real-world transfer for grasping, with generalization to previously
unseen natural objects, using only monocular RGB images.
II. RELATED WORK
Robotic grasping is one of the most widely explored
areas of manipulation. While a complete survey of grasping
is outside the scope of this work, we refer the reader to
standard surveys on the subject for a more complete treatment [2]. Grasping methods can be broadly categorized into
two groups: geometric methods and data-driven methods.
Geometric methods employ analytic grasp metrics, such
as force closure [9] or caging [10]. These methods often
include appealing guarantees on performance, but typically
at the expense of relatively restrictive assumptions. Practical applications of such approaches typically violate one
or more of their assumptions. For this reason, data-driven
grasping algorithms have risen in popularity in recent years.
Instead of relying exclusively on an analytic understanding
of the physics of an object, data-driven methods seek to
directly predict either human-specified grasp positions [8]
or empirically estimated grasp outcomes [5], [6]. A number
of methods combine both ideas, for example using analytic
metrics to label training data [3], [7].
Simulation-to-real-world transfer in robotics is an important goal, as simulation can be a source of practically
infinite cheap data with flawless annotations. For this reason,
a number of recent works have considered simulation-toreal world transfer in the context of robotic manipulation.
Saxena et al. [11] used rendered objects to learn a visionbased grasping model. Gulatieri et al. and Viereck et al. [4],
[12] have considered simulation-to-real world transfer using
depth images. Depth images can abstract away many of
the challenging appearance properties of real-world objects.
However, not all situations are suitable for depth cameras,
and coupled with the low cost of simple RGB cameras, there
is considerable value in studying grasping systems that solely
use monocular RGB images.
A number of recent works have also examined using randomized simulated environments [13], [14] for simulation-to-real-world transfer for grasping and grasping-like manipulation tasks, extending prior work on randomization for
robotic mobility [15]. These works apply randomization in
the form of random textures, lighting, and camera position
to their simulator. However, unlike our work, these prior
methods considered grasping in relatively simple visual
environments, consisting of cubes or other basic geometric
shapes, and have not yet been demonstrated on grasping
diverse, novel real-world objects of the kind considered in
our evaluation.
Domain adaptation is a process that allows a machine
learning model trained with samples from a source domain to
generalize to a target domain. In our case the source domain
is the simulation, whereas the target is the real world. There
has recently been a significant amount of work on domain
adaptation, particularly for computer vision [16], [17]. Prior
work can be grouped into two main types: feature-level and
pixel-level adaptation. Feature-level domain adaptation focuses on learning domain-invariant features, either by learning a transformation of fixed, pre-computed features between
source and target domains [18], [19], [20], [21] or by learning
a domain-invariant feature extractor, often represented by a
convolutional neural network (CNN) [22], [23], [24]. Prior
work has shown the latter is empirically preferable on a
number of classification tasks [22], [24]. Domain-invariance
can be enforced by optimizing domain-level similarity metrics like maximum mean discrepancy [24], or the response
of an adversarially trained domain discriminator [22]. Pixel-level domain adaptation focuses on re-stylizing images from
the source domain to make them look like images from
the target domain [25], [26], [27], [28]. To our knowledge,
all such methods are based on image-conditioned generative
adversarial networks (GANs) [29]. In this work, we compare
a number of different domain adaptation regimes. We also
present a new method that combines both feature-level and
pixel-level domain adaptation for simulation-to-real world
transfer for vision-based grasping.
III. BACKGROUND
Our goal in this work is to show the effect of using simulation and domain adaptation in conjunction with a tested
data-driven, monocular vision-based grasping approach. To
this effect, we use such an approach, as recently proposed
by Levine et al. [6]. In this section we will concisely
discuss this approach, and the two main domain adaptation
techniques [22], [26], [27] our method is based on.
A. Deep Vision-Based Robotic Grasping
The grasping approach [6] we use in this work consists of
two components. The first is a grasp prediction convolutional neural network (CNN) C(xi , vi ) that accepts a tuple
of visual inputs xi = {xi0 , xic } and a motion command vi , and
outputs the predicted probability that executing vi will result
in a successful grasp. xi0 is an image recorded before the
robot becomes visible and starts the grasp attempt, and xic
is an image recorded at the current timestep. vi is specified
in the frame of the base of the robot and corresponds to
a relative change of the end-effector’s current position and
rotation about the vertical axis. We consider only top-down
pinch grasps, and the motion command has, thus, 5 dimensions: 3 for position, and 2 for a sine-cosine encoding of the
rotation. The second component of the method is a simple,
manually designed servoing function that uses the grasp
probabilities predicted by C to choose the motor command
vi that will continuously control the robot. We can train
the grasp prediction network C using standard supervised
learning objectives, and so it can be optimized independently
from the servoing mechanism. In this work, we focus on
extending the first component to include simulated data in
the training set for the grasp prediction network C, leaving
the other parts of the system unchanged.
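To make the interface between the two components concrete, the sketch below scores randomly sampled candidate commands with a stand-in for C and greedily picks the best one. The grasp_net callable, the candidate count and the sampling ranges are illustrative assumptions; this is only a simplified stand-in for the manually designed servoing function described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_motor_command(grasp_net, x_0, x_c, n_candidates=64):
    """Greedy servoing sketch: score sampled candidate commands with C.

    grasp_net(x_0, x_c, v) -> predicted grasp success probability stands in
    for the CNN C; the candidate count and the sampling ranges below are
    illustrative assumptions, not the servoing mechanism used in the system.
    """
    # 5-D command: [dx, dy, dz, sin(theta), cos(theta)]; positions assumed in meters.
    theta = rng.uniform(-np.pi, np.pi, size=n_candidates)
    candidates = np.column_stack([
        rng.uniform(-0.05, 0.05, size=(n_candidates, 3)),
        np.sin(theta),
        np.cos(theta),
    ])
    scores = np.array([grasp_net(x_0, x_c, v) for v in candidates])
    return candidates[int(np.argmax(scores))]
```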
The datasets for training the grasp prediction CNN C are
collections of visual episodes of robotic arms attempting to
grasp various objects. Each grasp attempt episode consists of
T time steps which result in T distinct training samples. Each
sample i includes xi , vi , and the success label yi of the entire
grasp sequence. The visual inputs are 640 × 512 images that
are randomly cropped to a 472 × 472 region during training
to encourage translation invariance.
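A minimal sketch of this random-crop augmentation is given below; it assumes the image is stored as a height x width x channel array (e.g. 512 x 640 x 3), which is an assumption on our part since the storage layout is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(image, crop=472):
    """Randomly crop a camera image (assumed H x W x 3) to crop x crop pixels."""
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - crop + 1))
    left = int(rng.integers(0, w - crop + 1))
    return image[top:top + crop, left:left + crop]
```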
The central aim of our work is to compare different
training regimes that combine both simulated and real-world
data for training C. Although we do consider training entirely
with simulated data, as we discuss in Section IV-A, most of
the training regimes we consider combine medium amounts
of real-world data with large amounts of simulated data.
To that end, we use the self-supervised real-world grasping
dataset collected by Levine et al. [6] using 6 physical Kuka
IIWA arms. The goal of the robots was to grasp any object
within a specified goal region. Grasping was performed using
a compliant two-finger gripper picking objects out of a metal
bin, with a monocular RGB camera mounted behind the arm.
The full dataset includes about 1 million grasp attempts on
approximately 1, 100 different objects, resulting in about 9.4
million real-world images. About half of the dataset was
collected using random grasps, and the rest using iteratively
retrained versions of C. Aside from the variety of objects,
each robot differed slightly in terms of wear-and-tear, as well
as the camera pose. The outcome of the grasp attempt was
determined automatically. The particular objects in front of
each robot were regularly rotated to increase the diversity
of the dataset. Some examples of grasping images from the
camera’s viewpoint are shown in Figure 2d.
When trained on the entire real dataset, the best CNN used
in the approach outlined above achieved successful grasps
67.65% of the time. Levine et al. [6] reported an additional
increase to 77.18% from also including 2.7 million images
from a different robot. We excluded this additional dataset
for the sake of a more controlled comparison, so as to avoid
additional confounding factors due to domain shift within
the real-world data. Starting from the Kuka dataset, our
experiments study the effect of adding simulated data and
of reducing the number of real world data points by taking
subsets of varying size (down to only 93, 841 real world
images, which is 1% of the original set).
Fig. 2: Top Row: The setup we used for collecting the (a)
simulated and (b) real-world datasets. Bottom Row: Images
used during training of (c) simulated grasping experience
with procedurally generated objects; and of (d) real-world
experience with a varied collection of everyday physical
objects. In both cases, we see the pairs of image inputs for
our grasp success prediction model C: the images at t = 0
and the images at the current timestamp.
B. Domain Adaptation
As part of our proposed approach we use two domain adaptation techniques: domain-adversarial training and pixel-level domain adaptation. Ganin et al. [22]
introduced domain–adversarial neural networks (DANNs),
an architecture trained to extract domain-invariant yet
expressive features. DANNs were primarily tested in the
unsupervised domain adaptation scenario, in the absence
of any labeled target domain samples, although they also
showed promising results in the semi-supervised regime [24].
Their model’s first few layers are shared by two modules:
the first predicts task-specific labels when provided with
source data while the second is a separate domain classifier
trained to predict the domain d̂ of its inputs. The DANN loss is the cross-entropy loss for the domain prediction task: L_DANN = ∑_{i=0}^{N_s + N_t} [ d_i log d̂_i + (1 − d_i) log(1 − d̂_i) ], where
di ∈ {0, 1} is the ground truth domain label for sample i,
and Ns , Nt are the number of source and target samples.
The shared layers are trained to maximize LDANN , while
the domain classifier is trained adversarially to minimize it.
This minimax optimization is implemented by a gradient
reversal layer (GRL). The GRL has the same output as the
identity function, but negates the gradient during backprop.
This lets us compute the gradient for both the domain clas-
sifier and the shared feature extractor in a single backward
pass. The task loss of interest is simultaneously optimized
with respect to the shared layers, which grounds the shared
features to be relevant to the task.
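The following plain-numpy sketch illustrates these mechanics for a hypothetical linear domain classifier (the classifier actually described in this paper uses two 100-unit layers), using the conventional negated cross-entropy form of the domain loss: the classifier parameters receive the ordinary gradient, while the gradient passed back to the shared features is negated, which is exactly what the gradient reversal layer accomplishes inside an autodiff framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dann_domain_loss_and_grads(features, domain_labels, w, b):
    """Cross-entropy domain loss for a hypothetical linear domain classifier.

    features:      (N, D) activations from the shared feature extractor,
                   with simulated and real samples mixed in one batch.
    domain_labels: (N,) array, 1.0 for real-world samples, 0.0 for simulated.
    w, b:          parameters of the linear domain classifier (assumption).

    Returns the loss, gradients for the classifier parameters, and the
    gradient w.r.t. the features AFTER the gradient reversal layer, i.e.
    already negated, so a single backward pass updates the classifier to
    reduce the loss while pushing the shared features to increase it.
    """
    logits = features @ w + b
    d_hat = sigmoid(logits)
    eps = 1e-8
    loss = -np.mean(domain_labels * np.log(d_hat + eps)
                    + (1.0 - domain_labels) * np.log(1.0 - d_hat + eps))

    dlogits = (d_hat - domain_labels) / len(domain_labels)
    grad_w = features.T @ dlogits
    grad_b = dlogits.sum()
    grad_features = -np.outer(dlogits, w)   # sign flip = gradient reversal
    return loss, (grad_w, grad_b), grad_features
```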
While DANN makes the features extracted from both
domains similar, the goal in pixel-level domain adaptation
[26], [27], [28], [25] is to learn a generator function G
that maps images from a source to a target domain at the
input level. This approach decouples the process of domain
adaptation from the process of task-specific predictions, by
adapting the images from the source domain to make them
appear as if they were sampled from the target domain. Once
the images are adapted, they can replace the source dataset
and the relevant task model can be trained as if no domain
adaptation were required. Although all these methods are
similar in spirit, we use ideas primarily from PixelDA [26]
and SimGAN [27], as they are more suitable for our task.
These models are particularly effective if the goal is to
maintain the semantic map of original and adapted synthetic
images, as the transformations are primarily low-level: the
methods make the assumption that the differences between
the domains are primarily low-level (due to noise, resolution,
illumination, color) rather than high-level (types of objects,
geometric variations, etc).
More formally, let X^s = {x^s_i, y^s_i}_{i=0}^{N^s} represent a dataset of N^s samples from the source domain and let X^t = {x^t_i, y^t_i}_{i=0}^{N^t} represent a dataset of N^t samples from the target domain. The
generator function G(xs ; θ G ) → x f , parameterized by θ G ,
maps a source image xs ∈ Xs to an adapted, or fake, image
x f . This function is learned with the help of an adversary, a
discriminator function D(x; θ D ) that outputs the likelihood d
that a given image x is a real-world sample. Both G and D are
trained using the standard adversarial objective [29]. Given
the learned generator function G, it is possible to create a new
dataset X f = {G(xs ), ys }. Finally, given an adapted dataset
X f , the task-specific model can be trained as if the training
and test data were from the same distribution.
PixelDA was evaluated in simulation-to-real-world transfer. However, the 3D models used by the renderer in [26]
were very high-fidelity scans of the objects in the real-world
dataset. In this work we examine for the first time how
such a technique can be applied in situations where (a) no
3D models for the objects in the real-world are available
and (b) the system is supposed to generalize to yet another
set of previously unseen objects in the actual real-world
grasping task. Furthermore, we use images of 472 × 472,
more than double the resolution in [26], [27]. This makes
learning the generative model G a much harder task and
requires significant changes compared to previous work: the
architecture of both G and D, the GAN training objective,
and the losses that aid with training the generator (contentsimilarity and task losses) are different from the original
implementations, resulting in a novel model evaluated under
these new conditions.
IV. OUR APPROACH
One of the aims of our work is to study how final
grasping performance is affected by the 3D object models
our simulated experience is based on, the scene appearance
and dynamics in simulation, and the way simulated and real
experience is integrated for maximal transfer. In this section
we outline, for each of these three factors, our proposals for
effective simulation-to-real-world transfer for our task.
A. Grasping in Simulation
A major difficulty in constructing simulators for robotic
learning is to ensure diversity sufficient for effective generalization to real-world settings. In order to evaluate simulation-to-real-world transfer, we used one dataset of real-world
grasp attempts (see Sect. III-A), and multiple such datasets in
simulation. For the latter, we built a basic virtual environment
based on the Bullet physics simulator and the simple renderer
that is shipped with it [30]. The environment emulates the
Kuka hardware setup by simulating the physics of grasping
and by rendering what a camera mounted looking over the
Kuka shoulder would perceive: the arm, the bin that contains
the object, and the objects to grasp in scenes similar to the
ones the robot encounters in the real world.
A central question here is regarding the realism of the
3D models used for the objects to grasp. To answer it, we
evaluate two different sources of objects in our experiments:
(a) procedurally generated random geometric shapes and
(b) realistic objects obtained from the publicly-available
ShapeNet [31] 3D model repository. We procedurally generated 1, 000 objects by attaching rectangular prisms at
random locations and orientations, as seen in Fig. 3a. We
then converted the set of prisms to a mesh using an off-the-shelf renderer, Blender, and applied a random level of
smoothing. Each object was given UV texture coordinates
and random colors. For our Shapenet-based datasets, we
used the ShapeNetCore.v2 [31] collection of realistic object
models, shown in Figure 3b. This particular collection contains 51, 300 models in 55 categories of household objects,
furniture, and vehicles. We rescaled each object to a random
graspable size with a maximum extent between 12cm and
23cm (real-world objects ranged from 4cm to 20cm in length
along the longest axis) and gave it a random mass between
10g and 500g, based on the approximate volume of the
object.
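To make the procedural generation step concrete, the sketch below samples the parameters of one such object. The prism count, size and offset ranges, and the smoothing scale are our own assumptions; only the 12-23 cm extent and 10-500 g mass ranges come directly from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_procedural_object(max_prisms=8):
    """Sample parameters for one procedural training object (illustrative only)."""
    n_prisms = int(rng.integers(2, max_prisms + 1))
    prisms = [{
        "half_extents_m": rng.uniform(0.01, 0.05, size=3),   # assumed range
        "offset_m": rng.uniform(-0.05, 0.05, size=3),        # assumed range
        "euler_rpy": rng.uniform(-np.pi, np.pi, size=3),
        "rgb": rng.uniform(0.0, 1.0, size=3),
    } for _ in range(n_prisms)]
    return {
        "prisms": prisms,
        "smoothing_level": float(rng.uniform(0.0, 1.0)),  # scale assumed
        "max_extent_m": float(rng.uniform(0.12, 0.23)),   # 12-23 cm, as in the text
        "mass_kg": float(rng.uniform(0.010, 0.500)),      # 10-500 g, as in the text
    }
```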
Once the models were imported into our simulator, we
collected our simulation datasets via a similar process to
the one in the real world, with a few differences. As
mentioned above, the real-world dataset was collected by
using progressively better grasp prediction networks. These
networks were swapped for better versions manually and
rather infrequently [6]. In contrast to the 6 physical Kuka
IIWA robots that were used to collect data in the real world,
we used 1,000 to 2,000 simulated arms at any given time
to collect our synthetic data, and the models that were used
to collect the datasets were being updated continuously by
an automated process. This resulted in datasets that were
Fig. 3: Comparison of (a) some of our 1, 000 procedural,
(b) some of the 51, 300 ShapeNet objects, both used for data
collection in simulation, and the (c) 36 objects we used only
for evaluating grasping in the real-world, that were not seen
during training. The variety of shapes, sizes, and material
properties makes the test set very challenging.
collected by grasp prediction networks of varying performance, which added diversity to the collected samples. After
training our grasping approach in our simulated environment,
the simulated robots were successful on 70%-90% of the
simulated grasp attempts. Note that all of the grasp success
prediction models used in our experiments were trained from
scratch using these simulated grasp datasets.
B. Virtual Scene Randomization
Another important question is whether randomizing the
visual appearance and dynamics in the scene affects grasp
performance and in what way. One of the first kind of
diversities we considered was the addition of ε cm, where
ε ∼ N (0, 1), to the horizontal components of the motor
command. This improved real grasp success in early experiments, so we added this kind of randomization for all
simulated samples. Adding this noise to real data did not
help. To further study the effects of virtual scene randomization, we built datasets with four different kinds of scene
randomization: (a) No randomization: Similar to real-world
data collection, we only varied camera pose, bin location,
and used 6 different real-world images as backgrounds; (b)
Visual Randomization: We varied tray texture, object texture
and color, robot arm color, lighting direction and brightness;
(c) Dynamics Randomization: We varied object mass, and
object lateral/rolling/spinning friction coefficients; and (d)
All : both visual and dynamics randomization.
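The command-noise randomization mentioned at the start of this subsection can be written compactly as below; the 5-D command layout follows Section III-A, and expressing the positional components in meters is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_motor_command(v):
    """Add N(0, 1) cm noise to the horizontal components of a grasp command.

    v is assumed to be [dx, dy, dz, sin(theta), cos(theta)] with positions in meters.
    """
    v = np.asarray(v, dtype=float).copy()
    v[:2] += rng.normal(0.0, 0.01, size=2)   # 1 cm standard deviation
    return v
```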
C. Domain Adaptation for Vision-Based Grasping
As mentioned in Sect. II, there are two primary types of
methods used for domain adaptation: feature-level, and pixellevel. Here we propose a feature-level adaptation method and
a novel pixel-level one, which we call GraspGAN. Given
original synthetic images, GraspGAN produces adapted images that look more realistic. We subsequently use the trained
generator from GraspGAN as a fixed module that adapts our
synthetic visual input, while performing feature-level domain
adaptation on extracted features that account for both the
transferred images and synthetic motor command input.
For our feature-level adaptation technique we use a DANN
loss on the last convolutional layer of our grasp success
prediction model C, as shown in Fig. 4c. In preliminary
experiments we found that using the DANN loss on this layer
yielded superior performance compared to applying it at the
activations of other layers. We used the domain classifier
proposed in [22]. One of the early research questions we
faced was what the interaction of batch normalization (BN)
[32] with the DANN loss would be, as this has not been
examined in previous work. We use BN in every layer of C
and in a naïve implementation of training models with data from two domains, a setting we call naïve mixing, batch
statistics are calculated without taking the domain labels
of each sample into account. However, the two domains
are bound to have different statistics, which means that
calculating and using them separately for simulated and
real-world data while using the same parameters for C
might be beneficial. We call this way of training data from
two domains domain-specific batch normalization (DBN)
mixing, and show it is a useful tool for domain adaptation,
even when a DANN loss is not used.
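A minimal numpy illustration of DBN mixing for a single layer is given below (training mode only; running statistics and the surrounding network are omitted): per-domain means and variances are used for normalization, while the scale and shift parameters stay shared.

```python
import numpy as np

def domain_specific_batch_norm(x, domain, gamma, beta, eps=1e-5):
    """Domain-specific batch normalization for one layer (training-mode sketch).

    x:      (N, D) activations for a batch mixing simulated and real samples.
    domain: (N,) integers, 0 for simulated and 1 for real samples.
    gamma, beta: shared scale/shift parameters of shape (D,).
    """
    out = np.zeros_like(x, dtype=float)
    for d in (0, 1):
        idx = np.where(domain == d)[0]
        if idx.size == 0:
            continue
        mu = x[idx].mean(axis=0)
        var = x[idx].var(axis=0)
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return gamma * out + beta
```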
In our pixel-level domain adaptation model, GraspGAN, shown in Fig. 4, G is a convolutional neural network
that follows a U-Net architecture [33], and uses average
pooling for downsampling, bilinear upsampling, concatenation and 1 × 1 convolutions for the U-Net skip connections,
and instance normalization [34]. Our discriminator D is a
patch-based [35] CNN with 5 convolutional layers, with an
effective input size of 70 × 70. It is fully convolutional on
3 scales (472 × 472, 236 × 236, and 118 × 118) of the two
input images, xs0 and xsc , stacked into a 6 channel input,
producing domain estimates for all patches which are then
combined to compute the joint discriminator loss. This novel
multi-scale patch-based discriminator design can learn to
assess both global consistency of the generated image, as
well as realism of local textures. Stacking the channels of
the two input images enables the discriminator to recognize
relationships between the two images, so it can encourage
the generator to respect them (e.g., paint the tray with the
same texture in both images, but insert realistic shadows for
the arm). Our task model C is the grasp success prediction
CNN from [6].
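A rough PyTorch-style sketch of such a multi-scale patch discriminator is given below; the filter counts, strides and interpolation details are assumptions for illustration, not the exact architecture of D.

# Illustrative sketch of a multi-scale, patch-based discriminator over a stacked 6-channel input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=6):
        super().__init__()
        chans = [in_channels, 64, 128, 256, 512]
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, 4, stride=2, padding=1), nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(chans[-1], 1, 4, stride=1, padding=1)]  # per-patch logit map
        self.net = nn.Sequential(*layers)

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)           # stack the two images into 6 channels
        scores = []
        for scale in (1.0, 0.5, 0.25):                 # e.g. 472, 236 and 118 pixels
            xi = x if scale == 1.0 else F.interpolate(x, scale_factor=scale, mode="bilinear")
            scores.append(self.net(xi).flatten(1))     # local patch estimates at this scale
        return torch.cat(scores, dim=1)                # combined in the joint discriminator loss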
To train GraspGAN, we employ a least-squares generative adversarial objective (LSGAN) [36] to encourage G
to produce realistic images. During training, our generator
G(xs ; θ G ) → x f maps synthetic images xs to adapted images
x f , by individually passing xs0 and xsc through two instances
of the generator network displayed in Figure 4. Similar
to traditional GAN training, we perform optimization in
alternating steps by minimizing the following loss terms w.r.t.
the parameters of each sub-network:
min_{θ_G}      λ_g L_gen(G, D) + λ_tg L_task(G, C) + λ_c L_content(G)      (1)

min_{θ_D, θ_C} λ_d L_discr(G, D) + λ_td L_task(G, C),                      (2)
where Lgen and Ldiscr are the LSGAN generator and discriminator losses, Ltask is the task loss, Lcontent is the
content-similarity loss, and λg , λd , λtg , λtd , λc , the respective
weights. The LSGAN discriminator loss is the L2 distance
between its likelihood output dˆ and the domain labels d = 0
Fig. 4: Our proposed approach: (a) Overview of our pixel-level domain adaptation model, GraspGAN. Tuples of images
from the simulation xs are fed into the generator G to produce realistic versions x f . The discriminator D gets unlabeled
real world images xt and x f and is trained to distinguish them. Real and adapted images are also fed into the grasp success
prediction network C, trained in parallel (motion commands v are not shown to reduce clutter). G, thus, gets feedback from
D and C to make adapted images look real and maintain the semantic information. (b) Architectures for G and D. Blue boxes
denote convolution/normalization/activation-layers, where n64s2:IN:relu means 64 filters, stride 2, instance normalization
IN and relu activation. Unless specified all convolutions are 3 × 3 in G and 4 × 4 in D. (c) DANN model: C1 has 7 conv
layers and C2 has 9 conv layers. Further details can be found in [6]. Domain classifier uses GRL and two 100 unit layers.
for fake and d = 1 for real images, while for the generator
loss the label is flipped, such that there is a high loss if the
discriminator predicts dˆ = 0 for a generated image. The task
loss measures how well the network C predicts grasp success
on transferred and real examples by calculating the binomial
cross-entropy of the labels yi .
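The following Python snippet is a hedged sketch of these objectives: the LSGAN least-squares terms for D and G plus the binomial cross-entropy task loss, combined as in Eqs. (1) and (2). The helper names and the way the alternating step is written out are illustrative, not our exact training code.

# Sketch of the LSGAN-style generator/discriminator losses and the task loss; illustrative only.
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # L2 distance between the likelihood output and the domain labels (1 = real, 0 = fake).
    return (F.mse_loss(d_real, torch.ones_like(d_real))
            + F.mse_loss(d_fake, torch.zeros_like(d_fake)))

def generator_adv_loss(d_fake):
    # Labels flipped for the generator: high loss if D predicts 0 for a generated image.
    return F.mse_loss(d_fake, torch.ones_like(d_fake))

def task_loss(logits, labels):
    # Binomial cross-entropy of the grasp-success labels y_i on adapted and real samples.
    return F.binary_cross_entropy_with_logits(logits, labels)

# One alternating iteration, assuming G, D, C, the data batches and the loss weights
# (lam_g, lam_tg, lam_c, lam_d, lam_td) are defined elsewhere:
#   generator step (Eq. 1):   lam_g * generator_adv_loss(D(G(xs)))
#                             + lam_tg * task_loss(C(G(xs), v), y) + lam_c * L_content
#   discr./task step (Eq. 2): lam_d * discriminator_loss(D(xt), D(G(xs).detach()))
#                             + lam_td * task_loss(C(x, v), y)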
It is of utmost importance that the GraspGAN generator,
while making the input image look like an image from the
real world scenario, does not change the semantics of the
simulated input, for instance by drawing the robot’s arm or
the objects in different positions. Otherwise, the information
we extract from the simulation in order to train the task network would not correspond anymore to the generated image.
We thus devise several additional loss terms, accumulated in
Lcontent , to help anchor the generated image to the simulated
one on a semantic level. The most straightforward restriction
is to not allow the generated image to deviate much from
the input. To that effect we use the PMSE loss, also used
by [26]. We also leverage the fact that we can have semantic
information about every pixel in the synthetic images by
computing segmentation masks m f of the corresponding
rendered images for the background, the tray, robot arm, and
the objects. We use these masks by training our generator G
to also produce m f as an additional output for each adapted
image, with a standard L2 reconstruction loss. Intuitively, it
forces the generator to extract semantic information about all
the objects in the scene and encode them in the intermediate
latent representations. This information is then available
during the generation of the output image as well. Finally,
we additionally implement a loss term that provides more
dense feedback from the task tower than just the single bit of
information about grasp success. We encourage the generated
image to provide the same semantic information to the task
network as the corresponding simulated one by penalizing
differences in activations of the final convolutional layer of C for the two images. This is similar in principle to the perceptual loss [37] that uses the activations of an ImageNet-pretrained VGG model as a way to anchor the restylization of an input image. In contrast, here C is trained at the same time, the loss is specific to our goal, and it helps preserve the semantics in ways that are relevant to our prediction task.
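A minimal sketch of these content-anchoring terms is shown below; the PMSE form follows [26], while the relative weights and the exact layer choice are assumptions for illustration.

# Sketch of the content-similarity terms (PMSE, mask reconstruction, activation matching).
import torch.nn.functional as F

def pmse_loss(x_fake, x_sim):
    # Pairwise mean squared error (as in [26]): mean squared difference minus the
    # squared mean difference, computed per sample and averaged over the batch.
    diff = (x_fake - x_sim).flatten(1)
    return (diff.pow(2).mean(dim=1) - diff.mean(dim=1).pow(2)).mean()

def mask_loss(pred_mask, target_mask):
    # L2 reconstruction of the segmentation mask m_f emitted as an extra generator output.
    return F.mse_loss(pred_mask, target_mask)

def activation_match_loss(feat_fake, feat_sim):
    # Penalize differences in the final conv-layer activations of C for adapted vs. simulated input.
    return F.mse_loss(feat_fake, feat_sim)

def content_loss(x_fake, x_sim, pred_mask, target_mask, feat_fake, feat_sim,
                 weights=(1.0, 1.0, 1.0)):
    return (weights[0] * pmse_loss(x_fake, x_sim)
            + weights[1] * mask_loss(pred_mask, target_mask)
            + weights[2] * activation_match_loss(feat_fake, feat_sim))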
V. EVALUATION
This section aims to answer the following research questions: (a) does the use of simulated data from a low-quality simulator improve grasping performance in the real world? (b) is the improvement consistent with varying
amounts of real-world labeled samples? (c) how realistic
do graspable objects in simulation need to be? (d) does
randomizing the virtual environment affect simulation-to-real
world transfer, and what are the randomization attributes
that help most? (e) does domain adaptation allow for better
utilization of simulated grasping experience?
In order to answer these questions, we evaluated a number
of different ways for training a grasp success prediction
model C with simulated data and domain adaptation1 . When
simulated data was used, the number of simulated samples
was always approximately 8 million. We follow the grasp
success evaluation protocol described by Levine et al. [6].
We used 6 Kuka IIWA robots for our real-world experiments
1 Visit https://goo.gl/G1HSws for our supplementary video.
Fig. 5: The effect of using 8 million simulated samples
of procedural objects with no randomization and various
amounts of real data, for the best technique in each class.
TABLE I: The effect of our choices for simulated objects
and randomization in terms of grasp success. We compared the performance of models trained jointly on grasps of
procedural vs ShapeNet objects with 10% of the real data.
Models were trained with DANN and DBN mixing.
Randomization   Procedural   ShapeNet
None            71.93%       69.61%
Visual          74.88%       68.79%
Dynamics        73.95%       68.62%
Both            72.86%       69.84%
and a test set consisting of the objects shown in Fig. 3c, the
same used in [6], with 6 different objects in each bin for
each robot. These objects were not included in the real-world
training set and were not used in any way when creating our
simulation datasets. Each robot executes 102 grasps, for a
total of 612 test grasps for each evaluation. During execution,
each robot picks up objects from one side of the bin and
drops them on the other, alternating every 3 grasps. This
prevents the model from repeatedly grasping the same object.
Optimal models were selected by using the accuracy of C on
a held-out validation set of 94,000 real samples.
The first conclusion from our results is that simulated
data from an off-the-shelf simulator always aids in improving vision-based real-world grasping performance. As
one can see in Fig. 5, which shows the real grasp success
gains by incorporating simulated data from our procedurally-generated objects, using our simulated data significantly and
consistently improves real-world performance regardless of
the number of real-world samples.
We also observed that we do not need realistic 3D models
to obtain these gains. We compared the effect of using
random, procedurally-generated shapes and ShapeNet objects
in combination with 10% of the real-world data, under all
randomization scenarios. As shown in Table I we found that
using procedural objects is the better choice in all cases.
This finding has interesting implications for simulation to
real-world transfer, since content creation is often a major
bottleneck in producing generalizable simulated data. Based
on these results, we decided to use solely procedural objects
for the rest of our experiments.
Table III shows our main results: the grasp success performance for different combinations of simulated data generation and domain adaptation methods, and with different
quantities of real-world samples. The different settings are:
Real-Only, in which the model is given only real data; Naïve
Mixing (Naïve Mix): Simulated samples generated with
no virtual scene randomization are mixed with real-world
samples such that half of each batch consists of simulated
images; DBN Mixing & Randomization (Rand.): The simulated dataset is generated with visual-only randomization.
The simulated samples are mixed with real-world samples
as in the naïve mixing case, and the models use DBN; DBN
Mixing & DANN (DANN): Simulated samples are generated
with no virtual scene randomization and the model is trained
with a domain-adversarial method with DBN; DBN Mixing,
DANN & Randomization (DANN-R): Simulated samples are
generated with visual randomization and the model is trained
with a domain-adversarial method with DBN; GraspGAN,
TABLE II: Real grasp performance when no labeled real
examples are available. Method names explained in the text.
Sim-Only   Rand.    GraspGAN
23.53%     35.95%   63.40%
TABLE III: Success of grasping 36 diverse and unseen physical objects of all our methods trained on different amounts
of real-world samples and 8 million simulated samples with
procedural objects. Method names are explained in the text.
Method       All         20%         10%        2%         1%
             9,402,875   1,880,363   939,777    188,094    93,841
Real-Only    67.65%      64.93%      62.75%     35.46%     31.13%
Naïve Mix.   73.63%      69.61%      65.20%     58.38%     39.86%
Rand.        75.58%      70.16%      73.31%     63.61%     50.99%
DANN         76.26%      68.12%      71.93%     61.93%     59.27%
DANN-R.      72.60%      66.46%      74.88%     63.73%     43.81%
GraspGAN     76.67%      74.07%      70.70%     68.51%     59.95%
DBN Mixing & DANN (GraspGAN): The non-randomized
simulated data is first refined with a GraspGAN generator,
and the refined data is used to train a DANN with DBN
mixing. The generator is trained with the same real dataset
size used to train the DANN. See Figure 1b for examples.
Table III shows that using visual randomization with DBN
mixing improved upon the naïve mixing experiments with no randomization across the board. The effect of visual, dynamics, and combined randomization for both procedural and
ShapeNet objects was evaluated by using 10% of the real data
available. Table I shows that using only visual randomization
slightly improved grasp performance for procedural objects,
but the differences were generally not conclusive.
In terms of domain adaptation techniques, our proposed
hybrid approach of combining our GraspGAN and DANN
performs the best in most cases, and shows the most gains
in the lower real-data regimes. Using DANNs with DBN
Mixing performed better than naïve mixing in most cases.
However, the effect of DANNs on randomized data was not
conclusive, as the equivalent models produced worse results
in 3 out of 5 cases. We believe the most interesting results,
however, are the ones from our experiments with no labeled
real data. We compared the best domain adaptation method
(GraspGAN), against a model trained on simulated data with
and without randomization. We trained a GraspGAN on all 9
million real samples, without using their labels. Our grasping
model was then trained only on data refined by G. Results
in Table II show that the unsupervised adaptation model
outperformed not only sim-only models with and without
randomization but also a real-only model with 939,777
labeled real samples.
Although our absolute grasp success numbers are consistent with the ones reported in [6], some previous grasping
work reports higher absolute grasp success. However, we
note the following: (a) our goal in this work is not to
show that we can train the best possible grasping system,
but that for the same amount of real-world data, the inclusion of synthetic data can be helpful; we have relied
on previous work [6] for the grasping approach used; (b)
our evaluation was conducted on a diverse and challenging
range of objects, including transparent bottles, small round
objects, deformable objects, and clutter; and (c) the method
uses only monocular RGB images from an over-the-shoulder
viewpoint, without depth or wrist-mounted cameras. These
make our setup considerably harder than most standard ones.
VI. CONCLUSION
In this paper, we examined how simulated data can be
incorporated into a learning-based grasping system to improve performance and reduce data requirements. We study
grasping from over-the-shoulder monocular RGB images, a
particularly challenging setting where depth information and
analytic 3D models are not available. This presents a challenging setting for simulation-to-real-world transfer, since
simulated RGB images typically differ much more from real
ones compared to simulated depth images. We examine the
effects of the nature of the objects in simulation, of randomization, and of domain adaptation. We also introduce a novel
extension of pixel-level domain adaptation that makes it suitable for use with high-resolution images used in our grasping
system. Our results indicate that including simulated data can
drastically improve the vision-based grasping system we use,
achieving comparable or better performance with 50 times
fewer real-world samples. Our results also suggest that it is
not as important to use realistic 3D models for simulated
training. Finally, our experiments indicate that our method
can provide plausible transformations of synthetic images,
and that including domain adaptation substantially improves
performance in most cases.
Although our work demonstrates very large improvements
in the grasp success rate when training on smaller amounts of
real world data, there are a number of limitations. Both of the
adaptation methods we consider focus on invariance, either
transforming simulated images to look like real images, or
regularizing features to be invariant across domains. These
features incorporate both appearance and action, due to the
structure of our network, but no explicit reasoning about
physical discrepancies between the simulation and the real
world is done. We did consider randomization of dynamics
properties, and show it is indeed important. Several recent
works have looked at adapting to physical discrepancies
explicitly [38], [39], [40], and incorporating these ideas
into grasping is an exciting avenue for future work. Our
approach for simulation to real world transfer only considers
monocular RGB images, though extending this method to
stereo and depth images would be straightforward. Finally,
the success rate reported in our experiments still has room
for improvement, and we expect further research in this area
will lead to even better results. The key insight from our
work comes from the comparison of the different methods:
we are not aiming to propose a novel grasping system, but
rather to study how incorporating simulated data can improve
an existing one.
ACKNOWLEDGMENTS
The authors thank John-Michael Burke for overseeing the
robot operations. The authors also thank Erwin Coumans,
Ethan Holly, Dmitry Kalashnikov, Deirdre Quillen, and Ian
Wilkes for contributions to the development of our grasping
system and supporting infrastructure.
REFERENCES
[1] B. Siciliano and O. Khatib, Springer Handbook of Robotics. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2007.
[2] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp
synthesis-a survey,” IEEE Transactions on Robotics, 2014.
[3] D. Kappler, J. Bohg, and S. Schaal, “Leveraging Big Data for Grasp
Planning,” in ICRA, 2015.
[4] U. Viereck, A. t. Pas, K. Saenko, and R. Platt, “Learning a visuomotor
controller for real world robotic grasping using easily simulated depth
images,” arxiv:1706.04652, 2017.
[5] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp
from 50k tries and 700 robot hours,” in ICRA, 2016.
[6] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning
Hand-Eye Coordination for Robotic Grasping with Deep Learning and
Large-Scale Data Collection,” IJRR, 2016.
[7] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A.
Ojea, and K. Goldberg, “Dex-Net 2.0: Deep Learning to Plan Robust
Grasps with Synthetic Point Clouds and Analytic Grasp Metrics,” in
RSS, 2017.
[8] I. Lenz, H. Lee, and A. Saxena, “Deep Learning for Detecting Robotic
Grasps,” IJRR, 2015.
[9] A. Bicchi, “On the Closure Properties of Robotic Grasping,” IJRR,
1995.
[10] A. Rodriguez, M. T. Mason, and S. Ferry, “From Caging to Grasping,”
IJRR, 2012.
[11] A. Saxena, J. Driemeyer, and A. Y. Ng, “Robotic Grasping of Novel
Objects using Vision,” IJRR, 2008.
[12] M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt, “High precision
grasp pose detection in dense clutter,” in IROS, 2016, pp. 598–605.
[13] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel,
“Domain Randomization for Transferring Deep Neural Networks from
Simulation to the Real World,” arxiv:1703.06907, 2017.
[14] S. James, A. J. Davison, and E. Johns, “Transferring End-to-End
Visuomotor Control from Simulation to Real World for a Multi-Stage
Task,” arxiv:1707.02267, 2017.
[15] F. Sadeghi and S. Levine, “CAD2RL: Real single-image flight without
a single real image.” arxiv:1611.04201, 2016.
[16] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa, “Visual domain
adaptation: A survey of recent advances,” IEEE Signal Processing
Magazine, vol. 32, no. 3, pp. 53–69, 2015.
[17] G. Csurka, “Domain adaptation for visual applications: A comprehensive survey,” arxiv:1702.05374, 2017.
[18] B. Sun, J. Feng, and K. Saenko, “Return of frustratingly easy domain
adaptation,” in AAAI, 2016.
[19] B. Gong, Y. Shi, F. Sha, and K. Grauman, “Geodesic flow kernel for
unsupervised domain adaptation,” in CVPR, 2012.
[20] R. Caseiro, J. F. Henriques, P. Martins, and J. Batista, “Beyond
the shortest path: Unsupervised Domain Adaptation by Sampling
Subspaces Along the Spline Flow,” in CVPR, 2015.
[21] R. Gopalan, R. Li, and R. Chellappa, “Domain Adaptation for Object
Recognition: An Unsupervised Approach,” in ICCV, 2011.
[22] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training
of neural networks,” JMLR, 2016.
[23] M. Long and J. Wang, “Learning transferable features with deep
adaptation networks,” ICML, 2015.
[24] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan,
“Domain separation networks,” in NIPS, 2016.
[25] Y. Taigman, A. Polyak, and L. Wolf, “Unsupervised cross-domain
image generation,” in ICLR, 2017.
[26] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan,
“Unsupervised pixel-level domain adaptation with generative adversarial neural networks,” in CVPR, 2017.
[27] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and
R. Webb, “Learning from simulated and unsupervised images through
adversarial training,” in CVPR, 2017.
[28] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,” in
ICCV, 2017.
[29] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,”
in NIPS, 2014.
[30] E. Coumans and Y. Bai, “pybullet, a python module for physics simulation in robotics, games and machine learning,” http://pybullet.org/,
2016–2017.
[31] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q. Huang,
Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu,
“ShapeNet: An Information-Rich 3D Model Repository,” CoRR, 2015.
[32] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Network
Training by Reducing Internal Covariate Shift,” in ICML, 2015.
[33] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional
Networks for Biomedical Image Segmentation,” in MICCAI, 2015.
[34] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization:
The missing ingredient for fast stylization,” arxiv:1607.08022, 2016.
[35] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-Image Transla-
tion with Conditional Adversarial Networks,” arxiv:1611.07004, 2016.
[36] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley, “Least
Squares Generative Adversarial Networks,” arxiv:1611.04076, 2016.
[37] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual Losses for Real-Time
Style Transfer and Super-Resolution,” in ECCV. Springer, 2016.
[38] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell,
J. Tobin, P. Abbeel, and W. Zaremba, “Transfer from simulation to real world through learning deep inverse dynamics model,”
arXiv:1610.03518, 2016.
[39] A. Rajeswaran, S. Ghotra, S. Levine, and B. Ravindran, “Epopt:
Learning robust neural network policies using model ensembles,”
arxiv:1610.01283, 2016.
[40] W. Yu, C. K. Liu, and G. Turk, “Preparing for the unknown: Learning a
universal policy with online system identification,” arXiv:1702.02453,
2017.
| 1 |
Abstracting an operational semantics to finite
automata
Nadezhda Baklanova, Wilmer Ricciotti, Jan-Georg Smaus, Martin Strecker
arXiv:1409.7841v1 [] 27 Sep 2014
IRIT (Institut de Recherche en Informatique de Toulouse)
Université de Toulouse, France
firstname.lastname @irit.fr ⋆,⋆⋆
Abstract. There is an apparent similarity between the descriptions of
small-step operational semantics of imperative programs and the semantics of finite automata, so defining an abstraction mapping from semantics to automata and proving a simulation property seems to be easy.
This paper aims at identifying the reasons why simple proofs break,
among them artifacts in the semantics that lead to stuttering steps in
the simulation. We then present a semantics based on the zipper data
structure, with a direct interpretation of evaluation as navigation in the
syntax tree. The abstraction function is then defined by equivalence class
construction.
Keywords: Programming language semantics; Abstraction; Finite Automata; Formal Methods; Verification
1 Introduction
Among the formalisms employed to describe the semantics of transition systems,
two particularly popular choices are abstract machines and structural operational semantics (SOS). Abstract machines are widely used for modeling and
verifying dynamic systems, e.g. finite automata, Büchi automata or timed automata [9,4,1]. An abstract machine can be represented as a directed graph
with transition semantics between nodes. The transition semantics is defined by
moving a pointer to a current node. Automata are a popular tool for modeling
dynamic systems due to the simplicity of the verification of automata systems,
which can be carried out in a fully automated way, something that is not generally possible for Turing-complete systems.
This kind of semantics is often extended by adding a background state composed of a set of variables with their values: this is the case of timed automata,
which use background clock variables [2]. The Uppaal model checker for timed
automata extends the notion of background state even further by adding integer and Boolean variables to the state [7] which, however, do not increase the
⋆
⋆⋆
N. Baklanova and M. Strecker were partially supported by the project Verisync
(ANR-10-BLAN-0310).
W. Ricciotti and J.-G. Smaus are supported by the project Ajitprop of the Fondation Airbus.
computational power of such timed automata but make them more convenient
to use.
Another formalism for modeling transition systems is structural semantics
(“small-step”, contrary to “big-step” semantics which is much easier to handle
but which is inappropriate for a concurrent setting), which uses a set of reduction
rules for simplifying a program expression. It has been described in detail in [16]
and used, for example, for the Jinja project developing a formal model of the
Java language [10]. An appropriate semantic rule for reduction is selected based
on the expression pattern and on values of some variables in a state. As a result
of reduction the expression and the state are updated.
s′ = s(v ↦ eval expr s)
─────────────────────────────────  [Assignment]
(Assign v expr, s) → (Unit, s′)
This kind of rules is intuitive; however, the proofs involving them require
induction over the expression structure. A different approach to writing a structural semantics was described in [3,12] for the CMinor language. It uses a notion
of continuation which represents an expression as a control stack and deals with
separate parts of the control stack consecutively.
(Seq e1 e2 · κ, s) → (e1 · e2 · κ, s)
(Empty · κ, s) → (κ, s)
Here the “·” operator designates concatenation of control stacks. The semantics of continuations does not need induction over the expression, something
which makes proofs easier; however, it requires more auxiliary steps for maintaining the control stack, which do not have a direct correspondence in the modeled
language.
For modeling non-local transfer of control, Krebbers and Wiedijk [11] present
a semantics using (non-recursive) “statement contexts”. These are combined
with the above-mentioned continuation stacks. The resulting semantics is situated mid-way between [3] and the semantics proposed below.
The present paper describes an approach to translation from structural operational semantics to finite automata extended with background state. All the
considered automata are an extension of Büchi automata with background state,
i.e. they have a finite number of nodes and edges but can produce an infinite
trace. The reason of our interest in abstracting from structural semantics to
Büchi automata is our work in progress [6]. We are working on a static analysis
algorithm for finding possible resource sharing conflicts in multithreaded Java
programs. For this purpose we annotate Java programs with timing information
and then translate them to a network of timed automata which is later model
checked. The whole translation is formally verified. One of the steps of the translation procedure includes switching from structural operational semantics of a
Java-like language to automata semantics. During this step we discovered some
problems which we will describe in the next section. The solutions we propose
extend well beyond the problem of abstracting a structured language to an automaton. They can also be used for compiler verification, which is usually cluttered
up with arithmetic address calculations that can be avoided in our approach.
The contents of the paper have been entirely formalized in the Isabelle proof
assistant [14]. We have not insisted on any Isabelle-specific features, therefore
this formalization can be rewritten using other proof assistants. The full Isabelle
formal development can be found on the web [5].
2 Problem Statement
We have identified the following as the main problems when trying to prove the
correctness of the translation between a programming language semantics and
its abstraction to automata:
1. Preservation of execution context: an abstract machine always sees all the
available nodes while a reduced expression loses the information about previous reductions.
2. Semantic artifacts: some reduction rules are necessary for the functionality of
the semantics, but may be missing in the modeled language. Additionally, the
rules can produce expressions which do not occur in the original language.
These problems occur independently of variations in the presentation of semantic rules [16] adopted in the literature, such as [10] (recursive evaluation of
sub-statements) or [3,12] (continuation-style).
We will describe these two problems in detail, and later our approach to
their solution, in the context of a minimalistic programming language which only
manipulates Boolean values (a Null value is also added to account for errors):
datatype val = Bool bool | Null
The language can be extended in a rather straightforward way to more complex expressions. In this language, expressions are either values or variables:
datatype expr = Val val | Var vname
The statements are those of a small imperative language (similarly to [13]):
datatype stmt =
Empty
— no-op
| Assign vname val
— assignment: var := val
| Seq stmt stmt
— sequence: c1 ; c2
| Cond expr stmt stmt — conditional: if e then c1 else c2
| While expr stmt
— loop: while e do c
2.1 Preservation of execution context
Problem 1 concerns the loss of an execution context through expression reductions which is a design feature of structural semantics. Let us consider a simple
example.
Assume we have a structural semantics for our minimal imperative language
(some rules of a traditional presentation are shown in Figure 1): we want to
translate a program written in this language into an abstract machine. Assume
that the states of variable values have the same representation in the two systems:
this means we only need to translate the program expression into a directed
graph with different nodes corresponding to different expressions obtained by
reductions of the initial program expression.
On the abstract machine level the Assign statements would be represented
as two-state automata, and the Cond as a node with two outgoing edges directed
to the automata for the bodies of its branches.
Consider a small program in this language Cond bexp (Assign a 5) Empty
and its execution flow.
Cond bexp (Assign a 5) Empty  →  Assign a 5  →(a := 5)→  Empty
Cond bexp (Assign a 5) Empty  →  Empty
The execution can select either of the two branches depending on the bexp
value. There are two different Empty expressions appearing as results of two
different reductions. The corresponding abstract machine would be a natural
graph representation for a condition statement with two branches (Figure 2).
During the simple generation of an abstract machine from a program expression the two Empty statements cannot be distinguished although they should be
mapped into two different nodes in the graph. We need to add more information
about the context into the translation, and this can be done in different ways.
A straightforward solution would be to add some information in order to
distinguish between the two Empty expressions. If we add unique identifiers
to each subexpression of the program, they will allow us to know exactly which
subexpression we are translating (Figure 3). The advantage of this approach is
its simplicity; however, it requires additional functions and proofs for identifier
management.
Another solution for the problem, proposed in this paper, involves the use of a special data structure to keep the context of the translation. There are known examples of translations from subexpression-based semantics [10] and continuation-based semantics [12] to abstract machines. However, none of these translations addresses the problem of context preservation during the translation.
s′ = s(v ↦ eval expr s)
────────────────────────────────  [Assign]
(Assign v expr, s) → (Empty, s′)

eval bexp s = True
────────────────────────────────  [CondT]
(Cond bexp e1 e2, s) → (e1, s)

eval bexp s = False
────────────────────────────────  [CondF]
(Cond bexp e1 e2, s) → (e2, s)

Fig. 1: Semantic rules for the minimal imperative language.
Fig. 2: The execution flow and the corresponding abstract machine for the program Cond bexp (Assign a 5) Empty.

Fig. 3: The execution flow and the corresponding abstract machine for the program with subexpression identifiers Cond n1 bexp (Assign n2 a 5) (Empty n3).
2.2 Semantic artifacts
The second problem appears because of the double functionality of the Empty
expression: it is used to define an empty operator which does nothing as well as
the final expression for reductions which cannot be further reduced. The typical
semantic rules for a sequence of expressions are shown in Figure 4.
(e1, s) → (e1′, s′)
──────────────────────────────────  [Seq1]
(Seq e1 e2, s) → (Seq e1′ e2, s′)

(Seq Empty e2, s) → (e2, s)   [Seq2]

Fig. 4: Semantic rules for the sequence of two expressions.
Here the Empty expression means that the first expression in the sequence
has been reduced up to the end, and we can start reducing the second expression.
However, any imperative language translated to an assembly language would not
have an additional operator between the two pieces of code corresponding to the
first and the second expressions. The rule Seq2 must be marked as a silent
transition when translated to an automaton, or the semantic rules have to be
changed.
3 Zipper-based semantics of imperative programs
3.1 The zipper data structure
Our plan is to propose an alternative technique to formalize operational semantics that will make it easier to preserve the execution context during the
translation to an automata-based formalism. Our technique is built around a
zipper data structure, whose purpose is to identify a location in a tree (in our
case: a stmt ) by the subtree below the location and the rest of the tree (in our
case: of type stmt-path). In order to allow for an easy navigation, the rest of the
tree is turned inside-out so that it is possible to reach the root of the tree by
following the backwards pointers. The following definition is a straightforward
adaptation of the zipper for binary trees discussed in [8] to the stmt data type:
datatype stmt-path =
  PTop
| PSeqLeft stmt-path stmt
| PSeqRight stmt stmt-path
| PCondLeft expr stmt-path stmt
| PCondRight expr stmt stmt-path
| PWhile expr stmt-path
Here, PTop represents the root of the original tree, and for each constructor
of stmt and each of its sub-stmt s, there is a “hole” of type stmt-path where a
subtree can be fitted in. A location in a tree is then a combination of a stmt and
a stmt-path:
datatype stmt-location = Loc stmt stmt-path
Given a location in a tree, the function reconstruct reconstructs the original
tree reconstruct :: stmt ⇒ stmt-path ⇒ stmt, and reconstruct-loc (Loc c sp) =
reconstruct c sp does the same for a location.
fun reconstruct :: stmt ⇒ stmt-path ⇒ stmt where
reconstruct c PTop = c
| reconstruct c (PSeqLeft sp c2 ) = reconstruct (Seq c c2 ) sp
| reconstruct c (PSeqRight c1 sp) = reconstruct (Seq c1 c) sp
| reconstruct c (PCondLeft e sp c2 ) = reconstruct (Cond e c c2 ) sp
| reconstruct c (PCondRight e c1 sp) = reconstruct (Cond e c1 c) sp
| reconstruct c (PWhile e sp) = reconstruct (While e c) sp
fun reconstruct-loc :: stmt-location ⇒ stmt where
reconstruct-loc (Loc c sp) = reconstruct c sp
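For readers less familiar with Isabelle, the following is a hedged Python transliteration of the zipper (statement constructors, paths and reconstruct); the class and field names are ours, chosen to mirror the definitions above rather than taken from the formalization.

# Python transliteration of the zipper; frozen dataclasses so locations are hashable.
from dataclasses import dataclass
from typing import Union

# --- statements ---------------------------------------------------------------
@dataclass(frozen=True)
class Empty: pass
@dataclass(frozen=True)
class Assign: var: str; val: object
@dataclass(frozen=True)
class Seq: c1: "Stmt"; c2: "Stmt"
@dataclass(frozen=True)
class Cond: e: object; c1: "Stmt"; c2: "Stmt"
@dataclass(frozen=True)
class While: e: object; c: "Stmt"
Stmt = Union[Empty, Assign, Seq, Cond, While]

# --- paths (the "rest of the tree", turned inside-out) -------------------------
@dataclass(frozen=True)
class PTop: pass
@dataclass(frozen=True)
class PSeqLeft: sp: "Path"; c2: Stmt
@dataclass(frozen=True)
class PSeqRight: c1: Stmt; sp: "Path"
@dataclass(frozen=True)
class PCondLeft: e: object; sp: "Path"; c2: Stmt
@dataclass(frozen=True)
class PCondRight: e: object; c1: Stmt; sp: "Path"
@dataclass(frozen=True)
class PWhile: e: object; sp: "Path"
Path = Union[PTop, PSeqLeft, PSeqRight, PCondLeft, PCondRight, PWhile]

def reconstruct(c: Stmt, sp: Path) -> Stmt:
    # Rebuild the complete syntax tree from a location, mirroring the Isabelle function.
    if isinstance(sp, PTop):       return c
    if isinstance(sp, PSeqLeft):   return reconstruct(Seq(c, sp.c2), sp.sp)
    if isinstance(sp, PSeqRight):  return reconstruct(Seq(sp.c1, c), sp.sp)
    if isinstance(sp, PCondLeft):  return reconstruct(Cond(sp.e, c, sp.c2), sp.sp)
    if isinstance(sp, PCondRight): return reconstruct(Cond(sp.e, sp.c1, c), sp.sp)
    if isinstance(sp, PWhile):     return reconstruct(While(sp.e, c), sp.sp)
    raise TypeError(sp)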
3.2 Semantics
Our semantics is a small-step operational semantics describing the effect of the
execution of a program on a certain program state. For each variable, the state
yields Some value associated with the variable, or None if the variable is unassigned. More formally, the state is a mapping vname ⇒ val option. Defining the
evaluation of an expression in a state is then standard.
Before commenting on the rules of our semantics, let us discuss which kind
of structure we are manipulating. The semantics essentially consists in moving
around a pointer within the syntax tree. As explained in Section 3.1, a position
in the syntax tree is given by a stmt-location. However, during the traversal of
the syntax tree, we visit each position at least twice (and possibly several times,
for example in a loop): before executing the corresponding statement, and after
finishing the execution. We therefore add a Boolean flag, where True is a marker
for “before” and False for “after” execution.
Fig. 5: Example of execution of the small-step semantics (the marker descends from ↓While to ↓Seq to ↓x := T, switches to x := T↑ after execution, and then moves on to ↓y := F).
As an example, consider the execution sequence depicted in Figure 5 (with
assignments written in a more readable concrete syntax), consisting of the initial steps of the execution of the program While (e, Seq(x := T , y := F )).
The before (resp. after) marker is indicated by a downward arrow before (resp.
an upward arrow behind) the current statement. The condition of the loop is
omitted because it is irrelevant here. The middle configuration would be coded
as ((Loc (x := T ) (PSeqLeft (PWhile e PTop) (y := F ))), True).
Altogether, we obtain a syntactic configuration (synt-config) which combines
the location and the Boolean flag. The semantic configuration (sem-config) manipulated by the semantics adjoins the state, as defined previously.
type-synonym synt-config = stmt-location × bool
type-synonym sem-config = synt-config × state
The rules of the small-step semantics of Figure 7 fall into two categories:
before execution of a statement s (of the form ((l , True), s)) and after execution
(of the form ((l , False), s)); there is only one rule of this latter kind: SFalse.
fun next-loc :: stmt ⇒ stmt-path ⇒ (stmt-location × bool ) where
next-loc c PTop = (Loc c PTop, False)
| next-loc c (PSeqLeft sp c 2 ) = (Loc c 2 (PSeqRight c sp), True)
| next-loc c (PSeqRight c 1 sp) = (Loc (Seq c 1 c) sp, False)
| next-loc c (PCondLeft e sp c 2 ) = (Loc (Cond e c c 2 ) sp, False)
| next-loc c (PCondRight e c 1 sp) = (Loc (Cond e c 1 c) sp, False)
| next-loc c (PWhile e sp) = (Loc (While e c) sp, True)
Fig. 6: Finding the next location
Let us comment on the rules in detail:
– SEmpty executes the Empty statement just by swapping the Boolean flag.
– SAssign is similar, but it also updates the state for the assigned variable.
– SSeq moves the pointer to the substatement c 1 , pushing the substatement
c 2 as continuation to the statement path.
– SCondT and SCondF move to the then- respectively else- branch of the
conditional, depending on the value of the condition.
– SWhileT moves to the body of the loop.
– SWhileF declares the execution of the loop as terminated, by setting the
Boolean flag to False.
– SFalse comes into play when execution of the current statement is finished.
We then move to the next location, provided we have not already reached
the root of the syntax tree and the whole program terminates.
The move to the next relevant location is accomplished by function next-loc
(Figure 6) which intuitively works as follows: upon conclusion of the first substatement in a sequence, we move to the second substatement. When finishing
the body of a loop, we move back to the beginning of the loop. In all other cases,
we move up the syntax tree, waiting for rule SFalse to relaunch the function.
((Loc Empty sp, True), s) → ((Loc Empty sp, False), s)   [SEmpty]

((Loc (Assign vr vl) sp, True), s) → ((Loc (Assign vr vl) sp, False), s(vr ↦ vl))   [SAssign]

((Loc (Seq c1 c2) sp, True), s) → ((Loc c1 (PSeqLeft sp c2), True), s)   [SSeq]

eval e s = Bool True
────────────────────────────────────────────────────────────────────────  [SCondT]
((Loc (Cond e c1 c2) sp, True), s) → ((Loc c1 (PCondLeft e sp c2), True), s)

eval e s = Bool False
────────────────────────────────────────────────────────────────────────  [SCondF]
((Loc (Cond e c1 c2) sp, True), s) → ((Loc c2 (PCondRight e c1 sp), True), s)

eval e s = Bool True
────────────────────────────────────────────────────────────────────────  [SWhileT]
((Loc (While e c) sp, True), s) → ((Loc c (PWhile e sp), True), s)

eval e s = Bool False
────────────────────────────────────────────────────────────────────────  [SWhileF]
((Loc (While e c) sp, True), s) → ((Loc (While e c) sp, False), s)

sp ≠ PTop
────────────────────────────────────────────────────────────────────────  [SFalse]
((Loc c sp, False), s) → (next-loc c sp, s)

Fig. 7: Small-step operational semantics
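Continuing the Python transliteration started in Section 3.1, the following sketch implements next-loc and one transition of the rules of Fig. 7; eval_expr is an assumed helper evaluating an expression to a Boolean in the given state, and the encoding of configurations as nested tuples is our own.

def next_loc(c, sp):
    # Python version of next-loc (Fig. 6): returns a configuration (location, flag).
    if isinstance(sp, PTop):       return ((c, PTop()), False)
    if isinstance(sp, PSeqLeft):   return ((sp.c2, PSeqRight(c, sp.sp)), True)
    if isinstance(sp, PSeqRight):  return ((Seq(sp.c1, c), sp.sp), False)
    if isinstance(sp, PCondLeft):  return ((Cond(sp.e, c, sp.c2), sp.sp), False)
    if isinstance(sp, PCondRight): return ((Cond(sp.e, sp.c1, c), sp.sp), False)
    if isinstance(sp, PWhile):     return ((While(sp.e, c), sp.sp), True)
    raise TypeError(sp)

def step(config, state, eval_expr):
    # One transition of the rules of Fig. 7; returns None when no rule applies (program done).
    (c, sp), before = config
    if before:
        if isinstance(c, Empty):                                  # SEmpty
            return ((c, sp), False), state
        if isinstance(c, Assign):                                 # SAssign
            return ((c, sp), False), {**state, c.var: c.val}
        if isinstance(c, Seq):                                    # SSeq
            return ((c.c1, PSeqLeft(sp, c.c2)), True), state
        if isinstance(c, Cond):                                   # SCondT / SCondF
            loc = (c.c1, PCondLeft(c.e, sp, c.c2)) if eval_expr(c.e, state) \
                  else (c.c2, PCondRight(c.e, c.c1, sp))
            return (loc, True), state
        if isinstance(c, While):                                  # SWhileT / SWhileF
            if eval_expr(c.e, state):
                return ((c.c, PWhile(c.e, sp)), True), state
            return ((c, sp), False), state
    elif not isinstance(sp, PTop):                                # SFalse
        return next_loc(c, sp), state
    return None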
4 Target language: Automata
4.1 Syntax
As usual, our automata are a collection of nodes and edges, with a distinguished
initial state. In this general definition, we will keep the node type ′n abstract.
It will later be instantiated to synt-config. An edge connects two nodes; moving
along an edge may trigger an assignment to a variable (AssAct ), or have no
effect at all (NoAct ).
An automaton ′n ta is a record consisting of a set of nodes, a set of edges and
an initial node init-s. An edge has a source node, an action and a destination
node dest. Components of a record are written between (| ... |).
4.2 Semantics
An automaton state is a node, together with a state as in Section 3.2.
type-synonym ′n ta-state = ′n ∗ state
Executing a step of an automaton in an automaton state (l , s) consists
of selecting an edge starting in node l, moving to the target of the edge and
executing its action. Automata are non-deterministic; in this simplified model,
we have no guards for selecting edges.
l = source e    e ∈ set (edges aut)    l′ = dest e    s′ = action-effect (action e) s
──────────────────────────────────────────────────────────────────────────────  [Action]
aut ⊢ (l, s) → (l′, s′)

5 Automata construction
The principle of abstracting a statement to an automaton is simple; the novelty
resides in the way the automaton is generated via the zipper structure: as nodes,
we choose the locations of the statements (with their Boolean flags), and as edges
all possible transitions of the semantics.
To make this precise, we need some auxiliary functions. We first define a
function all-locations of type stmt ⇒ stmt-path ⇒ stmt-location list which gathers all locations in a statement, and a function nodes-of-stmt-locations which
adds the Boolean flags.
As for the edges, the function synt-step-image yields all possible successor
configurations for a given syntactic configuration. This is of course an overapproximation of the behavior of the semantics, since some of the source tree
locations may be unreachable during execution.
fun synt-step-image :: synt-config ⇒ synt-config list where
synt-step-image (Loc Empty sp, True) = [(Loc Empty sp, False)]
| synt-step-image (Loc (Assign vr vl ) sp, True) = [(Loc (Assign vr vl ) sp, False)]
| synt-step-image (Loc (Seq c1 c2 ) sp, True) = [(Loc c1 (PSeqLeft sp c2 ), True)]
| synt-step-image (Loc (Cond e c1 c2 ) sp, True) =
[(Loc c1 (PCondLeft e sp c2 ), True), (Loc c2 (PCondRight e c1 sp), True)]
| synt-step-image (Loc (While e c) sp, True) =
[(Loc c (PWhile e sp), True), (Loc (While e c) sp, False)]
| synt-step-image (Loc c sp, False) = (if sp = PTop then [] else [next-loc c sp])
Together with the following definitions:
fun action-of-synt-config :: synt-config ⇒ action where
action-of-synt-config (Loc (Assign vn vl ) sp, True) = AssAct vn vl
| action-of-synt-config (Loc c sp, b) = NoAct
definition edge-of-synt-config :: synt-config ⇒ synt-config edge list where
edge-of-synt-config s =
map(λ t. (|source = s, action = action-of-synt-config s, dest = t|))(synt-step-image s)
definition edges-of-nodes :: synt-config list ⇒ synt-config edge list where
edges-of-nodes nds = concat (map edge-of-synt-config nds)
we can define the translation function from statements to automata:
fun stmt-to-ta :: stmt ⇒ synt-config ta where
stmt-to-ta c =
(let nds = nodes-of-stmt-locations (all-locations c PTop) in
(| nodes = nds, edges = edges-of-nodes nds, init-s = ((Loc c PTop), True) |))
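The construction can again be mirrored in the Python sketch: all_locations enumerates the locations of a statement, synt_step_image over-approximates the successors of a configuration, and stmt_to_ta assembles the automaton. The dictionary representation of the automaton and the "assign" action tag are our own simplifications.

def all_locations(c, sp):
    # Gather all locations of the statement c below the path sp (cf. all-locations).
    locs = [(c, sp)]
    if isinstance(c, Seq):
        locs += all_locations(c.c1, PSeqLeft(sp, c.c2))
        locs += all_locations(c.c2, PSeqRight(c.c1, sp))
    elif isinstance(c, Cond):
        locs += all_locations(c.c1, PCondLeft(c.e, sp, c.c2))
        locs += all_locations(c.c2, PCondRight(c.e, c.c1, sp))
    elif isinstance(c, While):
        locs += all_locations(c.c, PWhile(c.e, sp))
    return locs

def synt_step_image(config):
    # All possible successor configurations (an over-approximation of the semantics).
    (c, sp), before = config
    if not before:                                     # finished: move on, unless at the root
        return [] if isinstance(sp, PTop) else [next_loc(c, sp)]
    if isinstance(c, (Empty, Assign)):
        return [((c, sp), False)]
    if isinstance(c, Seq):
        return [((c.c1, PSeqLeft(sp, c.c2)), True)]
    if isinstance(c, Cond):
        return [((c.c1, PCondLeft(c.e, sp, c.c2)), True),
                ((c.c2, PCondRight(c.e, c.c1, sp)), True)]
    if isinstance(c, While):
        return [((c.c, PWhile(c.e, sp)), True), ((c, sp), False)]

def stmt_to_ta(c):
    # Nodes are configurations (location, flag); edges carry an assignment action or None.
    nodes = [(loc, flag) for loc in all_locations(c, PTop()) for flag in (True, False)]
    def action_of(node):
        (stmt, _sp), before = node
        return ("assign", stmt.var, stmt.val) if before and isinstance(stmt, Assign) else None
    edges = [(src, action_of(src), dst) for src in nodes for dst in synt_step_image(src)]
    return {"nodes": nodes, "edges": edges, "init": ((c, PTop()), True)}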
6 Simulation Property
We recall that the nodes of the automaton generated by stmt-to-ta are labeled by
configurations (location, Boolean flag) of the syntax tree. The simulation lemma
(Lemma 1) holds for automata with appropriate closure properties: a successor
configuration wrt. a transition of the semantics is also a label of the automaton
(nodes-closed ), and analogously for edges (edges-closed ) or both nodes and edges
(synt-step-image-closed ).
The simulation statement is a typical commuting-diagram property: a step of
the program semantics can be simulated by a step of the automaton semantics,
for corresponding program and automata states. For this correspondence, we use
the notation ≈, even though it is just plain syntactic equality in our case.
Lemma 1 (Simulation property).
Assume that synt-step-image-closed aut and (((lc, b), s) ≈ ((lca, ba), sa)). If
((lc, b), s) → ((lc ′, b ′), s ′), then there exist lca ′, ba ′, sa ′ such that (lca ′, ba ′)
∈ set (nodes aut ) and the automaton performs the same transition: aut ⊢ ((lca,
ba), sa) → ((lca ′, ba ′), sa ′) and ((lc ′, b ′), s ′) ≈ ((lca ′, ba ′), sa ′).
The proof is a simple induction over the transition relation of the program semantics and is almost fully automatic in the Isabelle proof assistant.
We now want to get rid of the precondition synt-step-image-closed aut in
Lemma 1. The first subcase (edge closure), is easy to prove. Node closure is
more difficult and requires the following key lemma:
Lemma 2.
If lc ∈ set (all-locations c PTop) then set (map fst (synt-step-image (lc, b)))
⊆ set (all-locations c PTop).
With this, we obtain the desired
Lemma 3 (Closure of automaton). synt-step-image-closed (stmt-to-ta c)
For the proofs, see [5].
Let us combine the previous results and write them more succinctly, by using
the notation →∗ for the reflexive-transitive closure for the transition relations
of the small-step semantics and the automaton. Whenever a state is reachable
by executing a program c in its initial configuration, then a corresponding (≈)
state is reachable by running the automaton generated with function stmt-to-ta:
Theorem 1.
If ((Loc c PTop, True), s) →∗ (cf ′, s ′) then ∃ cfa ′ sa ′. stmt-to-ta c ⊢ (init-s
(stmt-to-ta c), s) →∗ (cfa ′, sa ′) ∧ (cf ′, s ′) ≈ (cfa ′, sa ′).
Obviously, the initial configuration of the semantics and the automaton are
in the simulation relation ≈, and for the inductive step, we use Lemma 1.
7 Removal of silent transitions
Our technique for converting the operational semantics of a program to a finite
automaton generally results in automata containing a large number of silent
transitions. Although harmless, such transitions are only a technical device resulting from the structured nature of operational semantics: thus, they lack any
usefulness in the context of an automaton.
Rather than producing immediately an automaton free of silent transitions,
it is possible (and also quite convenient) to remove them as a final operation.
This is obtained by means of a τ -closure algorithm, where τ is the label for silent
transitions generally used in the literature (in our case, τ = NoAct ).
τ -closure amounts to computing, for each node in the automaton, the set of
those nodes which can be reached from it by taking any finite number of silent
transitions. The following tauclose-step computes the set of the nodes of an automaton M that can be reached from a node s after taking one silent transition.
The argument x is used as an accumulator when iterating the operation several
times, and should be ∅ initially:
definition tauclose-step :: ′n ta ⇒ ′n ⇒ ′n set ⇒ ′n set where
tauclose-step M s x = {s} ∪ x ∪ { n ∈ set (nodes M ).
∃ e ∈ set (edges M ). source e ∈ x ∧ action e = NoAct ∧ dest e = n}
The proof that tauclose-step is monotonically increasing (tauclose-step M s
x ⊆ tauclose-step M s y for all x , y such that x ⊆ y) is trivial.
lemma mono-tauclose-step : mono (tauclose-step M s)
Then, the operation tauclose is defined as the least fixpoint of the monotonic
operator:
definition tauclose :: ′n ta ⇒ ′n ⇒ ′n set where
tauclose M n = lfp (tauclose-step M n)
To obtain a τ -closed automaton, we simply map the nodes of the input
automaton to their τ -closed counterpart (and similarly for the initial node).
To compute the set of edges, we consider the rationale behind the definition of
the τ -closure of an automaton. Informally, being in a certain node or in any
other node reachable from it only by means of silent transitions, is equivalent.
When we compute the τ -closure of a certain node, we are essentially identifying
all the nodes in it: thus the edges with source tauclose M s1 should be those
that leave any of the nodes in the τ -closure. To make things more formal, let us introduce the notation x −α→ y for edges going from node x to node y labeled with action α: using this notation, the edges of the τ -closed automaton are taken to be those of the form tauclose M s1 −α→ tauclose M s2, such that for some s ∈ tauclose M s1, s −α→ s2 is a non-silent transition in the input automaton.
Fig. 8: A simple automaton and its τ -closure.
definition tauclose-nodes :: ′n ta ⇒ ′n set list where
tauclose-nodes M = List.map (tauclose M ) (nodes M )
definition tauclose-init-s :: ′n ta ⇒ ′n set where
tauclose-init-s M = tauclose M (init-s M )
definition acts-of-ta :: ′n ta ⇒ action list where
acts-of-ta M = List.map (λe.(action e)) (edges M )
definition possible-tau-edges :: ′n ta ⇒ ′n set edge list where
possible-tau-edges M =
List.map (λ(s,a,t).(|source = tauclose M s,action = a,dest = tauclose M t|))
(List.product (nodes M ) (List.product (acts-of-ta M ) (nodes M )))
definition tauclose-edges :: ′n ta ⇒ ′n set edge list where
tauclose-edges M = List.filter
(λe.(∃ s1 a s2 .(e = (|source = tauclose M s1 ,action = a,dest = tauclose M s2 |) ∧
a 6= NoAct ∧
(∃ s ∈ tauclose M s1 .(|source = s,action = a,dest = s2 |) ∈ set (edges M )))))
(possible-tau-edges M )
definition tauclose-ta :: ′n ta ⇒ ′n set ta where
tauclose-ta M = (|nodes = tauclose-nodes M ,
edges = tauclose-edges M ,
init-s = tauclose-init-s M |)
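In Python, the same construction amounts to an iterated reachability computation; the sketch below assumes the automaton is a dictionary with hashable node labels (as produced by the stmt_to_ta sketch above) and edges given as (source, action, destination) triples in which None stands for the silent action NoAct.

def tauclose(ta, s):
    # Least fixpoint of tauclose-step: all nodes reachable from s via silent edges only.
    closure = {s}
    while True:
        bigger = closure | {dst for (src, act, dst) in ta["edges"]
                            if src in closure and act is None}
        if bigger == closure:
            return frozenset(closure)
        closure = bigger

def tauclose_ta(ta):
    # Map every node to its τ-closure; keep only non-silent edges, re-sourced to any
    # closed node that contains a node from which the edge leaves.
    nodes = [tauclose(ta, n) for n in ta["nodes"]]
    edges = {(cls, act, tauclose(ta, dst))
             for cls in nodes
             for (src, act, dst) in ta["edges"]
             if act is not None and src in cls}
    return {"nodes": nodes, "edges": list(edges), "init": tauclose(ta, ta["init"])}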
The automaton obtained by τ -closure (see example in Figure 8) has no silent
edges any more: when a silent transition is taken in the input automaton, the
corresponding operation in its τ -closure is to stay in the same node; when a
non-silent transition s −α→ s′ is taken in the input automaton, a transition with
the same label and target is taken in its τ -closure: however the source of this
transition does not have to be tauclose s, but can be the τ -closure of any node
from which s can be reached by taking silent transitions.
This correspondence between an automaton and its τ -closure, is expressed
by the following simulation:
definition tau-sim :: ′n1 ta ⇒ ′n2 ta ⇒ bool where
tau-sim M1 M2 =
(∃ R. R (init-s M1 ) (init-s M2 ) ∧
(∀ s1 s2 . R s1 s2 −→
(∀ s1 ′ a. (|source = s1 ,action = a,dest = s1 ′|) ∈ set (edges M1 ) −→
(a = NoAct ∧ R s1 ′ s2 ) ∨
(∃ s2 ′.(|source = s2 ,action = a,dest = s2 ′|) ∈ set (edges M2 ) ∧ R s1 ′ s2 ′))))
In our case, we shall instantiate the type parameter ′n2 with ′n1 set and
take the relation R to be such that R s s ′ ⇐⇒ (s ∈ set (nodes M ) ∧ s ′ ∈ set
(nodes (tauclose-ta M )) ∧ s ∈ s ′).
We are able to prove the simulation for all well formed automata. An automaton is well formed (regular-ta) when its initial nodes and the sources and
targets of all its edges are in the set of its nodes.
definition regular-ta :: ′n ta ⇒ bool where
regular-ta M =
(init-s M ∈ set (nodes M ) ∧
(∀ e ∈ set (edges M ). source e ∈ set (nodes M ) ∧ dest e ∈ set (nodes M )))
Theorem 2 (simulation of τ -closure).
If regular-ta M then tau-sim M (tauclose-ta M ).
The proof follows from the definitions, proceeding by cases on the possible
actions.
As a final remark, it is worth noting that the definition of tauclose is not entirely satisfying, given that there exists no general method to compute a fixpoint
in a finite amount of time. In our case, however, the fixpoint can be computed
by iterating the tauclose-step function, since it is monotonically increasing with
a finite upper bound, namely the set of nodes of the input automaton. Thus, we
can define the following “computational” version of the τ -closure operation:
function tauclose-comp-aux :: ′n ta ⇒ ′n ⇒ ′n set ⇒ ′n set where
tauclose-step M s x = x =⇒
tauclose-comp-aux M s x = x
| tauclose-step M s x ≠ x =⇒
tauclose-comp-aux M s x = tauclose-comp-aux M s (tauclose-step M s x )
by (atomize-elim,auto)
(∗ termination proof omitted ∗)
termination proof
(relation measure (λ(M ,s,x ). length (filter (λv . (v ∉ x )) (s # nodes M ))),
simp, unfold measure-def )
fix M s x
assume hneq: tauclose-step M s x ≠ x
from hneq mono-tauclose-step have ∃ c. (c ∈ tauclose-step M s x ∧ c ∉ x )
by (unfold mono-def tauclose-step-def , auto)
from this obtain c where hcin: c ∈ tauclose-step M s x and hcnotin: c ∉ x by blast
have hmagic:
length [v ← s # nodes M . v ∉ tauclose-step M s x ]
< length [v ← s # nodes M . v ∉ x ] =⇒
((M , s, tauclose-step M s x ), M , s, x )
∈ inv-image less-than (λ(M , s, x ). length [v ← s # nodes M . v ∉ x ])
by (simp)
from hneq have x ⊂ tauclose-step M s x by (unfold tauclose-step-def , auto)
moreover from hcin hcnotin have c ∈ set (s # nodes M ) by (unfold tauclose-step-def , auto)
moreover note hcin hcnotin
ultimately have
length [v ← s # nodes M . v ∉ tauclose-step M s x ]
< length [v ← s # nodes M . v ∉ x ]
by (rule-tac filter-subset, auto)
from this hmagic show
((M , s, tauclose-step M s x ), M , s, x )
∈ inv-image less-than (λ(M , s, x ). length [v ← s # nodes M . v ∉ x ])
by auto
qed
definition tauclose-comp :: ′n ta ⇒ ′n ⇒ ′n set where
tauclose-comp M s = tauclose-comp-aux M s {}
The function tauclose-comp-aux cannot be proved to be total automatically:
we provide such a proof based on the finite upper bound argument we have just
mentioned. As expected, we can show that tauclose and tauclose-comp compute
the same function.
lemma tauclose-comp-aux-sound :
assumes x ⊆ tauclose M s
shows tauclose-comp-aux M s x = tauclose M s
using assms
proof (induct M s x rule:tauclose-comp-aux .induct,unfold tauclose-def ,simp)
fix Ma sa xa
assume tauclose-step Ma sa xa = xa xa ⊆ lfp (tauclose-step Ma sa)
from this show xa = lfp (tauclose-step Ma sa) by (unfold lfp-def ,auto)
next
fix Ma sa xa
assume tauclose-step Ma sa xa ≠ xa
and ih:tauclose-step Ma sa xa ⊆ lfp (tauclose-step Ma sa) =⇒
tauclose-comp-aux Ma sa (tauclose-step Ma sa xa) =
lfp (tauclose-step Ma sa)
and xa ⊆ lfp (tauclose-step Ma sa)
from this show tauclose-comp-aux Ma sa xa = lfp (tauclose-step Ma sa)
proof (simp,rule-tac ih,simp)
assume xa ⊆ lfp (tauclose-step Ma sa)
from this show tauclose-step Ma sa xa ⊆ lfp (tauclose-step Ma sa)
by (subst lfp-unfold , unfold tauclose-step-def ,auto simp add :mono-tauclose-step)
qed
qed
lemma tauclose-comp-sound :
shows tauclose-comp M s = tauclose M s
by (unfold tauclose-comp-def , auto simp add : tauclose-comp-aux-sound )
Theorem 3.
tauclose-comp M s = tauclose M s
The proof is by functional induction on tauclose-comp-aux.
8 Conclusions
This paper has presented a new kind of small-step semantics for imperative
programming languages, based on the zipper data structure. Our primary aim is
to show that this semantics has decisive advantages for abstracting programming
language semantics to automata. Even if the generated automata have a great
number of silent transitions, these can be removed.
The playground of our formalizations is proof assistants, in which SOS has
become a well-established technique for presenting semantics of programming
languages. In principle, our technique could be adapted to other formalization
tools like rewriting-based ones [15].
We are currently in the process of adopting this semantics in a larger formalization from Java to Timed Automata [6]. As most constructs (zipper data
structure, mapping to automata) are generic, we think that this kind of semantics could prove useful for similar formalizations with other source languages.
The proofs (here carried out with the Isabelle proof assistant) have a pleasingly
high degree of automation that is in sharp contrast with the index calculations
that are usually required when naming automata states with numbers.
Renaming nodes from source tree locations to numbers is nevertheless easy
to carry out, see the code snippet provided on the web page [5] of this paper.
For these reasons, we think that the underlying ideas could also be useful in the
context of compiler verification, when converting a structured source program to
a flow graph with basic blocks, but before committing to numeric values of jump
targets.
References
1. Rajeev Alur, Costas Courcoubetis, and David L. Dill. Model-checking for real-time
systems. In LICS, pages 414–425. IEEE Computer Society, 1990.
2. Rajeev Alur and David L. Dill. A theory of timed automata. Theoretical Computer
Science, 126:183–235, 1994.
3. Andrew W. Appel and Sandrine Blazy. Separation logic for small-step cminor. In
Theorem Proving in Higher Order Logics, 20th int. conf. TPHOLS, pages 5–21.
Springer, 2007.
4. Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. MIT Press,
2008.
5. Nadezhda Baklanova, Wilmer Ricciotti, Jan-Georg Smaus, and Martin Strecker.
Abstracting an operational semantics to finite automata (formalization), 2014.
https://bitbucket.org/Martin_Strecker/abstracting_op_sem_to_automata.
6. Nadezhda Baklanova and Martin Strecker. Abstraction and verification of properties of a Real-Time Java. In Proc. ICTERI, volume 347 of Communications in
Computer and Information Science, pages 1–18. Springer, 2013.
7. Johan Bengtsson and Wang Yi. Timed automata: Semantics, algorithms and tools.
In Lectures on Concurrency and Petri Nets, volume 3098 of LNCS, pages 87–124.
Springer, 2004. 10.1007/978-3-540-27755-2.
8. Gérard Huet. Functional pearl: The zipper. Journal of Functional Programming,
7(5):549–554, September 1997.
9. Bakhadyr Khoussainov and Anil Nerode. Automata Theory and Its Applications.
Birkhauser Boston, 2001.
10. Gerwin Klein and Tobias Nipkow. A machine-checked model for a Java-like language, virtual machine, and compiler. ACM Trans. Program. Lang. Syst., 28:619–
695, July 2006.
11. Robbert Krebbers and Freek Wiedijk. Separation logic for non-local control flow
and block scope variables. In Frank Pfenning, editor, Foundations of Software
Science and Computation Structures, volume 7794 of Lecture Notes in Computer
Science, pages 257–272. Springer Berlin Heidelberg, 2013.
12. Xavier Leroy. A formally verified compiler back-end. Journal of Automated Reasoning, 43(4), 2009.
13. Tobias Nipkow and Gerwin Klein. Concrete Semantics. TUM, 2014.
14. Tobias Nipkow, Lawrence Paulson, and Markus Wenzel. Isabelle/HOL. A Proof
Assistant for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002.
15. Traian-Florin Serbanuta, Grigore Rosu, and José Meseguer. A rewriting logic
approach to operational semantics. Inf. Comput., 207(2):305–340, 2009.
16. Glynn Winskel. The Formal Semantics of Programming Languages: An Introduction. MIT Press, Cambridge, MA, USA, 1993.
arXiv:1512.01748v1 [cs.NA] 6 Dec 2015
Restricted Low-Rank Approximation via ADMM
Ying Zhang
[email protected]
December 8, 2015
Abstract
The matrix low-rank approximation problem with additional convex constraints can find many applications and
has been extensively studied before. However, this problem is shown to be nonconvex and NP-hard; most of the
existing solutions are heuristic and application-dependent.
In this paper, we show that, beyond its many existing applications
in the literature, this problem can be used to recover
a feasible solution for an SDP relaxation. With a suitable reformulation, it can be equivalently posed in a form
appropriate for the Alternating Direction Method of Multipliers
(ADMM) to solve. The two updates of ADMM include
the basic matrix low-rank approximation and projection
onto a convex set. Different from the general non-convex
problems, the sub-problems in each step of ADMM can
be solved exactly and efficiently in spite of their nonconvexity. Moreover, the algorithm will converge exponentially under proper conditions. The simulation results
confirm its superiority over existing solutions. We believe
that the results in this paper provide a useful tool for this
important problem and will help to extend the application
of ADMM to the non-convex regime.
Notations and Operators
Vectors and matrices are denoted by boldface lower and upper case letters respectively. The set of real and natural numbers are represented by R and N.
The set of vectors and matrices with proper sizes are denoted as Rn , Rm×n . The
vectors are by default column vectors and the ith entry of vector x is denoted
as xi . The entry of matrix A in the ith row and j th column is denoted as Ai,j .
The superscript (·)T stands for transpose.
$\|\cdot\|_p$ denotes the $p$-norm of a vector ($p \ge 1$), i.e.,
$$\|x\|_p = \Big(\sum_i |x_i|^p\Big)^{1/p},$$
and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, i.e.,
$$\|A\|_F = \Big(\sum_{i,j} A_{i,j}^2\Big)^{1/2}.$$
To be consistent with the Matlab operator, $\mathrm{diag}(A)$ returns a vector $a$ with $a_i = A_{i,i}$ if the input $A$ is a matrix, and $\mathrm{diag}(a)$ returns a matrix $A$ with
$$A_{i,j} = \begin{cases} a_i & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$
$\mathrm{rank}(X)$ returns the rank of matrix $X$.
1 Introduction
There is a common belief that the complexity of the systems we study is up to
a limited level and the useful information in our observation is usually sparse.
The property of sparsity can be applied to reduce the necessary sampling rate
of signal reconstruction in compressed sensing [6], to avoid over-fitting in regularized machine learning algorithms [24], to increase robustness against noise
in solving linear equations, to train the deep neural network in unsupervised
learning [16] and beyond.
A direct interpretation of sparsity is that there are many zero entries in
the data. More often, sparsity is reflected not directly by the data we collect,
but by the underlying pattern to be discovered. For example, the sparsity of
a voice signal is more prominent in the frequency domain [2]; the sparse pattern
of some astronomical or biomedical imaging data is revealed by wavelet
transformation [25, 26]. In recent years, we are able to generate or collect a lot
of data samples with many features, and different data samples and features
are usually correlated with each other. If we form the data into a matrix, the
matrix is often of low rank, which is another representation of sparsity and can
be exploited in many applications [8]. To encourage sparsity, we can incorporate
the sparsity interpretation into the objective function as a regularization term,
or put a sparsity upper bound as a hard constraint.
Sparsity usually leads to a non-convex, intractable problem. For example, we can require the number of nonzero components (the 0-norm)
of the data to be upper bounded; however, the 0-norm is non-convex. To tackle the non-convexity, researchers have proposed many solutions,
usually belonging to the following two categories. First, we can replace the non-convex function with its convex envelope, for example, use the 1-norm to
replace 0-norm, and solve the convex problem instead (this method is usually
called convex relaxation). And we can study under what condition the convex
relaxation is exact or how much performance loss we will suffer due to the relaxation [6]. Second, we can directly design some heuristic algorithms to solve the
non-convex problem based on engineering intuition, like the Orthogonal Matching Pursuit (OMP) algorithm in compressed sensing [21]. This kind of method
usually works well in practice but little theoretical performance guarantee can
be obtained.
1.1
Related work
The rank of a matrix, being the dimension of the smallest subspace containing the
data, is of essential importance, and research on it can be traced
back to the origins of matrix theory. In recent years, many problems involving low-rank
matrices have attracted interest. However, their non-convex nature makes the problems
generally intractable. In this case, some researchers proposed to use the nuclear
norm or log-det function of the matrix to replace the rank function and then
solve the relaxed convex problem [9, 23]. In [1], the matrix being of low rank is
interpreted as the matrix being factorized into two smaller-sized matrices, then
the matrix factorization techniques can be applied.
The alternating direction method of multipliers (ADMM) was invented in
the 1970s and has been gaining popularity in the era of big data; many large-scale
optimization problems that arise in practice can be formulated or equivalently
posed in a form appropriate for ADMM and distributed algorithms can be obtained [3]. Many successful applications including consensus and sharing have
been found. The theory related to the convergence rate of ADMM is an active
ongoing topic and many results for the convex problem are provided in [3] and
the references therein. In practice, many problems we encounter are non-convex, and ADMM has been proposed to tackle them heuristically,
such as the optimal power flow problem [28] and matrix factorization [4]. However,
in the non-convex regime, little theoretical guarantee can be obtained. To the
best of our knowledge, [12, 19] are among the first attempts to characterize the
convergence behavior of ADMM for general non-convex problems, and they provide only
limited theoretical performance guarantees.
In this paper, we consider the low-rank approximation problem with additional convex constraints, which is non-convex and NP-hard. By borrowing
some tricks from [3] [28], we can reformulate the problem into the form that can
be solved by ADMM. In each iteration of ADMM, the first update is to solve a
convex problem and the second update is to solve a standard low-rank approximation, the optimal solution of which can be obtained by singular value decomposition. Different from the general non-convex problems, the sub-problems
in ADMM can be solved efficiently. This approach is motivated by [28], in
which the authors also use ADMM to solve a non-convex problem with similar
structures. The different thing is that, due to the special objective function we
consider, i.e., kX − X̂kF , we can show that the first update is a projection onto
a convex set and then establish the convergence of primal variable by assuming
the convergence of dual variable, which is an appealing result for the ADMM
application in non-convex regime.
The remaining part of this paper is organized as follows,
⊲ In Section 2, we introduce the low rank approximation problem with
additional convex constraints and show that it can capture the structured low
rank approximation and feasible solution recovery of SDP relaxation as two
special cases.
⊲ In Section 3, we leverage ADMM to design a generic algorithm and show
that each step of this algorithm can be solved efficiently. We also provide some
theoretical results to characterize its convergence behavior.
⊲ In Section 4, we show the performance of our algorithm by extensive
evaluations with synthetic and real-world data.
⊲ Section 5 is for conclusion and future work.
2 Restricted Low Rank Approximation
2.1 Problem formulation
In this paper, we are particularly interested in the data-fitting problem and
restrict our attention to the low-rank solutions. More precisely, we are given a
matrix X̂ and we want to find a low-rank matrix to approximate X̂. The first
version of this problem is formulated as the Low Rank Approximation problem
$$\mathrm{LRA:}\quad \min_{X}\ \|\hat X - X\|_F \quad \text{s.t.}\ \operatorname{rank}(X) \le K, \quad \text{var.}\ X, \tag{1}$$
where K is an integer to specify the upper bound of the matrix rank. The
problem is non-convex due to the rank constraint but an optimal solution can
be given by the well known Eckart-Young-Mirsky Theorem,
Theorem 1 (Eckart-Young-Mirsky Theorem [7]) If the matrix X̂ admits
the singular value decomposition X̂ = U ΣV H with Σ = diag([σ1 , σ2 , · · · , σn ])
and σ1 ≥ σ2 ≥ · · · ≥ σn ≥ 0, an optimal solution to problem LRA is given by
$$X^* = \sum_{k=1}^{K} \sigma_k u_k v_k^H.$$
Furthermore, the minimizer is unique if σK and σK+1 are not equal.
This method is called truncated singular value decomposition (SVD). We
can see that even though LRA is non-convex, it can be solved in polynomial
time because SVD can be computed in polynomial time.
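To make the use of Theorem 1 concrete, the following minimal sketch computes the best rank-K approximation by truncated SVD; it assumes NumPy, and the names X_hat and K are illustrative.

```python
import numpy as np

def truncated_svd(X_hat, K):
    """Best rank-K approximation of X_hat in Frobenius norm (Theorem 1)."""
    # np.linalg.svd returns singular values sorted in decreasing order.
    U, s, Vh = np.linalg.svd(X_hat, full_matrices=False)
    # Keep only the K leading singular triplets.
    return (U[:, :K] * s[:K]) @ Vh[:K, :]

# Illustrative usage: approximate a random 100 x 80 matrix with rank 5.
X_hat = np.random.randn(100, 80)
X_star = truncated_svd(X_hat, K=5)
print(np.linalg.matrix_rank(X_star), np.linalg.norm(X_hat - X_star, "fro"))
```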
In this paper, we put some additional constraints on LRA and call the new problem the Restricted Low Rank Approximation problem (RLRA). The problem is cast as follows:
$$\mathrm{RLRA:}\quad \min_X\ \|\hat X - X\|_F^2 \quad \text{s.t.}\ \operatorname{rank}(X) \le K, \tag{2a}$$
$$\qquad\qquad g(X) \le 0, \tag{2b}$$
$$\text{var.}\ X,$$
where g(X) is a convex function, requiring that the approximation X is located
in a convex set.
The difference between LRA and RLRA is the constraint g(X) ≤ 0. The
convexity of g(X) will makes the readers feel that the difference is not significant,
since the convex things are usually treated as easy in most literature. However,
this is not true. The new constraint makes the truncated SVD not applicable
to solve RLRA.¹ More importantly, RLRA is believed to be NP-hard [20].
There is no hope to always achieve the optimal solution in polynomial time
unless P=NP.
There are several proper ways to tackle this non-convex problem. Firstly,
we can replace the rank constraints by some constraints on the nuclear norm
of X [8] and solve the relaxed problem instead; secondly, we can use the kernel
representation, for example, to equivalently transform the rank constraint into the
fact that the matrix can be factorized into two smaller sized matrices and then
apply the matrix factorization technique [20]. In this paper, we propose to use
ADMM, which has found its success in many problems [3], to solve this problem.
2.2 Two specific instances
The generic problem RLRA can be used to solve many problems and we review
two of them in this section to provide more motivations.
2.2.1 Low-rank approximation with linear structures
The data fitting task can be formulated as a structured low-rank approximation problem (SLRA) if the system generating the data is a linear model of
bounded complexity [20]. Many applications can be found in system theory,
signal processing, computer algebra, etc [20].
We denote by A a set of matrices with a specific affine structure, for example, Hankel matrices, Toeplitz matrices, etc. The problem is formally given as follows:
$$\mathrm{SLRA:}\quad \min_X\ \|X - \hat X\|_F^2 \quad \text{s.t.}\ \operatorname{rank}(X) \le K, \tag{3a}$$
$$\qquad\qquad X \in \mathcal{A}, \tag{3b}$$
$$\text{var.}\ X \in \mathbb{C}^{m \times n}.$$
Different applications lead to different requirements of A [20]. Without
diving into to the details, we list some of them here,
• Hankel matrix: approximate realization, model reduction, output error
identification;
• Sylvester matrix: Pole placement by low-order controller, approximate common
divisor;
• Hankel&Toeplitz matrix: Harmonic retrieval;
• Non-negative matrix: image mining, Markov chains.
Some heuristic or local-optimization based algorithms for different problems
are summarized in [20].
1. It is easy to imagine that the solution by truncated SVD may not respect g(X) ≤ 0.
2.2.2 Feasible solution recovery of SDP relaxation
Many communication problems, like multicast downlink transmit beamforming
problem, can be formulated as a quadratically constrained quadratic program
$$\mathrm{QCQP:}\quad \min_x\ x^H C x \quad \text{s.t.}\ x^H F_i x \ge g_i,\ i = 1, \ldots, k,$$
$$\qquad\qquad x^H H_i x = l_i,\ i = 1, \ldots, m, \quad \text{var.}\ x \in \mathbb{R}^n,$$
and it can be shown to be equivalent to problem QCQP-SDP:
$$\mathrm{QCQP\text{-}SDP:}\quad \min_X\ \operatorname{trace}(C \cdot X) \quad \text{s.t.}\ \operatorname{trace}(F_i \cdot X) \ge g_i,\ i = 1, \ldots, k, \tag{5a}$$
$$\qquad\qquad \operatorname{trace}(H_i \cdot X) = l_i,\ i = 1, \ldots, m, \tag{5b}$$
$$\qquad\qquad \operatorname{rank}(X) = 1, \tag{5c}$$
$$\text{var.}\ X \succeq 0,\ X \in \mathbb{C}^{n \times n}.$$
By dropping the rank-1 constraint (5c) we can have a standard SDP problem
(denoted as QCQP-SDPR) and it can be solved by a standard solver like CVX
[10]. This technique is called SDP relaxation and more details can be found
in [18].
If the optimal solution of the relaxed problem, denoted as X̂, happens to
respect the rank-1 constraint, then the optimal solution of the problem QCQP,
denoted as x∗ , can be obtained by the fact that X̂ = x∗ x∗T . In this case, the
SDP relaxation is called exact relaxation, and the original QCQP problem can
be solved efficiently and exactly even though it is non-convex. For the optimal
power flow problem, the exact relaxation always happens if some conditions
hold [17] [15].
However, more often the rank of X̂ is larger than 1,² and the relaxation is
not exact. In this case, another step is needed to recover a good solution if we
do not want to waste previous effort. The most direct way is to find the rank-1
matrix that is closest to X̂, i.e., solving LRA with K = 1 by truncated SVD.
This approach is suggested in [30] for the state estimation problem of power
system. However, we want to point out that this approach is not guaranteed to
produce a feasible solution of the original problem, because the rank-1 matrix
by truncated SVD may not satisfy the other constraints of QCQP-SDP like
(5a) (5b). We provide a successful case in Fig 1 and an unsuccessful case in
Fig 2 for a clearer illustration; X̃ is the optimal solution of QCQP-SDPR
and X1 is the rank-1 approximation of X̃.
2. Otherwise SDP relaxation would always solve a non-convex problem exactly, which is not true.
Figure 1: An example where truncated SVD works for FSR-SDPR. Figure 2: An example where truncated SVD fails to work for FSR-SDPR.
Some heuristic and problem-dependent algorithms are proposed in [18]. For
example, if it is required that ‖x‖₂ ≤ 1 for QCQP, we can first obtain a solution x̂ from the rank-1 matrix by truncated SVD and then normalize it to x̂/‖x̂‖₂ if ‖x̂‖₂ > 1.
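A minimal sketch of this recovery heuristic for the constraint ‖x‖₂ ≤ 1, assuming the relaxation optimum X̃ is Hermitian positive semidefinite (the names are illustrative, and the SDP-solving step itself is omitted):

```python
import numpy as np

def recover_rank1(X_tilde):
    """Rank-1 recovery from an SDP relaxation optimum, then renormalize if needed."""
    w, V = np.linalg.eigh(X_tilde)               # eigenvalues in increasing order
    x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]  # rank-1 factor: sqrt(sigma_1) * u_1
    norm = np.linalg.norm(x_hat)
    return x_hat / norm if norm > 1 else x_hat   # enforce ||x||_2 <= 1

# Illustrative PSD matrix standing in for the relaxation optimum.
A = np.random.randn(4, 6)
print(np.linalg.norm(recover_rank1(A @ A.T)))
```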
Here we propose a generic method to solve the problem FSR-SDPR (Feasible Solution Recovery of SDP Relaxation) by finding a rank-1 matrix from
which we can recover a feasible solution to QCQP.
$$\mathrm{FSR\text{-}SDPR:}\quad \min_X\ \|X - \hat X\|_F \quad \text{s.t.}\ g(X) \le 0,\ \operatorname{rank}(X) = 1,\ X \succeq 0,\ X \in \mathbb{C}^{n \times n}, \quad \text{var.}\ X,$$
where g(X) ≤ 0 represents the original convex constraints like (5a) and (5b).
The rationale behind this approach is that we want to find the point closest
to X̂ in the feasible region. If the objective function, trace(C · X), is Lipschitz
continuous, the performance of the recovered solution is guaranteed to be close
to the optimal value of QCQP-SDPR, hence close to the optimal value of
QCQP. It is not difficult to see that solving this problem can be viewed as a
special case of RLRA and the solution is far from being trivial.
3 Algorithm Design
Next we will present how to solve the problem RLRA via the ADMM algorithm.
3.1 ADMM algorithm
In this section, we review the basic version of ADMM [3] to bring all the readers to the same page.³ We present the standard problem that ADMM can solve in SP, in which the objective function is separable and two variables x, y are coupled with each other by a linear constraint.
3. The readers are recommended to read [3] for more details.
$$\mathrm{SP:}\quad \min_{x,y}\ f(x) + g(y) \quad \text{s.t.}\ Ax + By = c, \quad \text{var.}\ x, y. \tag{7}$$
The augmented Lagrangian of the above problem is given by
$$L_\rho(x, y, \lambda) = f(x) + g(y) + \lambda^T(Ax + By - c) + \frac{\rho}{2}\|Ax + By - c\|_2^2,$$
the scaled form of which is
$$L_\rho(x, y, u) = f(x) + g(y) + \frac{\rho}{2}\|Ax + By - c + u\|_2^2, \tag{8}$$
and we will use the scaled form in the sequel unless specified.
In each iteration, the ADMM algorithm consists of the following three steps:
• x update: $x^{k+1} = \arg\min_x L_\rho(x, y^k, u^k)$.
• y update: $y^{k+1} = \arg\min_y L_\rho(x^{k+1}, y, u^k)$.
• u update: $u^{k+1} = u^k + Ax^{k+1} + By^{k+1} - c$.
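A minimal sketch of this iteration in the scaled form, assuming the two subproblem solvers are supplied as callables; argmin_x and argmin_y are illustrative names and must be derived for the specific f, g, A, B, c at hand.

```python
import numpy as np

def admm(argmin_x, argmin_y, A, B, c, y0, u0, n_iter=100):
    """Generic scaled-form ADMM for min f(x) + g(y) s.t. Ax + By = c."""
    y, u = y0, u0
    for _ in range(n_iter):
        x = argmin_x(y, u)           # x-update: argmin_x L_rho(x, y, u)
        y = argmin_y(x, u)           # y-update: argmin_y L_rho(x, y, u)
        u = u + A @ x + B @ y - c    # scaled dual update
    return x, y, u
```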
The ADMM algorithm is guaranteed to converge under some conditions, as
shown in Theorem 2.
Theorem 2 ([3]) If f(x) and g(y) are closed, proper, and convex, and the unaugmented Lagrangian L₀ has a saddle point, the ADMM algorithm is guaranteed
to converge in the following sense:
• The residual Ax + By − c will converge to 0, i.e., the solution xk , y k will
approach feasibility.
• The objective value f(x^k) + g(y^k) will converge to the optimal value.
• The dual variable u will converge to the dual optimal point.
We highlight that under the conditions provided in Theorem 2, the primal
variables x, y are not guaranteed to converge. But since the objective value is
guaranteed to converge, all the solutions after enough iterations will produce
the same results and are equivalently good.
3.1.1 Some remarks
We remark some properties of ADMM here. First, the ADMM algorithm can
produce a reasonably accurate solution very fast, but takes more time to generate a highly accurate solution. For many machine learning or statistical tasks,
the overall performance depends on both problem formulation (including feature engineering and model selection) and parameter estimation (solving the
optimization problem). Usually the performance bottleneck comes from the
first one and a more accurate solution will not lead to significant performance
improvement. In this way, a reasonably accurate solution is totally acceptable
and this property of ADMM is appealing. Second, the optimization variable
of Problem SP is (x, y). In conventional algorithm such as gradient descent,
x, y are updated simultaneously, but in ADMM, we update x first and use the
newly updated x to update y. The intuition is that we want the newly updated information to take effect as soon as possible, which is similar to the
logic of the Gauss-Seidel algorithm. Last but not least, the ADMM algorithm can
be implemented in a distributed manner with a parameter server [3].
3.2 Problem reformulation and applying ADMM
The original problem formulation in RLRA does not have the form of SP to
be readily solved by ADMM. We need to reformulate the problem. Firstly we
define two indicator functions as follows,
$$\mathcal{I}(X) = \begin{cases} 0, & \text{if } \operatorname{rank}(X) \le K, \\ +\infty, & \text{otherwise,} \end{cases} \tag{9}$$
$$\mathcal{J}(X) = \begin{cases} 0, & \text{if } g(X) \le 0, \\ +\infty, & \text{otherwise.} \end{cases} \tag{10}$$
And we reformulate Problem RLRA equivalently as Problem RLRA-ADMM:
$$\mathrm{RLRA\text{-}ADMM:}\quad \min_{X,Y}\ \underbrace{\|X - \hat X\|_F + \mathcal{J}(X)}_{X \text{ involved}} + \underbrace{\mathcal{I}(Y)}_{Y \text{ involved}} \quad \text{s.t.}\ X - Y = 0, \quad \text{var.}\ X, Y. \tag{11}$$
The equivalence of the two problems is formally established in Lemma 1.
Lemma 1 If Problem RLRA is feasible, then the optimal solutions and optimal
objective values of RLRA and RLRA-ADMM are the same.
More importantly, RLRA-ADMM is readily solved by the ADMM algorithm. We denote $f(X) = \|X - \hat X\|_F + \mathcal{J}(X)$, $g(Y) = \mathcal{I}(Y)$, and the augmented function with dual variable $U$ as
$$L_\rho(X, Y, U) = f(X) + g(Y) + \frac{\rho}{2}\|X - Y + U\|_F^2.$$
Following the ADMM procedure, we will have the following three updates in
each iteration.
• X update: $X^{k+1} = \arg\min_X L_\rho(X, Y^k, U^k)$.
• Y update: $Y^{k+1} = \arg\min_Y L_\rho(X^{k+1}, Y, U^k)$.
• U update: $U^{k+1} = U^k + X^{k+1} - Y^{k+1}$.
In the next part, we carefully study the details of the algorithm and show
that each update can be carried out efficiently even though the problems can
be non-convex.
3.2.1 The subproblems
The U update is simple and direct. We revisit the other two in this part.
3.2.1.1 X update
The optimization problem in the X update is equivalent to
$$\mathrm{X\text{-}MIN:}\quad \min_X\ \|X - \hat X\|_F^2 + \frac{\rho}{2}\|X - Y^k + U^k\|_F^2 \quad \text{s.t.}\ g(X) \le 0.$$
The problem X-MIN is convex and can be solved efficiently. As shown in Lemma 2 below, it amounts to a projection onto the convex set $S = \{X \mid g(X) \le 0\}$, which can often be computed efficiently or even in closed form.
Lemma 2 The optimal solution of X-MIN can be obtained by solving
$$\min_X\ \Big\|X - \frac{1}{1+\frac{\rho}{2}}\Big(\hat X + \frac{\rho}{2}(Y^k - U^k)\Big)\Big\|_F^2 \quad \text{s.t.}\ g(X) \le 0,$$
which is a projection of $\frac{1}{1+\rho/2}\big(\hat X + \frac{\rho}{2}(Y^k - U^k)\big)$ onto the convex set $\{X \mid g(X) \le 0\}$.
Proof We prove the equivalence between the two problems by showing the
linear relationship between the two objective functions. We will use the equation
that kAk2F = trace(AAH ) in the proof.
$$\begin{aligned}
&\|X - \hat X\|_F^2 + \frac{\rho}{2}\|X - Y^k + U^k\|_F^2 \\
&= \operatorname{trace}\big((X - \hat X)(X - \hat X)^H\big) + \frac{\rho}{2}\operatorname{trace}\big((X - Y^k + U^k)(X - Y^k + U^k)^H\big) \\
&= \operatorname{trace}\Big(\big(1 + \tfrac{\rho}{2}\big) XX^H - 2\big(\hat X + \tfrac{\rho}{2}(Y^k - U^k)\big)X^H\Big) + C_1 \\
&= \big(1 + \tfrac{\rho}{2}\big)\operatorname{trace}\Big(\Big(X - \tfrac{1}{1+\rho/2}\big(\hat X + \tfrac{\rho}{2}(Y^k - U^k)\big)\Big)\Big(X - \tfrac{1}{1+\rho/2}\big(\hat X + \tfrac{\rho}{2}(Y^k - U^k)\big)\Big)^H\Big) + C_2 \\
&= \big(1 + \tfrac{\rho}{2}\big)\Big\|X - \tfrac{1}{1+\rho/2}\big(\hat X + \tfrac{\rho}{2}(Y^k - U^k)\big)\Big\|_F^2 + C_2.
\end{aligned}$$
The proof is completed.
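As a quick numerical sanity check of the completion-of-squares step above, one can verify that the two objectives differ only by a constant independent of X; the random data below is purely illustrative.

```python
import numpy as np

rho = 5.0
X_hat, Y, U = (np.random.randn(6, 4) for _ in range(3))
D = (X_hat + (rho / 2) * (Y - U)) / (1 + rho / 2)

def original(X):   # objective of X-MIN (ignoring the constraint)
    return np.linalg.norm(X - X_hat, "fro")**2 + (rho / 2) * np.linalg.norm(X - Y + U, "fro")**2

def rescaled(X):   # (1 + rho/2) * ||X - D||_F^2, i.e. the projection objective without C_2
    return (1 + rho / 2) * np.linalg.norm(X - D, "fro")**2

X1, X2 = np.random.randn(6, 4), np.random.randn(6, 4)
# The difference should be the same constant C_2 for any X.
print(original(X1) - rescaled(X1), original(X2) - rescaled(X2))
```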
3.2.1.2 Y update
The optimization problem in Y update is equivalent to
$$\mathrm{Y\text{-}MIN:}\quad \min_Y\ \frac{\rho}{2}\|X^{k+1} - Y + U^k\|_F^2 \quad \text{s.t.}\ \operatorname{rank}(Y) \le K.$$
The above optimization problem is non-convex because the rank constraint is non-convex, so it appears challenging at first sight. However, by Theorem 1 its optimal solution is the truncated SVD of $X^{k+1} + U^k$. We summarize the algorithm as Algorithm 1 to end this part.
Algorithm 1 ADMM-RLRA: ADMM algorithm for RLRA
Require: data matrix X̂, rank bound K, convex constraint g(·), penalty parameter ρ
Ensure: approximate solution X
1: initialization
2: while not terminated do
3:   obtain X^{k+1} by solving X-MIN
4:   obtain Y^{k+1} by solving Y-MIN
5:   U^{k+1} = U^k + X^{k+1} − Y^{k+1}
6:   k = k + 1
7: end while
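A minimal sketch of Algorithm 1 for the special case where g(X) ≤ 0 encodes entrywise non-negativity (X ≥ 0), so the projection in the X update is a simple clipping; the function and variable names are illustrative.

```python
import numpy as np

def truncated_svd(M, K):
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :K] * s[:K]) @ Vh[:K, :]

def admm_rlra_nonneg(X_hat, K, rho=5.0, n_iter=30):
    """ADMM-RLRA with the convex constraint X >= 0 (entrywise)."""
    Y = np.zeros_like(X_hat)
    U = np.zeros_like(X_hat)
    for _ in range(n_iter):
        # X-update (X-MIN): project the weighted average of Lemma 2 onto {X >= 0}.
        D = (X_hat + (rho / 2) * (Y - U)) / (1 + rho / 2)
        X = np.maximum(D, 0.0)
        # Y-update (Y-MIN): best rank-K approximation of X + U.
        Y = truncated_svd(X + U, K)
        # Dual update.
        U = U + X - Y
    return X, Y

X_hat = np.abs(np.random.randn(100, 80))
X, Y = admm_rlra_nonneg(X_hat, K=5)
print(np.linalg.norm(X_hat - X, "fro"), np.linalg.norm(X - Y, "fro"))
```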
3.3 On the convergence of the algorithm
The convergence of ADMM for the non-convex problem is an open problem [3].
Some positive results are obtained with some assumptions [12, 19]. We provide
some preliminary results in this section regarding the convergence and feasibility of Algorithm 1. An assumption that the dual variable U^k converges is
used in the theoretical analysis; the same assumption is used in the convergence analysis of ADMM for polynomial optimization [14], non-negative matrix factorization [29], and matrix completion with non-negative factors [27].
Lemma 3 If the dual variable Uk converges, the solution Xk in the X update
of Algorithm 1 will approach feasibility of RLRA.
Proof Since U^k converges, X^k − Y^k converges to 0. The iterate X^k satisfies the constraint g(X) ≤ 0 for all k, while Y^k satisfies the rank constraint for all k; since their difference vanishes, X^k approaches the set of matrices of rank at most K and therefore approaches feasibility for RLRA.
The proof is completed.
When we characterize the convergence of ADMM in Theorem 2, the primal
variables are not guaranteed to converge, but can oscillate in the optimal region.
For the non-convex problem, because we do not know whether the objective
function will converge or not, the convergence of the primal variable is more
important, which we will discuss next.
Since $U^k$ converges to $\bar U$ and $X^k - Y^k$ converges to 0, the X update can be written as
$$X^{k+1} = \operatorname*{argmin}_{g(X) \le 0}\ \Big\|X - \frac{1}{1+\frac{\rho}{2}}\Big(\hat X + \frac{\rho}{2}(X^k - U^k)\Big)\Big\|_F^2.$$
Let $D = \hat X - \frac{\rho}{2}\bar U$; it can be further simplified as
$$X^{k+1} = \operatorname*{argmin}_{g(X) \le 0}\ \Big\|X - \frac{1}{1+\frac{\rho}{2}}\Big(\frac{\rho}{2}X^k + D\Big)\Big\|_F^2 = \operatorname*{argmin}_{g(X) \le 0}\ \big\|X - \big(\alpha X^k + (1-\alpha)D\big)\big\|_F^2 = C(X^k), \qquad \alpha = \frac{\rho}{2+\rho}.$$
Based on this understanding, we have the following theorem to characterize
its convergence.
Theorem 3 If the dual variable converges, the primal variable will converge,
and when k is large enough, we can have
$$\|X^{k+2} - X^{k+1}\|_F \le \frac{\rho}{\rho+2}\|X^{k+1} - X^k\|_F.$$
Proof $X^{k+2}$ is the projection of $\alpha X^{k+1} + (1-\alpha)D$ and $X^{k+1}$ is the projection of $\alpha X^k + (1-\alpha)D$. Since the feasible region of the projections is convex and projection onto a convex set is non-expansive [11], we have
$$\|X^{k+2} - X^{k+1}\|_F \le \|\alpha X^{k+1} + (1-\alpha)D - \alpha X^k - (1-\alpha)D\|_F = \alpha\|X^{k+1} - X^k\|_F.$$
We provide some remarks regarding this theorem. Firstly, the convergence of X k
will guarantee that we obtain a local optimum or stationary point. Secondly,
the inequality indicates that the primal variable will converge exponentially and
a smaller ρ will lead to a faster convergence.
3.4 Another ADMM
3.4.1 Algorithm design
In this part, we provide another ADMM algorithm, which considers different
constraints compared with the algorithm we previously proposed, i.e., in X
update we consider the rank constraint while in Y update we consider the
convex constraint.
We denote $\tilde f(X) = \|X - \hat X\|_F + \mathcal{I}(X)$, $\tilde g(Y) = \mathcal{J}(Y)$, and the augmented function with dual variable $U$ as
$$\tilde L_\rho(X, Y, U) = \tilde f(X) + \tilde g(Y) + \frac{\rho}{2}\|X - Y + U\|_F^2.$$
Following the ADMM procedure, we will have the following three updates in
each iteration.
• X update: $X^{k+1} = \arg\min_X \tilde L_\rho(X, Y^k, U^k)$.
• Y update: $Y^{k+1} = \arg\min_Y \tilde L_\rho(X^{k+1}, Y, U^k)$.
• U update: $U^{k+1} = U^k + X^{k+1} - Y^{k+1}$.
3.4.1.1 The subproblems
By similar arguments, the optimization problem in the X update is equivalent to
$$\min_X\ \Big\|X - \frac{1}{1+\frac{\rho}{2}}\Big(\hat X + \frac{\rho}{2}(Y^k - U^k)\Big)\Big\|_F^2 \quad \text{s.t.}\ \operatorname{rank}(X) \le K,$$
which is to find a low-rank matrix approximating $\frac{1}{1+\rho/2}\big(\hat X + \frac{\rho}{2}(Y^k - U^k)\big)$ and can be solved by truncated SVD.
The optimization problem in the Y update is equivalent to
$$\min_Y\ \frac{\rho}{2}\|X^{k+1} - Y + U^k\|_F^2 \quad \text{s.t.}\ g(Y) \le 0.$$
The problem is convex and can be viewed as a projection of $X^{k+1} + U^k$ onto the convex set $S = \{Y \mid g(Y) \le 0\}$. It can be solved efficiently and may even have a closed-form solution.
Figure 3: An illustration of X update
3.4.2 Convergence analysis
The theoretical analysis is also based on the assumption that the dual variable
will converge. By the same argument as in Section 3.3, the primal variable is iteratively updated by the function $Y = H(X)$,⁴ where
$$H(X) = \operatorname*{argmin}_{\operatorname{rank}(Y) \le K}\ \big\|Y - \big(\alpha X + (1-\alpha)D\big)\big\|_F^2, \qquad \alpha = \frac{\rho}{2+\rho},$$
i.e., $X^{k+1} = H(X^k)$. An interpretation of the update is provided in Fig 3.
If the feasible region of the projection were convex, the update rule would converge with $\|X^{k+2} - X^{k+1}\|_F \le \alpha\|X^{k+1} - X^k\|_F$; unfortunately, the set $\{X \mid \operatorname{rank}(X) \le K\}$ is non-convex and the update is not guaranteed to converge. In the following, we provide some simple necessary conditions for the convergence of the primal variable. Firstly, if $X^k$ converges to $X^*$, then $X^*$ is a fixed point of $H(X)$,⁵ as shown in Proposition 1.
Proposition 1 For any sequence $X^k$ generated by $X^{k+1} = H(X^k)$ that converges to $X^*$, we have $X^* = H(X^*)$.
We can see that if $X^k$ converges to $X^*$, then $X^*$ is a fixed point of $Y = H(X)$.
We define a set $\mathcal{F}$ as
$$\mathcal{F} = \{X \mid X = H(X),\ g(X) \le 0\},$$
4. To make the function H(X) well-defined, we assume that the minimizer of the X update is unique, which means that, in each iteration, σ_K ≠ σ_{K+1} holds for the singular value decomposition of αX^k + (1 − α)D. This assumption is used in the sequel.
5. This result depends on the continuity of H(X) at X^*, which requires a formal proof. We conjecture that H(X) is continuous at the points where it is well-defined.
and provide a necessary condition for the convergence of Xk in Lemma 4.
Lemma 4 Under the condition that the dual variable Uk converges to Ū, if Xk
converges to X∗ , then X∗ ∈ F .
Lemma 4 is useful in the sense that, with the knowledge of D, we can restrict
our attention to F. On one hand, if we observe that the dual variable converges,
we do not need to keep updating since the only possible limiting points are in
F ; on the other hand, if we find F is empty, then we can say that the primal
variable will not converge and there is no need to spend more effort on updating.
3.4.2.1 The fixed points of H(X)
It turns out that the fixed points of Y = H(X) are highly related to the singular
value decomposition of D, which we will show in this section.
To warm up, we firstly provide one fixed point in Corollary 1.
Corollary 1 If $\tilde X$ is the optimal solution of $\min_{\operatorname{rank}(X) \le K} \|X - D\|_F$ obtained by truncated SVD, then $\tilde X = H(\tilde X)$.
Proof Let $D = U\Sigma V^T$; the optimal solution is $\tilde X = U \begin{pmatrix} \Sigma_{1:K} & 0 \\ 0 & 0 \end{pmatrix} V^T$. Then,
$$\alpha\tilde X + (1-\alpha)D = \alpha U \begin{pmatrix} \Sigma_{1:K} & 0 \\ 0 & 0 \end{pmatrix} V^T + (1-\alpha)U\Sigma V^T = U \begin{pmatrix} \Sigma_{1:K} & 0 \\ 0 & (1-\alpha)\Sigma_{K+1:N} \end{pmatrix} V^T,$$
which is the singular value decomposition of $\alpha\tilde X + (1-\alpha)D$. Then the value of $H(\tilde X)$ is $U \begin{pmatrix} \Sigma_{1:K} & 0 \\ 0 & 0 \end{pmatrix} V^T = \tilde X$.
The proof is completed.
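The fixed-point property of Corollary 1 is easy to verify numerically: one application of H to the truncated SVD of D returns the same matrix (a minimal sketch with illustrative data; alpha stands for ρ/(2+ρ)).

```python
import numpy as np

def truncated_svd(M, K):
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :K] * s[:K]) @ Vh[:K, :]

def H(X, D, K, alpha):
    # Best rank-K approximation of alpha * X + (1 - alpha) * D.
    return truncated_svd(alpha * X + (1 - alpha) * D, K)

rho, K = 5.0, 3
alpha = rho / (2 + rho)
D = np.random.randn(20, 15)
X_tilde = truncated_svd(D, K)
print(np.linalg.norm(H(X_tilde, D, K, alpha) - X_tilde, "fro"))  # ~ 0 up to round-off
```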
We note that even though X̃ is a fixed point of Y = H(X), it may not be a
stationary point of X update because X̃ may not satisfy g(X) ≤ 0.
Next, we characterize all possible fixed points of H(X) in Corollary 2.
Corollary 2 Suppose $D$ admits the singular value decomposition $U\Sigma V^T$ with $\sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_n \ge 0$, and $I = \{i_1, i_2, \ldots, i_K\}$ is a subset of $\{1, 2, \ldots, n\}$ of size $K$. If for any $i \in I$ and $j \notin I$ we have $\sigma_i \ge (1-\alpha)\sigma_j$, then $\tilde X = \sum_{i \in I} \sigma_i u_i v_i^T$ is a fixed point of $Y = H(X)$. Conversely, if $\tilde X$ is a fixed point, then there must exist such a subset $I$ with $\tilde X = \sum_{i \in I} \sigma_i u_i v_i^T$.
Proof Suppose $\tilde X$ is a fixed point of $H(X)$ and $\alpha\tilde X + (1-\alpha)D$ admits the singular value decomposition $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$. Then $\tilde X = \mathbf{U}\begin{pmatrix}\mathbf{\Sigma}_{1:K} & 0\\ 0 & 0\end{pmatrix}\mathbf{V}^T$. Substituting into $(1-\alpha)D + \alpha\tilde X = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$, we have
$$(1-\alpha)D + \alpha\mathbf{U}\begin{pmatrix}\mathbf{\Sigma}_{1:K} & 0\\ 0 & 0\end{pmatrix}\mathbf{V}^T = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T.$$
Thus
$$D = \mathbf{U}\begin{pmatrix}\mathbf{\Sigma}_{1:K} & 0\\ 0 & \frac{\mathbf{\Sigma}_{K+1:n}}{1-\alpha}\end{pmatrix}\mathbf{V}^T.$$
To obtain the standard SVD of $D$, we need to permute the diagonal elements of $\begin{pmatrix}\mathbf{\Sigma}_{1:K} & 0\\ 0 & \frac{\mathbf{\Sigma}_{K+1:n}}{1-\alpha}\end{pmatrix}$ so that they are decreasing, together with the corresponding columns of $\mathbf{U}$ and $\mathbf{V}$, whereas the diagonal elements of $\begin{pmatrix}\mathbf{\Sigma}_{1:K} & 0\\ 0 & \mathbf{\Sigma}_{K+1:n}\end{pmatrix}$ are decreasing. Comparing the singular values of $D$ obtained this way with the condition $\sigma_i \ge (1-\alpha)\sigma_j$ establishes the results in this corollary.
We provide some remarks regarding this corollary.
• By Corollary 2, it is easy to see that Corollary 1 is true.
• The number of fixed points is upper bounded by $\binom{n}{K}$, because we need to choose K singular values to form one fixed point of H(X).
• A larger value of α (meaning a larger ρ) makes the condition $\sigma_i \ge (1-\alpha)\sigma_j$ more difficult to satisfy, so there are fewer possible fixed points.
4 Simulations
In this section, we conduct some simulations to show the performance of the
proposed algorithm. Our purpose is to investigate (i) the performance comparison with other algorithms under different scenarios, (ii) the impact of values of
ρ, and (iii) the performance on a real application.
4.1 Non-negative low-rank approximation
Firstly, we consider a special case with the convex constraints being Xi,j ≥
0, ∀i, j, i.e., the non-negative low-rank approximation problem.
Besides the algorithm we propose, there are two alternatives to solve
this problem: Alternating Direction Projection (ADP) [5], and Non-negative
Matrix Factorization (NMF) [20]. We briefly discuss them here.
In each iteration of ADP, we first make a projection onto the rank-K set and
then another projection onto the convex set {X|g(X) ≤ 0}, both of which can
be solved efficiently. One advantage of ADP is that the algorithm is guaranteed
to converge, but this is also its disadvantage, because it is easily trapped
at a local optimum. In NMF, we first equivalently transform the rank-K
constraint into the factorization X = AB, A ∈ R^{M×K}, B ∈ R^{K×N}, and impose
Figure 4: Non-negative low-rank approximation with K = 3. Figure 5: Non-negative low-rank approximation with K = 6. Figure 6: Non-negative low-rank approximation with K = 10. (Each figure plots the objective values of ADMM-RLRA, ADP, and NNMF against the number of iterations.)
constraints Ai,j ≥ 0, Bi,j ≥ 0 to ensure that Xi,j ≥ 0, which is obviously
more strict. Then we apply the ADMM algorithm proposed in [29] to obtain the
factorization.
In the experiments, the original data X̂ ∈ R^{100×80} is randomly generated and ρ = 5 for the algorithm we propose. The objective values of the different algorithms under different scenarios (different rank constraints) are shown in Fig 4 to Fig 6.
It can be seen that the performance of our proposed algorithm is the best in all three cases. NMF fails to converge when K = 10. Another observation is that the objective value of ADP remains the same after several iterations; the reason is that the optimization variable does not change once it becomes feasible. The reason that our algorithm outperforms NMF is that NMF requires both A and B to be non-negative, which is more restrictive than the original problem.
4.1.1 The impact of ρ
We have few theoretical results on how the parameter ρ impacts the performance and how to choose a proper value of ρ for different problems. We provide some simulation results regarding the convergence and performance here. In this simulation, we fix the data matrix X̂, keep K = 5, and examine the residual value ‖X − Y‖_F and the objective value for ρ = 1, 5, 9, 15. The results are shown in Figs 7 and 8.
From Fig 7, we can see that the residual value converges to 0 when ρ = 5, 9, 15
and converges faster with a larger ρ, but fails to converge when ρ = 1; on the
other hand, as shown in Fig 8, for the scenarios when the algorithm converges,
the objective value decreases more slowly if ρ is larger. As we can see, a larger
ρ will make the algorithm converge more easily, but will result in worse
performance, indicating that there is an underlying tradeoff, which we leave for
future work.
Figure 7: Values of the residual (F-norm of X − Y) with different ρ. Figure 8: Values of the objective with different ρ. (Both figures plot against the number of iterations, for ρ = 1, 5, 9, 15.)
4.2 Image denoising by partial prior knowledge
Next, we consider an application of image denoising. More precisely, we have
a noisy image X̂ and two kinds of prior knowledge: (a) the original image is of
low rank and the rank upper bound is K; (b) the original values of some pixels.
And we want to obtain a clear image with less noise.
Singular value decomposition has been shown to be useful in image denoising [22], [13], but direct truncated SVD cannot use the second kind of prior knowledge in the scenario we consider. Here we formulate an optimization problem in the form of RLRA with the convex constraint g(X) ≤ 0 being X_{i,j} = x_{i,j}, (i, j) ∈ P, where P is the set of pixels with known values. We do the experiment on the MIT logo (the rank of the original image is 5) with ρ = 5, and the values of 5% of the pixels are known a priori. The truncated SVD algorithm (TSVD) is used as a comparison. The original image, noisy image, denoised image by ADMM-RLRA, and denoised image by TSVD are shown in Fig 9 to Fig 12. The
image qualities measured by PSNR and SNR are shown in Table 1.
Table 1: Image quality (PSNR, SNR)

            | PSNR   | SNR
Noisy Image | -3.504 | -4.895
By TSVD     |  8.675 |  7.284
By RLRA     | 12.895 | 11.504
The simulation results show that the second kind of prior knowledge is useful, and ADMM-RLRA can use it to almost double the PSNR and improve the image quality significantly, which verifies the effectiveness of our algorithm.
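For this application the convex set is {X | X_{i,j} = x_{i,j} for (i, j) ∈ P}, so the projection step of the X update simply overwrites the known pixels; a minimal sketch follows (the boolean mask P marking the known pixels is an illustrative construction).

```python
import numpy as np

def project_known_pixels(D, X_known, P):
    """Project D onto {X : X[i, j] = X_known[i, j] for all (i, j) with P[i, j] True}."""
    X = D.copy()
    X[P] = X_known[P]
    return X

# Illustrative setup: 5% of the pixels of the clean image are known a priori.
clean = np.random.rand(60, 100)                      # stand-in for the original image
noisy = clean + 0.5 * np.random.randn(60, 100)
P = np.random.rand(60, 100) < 0.05                   # mask of known pixels
projected = project_known_pixels(noisy, clean, P)
print(np.allclose(projected[P], clean[P]))           # True: known pixels are restored
```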
Figure 9: Original image: MIT logo. Figure 10: Noisy image with Gaussian noise. Figure 11: Recovered image by RLRA. Figure 12: Recovered image by TSVD.
5 Conclusion and Future Work
This paper proposes to use the ADMM algorithm to solve the restricted low-rank approximation problem. We show that all subproblems in ADMM can be solved efficiently. More interestingly, under the condition that the dual variable converges, the update of the primal variable is carried out by a fixed mapping and the possible limiting points are its fixed points. We provide some preliminary theoretical results based on this understanding and show the performance of the algorithm by experiments.
The theoretical part of this paper is limited because most of it is based on
the assumption that the dual variable converges, which is not guaranteed. The
most important future work is to study under what condition the dual variable
will converge. From Corollary 2 and experiments in Section 4.1.1, we can see
that the parameter ρ plays an important role and an interesting question is to
ask how the value of ρ will affect the convergence of this algorithm and how to
choose a proper ρ.
There can be several different ways to reformulate the original problem and
apply ADMM. As we mention in Section 3.4, we can change the update order.
And also, we can let I(X) be the indicator function for the convex constraints
and J (Y ) be the indicator function for the rank constraint. It is interesting
to study how the different approaches will affect the solution theoretically and
empirically.
References
[1] A. Berman and N. Shaked-Monderer. Completely positive matrices. World
Scientific, 2003.
[2] P. Bofill and M. Zibulevsky. Underdetermined blind source separation using
sparse representations. Signal processing, 81(11):2353–2362, 2001.
[3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed
optimization and statistical learning via the alternating direction method
of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122,
2011.
[4] R. Chartrand. Nonconvex splitting for regularized low-rank+ sparse decomposition. Signal Processing, IEEE Transactions on, 60(11):5810–5819,
2012.
[5] M. T. Chu, R. E. Funderlic, and R. J. Plemmons. Structured low rank
approximation. Linear algebra and its applications, 366:157–172, 2003.
[6] D. L. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, 2006.
[7] C. Eckart and G. Young. The approximation of one matrix by another of
lower rank. Psychometrika, 1(3):211–218, 1936.
[8] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[9] M. Fazel, H. Hindi, and S. P. Boyd. Log-det heuristic for matrix rank
minimization with applications to hankel and euclidean distance matrices.
In American Control Conference, 2003. Proceedings of the 2003, volume 3,
pages 2156–2162. IEEE, 2003.
[10] M. Grant and S. Boyd. Cvx: Matlab software for disciplined convex programming.
[11] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex analysis and minimization algorithms I: fundamentals, volume 305. Springer Science & Business
Media, 2013.
[12] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of ADMM
for a family of nonconvex problems.
[13] Z. Hou. Adaptive singular value decomposition in wavelet domain for image
denoising. Pattern Recognition, 36(8):1747–1763, 2003.
[14] B. Jiang, S. Ma, and S. Zhang. Alternating direction method of multipliers for real and complex polynomial optimization models. Optimization,
63(6):883–898, 2014.
[15] J. Lavaei and S. H. Low. Zero duality gap in optimal power flow problem.
Power Systems, IEEE Transactions on, 27(1):92–107, 2012.
[16] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief
networks for scalable unsupervised learning of hierarchical representations.
In Proceedings of the 26th Annual International Conference on Machine
Learning, pages 609–616. ACM, 2009.
[17] S. H. Low. Convex relaxation of optimal power flow: a tutorial. In Bulk
Power System Dynamics and Control-IX Optimization, Security and Control of the Emerging Power Grid (IREP), 2013 IREP Symposium, pages
1–15. IEEE, 2013.
[18] Z.-Q. Luo, W.-K. Ma, A.-C. So, Y. Ye, and S. Zhang. Semidefinite relaxation of quadratic optimization problems. Signal Processing Magazine,
IEEE, 27(3):20–34, 2010.
[19] S. Magnússon, P. C. Weeraddana, M. G. Rabbat, and C. Fischione. On
the convergence of alternating direction lagrangian methods for nonconvex
structured optimization problems. arXiv preprint arXiv:1409.8033, 2014.
[20] I. Markovsky. Structured low-rank approximation and its applications.
Automatica, 44(4):891–909, 2008.
[21] D. Needell and J. A. Tropp. Cosamp: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic
Analysis, 26(3):301–321, 2009.
[22] A. Rajwade, A. Rangarajan, and A. Banerjee. Image denoising using the
higher order singular value decomposition. Pattern Analysis and Machine
Intelligence, IEEE Transactions on, 35(4):849–862, 2013.
[23] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions
of linear matrix equations via nuclear norm minimization. SIAM review,
52(3):471–501, 2010.
[24] B. Schölkopf and A. J. Smola. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT press, 2002.
[25] J.-L. Starck and J. Bobin. Astronomical data analysis and sparsity: from
wavelets to compressed sensing. Proceedings of the IEEE, 98(6):1021–1030,
2010.
[26] M. Unser. Wavelets, sparsity and biomedical image reconstruction.
[27] Y. Xu, W. Yin, Z. Wen, and Y. Zhang. An alternating direction algorithm
for matrix completion with nonnegative factors. Frontiers of Mathematics
in China, 7(2):365–384, 2012.
[28] S. You and Q. Peng. A non-convex alternating direction method of multipliers heuristic for optimal power flow. In Smart Grid Communications
(SmartGridComm), 2014 IEEE International Conference on, pages 788–
793. IEEE, 2014.
[29] Y. Zhang. An alternating direction algorithm for nonnegative matrix factorization. preprint, 2010.
[30] H. Zhu and G. Giannakis. Estimating the state of ac power systems using
semidefinite programming. In North American Power Symposium (NAPS),
2011, pages 1–7. IEEE, 2011.
Limits of CSP Problems and Efficient
Parameter Testing
arXiv:1406.3514v2 [] 4 Aug 2016
Marek Karpinski∗
Roland Markó†
Abstract
We present a unified framework on the limits of constraint satisfaction problems (CSPs) and efficient parameter testing which depends only on array exchangeability and the method of cut decomposition without recourse to the
weakly regular partitions. In particular, we formulate and prove a representation theorem for compact colored r-uniform directed hypergraph (r-graph) limits,
and apply this to rCSP limits. We investigate the sample complexity of testable
r-graph parameters, and we discuss the generalized ground state energies and demonstrate that they are efficiently testable.
1 Introduction
We study the limits and efficient parameter testing properties of Maximum Constraint Satisfaction Problems of arity r (MAX-rCSP or rCSP for short), c.f. e.g., [4]. These two topics,
limiting behavior and parameter estimation, are treated in the paper to a degree separately,
as they require a different set of ideas and could be analyzed on their own right. The establishment of the underlying connection between convergence and testability is one of the
main applications of the limit theory of dense discrete structures, see [10], [11].
In the first part of the paper we develop a general framework for the above CSP problems which depends only on the principles of the array exchangeability without a recourse
to the weakly regular partitions used hitherto in the general graph and hypergraph settings.
Those fundamental techniques and results were worked out in a series of papers by Borgs,
Chayes, Lovász, Sós, Vesztergombi and Szegedy [10],[11],[24], and [26] for graphs including
connections to statistical physics and complexity theory, and were subsequently extended to
Key words and phrases. Approximation, graphs, hypergraphs, graph limits, constraint satisfaction problems, hypergraph parameter testing, exchangeability.
∗
Dept. of Computer Science and the Hausdorff Center for Mathematics, University of Bonn. Supported
in part by DFG grants, the Hausdorff grant EXC59-1. Research partly supported by Microsoft Research
New England. E-mail: [email protected]
†
Hausdorff Center for Mathematics, University of Bonn. Supported in part by a Hausdorff scholarship.
E-mail: [email protected]
hypergraphs by Elek and Szegedy [15] via the ultralimit method. The central concept of
r-graph convergence is defined through convergence of sub-r-graph densities, or equivalently
through weak convergence of probability measures on the induced sub-r-graph yielded by
uniform node sampling. Our line of work particularly relies on ideas presented in [14] by
Diaconis and Janson, where the authors shed some light on the correspondence between combinatorial aspects (that is, graph limits via weak regularity) and the probabilistic viewpoint
of sampling: Graph limits provide an infinite random graph model that has the property
of exchangeability. The precise definitions, references and results will be given in Section 2,
here we only formulate our main contribution informally: We prove a representation theorem
for compact colored r-uniform directed hypergraph limits. This says that every limit object
in this setup can be transformed into a measurable function on the (2^r − 2)-dimensional unit
cube that takes values from the probability distributions on the compact color palette, see
Theorem 2.12 below. This extends the result of Diaconis and Janson [14], and of Lovász
and Szegedy [25]. As an application, the description of the limit space of rCSPs is presented
subsequent to the aforementioned theorem.
The second part of the paper, Sections 3 to 5, is dedicated to the introduction of a
notion of efficient parameter testability of r-graphs and rCSP problems. We use the limit
framework from the first part of the paper to formulate several results on it, which are proved
with the aid of the cut decomposition method. We set our focus especially on parameters
called ground state energies and study variants of them. These are in close relationship
with MAX-rCSP problems, our results can be regarded as the continuous generalization of
the former. We rely on the notion of parameter testing and sample complexity, that was
introduced by Goldreich, Goldwasser, and Ron [17] and was employed in the graph limit
theory in [11]. A graph parameter is testable in the sense of [11], when its value is estimable
through a uniform sampling process, where the sample size only depends on the desired error
gap, see Definition 3.3 below for the precise formulation. The characterization of the real
functions on the graph space was carried out in [11], the original motivation of the current
paper was to provide an analogous characterization for efficiently testable parameters. These
latter are parameters whose required sample size for the estimation is at most polynomial
in the multiplicative inverse of the error.
The investigation of such parameters has been an active area of research for the finite
setting in complexity theory. The method of exhaustive sampling in order to approximately
solve NP-hard problems was proposed by Arora, Karger and Karpinski [5], their upper bound
on the required sample size was still logarithmically increasing in the size of the problem. The
approach in [5] enabled the employment of linear programming techniques. Subsequently, the
testability of MAX-CUT was shown in [17], explicit upper bounds for the sample complexity
in the general boolean MAX-rCSP were given by Alon, F. de la Vega, Kannan and Karpinski
[4] using cut decomposition of r-arrays and sampling, that was inspired by the introduction
of weak regularity by Frieze and Kannan [16]. In [4] and [16], the design of polynomial time
approximation schemes (PTAS) in order to find not only an approximate value for MAX-rCSP, but also an assignment to the base variables that certifies this value, was an important subject; we did not pursue the generalization regarding this aspect in the current work. The
achievements of these two aforementioned contributions turned out to be highly influential,
and also played a key role in the first elementary treatment of graph limits and in the definition
of the δ_□-metric in [10] that defines a topology on the limit space equivalent to the one given by subgraph
density convergence.
The best currently known upper bound on the sample complexity of MAX-rCSP is
O(ε^{-4}), and has been shown by Mathieu and Schudy [27], see also Alon, F. de la Vega,
Kannan and Karpinski [4]. Unfortunately, the approach of [27] does not seem to have a natural counterpart in the continuous setting, although one can use their result on the sample
to achieve an improved upper bound on the sample complexity. We mention that for the
original problem we do not aim to produce an assignment for MAX-rCSP, or a partition for
the ground state energy whose evaluation is nearly optimal as opposed to the above works,
although we believe this could be done without serious difficulties.
Our contribution in the second part of the paper is the following. By employing a refined
version of the proof of the main result of [4] adapted to the continuous setting we are able to
prove the analogous efficient testability result for a general finite state space for ground state
energies, see Theorem 4.4 in Section 4 for a precise formulation. Among the applications
of this development we analyze the testability of the microcanonical version of ground state
energies providing the first explicit upper bounds on efficiency. For the finite version a similar
question was investigated by F. de la Vega, Kannan and Karpinski [13] by imposing additional
global constraints (meaning a finite number of them with unbounded arity). Furthermore,
the continuous version of the quadratic assignment problem is treated the first time in a
sample complexity context, this subject is related to the recent contributions to the topic of
approximate graph isomorphism and homomorphism, see [22] and [8].
1.1 Outline of the paper
The organization of this paper is as follows. In Section 2 we develop the limit theory for K-decorated r-uniform directed hypergraphs with reference to previously known special (and
in some way generic) cases, and use the representation of the limit to describe the limit
space of rCSP problems. In Section 3 the basic notion of efficiency in context of parameter
testing is given with some additional examples. The subsequent Section 4 contains the
proof of Theorem 4.4 regarding ground state energies of r-graphons, and in the following
Section 5 some variants are examined, in particular microcanonical energies and the quadratic
assignment problem. We summarize possible directions of further research in Section 6.
2 Limit theory and related notions
We will consider the objects called rCSP formulas that are used to define instances of
the decision and optimization problems called rCSP and MAX-rCSP, respectively. In the
current framework a formula consists of a variable set and a set of boolean or integer valued
functions. Each of these functions is defined on a subset of the variables, and the sets of
possible assignments of values to the variables are uniform. Additionally, it will be required
that each of the functions, which we will call constraints in what follows, depends exactly
on r of the variables.
For the treatment of an rCSP (of a MAX-rCSP) corresponding to a certain formula we
are required to simultaneously evaluate all the constraints of the formula by assigning values
to each of the variables in the variable set. If we deal with an rCSP optimization problem
on some combinatorial structure, say on graphs, then the formula corresponding to a certain
graph has to be constructed according to the optimization problem in question. The precise
definitions will be provided next.
Let r ≥ 1, K be a finite set, and f be a boolean-valued function f : K^r → {0, 1} on r variables (or equivalently f ⊆ K^r). We call f a constraint-type on K in r variables; C = C(K, r) denotes the set of all such objects.
Definition 2.1 (rCSP formula). Let V = {x1 , x2 , . . . , xn } be the set of variables, xe =
(xe1 , . . . , xer ) ∈ V r and f a constraint-type on K in r variables. We call an n-variable
function ω = (f ; xe ) : K V → {0, 1} with ω(l1 , . . . , ln ) = f (le1 , . . . , ler ) a constraint on V in r
variables determined by an r-vector of constrained variables and a constraint type.
We call a collection F of constraints on V (F ) = {x1 , x2 , . . . , xn } in r variables of type
C(K, r) for some finite K an rCSP formula.
Two constraints (f1 ; xe1 ) and (f2 ; xe2 ) are said to be equivalent if they constrain the same
r variables, and their evaluations coincide, that is, whenever there exists a π ∈ Sr such that
e1 = π(e2 ) (here π permutes the entries of e2 ) and f1 = π̂(f2 ), where [π̂(f )](l) = f (π(l)).
Two formulas F1 and F2 are equivalent if there is a bijection φ between their variable sets
such that there is a one-to-one correspondence between the constraints of F1 and F2 such
that the corresponding pairs (f1 ; xe1 ) ∈ F1 and (f2 ; xe2 ) ∈ F2 satisfy (f1 ; φ(xe1 )) ≡ (f2 ; xe2 ).
In the above definition the set of states of the variables in V (F ) denoted by K is not
specified for each formula, it will be considered as fixed similar to the dimension r whenever
we study a family of rCSPs. We say that F is symmetric, if it contains only constraints with
constraint-types which are invariant under the permutations of the constrained variables.
When we relax the notion of the types to be real or K-valued functions on K r with K being
a compact space, then we speak of weighted rCSP formulas.
The motivation for the name CSP formula is immediately clear from the notation used
in Theorem 2.1 if we consider constraints to be satisfied at some point in K n , whenever they
evaluate to 1 there. Most problems defined on these objects ask for parameters that are, in
the language of real analysis, global or conditioned extreme values of the objective function
given by an optimization problem and a formula. A common assumption is that equivalent
formulas should get the same parameter value.
Definition 2.2 (MAX-rCSP). Let F be an rCSP formula over a finite domain K. Then
the MAX-rCSP value of F is given by
$$\mathrm{MAX\text{-}rCSP}(F) = \max_{l \in K^{V(F)}} \sum_{\omega = (f; x_e) \in F} \omega(l), \tag{2.1}$$
and F is satisfiable if MAX-rCSP(F) = |{ω | ω = (f; x_e) ∈ F}|.
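For small instances, the value in (2.1) can be computed by exhaustive enumeration of the assignments; the following minimal sketch represents a constraint as a pair of a tuple of variable indices and a constraint-type function, which is an illustrative encoding rather than the notation of the paper.

```python
from itertools import product

def max_rcsp(n, K, constraints):
    """Brute-force MAX-rCSP: constraints is a list of (e, f) pairs, where e is an
    r-tuple of variable indices and f maps an r-tuple over K to {0, 1}."""
    best = 0
    for l in product(K, repeat=n):                 # all assignments l in K^{V(F)}
        value = sum(f(tuple(l[i] for i in e)) for e, f in constraints)
        best = max(best, value)
    return best

# Illustrative example: MAX-CUT on a triangle as a 2CSP over K = {0, 1}.
not_equal = lambda z: int(z[0] != z[1])
triangle = [((0, 1), not_equal), ((1, 2), not_equal), ((0, 2), not_equal)]
print(max_rcsp(3, [0, 1], triangle))               # prints 2
```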
Such problems are for example MAX-CUT, fragile MAX-rCSP, MAX-3-SAT, and Not-All-Equal-3-SAT, where only certain constraint types are allowed for instances, or MAX-BISECTION, where additionally only specific value assignments are permitted in the above
maximization. In general, formulas can also be viewed as directed r-graphs, whose edges
are colored with constraint types (perhaps with multiple types), and we will exploit this
representation in our analysis.
Typically, we will not store or refer to an rCSP formula F as it is given by its definition above, but we will only consider the r-array tuple $(F^z)_{z \in K^r}$, where
$$F^z(e) = \sum_{\phi \in S_r} \sum_{(f;\, x_{\phi(e)}) \in F} f(z_{\phi(1)}, \ldots, z_{\phi(r)}) \tag{2.2}$$
for each e ∈ [n]^r. The data set $(F^z)_{z \in K^r}$ is called the evaluation representation of F, or eval(F) for short; we regard eval(F) as a parallel colored (with colors from [q]^r) multi-r-graph, see below. We impose a boundedness criterion on CSPs that will apply throughout the paper: we fix d ≥ 1 for good, and require that $\|F^z\|_\infty \le d$ for every z ∈ K^r and CSP formula F with eval(F) = $(F^z)_{z \in K^r}$ in consideration. We note that for each z ∈ K^r, e ∈ [n]^r, and φ ∈ S_r we have the symmetry $F^z(e) = F^{z_{\phi(1)}, \ldots, z_{\phi(r)}}(e_{\phi(1)}, \ldots, e_{\phi(r)})$; also, F^z is 0 on the diagonal.
The main motivation for what follows in the current section originates from the aim to
understand the long-range behavior of a randomly evolving rCSP formula together with the
value of the corresponding MAX-rCSP by making sense of a limiting distribution. This task
is equivalent to presenting a structural description of rCSP limits analogous to the graph
limits of [24].
The convergence notion should agree with parameter estimation via sampling. In this
setting we pick a set of variables of fixed size at random from the constrained set V (F ) of
an rCSP formula F defined on a large number of variables, and ask for all the constraints
in which the sampled variables are involved and no other, this is referred to as the induced
subformula on the sample. Then we attempt to produce some quantitative statement about
the parameter value of the original formula by relying only on the estimation of the corresponding value of the parameter on a subformula, see Theorem 3.3.
Having formally introduced the notion of rCSP formulas and MAX-rCSP, we proceed to
the outline of the necessary notation and to the analysis of the limit behavior regarding the
colored hypergraph models that are used to encode these formulas.
2.1 Limits of K-colored r-uniform directed hypergraphs
Let K be a compact Polish space and r ≥ 1 an integer. Recall that a space is called Polish
if it is a separable, completely metrizable topological space. In what follows we will consider
the limit space of K-colored r-uniform directed hypergraphs, in other words of r-arrays whose
off-diagonal entries come from K; the diagonal entries are occupied by a special element which
may also lie in K, but in general does not have to.
The current subsection starts in the general setting given above; CSPs will be treated as a
special case, and their limit characterization will be derived at the end. Some of the basic cases
of the representation of the limits are already settled: we refer to Lovász and Szegedy [24], [25],
[23] for the r = 2, general K, undirected case, to Elek and Szegedy [15] for the general r,
K = {0, 1}, undirected case, and to Diaconis and Janson [14] for the r = 2, K = {0, 1}, directed
and undirected case. These three approaches are fundamentally different in their proof
methodology (they rely on weak regularity, ultralimits, and exchangeability principles, respectively)
and were further generalized or applied by Zhao [30] to general r, by Aroskar [7] to the directed
case, by Austin [9] to general r, and by Janson [20] to the directed case where the graph induces
a partial order on the vertex set.
Definition of convergence. Let C denote the space C(K) of continuous functions on K, and
let F ⊂ C be a countable generating set with ‖f‖_∞ ≤ 1 for each f ∈ F, that is, the linear
subspace generated by F is dense in C in the L∞-norm.

Denote by Π(S) = Π_r(S) the set of all unlabeled S-decorated directed r-uniform hypergraphs
for an arbitrary set S; we will suppress r in the notation when it is clear which r is meant
(alternatively, Π(S) denotes the set of isomorphism classes of the node-labeled respective objects).
The set Π_k(S) denotes the elements of Π(S) of vertex cardinality k. Let G(k, F) denote the
random induced subformula of F on a set S ⊂ V(F) chosen uniformly among the subsets of
V(F) of cardinality k. We define the homomorphism densities next.
Definition 2.3. Let K be an arbitrary set or space, and C(K) be the set of continuous
functions on K. If for some r ≥ 1 F ∈ Π_r(C(K)) is a uniform directed graph with
V(F) = [k] and G ∈ Π_r(K), then the homomorphism density of F in G is defined as

    t(F, G) = \frac{1}{|V(G)|^k} \sum_{\phi\colon [k] \to V(G)} \; \prod_{i_1,\dots,i_r=1}^{k} F(i_1, \dots, i_r)\bigl(G(\phi(i_1), \dots, \phi(i_r))\bigr).        (2.3)
The injective homomorphism density tinj (F, G) is defined similarly, with the difference
that the average of the products is taken over all injective φ maps (normalization changes
accordingly).
In the special case when K is finite we can associate to the elements of Π(K) functions in
Π(C(K)) by replacing the edge colors in K with the corresponding indicator functions.
Note that in this way, if F, G ∈ Π_r, then t_inj(F, G) = P(G(k, G) = F) with k = |V(F)|.
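As a small worked illustration of (2.3), the following sketch computes t(F, G) by brute force for r = 2, with F given by an array of weight functions and G by an array of colors. The encoding and names are our own assumptions for the example.

```python
from itertools import product
import numpy as np

def hom_density(F, G):
    """Homomorphism density t(F, G) as in (2.3) for r = 2 and a finite color set:
    F is a k x k array of functions (color -> weight), G is an n x n array of
    colors; we average the product weights over all maps phi: [k] -> [n]."""
    k, n = F.shape[0], G.shape[0]
    total = 0.0
    for phi in product(range(n), repeat=k):
        prod = 1.0
        for i1 in range(k):
            for i2 in range(k):
                prod *= F[i1, i2](G[phi[i1], phi[i2]])
        total += prod
    return total / n ** k

# Example: density of a single directed edge colored "1" in a 0/1-colored graph.
indicator_one = lambda c: 1.0 if c == 1 else 0.0
constant_one = lambda c: 1.0                      # "no constraint" on that pair
F = np.array([[constant_one, indicator_one],
              [constant_one, constant_one]])      # one directed 1-edge from node 0 to 1
G = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
print(hom_density(F, G))  # fraction of ordered pairs carrying color 1: 3/9
```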
Let the map τ be defined as τ(G) = (t(F, G))_{F∈Π(F)} ∈ [0, 1]^{Π(F)} for each G ∈ Π(K). We
set Π(K)^* = τ(Π(K)) ⊂ [0, 1]^{Π(F)}, and write \overline{Π(K)^*} for the closure of Π(K)^*. Also, let
Π(K)^+ = { (τ(G), 1/|V(G)|) | G ∈ Π(K) } ⊂ [0, 1]^{Π(F)} × [0, 1], and let \overline{Π(K)^+} be the closure of Π(K)^+.
The function τ^+(G) = (τ(G), 1/|V(G)|) will be useful for our purposes because, as opposed to τ,
it is injective, which can be verified easily. For any F ∈ Π(F) the function t(F, ·) on Π(K) can be
uniquely continuously extended to a function t(F, ·) on \overline{Π(K)^+}; this is due to the compactness
of [0, 1]^{Π(F)} × [0, 1]. For an element Γ ∈ \overline{Π(K)^+} \setminus Π(K)^+, let t(F, Γ) for F ∈ Π(F) denote the
real number in [0, 1] that is the coordinate of Γ corresponding to F.
The functions τ_inj(G) and τ^+_inj(G), and the sets Π_inj(K) = τ_inj(Π(K)) and Π_inj(K)^+, are
defined analogously. It was shown in [24] that

    |t_inj(F, G) − t(F, G)| ≤ \frac{|V(F)|^2 \, \|F\|_\infty}{2\,|V(G)|}        (2.4)
for any pair F ∈ Π(C) and G ∈ Π(K).
The precise definition of convergence will be given right after the next theorem, which is
analogous to a result of [25].

Theorem 2.4. Let (G_n)_{n=1}^∞ be a random sequence in Π(K) with |V(G_n)| tending to infinity
in probability. Then the following are equivalent.
(1) The sequence (τ^+(G_n))_{n=1}^∞ converges in distribution in \overline{Π(K)^+}.
(2) For every F ∈ Π(F), the sequence (t(F, G_n))_{n=1}^∞ converges in distribution.
(3) For every F ∈ Π(C), the sequence (t(F, G_n))_{n=1}^∞ converges in distribution.
(4) For every k ≥ 1, the sequence (G(k, G_n))_{n=1}^∞ of random elements of Π(K) converges in
distribution.
If any of the above hold, then the respective limits in (2) and (3) are t(F, Γ), with Γ being the
random element of \overline{Π(K)^+} given by (1), and Γ ∈ \overline{Π(K)^+} \setminus Π(K)^+ almost surely.
If t(F, G_n) in (2) and (3) is replaced by t_inj(F, G_n), then the equivalence of the four
statements persists and the limits in (2) and (3) are again t(F, Γ).
If every G_n is concentrated on a single element of Π(K) (the non-random case), then
the equivalence holds with the sequences in (1), (2), and (3) being numerical instead of
distributional, while (4) remains unchanged.
Proof. The equivalence of (1) and (2) is immediate. The implication from (3) to (2) is also
clear by definition.
For showing that (2) implies (3), we consider first an arbitrary F ∈ Π(⟨F⟩), where ⟨F⟩ is
the linear space generated by F. Then there exist F^1, …, F^l ∈ Π(F) on the same vertex set
as F, say [k], and λ_1, …, λ_l ∈ R such that for any non-random G ∈ Π(K) and φ : [k] → V(G)
it holds that

    \prod_{i_1,\dots,i_r=1}^{k} F(i_1, \dots, i_r)\bigl(G(\phi(i_1), \dots, \phi(i_r))\bigr)
    = \sum_{j=1}^{l} \lambda_j \prod_{i_1,\dots,i_r=1}^{k} F^j(i_1, \dots, i_r)\bigl(G(\phi(i_1), \dots, \phi(i_r))\bigr).

Therefore we can express t(F, G) = \sum_{j=1}^{l} \lambda_j t(F^j, G). We return to the case when G_n
is random. The weak convergence of t(F, G_n) is equivalent to the convergence of each of its
moments; its t-th moment can be written, by the linearity of expectation, as a linear combination
of finitely many mixed moments of the densities corresponding to
F^1, …, F^l ∈ Π(F). For an arbitrary vector of non-negative integers α = (α_1, …, α_l), let F^α
be the element of Π(F) that is the disjoint union of α_1 copies of F^1, α_2 copies of F^2, and so
on. It holds that t(F^1, G_n)^{α_1} ⋯ t(F^l, G_n)^{α_l} = t(F^α, G_n), and in particular the two random
variables on the two sides are equal in expectation. Condition (2) implies that E[t(F^α, G_n)]
converges for each α; therefore the mixed moments of the densities t(F^i, G_n), and hence the
moments of t(F, G_n), also converge. This implies that t(F, G_n) converges in distribution for any
F ∈ Π(⟨F⟩). Now let F′ ∈ Π(C) and ε > 0 be arbitrary, and let F ∈ Π(⟨F⟩), on the same vertex
set [k] as F′, be such that its entries are at most ε-far in L∞ from the corresponding entries
of F′. Then
    |t(F′, G) − t(F, G)|
    = \Bigl| \frac{1}{|V(G)|^k} \sum_{\phi\colon [k]\to V(G)} \prod_{i_1,\dots,i_r=1}^{k} F(i_1,\dots,i_r)\bigl(G(\phi(i_1),\dots,\phi(i_r))\bigr)
      − \frac{1}{|V(G)|^k} \sum_{\phi\colon [k]\to V(G)} \prod_{i_1,\dots,i_r=1}^{k} F′(i_1,\dots,i_r)\bigl(G(\phi(i_1),\dots,\phi(i_r))\bigr) \Bigr|
    ≤ \frac{1}{|V(G)|^k} \sum_{\phi\colon [k]\to V(G)} \sum_{i_1,\dots,i_r=1}^{k}
      \prod_{(j_1,\dots,j_r)<(i_1,\dots,i_r)} F(j_1,\dots,j_r)\bigl(G(\phi(j_1),\dots,\phi(j_r))\bigr)
      \prod_{(j_1,\dots,j_r)>(i_1,\dots,i_r)} F′(j_1,\dots,j_r)\bigl(G(\phi(j_1),\dots,\phi(j_r))\bigr)
      \bigl| F(i_1,\dots,i_r)\bigl(G(\phi(i_1),\dots,\phi(i_r))\bigr) − F′(i_1,\dots,i_r)\bigl(G(\phi(i_1),\dots,\phi(i_r))\bigr) \bigr|
    ≤ k^r ε \max\{ (\|F′\|_\infty + ε)^{k^r−1}, 1 \}

for any G ∈ Π(K) (random or non-random), which implies (3), as ε > 0 was chosen arbitrarily.
We turn to the equivalence of (3) and (4). Let Π_k(K) ⊂ Π(K) be the set of elements of
Π(K) with vertex cardinality k. The sequence (G(k, G_n))_{n=1}^∞ converges in distribution exactly
when for each continuous function f ∈ C(Π_k(K)) on Π_k(K) the expectation E[f(G(k, G_n))]
converges as n → ∞. For each F ∈ Π(C) and α ≥ 1, the function t^α_inj(F, G) is continuous on
Π_{|V(F)|}(K) and t_inj(F, G) = E[t_inj(F, G(|V(F)|, G))], so (3) follows from (4).

For the other direction, that (3) implies (4), let us fix k ≥ 1. We claim that
the linear function space M = ⟨t(F, ·) | F ∈ Π(C)⟩ ⊂ C(Π_k(K)) is an algebra containing
the constant function, and that it separates any two elements of Π_k(K). It then follows from
the Stone–Weierstrass theorem that ⟨t(F, ·) | F ∈ Π(C)⟩ is L∞-dense in C(Π_k(K)), which, by our
assumptions, implies that E[f(G(k, G_n))] converges for any f ∈ C(Π_k(K)), since we know
that E[t_inj(F, G(k, G_n))] = E[t_inj(F, G_n)] whenever |V(F)| ≤ k. We will see in a moment
that t_inj(F, ·) ∈ M; the convergence of E[t_inj(F, G_n)] follows from (2.4) and the requirement that
|V(G_n)| tends to infinity in probability.
Now we turn to the proof of the claim. For two graphs F_1, F_2 ∈ Π(C) we
have t(F_1, G) t(F_2, G) = t(F_1 F_2, G) for any G ∈ Π_k(K), where the product F_1 F_2 denotes
the disjoint union of the two C-colored graphs. Also, t(F, G) = 1 for the graph F on
one node with a loop colored with the constant 1 function. Furthermore,
hom(F, G) = k^{|V(F)|} t(F, G) ∈ M for |V(G)| = k, and therefore

    inj(F, G) = \sum_{P \text{ partition of } V(F)} (−1)^{|V(F)|−|P|} \prod_{S∈P} (|S|−1)! \; hom(F/P, G) ∈ M,

where inj(F, G) = t_inj(F, G)\, k(k−1)\cdots(k−|V(F)|+1) and F/P ∈ Π_{|P|}(C) is the graph whose edges
are colored by the product of the colors of F on the edges between the respective classes of
P. This identity is a consequence of the Möbius inversion formula and of the fact that
hom(F, G) = \sum_{P \text{ partition of } V(F)} inj(F/P, G). For G and F defined on the node set [k] recall that

    inj(F, G) = \sum_{\phi∈S_k} \prod_{i_1,\dots,i_r=1}^{k} F(i_1, \dots, i_r)\bigl(G(\phi(i_1), \dots, \phi(i_r))\bigr).        (2.5)
Now fix G_1, G_2 ∈ Π_k(K) and let F ∈ Π_k(C) be such that the values {F(i_1, …, i_r)(G_j(l_1, …, l_r))} are
algebraically independent elements of R (such an F exists: we require only a finite number of
algebraically independent reals, and can construct each entry of F by polynomial interpolation).
If G_1 and G_2 are not isomorphic, then for any possible node-relabeling of G_2 there
is at least one term in the difference inj(F, G_1) − inj(F, G_2), written out in the form of (2.5),
that does not get canceled; therefore inj(F, G_1) ≠ inj(F, G_2).

We examine the remaining statements of the theorem. Clearly, Γ ∉ Π(K)^+, because
|V(G_n)| → ∞ in probability. The results for the case where the map in (1) and the densities
in (2) and (3) are replaced by their injective versions follow from (2.4); the proof of the
non-random case carries through in a completely identical fashion.
We are now ready to formulate the definition of convergence in Π(K).
Definition 2.5. If (G_n)_{n=1}^∞ is a sequence in Π(K) with |V(G_n)| → ∞ and any of the conditions
of Theorem 2.4 above holds, then we say that (G_n)_{n=1}^∞ converges.

We would like to add that, in the light of Theorem 2.4, the convergence notion is independent
of the choice of the family F.
The next lemma gives information about the limit behavior of the sequences where the
vertex set cardinality is constant.
Lemma 2.6. Let (G_n)_{n=1}^∞ be a random sequence in Π_k(K) such that for
every F ∈ Π(F) the sequence (t_inj(F, G_n))_{n=1}^∞ converges in distribution. Then there exists
a random H ∈ Π_k(K) such that for every F ∈ Π(F) we have t(F, G_n) → t(F, H) and
t_inj(F, G_n) → t_inj(F, H) in distribution.
Proof. We only sketch the proof. The distributional convergence of (G_n)_{n=1}^∞ follows in the
same way as in the proof of Theorem 2.4, from the part showing that condition (2) implies (3)
together with the part showing that (3) implies (4). The existence of a random H satisfying the
statement of the lemma is obtained by invoking the Riesz representation theorem for positive
functionals.
Exchangeable arrays. Next, analogously to the approach of Diaconis and Janson in [14], we
establish a correspondence between the elements of the compact limit space \overline{Π(K)^+} and the
extreme points of the space of random exchangeable infinite r-arrays with entries in K. These
are arrays whose distribution is invariant under finite permutations of the underlying index set.
Definition 2.7 (Exchangeable r-array). Let (H(e_1, …, e_r))_{1≤e_1,…,e_r<∞} be an infinite r-array
of random entries from a Polish space K. We call the random array separately exchangeable
if

    (H(e_1, …, e_r))_{1≤e_1,…,e_r<∞}

has the same probability distribution as

    (H(ρ_1(e_1), …, ρ_r(e_r)))_{1≤e_1,…,e_r<∞}

for any collection ρ_1, …, ρ_r ∈ S_N of finite permutations, and jointly exchangeable (or simply
exchangeable) if the former holds whenever ρ_1 = ⋯ = ρ_r ∈ S_N.
For a finite set S, let h_0(S) and h(S) denote the power set and the set of nonempty subsets
of S, respectively, and h(S, m) the set of nonempty subsets of S of cardinality at most m; also
h_0(S, m) = h(S, m) ∪ {∅}. A (2^r − 1)-dimensional real vector x_{h(S)} denotes (x_{T_1}, …, x_{T_{2^r−1}}),
where T_1, …, T_{2^r−1} is a fixed ordering of the nonempty subsets of S with T_{2^r−1} = S; for a
permutation π of the elements of S the vector x_{π(h(S))} means (x_{π^*(T_1)}, …, x_{π^*(T_{2^r−1})}), where
π^* is the action of π permuting the subsets of S. Similar conventions apply when x is indexed
by other set families.
It is clear that if we consider a measurable function f : [0, 1]^{h_0([r])} → K and independent
random variables, uniformly distributed on [0, 1], associated with each of the subsets
of N of cardinality at most r, then by plugging these random variables into f for every
e ∈ N^r in the way suggested by a fixed natural bijection l_e : e → [r], the result will be
an exchangeable random r-array. The shorthand Samp(f) denotes this law of the infinite
directed r-hypergraph model generated by f.

The next theorem states that all exchangeable arrays with values in K arise from some
f in this way.
Theorem 2.8. [21] Let K be a Polish space. Every K-valued exchangeable r-array (H(e))_{e∈N^r}
has law equal to Samp(f) for some measurable f : [0, 1]^{h_0([r])} → K; that is, there exists a
function f so that if (U_s)_{s∈h_0(N,r)} are independent uniform [0, 1] random variables, then

    H(e) = f(U_∅, U_{\{e_1\}}, U_{\{e_2\}}, …, U_{e\setminus\{e_r\}}, U_e)        (2.6)

for every e = (e_1, …, e_r) ∈ N^r, where the H(e) are the entries of the infinite r-array.
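To make the sampling recipe (2.6) of Theorem 2.8 concrete, here is a minimal Python sketch of Samp(f) for r = 2: it draws the auxiliary uniforms indexed by the empty set, the singletons, and the pairs, and fills a finite window of the array. The particular f and all names are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def sample_exchangeable_window(f, n, seed=0):
    """Finite n x n window of an exchangeable 2-array with law Samp(f),
    following (2.6): H(e1, e2) = f(U_empty, U_{e1}, U_{e2}, U_{e1 e2})."""
    rng = np.random.default_rng(seed)
    u_empty = rng.uniform()
    u_single = rng.uniform(size=n)
    u_pair = {(i, j): rng.uniform() for i in range(n) for j in range(i + 1, n)}
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                 # the arrays in the paper live off the diagonal
            H[i, j] = f(u_empty, u_single[i], u_single[j],
                        u_pair[(min(i, j), max(i, j))])
    return H

# Illustrative f: an edge is present with a probability that depends on U_empty.
f = lambda u0, ui, uj, uij: float(uij < 0.2 + 0.6 * u0)
print(sample_exchangeable_window(f, n=6))
```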
If H in the above theorem is invariant under permuting its coordinates, then the corresponding function f is invariant under the coordinate permutations that are induced by the
set permuting Sr -actions.
Theorem 2.8 was first proved by de Finetti [12] (in the case K = {0, 1}) and by Hewitt
and Savage [18] (in the case of general K) for r = 1, independently by Aldous [1] and Hoover
[19] for r = 2, and by Kallenberg [21] for arbitrary r ≥ 3. For equivalent formulations, proofs
and further connections to related areas see the recent survey of Austin [9].
In general, there are no symmetry assumptions on f; in the directed case H(e) might
differ from H(e′) even if e and e′ share a common base set. In this case these two entries
do not have the property of conditional independence over a σ-algebra given by some lower-dimensional
structures, meaning, for instance, independence over { U_α | α ⊊ e } for an
exchangeable r-array with law Samp(f) given by a function f as above.
With the aid of Theorem 2.8 we will provide a representation of the limit space
\overline{Π(K)^+} through the points of the space of random infinite exchangeable r-arrays. The
correspondence will be established through a sequence of theorems analogous to the ones stated
and proved in [14, Sections 2 to 5], combined with the compactification argument regarding
the limit space from [25]; see also [23, Chapter 17.1] for a more accurate picture. The proofs
in our case mostly carry over in a straightforward way; if not noted otherwise, we direct the
reader to [14] for the details.
Let L∞ = L∞ (K) denote the set of all node labeled countably infinite K-colored r-uniform
directed hypergraphs. Set the common vertex set of the elements of L∞ to N, and define the
set of [n]-labeled K-colored r-uniform directed hypergraphs as Ln = Ln (K). Every G ∈ Ln
can be viewed as an element of L∞ simply by adding isolated vertices to G carrying the
labels N \ [n] in the uncolored case, and the arbitrary but fixed color c ∈ K to edges incident
to these vertices in the colored case, therefore we think about Ln as a subset of L∞ (and also
of Lm for every m ≥ n). Conversely, if G is a (random) element of L∞ , then by restricting
G to the vertices labeled by [n], we get G|[n] ∈ Ln . If G is a labeled or unlabeled K-colored
r-uniform directed hypergraph (random or not) with vertex set of cardinality n, then let Ĝ
stand for the random element of L_n (and also of L_∞) which we obtain by first discarding
the labels of G (if there were any) and then applying a random labeling chosen uniformly
from all possible ones with label set [n].
A random element of L_∞ is exchangeable, analogously to Definition 2.7, if its distribution is
invariant under every permutation of the vertex set N that moves only finitely many vertices;
for example, infinite hypergraphs whose edge colors are independent and identically distributed
are exchangeable. An element of L_∞ can also be regarded as an infinite r-array whose
diagonal elements are colored with a special element ι that is not contained in K; the
corresponding r-arrays are therefore K ∪ {ι}-colored.
The next theorem relates the elements of Π(K)+ to exchangeable random elements of
L∞ .
Theorem 2.9. Let (G_n)_{n=1}^∞ be a random sequence in Π(K) with |V(G_n)| tending to infinity
in probability. Then the following are equivalent.
(1) τ^+(G_n) → Γ in distribution for a random Γ ∈ \overline{Π(K)^+} \setminus Π(K)^+.
(2) Ĝ_n → H in distribution in L_∞(K), where H is a random element of L_∞(K).
If either of these holds, then E t(F, Γ) = E t_inj(F, H|[k]) for every F ∈ Π_k(C), and moreover H
is exchangeable.
Proof. If G ∈ Π(K) is deterministic and F ∈ Π_k(F) with |V(G)| ≥ k, then E t_inj(F, Ĝ|[k]) =
t_inj(F, G), where the expectation E is taken with respect to the random (re-)labeling Ĝ of G.
For completeness we mention that for a labeled, finite G the quantity t(F, G) is understood
as t(F, G′) with G′ being the unlabeled version of G; also, F in t(F, G) is always regarded a
priori as labeled, but the densities of isomorphic labeled graphs in any graph coincide.
If we consider G to be random, then the fact that 0 ≤ t(F, G) ≤ 1 (as ‖F‖_∞ ≤ 1) gives
|E E t_inj(F, Ĝ|[k]) − E t_inj(F, G)| ≤ P(|V(G)| < k) for F ∈ Π_k(F).

Assume (1). Then the above, together with P(|V(G_n)| < k) → 0, implies that
E E t_inj(F, Ĝ_n|[k]) → E t(F, Γ) (see Theorem 2.4). This implies that Ĝ_n|[k] → H_k in distribution
for some random H_k ∈ L_k with E t_inj(F, H_k) = E t(F, Γ), see Lemma 2.6; furthermore, appealing
to the consistency of the graphs H_k in k, there exists a random H ∈ L_∞ such that
H|[k] = H_k for each k ≥ 1, so (1) yields (2).
Another consequence is that H is exchangeable: the exchangeability property is equivalent to the vertex permutation invariance of the distributions of H|[k] for each k. This
is ensured by the fact that H|[k] = Hk , and Hk is the weak limit of a vertex permutation
invariant random sequence, for each k.
For the converse direction we perform the above steps in reverse order, using

    |E E t(F, Ĝ_n|[k]) − E t(F, G_n)| ≤ P(|V(G_n)| < k)

again in order to establish the convergence of (E t(F, G_n))_{n=1}^∞. Theorem 2.4 now certifies the
existence of a suitable random Γ ∈ \overline{Π(K)^+} \setminus Π(K)^+; this shows that (2) implies (1).
We have built up the framework in the preceding statements, Theorem 2.4 and Theorem 2.9,
in order to formulate the following theorem, which is the crucial ingredient of the desired
representation of limits.
Theorem 2.10. There is a one-to-one correspondence between random elements of \overline{Π(K)^+} \setminus
Π(K)^+ and random exchangeable elements of L_∞. Furthermore, there is a one-to-one
correspondence between elements of \overline{Π(K)^+} \setminus Π(K)^+ and extreme points of the set of random
exchangeable elements of L_∞. The relation is established via the equalities E t(F, Γ) =
E t_inj(F, H|[k]) for every F ∈ Π_k(C) and every k ≥ 1.
Proof. Let Γ be a random element of \overline{Π(K)^+} \setminus Π(K)^+. Then by the definition of \overline{Π(K)^+} there
is a sequence (G_n)_{n=1}^∞ in Π(K) with |V(G_n)| → ∞ in probability such that τ^+(G_n) → Γ
in distribution in \overline{Π(K)^+}. By virtue of Theorem 2.9 there exists a random H ∈ L_∞ so
that Ĝ_n → H in distribution in L_∞, and H is exchangeable. The distribution of H|[k] is
determined by the numbers E t_inj(F, H|[k]), see Theorem 2.4, Lemma 2.6, and the arguments
therein, and these numbers are provided by the correspondence.
For the converse direction, let H be a random exchangeable element of L_∞, and let
G_n = H|[n]. We have G_n → H in distribution, and also Ĝ_n → H in distribution by the vertex
permutation invariance of G_n as a node-labeled object. Again we appeal to Theorem 2.9,
so τ^+(G_n) → Γ for a random element Γ of \overline{Π(K)^+} \setminus Π(K)^+, which is determined completely
by the numbers E t(F, Γ) that are provided by the correspondence, see Theorem 2.9.

The second version of the relation, between non-random Γ's and extreme points of the set of
exchangeable elements, is proven similarly; the connection is given via t(F, Γ) = E t_inj(F, H|[k])
between the equivalent objects.
The characterization of the aforementioned extreme points of Theorem 2.10 was given
in [14] in the uncolored graph case; we state it next for our general setting, but refrain from
giving the proof here, as it is completely identical to that of [14, Theorem 5.5].
Theorem 2.11. [14] The distribution of an exchangeable random element H of L_∞ is an
extreme point of the set of exchangeable measures exactly when the random objects H|[k] and
H|{k+1,…} are probabilistically independent for every k ≥ 1. In this case the representing
function f from Theorem 2.8 does not depend on the variable corresponding to the empty set.
Graphons as limit objects. Let the r-kernel space Ξ̂^r_0 denote the space of bounded
measurable functions of the form W : [0, 1]^{h([r],r−1)} → R, and let the subspace Ξ^r_0 of Ξ̂^r_0 consist of the
symmetric r-kernels that are invariant under the coordinate permutations π^* induced by
π ∈ S_r, that is, W(x_{h([r],r−1)}) = W(x_{π^*(h([r],r−1))}) for each π ∈ S_r. We will refer to this
invariance, both for r-kernels and for measurable subsets of [0, 1]^{h([r])}, as r-symmetry. The
kernels W ∈ Ξ^r_I take their values in some interval I; for I = [0, 1] we call these special
symmetric r-kernels r-graphons, and denote their set by Ξ^r. In what follows, λ as
a measure always denotes the usual Lebesgue measure on R^d, where the dimension d is
everywhere clear from the context.
If W ∈ Ξ^r and F ∈ Π_r, then the F-density of W is defined as

    t(F, W) = \int_{[0,1]^{h([k],r−1)}} \prod_{e∈E(F)} W(x_{h(e,r−1)}) \prod_{e∉E(F)} \bigl(1 − W(x_{h(e,r−1)})\bigr) \, dλ(x_{h([k],r−1)}).        (2.7)
Let K be a compact Polish space, and W : [0, 1]^{h([r])} → K be a measurable function; we
will refer to such an object as a (K, r)-digraphon, and denote their set by Ξ̃^r(K). Note that
there are no symmetry assumptions in this general case; if additionally W is r-symmetric,
then we speak about (K, r)-graphons, whose space is Ξ^r(K). For K = {0, 1} the set P(K) can
be identified with the interval [0, 1] encoding the success probabilities of Bernoulli trials, to
get the common r-graphon form as a function W : [0, 1]^{2^r−2} → [0, 1] employed in [15].
The density of a K-colored graph F ∈ Π̃_k(C(K)) in the (K, r)-digraphon W is defined
analogously to (2.3) and (2.7) as

    t(F, W) = \int_{[0,1]^{h([k],r)}} \prod_{e∈[k]^r} F(e)\bigl(W(x_{h(e,r)})\bigr) \, dλ(x_{h([k],r)}).        (2.8)
For k ≥ 1 and an undirected W ∈ Ξ^r(K) the random (K, r)-graph G(k, W) is defined
on the vertex set [k] by selecting a uniform random point (X_S)_{S∈h([k],r)} ∈ [0, 1]^{h([k],r)}, which
enables the assignment of the color W(X_{h(e)}) to each edge e ∈ \binom{[k]}{r}. For a directed W the
sample point is as above, and the color of the directed edge e ∈ [k]^r is W(X_{h(e)}); in this case
the ordering of the power set of the base set of e matters, in contrast to the undirected
situation, and is given by e, as W is not necessarily r-symmetric.

Additionally, for K ⊂ R, we define the averaged sampled r-graph, denoted by H(k, W):
it has vertex set [k], and the weight of the edge e ∈ \binom{[k]}{r} is the conditional expectation
E[W(X_{h(e)}) | X_{h(e,1)}], so the random r-graph is measurable with respect to X_{h([k],1)}.
We will use the compact notation X_i for X_{\{i\}} for the elements of the sample indexed by
singleton sets.
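For r = 2 and K = {0, 1} (in its probability-valued form, where W(x, y) ∈ [0, 1] is the success probability), the sampling of G(k, W) can be sketched as follows; the concrete W and names are illustrative assumptions of ours.

```python
import numpy as np

def sample_from_graphon(W, k, seed=0):
    """G(k, W) for r = 2, K = {0, 1}: draw X_1, ..., X_k and one extra uniform
    per pair (playing the role of the pair coordinate of the sample point),
    then color edge {i, j} by a Bernoulli(W(X_i, X_j)) coin."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=k)                     # the singleton coordinates X_{i}
    G = np.zeros((k, k), dtype=int)
    for i in range(k):
        for j in range(i + 1, k):
            u_pair = rng.uniform()              # the pair coordinate X_{i,j}
            G[i, j] = G[j, i] = int(u_pair < W(x[i], x[j]))
    return G

# Illustrative graphon W(x, y) = x * y; the edge density of samples should be
# close to the integral of W over the unit square, which is 1/4.
W = lambda x, y: x * y
G = sample_from_graphon(W, k=200)
print(G.sum() / (200 * 199))  # roughly 0.25
```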
We define the random exchangeable r-array H_W in L_∞ as the element that has law
Samp(W) for the (K, r)-digraphon W, as in Theorem 2.8. Furthermore, we define Γ_W ∈
\overline{Π(K)^+} \setminus Π(K)^+ to be the element associated to H_W through Theorem 2.9.

Now we are able to formulate the representation theorem for K-colored r-uniform directed
hypergraph limits using the representation of exchangeable arrays (Theorem 2.8). It is
an immediate consequence of Theorem 2.9 and Theorem 2.10 above.
Theorem 2.12. Let (G_n)_{n=1}^∞ be a sequence in Π(K) with |V(G_n)| → ∞ such that for every
F ∈ Π(F) the sequence t(F, G_n) converges. Then there exists a function W : [0, 1]^{h([r])} → K
(that is, W ∈ Ξ^r(K)) such that t(F, G_n) → t(F, Γ_W) for every F ∈ Π(F). In the directed
case, when the sequence is in Π̃(K), the corresponding limit object W is in Ξ̃^r(K).
We mention that t(F, Γ_W) = t(F, W) for every F ∈ Π(C(K)) and W ∈ Ξ^r(K). Alternatively,
we can also use the form W : [0, 1]^{h([r],r−1)} → P(K) for (K, r)-graphons and digraphons
in Ξ^r(K), whose values are probability measures; this representation was applied in [25].

In previous works, for example in [14], the limit object of a sequence of simple directed
graphs without loops was represented by a 4-tuple of 2-graphons (W^{(0,0)}, W^{(1,0)}, W^{(0,1)}, W^{(1,1)})
that satisfies \sum_{i,j} W^{(i,j)}(x, y) = 1 and W^{(1,0)}(x, y) = W^{(0,1)}(y, x) for each (x, y) ∈ [0, 1]^2. A
generalization of this representation can be given in our case of the Π(K) limits in the following
way. We present here only the case when K is a continuous space; the easier finite case can
be dealt with analogously.
We have to fix a Borel probability measure µ on K; we set this to be the uniform distribution
if K ⊂ R^d is a domain or K is finite. The limit space consists of collections of
(R, r)-kernels W = (W^u)_{u∈U}, where U is the set of all functions u : S_r → K. Additionally,
W has to satisfy \int_U W^u(x) \, dµ^{⊗S_r}(u) = 1 and 0 ≤ W^{π^*(u)}(x) = W^u(x_{π^*(h([r],r−1))}) for each
π ∈ S_r and x ∈ [0, 1]^{h([r],r−1)}. As before, the action π^* of π on [0, 1]^{h([r],r−1)} is the induced
coordinate permutation by π, with the coordinates of the unit cube indexed by non-trivial subsets
of [r]. Without going into further details we state the connection between the limit form
spelled out above and that in Theorem 2.12. It holds that

    \int_U W^u(x_{h([r],r−1)}) \, dµ^{⊗S_r}(u) = P\bigl[ (W(x_{π^*(h([r],r−1))}, Y))_{π∈S_r} ∈ U \bigr]

for every measurable U ⊂ U and x ∈ [0, 1]^{h([r],r−1)}, where Y is uniform on [0, 1], the W
on the right-hand side is a (K, r)-digraphon, and on the left we have the corresponding
representation as a (possibly infinite) collection of (R, r)-kernels.
In several applications it is more convenient to use a naive form of the limit representation,
from which the limit element in question is not uniquely recoverable. The naive limit
space consists of naive (K, r)-graphons W : [0, 1]^r → P(K), where now the arguments of W
are indexed by the elements of [r]. From a proper r-graphon W : [0, 1]^{h([r])} → K we get its naive
counterpart by averaging: the K-valued random variable E[W(x_{h([r],1)}, U_{h([r],r−1)\setminus h([r],1)}, Y) | Y]
has distribution W(x_1, …, x_r), where (U_S)_{S∈h([r],r−1)\setminus h([r],1)} and Y are i.i.d. uniform on [0, 1].

On a further note, we introduce averaged naive (K, r)-graphons for the case K ⊂ R;
these are of the form W̃ : [0, 1]^r → R and are given by complete averaging, that is,
E[W(x_1, …, x_r, U_{h([r])\setminus h([r],1)}, Y)] = W̃(x_1, …, x_r), where (U_S)_{S∈h([r])\setminus h([r],1)} are i.i.d. uniform
on [0, 1]. A naive r-kernel is a real-valued bounded function on [0, 1]^r, or equivalently on
[0, 1]^{h([r],1)}.
We can associate to each G ∈ Π^r_n(K) an element W_G ∈ Ξ^r(K ∪ {ι}) by subdividing
the unit cube [0, 1]^{h([r],1)} into n^r small cubes in the natural way and defining the function
W′ : [0, 1]^{h([r],1)} → K that takes the value G(i_1, …, i_r) on [\frac{i_1−1}{n}, \frac{i_1}{n}] × ⋯ × [\frac{i_r−1}{n}, \frac{i_r}{n}] for distinct
i_1, …, i_r, and the value ι on the remaining diagonal cubes; note that these functions are naive
(K, r)-graphons. Then we set W_G(x_{h([r],r−1)}) = W′(p_{h([r],1)}(x_{h([r],r−1)})), where p_{h([r],1)} is the
projection to the suitable coordinates. The special color ι, standing for the absence of a
color, has to be employed in this setting because rectangles on the diagonal correspond to loop
edges. The corresponding r-graphon W^ι is {0, 1}-valued. The sampled random r-graphs
G(k, W_G) and H(k, W_G) from the naive r-graphons are defined analogously to the general
case. If K ⊂ R, then note that H(k, W_G) = G(k, W_G) for every G, because the colors of G
are all point measures.
Note that t(F, G) = t(F, W_G), and

    |t_inj(F, G) − t(F, W_G)| ≤ \frac{\binom{k}{2}}{\,n − \binom{k}{2}\,}        (2.9)
for each F ∈ Π^r_k; hence the representation by naive graphons is compatible in the sense that
lim_{n→∞} t_inj(F, G_n) = lim_{n→∞} t(F, W_{G_n}) for any sequence (G_n)_{n=1}^∞ with |V(G_n)| tending to
infinity. This implies that d_tv(G(k, G_n), G(k, W_{G_n})) → 0 as n tends to infinity.
We remark that naive and averaged naive versions in the directed case are defined analogously.
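The step-function construction of W_G and the compatibility t(F, G) = t(F, W_G) can be checked on a small example; the sketch below (r = 2, K = {0, 1}, with our own helper names, and the diagonal cells simply left at 0 in place of the special color ι) is illustrative only.

```python
import numpy as np

def step_graphon(A):
    """Naive step-function graphon W_G of a finite {0,1}-colored 2-graph with
    adjacency array A: constant A[i, j] on the grid cell indexed by (i, j)."""
    n = A.shape[0]
    def W(x, y):
        i, j = min(int(x * n), n - 1), min(int(y * n), n - 1)
        return float(A[i, j]) if i != j else 0.0   # diagonal cells: no color
    return W

def edge_density_graphon(W, grid=400):
    """Numerical integral of W over the unit square on a midpoint grid."""
    xs = (np.arange(grid) + 0.5) / grid
    return np.mean([[W(x, y) for y in xs] for x in xs])

# Example: a 4-cycle; the edge density of G equals the integral of W_G,
# in line with t(F, G) = t(F, W_G) for F a single edge.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(A.mean(), edge_density_graphon(step_graphon(A)))  # both 0.5
```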
2.2 Representation of rCSP formulas as hypergraphs, and their convergence
In this subsection we elaborate on how homomorphisms and sampling are meant in the CSP
context, and formulate a representation of the limit space in that context. Recall Theorem 2.1
for the way we perceive rCSP formulas.
Let F be an rCSP formula on the variable set {x1 , . . . , xn } over an arbitrary domain K,
and let F [xi1 , . . . , xik ] be the induced subformula of F on the variable set {xi1 , . . . , xik }. Let
G(k, F ) denote the random induced subformula on k uniformly chosen variables from the
elements of V (F ).
It is clear using the terminology of Theorem 2.7 that the relation ω = (f ; xe ) ∈ F [xi1 , . . . , xik ]
is equivalent to the relation
φ(ω) = (fφ , xφ(e) ) ∈ F [xi1 , . . . , xik ]
(2.10)
for permutations φ ∈ Sr , where fφ (l1 , . . . , lr ) = f (lφ(1) , . . . , lφ(r) ) and φ(e) = (eφ(1) , . . . , eφ(r) ).
This emergent symmetry will inherently be reflected in the limit space, as we will demonstrate shortly.
Special limits Let K = [q]. As we mentioned above, in general it is likely not to be fruitful to consider formula sequences as sequences of C(K, r)-colored r-graphs obeying certain
symmetries due to constraint splitting. However, in the special case when each r-set of variables carries exactly one constraint we can derive a meaningful representation, MAX-CUT
is an example. A direct consequence of Theorem 2.12 is the following.
Corollary 2.13. Let r ≥ 1 and K be a finite set, and let K ⊂ C(K, r) be permutation invariant.
Let (F_n)_{n=1}^∞ be a sequence of rCSP formulas with |V(F_n)| tending to infinity such that each
r-set of variables in each of the formulas carries exactly one constraint of type K. If for every
formula H obeying the same conditions the sequences (t(H, F_n))_{n=1}^∞ converge as (K, r)-graphs,
then there exists a (K, r)-digraphon W : [0, 1]^{h([r])} → K such that t(H, F_n) → t(H, W) as n
tends to infinity for every H as above.

Additionally, W satisfies, for each x ∈ [0, 1]^{h([r])} and π ∈ S_r, that W(x_{π(h([r]))}) = π̂(W(x_{h([r])})),
where π̂ is the action of π on constraint types in C(K, r) that permutes the rows and columns
of the evaluation table according to π, that is, (π̂(f))(l_1, …, l_r) = f(l_{π(1)}, …, l_{π(r)}).
General limits via evaluation. In the general case of rCSP formulas we regard them through
their evaluation representation eval.

For |K| = q we identify the set of rCSP formulas with the set of arrays whose entries are
the sums of the evaluation tables of the constraints on r-tuples, that is, F with V(F) = [n]
corresponds to a map eval(F) : [n] × ⋯ × [n] → {0, 1, …, d}^{([q]^r)} that obeys the symmetry
condition given after (2.2). This is the way we will look at these objects from here on.
Storing the whole structure of an rCSP formula does not seem to provide any further insight;
in fact, splitting up constraints would produce non-identical formulas in a complete structure
representation, which does not seem sensible.
We denote the set {0, 1, …, d}^{([q]^r)} by L for simplicity; one could also interpret it as
the set of multisets whose base set is [q]^r and whose elements have multiplicity at most d.
This perspective allows us to treat rCSPs as directed r-uniform hypergraphs whose edges
are colored by the aforementioned elements of L, and leads to a representation of rCSP
limits that is derived from the general representation of the limit set of Π(L). We will
show in a moment that the definition of convergence given in the previous subsection by
densities of functional-colored graphs is essentially identical to convergence via densities
of sub-multi-hypergraphs in the current case.
The definition of convergence for a general sequence of rCSP formulas, or equivalently of
elements of Π(L), was given in Definition 2.5. We describe here the special case for parallel
multicolored graphs, see also [25].

Consider the evaluation representation of the rCSP formulas now as r-graphs whose
oriented edges are parallel multicolored by [q]^r. The map ψ : eval(H) → eval(F) is a
homomorphism between two rCSP formulas H and F if it maps edges to edges of the same color
from the color set [q]^r and is consistent when restricted to a mapping between the vertex
sets, ψ′ : V(H) → V(F); for simple graphs instead of CSP formulas this is the multigraph
homomorphism notion.
Let H be an rCSP formula, and let H̃ be the corresponding element in C(L) on the same
vertex set such that, if the color on the fixed edge e of H is the r-array (H^z(e))_{z∈[q]^r}
with non-negative integer entries, then the color of H̃ at e is the monomial \prod_{z∈[q]^r} x_z^{H^z(e)}.
More precisely, for an element A ∈ L the value is given by

    [H̃(e)](A) = \prod_{z∈[q]^r} A(z)^{H^z(e)}.
The linear space generated by the set

    L̃ = \Bigl\{ \prod_{z∈[q]^r} x_z^{d_z} \;\Big|\; 0 ≤ d_{z_1,…,z_r} ≤ d \Bigr\}

forms an L∞-dense subset of C(L); therefore Theorem 2.4 applies, and for a sequence (G_n)_{n=1}^∞
requiring the convergence of t(F, eval(G_n)) for all F = H̃ with H ∈ Π(L) provides one of
the equivalent formulations of the convergence of rCSP formulas in the subformula density
sense with respect to the evaluations.
The limit object will be given by Theorem 2.12 as the space of measurable functions
W : [0, 1]h([r]) → L, where, as in the general case, the coordinates of the domain of W are
indexed by the non-empty subsets of [r]. In our case not every possible W having this form
will serve as a limit of some sequence; the above-mentioned symmetry (2.10) of the finite
objects is inherited in the limit.
We state now the general evaluation rCSP version of Theorem 2.12.
Corollary 2.14. Let (F_n)_{n=1}^∞ be a sequence of rCSP formulas that evaluate to at most
d on all r-tuples, with |V(F_n)| → ∞, such that for every finite rCSP formula H obeying
the same upper bound condition the sequence (t(H̃, eval(F_n)))_{n=1}^∞ converges. Then there
exists an (L, r)-graphon W : [0, 1]^{h([r])} → L such that t(H̃, eval(F_n)) → t(H̃, W) for every such H.
Additionally, W satisfies, for each x ∈ [0, 1]^{h([r])} and π ∈ S_r, that W(x_{π(h([r]))}) = π̂(W(x_{h([r])})),
where π̂ is as in Corollary 2.13 when elements of L are considered as maps from [q]^r to the
non-negative integers.
Exchangeable partition-indexed processes. We conclude the subsection with a remark
that is motivated by the array representation of rCSPs. The form presented next seems to
be the least redundant in some respects, since no additional symmetry conditions have to be
fulfilled by the limit objects.

The most natural exchangeable infinite random object fitting the one-to-one correspondence
of Theorem 2.9 with rCSP limits is the following process, which preserves every piece
of information contained in the evaluation representation.
Definition 2.15. Let N_q^r = { P = (P_1, …, P_q) | the sets P_i ⊂ N are pairwise disjoint and
\sum_{i=1}^q |P_i| = r } be the set of directed q-partitions of r-subsets of N. We call a random
process (X_P)_{P∈N_q^r} that takes values in some compact Polish space K a partition-indexed
process. The process (X_P)_{P∈N_q^r} has the exchangeability property if its distribution is invariant
under the action induced by finite permutations of N, i.e., (X_P)_{P∈N_q^r} =_d (X_{ρ^*(P)})_{P∈N_q^r} for any
ρ ∈ Sym_0(N).
Unfortunately, no representation theorem for partition-indexed exchangeable processes
analogous to Theorem 2.8 that offers additional insight over the directed colored r-array version
has been established, and there is little hope in this direction. The reason is again that, when
P and P′ have the same underlying base set of cardinality r but differ as partitions, there is no
standard way of separating the generating process of the entries X_P and X_{P′} non-trivially into
two random stages, with the first stage identical for the two variables and the second stage
conditionally independent given the outcome of the first.
3 Graph and graphon parameter testability
First we recall the method of sampling from K-colored r-graphs and r-graphons, and we
inspect the metrics that will occur later.

Let (U_S)_{S∈h([k],r)} be an independent uniform sample from [0, 1]. Then for an r-graph
G, respectively an r-graphon W, the random r-graphs G(k, G) and G(k, W) have vertex
set [k] and edge weights W_G((U_{p_e(S)})_{S∈h(ê,r)}), respectively W((U_{p_e(S)})_{S∈h(ê,r)}). Keep in mind
that G(k, G) ≠ G(k, W_G): the first corresponds to sampling without replacement, the second to
sampling with replacement, but it is true that P(G(k, G) ≠ G(k, W_G)) ≤ \binom{k}{2}/|V(G)|.
Norms and distances. We also recall the definitions of the norms and distances that
will play an important role in what follows. In the next definition each object is real-valued.
Definition 3.1. The cut norm of an n × ⋯ × n r-array A is

    ‖A‖_□ = \frac{1}{n^r} \max_{S_1,…,S_r ⊂ [n]} \bigl| A(S_1, …, S_r) \bigr|,

and the 1-norm is

    ‖A‖_1 = \frac{1}{n^r} \sum_{i_1,…,i_r=1}^{n} |A(i_1, …, i_r)|.

The cut distance of two labeled r-graphs or r-arrays F and G on the same vertex set [n] is

    d_□(F, G) = ‖F − G‖_□,

where F(S_1, …, S_r) = \sum_{i_j ∈ S_j} F(i_1, …, i_r). The edit distance of the same pair is

    d_1(F, G) = ‖F − G‖_1.
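The cut norm of Definition 3.1 can be evaluated by brute force on very small 2-arrays; the sketch below is our own illustration (exponential in n, and not the way the cut norm is computed in practice).

```python
from itertools import product
import numpy as np

def cut_norm(A):
    """Brute-force cut norm of an n x n array per Definition 3.1:
    (1/n^2) * max over subsets S, T of |sum_{i in S, j in T} A[i, j]|."""
    n = A.shape[0]
    best = 0.0
    for s_mask in product([0, 1], repeat=n):
        for t_mask in product([0, 1], repeat=n):
            s = np.array(s_mask, dtype=float)
            t = np.array(t_mask, dtype=float)
            best = max(best, abs(s @ A @ t))
    return best / n ** 2

# Example: cut and edit distance of two random labeled graphs on 5 vertices.
rng = np.random.default_rng(1)
F = rng.integers(0, 2, size=(5, 5))
G = rng.integers(0, 2, size=(5, 5))
print("d_cut =", cut_norm(F - G), " d_1 =", np.abs(F - G).mean())
```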
The continuous counterparts are described as follows.
Definition 3.2. The cut norm of a naive r-graphon W is

    ‖W‖_□ = \max_{S_1,…,S_r ⊂ [0,1]} \Bigl| \int_{S_1×⋯×S_r} W(x) \, dx \Bigr|,

and the cut distance of two naive r-graphons W and U is

    δ_□(W, U) = \inf_{φ,ψ} ‖W^φ − U^ψ‖_□,

where the infimum runs over all measure-preserving permutations of [0, 1], and the graphon
W^φ is defined as W^φ(x_1, …, x_r) = W(φ(x_1), …, φ(x_r)). The cut distance for arbitrary
unlabeled r-graphs or r-arrays F and G is

    δ_□(F, G) = δ_□(W_F, W_G).
We remark that the above definition of the cut norm and distance is not satisfactory in
one important respect for r ≥ 3: not all sub-r-graph densities are continuous functions in the
topology induced by this norm, even in the simplest case K = {0, 1}. Examples of
subgraphs whose densities behave well with respect to the above norms are linear hypergraphs,
which have the property that any two distinct edges intersect in at most one node.
Originally, in [10], testability of (K, r)-graph parameters (which are real functions invariant under r-graph-isomorphisms) was defined as follows.
Definition 3.3. A (K, r)-graph parameter f is testable, if for every ε > 0 there exists a
k(ε) ∈ N such that for every k ≥ k(ε) and simple (K, r)-graph G on at least k vertices
P(|f (G) − f (G(k, G))| > ε) < ε.
A (K, r)-graphon parameter f is a functional on the space of r-graphons that is invariant
under the action induced by measure preserving maps from [0, 1] to [0, 1], that is, f (W ) =
f (W φ ). Their testability is defined analogously to Definition 3.3.
Testing parameters. A characterization of the testability of a graph parameter in terms
of graph limits was developed in [10] for K = {0, 1} in the undirected case; we will focus
in the next paragraphs on this simplest setting and give an overview of previous work.
Recall Definition 3.3.
Theorem 3.4. [10] Let f be a simple graph parameter. Then the following statements are
equivalent.
(i) The parameter f is testable.
(ii) For every ε > 0 there exists a k(ε) ∈ N such that for every k ≥ k(ε) and every simple graph
G on at least k vertices, |f(G) − E f(G(k, G))| < ε.
(iii) For every convergent sequence (G_n)_{n=1}^∞ of simple graphs with |V(G_n)| → ∞, the numerical
sequence (f(G_n))_{n=1}^∞ also converges.
(iv) For every ε > 0 there exist an ε′ > 0 and an n_0 ∈ N such that for every pair G_1, G_2
of simple graphs, |V(G_1)|, |V(G_2)| ≥ n_0 and δ_□(G_1, G_2) < ε′ together imply
|f(G_1) − f(G_2)| < ε.
(v) There exists a δ_□-continuous functional f′ on the space of graphons such that f(G_n) →
f′(W) whenever G_n → W.
A closely related notion to parameter testing is property testing. A simple graph property
P is characterized by the subset of the set of simple graphs containing the graphs which have
the property, in what follows P will be identified with this subset.
Definition 3.5. [26] P is testable if there exists another graph property P′ such that
(a) P(G(k, G) ∈ P′) ≥ 2/3 for every k ≥ 1 and G ∈ P, and
(b) for every ε > 0 there is a k(ε) such that for every k ≥ k(ε) and every G with d_1(G, P) ≥ ε we
have P(G(k, G) ∈ P′) ≤ 1/3.
Note that 1/3 and 2/3 in the definition can be replaced by arbitrary constants 0 < a < b < 1;
this change may alter the corresponding certificate P′, but not whether the property is testable.
The link below between the two notions is a simple consequence of the definitions. These
concepts may be extended to the infinitary space of graphons, where a similar notion of
sampling is available.
Lemma 3.6. [26] P is a testable graph property if and only if d1 (., P) is a testable graph
parameter.
We provide some remarks yielded by Theorem 3.4.
Remark 3.7. In the case r = 2, the testability of a graphon parameter is equivalent to its
continuity in the δ_□ distance.
Remark 3.8. The intuitive reason for the absence of an analogous, easily applicable
characterization of testability for higher-rank uniform hypergraphs, as in Theorem 3.4, is that no
natural notion of a suitable distance is available at the moment. The construction of such a
metric would require a standard method for comparing a large hypergraph H_n to
its random induced subgraph on a uniform sample.

The δ_□ metric for graphs is convenient because of its concise formulation and because it induces a
compact limit space; the main characteristic that is exploited is that the total variation distance
of the probability measures of induced subgraphs of fixed size is continuous in this distance, and any
other δ_var with this property would fit into the above framework.
3.1 Examples of testable properties and parameters
We introduce now a notion of efficient parameter testability. Definition 3.3 of testability does
not ask for a specific upper bound on k(ε) in terms of ε, but in applications the order of
magnitude of this function may be an important issue once its existence has been verified.
Therefore we introduce a more restrictive class of graph parameters, which we refer to as
efficiently testable.
Definition 3.9. An r-graph parameter f is called β-testable for a family of measurable
functions β = { βi | βi : R+ → R+ , i ∈ I }, if there exists an i ∈ I such that for every ε > 0
and r-graph G we have
P(|f (G) − f (G(βi (ε), G))| > ε) < ε.
With slight abuse of notation we will also use the notion of β-testability for a family
containing only a single function β. The term efficient testability will serve as shorthand
for β-testability for some family of functions β(ε) that are polynomial in 1/ε. One could
rephrase this, in the light of Definition 3.9, by saying that a testable parameter f is efficiently
testable if its sample complexity is polynomial in 1/ε.
We will often deal with statistics that are required to be highly concentrated around their
mean; this may be important for us even if the mean is not known in advance. A
quite universal tool for this purpose is a Chernoff-type large deviation result, the Azuma–Hoeffding
inequality for martingales with bounded jumps. Mostly we require the formulation
given below; see e.g. [3] for a standard proof and a wide range of applications. We will also
apply a more elaborate version of this concentration inequality below.
Lemma 3.10 (Azuma–Hoeffding inequality). Let (M_k)_{k=0}^n be a super-martingale with respect to
its natural filtration such that with probability 1, for every k ∈ [n], we have |M_k − M_{k−1}| ≤ c_k.
Then for every ε > 0 we have

    P(|M_n − M_0| ≥ ε) ≤ 2 \exp\Bigl( − \frac{ε^2}{2 \sum_{k=1}^{n} c_k^2} \Bigr).
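As a quick numerical sanity check of Lemma 3.10 (our own illustrative simulation, not from the paper), the following sketch builds a bounded-increment martingale from centered ±1 steps, a special case of the super-martingale setting, and compares the empirical tail probability with the Azuma–Hoeffding bound.

```python
import numpy as np

def azuma_demo(n=200, eps=20.0, trials=20000, seed=0):
    """Empirical tail P(|M_n - M_0| >= eps) for M_k = sum of i.i.d. +-1 steps
    (increments bounded by c_k = 1), versus the bound 2 exp(-eps^2 / (2 n))."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-1.0, 1.0], size=(trials, n))
    M_n = steps.sum(axis=1)                       # M_0 = 0
    empirical = np.mean(np.abs(M_n) >= eps)
    bound = 2.0 * np.exp(-eps ** 2 / (2.0 * n))
    return empirical, bound

emp, bnd = azuma_demo()
print(f"empirical tail = {emp:.4f}, Azuma-Hoeffding bound = {bnd:.4f}")
```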
We will list some examples of graph parameters, for which there is information available
about their sample complexity implicitly or explicitly in the literature.
Example 3.11. One of the most basic testable simple graph parameters is the subgraph density
f_F(G) = t(F, G), where F is a simple graph. The next result was formulated as Theorem 2.5
in [24]; for hypergraphs see also Theorem 11 in [15].
Lemma 3.12. [24, 15] Let ε > 0 and q, r ≥ 1 be arbitrary. For any q-colored r-graphs F and
G, and any integer k ≥ |V(F)|, we have

    P(|t_inj(F, G) − t_inj(F, G(k, G))| > ε) < 2 \exp\Bigl( − \frac{ε^2 k}{2 |V(F)|^2} \Bigr),

and

    P(|t(F, G) − t(F, G(k, G))| > ε) < 2 \exp\Bigl( − \frac{ε^2 k}{18 |V(F)|^2} \Bigr).        (3.1)

For any q-colored r-graphon W we have

    P(|t(F, W) − t_inj(F, G(k, W))| > ε) < 2 \exp\Bigl( − \frac{ε^2 k}{2 |V(F)|^2} \Bigr),

and

    P(|t(F, W) − t(F, G(k, W))| > ε) < 2 \exp\Bigl( − \frac{ε^2 k}{8 |V(F)|^2} \Bigr).
This implies that for any F the parameter f_F is O(log(1/ε) ε^{−2})-testable. In the
case of (K, r)-graphs for arbitrary r the analogue of Lemma 3.12 holds; this can be shown by
a straightforward application of the Azuma–Hoeffding inequality, Lemma 3.10, as in the
original proofs.
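The concentration of sampled subgraph densities asserted by Lemma 3.12 is easy to observe numerically; the sketch below (our own illustration, with triangle density as the parameter) compares the density of a large random graph with densities of its induced samples.

```python
from math import comb
import numpy as np

def triangle_density(A):
    """Injective triangle density: (number of triangles) / C(n, 3)."""
    n = A.shape[0]
    triangles = np.trace(A @ A @ A) / 6
    return triangles / comb(n, 3)

def sampled_densities(A, k, reps, seed=0):
    """Triangle densities of induced subgraphs G(k, G) on k uniform vertices."""
    rng = np.random.default_rng(seed)
    return np.array([
        triangle_density(A[np.ix_(S, S)])
        for S in (rng.choice(A.shape[0], size=k, replace=False) for _ in range(reps))
    ])

# Example: an Erdos-Renyi graph G(n, 1/2); sampled densities concentrate
# around the density of the whole graph, as Lemma 3.12 predicts.
rng = np.random.default_rng(3)
n = 300
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T
print(triangle_density(A))                      # close to 1/8
est = sampled_densities(A, k=40, reps=100)
print(est.mean(), est.std())                    # mean close to the above, small spread
```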
Example 3.13. For r = 2, q, n ∈ N, J ∈ R^{q×q}, h ∈ R^q, and G ∈ Π^2_n we consider the energy

    E_φ(G, J, h) = \frac{1}{n^2} \sum_{1≤i,j≤q} J_{ij} \, e_G(φ^{−1}(i), φ^{−1}(j)) + \frac{1}{n} \sum_{1≤i≤q} h_i \, |φ^{−1}(i)|        (3.2)

of a partition φ : V(G) → [q], and

    Ê(G, J, h) = \max_{φ : V(G)→[q]} E_φ(G, J, h),        (3.3)

that is, the ground state energy of the graph G (cf. [11]) with respect to J and h, where
e_G(S, T) denotes the number of edges going from S to T in G. These graph functions
originate from statistical physics; for a rigorous mathematical treatment of the topic see,
e.g., Sinai's book [29]. The energy expression whose maximum is sought is also referred to
as a Hamiltonian. In the literature this notion is often found with a negative sign
or different normalization, more on this below.
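Equations (3.2)-(3.3) can be evaluated directly on tiny graphs; the following brute-force sketch is our own illustration (exponential in n, names are ours) of the ground state energy.

```python
from itertools import product
import numpy as np

def ground_state_energy(A, J, h):
    """Brute-force ground state energy (3.3): maximize the energy (3.2) over
    all maps phi: V(G) -> [q]. For tiny graphs only."""
    n, q = A.shape[0], J.shape[0]
    best = float("-inf")
    for phi in product(range(q), repeat=n):
        X = np.eye(q)[np.array(phi)]             # X[v, i] = 1 iff phi(v) = i
        edges_between = X.T @ A @ X              # e_G(phi^{-1}(i), phi^{-1}(j))
        class_sizes = X.sum(axis=0)              # |phi^{-1}(i)|
        energy = (J * edges_between).sum() / n ** 2 + (h * class_sizes).sum() / n
        best = max(best, energy)
    return best

# Example: with q = 2, J rewarding edges between the two classes and h = 0,
# the ground state energy is the normalized MAX-CUT value.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
J = np.array([[0.0, 0.5],
              [0.5, 0.0]])                        # 0.5 for each orientation of a cut edge
h = np.zeros(2)
print(ground_state_energy(A, J, h))               # maxcut(G) / n^2 = 4 / 16
```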
This graph parameter can be expressed in the terminology applied for MAX-2CSP. Let
the 2CSP formula corresponding to the pair (G, J) be F with domain K = [q]. The formula F
comprises the constraints (g_0; x_{(i,j)}) for every edge (i, j) of G, where g_0 is the constraint
type whose evaluation table is J, and additionally it contains n copies of (g_1; x_i) for every
vertex i of G, where g_1 is the constraint type in one variable with evaluation vector h. Then
the optimal value of the objective function of the MAX-2CSP problem of the instance F is
equal to Ê(G, J, h). Note that this correspondence is consistent with the sampling procedure,
that is, to the pair (G(k, G), J) corresponds the 2CSP formula G(k, F). Therefore Ê(·, J, h)
has sample complexity O(1/ε^4) (see [4], [27]).
These energies are directly connected to the number hom(G, H) of admissible vertex
colorings of G by the colors V(H) for a certain small weighted graph H. This was pointed
out in [11], (2.16), namely

    \frac{1}{|V(G)|^2} \ln hom(G, H) = Ê(G, J) + O\Bigl( \frac{1}{|V(G)|} \Bigr),        (3.4)

where the edge weights of H are β_{ij}(H) = \exp(J_{ij}). The former line of thought, transforming
ground state energies into MAX-2CSPs, is also valid in the case of r-graphs and rCSPs
for arbitrary r.
The results on the sample complexity of MAX-rCSP for q = 2 can be extended beyond
the case of simple hypergraphs; higher-dimensional Hamiltonians are also expressible as
rCSP formulas. The generalization to arbitrary q and to r-graphons will follow in the next
section. Additionally, we note that an analogous statement to (3.4) on the testability of coloring
numbers does not follow immediately for r ≥ 3.
On the other hand, with the notion of the ground state energy available, we may rewrite
MAX-2CSP in a compact form as an energy problem. We will carry out this task right
away for limit objects. First, we introduce the ground state energy of a 2-kernel with respect
to an interaction matrix J. The collection φ = (φ_1, …, φ_q) is a fractional q-partition of [0, 1],
with the components being measurable non-negative functions on [0, 1], if for every x ∈ [0, 1]
it holds that \sum_{i=1}^q φ_i(x) = 1.
Definition 3.14. Let q ≥ 1 and J ∈ R^{q×q}. Then the ground state energy of the 2-kernel W
with respect to J is

    E(W, J) = \max_{φ} \sum_{z∈[q]^2} J_z \int_{[0,1]^2} φ_{z_1}(x) φ_{z_2}(y) W(x, y) \, dx\, dy,

where φ runs over all fractional q-partitions of [0, 1].
Let K = [q], L = {0, 1, …, d}^{[q]^2}, and let (F_n)_{n=1}^∞ be a convergent sequence of 2CSP formulas.
Consider the corresponding sequence of graphs eval(F_n) = (F̃^z_n)_{z∈[q]^2} for each n, and let W =
(W^z)_{z∈[q]^2} be the respective limit. Let f be the (L, 2)-graph parameter such that f(eval(F)) is
equal to the density of the MAX-2CSP value for the instance F. Then it is not hard to see
that f can be extended to the limit space in the following way:

    f(W) = \max_{φ} \sum_{i,j=1}^{q} \int_{[0,1]^2} φ_i(x) φ_j(y) W^{(i,j)}(x, y) \, dx\, dy,

where φ runs over all fractional q-partitions of [0, 1]. The formula is a special case of the
layered ground state energy, defined below, with the interaction matrices given by
J^{i,j}(k, l) = I_i(k) I_j(l).
Example 3.15. The efficiency of testing a graph parameter can be investigated in terms of
an additional continuity condition in the δ_□ metric. A direct consequence of results from
[10] is presented in the next lemma.
Lemma 3.16. Let f be a simple graph parameter that is α-Hölder-continuous in the δ_□
metric in the following sense: there exists a C > 0 such that for every ε > 0 there exists
n_0(ε) so that if the simple graphs G_1, G_2 satisfy |V(G_1)|, |V(G_2)| ≥ n_0(ε) and
δ_□(G_1, G_2) ≤ ε, then |f(G_1) − f(G_2)| ≤ C δ_□^α(G_1, G_2). Then f is
max{2^{O(ε^{−2/α})}, n_0(ε)}-testable.
Proof. To see this, let us fix ε > 0. Then for an arbitrary simple graph G with |V(G)| ≥ n_0(ε)
and k ≥ n_0(ε) we have

    |f(G) − f(G(k, G))| ≤ C \bigl[ δ_□(G, G(k, G)) \bigr]^{α} < C \Bigl( \frac{10}{\sqrt{\log_2 k}} \Bigr)^{α},        (3.5)

with probability at least 1 − \exp\bigl(−\frac{k^2}{2 \log_2 k}\bigr). The last probability bound in (3.5) is the
statement of Theorem 2.9 of [10]. We may rewrite (3.5) by setting ε = C \bigl( \frac{10}{\sqrt{\log_2 k}} \bigr)^{α}; the
substitution implies that f is 2^{O(ε^{−2/α})}-testable whenever n_0(ε) ≤ 2^{O(ε^{−2/α})}.
This latter approach is hard to generalize in a meaningful way to r-graphs for r ≥ 3
because of the absence of a suitable metric, see the discussion above. The converse direction,
namely formulating a qualitative statement about the continuity of f with respect to δ_□
from information about the sample complexity, is also a worthwhile problem.
4 Testability of the ground state energy
Assume that K is a compact Polish space and r is a positive integer. First we provide the
basic definition of the energy of a (K, r)-graphon W : [0, 1]^{h([r])} → K with respect to some
q ≥ 1, an r-array J ∈ C(K)^{q×⋯×q}, and a fractional partition φ = (φ_1, …, φ_q). With slight
abuse of notation, the graphons in the upcoming parts of the section assume both the K-valued
and the probability-measure-valued form; it will be clear from the context which one of them is meant.
Recall ?? of the energies of naive r-kernels; the version for true (K, r)-graphons is

    E_φ(W, J) = \sum_{z_1,…,z_r=1}^{q} \int_{[0,1]^{h([r])}} J_{z_1,…,z_r}\bigl(W(x_{h([r])})\bigr) \prod_{j=1}^{r} φ_{z_j}(x_{\{j\}}) \, dλ(x_{h([r])}).        (4.1)
(4.1)
The value of the above integral can be determined by first integrating over the coordinates
corresponding to subsets of [r] with at least two elements, and then over the remaining ones.
The interior partial integral is then not dependent on φ, so it can be calculated in advance
in the case when we want to optimize over all choices of fractional partitions. Therefore
focusing attention on the naive kernel version does not lead to any loss of generality in terms
of testing, see below.
When dealing with a so-called integer partition φ = (I_{T_1}, …, I_{T_q}), one is able to rewrite
the former expression (4.1) as

    E_φ(W, J) = \sum_{z_1,…,z_r=1}^{q} \int_{p_{h([r],1)}^{−1}(T_{z_1}×⋯×T_{z_r})} J_{z_1,…,z_r}\bigl(W(x_{h([r])})\bigr) \, dλ(x_{h([r])}),

where p_D stands for the projection of [0, 1]^{h([r])} to the coordinates contained in the set D.
The energy of a (K, r)-graph G on k vertices with respect to J ∈ C(K)^{q×⋯×q} and the
fractional q-partition x_n = (x_{n,1}, …, x_{n,q}), n = 1, …, k (i.e., x_{n,m} ∈ [0, 1] and
\sum_m x_{n,m} = 1), is defined as

    E_x(G, J) = \frac{1}{k^r} \sum_{z_1,…,z_r=1}^{q} \sum_{n_1,…,n_r=1}^{k} J_{z_1,…,z_r}\bigl(G(n_1, …, n_r)\bigr) \prod_{j=1}^{r} x_{n_j, z_j}.        (4.2)
j=1
In the case when K = {0, 1} and Jz1 ,...,zr (x) = az1 ,...,zr I1 (x) is a constant multiple of
the indicator function of 1 we retrieve the original GSE notion in Theorem 3.13 and Theorem 3.14.
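For r = 2, formula (4.2) is a plain quadratic form in the fractional memberships; the following sketch evaluates it for a given fractional partition (the encoding of J as a table of Python functions and all names are our own illustrative assumptions).

```python
import numpy as np

def fractional_energy(G, J_funcs, X):
    """E_x(G, J) as in (4.2) for r = 2: G is a k x k array of colors,
    J_funcs[z1][z2] is a function of the color, and X is a k x q row-stochastic
    matrix of fractional class memberships x_{n, m}."""
    k, q = X.shape
    total = 0.0
    for z1 in range(q):
        for z2 in range(q):
            Jvals = np.vectorize(J_funcs[z1][z2])(G)   # J_{z1 z2}(G(n1, n2))
            total += X[:, z1] @ Jvals @ X[:, z2]
    return total / k ** 2

# Example: a canonical-form interaction (identity on the color, only for the
# state pair (1, 1)), evaluated at the uniform fractional partition.
k, q = 5, 2
rng = np.random.default_rng(0)
G = rng.uniform(-1, 1, size=(k, k))
ident, zero = (lambda c: c), (lambda c: 0.0)
J_funcs = [[ident, zero], [zero, zero]]
X = np.full((k, q), 1.0 / q)                     # uniform fractional partition
print(fractional_energy(G, J_funcs, X))          # equals G.mean() / q^2
```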
Remark 4.1. Ground state energies and subgraph densities are Lipschitz-continuous graph
parameters in the sense of Lemma 3.16 ([10], [11]), but that result implies much weaker
upper bounds on the sample complexity than the best ones known to date. This is due to
the fact that δ_□(G, G(k, G)) decreases in k only at rate 1/\sqrt{\log k}, which is a consequence
of the difficulty of finding a near-optimal overlay between two graphons through a measure-preserving
permutation of [0, 1] when calculating their δ_□ distance. On the other hand,
if the sample size k(ε) is exponentially large in 1/ε, then the distance δ_□(G, G(k, G)) is
small with high probability, and therefore all Hölder-continuous graph parameters at G can be
estimated simultaneously with high success probability by looking at their values at G(k, G).
Next we introduce the layered version of the ground state energy. This is a generalized
optimization problem in which we wish to obtain the optimal value, over fractional partitions,
of sums of energies over a finite layer set.
Definition 4.2. Let E be a finite layer set, K a compact set, and W = (W^e)_{e∈E} a tuple
of (K, r)-graphons. Let q be a fixed positive integer and let J = (J^e)_{e∈E} with J^e ∈ C(K)^{q×⋯×q}
for every e ∈ E. For a fractional q-partition φ = (φ_1, …, φ_q) of [0, 1] let

    E_φ(W, J) = \sum_{e∈E} E_φ(W^e, J^e),

and let

    E(W, J) = \max_{φ} E_φ(W, J)

denote the layered ground state energy, where the maximum runs over all fractional
q-partitions of [0, 1].

For G = (G^e)_{e∈E} we define the energy E_x(G, J) analogously as the energy sum over E, see
(4.2) above, and Ê(G, J) = \max_x E_x(G, J), where the maximum runs over integer q-partitions
(x_{n,m} ∈ {0, 1}), respectively E(G, J) = \max_x E_x(G, J), where the maximum is taken over all
fractional q-partitions x.
Now we rewrite the unweighted boolean limit MAX-rCSP (recall Definition 2.2) as a
layered ground state energy problem. Let E = {0, 1}^r, K = {0, 1, …, 2^r}, and W = (W^z)_{z∈{0,1}^r}
with W^z being (K, r)-graphons, and let

    α(W) = \max_{φ} \sum_{z∈\{0,1\}^r} \int_{[0,1]^{h([r])}} \prod_{j=1}^{r} φ(x_{\{j\}})^{z_j} \bigl(1 − φ(x_{\{j\}})\bigr)^{1−z_j} \, W^z(x) \, dλ(x),

where the maximum is taken over all measurable functions φ : [0, 1] → [0, 1]. If eval(F) =
(F^z)_{z∈{0,1}^r} is a (K_E, r)-graph corresponding to a boolean rCSP formula F with k variables,
then the finite integer version of α is given by

    α̂(eval(F)) = \max_{x} \frac{1}{k^r} \sum_{z∈\{0,1\}^r} \sum_{n_1,…,n_r=1}^{k} F^z(n_1, …, n_r) \prod_{j=1}^{r} x_{n_j, z_j},

where the maximum runs over integer 2-partitions of [k]. It is clear that α̂(eval(F)) is equal
to the density of the optimum of the MAX-rCSP problem of F.
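A tiny brute-force check of the finite formula for α̂ (our own illustration, r = 2, names are ours) ties the evaluation representation back to the MAX-rCSP value computed earlier.

```python
from itertools import product
import numpy as np

def alpha_hat(F_arrays, k):
    """Brute-force version of the finite formula above for r = 2: maximize over
    all integer 2-partitions x of [k]. F_arrays[z] is the array F^z, z in {0,1}^2."""
    best = float("-inf")
    for assignment in product((0, 1), repeat=k):
        x = np.zeros((k, 2))
        x[np.arange(k), assignment] = 1.0          # x_{n, m} = 1 iff l(n) = m
        val = sum(x[:, z[0]] @ Fz @ x[:, z[1]] for z, Fz in F_arrays.items())
        best = max(best, val / k ** 2)
    return best

# Self-contained instance: the evaluation arrays of the triangle MAX-CUT formula.
F01 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
F_arrays = {(0, 1): F01, (1, 0): F01,
            (0, 0): np.zeros((3, 3)), (1, 1): np.zeros((3, 3))}
print(alpha_hat(F_arrays, 3))   # 4/9: the optimal cut has 2 edges, and the
                                # symmetrization in (2.2) counts each in both orientations
```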
We return to the general setting and summarize the parameters involved in the layered
ground state energy problem: the dimension r, the layer set E, the number of states q, the
color set K, and whether we are in the finite or the limit case. The main theorem of the paper
will be a generalization, with respect to these factors, of the following theorem on the sample
complexity of rCSPs.

The main result of [4] was the following.
Theorem 4.3. [4] Let F be an unweighted boolean rCSP formula. Then for any ε > 0 and
δ > 0 we have that for k ∈ O(ε^{−4} \log(1/ε)) it holds that

    P(|α̂(eval(F)) − α̂(G(k, eval(F)))| > ε) < δ.
The upper bound on k in the above result was subsequently improved by Mathieu and
Schudy [27] to k ∈ O(ε^{−4}). We will see in what follows that the infinitary version of the
above statement is also true. It will be stated in terms of layered ground state energies of
edge-colored hypergraphs, and will settle the question of the efficiency of testability of the
mentioned parameters in the greatest generality with respect to the previously highlighted
aspects. However, the exact order of magnitude of the sample complexity of the
MAX-rCSP and the GSE problem remains an open question.
In order to simplify the analysis we introduce the canonical form of the problem, that
denote layered ground state energies of [q]r -tuples of ([−d, d], r)-graphons with the special
interaction r-arrays Jˆz for each z ∈ [q]r , that have the identity function f (x) = x as the
(z1 , . . . , zr ) entry and the constant 0 function for the other entries. In most of what follows
we will drop the dependence on J in the energy function when it is clear that we mean
ˆ and will employ the notation Ex (G), E(G), E(G),
ˆ
the aforementioned canonical J,
Eφ (W ),
and E(W ) (dependence on q is hidden in the notation), where G and W are [q]r -tuples of
([−d, d], r)-graphs and graphons, respectively. We are ready to state the main result of the
paper.
Theorem 4.4. Let r ≥ 1, q ≥ 1, and ε > 0. Then for any [q]r -tuple of ([−kW k∞ , kW k∞ ], r)r+7 r
graphons W = (W z )z∈[q]r and k ≥ Θ4 log(Θ)q r with Θ = 2 ε q r we have
ˆ
P(|E(W ) − E(G(k,
W ))| > εkW k∞ ) < ε.
(4.3)
A direct consequence of Theorem 4.4 is the corresponding result for layered ground state
energies.
Corollary 4.5. Let E be a finite layer set, K a compact Polish color set, q ≥ 1, r-arrays
J = (J e )e∈E with J e ∈ C(K)q×···×q , and ε > 0. Then we have that for any E-tuple of
r+7 r
(K, r)-graphon W = (W e )e∈E and k ≥ Θ4 log(Θ)q r with Θ = 2 ε q r that
ˆ
P(|E(W, J) − E(G(k,
W ), J)| > ε|E| kJk∞ kW k∞ ) < ε.
Proof. We make no specific restrictions on the color set K and on the set E of layers except
for finiteness of the second, therefore it will be convenient to rewrite the layered energies
Eφ (W, J) into a more universal form as a sum of proper Hamiltonians in order to suppress
the role of K and E. Let
Z
Y
X X
Eφ (W, J) =
φzj (x{j} )Jze1 ,...,zr (W e (x))dλ(xh([r]) )
e∈E z1 ,...,zr ∈[q]
j∈[q]
[0,1]h([r])
=
X
Z
Y
z1 ,...,zr ∈[q]
j∈[r]
[0,1]h([r])
φzj (x{j} )
"
X
e∈E
#
Jze1 ,...,zr (W e (x)) dλ(xh([r]) ).
Motivated by this reformulation we introduce for every (W, J) pair a special auxiliary instance
r
of the ground state problem that is defined for a [q]
of ([−d, d], r)-graphons, where d =
P -tuple
r
z
|E| kJk∞ kW k∞ . For any z ∈ [q] , let Ŵ (x) = e∈E Jze1 ,...,zr (W e (x)) for each x ∈ [0, 1]h([r]),
and let the interaction matrices Jˆz be of the canonical form. We obtain for any fractional
ˆ and also Ex (G(k, W ), J) =
partition φ of [0, 1] into q parts that Eφ (W, J) = Eφ (Ŵ , J),
ˆ
Ex (G(k, Ŵ ), J) for any fractional partition x, where the two random r-graphs are obtained via
the same sample. Therefore, without loss of generality, we are able to reduce the statement of
the corollary to the statement of Theorem 4.4 dealing with ground state energies of canonical
form.
27
We start with the proof of Theorem 4.4 by providing the necessary background. We
will proceed loosely along the lines of the proof of Theorem 4.3 from [4] with most of the
required lemmas being refinements of the respective ones in the proof of that theorem. We
will formulate and verify these auxiliary lemmas one after another, afterwards we will compile
them to prove the main statement. The arguments made in [4] carry through adapted to
our continuous setting with some modifications, and we will also draw on tools from [10] and
[11]. The first lemma tells us that in the real-valued case the energy of the sample and that
of the averaged sample do not differ by a large amount.
Lemma 4.6. Let W be a ([−d, d], r)-graphon, q ≥ 1, J ∈ Rq×···×q . Then for every k ≥ 1
there is a coupling of G(k, W ) and H(k, W ) such that
2
εk
P |Ê(G(k, W ), J) − Ê(H(k, W ), J)| > εkJk∞ kW k∞ ≤ 2 exp −k
− log q
2
Proof. Let us fix a integer q-partition x of [k], and furthermore let the two random r-graphs
be generated by the same sample (US )S∈h([k],r). Then
1
Eˆx (G(k, W ), J) = r
k
q
X
k
X
z1 ,...,zr =1 n1 ,...,nr =1
Jz1 ,...,zr W ((US )S∈h({n1 ,...,nr },r) )
r
Y
xnj ,zj ,
j=1
and
Êx (H(k, W ), J)
1
= r
k
q
X
k
X
z1 ,...,zr =1 n1 ,...,nr =1
Jz1 ,...,zr E[W ((US )S∈h({n1 ,...,nr },r) ) | (US )S∈h({n1 ,...,nr },1) ]
Let us enumerate the elements of
k
2
r
Y
xnj ,zj .
j=1
as e1 , e2 , . . . , e(k) , and define the martingale
2
Y0 = E[Eˆx (G(k, W ), J) | { Uj | j ∈ [k] }],
and
h
i
Yt = E Eˆx (H(k, W ), J) | { Uj | j ∈ [k] } ∪ ∪tj=1 { US | ej ⊂ S }
for each 1 ≤ t ≤
k
2
, so that Y0 = Eˆx (H(k, W ), J) and Y(k) = Êx (G(k, W ), J). For each t ∈
2
we can upper bound the difference, |Yt−1 − Yt | ≤
inequality, Theorem 3.10, it follows that
ρ2 k 4
P(|Yt − Y0 | ≥ ρ) ≤ 2 exp − k
2 2 kJk2∞ kW k2∞
for any ρ > 0.
28
1
kJk∞ kW k∞ .
k2
!
≤ 2 exp −
k
2
By the Azuma-Hoeffding
ρ2 k 2
2kJk2∞ kW k2∞
,
(4.4)
There are q k distinct integer q-partitions of [k], hence
2
εk
− log q
. (4.5)
P |Ê(G(k, W ), J) − Ê(H(k, W ), J)| > εkJk∞ kW k∞ ≤ 2 exp −k
2
In the following lemmas every r-graph or graphon is meant to be as bounded real-valued
and directed.
We would like to point out in the beginning that in the finite case we are able to shift
from the integer optimization problem to the relaxed one with having a reasonably good
upper bound on the difference of the optimal values of the two.
Lemma 4.7. Let G be a real-valued r-graph on [k] and J ∈ Rq×···×q . Then
1
ˆ
|E(G, J) − E(G,
J)| ≤ r 2 kGk∞ kJk∞ .
2k
ˆ
Proof. Trivially we have E(G, J) ≥ E(G,
J). We define G′ by setting all entries of G to 0
which have at least two coordinates which are the same (for r = 2 these are the diagonal
entries). Thus, we get that
r 1
′
|E(G, J) − E(G , J)| ≤
kGk∞ kJk∞ .
2 k
Now assume that we are given a fractional partition x so that Ex (G′ , J) attains the maximum
E(G′ , J). We fix all the entries xn,1 , . . . xn,q of x with n = 2, . . . , k and regard Ex (G′ , J) as a
function of x1,1 , . . . , x1,q . P
This function will be linear in the variables x1,1 , . . . , x1,q , and with
the additional condition rj=1 x1,j = 1 we obtain a linear program. By standard arguments
this program possesses an integer valued optimal solution, so we are allowed to replace
x1,1 , . . . , x1,q by integers without letting Ex (G′ , J) decrease. We repeat this procedure for each
ˆ ′ , J).
n ∈ [k], obtaining an integer optimum for Ex (G′ , J), which implies that E(G′, J) = E(G
Hence, the claim follows.
Next lemma is the continuous generalization of Theorem 4 from [4], and is closely related
to the Weak Regularity Lemma, ??, of [16], and its continuous version ??. The result is a
centerpiece of the cut decomposition method.
Lemma 4.8. Let ε > 0 arbitrary. For any bounded measurable function W : [0, 1]r → R
j
there exist an s ≤ ε12 , measurable sets
PsSi ⊂ [0, 1] with i = 1, . . . , s, j = 1, . . . , r, and real
numbers d1 , . . . , ds so that with B = i=1 di ISi1 ×···×Sir it holds that
(i) kW k2 ≥ kW − Bk2 ,
(ii) kW − Bk < εkW k2 , and
Ps
1
(iii)
i=1 |di | ≤ ε kW k2 .
29
Proof. We construct stepwise the required rectangles and the respective coefficients implicitly. Let W 0 = W , and suppose that after the t’th step of the construction we have already
obtained every setPSij ⊂ [0, 1] with i = 1, . . . , t, j = 1, . . . , r, and the real numbers d1 , . . . , dt .
Set W t = W − ti=1 di ISi1 ×···×Sir . We proceed to the (t + 1)’st step, where two possible
situations can occur. The first case is when
kW t k ≥ εkW k2.
1
r
This implies
by definition that there exist measurable subsets St+1
, . . . , St+1
of [0, 1] such
R
t
that | S 1 ×···×S r W (x)dλ(x)| ≥ εkW k2. We define dt+1 to be the average of W t on the
t+1
t+1
1
r
product set St+1
× · · · × St+1
, and proceed to the (t + 2)’nd step. In the case of
kW t k < εkW k2
we are ready with the construction and set s = t.
We analyze the first case to obtain an upper bound on the total number of steps required
by the construction. So suppose that the first case above occurs. Then
Z
Z
t 2
t+1 2
t 2
kW k2 − kW k2 =
(W ) (x)dλ(x) −
(W t (x) − dt+1 )2 dλ(x)
1 ×···×S r
St+1
t+1
1 ×···×S r
St+1
t+1
1
r
= d2t+1 λ(St+1
) . . . λ(St+1
) ≥ ε2 kW k22.
(4.6)
This means that the square of the 2-norm of W t decreases in t in every step when the first
case occurs in the construction by at least ε2 kW k22 , therefore it can happen only at most ε12
times, with other words s ≤ ε12 . It is also clear that the 2-norm decreases in each step, so
we are left to verify the upper bound on the sum of the absolute values of the coefficients
di . From (4.6) we get, that
kW k22
=
s
X
t=1
kW t−1 k22
−
kW t k22
≥
s
X
d2t λ(St1 ) . . . λ(Str ).
t=1
We also know for every t ≤ s that |dt |λ(St1) . . . λ(Str ) ≥ εkW k2 . Hence,
s
X
t=1
and therefore
Ps
t=1
|dt |εkW k2 ≤
s
X
t=1
d2t λ(St1 ) . . . λ(Str ) ≤ kW k22,
|dt | ≤ 1ε kW k2 .
Next we state that the cut approximation provided by Theorem 4.8 is invariant under
sampling. This is a crucial point of the whole argument, and is the r-dimensional generalization of Lemma 4.6 from [10].
30
Lemma 4.9. For any ε > 0 and bounded measurable function W : [0, 1]r → R we have that
ε2 k
P (|kH(k, W )k − kW k | > εkW k∞) < 2 exp −
32r 2
2 4
.
for every k ≥ 16rε
Proof. Fix an arbitrary 0 < ε <1, r ≥ 2, and further let W be a real-valued naive r-kernel.
4
2
Set the sample size to k ≥ 16rε
. Let us consider the array representation of H(k, W ) and
denote the r-array AH(k,W ) by G that has zeros on the diagonal. We will need the following
lemma from [4].
Lemma 4.10. G is a real r-array on some finite product set V1 × · · · × Vr , where Vi are
copies of V of cardinality k. Let S1 ⊂ V1 , . . . , Sr ⊂ Vr be fixed subsets and Q1 a uniform
random subset of V2 × · · · × Vr of cardinality p. Then
kr
G(S1 , . . . , Sr ) ≤ EQ1 G(P (Q1 ∩ S2 × · · · × Sr ), S2 , . . . , Sr ) + √ kGk2 ,
p
P
where P (Q1 ) = PG (Q1 ) = { x1 ∈ V1 | (y2 ,...yr )∈Q1 G(x1 , y2 , . . . , yr ) > 0 } and the 2-norm
1/2
P
2
xi ∈Vi G (x1 ,...,xr )
.
denotes kGk2 =
|V1 |...|Vr |
If we apply Theorem 4.10 repeatedly r times to the r-arrays G and −G, then we arrive
at an upper bound on G(S1 , . . . , Sr ) ((−G)(S1 , . . . , Sr ) respectively) for any collection of the
S1 , . . . , Sr which does not depend on the particular choice of these sets any more, so we get
that
max{G(PG (Q′1 ), . . . , PG (Q′r )); (−G)(P−G (Q′1 ), . . . , P−G (Q′r ))}
k r kGk ≤ EQ1 ,...,Qr max
′
r
Qi ⊂Qi
rk
+ √ kGk∞ ,
p
(4.7)
since kGk2 ≤ kGk∞ .
Let us recall that G stands for the random H(k, W ). We are interested in the expectation
E of the left hand side of (4.7) over the sample that defines G. Now we proceed via the method
of conditional expectation. We establish an upper bound on the expectation of right hand
side of (4.7) over the sample U1 , . . . , Uk for each choice of the tuple of sets Q1 , . . . , Qr . This
bound does not depend on the actual choice of the Qi ’s, so if we take the average (over the
Qi ’s), that upper bound still remains valid.
In order to do this, let us fix Q1 , . . . , Qr , set Q to be the set of elements of V (G) which
are contained in at least one of the Qi ’s, and fix also the sample points of UQ = { Ui | i ∈ Q }.
Take the expectation EUQc only over the remaining Ui sample points.
To this end, by Fubini we have the estimate
max{G(PG (Q′1 ) ∩ Qc , . . . , PG (Q′r ) ∩ Qc );
k r EU[k] kGk ≤ EQ1 ,...Qr EUQ [EUQc max
′
Qi ⊂Qi
31
(−G)(P−G (Q′1 )
c
∩Q
, . . . , P−G (Q′r )
rk r
∩ Q )}] + √ kGk∞ + pr 3 k r−1 kGk∞ ,
p
(4.8)
c
where US = { Ui | i ∈ S }.
Our goal is to uniformly upper bound the expression in the brackets in (4.8) so that in
the dependence on the particular Q1 , . . . Qr and the sample points from UQ vanishes. To
achieve this, we consider additionally a tuple of subsets Q′i ⊂ Qi , and introduce the random
variable Y (Q′1 , . . . , Q′r ) = G(PG (Q′1 ) ∩ Qc , . . . , PG (Q′r ) ∩ Qc ), where the randomness comes
from UQc exclusively. Let
X
W (Uy1 , . . . , Uyi−1 , xi , Uyi+1 , . . . Uyr ) > 0 }
Ti = { xi ∈ [0, 1] |
(y1 ,...,yi−1 ,yi+1 ,...yr )∈Q′i
for i ∈ [r]. Note that ti ∈ PG (Q′i ) is equivalent to Uti ∈ Ti . Then
X
EUQc G(t1 , . . . , tr )IPG (Q′1 ) (t1 ) . . . IPG (Q′r ) (tr ) + r 2 k r−1 kW k∞
EUQc Y (Q′1 , . . . , Q′r ) ≤
t1 ,...,tr ∈Qc
ti 6=tj
≤k
r
Z
W (x)dλ(x) + r 2 k r−1 kW k∞ ≤ k r kW k + r 2 k r−1 kW k∞ .
T1 ×···×Tr
By the Azuma-Hoeffding inequality we also have high concentration of the random variable
Y (Q′1 , . . . , Q′r ) around its mean, that is
2
ρk
′
′
′
′
r
P(Y (Q1 , . . . , Qr ) ≥ EUQc Y (Q1 , . . . , Qr ) + ρk kW k∞ ) < exp − 2 ,
(4.9)
8r
since modification of one sampled element changes the value of Y (Q′1 , . . . , Q′r ) by at most
2rk r−1 kW k∞ . Analogous upper bounds on the expectation and the tail probability hold for
each of the expressions (−G)(P−G (Q′1 ), . . . , PG (Q′r )).
With regard to the maximum expression in (4.8) over the Q′i sets we have to this end either
that the concentration event from (4.9) holds for each possible choice of the Q′i subsets for
2
both expressions in the brackets in (4.8), this has probability at least 1 − 2pr+1 exp(− ρ8rk2 ), or
it fails for some choice. In the first case we can employ the upper bound k r kW k + (r 2 k r−1 +
ρk r )kW k∞ , and in the event of failure we still have the trivial upper bound of k r kW k∞ .
Eventually we presented an upper bound on the expectation that does not depend on the
choice of Q1 , . . . , Qr , and the sample points from UQ . Hence by taking expectation and
assembling the terms, we have
2
r2
ρk
r
pr 3
pr+1
+ρ+
+2
exp − 2
.
EU[k] kGk ≤ kW k + kW k∞ √ +
p
k
k
8r
√
2
√
Let p = k and ρ = 4r
4 . Then
k
√
√
r3
4r 2 r 2
r
2
+ exp 2 kr − 2r k
+√ +√
+
EU[k] kGk ≤ kW k + kW k∞ √
4
4
k
k
k
k
32
≤ kW k + kW k∞
ε2
ε
ε4
ε2
ε
+ 8 + + 16 6 + 8 6
16r 2 r 4 2 r
2r
≤ kW k + ε/2kW k∞ .
The direction concerning the lower bound, EkGk ≥ kW k −ε/2 follows from a standard
sampling argument, the idea is that we can project each set S ⊂ [0, 1] to a set Ŝ ⊂ [k]
through the sample, which will fulfill the desired conditions, we leave the details to the
reader. Concentration follows by the Azuma-Hoeffding inequality. We conclude that
1
P (|kGk − kW k | > εkW k∞ ) ≤ P EkGk − r kGk > ε/2kW k∞
k
2
εk
≤ 2 exp −
.
32r 2
Next we state a result on the relationship of a continuous linear program (LP) and its
randomly sampled finite subprogram. We will rely on the next concentration result that is a
generalization of the Azuma-Hoeffding inequality, Theorem 3.10, and suits well the situation
when the martingale jump sizes have inhomogeneous distribution. It can be found together
with a proof in the survey [28] as Corollary 3.
Lemma 4.11 (Generalized Azuma-Hoeffding inequality). Let k ≥ 1 and (Xn )kn=0 be a martingale sequence with respect to the natural filtration (Fn )kn=1 . If |Xn − Xn+1 | ≤ d almost
surely and E[(Xn − Xn+1 )2 | Fn ] ≤ σ 2 for each n ∈ [k], then for every n ≤ k and δ > 0 it
holds that
σ2
δd
δd
δd
P(Xn − X0 > δn) ≤ exp −n 2 (1 + 2 ) ln(1 + 2 ) − 2
.
(4.10)
d
σ
σ
σ
Measurability for all of the following functions is assumed.
Lemma 4.12. Let cm : [0, 1] → R, Ui,m : [0, 1] → R for i = 1, . . . , s, m = 1, . . . , q, u ∈ Rs×q ,
2
α ∈ R. Let d and σ be positive reals such that kck∞ ≤ d and kck2 ≤ σ and set γ = σd2 . If the
optimum of the linear program
maximize
subject to
Z1 X
q
fm (t)cm (t)dt
0
m=1
Z1
fm (t)Ui,m (t)dt ≤ ui,m
0
0 ≤ fm (t) ≤ 1
q
X
fm (t) = 1
for i ∈ [s] and m ∈ [q]
for t ∈ [0, 1] and m ∈ [q]
for t ∈ [0, 1]
m=1
33
is less than α, then for any ε, δ > 0 and k ∈ N and a uniform random sample {X1 , . . . , Xk }
of [0, 1]k the optimum of the sampled linear program
maximize
subject to
q
X X
1
xn,m cm (Xn )
k
1≤n≤k m=1
X 1
xn,m Ui,m (Xn ) ≤ ui,m − δkUk∞
k
1≤n≤k
0 ≤ xn,m ≤ 1
q
X
xn,m = 1
for i ∈ [s] and m ∈ [q]
for n ∈ [k] and m ∈ [q]
for n ∈ [k]
m=1
is less than α + ε with probability at least
2
ε
ε
ε
δ k
+ exp −kγ (1 + ) ln(1 + ) −
.
1 − exp −
2
γd
γd
γd
Proof. We require a continuous version of Farkas’ Lemma.
R1
Claim 1. Let (Af )i,m = 0 Ai,m (t)fm (t)dt for the bounded measurable functions Ai,m on
[0, 1] for i ∈ [s] and m ∈ [q] , and let v ∈ Rsq . There is no fractional q-partition solution
f = (f1 , . . . , fq ) to Af ≤ v if and only if, there exists a non-zero 0 ≤ y ∈ Rsq with kyk1 = 1
such that there is no fractional q-partition solution f to y T (Af ) ≤ y T v.
For clarity we remark that in the current claim and the following one Af and v are indexed
by a pair of parameters, but are regarded as 1-dimensional vectors in the multiplication
operation.
Proof. One direction is trivial: if there is a solution f to Af ≤ v, then it is also a solution
to y T (Af ) ≤ y T v for any y ≥ 0.
We turn to show the opposite direction. Let
C = { Af | f is a fractional q-partition of [0, 1] }.
The set C is a nonempty convex closed subset of Rsq containing 0. Let B = { x | xi,m ≤
vi,m } ⊂ Rsq , this set is also a nonempty convex closed set. The absence of a solution to
Af ≤ v is equivalent to saying that C ∩ B is empty. It follows from the Separation Theorem
for convex closed sets that there is a 0 6= y ′ ∈ Rsq such that y ′T c < y ′T b for every c ∈ C
′
and b ∈ B. Additionally every coordinate yi,m
has to be non-positive. To see this suppose
′
that yi0 ,m0 > 0, we pick a c ∈ C and b ∈ B, and send bi0 ,m0 to minus infinity leaving every
other coordinate of the two points fixed (b will still be an element of B), for bi0 small enough
the inequality y ′T c < y ′T b will be harmed eventually. We conclude that for any f we have
′
y ′T (Af ) < y ′T v, hence for y = ky−y′ k1 the inequality y T (Af ) ≤ y T v has no solution.
From this lemma the finitary version follows without any difficulties.
34
Claim 2. Let B be a real sq × k matrix, and let v ∈ Rsq . There is no fractional q-partition
x ∈ Rkq so that Bx ≤ v if and only if, there is a non-zero 0 ≤ y ∈ Rsq with kyk1 = 1 such
that there is no fractional q-partition x ∈ Rkq so that y T Bx ≤ y T v.
P
B
Proof. Let Ai,m (t) = kn=1 (i,m),n
I[ n−1 , n ) (t) for i = 1, . . . , s. The nonexistence of a fractional
k
k
k
kq
q-partition x ∈ R so that Bx ≤ v is equivalent to nonexistence of a fractional q-partition f
so that Af ≤ v. For any nonzero 0 ≤ y, the nonexistence of a fractional q-partition x ∈ Rkq
so that y T Bx ≤ y T v is equivalent to the nonexistence of a fractional q-partition f so that
y T (Af ) ≤ y T v. Applying Claim 1 verifies the current claim.
The assumption of the lemma is by P
ClaimP
1 equivalent to the statement that there exists
a nonzero 0 ≤ y ∈ Rsq and 0 ≤ β with si=1 qm=1 yi,m + β = 1 such that
Z1 X
q
s X
0
i=1 m=1
Z1
yi,m Ui,m (t)fm (t)dt −
β
0
q
X
m=1
cm (t)fm (t) ≤
q
s X
X
i=1 m=1
yi,m ui,m − βα
has no solution f among fractional q-partitions. This is equivalent to the condition
Z1
h(t)dt > A,
0
Ps Pq
P
where h(t) = min [ si=1 yi,mUi,m (t) − βcm (t)], and A =
m=1 yi,m ui,m − βα. Let
i=1
mP
s
Tm = { t | h(t)
i=1 yi,m Ui,m (t) − βcm (t)
P=
P}q for m ∈ [q] and define the functions h1 (t) =
P
s
q
(t)
[
y
U
(t)]
and
h
(t)
=
I
i,m i,m
2
m=1 ITm (t)βcm (t). Clearly, h(t) = h1 (t)−h2 (t).
m=1 Tm
P
Pi=1
q
s
Set also A1 = i=1 m=1 yi,m ui,m and A2 = βα. Fix an arbitrary δ > 0 and k ≥ 1. By the
2
Azuma-Hoeffding inequality it follows that with probability at least 1 − exp(− kδ2 ) we have
that
k
1X
h1 (Xn ) > A1 − δkh1 k∞ .
k n=1
P P
P P
Note that kh1 k∞ = k si=1 qm=1 ITm Ui,m yi,m k∞ ≤ kUk∞ si=1 qm=1 |yi,m| ≤ kUk∞ . Moreover, by Theorem 4.11 the event
k
1X
h2 (Xn ) < A2 + ε
k n=1
has probability at least 1 − exp −kγ (1 +
k
s
ε
) ln(1
γd
q
+
ε
)
γd
(4.11)
−
ε
γd
. Thus,
XX
1X
yi,m (ui,m − δkUk∞ ) − β(α + ε)
h(Xn ) >
k n=1
i=1 m=1
35
with probability at least
2
δ k
ε
ε
ε
1 − exp −
+ exp −kγ (1 + ) ln(1 + ) −
.
2
γd
γd
γd
We conclude the proof by noting that the last event is equivalent to the event in the
statement of our lemma by Claim 2.
We start the principal part of the proof of the main theorem in this paper.
Proof of Theorem 4.4. It is enough to prove Theorem 4.4 for tuples of naive ([−d, d], r)digraphons. We first employ Theorem 4.6 to replace the energy Ê(G(k, W )) by the energy
ˆ
of the averaged sample E(H(k,
W )) without altering the ground state energy of the sample
substantially with high probability. Subsequently, we apply Theorem 4.7 to change from the
ˆ
integer version of the energy E(H(k,
W )) to the relaxed one E(H(k, W )). That is
|Ê(G(k, W )) − E(H(k, W ))| ≤ ε2 kW k∞
with probability at least 1 − ε2 .
We begin with the main argument by showing that the ground state energy of the sample
can not be substantially smaller than that of the original, formally
E(H(k, W )) ≥ E(W ) −
r2
kW k∞
k 1/4
(4.12)
with high probability. In what follows E denotes the expectation with respect to the uniform
independent random sample (US )S∈h([k],r) from [0, 1]. To see the correctness of the inequality,
we consider a fixed fractional partition φ of [0, 1], and define the random fractional partition
of [k] as yn,m = φm (Un ) for every n ∈ [k] and m ∈ [q]. Then we have that
EE(H(k, W )) ≥ EEy (H(k, W ))
k
r
Y
1 X X
z
=E r
W (Uh({n1 ,...,nr },r) )
ynj ,zj
k
r
j=1
z∈[q] n1 ,...,nr =1
Z
r
Y
X
r2
k!
W z (th([r]) )
φzj (tj )dλ(t) − kW k∞
≥ r
k (k − r)!
k
r
j=1
z∈[q]
[0,1]h([r])
2
≥ Eφ (W ) −
r
kW k∞ .
k
This argument proves the claim in expectation, concentration will be provided by standard martingale arguments. For convenience, we define a martingale by Y0 = EE(H(k, W ))
and Yj = E [E(H(k, W )) | { US | S ∈ h([j], r − 1) }] for 1 ≤ j ≤ k. The difference |Yj −
36
kW k∞ is bounded from above for any j, thus by the inequality of Azuma and
Yj+1| ≤ 2r
k
Hoeffding, Theorem 3.10, it follows that
2r 2
P E(H(k, W )) < E(W ) − 1/4 kW k∞
k
r2
≤ P E(H(k, W )) < EE(H(k, W )) − 1/4 kW k∞
k
√ !
r2 k
r2
.
(4.13)
= P Yk < Y0 − 1/4 kW k∞ ≤ exp −
k
8
So the lower bound (4.12) on E(H(k, W )) is established. Note that
by the condition regarding
√
r2 k
k we can establish (rather crudely) the upper bound exp(− 8 ) ≤ ε2−7.
Now we turn to prove that E(H(k, W )) < E(W ) + ε holds also with high probability
r+7 r 4
r+7 r
for k ≥ 2 ε q r log( 2 ε q r )q r . Our two main tools will be Theorem 4.8, that is a variant
the Cut Decomposition Lemma from [4] (closely related to the Weak Regularity Lemma
by Frieze and Kannan [16]), and linear programming duality, in the form of Theorem 4.12.
Recall the definition of the cut norm, for W : [0, 1]r → R, it is given as
kW k =
Z
max
S 1 ,...,S r ⊂[0,1]
W (x)dλ(x) ,
S 1 ×···×S r
and for an r-array G by the expression
kGk =
1
kr
max
S 1 ,...,S r ⊂V (G)
G(S 1 , . . . , S r ) .
Before starting the second part of the technical proof, we present an informal outline. Our
task is to certify that there is no assignment of the variables on the sampled energy problem,
which produces an overly large value relative to the ground state energy of the continuous
problem. For this reason we build up a cover of subsets over the set of fractional partitions of
the variables of the finite problem, also build a cover of subsets over the fractional partitions
of the original continuous energy problem, and establish an association scheme between the
elements of the two in such a way, that with high probability we can state that the optimum
on one particular set of the cover of the sampled energy problem does not exceed the optimal
value of the original problem on the associated set of the other cover. To be able to do this,
first we have to define these two covers, this is done with the aid of the cut decomposition, see
Theorem 4.8. We will replace the original continuous problem by an auxiliary one, where the
number of variables will be bounded uniformly in terms of our error margin ε. Theorem 4.9
makes it possible for us to replace the sampled energy problem by an auxiliary problem with
the same complexity as for the continuous problem. This second replacement will have a
straightforward relationship to the approximation of the original problem. We will produce
the cover sets of the two problems by localizing the auxiliary problems, association happens
37
through the aforementioned straightforward connection. Finally, we will linearize the local
problems, and use the linear programming duality principle from Theorem 4.12 to verify
that the local optimal value on the sample does not exceed the local optimal value on the
original problem by an infeasible amount, with high probability.
Recall that for a φ = (φ1 , . . . , φq ) a fractional q-partition of [0, 1] the energy is given by
the formula
X Z Y
φzj (tj )W z (t)dλ(t),
(4.14)
Eφ (W ) =
z∈[q]r[0,1]r j∈[r]
and for an x = (x1,1 , x1,2 , . . . , x1,q , x2,1 , . . . , xk,q ) a fractional q-partition of [k] by
X 1
Ex (H(k, W )) =
kr
r
z∈[q]
k
X
Y
xtj ,zj W z (Un1 , . . . , Unr ).
(4.15)
n1 ,...,nr =1 j∈[r]
We are going to establish a term-wise connection with respect to the parameter z in the
previous formulas. Therefore we consider the function
Z Y
z
z
Eφ (W ) =
φzj (tj )W z (t)dλ(t),
(4.16)
[0,1]r j∈[r]
P
it follows that Eφ (W ) =
z∈[q]r
Eφz (W z ). Analogously we consider
Exz (H(k, W z ))
1
= r
k
k
X
Y
xtj ,zj W z (Un1 , . . . , Unr ),
n1 ,...,nr =1 j∈[r]
so Ex (H(k, W )) = z∈[q]r Exz (H(k, W z )) with the sampled graphs on the right generated by
the same sample points. Note that the formulas (4.14)-(4.16) make prefect sense even when
the parameters φ and x are only vectors of bounded functions and reals respectively without
forming partition.
6 2r
Theorem 4.8 delivers for any z ∈ [q]r an integer sz ≤ 2 εq2 , measurable sets Sz,i,j ⊂ [0, 1]
with i = 1, . . . , sz , j = 1, . . . , r, and the real numbers dz,1 , . . . , dz,sz such that the conditions
of the lemma are satisfied, namely
P
kW z −
Psz
8q r
sz
X
i=1
dz,i ISz,i,1 ×···×Sz,i,r k ≤
ε
kW z k2 ,
8q r
and i=1 |dz,i | ≤ ε kW z k2 . The
Psz cut function allows a sufficiently good approximation for
z
z
Eφ (W ), for any φ. Let D = i=1 dz,i ISz,i,1 ×···×Sz,i,r . Then
|Eφz (W z )
−
Eφz (D z )|
=
Z
Y
[0,1]r j∈[r]
φzj (tj ) [W z (t) − D z (t)] dλ(t)
38
≤ kW z − D z k ≤
ε
kW z k∞ .
8q r
We apply the cut approximation to W z for every z ∈ [q]r to obtain the [q]r -tuple of naive
r-kernels D = (D z )z∈[q]r . We define the ”push-forward” of this approximation for the sample
H(k, W ). To do this we need to define the subsets [k] ⊃ Ŝz,i,j = { m | Um ∈ Sz,i,j }. Let
Pz
dz,i IŜz,i,1 ×···×Ŝz,i,r . First we condition on the event from Theorem 4.9, call this
D̂ z = si=1
event E1 , that is
\
ε
z
z
z
z
kH(k, W ) − D̂ k − kW − D k < r kW k∞ .
E1 =
8q
r
z∈[q]
On E1 it follows that for any x that is a fractional q-partition
|Exz (H(k, W z )) − Exz (D̂ z )| ≤ kH(k, W z ) − D̂ z k
ε
≤ kW z − D z k + r kW k∞ .
8q
This implies that
ε
and |Ex (H(k, W )) − Ex (H(k, D))| ≤ kW k∞ .
4
7 r 2 4
2
The probability that E1 fails is at most 2q r exp − 211εr2kq2r whenever k ≥ 2 qε r
due to
r+7 r 4
r+7 r
Theorem 4.9, in the current theorem we have the condition k ≥ 2 ε q r log( 2 ε q r )q r ,
which implies the aforementioned one. The failure probability of E1 is then strictly less than
ε
.
27
Let S = { Sz,i,j | z ∈ [q]r , 1 ≤ i ≤ sz , 1 ≤ j ≤ r } denote their set, and let S ′ stand for
the corresponding set on the sample. Note that s′ = |S| ≤ 26 rq 3r ε12 in general, but in some
cases the W z functions are constant multiples of each other, so the cut approximation can be
chosen in a way that Sz,i,j does not depend on z ∈ [q]r , and in this case we have the slightly
refined upper bound 26 rq 2r ε12 for s′ , consequences of this in the special case are discussed in
the remark after the proof. Let η > 0 be arbitrary, and define the sets
ε
|Eφ (W ) − Eφ (D)| ≤ kW k∞
8
I(b, η) =
and
I ′ (b, η) =
φ | ∀z ∈ [q]r , 1 ≤ i ≤ sz , 1 ≤ j ≤ r :
Z
φzj (t)dt − bz,i,j ≤ 2η
Sz,i,j
x | ∀z ∈ [q]r , 1 ≤ i ≤ sz , 1 ≤ j ≤ r :
39
1
kU
X
n ∈Sz,i,j
,
xn,zj − bz,i,j ≤ η
For a collection of non-negative reals {bz,i,j }. At this point in the definitions of the above sets
we do not require φ and x to be fractional q-partitions, but to be vectors of bounded functions
and vectors respectively. We will use the grid points A = { (bz,i,j )z,i,j | ∀z, i, j : bz,i,j ∈
[0, 1] ∩ ηZ }.
On every nonempty set I(b, η) we can produce a linear approximation of Eφ (D) (linearity
is meant in the functions φm ) which carries through to a linear approximation of Ex (H(k, D))
via sampling. The precise description of this is given in the next auxiliary result.
Lemma 4.13 (Local linearization). If η ≤ 16qεr 2r , then for every b ∈ A there exist l0 ∈ R
and functions l1 , l2 , . . . , lq : [0, 1] → R such that for every φ ∈ I(b, η) it holds that
Z1 X
q
Eφ (D) − l0 −
0
lm (t)φm (t)dt <
m=1
ε
2r+3
kW k∞ ,
and for every x ∈ I ′ (b, η) we have
q
k X
X
1
ε
Ex (H(k, D)) − l0 −
xn,m lm (Ui ) < r+5 kW k∞ .
k
2
n=1 m=1
Additionally we have that l1 , l2 , . . . , lq are bounded from above by
22r+9 r 2 q 3r kW k2∞ .
8q 2r
kW k∞
ε
and
R 1 Pq
0
2
m=1 lm (t)dt
Proof. Recall the decomposition of the energies as sums over z ∈ [q]r into terms
Eφz (D z )
=
sz
X
dz,i
i=1
=
sz
X
[0,1]r
dz,i
i=1
and
Exz (D̂ z )
Z Y
r
=
Z
[0,1]r
sz
X
i=1
j=1
φzj (tj )ISz,i,1 ×···×Sz,i,r (t)dt
q
r
Y
Y
m=1 j=1
zj =m
φm (tj )ISz,i,1 ×···×Sz,i,r (t)dt,
q
r
1 Y Y
dz,i r
k m=1 j=1
zj =m
X
xn,m .
n : Un ∈Sz,i,j
We linearize and compare the functions Eφz (D z ) and Exz (D̂ z ) term-wise. In the end we will
sum up the errors and deviations occurred at each term. Let b ∈ A and η > 0 as in the
statement of the lemma with I(b, η) being nonempty. Let us fix an arbitrary φ ∈ I(b, η),
z ∈ [q]r , and 1 ≤ i ≤ sz . Then
1
1
Z
Z
r
r
Y
X
φzj (tj )ISz,i,j (tj )dtj = B i (z) +
φzj (tj )ISz,i,j (tj )dtj − bz,i,j B i,j (z) + ∆
j=1
0
j=1
0
40
≤
1
= (1 − r)B i (z) +
q Z
X
m=1 0
φm (t)
r
X
j=1,zj =m
ISz,i,j (t)B i,j (z) dt + ∆,
Q
Q
where B i (z) stands for rj=1 bz,i,j , B i,j (z) = l6=j bz,i,l , and |∆| ≤ 4η 2 2r . Analogously for an
arbitrary fixed element x ∈ I ′ (b, η) and a term of Exz (D̂ z ) we have
r
Y
X
1
xn,zj − bz,i,j + bz,i,j
k
j=1
n : Un ∈Sz,i,j
q
k
r
XX 1
X
= (1 − r)B i (z) +
xn,m
ISz,i,j (Un )B i,j (z) + ∆′ ,
k
m=1 n=1
j=1,z =m
j
where |∆′ | ≤ η 2 2r .
If we multiply these former expressions by the respective coefficient dz,i and sum up over
i and z, then we obtain the final linear approximation consisting of the constant l0 and the
functions l1 , . . . , lq . We would like to add that these objects do not depend on η if I(b, η) is
nonempty, only the accuracy of the approximation does. As overall error in approximating
2r
ε
kW k∞ , and in
the energies we get in the first case of Eφ (D) at most 32η 2 2r qε kW k∞ ≤ 2r+3
ε
the second case of Ex (H(k, D)) at most 2r+5 kW k∞ .
Now we turn to prove the upper bound on |lm (t)|. Looking at the above formulas we
could write out lm (t) explicitly, for our upper bound it is enough to note that
r
X
ISz,i,j (t)B i,j (z)
j=1,zj =m
is at most r. So it follows that for any t ∈ [0, 1] it holds that
8q 2r
rkW k∞ .
ε
R1P
2
It remains to verify the assertion regarding 0 qm=1 lm
(t)dt. Note that I(b, η) ⊂ I(b, 2η),
so we can apply the same linear approximation to elements ψ of I(b, 2η) as above with a
ε
kW k∞ from Eψ (D). Let φ be an arbitrary element
deviation of at most 2r+1
Pq of I(b, η), and
let T ⊂ [0, 1] denote the set of measure η corresponding to the largest m=1 |lm (t)| values.
Define
(
φm (t) + sgn(lm (t)) if t ∈ T
φ̂m (t) =
φm (t)
otherwise.
|lm (t)| ≤
Then φ̂ ∈ I(b, 2η), since kφm − φ̂m k1 ≤ η for each m ∈ [q], but φ̂ is not necessarily a fractional
partition. Therefore we have
Z X
q
T
m=1
|lm (t)|dt =
Z1 X
q
0
(φ̂m (t) − φm (t))lm (t)dt
m=1
41
≤
Z1 X
q
m=1
0
φ̂m (t)lm (t)dt − Eφ̂ (D) + |Eφ̂ (D) − Eφ (D)|
Z1 X
q
+
m=1
0
5
≤
φm (t)lm (t)dt − Eφ (D)
εkW k∞ + |Eφ̂ (D) − Eφ (D)|.
2r+3
We have to estimate the last term of the above expression.
!
r
r
X Z
Y
Y
|Eφ̂ (D) − Eφ (D)| ≤
φzj (tj ) −
φ̂zj (tj ) D z (t)dt
z∈[q]r [0,1]r
≤ 2kW k∞
j=1
r
r
Y
X Z X
z∈[q]r[0,1]r j=1
r r−1
≤ 2kW k∞ 2 q
T
m=1
r
q
X
m=1
We conclude that
Z X
q
j=1
i>j
5
2r+3
r
+ 3
2
This further implies that for each t ∈
/ T we have
r+1
r
(10 + 2 r) q kW k∞ . These former bounds indicate
Z1 X
q
0
2
lm
(t)dt
Z
=
m=1
[0,1]\T
≤2
q
X
2
lm
(t)dt
+
m=1
r q
kW k2∞
εkW k∞.
Pq
m=1
Z X
q
T
2r+8 2 2r
φ̂zi (ti )(φ̂zj (tj ) − φzj (tj )) dt
kφm − φ̂m k1 ≤ 2kW k∞ 2r q r rη.
|lm (t)|dt ≤
φzi (ti )
i<j
r
Y
|lm (t)| ≤
≤
≤
(2
r
23
ε
η
kW k∞ ≤
m=1
+ klk∞
2
r q kW k2∞ +
22r+9 r 2 q 3r kW k2∞ .
+
2
lm
(t)dt
Z X
q
T
2r+8 2 2r
5
2r+3
r+4
m=1
|lm (t)|dt
rq r )(8q 2r r)kW k2∞
We return to the proof of the main theorem, and set η = 16qεr 2r . For each b ∈ A we apply
Theorem 4.13, so that we have for any φ ∈ I(b, η) and x ∈ I ′ (b, η) that
1
Eφ (W ) − l0 −
q Z
X
φm (t)lm (t)dt =
m=1 0
42
ε
2r+3
kW k∞ ,
Ex (H(k, W )) − l0 −
k
X
1
n=1
k
xn,m lm (Un ) =
ε
2r+5
kW k∞ ,
since η is small enough. Note that l0 , l1 , . . . , and lq inherently depend on b. We introduce
the event E2 (b), which stands for the occurrence of the following implication:
If the linear program
maximize
subject to
q
k X
X
1
l0 +
xn,m lm (Un )
k
n=1 m=1
x ∈ I ′ (b, η)
0 ≤ xn,m ≤ 1
q
X
xn,m = 1
for n = 1, . . . , k and m = 1, . . . , q
for m = 1, . . . , q
m=1
has optimal value α, then the continuous linear program
maximize
l0 +
Z1 X
q
0
subject to
lm (t)φm (t)dt
m=1
φ ∈ I(b, η)
0 ≤ φm (t) ≤ 1
q
X
φm (t) = 1
for t ∈ [0, 1] and m = 1, . . . , q
for t ∈ [0, 1]
m=1
has optimal value at least α − (ε/2)kW k∞ .
2r
We apply Theorem 4.12 with δ = η, σ 2 = 22r+9 r 2 q 3r kW k2∞ , d = 8qε rkW k∞ , and γ =
and attain that the probability that E2 (b) fails is at most
εkW k∞
kη 2
εkW k∞
εkW k∞
+ exp −kγ (1 +
exp −
) ln(1 +
)−
2
γd
γd
γd
2
1
kε
≤ exp − 8 2r 2r + exp −kε2 22r+3 q −r
4r+15
2 q 2
2
qr r2
kε2
kε2
kε2
= exp − 2r+8 2r + exp − 2r+12 2r 2 ≤ 2 exp − 2r+12 2r 2 ,
2
q
2
q r
2
q r
σ2
,
d2
where we used that (1 + x) ln(1 + x) − x ≥ (1 + x)(x − x2 /2) − x = x2 /2 − x3 /2 ≥ x2 /4 for
0 ≤ x ≤ 41 . Denote by E2 the event that for each b ∈ A the event E2 (b) occurs. Then we
have
r+3 r 26 rq3r 12
ε
kε2
2 q
exp − 2r+12 2r 2
P(E2 ) ≥ 1 − 2
ε
2
q r
43
r+3 r
2r+7q r r 2r+16 2 3r −2
2 q
6 3r −2
2 rq ε − log(
)2
r q ε
≥ 1 − 2 exp log
ε
ε
2r+7q r r 2r+15 2 3r −2
)2
r q ε
≥ 1 − 2 exp − log(
ε
≥ 1 − ε/4.
r+7 r 4
r+7 r
Therefore for k ≥ 2 ε q r log( 2 ε q r )q r we have that P(E1 ∩ E2 ) ≥ 1 − ε/2. We only need
to check that conditioned on E1 and E2 our requirements are fulfilled. For this, consider
an arbitrary fractional q-partition of [k] denoted by x. For some b ∈ A we have that
x ∈ I ′ (b, η). If we sum up the error gaps that were allowed for the Cut Decomposition and at
the local linearization stage, then the argument we presented above yields that there exists
a φ ∈ I(b, η) such that conditioned on the event E1 ∩ E2 it holds
Eφ (W ) ≥ Ex (H(k, W )) − εkW k∞ .
This is what we wanted to show.
We can improve on the tail probability bound in Theorem 4.4 significantly by a constant
factor strengthening of the lower threshold condition imposed on the sample size.
Corollary 4.14. Let r ≥ 1, q ≥ 1, and ε > 0. Then for any [q]r -tuple of ([−d, d], r)-graphons
r+10 r
W = (W z )z∈[q]r and k ≥ Θ4 log(Θ)q r with Θ = 2 ε q r we have that
2
εk
(4.17)
P(|E(W ) − Ê(G(k, W ))| > εkW k∞) < 2 exp − 2 .
8r
Proof. For k ≥ Θ4 log(Θ)q r we appeal to Theorem 4.4, hence
ˆ
ˆ
|E(W ) − EE(G(k,
W ))| ≤ P(|E(W ) − E(G(k,
W ))| > ε/8kW k∞)2kW k∞ + ε/8kW k∞
< ε/2kW k∞ .
Using a similar martingale construction to the one in the first part of the proof of Theorem 4.4
the Azuma-Hoeffding inequality can be applied, thus
ˆ
P(|E(W ) − E(G(k,
W ))| > εkW k∞ ) ≤ P(|EÊ(G(k, W )) − Ê(G(k, W ))| > ε/2kW k∞)
2
ε k
≤ 2 exp − 2 .
8r
Remark 4.15. A simple investigation of the above proof also exposes that in the case when
the W z ’s are constant multiples of each other then we can employ the same cut decomposition
to all of them with the right scaling, which implies that the upper bound on |S| can be
strengthened to 26 rq 2r ε12 , gaining a factor of q r . Therefore in this case the statement of
r+10 r 4
r+10 r
Theorem 4.14 is valid with the improved lower bound condition 2 ε q r log( 2 ε q r ) on
k.
44
Remark 4.16. Suppose that f is the following simple graph parameter. Let q ≥ 1, m0 ≥ 1,
and g be a polynomial of l variables and degree d with values between 0 and 1 on the unit
cube, where l is the number of unlabeled node-q-colored graphs on m0 vertices, whose set
2
we denote by Mq,m0 . Note that l ≤ 2m0 /2 q m0 . Let then
f (G) = max g((t(F, (G, T )))F ∈Mq,m0 ),
(4.18)
T
where the maximum goes over all node-q-colorings of G, and (G, T ) denotes the node-qcolored graph by imposing T on the node set of G. Using the identity t(F1 , G)t(F2 , G) =
t(F1 ∪ F2 , G), where F1 ∪ F2 is the disjoint union of the (perhaps colored) graphs F1 and F2 ,
we can replace in (4.18) g by g ′ that is linear, and its variables are indexed by Mq,dm0 . Then
it becomes clear that f can be regarded as a ground state energy of dm0 -dimensional arrays
by associating to every G an tuple (Az )z∈[q]r with r = dm0 , where the entries Az (i1 , . . . , ir )
are the coefficients of g ′ corresponding to the element of Mq,dm0 given by the pair z and
G|(i1 ,...,ir ) . We conclude that f is efficiently testable by Theorem 4.4.
5
Testability of variants of the ground state energy
In the current section we derive further testability results using the techniques employed
in the proofs of the previous section, and apply Theorem 4.4 to some specific quadratic
programming problems.
5.1
Microcanonical version
Next we will state and prove the microcanonical version of Theorem 4.4, that is the continuous generalization of the main result of [13] for an arbitrary number q of the states. To be
able to do this, we require the microcanonical analog of Theorem 4.7, that will be a generalization of Theorem 5.5 from [11] for arbitrary r-graphs (except for the fact that we are
not dealing with node weights), and its proof will also follow the lines of the aforementioned
theorem. Before stating the lemma, we outline some notation and state yet another auxiliary
lemma.
P
Definition 5.1. Let for a = (a1 , . . . , aq ) ∈ Pdq (that is, ai ≥ 0 for each i ∈ [q] and i ai = 1)
denote
Z1
Ωa = φ fractional q-partition of [0, 1] | φi (t)dt = ai for i ∈ [q] ,
0
ωa =
x fractional q-partition of V (G) |
45
1
|V (G)|
X
u∈V (G)
xu,i = ai for i ∈ [q]
,
and
ω̂a =
(
x integer q-partition of V (G) |
P
u∈V (G)
xu,i
|V (G)|
1
− ai ≤
for i ∈ [q]
|V (G)|
)
.
The elements of the above sets are referred to as integer a-partitions and fractional apartitions, respectively.
We call the following expressions microcanonical ground state energies with respect to a
for (K, r)-graphs and graphons and C(K)-valued r-arrays J, in the finite case we add the
term fractional and integer respectively to the name. Denote
Ea (W, J) = max Eφ (W, J),
φ∈Ωa
Ea (G, J) = max Ex (G, J),
x∈ωa
Êa (G, J) = max Ex (G, J).
x∈ω̂a
The layered versions for a finite layer set E, and the canonical versions Ea (W ), Ea (G),
and Eˆa (G) are defined analogously.
The requirements for an x to be an integer fractional a-partition (that is φ ∈ Ωa ) are
rather strict and we are not able to guarantee with high probability that if we sample from
an fractional a-partition of [0, 1], that we will receive an fractional a-partition on the sample,
in fact this will not happen with probability 1. To tackle this problem we need to establish
an upper bound on the difference of two microcanonical ground state energies with the same
parameters. This was done in the two dimensional case in [11], we slightly generalize that
approach.
Lemma 5.2. Let r ≥ 1, and q ≥ 1. Then for any [q]r -tuple of naive r-kernels W =
(W z )z∈[q]r , and probability distributions a, b ∈ Pdq we have
|Ea (W ) − Eb (W )| ≤ rkW k∞ ka − bk1 .
The analogous statement is true for a [q]r -tuple of ([−d, d], r)-digraphs G = (Gz )z∈[q]r ,
|Ea (G) − Eb (G)| ≤ rkGk∞ ka − bk1 .
Proof. We will find for each fractional a-partition φ a fractional b-partition φ′ and vice
versa, so that the corresponding energies are as close to each other as in the statement. So
let φ = (φ1 , . . . , φq ) be an arbitrary fractional a-partition, we define φ′i so that the following
holds: if ai ≥ bi then φ′i (t) ≤ φi (t) for every t ∈ [0, 1], otherwise φ′i (t) ≥ φi (t) for every
t ∈ [0, 1]. It is easy to see that such a φ′ = (φ′1 , . . . , φ′q ) exists. Next we estimate the energy
deviation.
|Eφ (W ) − Eφ′ (W )| ≤
X
Z
z∈[q]r [0,1]r
φz1 (x1 ) . . . φzr (xr ) − φ′z1 (x1 ) . . . φ′zr (xr )dλ(x) kW k∞
46
≤
=
Z
r
X X
z∈[q]r m=1 [0,1]r
r Z
X X
z∈[q]r
=
m=1
[0,1]
q Z
r X
X
m=1 j=1
(φzm (xm ) − φ′zm (xm ))
Y
j<m
φzm (xm ) − φ′zm (xm ) dxm
φj (t) − φ′j (t) dt
[0,1]
q
X
j=1
φzj (xj )
aj
Y
j<m
!m−1
Y
j>m
azj
Y
j>m
q
X
bj
j=1
φ′zj (xj )dλ(x) kW k∞
bzj kW k∞
!r−m−1
kW k∞
= rka − bk1 kW k∞ .
The same way we can find for any fractional b-partition φ an fractional a-partition φ′ so that
their respective energies differ at most by rka − bk1 kW k∞ . This implies the first statement
of the lemma. The finite case is proven in a completely analogous fashion.
We are ready to show that the difference of the fractional and the integer ground state
energies is o(|V (G)|) whenever all parameters are fixed, this result is a generalization with
respect to the dimension in the non-weighted case of Theorem 5.5 of [11], the proof proceeds
similar to the one concerning the graph case that was dealt with in [11].
Lemma 5.3. Let q, r, k ≥ 1,a ∈ Pdq , and G = (Gz )z∈[q]r be a tuple of ([−d, d], r)-graphs on
[k]. Then
1
|Ea (G) − Eˆa (G)| ≤ kGk∞ 5r q r+1 .
k
Proof. The inequality Ea (G) ≤ Êa (G) + k1 kGk∞ 5r q r+1 follows from Theorem 5.2. Indeed, for
this bound a somewhat stronger statement it possible,
Eˆa (G) ≤
q
Eb (G) ≤ Ea (G) + r kGk∞ .
b : |bi −ai |≤1/k
k
max
Now we will show that Eˆa (G) ≥ Ea (G) − k1 kGk∞ 5r q r+1 . We consider an arbitrary fractional
a-partition x. A node i from [n] is called bad in a fractional partition x, if at least two
elements of {xi,1 , . . . , xi,q } are positive. We will reduce the number of fractional entries of
the bad nodes of x step by step until we have at most q of them, and keep track of the cost
of each conversion, at the end we round the corresponding fractional entries of the remaining
bad nodes in some certain way.
We will describe a step of the reduction of fractional entries. For now assume that we
have at least q + 1 bad nodes and select an arbitrary set S of cardinality q + 1 of them. To
each element of S corresponds a q-tuple of entries and each of these q-tuples has at least two
non-{0, 1} elements.
We reduce the number of fractional entries corresponding to S while not disrupting any
entries corresponding to nodes that lie outside of S. To do this we fix for each i ∈ [q] the
47
P
P
sums v∈S xv,i and for each v ∈ S the sums qi=1 xv,i (these latter are naturally fixed to be
1), in total 2q + 1 linear equalities. We have at least 2q + 2 fractional entries corresponding to
S, therefore there exists a subspace of solutions of dimension at least 1 for the 2q + 1 linear
equalities. That is, there is a family of fractional partitions parametrized by −t1 ≤ t ≤ t2
for some t1 , t2 > 0 that obey our 2q + 1 fixed equalities and have the following form. Let
xti,j = xi,j + tβi,j , where βi,j = 0 if i ∈
/ S or xi,j ∈ {0, 1}, and βi,j 6= 0 else, together these
entries define xt . The boundaries −t1 and t2 are non-zero and finite, because eventually
an entry corresponding to S would exceed 1 or would be less than 0 with t going to plus,
respectively minus infinity. Therefore at these boundary points we still have an fractional apartition that satisfies our selected equalities, but the number of fractional entries decreases
by at least one. We will formalize how the energy behaves when applying this procedure.
Ext (G) = Ex (G) + c1 t + · · · + cr tr ,
where for l ∈ [r] we have
cl =
1 X
kr
r
z∈[q]
X
βu1 ,zπ(1) . . . βul ,zπ(l) xul+1 ,zπ(l+1) . . . xur ,zπ(r) Gz (uπ(1) , . . . , uπ(r) ),
u1 ,...,ul ∈S
ul+1 ,...,ur ∈V \S
π
where the second sum runs over permutations π of [k] that preserves the ordering of the
elements of {1, . . . , l} and {l + 1, . . . , r} at the same time. We deform the entries corresponding to S through t in the direction so that c1 t ≥ 0 until we have eliminated at
least one fractional entry, that is we set t = −t1 , if c < 0, and t = t2 otherwise. Note,
that as xt is a fractional partition, therefore 0 ≤ xP
i,j + tβi,j ≤ 1, which implies that for
tβ
Pi,j ≤ 0 we have
P |tβi,j | ≤ xi,j . On the
P other hand, j tβi,j = 0 for any t and i. Therefore
|tβ
|
=
2
|tβ
|I
≤
2
i,j
i,j {tβi,j ≤0}
j
j
j xi,j = 2 for any i ∈ [k]. This simple fact enables us
to upper bound the absolute value of the terms cl tl .
|cl tl | ≤
X X
(k − q − 1)r−l
|tβu1 ,zπ(1) | . . . |tβul ,zπ(l) |
kGk
∞
kr
r u ,...,u ∈S
z∈[q]
1
π
l
(k − q − 1)r−l
r r−l X X
=
q
|tβu1 ,z1 | . . . |tβul ,zl |
kGk
∞
l
kr
z∈[q]l u1 ,...,ul ∈S
l
X
r r−l
r r−l
1
1
q
|tβu,j | ≤ l kGk∞
q (2q + 2)l .
≤ l kGk∞
k
l
k
l
u∈S,j∈[q]
It follows that in each step of elimination of a fractional entry of x we have to admit a
decrease of the energy value of at most
r
X
l=2
|cl tl | ≤
1
kGk∞ (3q + 2)r .
2
k
48
There are in total kq entries in x, therefore, since in each step the number of fractional entries
is reduced by at least 1, we can upper bound the number of required steps for reducing the
cardinality of bad nodes to at most q by k(q − 1), and conclude that we admit an overall
energy decrease of at most k1 kGk∞ (q − 1)(3q + 2)r to construct from x a fractional partition
x′ with at most q nodes with fractional entries In the second stage we proceed as follows.
Let B = {u1 , . . . , um } be the set of the remaining bad nodes of x′ , with m ≤ q. For ui ∈ B
we set x′′ui ,j = Ii (j), for the rest of the nodes we set x′′ = x′ , obtaining an integer a-partition
of [k]. Finally, we estimate the cost of this operation. We get that
Ex′′ (G) ≥ Ex′ (G) −
1
kGk∞ |B|k r−1 q r .
r
k
The original fractional a-partition was arbitrary, therefore it follows that
1
Ea (G) − Eˆa (G) ≤ kGk∞ 5r q r+1 .
k
We are ready state the adaptation of Theorem 4.4 adapted to the microcanonical setting.
Theorem 5.4. Let r ≥ 1, q ≥ 1, a ∈ Pdq , and ε > 0. Then for any [q]r -tuple of ([−d, d, r])r+7 r
graphons W = (W z )z∈[q]r and k ≥ Θ4 log(Θ)q r with Θ = 2 ε q r we have
P |Ea (W ) − Êa (G(k, W ))| > εkW k∞ < ε.
r+7 r
Proof. Let W be as in the statement and k ≥ Θ4 log(Θ)q r with Θ = 2 ε q r . We start with
pointing out that we are allowed to replace the quantity Eˆa (G(k, W )) by Ea (G(k, W )) in
the statement of the theorem by Theorem 5.3 and only introduce an initial error at most
1
kGk∞ 5r q r+1 ≤ 2ε kW k∞ .
k
The lower bound on Ea (G(k, W )) is the result of standard sampling argument combined
with Theorem 5.2. Let us consider a fixed a-partition φ of [0, 1], and define the random
fractional partition of [k] as yn,m = φm (Un ) for every n ∈ [k] and m ∈ [q]. The partition y
is not necessarily an fractional a-partition, but it can not be very far from being one. For
m ∈ [q] it holds that
!
Pk
n=1 yn,m
− am ≥ ε ≤ 2 exp(−ε2 k/2),
P
k
therefore for our choice of k the sizes of the partition classes obey | k1
for every m ∈ [q] with probability at least 1 − ε/2.
We appeal to Theorem 5.2 to conclude
EEa (G(k, W )) ≥ EEy (G(k, W )) − (ε/2)kW k∞
49
Pk
n=1 yn,m −am |
<
ε
2(q+1)
r
k
Y
1 X X
ynj ,zj − (ε/2)kW k∞
W (Un1 , . . . , Unr )
=E r
k
r
n
,...,n
=1
j=1
z∈[q] 1
r
2
Z
r
Y
X
r
k!
W (t1 , . . . , tr )
φzj (tj )dt −
+ ε/2 kW k∞
≥ r
k (k − r)!
k
r
j=1
z∈[q] [0,1]r
≥ Eφ (W ) −
r2
+ ε/2 kW k∞ .
k
The concentration of the random variable Ea (G(k, W )) can be obtained through martingale
arguments identical to the technique used in the proof of the lower bound in Theorem 4.4.
For the upper bound on Ea (G(k, W )) we are going to use the cut decomposition and local
linearization, the approach to approximate the energy of Eφ (W ) and Ex (G(k, W )) for certain
partitions φ, respectively x is completely identical to the proof of Theorem 4.4, therefore we
borrow all the notation from there, and we do not refer to again in what follows.
Now we consider a b ∈ A and define the event E3 (b) that is occurrence the following
implication.
If the linear program
maximize
subject to
q
k X
X
1
l0 +
xn,m lm (Un )
k
n=1 m=1
x ∈ I ′ (b, η) ∩ ωa
0 ≤ xn,m ≤ 1
q
X
xn,m = 1
for n = 1, . . . , k and m = 1, . . . , q
for n = 1, . . . , k
m=1
has optimal value α, then the continuous linear program
maximize
l0 +
Z1 X
q
0
subject to
lm (t)φm (t)dt
m=1
φ ∈ I(b, η) ∩
[
c : |ai −ci |≤η
0 ≤ φm (t) ≤ 1
q
X
φm (t) = 1
Ωc
for t ∈ [0, 1] and m = 1, . . . , q
for t ∈ [0, 1]
m=1
has optimal value at least α − 2ε kW k∞ .
Recall that η = 16qεr 2r . It follows by applying Theorem 4.12 that E3 (b) has probability
kε2
at least 1 − 2 exp − 22r+12 q2r r2 . When conditioning on E1 , the event from the proof of
50
Theorem 4.4, and E3 = ∩b∈A E3 (b) we conclude that
Ea (G(k, W )) ≤
max
c : |ai −ci |≤η
Ec (W )+ε/2)kW k∞ ≤ Ea (W )+(rqη+ε/2)kW k∞ ≤ Ea (W )+εkW k∞.
Also, like in Theorem 4.4, the probability of the required events to happen simultaneously
is at least 1 − ε/2. This concludes the proof.
5.2
Quadratic assignment and maximum acyclic subgraph problem
The two optimization problems that are the subject of this subsection, the quadratic assignment problem (QAP) and maximum acyclic subgraph problem (AC), are known to be
NP-hard, similarly to MAX-rCSP that was investigated above. The first polynomial time
approximation schemes were designed for the QAP by Arora, Frieze and Kaplan [6]. Dealing with a QAP means informally that one aims to minimize the transportation cost of his
enterprise that has n production locations and n types of production facilities. This is to
be achieved by an optimal assignment of the facilities to the locations with respect to the
distances (dependent on the location) and traffic (dependent on the type of the production).
In formal, terms this means that we are given two real quadratic matrices of the same size,
G and J ∈ Rn×n , and the objective is to calculate
n
X
1
Q(G, J) = 2 max
Ji,j Gρ(i),ρ(j) ,
n ρ i,j=1
where ρ runs over all permutations of [n]. We speak of metric QAP, if the entries of J are all
non-negative with zeros on the diagonal, and obey the triangle inequality, and d-dimensional
geometric QAP if the rows and columns of J can be embedded into a d-dimensional Lp
metric space so that distances of the images are equal to the entries of J.
The continuous analog of the problem is the following. Given the measurable functions
W, J : [0, 1]2 → R, we are interested in obtaining
Z
Q̂ρ (W, J) =
J(x, y)W (ρ(x), ρ(y))dxdy,
Q̂(W, J) = max Q̂ρ (W, J),
ρ
[0,1]2
where ρ in the previous formula runs over all measure preserving permutations of [0, 1].
In even greater generality we introduce the QAP with respect to fractional permutations of
[0, 1]. A fractional permutation µ is a probability kernel, that is µ : [0, 1] × L([0, 1]) → [0, 1]
so that
(i) for any A ∈ L([0, 1]) the function µ(., A) is measurable,
(ii) for any x ∈ [0, 1] the function µ(x, .) is a probability measure on L([0, 1]), and
R1
(iii) for any A ∈ L([0, 1]) 0 dµ(x, A) = λ(A).
51
Here L([0, 1]) is the σ-algebra of the Borel sets of [0, 1].
Then we define
Z Z
Qµ (W, J) =
J(α, β)W (x, y)dµ(α, x)dµ(β, y)dαdβ,
[0,1]2 [0,1]2
and
Q(W, J) = max Qµ (W, J),
µ
where the maximum runs over all fractional permutations. For each measure preserving
permutation ρ one can consider the fractional permutation µ with the probability measure
µ(α, .) is defined as the atomic measure δρ(α) concentrated on ρ(α), for this choice of µ we
have Qρ (W, J) = Qµ (W, J).
An r-dimensional generalization of the problem for J and W : [0, 1]r → R is
Z Z
Q(W, J) = max
J(α1 , . . . , αr )W (x1 , . . . , xr )dµ(α1 , x1 ) . . . dµ(αr , xr )dα1 . . . dαr ,
µ
[0,1]r [0,1]r
where the maximum runs over all fractional permutations µ of [0, 1]. The definition of the
finitary case in r dimensions is analogous.
A special QAP is the maximum acyclic subgraph problem (AC). Here we are given a
weighted directed graph G with vertex set of cardinality n, and our aim is to determine the
maximum of the total value of edge weights of a subgraph of G that contains no directed
cycle. We can formalize this as follows. Let G ∈ Rn×n be the input data, then the maximum
acyclic subgraph density is
n
X
1
AC(G) = 2 max
Gi,j I(ρ(i) < ρ(j)),
n ρ i,j=1
where ρ runs over all permutations of [n].
This can be thought of as a QAP with the restriction that J is the upper triangular
n × n matrix with zeros on the diagonal and all nonzero entries being equal to 1. However in
general AC cannot be reformulated as metric QAP. The continuous version of the problem
Z
ˆ
AC(W ) = sup
I(φ(x) > φ(y))W (x, y)dxdy
φ
[0,12 ]
for a function W : [0, 1]2 → R is defined analogous to the QAP, where the supremum runs
over measure preserving permutations φ : [0, 1] → [0, 1], as well as the relaxation AC(W ),
where the supremum runs over probability kernels.
Both the QAP and the AC problems resemble the ground state energy problems that
were investigated in previous parts of this paper. In fact, if the number of clusters of the
distance matrix J in the QAP is bounded from above by an integer that is independent from
52
n, then this special QAP is a ground state energy with the number of states q equal to the
number of clusters of J. By the number of clusters we mean here the smallest number m
such that there exists an m × m matrix J ′ so that J is a blow-up of J ′ , that is not necessarily
equitable. To establish an approximation to the solution of the QAP we will only need the
cluster condition approximately, and this will be shown in what follows.
Definition 5.5. We call a measurable function J : [0, 1]r → R ν-clustered for a non-increasing
function ν : R+ → R+ , if for any ε > 0 there exists another measurable function J ′ : [0, 1]r →
R that is a step function with ν(ε) steps and kJ − J ′ k1 < εkJk∞ .
Note, that by the Weak Regularity Lemma ([16]), ??, any J can be well approximated
1
by a step function with ν(ε) = 2 ε2 steps in the cut norm. To see why it is likely that this
approximation will not be sufficient for our purposes, consider an arbitrary J : [0, 1]r → R.
Suppose that we have an approximation in the cut norm of J at hand denoted by J ′ . Define
the probability kernel µ0 (α, .) = δα and the naive r-kernel W0 = J − J ′ . In this case
|Qµ0 (W, J) − Qµ0 (W, J ′ )| = kJ − J ′ k22 . This 2-norm is not granted to be small compared to
ε by any means.
In some special cases, for example if J is a d-dimensional geometric array or the array
corresponding to the AC, we are able to require bounds on the number of steps required for
the 1-norm approximation of J that are sub-exponential in 1ε . By the aid of this fact we can
achieve good approximation of the optimal value of the QAP via sampling. Next we state
an application of Theorem 4.4 to the clustered QAP.
Lemma 5.6. Let ν : R+ → R+ be nondecreasing, and let J : [0, 1]r → R be a ν-clustered
measurable function. Then there exists an absolute constant c > 0 so that for every ε > 0,
r
r
)( ν(ε)
)4 we have
every naive r-kernel W , and k ≥ c log( ν(ε)
ε
ε
P(|Q(W, J) − Q(G(k, W ), G′ (k, J))| ≥ εkW k∞ kJk∞ ) ≤ ε,
where G(k, W ) and G′ (k, J) are generated by distinct independent samples.
Proof. Without loss of generality we may assume that kJk∞ ≤ 1. First we show that under
the cluster condition we can introduce a microcanonical ground state energy problem whose
optimum is close to Q(W, J), and the same holds for the sampled problem. Let ε > 0 be
arbitrary and J ′ be an approximating step function with q = ν(ε) steps. We may assume
that kJ ′ k∞ ≤ 1. We set a = (a1 , . . . , aq ) to be the vector of the sizes of the steps of J ′ , and
construct from J ′ a real r-array of size q in the natural way by associating to each class of
the steps of J ′ an element of [q] (indexes should respect a), and set the entries of the r-array
corresponding to the value of the respective step of J ′ . We will call the resulting r-array J ′′ .
From the definitions it follows that
Q(W, J ′ ) = Ea (W, J ′′ )
for every r-kernel W . On the other hand we have
|Q(W, J)−Q(W, J ′ )|
53
≤ max |Qµ (W, J) − Qµ (W, J ′ )|
µ
= max
µ
Z
′
(J − J )(α1 , . . . , αr )
[0,1]r
≤ max
µ
Z
[0,1]r
Z
W (x)dµ(α1 , x1 ) . . . dµ(αr , xr )dα1 . . . dαr
[0,1]r
|(J − J ′ )(α1 , . . . , αr )|kW k∞ dα1 . . . dαr
= kJ − J ′ k1 kW k∞ ≤ εkW k∞ .
Now we proceed to the sampled version of the optimization problem. First we gain control
over the difference between the QAPs corresponding to J and J ′ . G(k, W ) is induced by the
sample U1 , . . . , Uk , G′ (k, J) and G′ (k, J ′ ) by the distinct independent sample Y1 , . . . , Yk .
|Q(G(k, W ), G(k, J)) − Q(G(k, W ), G(k, J ′ ))|
≤ max |Qρ (G(k, W ), G(k, J)) − Qρ (G(k, W ), G(k, J ′ ))|
ρ
1
= max r
ρ k
k
X
(J − J ′ )(Yi1 , . . . , Yir ) kW k∞ .
(5.1)
i1 ,...,ir =1
We analyze the random sum on the right hand side of (5.1) by first upper bounding its
expectation.
$$\mathbb{E}_Y\left|\frac{1}{k^r}\sum_{i_1,\dots,i_r=1}^{k} (J-J')(Y_{i_1},\dots,Y_{i_r})\right| \le \frac{r^2}{k}\left[\|J\|_\infty + \|J'\|_\infty\right] + \mathbb{E}_Y|(J-J')(Y_1,\dots,Y_r)| \le \frac{2r^2}{k} + \varepsilon \le 2\varepsilon.$$
By the Azuma-Hoeffding inequality the sum is also sufficiently small in probability.
$$\mathbb{P}\left(\left|\frac{1}{k^r}\sum_{i_1,\dots,i_r=1}^{k}(J-J')(Y_{i_1},\dots,Y_{i_r})\right| \ge 4\varepsilon\right) \le 2\exp(-\varepsilon^2 k/8) \le \varepsilon.$$
We obtain that
$$|Q(G(k,W), G'(k,J)) - Q(G(k,W), G'(k,J'))| \le 4\varepsilon\|W\|_\infty$$
with probability at least $1-\varepsilon$, if $k$ is as in the statement of the lemma. Set $b = (b_1,\dots,b_q)$ to be the probability distribution with $b_i = \frac{1}{k}\sum_{j=1}^{k} I_{S_i}(Y_j)$, where $S_i$ is the $i$th step of $J'$ with $\lambda(S_i) = a_i$. Then we have
$$Q(G(k,W), G(k,J')) = \hat{E}_b(G(k,W), J'').$$
It follows again from the Azuma-Hoeffding inequality that we have $\mathbb{P}(|a_i - b_i| > \varepsilon/q) \le 2\exp\!\left(-\frac{\varepsilon^2 k}{2q^2}\right)$ for each $i \in [q]$, thus we have $\|a - b\|_1 < \varepsilon$ with probability at least $1-\varepsilon$. We can conclude that with probability at least $1 - 2\varepsilon$ we have
$$\begin{aligned}
|Q(W,J) - Q(G(k,W), G(k,J))| &\le |Q(W,J) - Q(W,J')| + |E_a(W,J'') - \hat{E}_b(G(k,W), J'')| \\
&\quad + |Q(G(k,W), G(k,J)) - Q(G(k,W), G(k,J'))| \\
&\le (5+2r)\varepsilon\|W\|_\infty + |E_a(W,J'') - \hat{E}_a(G(k,W), J'')|.
\end{aligned}$$
By the application of Theorem 5.4 the claim of the lemma is verified.
Next we present the application of Theorem 5.6 for two special cases of QAP.
Corollary 5.7. The optimal values of the $d$-dimensional geometric QAP and the maximum acyclic subgraph problem are efficiently testable. That is, for $d \ge 1$ and every $\varepsilon > 0$ there exists an integer $k_0 = k_0(\varepsilon)$ such that $k_0$ is a polynomial in $1/\varepsilon$, and for every $k \ge k_0$ and any $d$-dimensional geometric QAP given by the pair $(G, J)$ we have
$$\mathbb{P}\left(|Q(G,J) - Q(G(k,G), G'(k,J))| \ge \varepsilon\|G\|_\infty\|J\|_\infty\right) \le \varepsilon, \qquad (5.2)$$
where $G(k,W)$ and $G'(k,J)$ are generated by distinct independent samples. The formulation regarding the testability of the maximum acyclic subgraph problem is analogous.
Note that testability here is meant in the sense of the statement of Theorem 5.6, since
the size of J is not fixed and depends on G.
Proof. In the light of Theorem 5.6 it suffices to show that for both cases any feasible J is
ν-clustered, where ν(ε) is polynomial in 1/ε. For both settings we have r = 2.
We start with the continuous version of the d-dimensional geometric QAP given by the
measurable function J : [0, 1]2 → R+ , and an instance is given by the pair (W, J), where W
is a 2-kernel. Note, that d refers to the dimension corresponding to the embedding of the
indices of J into an Lp metric space, not the actual dimension of J. We are free to assume
that 0 ≤ J ≤ 1, simply by rescaling. By definition, there exists a measurable embedding
$\rho : [0,1] \to [0,1]^d$, so that $J(i,j) = \|\rho(i) - \rho(j)\|_p$ for every $(i,j) \in [0,1]^2$. Fix $\varepsilon > 0$ and consider the partition $\mathcal{P}' = (T_1,\dots,T_\beta) = ([0, \tfrac{1}{\beta}), [\tfrac{1}{\beta}, \tfrac{2}{\beta}), \dots, [\tfrac{\beta-1}{\beta}, 1])$ of the unit interval into $\beta = \lceil \frac{2\sqrt[p]{d}}{\varepsilon} \rceil$ classes. Define the partition $\mathcal{P} = (P_1,\dots,P_q)$ of $[0,1]$ consisting of the classes $\rho^{-1}(T_{i_1} \times \cdots \times T_{i_d})$ for each $(i_1,\dots,i_d) \in [\beta]^d$, where $|\mathcal{P}| = q = \beta^d = \frac{2^d d^{d/p}}{\varepsilon^d}$. We construct
the approximating step function J ′ of J by averaging J on the steps determined by the
partition classes of P. It remains to show that this indeed is a sufficient approximation in
the L1 -norm.
$$\|J - J'\|_1 = \int_{[0,1]^2} |J(x) - J'(x)|\,dx = \sum_{i,j=1}^{q} \int_{P_i \times P_j} |J(x) - J'(x)|\,dx \le \sum_{i,j=1}^{q} \lambda(P_i)\lambda(P_j)\,\varepsilon = \varepsilon,$$
since $|J(x) - J'(x)| \le \varepsilon$ on each step $P_i \times P_j$ by the choice of $\beta$.
By Theorem 5.6 and Theorem 5.4 it follows that the continuous $d$-dimensional metric QAP is $O\!\left(\log\!\left(\frac{1}{\varepsilon}\right)\frac{1}{\varepsilon^{4rd+4}}\right)$-testable, and so is its discrete version.
Next we show that the AC, given by the upper triangular kernel $J$ whose entries above the diagonal are 1, is also efficiently testable. Note that here we have $r = 2$. Fix $\varepsilon > 0$ and consider the partition $\mathcal{P} = (P_1,\dots,P_q)$ with $q = \frac{1}{\varepsilon}$, and set $J'$ to 0 on every step $P_i \times P_j$ whenever $i \ge j$, and to 1 otherwise. This function indeed approximates $J$ in the $L_1$-norm.
$$\|J - J'\|_1 = \int_{[0,1]^2} |J(x) - J'(x)|\,dx = \sum_{i=1}^{q} \int_{P_i \times P_i} |J(x) - J'(x)|\,dx \le \varepsilon.$$
Again, by Theorem 5.6 and Theorem 5.4 it follows that the AC is $O\!\left(\log\!\left(\frac{1}{\varepsilon}\right)\frac{1}{\varepsilon^{12}}\right)$-testable.
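As an illustration of the step-function approximation used above, the following is a minimal numerical sketch (not part of the original argument) that discretizes the AC kernel $J(x,y) = \mathbb{1}[x < y]$ on a fine grid, builds the step function $J'$ over $q = 1/\varepsilon$ classes, and checks that the empirical $L_1$ error stays below $\varepsilon$. The grid size and helper names are illustrative assumptions.

```python
import numpy as np

def ac_step_approximation_error(eps, grid=2000):
    """Discretize J(x, y) = 1[x < y] on a grid and compare it with the step
    function J' that is 0 on blocks P_i x P_j with i >= j and 1 otherwise."""
    q = int(round(1.0 / eps))                 # number of partition classes
    xs = (np.arange(grid) + 0.5) / grid
    J = (xs[:, None] < xs[None, :]).astype(float)

    # class index of each grid point: which interval P_i it falls into
    cls = np.minimum((xs * q).astype(int), q - 1)
    # J' is 1 exactly when the column class is strictly larger than the row class
    J_step = (cls[None, :] > cls[:, None]).astype(float)

    # empirical L1 distance, approximating the integral over [0,1]^2
    return np.abs(J - J_step).mean()

eps = 0.1
err = ac_step_approximation_error(eps)
print(f"eps = {eps}, empirical L1 error = {err:.4f}")  # about eps/2, hence <= eps
```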
6 Further Research
Our framework based on exchangeability principles allows us to extend the notion of a limit
to the case of unbounded hypergraphs and efficient testability of ground state energies in
this setting. The notion of exchangeability is crucial here. The notion of efficient testability
in an unbounded case could be of independent interest, perhaps the results on ground state
energy carry through for the setting when the r-graphons (induced by r-graphs) are in an
Lp space for some p ≥ 1.
Another problem is to characterize more precisely the class of problems which are efficiently parameter testable as opposed to the hard ones. Improving the bounds in 1/ε for the
efficiently testable problems is also a worthwhile question.
Acknowledgement
We thank Jennifer Chayes, Christian Borgs and Tim Austin for a number of interesting and
stimulating discussions and the relevant new ideas in the early stages of this research.
References
[1] David J. Aldous. Representations for partially exchangeable arrays of random variables.
J. Multivariate Anal., 11(4):581–598, 1981.
[2] Noga Alon and Asaf Shapira. A characterization of the (natural) graph properties
testable with one-sided error. SIAM J. Comput., 37(6):1703–1727, 2008.
[3] Noga Alon and Joel H. Spencer. The probabilistic method. Wiley-Interscience Series in
Discrete Mathematics and Optimization. John Wiley & Sons, Inc., Hoboken, NJ, third
edition, 2008. With an appendix on the life and work of Paul Erdős.
[4] Noga Alon, W. Fernandez de la Vega, Ravi Kannan, and Marek Karpinski. Random
sampling and approximation of MAX-CSP problems. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, pages 232–239, 2002. Also appeared in J. Comput. System Sci., 67(2):212–243, 2003.
[5] Sanjeev Arora, David R. Karger, and Marek Karpinski. Polynomial time approximation
schemes for dense instances of NP-hard problems. In Proceedings of the Twenty-Seventh
Annual ACM Symposium on Theory of Computing, pages 284–293, 1995. Also appeared
in J. Comput. System Sci., 58(1):193–210, 1999.
[6] Sanjeev Arora, Alan Frieze, and Haim Kaplan. A new rounding procedure for the
assignment problem with applications to dense graph arrangement problems. Math.
Program., 92(1, Ser. A):1–36, 2002.
[7] Ashwini Aroskar. Limits, Regularity and Removal for Relational and Weighted Structures. Dissertation, CMU, 2012. URL http://repository.cmu.edu/dissertations/144.
[8] Vikraman Arvind, Johannes Köbler, Sebastian Kuhnert, and Yadu Vasudev. Approximate graph isomorphism. In Mathematical foundations of computer science 2012, volume 7464 of Lecture Notes in Comput. Sci., pages 100–111. Springer, Heidelberg, 2012.
[9] Tim Austin. On exchangeable random variables and the statistics of large graphs and
hypergraphs. Probab. Surv., 5:80–145, 2008.
[10] C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs. I. Subgraph frequencies, metric properties and testing. Adv.
Math., 219(6):1801–1851, 2008.
[11] C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs II. Multiway cuts and statistical physics. Ann. of Math. (2),
176(1):151–219, 2012.
[12] B. de Finetti. Funzione Caratteristica Di un Fenomeno Aleatorio, pages 251–299. 6.
Memorie. Academia Nazionale del Linceo, 1931.
[13] Wenceslas Fernandez de la Vega, Ravi Kannan, and Marek Karpinski. Approximation
of global max-csp problems. 2006. Technical Report TR06-124.
[14] Persi Diaconis and Svante Janson. Graph limits and exchangeable random graphs.
Rend. Mat. Appl. (7), 28(1):33–61, 2008.
[15] Gábor Elek and Balázs Szegedy. A measure-theoretic approach to the theory of dense
hypergraphs. Adv. Math., 231(3-4):1731–1772, 2012.
[16] Alan M. Frieze and Ravi Kannan. Quick approximation to matrices and applications.
Combinatorica, 19(2):175–220, 1999.
[17] Oded Goldreich, Shafi Goldwasser, and Dana Ron. Property testing and its connection
to learning and approximation. J. ACM, 45(4):653–750, 1998.
[18] Edwin Hewitt and Leonard J. Savage. Symmetric measures on Cartesian products.
Trans. Amer. Math. Soc., 80:470–501, 1955.
[19] D. N. Hoover. Relations on probability spaces and arrays of random variables (preprint),
1979.
[20] Svante Janson. Poset limits and exchangeable random posets. Combinatorica, 31(5):
529–563, 2011.
[21] Olav Kallenberg. Symmetries on random arrays and set-indexed processes. J. Theoret.
Probab., 5(4):727–765, 1992.
[22] Michael Langberg, Yuval Rabani, and Chaitanya Swamy. Approximation algorithms for
graph homomorphism problems. In Approximation, randomization and combinatorial
optimization, volume 4110 of Lecture Notes in Comput. Sci., pages 176–187. Springer,
Berlin, 2006.
[23] László Lovász. Large networks and graph limits, volume 60 of American Mathematical
Society Colloquium Publications. American Mathematical Society, Providence, RI, 2012.
[24] László Lovász and Balázs Szegedy. Limits of dense graph sequences. J. Combin. Theory
Ser. B, 96(6):933–957, 2006.
[25] László Lovász and Balázs Szegedy. Limits of compact decorated graphs, 2010. preprint,
arXiv:1010.5155.
[26] László Lovász and Balázs Szegedy. Testing properties of graphs and functions. Israel
J. Math., 178:113–156, 2010.
[27] Claire Mathieu and Warren Schudy. Yet another algorithm for dense max cut: go
greedy. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 176–182. ACM, New York, 2008.
[28] Igal Sason. On refined versions of the Azuma-Hoeffding inequality with applications in
information theory, 2011. preprint, arXiv:1111.1977.
[29] Ya. G. Sinaı̆. Theory of phase transitions: rigorous results, volume 108 of International
Series in Natural Philosophy. Pergamon Press, Oxford-Elmsford, N.Y., 1982. Translated
from the Russian by J. Fritz, A. Krámli, P. Major and D. Szász.
[30] Yufei Zhao. Hypergraph limits: a regularity approach, 2013. preprint, arXiv:1302.1634.
Input Sparsity Time Low-Rank Approximation
via Ridge Leverage Score Sampling
Michael B. Cohen
Cameron Musco
Christopher Musco∗
arXiv:1511.07263v2 [] 6 Oct 2016
Massachusetts Institute of Technology, EECS
Cambridge, MA 02139, USA
Email: {micohen,cnmusco,cpmusco}@mit.edu
October 10, 2016
Abstract
We present a new algorithm for finding a near optimal low-rank approximation of a matrix
A in O(nnz(A)) time. Our method is based on a recursive sampling scheme for computing a
representative subset of A’s columns, which is then used to find a low-rank approximation.
This approach differs substantially from prior O(nnz(A)) time algorithms, which are all
based on fast Johnson-Lindenstrauss random projections. It matches the guarantees of these
methods while offering a number of advantages.
Not only are sampling algorithms faster for sparse and structured data, but they can also
be applied in settings where random projections cannot. For example, we give new single-pass
streaming algorithms for the column subset selection and projection-cost preserving sample problems. Our method has also been used to give the fastest algorithms for provably approximating
kernel matrices [MM16].
∗
Part of this work was completed while the author interned at Yahoo Labs, NYC.
1 Introduction
Low-rank approximation is a fundamental task in statistics, machine learning, and computational
science. The goal is to find a rank k matrix that is as close as possible to an arbitrary input matrix
A ∈ Rn×d , with distance typically measured using the spectral or Frobenius norms.
Traditionally, the problem is solved using the singular value decomposition (SVD), which takes
O(nd2 ) time to compute. This high cost can be reduced using iterative algorithms like the power
method or Krylov methods, which require just O(nnz(A) · k) time per iteration, where nnz(A) is
the number of non-zero entries in A1 .
More recently, the cost of low-rank approximation has been reduced even further using sketching
methods based on Johnson-Lindenstrauss random projection [Sar06]. Remarkably, so-called “sparse
random projections” [CW13, MM13, NN13, BDN15, Coh16] give algorithms that run in time2 :
O(nnz(A)) + Õ (n · poly(k, ǫ)) .
These methods output a low-rank approximation within a (1 + ǫ) factor of optimal when error is
measured using the Frobenius norm. They are typically referred to as running in “input sparsity
time” since the O(nnz(A)) term is considered to dominate the runtime.
Input sparsity time algorithms for low-rank approximation were an important theoretical achievement and have also been influential in practice. Implementations are now available in a variety of
languages and machine learning libraries [Liu14, Oka10, IBM14, PVG+ 11, HRZ+ 09, VM15].
1.1 Our Contributions
We give an entirely different approach to obtaining input sparsity time algorithms for low-rank
approximation. Random projection methods are based on multiplying A by a sparse random
matrix Π ∈ Rd×poly(k,ǫ) to form a smaller matrix AΠ that contains enough information about A
to compute a near optimal low-rank approximation. Our techniques on the other hand are based
on sampling Õ(k/ǫ) columns from A, and computing a low-rank approximation using this sample.
Sampling itself is simple and extremely efficient. However, to obtain a good approximation
to A, columns must be sampled with non-uniform probabilities, carefully chosen to reflect their
relative importance. It is known that variations on the standard statistical leverage scores give
probabilities that are provably sufficient for low-rank approximation [Sar06, DMM08, CEM+ 15].
Unfortunately, computing any of these previously studied “low-rank leverage scores” is as difficult as low-rank approximation itself, so sampling did not yield fast algorithms3 .
We address this issue for the first time by introducing new importance sampling probabilities
which can be approximated efficiently using a simple recursive algorithm. In particular, we adapt
the so-called ridge leverage scores to low-rank matrix approximation. These scores have been used
as sampling probabilities in the context of linear regression and spectral approximation [LMP13,
KLM+ 14, AM15] but never for low-rank approximation.
By showing that ridge leverage scores display a unique monotonicity property under perturbations to A, we are able to prove that, unlike any prior low-rank leverage scores, they can be approximated using a relatively large uniform subsample of A’s columns. While too large to use directly,
the size of this subsample can be reduced recursively to give an overall fast algorithm. This approach
1 The number of iterations depends on the accuracy ǫ and/or spectral gap conditions. See [MM15] for an overview.
2 Õ(·) hides logarithmic factors, including a failure probability dependence.
3 ℓ2 norm sampling does yield very fast algorithms [FKV04, DKM06a, BJS15], but cannot give relative error guarantees matching those of random projection or leverage score methods without additional assumptions on A.
resembles work on recursive methods for computing standard leverage scores, which were recently
used to give the first O(nnz(A)) time sampling algorithms for linear regression [LMP13, CLM+ 15].
Our main algorithmic result, which nearly matches the state-of-the-art in [NN13] follows:
Theorem 1. For any $\theta \in (0, 1]$, there exists a recursive column sampling algorithm that, in time $O\!\left(\theta^{-1}\,\mathrm{nnz}(A)\right) + \tilde O\!\left(\frac{n k^{1+\theta}}{\epsilon^{4}}\right)$, returns $Z \in \mathbb{R}^{n\times k}$ satisfying:
$$\|A - ZZ^TA\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2. \qquad (1)$$
Here $A_k$ is the best rank $k$ approximation to $A$. To prove Theorem 1, we show how to compute a sampling matrix $S$ such that $AS \in \mathbb{R}^{n\times\tilde O(k/\epsilon^2)}$ satisfies a projection-cost preservation guarantee (formalized in Section 2). This property ensures that it is possible to extract a near optimal low-rank approximation from the sample. It also allows AS to be used to approximately solve a broad
class of constrained low-rank approximation problems, including k-means clustering [CEM+ 15].
With a slightly smaller sample, we also prove that AS satisfies a standard (1 + ǫ) error column subset selection guarantee. Ridge leverage score sampling is the first algorithm, efficient or
otherwise, that obtains both of these important approximation goals simultaneously.
1.2 Why Sampling?
Besides the obvious goal of obtaining alternative state-of-the-art algorithms for low-rank approximation, we are interested in sampling methods for a few specific reasons:
Sampling maintains matrix sparsity and structure.
Without additional assumptions on A, our recursive sampling algorithms essentially match random
projection methods. However, they have the potential to run faster for sparse or structured data.
Random projection linearly combines all columns in A to form AΠ ∈ Rn×poly(k,ǫ), so this sketched
matrix is usually dense and unstructured. On the other hand, AS will remain sparse or structured
if A is sparse or structured, in which case it can be faster to post-process. Potential gains are
especially important when the Õ(n · poly(k, ǫ)) runtime term is not dominated by O(nnz(A)).
We note that the ability to maintain sparsity and structure motivated similar work on recursive sampling algorithms for fast linear regression [CLM+ 15]. While these techniques only match
random projection for general matrices, they have been important ingredients in designing faster algorithms for highly structured Laplacian and SDD matrices [LPS15, KLP+ 16, JK16]. We hope our
sampling methods for low-rank approximation will provide a foundation for similar contributions.
Sampling techniques lead to natural streaming algorithms.
In data analysis, sampling itself is often the primary goal. The idea is to select a subset of columns
from A whose span contains a good low-rank approximation for the matrix and hence represents
important or influential features [PZB+ 07, MD09].
Computing this subset in a streaming setting is of both theoretical and practical interest [Str14].
Unfortunately, while random projection methods adapt naturally to data streams [CW09], importance sampling is more difficult. The leverage score of one column depends on every other column,
including those that have not yet appeared in the stream. While random projections can be used
to approximate leverage scores, this approach inherently requires two passes over the data.
Fortunately, the same techniques used in our recursive algorithms apply naturally in the streaming setting. We can compute coarse approximations to the ridge leverage scores using just a small
number of columns and refine these approximations as the stream is revealed. By rejection sampling
columns as the probabilities are adjusted, we obtain the first space efficient single-pass streaming
algorithms for both column subset selection and projection-cost preserving sampling (Section 6).
Sampling offers additional flexibility for non-standard matrices.
In recent follow up work, the techniques in this paper are adapted to give the most efficient,
provably accurate algorithms for kernel matrix approximation [MM16]. In nearly all settings this
well-studied problem cannot be solved efficiently by random projection methods.
The goal in kernel approximation is to replace an n × n positive semidefinite kernel matrix
K with a low-rank approximation that takes less space to represent [AMS01, WS01, FS02, MD05,
RR07, BW09a, BW09b, Bac13, GM13, HI15, LJS16]. However, unlike in the standard low-rank approximation problem, K is not represented explicitly. Its entries can only be accessed by evaluating
a “kernel function” between each pair of the n points in a data set.
Sketching K using random projection requires computing the full matrix first, using Θ(n2 )
kernel evaluations. On the other hand, with recursive ridge leverage score sampling this is not
necessary – it is possible to compute entries of K ‘on the fly’, only when they are required to
compute ridge scores with respect to a subsample. [MM16] shows that this technique gives the first
provable algorithms for approximating kernel matrices that only require time linear in n. In other
words, the methods only evaluate a tiny fraction of the dot products required to build K. Notably
they do not require any coherence or regularity assumptions to achieve this runtime.
Aside from kernel approximation, we note that in [BJS15] the authors present a low-rank
approximation algorithm based on elementwise sampling that they show can be applied to the
product of two matrices without ever forming this product explicitly. This result again highlights
the flexibility of sampling-based methods for non-standard matrices. Without access to an efficiently
computable leverage score distribution for elementwise sampling, [BJS15] applies an approximation
based on ℓ2 sampling. This approximation only performs well under additional assumptions on A
and an interesting open question is to see if our techniques can be adapted to their framework in
order to eliminate such assumptions.
1.3 Techniques and Paper Layout
Sampling Bounds (Sections 2, 3): In Section 2 we review technical background and introduce
ridge leverage scores. In Section 3 we prove that sampling by ridge leverage scores gives solutions
to the projection-cost preserving sketch and column subset selection problems. These sections do
not address algorithmic considerations.
While the proofs are technical, we reduce both problems to a simple “additive-multiplicative
spectral guarantee,” which resembles the ubiquitous subspace embedding guarantee [Sar06]. This
approach greatly simplifies prior work on low-rank approximation bounds for sampling methods
[CEM+ 15] and we hope that it will prove generally useful in studying future sketching methods.
Ridge Leverage Score Monotonicity (Section 4): In Section 4 we prove a basic theorem
regarding the stability of ridge leverage scores. Specifically, we show that the ridge leverage score
of a column cannot increase if another column is added to the matrix. This fact, which does not hold for any prior “low-rank leverage scores”, is essential in proving the correctness of our recursive
sampling procedure and streaming algorithms.
Recursive Sampling Algorithm (Section 5): In Section 5, we describe and prove the correctness of our main sampling algorithm. We show how a careful implementation of the algorithm
gives O(nnz(A)) running time for computing ridge leverage scores, and accordingly for solving the
low-rank approximation problem.
Application to Streaming (Section 6): We conclude with an application of our results to
low-rank sampling algorithms for single-pass column streams that are only possible thanks to the
stability result of Section 4.
2 Technical Background

2.1 Low-rank Approximation
Using the singular value decomposition (SVD), any rank r matrix A ∈ Rn×d can be factored as
A = UΣVT . U ∈ Rn×r and V ∈ Rd×r are orthonormal matrices whose columns are the left and
right singular vectors of A. Σ is a diagonal matrix containing A’s singular values σ1 ≥ σ2 ≥ . . . ≥
σr > 0 in decreasing order from top left to bottom right. When quality is measured with respect
to the Frobenius norm, the best low-rank approximation for $A$ is given by $A_k = U_k\Sigma_kV_k^T$ where $U_k \in \mathbb{R}^{n\times k}$, $V_k \in \mathbb{R}^{d\times k}$, and $\Sigma_k \in \mathbb{R}^{k\times k}$ contain just the first $k$ components of $U$, $V$, and $\Sigma$ respectively. In other words,
$$A_k = \operatorname*{arg\,min}_{B:\ \operatorname{rank}(B)\le k} \|A - B\|_F.$$
Since $U_k$ has orthonormal columns, we can rewrite $A_k = U_kU_k^TA$. That is, the best rank $k$ approximation can be found by projecting $A$ onto the span of its top $k$ singular vectors. Throughout, we will use the shorthand $A_{\backslash k}$ to denote the residual $A - A_k$. $U_{\backslash k} \in \mathbb{R}^{n\times(r-k)}$, $V_{\backslash k} \in \mathbb{R}^{d\times(r-k)}$, and $\Sigma_{\backslash k} \in \mathbb{R}^{(r-k)\times(r-k)}$ denote $U$, $V$, and $\Sigma$ restricted to just their last $r-k$ components.
When solving the low-rank approximation problem approximately, our goal is to find an orthonormal span $Z \in \mathbb{R}^{n\times k}$ satisfying $\|A - ZZ^TA\|_F \le (1+\epsilon)\|A - U_kU_k^TA\|_F$.
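For concreteness, here is a minimal NumPy sketch (an illustration, not the paper's algorithm) of this baseline: computing $A_k$ via a truncated SVD and the error $\|A - A_k\|_F$ that approximation algorithms are compared against.

```python
import numpy as np

def best_rank_k(A, k):
    """Return the optimal rank-k approximation A_k = U_k U_k^T A and the
    orthonormal basis U_k, computed from a full SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k]
    Ak = Uk @ (Uk.T @ A)          # equivalently Uk @ np.diag(s[:k]) @ Vt[:k]
    return Ak, Uk

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 200))
Ak, Uk = best_rank_k(A, k=10)
opt_err = np.linalg.norm(A - Ak, "fro")
print("||A - A_k||_F =", round(opt_err, 3))
```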
2.2 Sketching Algorithms
Like many randomized linear algebra routines, our low-rank approximation algorithms are based
on “linear sketching”. Sketching algorithms use a typically randomized procedure to compress $A \in \mathbb{R}^{n\times d}$ into an approximation (or “sketch”) $C \in \mathbb{R}^{n\times d'}$ with many fewer columns ($d' \ll d$). Random projection algorithms construct $C$ by forming $d'$ random linear combinations of the columns in $A$. Random sampling algorithms construct $C$ by selecting and possibly reweighting $d'$ columns in $A$.
After compression, a post-processing routine, which is often deterministic, is used to solve the
original linear algebra problem with just the information contained in C. In our case, the postprocessing step needs to extract an approximation to the span of A’s top left singular vectors. If
C is much smaller than A, the cost of post-processing is typically considered a low-order term in
comparison to the cost of computing the sketch to begin with.
When analyzing sketching algorithms it is common to separate the post-processing step from the
dimensionality reduction step. Known post-processing routines give good approximate solutions
to linear algebra problems under the condition that C satisfies certain approximation properties
with respect to A. The challenge then becomes proving that a specific dimensionality reduction
algorithm produces a sketch satisfying these required guarantees.
2.3 Sampling Guarantees for Low-rank Approximation
For low-rank approximation, most algorithms aim for one of two standard approximation guarantees, which we describe below. Since we will be focusing on sampling methods, from now on we
assume that C is a subset of A’s columns.
Definition 2 (Rank $k$ Column Subset Selection). For $d' < d$, a subset of $A$'s columns $C \in \mathbb{R}^{n\times d'}$ is a $(1+\epsilon)$ factor column subset selection if there exists a rank $k$ matrix $Q \in \mathbb{R}^{d'\times d}$ with
$$\|A - CQ\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2. \qquad (2)$$
In other words, the column span C contains a good rank k approximation for A. Algorithmically,
we can recover this low-rank approximation via projection to the column subset [Sar06, CW13].
Beyond sketching for low-rank approximation, the column subset selection guarantee is used as
a metric in feature selection for high dimensional datasets [PZB+ 07, MD09]. With columns of A
interpreted as features and rows as data points, (2) ensures that we select d′ features that span the
feature space nearly as well as the top k principal components. The guarantee is also important in
algorithms for CUR matrix decomposition [DKM06b, MD09, BW14] and Nyström approximation
[WS01, MD05, BW09b, BW09a, GM13, HI15, MM16].
In addition to Definition 2, we consider a stronger guarantee for weighted column selection,
which has a broader range of algorithmic applications:
Definition 3 (Rank $k$ Projection-Cost Preserving Sample). For $d' < d$, a subset of rescaled columns $C \in \mathbb{R}^{n\times d'}$ is a $(1+\epsilon)$ projection-cost preserving sample if, for all rank $k$ orthogonal projection matrices $X \in \mathbb{R}^{n\times n}$,
$$(1-\epsilon)\|A - XA\|_F^2 \le \|C - XC\|_F^2 \le (1+\epsilon)\|A - XA\|_F^2. \qquad (3)$$
Definition 3 is formalized in two recent papers [FSS13, CEM+ 15], though it appears implicitly
in prior work [DFK+ 04, BZMD15]. It ensures that C approximates the cost of any rank k column
projection of A. C can thus be used as a direct surrogate of A to solve low-rank projection problems.
Specifically, it’s not hard to see that if we use a post-processing algorithm that sets Z equal to the
top k left singular vectors of C, it will hold that kA−ZZT AkF ≤ (1+ǫ)kA−Uk UTk AkF [CEM+ 15].
Definition 3 also ensures that C can be used in approximately solving a variety of constrained
low-rank approximation problems, including k-means clustering of A’s rows (see [CEM+ 15]).
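A minimal sketch of the post-processing step just described (illustrative only, and assuming a reweighted column sample C has already been produced by some sampling routine): take Z to be the top k left singular vectors of C and project A onto their span.

```python
import numpy as np

def low_rank_from_sample(A, C, k):
    """Given a (rescaled) column sample C of A, return Z = top-k left singular
    vectors of C and the induced rank-k approximation Z Z^T A."""
    Uc, _, _ = np.linalg.svd(C, full_matrices=False)
    Z = Uc[:, :k]
    return Z, Z @ (Z.T @ A)

# Toy usage with a uniform column sample standing in for a leverage score sample.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 150)) @ rng.standard_normal((150, 150))
cols = rng.choice(A.shape[1], size=60, replace=False)
C = A[:, cols]                                  # in the real algorithm, columns are
Z, approx = low_rank_from_sample(A, C, k=10)    # chosen by ridge scores and rescaled
print("error:", np.linalg.norm(A - approx, "fro"))
```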
2.4 Leverage Scores
It is well known that sketches satisfying Definitions 2 and 3 can be constructed via importance
sampling routines which select columns using carefully chosen, non-uniform probabilities. Many of
these probabilities are modifications on traditional “statistical leverage scores”.
The statistical leverage score of the $i$th column $a_i$ of $A$ is defined as:
$$\tau_i \stackrel{\mathrm{def}}{=} a_i^T(AA^T)^+a_i. \qquad (4)$$
τi measures how important ai is in composing the range of A. It is maximized at 1 when ai is linearly
independent from A’s other columns and decreases when many other columns approximately align
with ai or when kai k2 is small.
Leverage scores are used in fast sketching algorithms for linear regression and matrix preconditioning [DMM06, Sar06, CLM+ 15]. They have also been applied to convex optimization [LSW15],
linear programming [LS14, LS15], matrix completion [CBSW15], multi-label classification [BK13],
and graph sparsification, where they are known as effective resistances [SS11].
2.5 Existing Low-rank Leverage Scores
For low-rank approximation problems, leverage scores need to be modified to only capture how
important each column ai is in composing the top few singular directions of A’s range.
4 $^+$ denotes the Moore-Penrose pseudoinverse of a matrix. When $AA^T$ is full rank, $(AA^T)^+ = (AA^T)^{-1}$.
In particular, it is known that a sketch $C$ satisfying Definition 2 can be constructed by sampling $d' = O(k\log k + k/\epsilon)$ columns according to the so-called rank $k$ subspace scores [Sar06, DMM08]:
$$i\text{th rank }k\text{ subspace score:}\qquad ss_i^{(k)} \stackrel{\mathrm{def}}{=} a_i^T(A_kA_k^T)^+a_i. \qquad (5)$$
These scores are exactly equivalent to standard leverage scores computed with respect to Ak , an
optimal low-rank approximation for A. The stronger projection-cost preservation guarantee of
Definition 3 can be achieved by sampling O(k log k/ǫ2 ) columns using a related, but somewhat
more complex, leverage score modification [CEM+ 15].
2.6 Ridge Leverage Scores
Notably, prior low-rank leverage scores are defined in terms of Ak , which is not always unique and
regardless can be sensitive to matrix perturbations5 . As a result, the scores can change drastically
when A is modified slightly or when only partial information about the matrix is known. This
largely limits the possibility of quickly approximating the scores with sampling algorithms, and
motivates our adoption of a new leverage score for low-rank approximation.
Rather than use scores based on Ak , we employ regularized scores called ridge leverage scores,
which have been used for approximate kernel ridge regression [AM15] and in work on iteratively
computing standard leverage scores [LMP13, KLM+ 14]. We extend their applicability to low-rank
approximation. For a given regularization parameter λ, define the λ-ridge leverage score as:
$$\tau_i^\lambda(A) \stackrel{\mathrm{def}}{=} a_i^T(AA^T + \lambda I)^+a_i. \qquad (6)$$
We will always set $\lambda = \|A - A_k\|_F^2/k$ and thus, for simplicity, use “$i$th ridge leverage score” to refer to $\bar\tau_i(A) = a_i^T\left(AA^T + \frac{\|A-A_k\|_F^2}{k}I\right)^+a_i$.
For prior low-rank leverage scores, Ak truncates the spectrum of A, removing all but its top k
singular values. Regularization offers a smooth alternative: adding λI to AAT ‘washes out’ small
singular directions, causing them to be sampled with proportionately lower probability.
This paper proves that regularization can not only replace truncation, but is more natural and
stable. In particular, while τ̄i depends on the value of kA − Ak k2F , it does not depend on a specific
low-rank approximation. This is sufficient for stability since kA − Ak k2F changes predictably under
matrix perturbations even when Ak itself does not.
Before showing our sampling guarantees for ridge leverage scores, we prove that the sum of
these scores is not too large. Thus, when we use them for sampling, we will achieve column subsets
and projection-cost preserving samples of small size. Specifically we have:
Lemma 4. $\sum_{i=1}^{n}\bar\tau_i(A) \le 2k$.
Proof. We rewrite (6) using A's singular value decomposition:
$$\bar\tau_i(A) = a_i^T\left(U\Sigma^2U^T + \frac{\|A_{\backslash k}\|_F^2}{k}UU^T\right)^+a_i = a_i^T\left(U\bar\Sigma^2U^T\right)^+a_i = a_i^TU\bar\Sigma^{-2}U^Ta_i,$$
where $\bar\Sigma^2_{i,i} = \sigma_i^2(A) + \frac{\|A-A_k\|_F^2}{k}$. We then have:
$$\sum_{i=1}^{n}\bar\tau_i(A) = \operatorname{tr}\left(A^TU\bar\Sigma^{-2}U^TA\right) = \operatorname{tr}\left(V\Sigma\bar\Sigma^{-2}\Sigma V^T\right) = \operatorname{tr}(\Sigma^2\bar\Sigma^{-2}).$$
$(\Sigma^2\bar\Sigma^{-2})_{i,i} = \frac{\sigma_i^2(A)}{\sigma_i^2(A) + \frac{\|A-A_k\|_F^2}{k}}$. For $i \le k$ we simply upper bound this by 1. So:
$$\operatorname{tr}(\Sigma^2\bar\Sigma^{-2}) \le k + \sum_{i=k+1}^{n}\frac{\sigma_i^2}{\sigma_i^2 + \frac{\|A-A_k\|_F^2}{k}} \le k + \sum_{i=k+1}^{n}\frac{\sigma_i^2}{\frac{\|A-A_k\|_F^2}{k}} = k + \frac{\sum_{i=k+1}^{n}\sigma_i^2}{\frac{\|A-A_k\|_F^2}{k}} \le k + k = 2k.$$
5 It is often fine to use a near-optimal low-rank approximation in place of $A_k$, but similar instability issues remain.
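As a concrete illustration of the definition and of Lemma 4 (a sketch for intuition only, not the fast algorithm developed later), the following NumPy snippet computes the ridge leverage scores directly via the SVD and checks that they sum to at most 2k.

```python
import numpy as np

def ridge_leverage_scores(A, k):
    """Exact ridge leverage scores tau_bar_i(A) = a_i^T (A A^T + lam I)^+ a_i
    with lam = ||A - A_k||_F^2 / k, computed from a full SVD (not input sparsity time)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    lam = (s[k:] ** 2).sum() / k              # ||A - A_k||_F^2 / k
    proj = U.T @ A                            # (U^T a_i) as column i
    # tau_bar_i = sum_j (U^T a_i)_j^2 / (sigma_j^2 + lam)
    return ((proj ** 2) / (s[:, None] ** 2 + lam)).sum(axis=0)

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 400))
k = 5
scores = ridge_leverage_scores(A, k)
print("sum of scores:", scores.sum(), "<= 2k =", 2 * k)   # Lemma 4
```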
3 Core Sampling Results
Before considering how to efficiently compute ridge leverage scores, we prove that they can be used
to construct sketches satisfying the guarantees of Definitions 2 and 3. To do so, we introduce a
natural intermediate guarantee (Theorem 5), from which our results on column subset selection and
projection-cost preservation follow. This approach is the first to treat these guarantees in a unified
way and we hope it will be useful in future work on sketching methods for low-rank approximation.
Specifically, we will show that our selected columns spectrally approximate $A$ up to additive error depending on the ridge parameter $\lambda = \|A - A_k\|_F^2/k$. This approximation is akin to the ubiquitous subspace embedding guarantee [Sar06], which is used as a primitive for full rank problems like linear regression and generally requires sampling $\Theta(d)$ columns.
Intuitively, sampling by ridge leverage scores is equivalent to sampling by the standard leverage scores of $[A, \sqrt{\lambda}I_{n\times n}]$. A matrix Chernoff bound can be used to show that sampling by these scores will yield $C$ satisfying the subspace embedding property: $(1-\epsilon)CC^T \preceq AA^T + \lambda I \preceq (1+\epsilon)CC^T$. (Recall that $M \preceq N$ indicates that $s^TMs \le s^TNs$ for every vector $s$.)
However, we do not actually sample columns of the identity, only columns of $A$. Subtracting off the identity yields the mixed additive-multiplicative bound of Theorem 5.
Theorem 5 (Additive-Multiplicative Spectral Approximation). For $i \in \{1,\dots,d\}$, let $\tilde\tau_i \ge \bar\tau_i(A)$ be an overestimate for the $i$th ridge leverage score. Let $p_i = \frac{\tilde\tau_i}{\sum_i\tilde\tau_i}$. Let $t = \frac{c\log(k/\delta)}{\epsilon^2}\sum_i\tilde\tau_i$ for some sufficiently large constant $c$. Construct $C$ by sampling $t$ columns of $A$, each set to $\frac{1}{\sqrt{tp_i}}a_i$ with probability $p_i$. With probability $1-\delta$, $C$ satisfies:
$$(1-\epsilon)CC^T - \frac{\epsilon}{k}\|A - A_k\|_F^2 I_{n\times n} \preceq AA^T \preceq (1+\epsilon)CC^T + \frac{\epsilon}{k}\|A - A_k\|_F^2 I_{n\times n}. \qquad (7)$$
By Lemma 4, if each $\tilde\tau_i$ is within a constant factor of $\bar\tau_i(A)$ then $C$ has $O\!\left(\frac{k\log(k/\delta)}{\epsilon^2}\right)$ columns.
Note that Theorem 5 and our other sampling results hold for independent sampling without replacement. A proof is included in Appendix B.
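Before the proof, here is a minimal sketch of the sampling step that Theorems 5–7 share (illustrative; it assumes approximate scores tau_tilde are supplied by some routine, e.g. the recursive estimator of Section 5):

```python
import numpy as np

def sample_columns(A, tau_tilde, t, rng, rescale=True):
    """Sample t columns of A with probabilities p_i proportional to tau_tilde_i,
    rescaling each chosen column by 1/sqrt(t * p_i) as in Theorems 5 and 6
    (Theorem 7 uses the same probabilities without rescaling)."""
    p = np.asarray(tau_tilde, dtype=float)
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=t, replace=True, p=p)
    C = A[:, idx]
    if rescale:
        C = C / np.sqrt(t * p[idx])
    return C, idx

# Example: t = O(k log(k/delta) / eps^2) columns, here with placeholder constants.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 400))
tau_tilde = np.ones(A.shape[1])          # stand-in for approximate ridge scores
C, idx = sample_columns(A, tau_tilde, t=120, rng=rng)
print(C.shape)
```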
Proof. Following Lemma 4, we have $\bar\tau_i(A) = a_i^TU\bar\Sigma^{-2}U^Ta_i$, where $\bar\Sigma^2_{i,i} = \sigma_i^2(A) + \frac{\|A_{\backslash k}\|_F^2}{k}$. Let $Y = \bar\Sigma^{-1}U^T\left(CC^T - AA^T\right)U\bar\Sigma^{-1}$. We can write
$$Y = \sum_{j=1}^{t}\bar\Sigma^{-1}U^T\left(c_jc_j^T - \frac{1}{t}AA^T\right)U\bar\Sigma^{-1} \stackrel{\mathrm{def}}{=} \sum_{j=1}^{t}[X_j].$$
For each $j \in 1,\dots,t$, $X_j$ is given by:
$$X_j = \frac{1}{t}\cdot\bar\Sigma^{-1}U^T\left(\frac{1}{p_i}a_ia_i^T - AA^T\right)U\bar\Sigma^{-1}\quad\text{with probability } p_i.$$
$\mathbb{E}[Y] = 0$ since $\mathbb{E}\left[\frac{1}{p_i}a_ia_i^T - AA^T\right] = 0$. Furthermore, $CC^T = U\bar\Sigma Y\bar\Sigma U^T + AA^T$. Showing $\|Y\|_2 \le \epsilon$ gives $-\epsilon I \preceq Y \preceq \epsilon I$, and since $U\bar\Sigma^2U^T = AA^T + \frac{\|A_{\backslash k}\|_F^2}{k}I$, this would give:
$$(1-\epsilon)AA^T - \frac{\epsilon\|A_{\backslash k}\|_F^2}{k}I \preceq CC^T \preceq (1+\epsilon)AA^T + \frac{\epsilon\|A_{\backslash k}\|_F^2}{k}I.$$
After rearranging and adjusting constants on $\epsilon$, this statement is equivalent to (7).
To prove that kYk2 is small with high probability we use a stable rank (intrinsic dimension)
matrix Bernstein inequality from [Tro15] that was first proven in [Min13] following work in [HKZ12].
This inequality requires upper bounds on the spectral norm of each $X_j$ and on the variance of $Y$. We use the fact that, for any $i$, $\frac{1}{\bar\tau_i(A)}a_ia_i^T \preceq AA^T + \frac{\|A_{\backslash k}\|_F^2}{k}I$. This is a well known property of leverage scores, shown for example in the proof of Lemma 11 in [CLM+15]. It lets us bound:
$$\frac{1}{\bar\tau_i(A)}\cdot\bar\Sigma^{-1}U^Ta_ia_i^TU\bar\Sigma^{-1} \preceq \bar\Sigma^{-1}U^T\left(AA^T + \frac{\|A_{\backslash k}\|_F^2}{k}I\right)U\bar\Sigma^{-1} = I.$$
So we have:
$$X_j + \frac{1}{t}\bar\Sigma^{-1}U^TAA^TU\bar\Sigma^{-1} \preceq \frac{1}{tp_i}\cdot\bar\tau_i(A)\cdot I = \frac{\epsilon^2}{c\log(k/\delta)\sum_i\tilde\tau_i}\cdot\frac{\sum_i\tilde\tau_i}{\tilde\tau_i}\cdot\bar\tau_i(A)\cdot I \preceq \frac{\epsilon^2}{c\log(k/\delta)}I.$$
Additionally,
$$\frac{1}{t}\bar\Sigma^{-1}U^TAA^TU\bar\Sigma^{-1} = \frac{\epsilon^2}{c\log(k/\delta)\sum_i\tilde\tau_i}\cdot\bar\Sigma^{-2}\Sigma^2 \preceq \frac{\epsilon^2}{c\log(k/\delta)}I,$$
where the inequality follows from the fact that:
$$\sum_i\tilde\tau_i \ge \sum_i\bar\tau_i(A) = \operatorname{tr}\left(A^TU\bar\Sigma^{-2}U^TA\right) = \operatorname{tr}\left(U\bar\Sigma^{-2}\Sigma^2U^T\right) = \operatorname{tr}\left(\bar\Sigma^{-2}\Sigma^2\right) \ge \|\bar\Sigma^{-2}\Sigma^2\|_2.$$
Overall this gives $\|X_j\|_2 \le \frac{\epsilon^2}{c\log(k/\delta)}$. Next we bound the variance of $Y$.
$$\begin{aligned}
\mathbb{E}(Y^2) = t\cdot\mathbb{E}(X_j^2) &= \frac{1}{t}\sum_i p_i\cdot\left[\frac{1}{p_i^2}\bar\Sigma^{-1}U^Ta_ia_i^TU\bar\Sigma^{-2}U^Ta_ia_i^TU\bar\Sigma^{-1} - \frac{2}{p_i}\bar\Sigma^{-1}U^Ta_ia_i^TU\bar\Sigma^{-2}U^TAA^TU\bar\Sigma^{-1} + \bar\Sigma^{-1}U^TAA^TU\bar\Sigma^{-2}U^TAA^TU\bar\Sigma^{-1}\right] \\
&= \frac{1}{t}\sum_i\frac{\sum_i\tilde\tau_i}{\tilde\tau_i}\cdot\bar\tau_i(A)\cdot\bar\Sigma^{-1}U^Ta_ia_i^TU\bar\Sigma^{-1} - \frac{1}{t}\bar\Sigma^{-1}U^TAA^TU\bar\Sigma^{-2}U^TAA^TU\bar\Sigma^{-1} \\
&\preceq \frac{\epsilon^2}{c\log(k/\delta)}\bar\Sigma^{-1}U^TAA^TU\bar\Sigma^{-1} = \frac{\epsilon^2}{c\log(k/\delta)}\Sigma^2\bar\Sigma^{-2} \preceq \frac{\epsilon^2}{c\log(k/\delta)}D,
\end{aligned} \qquad (8)$$
where we set $D_{i,i} = 1$ for $i \in 1,\dots,k$ and $D_{i,i} = \frac{\sigma_i^2}{\sigma_i^2 + \frac{\|A_{\backslash k}\|_F^2}{k}}$ for all $i \in k+1,\dots,n$. By the stable rank matrix Bernstein inequality given in Theorem 7.3.1 of [Tro15], for $\epsilon < 1$,
$$\mathbb{P}\left[\|Y\| \ge \epsilon\right] \le \frac{4\operatorname{tr}(D)}{\|D\|_2}\cdot \exp\left(\frac{-\epsilon^2/2}{\frac{\epsilon^2}{c\log(k/\delta)}\left(\|D\|_2 + \epsilon/3\right)}\right). \qquad (9)$$
Clearly $\|D\|_2 = 1$. Furthermore, following Lemma 4, $\operatorname{tr}(D) \le 2k$. Plugging into (9), we see that
$$\mathbb{P}\left[\|Y\| \ge \epsilon\right] \le 8k\,\exp\left(-\frac{c\log(k/\delta)}{2(1+\epsilon/3)}\right) \le \delta/2,$$
if we choose the constant c large enough. So we have established (7).
3.1 Projection-Cost Preserving Sampling
We now use Theorem 5 to prove that sampling by ridge leverage scores is sufficient for constructing projection-cost preserving samples. The following theorem is a basic building block in our
O(nnz(A)) time low-rank approximation algorithm.
Theorem 6 (Projection-Cost Preservation). For $i \in \{1,\dots,d\}$, let $\tilde\tau_i \ge \bar\tau_i(A)$ be an overestimate for the $i$th ridge leverage score. Let $p_i = \frac{\tilde\tau_i}{\sum_i\tilde\tau_i}$. Let $t = \frac{c\log(k/\delta)}{\epsilon^2}\sum_i\tilde\tau_i$ for any $\epsilon < 1$ and some sufficiently large constant $c$. Construct $C$ by sampling $t$ columns of $A$, each set to $\frac{1}{\sqrt{tp_i}}a_i$ with probability $p_i$. With probability $1-\delta$, for any rank $k$ orthogonal projection $X$,
$$(1-\epsilon)\|A - XA\|_F^2 \le \|C - XC\|_F^2 \le (1+\epsilon)\|A - XA\|_F^2.$$
Note that the theorem also holds for independent sampling without replacement, as shown in
Appendix B. By Lemma 4, when each approximation τ̃i is within a constant factor of the true ridge
leverage score τ̄i (A), we obtain a projection-cost preserving sample with t = O(k log(k/δ)/ǫ2 ).
To simplify bookkeeping, we only worry about proving a version of Theorem 6 with (1 ± aǫ)
error for some constant a, and assume ǫ ≤ 1/2. By simply adjusting our constant oversampling
parameter, c, we can recover the result as stated.
The challenge in proving Theorem 6 comes from the mixed additive-multiplicative error of
Theorem 5. Pure multiplicative error, e.g. from a subspace embedding, or pure additive error, e.g.
from a “Frequent Directions” sketch [GLPW15], are easily converted to projection-cost preservation
results [Mus15], but merging the analysis is intricate. To do so, we split AAT and CCT into their
projections onto the top “head” singular vectors of A and onto the remaining “tail” singular vectors.
Restricted to the span of A’s top singular vectors, Theorem 5 gives a purely multiplicative bound.
Restricted to vectors spanned by A’s lower singular vectors, the bound is purely additive.
Proof. For notational convenience, let Y denote I − X, so kA − XAk2F = tr(YAAT Y) and kC −
XCk2F = tr(YCCT Y).
3.1.1 Head/Tail Split
Let $m$ be the index of the smallest singular value of $A$ such that $\sigma_m^2 \ge \|A - A_k\|_F^2/k$. Let $P_m$ denote $U_mU_m^T$ and $P_{\backslash m}$ denote $U_{\backslash m}U_{\backslash m}^T = I - P_m$. We split:
$$\operatorname{tr}(YAA^TY) = \operatorname{tr}(YP_mAA^TP_mY) + \operatorname{tr}(YP_{\backslash m}AA^TP_{\backslash m}Y) + 2\operatorname{tr}(YP_mAA^TP_{\backslash m}Y) = \operatorname{tr}(YA_mA_m^TY) + \operatorname{tr}(YA_{\backslash m}A_{\backslash m}^TY). \qquad (10)$$
The “cross terms” involving $P_mA$ and $P_{\backslash m}A$ equal 0 since the two matrices have mutually orthogonal rows (spanned by $V_m^T$ and $V_{\backslash m}^T$, respectively). Additionally, we split:
$$\operatorname{tr}(YCC^TY) = \operatorname{tr}(YP_mCC^TP_mY) + \operatorname{tr}(YP_{\backslash m}CC^TP_{\backslash m}Y) + 2\operatorname{tr}(YP_mCC^TP_{\backslash m}Y). \qquad (11)$$
In (11) cross terms do not cancel because, in general, Pm C and P\m C will not have orthogonal
rows, even though they have orthogonal columns. Regardless, while these terms make our analysis
more difficult, we proceed with comparing corresponding parts of (10) and (11).
3.1.2 Head Terms
We first bound the terms involving $P_m$, beginning by showing that:
$$\frac{1-\epsilon}{1+\epsilon}P_mCC^TP_m \preceq A_mA_m^T \preceq \frac{1+\epsilon}{1-\epsilon}P_mCC^TP_m. \qquad (12)$$
For any vector x, let y = Pm x. Note that xT Am ATm x = yT AAT y since Am ATm = Pm AAT Pm
and since Pm Pm = Pm . So, using (7) we can bound:
$$(1-\epsilon)y^TCC^Ty - \epsilon\frac{\|A_{\backslash k}\|_F^2}{k}y^Ty \le x^TA_mA_m^Tx \le (1+\epsilon)y^TCC^Ty + \epsilon\frac{\|A_{\backslash k}\|_F^2}{k}y^Ty. \qquad (13)$$
By our definition of $m$, $y$ is orthogonal to all singular directions of $A$ except those with squared singular value greater than or equal to $\|A_{\backslash k}\|_F^2/k$. It follows that
$$x^TA_mA_m^Tx = y^TAA^Ty \ge \frac{\|A_{\backslash k}\|_F^2}{k}y^Ty,$$
and accordingly, from the left side of (13), that (1 − ǫ)yT CCT y ≤ (1 + ǫ)xT Am ATm x. Additionally,
from the right side of (13), we have that (1 + ǫ)yT CCT y ≥ (1 − ǫ)xT Am ATm x. Since yT CCT y =
xT Pm CCT Pm x, these inequalities combine to prove (12). From (12) we can bound the diagonal
entries of YAm ATm Y in terms of the corresponding diagonal entries of YPm CCT Pm Y, which are
all positive, and conclude that:
$$\frac{1-\epsilon}{1+\epsilon}\operatorname{tr}(YP_mCC^TP_mY) \le \operatorname{tr}(YA_mA_m^TY) \le \frac{1+\epsilon}{1-\epsilon}\operatorname{tr}(YP_mCC^TP_mY).$$
Assuming $\epsilon < 1/2$, this is equivalent to:
$$(1-4\epsilon)\operatorname{tr}(YA_mA_m^TY) \le \operatorname{tr}(YP_mCC^TP_mY) \le (1+4\epsilon)\operatorname{tr}(YA_mA_m^TY). \qquad (14)$$
3.1.3 Tail Terms
For the lower singular directions of A, Theorem 5 does not give a multiplicative spectral approximation, so we do things a bit differently. Specifically, we start by noting that:
$$\operatorname{tr}(YA_{\backslash m}A_{\backslash m}^TY) = \operatorname{tr}(A_{\backslash m}A_{\backslash m}^T) - \operatorname{tr}(XA_{\backslash m}A_{\backslash m}^TX) \quad\text{and}\quad \operatorname{tr}(YP_{\backslash m}CC^TP_{\backslash m}Y) = \operatorname{tr}(P_{\backslash m}CC^TP_{\backslash m}) - \operatorname{tr}(XP_{\backslash m}CC^TP_{\backslash m}X).$$
We handle $\operatorname{tr}(A_{\backslash m}A_{\backslash m}^T) = \|A_{\backslash m}\|_F^2$ and $\operatorname{tr}(P_{\backslash m}CC^TP_{\backslash m}) = \|P_{\backslash m}C\|_F^2$ first. Since $C$ is constructed via an unbiased sampling of $A$'s columns, $\mathbb{E}\|P_{\backslash m}C\|_F^2 = \|A_{\backslash m}\|_F^2$ and a scalar Chernoff bound is sufficient for showing that this value concentrates around its expectation. Our proof is included as Lemma 20 in Appendix A and implies the following bound:
$$-\epsilon\|A_{\backslash k}\|_F^2 \le \operatorname{tr}(A_{\backslash m}A_{\backslash m}^T) - \operatorname{tr}(P_{\backslash m}CC^TP_{\backslash m}) \le \epsilon\|A_{\backslash k}\|_F^2. \qquad (15)$$
Next, we compare tr(XA\m AT\m X) to tr(XP\m CCT P\m X). We first claim that:
$$P_{\backslash m}CC^TP_{\backslash m} - \frac{4\epsilon}{k}\|A_{\backslash k}\|_F^2 I \preceq A_{\backslash m}A_{\backslash m}^T \preceq P_{\backslash m}CC^TP_{\backslash m} + \frac{4\epsilon}{k}\|A_{\backslash k}\|_F^2 I. \qquad (16)$$
The argument is similar to the one for (12). For a vector $x$, let $y = P_{\backslash m}x$. $x^TA_{\backslash m}A_{\backslash m}^Tx = y^TAA^Ty$ since $A_{\backslash m}A_{\backslash m}^T = P_{\backslash m}AA^TP_{\backslash m}$ and since $P_{\backslash m}P_{\backslash m} = P_{\backslash m}$. Applying (7) gives:
$$(1-\epsilon)y^TCC^Ty - \epsilon\frac{\|A_{\backslash k}\|_F^2}{k}y^Ty \le x^TA_{\backslash m}A_{\backslash m}^Tx \le (1+\epsilon)y^TCC^Ty + \epsilon\frac{\|A_{\backslash k}\|_F^2}{k}y^Ty.$$
Noting that $y^Ty \le x^Tx$ and assuming $\epsilon \le 1/2$ gives the following two inequalities:
$$y^TCC^Ty - 2\epsilon\frac{\|A_{\backslash k}\|_F^2}{k}x^Tx \le (1+2\epsilon)x^TA_{\backslash m}A_{\backslash m}^Tx, \qquad (17)$$
$$(1-2\epsilon)x^TA_{\backslash m}A_{\backslash m}^Tx \le y^TCC^Ty + 2\epsilon\frac{\|A_{\backslash k}\|_F^2}{k}x^Tx. \qquad (18)$$
By our choice of $m$, $x^TA_{\backslash m}A_{\backslash m}^Tx \le \frac{\|A_{\backslash k}\|_F^2}{k}x^Tx$. So, substituting $y$ with $P_{\backslash m}x$ and rearranging (17) and (18) gives (16).
(17) and (18) gives (16).
Now, since X is a rank k projection matrix, it can be written as X = ZZT where Z ∈ Rn×k is
a matrix with k orthonormal columns, z1 , . . . , zk . By cyclic property of the trace,
tr(XA\m AT\m X) = tr(ZT A\m AT\m Z) =
Similarly, tr(XP\m CCT P\m X) =
Pk
T
T
i=1 zi P\m CC P\m zi
k
X
i=1
zTi A\m AT\m zi .
and we conclude from (16) that:
tr(XP\m CCT P\m X) − 4ǫkA\k k2F ≤ tr(XA\m AT\m X) ≤ tr(XP\m CCT P\m X) + 4ǫkA\k k2F ,
which combines with (15) to give the final bound:
tr(YA\m AT\m Y) − 5ǫkA\k k2F ≤ tr(YP\m CCT P\m Y) ≤ tr(YA\m AT\m Y) + 5ǫkA\k k2F .
3.1.4
(19)
Cross Terms
Finally, we handle the cross term 2 tr(YPm CCT P\m Y). We do not have anything to compare this
term to, so we just need to show that it is small. To do so, we rewrite:
tr(YPm CCT P\m Y) = tr(YAAT (AAT )+ Pm CCT P\m ),
(20)
which is an equality since the columns of Pm CCT P\m fall in the span of A’s columns. We eliminate
the trailing Y using the cyclic property of the trace. hM, Ni = tr(M(AAT )+ NT ) is a semi-inner
11
product since AAT is positive semidefinite. Thus, by the Cauchy-Schwarz inequality,
| tr(YAAT (AAT )+ Pm CCT P\m )| ≤
q
tr(YAAT (AAT )+ AAT Y) · tr(P\m CCT Pm (AAT )+ Pm CCT P\m )
q
T
T
= tr(YAAT Y) · tr(P\m CCT Um Σ−2
m Um CC P\m )
q
q
2
= tr(YAAT Y) · kP\m CCT Um Σ−1
(21)
m kF .
To bound the second term, we separate:
kP\m CC
T
2
Um Σ−1
m kF
=
m
X
i=1
kP\m CCT ui k22 σi−2 .
(22)
We next show that the summand is small for every i. Take pi to be a unit vector in the direction
of CCT ui ’s projection onto P\m . I.e. pi = P\m CCT ui /kP\m CCT ui k2 . Then:
kP\m CCT ui k22 = (pTi CCui )2 .
√
Now, suppose we construct the vector m = σi−1 ui + kA kkF pi . From (7) we know that:
(23)
\k
(1 − ǫ)mT CCT m −
ǫkA\k k2F
k
mT m ≤ mT AAT m,
which expands to give:
(1 −
σi−2 uTi AAT ui +
√
2 k
pT CCT pi + (1 − ǫ)
+ (1 − ǫ)
pT CCT ui ≤
σi kA\k kF i
kA\k k2F i
ǫkA\k k2F T
ǫkA\k k2F T
k
k
T
T
m
m
=
1
+
m m.
+
p
AA
p
+
i
k
k
kA\k k2F
kA\k k2F i
k
ǫ)σi−2 uTi CCT ui
(24)
There are no cross terms on the right side because pi lies in the span of U\m and is thus orthogonal
to ui over AAT . Now, from (12) we know that uTi CCT ui ≥ (1 − 2ǫ)uTi AAT ui ≥ (1 − 2ǫ)σi2 . From
(16) we also know that pTi CCT pi ≥ pTi AAT pi − 4ǫ
kA\k k2F
k
. Plugging into (28) gives:
√
2 k
k
−2 T
T
T
T
p AA pi − 4ǫ + (1 − ǫ)
(1 − 3ǫ)σi ui AA ui + (1 − ǫ)
pT CCT ui
σi kA\k kF i
kA\k k2F i
≤1+
k
kA\k k2F
pTi AAT pi +
Noting that pTi AAT pi ≤
ǫkA\k k2F T
m m.
k
(25)
kA\k k2F
k
since pi lies in the column span of U\m , rearranging (25) gives:
√
ǫkA\k k2F T
2 k
m m ≤ 12ǫ.
pTi CCT ui ≤ 8ǫ +
(1 − ǫ)
σi kA\k kF
k
√ 2
√
The second inequality follows from the fact that σi−1 ≤ kA kkF so kmk22 ≤ kA2 kkF . Assuming
\k
\k
again that ǫ ≤ 1/2 gives our final bound:
√
k
pi CCT uTi ≤ 12ǫ
σi kA\k kF
(pi CCT uTi )2 ≤ 144ǫ2
12
σi2 kA\k k2F
.
k
(26)
Plugging into (22) gives:
2
kP\m CCT Um Σ−1
m kF ≤
m
X
144ǫ2
i=1
σi2 kA\k k2F −2
σi ≤ 288ǫ2 kA\k k2F .
k
(27)
Note that we get an extra factor of 2 because m ≤ 2k. Returning to (21), we conclude that:
q
q
| tr(YAAT (AAT )+ Pm CCT P\m )| ≤ tr(YAAT Y) · 288ǫ2 kA\k k2F ≤ 17ǫ tr(YAAT Y). (28)
The last inequality follows from the fact that kA\k k2F ≤ tr(YAAT Y) since A\k is the best rank k
approximation to A. tr(YAAT Y) = kA−XAk2F is the error of a suboptimal rank k approximation.
3.1.5
Final Bound
Ultimately, from(11), (14), (19), and (28), we conclude:
(1 − 4ǫ) tr(YAm ATm Y) + tr(YA\m AT\m Y) − 5ǫkA\k k2F − 34ǫ tr(YAAT Y) ≤ tr(YCCT Y)
≤ (1 + 4ǫ) tr(YAm ATm ) + tr(YA\m AT\m Y) + 5ǫkA\k k2F + 34 tr(YAAT Y).
Applying the fact that kA\k k2F ≤ tr(YAAT Y) proves Theorem 6 for a constant factor of ǫ.
3.2
Column Subset Selection
Although not required for our main low-rank approximation algorithm, we also prove that ridge
leverage score sampling can be used to obtain (1 + ǫ) error column subsets (Definition 2). The
column subset selection problem is of independent interest and the following result allows ridge
leverage scores to be used in our single-pass streaming algorithm for this problem (Section 6).
Theorem 7. For $i \in \{1,\dots,d\}$, let $\tilde\tau_i \ge \bar\tau_i(A)$ be an overestimate for the $i$th ridge leverage score. Let $p_i = \frac{\tilde\tau_i}{\sum_i\tilde\tau_i}$. Let $t = c\left(\log k + \frac{\log(1/\delta)}{\epsilon}\right)\sum_i\tilde\tau_i$ for $\epsilon < 1$ and some sufficiently large constant $c$. Construct $C$ by sampling $t$ columns of $A$, each set to $a_i$ with probability $p_i$. With probability $1-\delta$:
$$\|A - (CC^+A)_k\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2.$$
Furthermore, $C$ contains a subset of $O\!\left(\sum_i\tilde\tau_i/\epsilon\right)$ columns that satisfies Definition 2 and can be identified in polynomial time.
Note that (CC+ A)k is a rank k matrix in the column span of C, so Theorem 7 implies that
C is a (1 + ǫ) error column subset according to Definition 2. By Lemma 4, if each τ̃i is within a
constant factor of τ̄i (A), the approximate ridge leverage scores sum to O(k) so Theorem 7 gives a
column subset of size O (k log k + k/ǫ), which contains a near optimally sized column subset with
O (k/ǫ) columns. Again, the theorem also holds for sampling without replacement (see Appx. B).
Our proof relies on establishing a connection between ridge leverage sampling and well known
adaptive sampling techniques for column subset selection [DRVW06, DV06]. We start with the
following lemma on adaptive sampling for column subset selection:
Lemma 8 (Theorem 2.1 of [DRVW06]). Let $C$ be any subset of $A$'s columns and let $Z$ be an orthonormal matrix whose columns span those of $C$. If we sample an additional set $S$ of $O\!\left(\frac{k\log(1/\delta)}{\epsilon}\cdot\frac{\|A - ZZ^TA\|_F^2}{\|A_{\backslash k}\|_F^2}\right)$ columns from $A$ with probability proportional to $\|(A - ZZ^TA)_i\|_2^2$, then $[S\cup C]$ is a $(1+\epsilon)$ error column subset for $A$ with probability $(1-\delta)$.6
6 Theorem 2.1 was originally stated as an expected error result, but it can be seen to hold with constant probability via Markov's inequality and accordingly with $(1-\delta)$ probability when oversampling by a factor of $\log(1/\delta)$.
When C is a constant error column subset, then kA − ZZT Ak2F ≤ kA − (ZZT A)k k2F =
O(kA\k k2F ) and accordingly we only need O(k log(1/δ)/ǫ) additional adaptive samples. So one
potential algorithm for column subset selection is as follows: apply Theorems 5 and 6, sampling O(k log(k/δ)) columns by ridge leverage score to obtain a constant error projection-cost preserving sample C, which will also be a constant error column subset. Then sample O(k log(1/δ)/ǫ) additional columns adaptively against C.
However, it turns out that ridge leverage scores well approximate adaptive sampling probabilities computed with respect to any constant error additive-multiplicative spectral approximation
satisfying Theorem 5! That is, surprisingly, they achieve the performance of adaptive sampling
without being adaptive at all. Simply sampling O(k log(1/δ)/ǫ) more columns by ridge leverage
score and invoking Lemma 8 suffices to achieve (1 + ǫ) error.
Proof of Theorem 7. We formally prove that C is itself a good column subset before showing our
stronger guarantee, that it also contains a column subset of optimal size, up to constants.
3.2.1 Primary Column Subset Selection Guarantee
We split our sample $C$ into $C_1$, which contains the first $c\log(k/\delta)\sum_i\tilde\tau_i$ columns, and $C_2$, which contains the next $c\log(1/\delta)/\epsilon\cdot\sum_i\tilde\tau_i$ columns. Note that in our final sample complexity the $\log(1/\delta)$ factor in the size of $C_1$ is not shown as it is absorbed into the larger size of $C_2$ when $\log(1/\delta) > \log(k)$ and into the $\log(k)$ otherwise. By Theorem 6, we know that, appropriately reweighted, $C_1$ is a constant error projection-cost preserving sample of $A$. This means that $C_1$ is also a constant error column subset. Let $Z$ be an orthonormal matrix whose columns span the columns of $C_1$.
To invoke Lemma 8 to boost $C_1$ to a $(1+\epsilon)$ column subset, we need to sample columns with probabilities proportional to $\|(A - ZZ^TA)_i\|_2^2$. This is equivalent to sampling proportional to:
$$(a_i^T - a_i^TZZ^T)(a_i - ZZ^Ta_i) = a_i^Ta_i - 2a_i^TZZ^Ta_i + a_i^TZZ^TZZ^Ta_i = a_i^T\left(I - ZZ^T\right)a_i.$$
We can assume $\|A_{\backslash k}\|_F^2 > 0$ or else $C_1$ must fully span $A$'s columns and we're done. Scaling $\bar\tau_i(A)$:
$$\frac{\|A_{\backslash k}\|_F^2}{k}\bar\tau_i(A) = a_i^T\left(\frac{k}{\|A_{\backslash k}\|_F^2}AA^T + I\right)^+a_i.$$
Since $C_1$ satisfies Theorem 5 with constant error, for a large enough constant $c_1$,
$$\frac{k}{\|A_{\backslash k}\|_F^2}AA^T + I \preceq c_1\left(\frac{k}{\|A_{\backslash k}\|_F^2}C_1C_1^T + I\right) \preceq c_1\left(I + \frac{k\|C_1C_1^T\|_2}{\|A_{\backslash k}\|_F^2}ZZ^T\right).$$
Furthermore, $I - ZZ^T \preceq \left(I + cZZ^T\right)^+$ for any positive $c$, so
$$\left(\frac{k}{\|A_{\backslash k}\|_F^2}AA^T + I\right)^+ \succeq \frac{1}{c_1}\left(I + \frac{k\|C_1C_1^T\|_2}{\|A_{\backslash k}\|_F^2}ZZ^T\right)^+ \succeq \frac{1}{c_1}\left(I - ZZ^T\right).$$
So $\frac{c_1\|A_{\backslash k}\|_F^2}{k}\,\bar\tau_i(A) \ge \|(A - ZZ^TA)_i\|_2^2$ for all $i$ and hence $\frac{c_1\|A_{\backslash k}\|_F^2}{k}\,\tilde\tau_i \ge \|(A - ZZ^TA)_i\|_2^2$.
k
k
C2 is a set of c log(1/δ)/ǫ · i τ̃i columns sampled with probability proportional to approximate
ridge leverage scores. Consider forming C′2 by setting (C2 )i = 0 with probability:
So
k(A − ZZT A)j(i) k22
c1 kA\k k2F
τ̃j(i)
k
14
,
where j(i) is just the index of the column of A that (C2 )i is equal to. Clearly, if not equal to 0,
each column of C′2 is equal to ai with probability proportional to the adaptive sampling probability
k(A − ZZT A)i k22 . Additionally, in expectation, the number of nonzero columns will be:
!
T
2
X
X
k(A
−
ZZ
A)
k
τ̃
ck log(1/δ) kA − ZZT Ak2F
j 2
Pj
·
=
.
c log(1/δ)/ǫ ·
τ̃i ·
2
c1 kA\k k2F
c1 ǫ
τ̃i
kA
k
\k
i
F
τ̃
j
i
k
j
By a Chernoff bound, with probability 1 − δ/2 at least half this number of columns will be
nonzero, and by Lemma 8, for large enough c, conditioning on the above column count bound
holding, [C1 ∪ C′2 ] is a (1 + ǫ) error column subset for A with probability 1 − δ/2. Just noting that
span([C1 ∪ C′2 ]) ⊆ span([C1 ∪ C2 ]) and union bounding over the two possible fail conditions, gives
that [C1 ∪ C2 ] = C is a (1 + ǫ) column subset with probability at least 1 − δ.
3.2.2 Stronger Containment Guarantee
P
It now remains to show the second condition of Theorem 7: C contains a subset of O( i τ̃i /ǫ)
columns that also satisfies Definition 2. This follows from noting that we can apply, for example,
the polynomial time deterministic column selection algorithm of [CEM+ 15] to produce a matrix
C′1 with O(k) columns that is both a constant error additive-multiplicative spectral approximation
and a constant error projection-cost preserving sample for C1 . If C′1 has constant error for C1 , it
does for A as well and so is a constantPerror column subset.
O(log(1/δ))
. By our argument
C2 contains O(log(1/δ)) sets of O( i τ̃i /ǫ) columns, C12 , C22 , . . . , C2
probability.
So with
above, for each Ci2 , [C′1 , Ci2 ] is a (1+ ǫ) error column subset of A with constant
P
P
i
′
probability 1 − δ, at least one [C1 , C2 ] is good. This set contains just O(k + i τ̃i /ǫ) = O( i τ̃i /ǫ)
columns, giving the theorem.
4 Monotonicity of Ridge Leverage Scores
With our main sampling results in place, we focus on the algorithmic problem of how to efficiently
approximate the ridge leverage scores of a matrix A. In the offline setting, we will show that these
scores can be approximated in O(nnz(A)) time using a recursive sampling algorithm. We will also
show how to compute and sample by the scores in a single-pass column stream.
Both of these applications will require a unique stability property of the ridge leverage scores:
Lemma 9 (Ridge Leverage Score Monotonicity). For any A ∈ Rn×d and vector x ∈ Rn , for every
i ∈ 1, . . . , d we have:
τ̄i (A) ≤ τ̄i (A ∪ x),
where A ∪ x is simply A with x appended as its final column.
This statement is extremely natural, given that leverage scores are meant to be a measure of
importance. It ensures that the importance of a column can only decrease when additional columns
are added to A. While it holds for standard leverage scores, surprisingly no prior low-rank leverage
scores satisfy this property.
We begin by defining the generalized ridge leverage score as the ridge leverage score of a column
estimated using a matrix other than A itself.
Definition 10 (Generalized Ridge Leverage Score). For any $A \in \mathbb{R}^{n\times d}$ and $M \in \mathbb{R}^{n\times d'}$, the $i$th generalized ridge leverage score of $A$ with respect to $M$ is defined as:
$$\bar\tau_i^M(A) = \begin{cases} a_i^T\left(MM^T + \frac{\|M-M_k\|_F^2}{k}I\right)^+a_i & \text{for } a_i \in \operatorname{span}\left(MM^T + \frac{\|M-M_k\|_F^2}{k}I\right), \\ \infty & \text{otherwise.} \end{cases}$$
This definition is the intuitive one. Since our goal is typically to compute over-estimates of $\bar\tau_i(A)$ using $M$, if $a_i$ does not fall in the span of $MM^T + \frac{\|M-M_k\|_F^2}{k}I$ we conservatively set its generalized leverage score to $\infty$ instead of 0. Note that this case only applies when $M$ is rank $k$ and thus $\frac{\|M-M_k\|_F^2}{k}I$ is 0.
We now prove a general monotonicity theorem, from which Lemma 9 follows immediately by setting $M = A$ and $A = A \cup x$.
Theorem 11 (Generalized Monotonicity Bound). For any $A \in \mathbb{R}^{n\times d}$ and $M \in \mathbb{R}^{n\times d'}$ with $MM^T \preceq AA^T$ we have:
$$\bar\tau_i(A) \le \bar\tau_i^M(A).$$
Proof. We first note that $\|M - M_k\|_F^2 \le \|A - A_k\|_F^2$ since, letting $P_k$ be the projection onto the top $k$ column singular vectors of $A$, by the optimality of $M_k$ we have:
$$\|M - M_k\|_F^2 \le \|(I - P_k)M\|_F^2 \le \|(I - P_k)A\|_F^2 = \|A - A_k\|_F^2.$$
Accordingly,
$$MM^T + \frac{\|M - M_k\|_F^2}{k}I \preceq AA^T + \frac{\|A - A_k\|_F^2}{k}I.$$
Let $R$ be a projection matrix onto the column span of $MM^T + \frac{\|M-M_k\|_F^2}{k}I$. Since for any PSD matrices $B$ and $C$ with the same column span, $B \preceq C$ implies $B^+ \succeq C^+$ (see [MA77]), we have:
$$R\left(MM^T + \frac{\|M-M_k\|_F^2}{k}I\right)^+R \succeq R\left(AA^T + \frac{\|A-A_k\|_F^2}{k}I\right)^+R.$$
For any $a_i$ not lying in $\operatorname{span}\left(MM^T + \frac{\|M-M_k\|_F^2}{k}I\right)$, $\bar\tau_i^M(A) = \infty$ and the theorem holds trivially. Otherwise, we have $Ra_i = a_i$ and so:
$$\bar\tau_i(A) = a_i^TR\left(AA^T + \frac{\|A-A_k\|_F^2}{k}I\right)^+Ra_i \le a_i^TR\left(MM^T + \frac{\|M-M_k\|_F^2}{k}I\right)^+Ra_i = \bar\tau_i^M(A).$$
This gives the theorem.
5 Recursive Ridge Leverage Score Approximation
With Theorem 11 in place, we are ready to prove that ridge leverage scores can be approximated in
O(nnz(A)) time. Our work closely follows [CLM+ 15], which shows how to approximate traditional
leverage scores via recursive sampling.
16
5.1 Intuition and Preliminaries
The central idea behind recursive sampling is as follows: if we uniformly sample, for example, 1/2
of A’s columns to form C and compute ridge leverage score estimates with respect to just these
columns, by monotonicity, the estimates will upper bound A’s true ridge leverage scores. While
some of these upper bounds will be crude, we can show that their overall sum is small.
Accordingly, we can use the estimates to sample O(k log k) columns from A to obtain a constant
factor additive-multiplicative spectral approximation by Theorem 5, as well as a constant factor
projection-cost preserving sample by Theorem 6. This approximation is enough to obtain constant
factor estimates of the ridge leverage scores of A.
C may still be relatively large (e.g. half the size of A), but it can be recursively approximated
via the same sampling scheme, eventually giving our input sparsity time algorithm.
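To make the recursion concrete, here is a highly simplified sketch of this idea (an illustration under stated assumptions, not the paper's exact algorithm: it uses dense SVD-based score estimation on the subsample, ignores the careful constant-factor bookkeeping, and so does not achieve the O(nnz(A)) runtime analyzed in this section).

```python
import numpy as np

def approx_ridge_scores(A, k, rng, min_cols=200):
    """Recursively estimate (overestimates of) ridge leverage scores of A's columns.
    Base case: compute scores exactly on a small matrix. Recursive case: estimate
    scores of a uniform half of the columns, sample roughly O(k log k) of them to
    form C, and use C to estimate generalized ridge scores for every column of A."""
    n, d = A.shape
    if d <= min_cols:
        return exact_ridge_scores(A, A, k)
    half = rng.choice(d, size=d // 2, replace=False)
    sub_scores = approx_ridge_scores(A[:, half], k, rng, min_cols)
    p = sub_scores / sub_scores.sum()
    t = min(len(half), int(8 * k * np.log(k + 2)))
    picked = rng.choice(len(half), size=t, replace=True, p=p)
    C = A[:, half[picked]] / np.sqrt(t * p[picked])   # rescaled sample, as in Theorem 5
    return exact_ridge_scores(A, C, k)                # generalized scores w.r.t. C

def exact_ridge_scores(A, M, k):
    """Generalized ridge leverage scores of A's columns w.r.t. M (Definition 10),
    computed densely via the SVD of M; the ridge is clamped away from 0 for simplicity."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    lam = max((s[k:] ** 2).sum() / k, 1e-12)
    proj = U.T @ A
    return ((proj ** 2) / (s[:, None] ** 2 + lam)).sum(axis=0)

rng = np.random.default_rng(4)
A = rng.standard_normal((150, 3000))
scores = approx_ridge_scores(A, k=10, rng=rng)
print(scores.shape, float(scores.sum()))
```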
We first give a foundational lemma showing that an approximation of the form given by Theorems 5 and 6 is enough to give constant factor approximations to ridge leverage scores.
Lemma 12. Assume that, for an $\epsilon \le 1/2$, we have $C$ satisfying equation (7) from Theorem 5:
$$(1-\epsilon)CC^T - \frac{\epsilon}{k}\|A - A_k\|_F^2 I \preceq AA^T \preceq (1+\epsilon)CC^T + \frac{\epsilon}{k}\|A - A_k\|_F^2 I,$$
along with equation (3) from Definition 3:
$$(1-\epsilon)\|A - XA\|_F^2 \le \|C - XC\|_F^2 \le (1+\epsilon)\|A - XA\|_F^2, \quad \forall\ \text{rank }k\ X.$$
Then for all $i$,
$$(1-4\epsilon)\bar\tau_i(A) \le \bar\tau_i^C(A) \le (1+4\epsilon)\bar\tau_i(A).$$
Proof. Let P_k be the projection onto A's top k column singular vectors. By the optimality of C_k
in approximating C and the projection-cost preservation condition, we know that ‖C − C_k‖_F^2 ≤
‖C − P_k C‖_F^2 ≤ (1 + ǫ)‖A − A_k‖_F^2. Also, letting P̃_k be the projection onto C's top k column singular
vectors, we have (1 − ǫ)‖A − A_k‖_F^2 ≤ (1 − ǫ)‖A − P̃_k A‖_F^2 ≤ ‖C − C_k‖_F^2. So overall:

    (1 − ǫ)‖A − A_k‖_F^2 ≤ ‖C − C_k‖_F^2 ≤ (1 + ǫ)‖A − A_k‖_F^2.     (29)

Using the guarantee from Theorem 5 we have:

    (1 − ǫ)CC^T + ((1 − ǫ)‖A − A_k‖_F^2 / k) I ⪯ AA^T + (‖A − A_k‖_F^2 / k) I ⪯ (1 + ǫ)CC^T + ((1 + ǫ)‖A − A_k‖_F^2 / k) I.

Combining with our bound on ‖C − C_k‖_F^2 gives:

    (1 − ǫ)CC^T + (((1 − ǫ)/(1 + ǫ)) ‖C − C_k‖_F^2 / k) I ⪯ AA^T + (‖A − A_k‖_F^2 / k) I ⪯ (1 + ǫ)CC^T + (((1 + ǫ)/(1 − ǫ)) ‖C − C_k‖_F^2 / k) I,

and when ǫ ≤ 1/2, we can simplify to:

    (1 − 4ǫ)(CC^T + (‖C − C_k‖_F^2 / k) I) ⪯ AA^T + (‖A − A_k‖_F^2 / k) I ⪯ (1 + 4ǫ)(CC^T + (‖C − C_k‖_F^2 / k) I).

If ‖A − A_k‖_F^2 = 0, and thus by (29) ‖C − C_k‖_F^2 = 0, then A and C must have the same column
span or else it could not hold that (1 − 4ǫ)CC^T ⪯ AA^T ⪯ (1 + 4ǫ)CC^T. On the other hand, if
‖A − A_k‖_F^2 > 0, and thus by (29) ‖C − C_k‖_F^2 > 0, both AA^T + (‖A_{\k}‖_F^2 / k) I and CC^T + (‖C_{\k}‖_F^2 / k) I
span all of R^n. Either way, the two matrices have the same span and so by [MA77] we have:

    (1 − 4ǫ)(AA^T + (‖A_{\k}‖_F^2 / k) I)^+ ⪯ (CC^T + (‖C_{\k}‖_F^2 / k) I)^+ ⪯ (1 + 4ǫ)(AA^T + (‖A_{\k}‖_F^2 / k) I)^+,

which gives the lemma.
Our next lemma, which is analogous to Theorem 2 of [CLM+ 15], shows that by reweighting a
small number of columns in A, we can obtain a matrix with all ridge leverage scores bounded by
a small constant, which ensures that it can be well approximated by uniform sampling.
Lemma 13 (Ridge Leverage Score Bounding Column Reweighting). For any A ∈ R^{n×d} and any
score upper bound u > 0, there exists a diagonal matrix W ∈ R^{d×d} with 0 ⪯ W ⪯ I such that:

    ∀i, τ̄_i(AW) ≤ u,     (30)

and

    |{i : W_ii ≠ 1}| ≤ 3k/u.     (31)
Proof. This result follows from Theorem 2 of [CLM+ 15], to which we refer the reader for details. To
show the existence of a reweighting W satisfying (30) and (31), we will argue that a simple iterative
process (which we never actually need to implement) converges on the necessary reweighting.
Specifically, if a column has too high of a leverage score, we simply decrease its weight until
τ̄i (AW) ≤ u. We want to argue that, given AW0 with τ̄i (AW0 ) > u, we can decrease the weight
on ai to produce W1 with τ̄i (AW1 ) ≤ u. By Lemma 5 of [CLM+ 15] we can always decrease
the weight on ai to ensure τi (AW1 ) ≤ u, where τi (·) is the traditional leverage score. And since
(AW_1^2 A^T + (‖(AW_1)_{\k}‖_F^2 / k) I)^+ ⪯ (AW_1^2 A^T)^+, we have τ̄_i(AW_1) ≤ τ_i(AW_1), so an equivalent or smaller
weight decrease suffices to decrease τ̄_i(AW_1) below u.
Furthermore, we can see that τ̄_i(AW) is continuous with respect to W. This is due to the fact
that both the traditional leverage scores of AW (shown in Lemma 6 of [CLM+ 15]) and ‖(AW)_{\k}‖_F^2
are continuous in W. From Theorem 2 of [CLM+ 15], continuity implies that iteratively reweighting
individual columns converges, and thus there always exists a reweighting satisfying (30).
It remains to show that this reweighting satisfies (31). By continuity, we can always decrease
τ̄i (AW0 ) to exactly u unless τ̄i (AW) = 1, in which case the only option is to set the weight on the
column to 0 and hence set τ̄_i(AW) = 0. However, if ‖A_{\k}‖_F^2 > 0, then every ridge leverage score
is strictly less than 1. If ‖A_{\k}‖_F^2 = 0, then A has rank k, the ridge leverage scores are the same as
the true leverage scores, and the number of columns with leverage score 1 is at most k. Therefore,
by Theorem 2 of [CLM+ 15], monotonicity, and the fact that ∑_i τ̄_i(AW) ≤ 2k for any W, we have
the lemma.
5.2 Uniform Sampling for Ridge Leverage Score Approximation
Using Lemmas 12 and 13 we can prove the key step of our recursive sampling method: if we
uniformly sample columns from A and use them to estimate ridge leverage scores, these scores can
be used to resample a set of columns that give constant factor ridge leverage scores approximations.
Theorem 14 (Ridge Leverage Score Approximation via Uniform Sampling). Given A ∈ R^{n×d},
construct C_u by independently sampling each column of A with probability 1/2. Let

    τ̃_i = min{1, τ̄_i^{C_u}(A)}.

If we form C by sampling each column of A independently with probability p_i = min{1, τ̃_i c_1 log(k/δ)}
and reweighting by 1/√p_i if selected, then for large enough constant c_1, with probability 1 − δ, C
will have just O(k log(k/δ)) columns and will satisfy the conditions of Lemma 12 for some constant
error. Accordingly, we have:

    (1/2)τ̄_i(A) ≤ τ̄_i^C(A) ≤ 2τ̄_i(A).
Proof. Clearly C_u C_u^T ⪯ AA^T, so by the monotonicity shown in Theorem 11 we have τ̄_i^{C_u}(A) ≥
τ̄_i(A). Since τ̄_i(A) is always ≤ 1, it follows that τ̃_i = min{1, τ̄_i^{C_u}(A)} ≥ τ̄_i(A). Then we
can just use the τ̃_i's obtained from C_u in independent sampling versions of Theorems 5 and 6,
which can be proven from Lemmas 21 and 22 in Appendix B. Accordingly, with probability 1 −
δ/3, C gives a constant factor additive-multiplicative spectral approximation and projection-cost
preserving sample of A. Hence by Lemma 12, τ̄_i^C(A) is a constant factor approximation to τ̄_i(A).
To prove the theorem, we still have to show that C does not have too many columns. Its
expected number of columns is:
    ∑_i p_i = ∑_i min{1, τ̃_i c_1 log(k/δ)}.

By Lemma 13 instantiated with u = 1/(2c_2 log(k/δ)), we know that there is some reweighting matrix W
with only 3k · 2c_2 log(k/δ) entries not equal to 1 such that τ̄_i(AW) ≤ 1/(2c_2 log(k/δ)) for all i. We have:
    ∑_i p_i = ∑_{i:W_ii≠1} p_i + ∑_{i:W_ii=1} p_i
            ≤ 6k c_2 log(k/δ) + ∑_{i:W_ii=1} c_1 log(k/δ) · τ̄_i^{C_u}(A)
            = 6k c_2 log(k/δ) + c_1 log(k/δ) · ∑_{i:W_ii=1} τ̄_i^{C_u}(AW)
            ≤ 6k c_2 log(k/δ) + c_1 log(k/δ) · ∑_{i:W_ii=1} τ̄_i^{C_u W}(AW)
            ≤ 6k c_2 log(k/δ) + c_1 log(k/δ) · ∑_i τ̄_i^{C_u W}(AW).     (32)
Now, since every ridge leverage score of AW is bounded by 1/(2c_2 log(k/δ)), if c_2 is set large enough,
the uniformly sampled C_u W is a proper ridge leverage score oversampling of AW, except that its
columns were not reweighted by a factor of √2 (they were each sampled with probability 1/2).
Accordingly, with probability 1 − δ/3, √2 C_u W satisfies the approximation conditions of Lemma
12 for AW with ǫ = 1/2. Thus, for all i, (1/2)τ̄_i^{C_u W}(AW) = τ̄_i^{√2 C_u W}(AW) ≤ 3τ̄_i(AW). By Lemma
4, ∑_i τ̄_i(AW) ≤ 2k so overall ∑_i τ̄_i^{C_u W}(AW) ≤ 12k. Plugging back in to (32), we conclude that
C has O(k log(k/δ)) columns in expectation, and actually with probability 1 − δ/3 by a Chernoff
bound. Union bounding over our failure probabilities gives the theorem.
5.3 Basic Recursive Algorithm
Theorem 14 immediately proves correct Algorithm 1 for ridge leverage score approximation:
Algorithm 1 Repeated Halving
input: A ∈ Rn×d
output: A reweighted column sample C ∈ Rn×O(k log(k/δ)) satisfying the guarantees of Theorems
5 and 6 with constant error.
1: Uniformly sample d/2 columns of A to form Cu
2: If Cu has > O(k log k) columns, recursively apply Repeated Halving to compute a constant
factor approximation C̃u for Cu with O(k log k) columns.
3: Compute generalized ridge leverage scores of A with respect to C̃u
4: Use these estimates to sample columns of A to form C
5: return C
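For intuition, here is a small dense sketch of Algorithm 1 that reuses the generalized-score routine sketched earlier. It follows the recursion but relies on exact pseudoinverses rather than the Õ(nnz(A))-time machinery analyzed below; the constant c1, the recursion cutoff, and the clipping of scores to 1 are illustrative choices of ours.

    import numpy as np

    def repeated_halving(A, k, rng, c1=10.0, delta=0.1):
        """Dense, unoptimized sketch of Algorithm 1 (Repeated Halving). Returns a
        reweighted column sample C built from approximate ridge leverage scores."""
        d = A.shape[1]
        target = int(np.ceil(c1 * k * np.log(max(k, 2) / delta)))
        # Step 1: uniformly sample roughly d/2 columns to form Cu.
        Cu = A[:, rng.random(d) < 0.5]
        # Step 2: recursively shrink Cu if it is still much larger than the target size.
        if Cu.shape[1] > 2 * target:
            Cu = repeated_halving(Cu, k, rng, c1, delta)
        # Step 3: generalized ridge leverage scores of A with respect to the (approximated) Cu.
        scores = np.minimum(1.0, generalized_ridge_leverage_scores(A, Cu, k))
        # Step 4: independent sampling with probability p_i, reweighting kept columns by 1/sqrt(p_i).
        p = np.minimum(1.0, scores * c1 * np.log(max(k, 2) / delta))
        keep = rng.random(d) < p
        return A[:, keep] / np.sqrt(p[keep])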
Note that, by Lemma 12, generalized ridge leverage scores computed with respect to C̃u are
constant factor approximations to generalized ridge leverage scores computed with respect to Cu .
Accordingly, by Theorem 14, we conclude that C is a valid ridge leverage score sampling of A.
Before giving our full input sparsity time result, we warm up with a simpler theorem that
obtains a slightly suboptimal runtime.
Lemma 15. A simple implementation of Algorithm 1 that succeeds with probability 1 − δ runs in
O(nnz(A) log(d/δ)) + Õ(nk^2) time.
For clarity of exposition, we use Õ(·) to hide log factors in k, d, and 1/δ on the lower order term.
Proof. The algorithm has log(d/k) levels of recursion and, since we sample our matrix uniformly,
nnz(A) is cut approximately in half at each level, with high probability. It thus suffices to show
that the work done at the top level is O(nnz(A) log(d/δ)) + Õ(nk^2).
To compute the generalized ridge leverage scores of A with respect to C̃u we must (approximately) compute, for each ai ,
    a_i^T (C̃_u C̃_u^T + (‖C̃_u − (C̃_u)_k‖_F^2 / k) I)^+ a_i.     (33)

We are going to ignore that C̃_u C̃_u^T + (‖C̃_u − (C̃_u)_k‖_F^2 / k) I could be sparse and well conditioned (and
thus ideal for iterative solvers) and use direct methods for simplicity.
Let λ denote ‖C̃_u − (C̃_u)_k‖_F^2 / k and let R ∈ R^{n×Õ(k)} be an orthonormal basis containing the left
singular vectors of C̃_u. We can rewrite:

    C̃_u C̃_u^T + λI = C̃_u C̃_u^T + λRR^T + λ(I − RR^T),

and accordingly, using the fact that RR^T and (I − RR^T) are orthogonal,

    (C̃_u C̃_u^T + λI)^+ = (C̃_u C̃_u^T + λRR^T)^+ + (1/λ)(I − RR^T).

Now, using an SVD of C̃_u, which can be computed in Õ(nk^2) time, we compute λ and then
write (C̃_u C̃_u^T + λRR^T)^+ as RΣ^{-2}R^T for some diagonal matrix Σ ∈ R^{Õ(k)×Õ(k)}. Accordingly, to
evaluate (33), we just need to compute:

    a_i^T (RΣ^{-2}R^T + (1/λ)(I − RR^T)) a_i = ‖(RΣ^{-1}R^T + (1/√λ)(I − RR^T)) a_i‖_2^2.
Since R has Õ(k) columns, naively evaluating this norm for all of A's columns would require a
total of Õ(nnz(A)k) time. However, we can accelerate the computation via a Johnson-Lindenstrauss
embedding technique that has become standard for computing regular leverage scores [SS11].
Specifically, denoting RΣ^{-1}R^T + (1/√λ)(I − RR^T) as M, we can embed M's columns into
O(log(d/δ)) dimensions by multiplying on the left by a matrix Π ∈ R^{O(log(d/δ))×n} with scaled
random Gaussian or random sign entries. Even though M is n×n, we can perform the multiplication
in Õ(nk log(d/δ)) time by working with our factored form of the matrix.
By standard Johnson-Lindenstrauss results, ‖ΠMa_i‖_2^2 will be within a constant factor of
‖Ma_i‖_2^2 for all i with probability 1 − δ. Furthermore, we can evaluate ‖ΠMa_i‖_2^2 for all a_i in
O(nnz(A) log(d/δ)) total time.
Our final cost for approximating all ridge leverage scores is thus
O(nnz(A) log(d/δ)) + Õ(nk^2) time, which gives the lemma.
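The following numpy sketch mirrors the computation just described: it forms λ, the factored square root M = RΣ^{-1}R^T + (1/√λ)(I − RR^T), and a Gaussian Johnson-Lindenstrauss matrix Π, and returns ‖ΠMa_i‖_2^2 for every column. It is a dense illustration only (it does not exploit nnz(A)); the sketch size and the fallback for rank-deficient C are our own choices.

    import numpy as np

    def fast_ridge_score_estimates(A, C, k, sketch_rows=None, rng=None):
        """Estimate generalized ridge leverage scores of A w.r.t. C via the factored
        pseudoinverse and a Johnson-Lindenstrauss embedding, as in the proof of Lemma 15."""
        rng = np.random.default_rng() if rng is None else rng
        n, d = A.shape
        if sketch_rows is None:
            sketch_rows = max(8, int(20 * np.log(max(d, 2))))
        R, sigma, _ = np.linalg.svd(C, full_matrices=False)   # left singular vectors of C
        lam = np.sum(sigma[k:] ** 2) / k                       # ||C - C_k||_F^2 / k
        if lam == 0:
            raise ValueError("C has rank <= k; use an exact span-aware routine instead")
        Pi = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
        PiR = Pi @ R
        # Pi M = Pi R (Sigma^2 + lam)^{-1/2} R^T + (1/sqrt(lam)) Pi (I - R R^T), kept in factored form.
        PiM = (PiR / np.sqrt(sigma ** 2 + lam)) @ R.T + (Pi - PiR @ R.T) / np.sqrt(lam)
        return np.sum((PiM @ A) ** 2, axis=0)                  # ||Pi M a_i||_2^2 for each column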
5.4 True Input-Sparsity Time
Sharpening Lemma 15 to eliminate log factors on the nnz(A) runtime term requires standard
optimizations for approximating leverage scores with respect to a subsample [LMP13, CLM+ 15].
In particular, we can actually apply a Johnson-Lindenstrauss embedding matrix to M with just
θ^{-1} rows for some small constant θ. Doing so will approximate each ridge leverage score to within
a factor of d^θ with high probability (see Lemma 4.5 of [LMP13] for example).
This level of approximation is sufficient to resample O(k d^θ log(k/δ)) columns from A to form
an approximation C′ that satisfies the guarantees of Theorems 5 and 6. To form C, we further
sample C′ down to O(k log(k/δ)) columns using its ridge leverage scores, which takes Õ(n k^2 d^{2θ})
time. Finally, under the reasonable assumption that ǫ and δ are poly(n), we can also assume
d = poly(n). Otherwise, nnz(A) ≥ d dominates the Õ(n k^2 d^{2θ}) term. This yields the following:

Lemma 16. An optimized implementation of Algorithm 1, succeeding with probability 1 − δ, runs
in time O(θ^{-1} nnz(A)) + Õ(n^{1+θ} k^2), for any θ ∈ (0, 1].
Once we have used Algorithm 1 to obtain C satisfying the guarantees of Theorems 5 and 6 with
constant error, we can approximate A’s ridge leverage scores and resample one final time to obtain
an ǫ error projection-cost preserving sketch. This immediately yields our main algorithmic result:
Theorem 1. For any θ ∈ (0, 1], there exists an iterative column sampling algorithm that, in time
O(θ^{-1} nnz(A)) + Õ(n^{1+θ} k^2 / ǫ^4), returns Z ∈ R^{n×k} satisfying:

    ‖A − ZZ^T A‖_F^2 ≤ (1 + ǫ)‖A − A_k‖_F^2.     (34)

All significant linear algebraic operations of the algorithm involve matrices whose columns are subsets of those of A, and thus inherit any structure from the original matrix, including sparsity.

Proof. We use the same technique as Lemma 16, but in the last round of sampling we select
O(k n^{θ/2} log(k/δ)/ǫ^2) columns to obtain an O(ǫ) factor projection-cost preserving sample, C. Setting Z
to the top k column singular vectors of C, which takes Õ(n^{1+θ} k^2/ǫ^4) time, gives (34) [CEM+ 15].
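An end-to-end usage sketch, tying the toy routines above together: sample C with the illustrative repeated_halving function, take Z to be the top k left singular vectors of C, and compare the resulting projection error against ‖A − A_k‖_F^2. The test matrix and constants are arbitrary; this illustrates the workflow of Theorem 1, not its runtime.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, k = 100, 500, 5
    A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) + 0.05 * rng.standard_normal((n, d))

    C = repeated_halving(A, k, rng)                             # column sample (sketch above)
    Z = np.linalg.svd(C, full_matrices=False)[0][:, :k]         # top k left singular vectors of C

    err = np.linalg.norm(A - Z @ (Z.T @ A), 'fro') ** 2
    opt = np.sum(np.linalg.svd(A, compute_uv=False)[k:] ** 2)   # ||A - A_k||_F^2
    print(f"columns kept: {C.shape[1]}, error ratio err/opt: {err / opt:.3f}")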
6 Streaming Ridge Leverage Score Sampling
We conclude with an application of our results to novel low-rank sampling algorithms for single-pass column streams. While random projection algorithms work naturally in the streaming setting, the study of single-pass streaming column sampling has been limited to the “full-rank” case
[KL13, CMP15, KLM+ 14]. Column subset selection algorithms based on simple norm sampling are
adaptable to streams, but do not give relative error approximation guarantees [DKM06a, FKV04].
Relative error algorithms are obtainable by combining our projection-cost preserving sampling
procedures with the “merge-and-reduce” framework for coresets [BS80, AHPV04, HPM04]. This
approach relies on the composability of projection-cost preserving samples: a (1 + ǫ) error sample
for A unioned with a (1 + ǫ) error sample for B gives a (1 + ǫ) error sample for [A, B] [FSS13].
However, merge-and-reduce requires storage of O(log^4(d) · k/ǫ^2) scaled columns from A, where d is
the length of our stream (and its value is known ahead of time).
Our algorithms eliminate the log^c d stream length dependence, storing a fixed number of columns
that only depends on ǫ and k. We note that our space bounds are given in terms of the number of
real numbers stored. We do not bound the required precision of these numbers, which would include
at least a single logarithmic dependence on d. In particular, we employ a Frequent Directions sketch
that requires words with at least Θ(log(nd)) bits of precision. Rigorously bounding the maximum word size required for Frequent Directions and our algorithms could be an interesting direction for future
work.
6.1 General Approach
The basic idea behind our algorithms is quite simple and follows intuition from prior work on
standard leverage score sampling [KL13]. Suppose we have some space budget t for storing a
column sample C. As soon as we have streamed in t columns, we can downsample by ridge
leverage scores to say t/2 columns. As more columns are received, we will eventually reach our
storage limit and need to downsample columns again. Doing so naively would compound error: if
we resampled r times, our final sample would have error (1 + ǫ)^r.
However, we can avoid compounding error by exploiting Lemma 9, which ensures that, as new
columns are added, the ridge leverage scores of columns already seen only decrease. Whenever we
add a column to C, we can record the probability with which it was kept. In the next round of sampling,
we only discard that column with probability equal to the proportion that its ridge leverage score
decreased by (or keep the column with probability 1 if the score remained constant). New columns
are simply sampled by ridge leverage score. This process ensures that, at any point in the stream,
we have a set of columns sampled by true ridge scores with respect to the matrix seen so far.
Accordingly, we will have a (1 + ǫ) error column subset or projection-cost preserving sample at the
end of the stream.
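The pruning rule just described can be phrased as a small rejection-sampling step. The sketch below is schematic (it is not Algorithm 2 itself): `kept` maps a column index to the score-proportional probability recorded when the column was admitted, and `new_scores` holds the fresh estimates, which by Lemma 9 can only have decreased.

    import numpy as np

    def prune_step(kept, new_scores, rng):
        """One pruning pass: a stored column survives with probability new/old, so that
        overall it remains kept with probability proportional to its current score."""
        survivors = {}
        for j, p_old in kept.items():
            p_new = min(p_old, new_scores[j])      # ridge leverage scores are non-increasing
            if rng.random() < p_new / p_old:       # keep with the ratio of new to old probabilities
                survivors[j] = p_new               # record the probability it is now kept with
        return survivors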
This overview hides a number of details, the most important of which is how to compute ridge
leverage scores at any given point in the stream with respect to the columns of A observed so far. We
do not have direct access to these columns since we have only stored a subset of them. We could use
the fact that our current sample is projection-cost preserving and can be used to approximate ridge
leverage scores (see Lemma 12). However, this approach would introduce sampling dependencies
between columns and would require a logarithmic dependence on stream length to ensure that our
approximation does not fail at any round of sampling.
6.2 Frequent Directions for Approximating Ridge Leverage Scores
Instead, we use a constant error deterministic “Frequent Directions” sketch to estimate ridge leverage scores. Introduced in [Lib13] and further analyzed in a series of papers culminating with
[GLPW15], Frequent Directions sketches are easily maintained in a single-pass column stream of
A. The sketch always provides an approximation B ∈ R^{n×(ℓ+1)k} guaranteeing:

    BB^T ⪯ AA^T ⪯ BB^T + (1/ℓ)(‖A_{\k}‖_F^2 / k) I.     (35)
B does not contain columns from A, so it could be dense even for a sparse input matrix. However,
we will only be setting ℓ to a small constant. Precise information about A will be stored in our
column sample C, which maintains sparsity.
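For reference, a compact column-streaming implementation of a Frequent Directions sketch is given below. It follows the standard shrink-by-the-smallest-retained-direction update of [Lib13]; the class name and the zero-padding details are ours, and in the notation above one would instantiate it with m = (ℓ + 1)k columns (m = 3k for ℓ = 2).

    import numpy as np

    class FrequentDirections:
        """Column-streaming Frequent Directions sketch: maintains B in R^{n x m} with
        B B^T <= A A^T <= B B^T + (||A - A_k||_F^2 / (m - k)) I for any k < m."""
        def __init__(self, n, m):
            self.B = np.zeros((n, m))
            self.next = 0                              # next free column slot

        def update(self, a):
            if self.next == self.B.shape[1]:
                self._shrink()                         # frees at least one slot
            self.B[:, self.next] = a
            self.next += 1
            return self

        def _shrink(self):
            U, s, _ = np.linalg.svd(self.B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))   # subtract smallest squared singular value
            B_new = np.zeros_like(self.B)
            B_new[:, :len(s)] = U * s                  # columns scaled by the shrunken singular values
            self.B = B_new
            self.next = int(np.count_nonzero(s))       # zeroed directions become free slots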
We first show that B can be used to compute constant factor approximations to the ridge
leverage scores of A.
Lemma 17. For every column a_i ∈ A, define

    τ̃_i := a_i^T (BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2) / k) I)^+ a_i.

If B ∈ R^{n×3k} is a Frequent Directions sketch for A with accuracy parameter ℓ = 2, then

    (1/2)τ̄_i(A) ≤ τ̃_i ≤ 2τ̄_i(A).

‖A‖_F^2 is obviously computable in a single-pass column stream, so τ̃_i can be evaluated in the
streaming setting as long as we have access to a_i.
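A direct transcription of this estimator (without the factor-4 scaling used later in ApproximateRidgeScores) might look as follows. Recomputing the SVD and pseudoinverse per column is wasteful, and a real implementation would factor once per pruning round, but the formula itself is exactly the one in Lemma 17.

    import numpy as np

    def streaming_score_estimate(B, a, frob_A, k):
        """tau_tilde for one column a, from a Frequent Directions sketch B and the
        running squared Frobenius norm frob_A = ||A||_F^2 (Lemma 17, without the x4)."""
        s = np.linalg.svd(B, compute_uv=False)
        ridge = (frob_A - np.sum(s[:k] ** 2)) / k          # (||A||_F^2 - ||B_k||_F^2) / k
        K = B @ B.T + ridge * np.eye(B.shape[0])
        return float(a @ np.linalg.pinv(K) @ a)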
Proof. By the Frequent Directions guarantee, either BB^T = AA^T, giving the lemma trivially, or
‖A_{\k}‖_F^2 > 0. In this case, since BB^T ⪯ AA^T, ‖A‖_F^2 − ‖B_k‖_F^2 > 0. So both AA^T + (‖A_{\k}‖_F^2 / k) I and
BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2) / k) I span all of R^n. Recalling that τ̄_i(A) = a_i^T (AA^T + (‖A_{\k}‖_F^2 / k) I)^+ a_i, to prove
the lemma it suffices to show:

    (1/2)(BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2) / k) I) ⪯ AA^T + (‖A_{\k}‖_F^2 / k) I ⪯ 2(BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2) / k) I).     (36)

Recall that the squared Frobenius norm of a matrix is equal to the sum of its squared singular
values. Additionally, a standard property of the relation M ⪯ N is that, for all i, the ith singular
value σ_i(M) ≤ σ_i(N). From the right hand side of (35) it follows that, when ℓ = 2, σ_i^2(B) ≥
σ_i^2(A) − ‖A_{\k}‖_F^2/(2k). Accordingly, since ‖B_k‖_F^2 is the sum of the top k squared singular values of B,

    ‖A‖_F^2 − ‖B_k‖_F^2 ≤ ‖A‖_F^2 − ‖A_k‖_F^2 + k · (‖A_{\k}‖_F^2/(2k)) ≤ 1.5‖A_{\k}‖_F^2.

Since BB^T ⪯ AA^T, it follows that BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2)/k) I ⪯ AA^T + (1.5‖A_{\k}‖_F^2/k) I, which is
more than tight enough to give the left hand side of (36).
Furthermore, ‖A‖_F^2 − ‖B_k‖_F^2 ≥ ‖A_{\k}‖_F^2, and since ℓ = 2, BB^T ⪰ AA^T − (‖A_{\k}‖_F^2/(2k)) I. Overall,

    BB^T + ((‖A‖_F^2 − ‖B_k‖_F^2)/k) I ⪰ AA^T + (‖A_{\k}‖_F^2/(2k)) I,

which is more than tight enough to give the right hand side of (36).
6.3 Streaming Column Subset Selection
Lemma 17 gives rise to a number of natural algorithms for rejection sampling by ridge leverage score.
The simplest approach is to emulate sampling columns from A independently without replacement
(see Lemmas 21 and 22). However, since sampling without replacement produces a variable number
of samples, this method would require a log d dependence to ensure that our space remains bounded
throughout the algorithm’s execution with high probability.
Instead, we apply our “with replacement” bounds, which sample a fixed number of columns, t.
We start by describing Algorithm 2 for column subset selection. The constant c used below is the
necessary oversampling parameter from Theorem 7. C ∈ Rn×t stores our actual column subset and
D ∈ Rn×t stores a queue of new columns. B is a Frequent Directions sketch with parameter ℓ = 2.
Algorithm 2 Streaming Column Subset
input: A ∈ R^{n×d}, accuracy ǫ, success probability (1 − δ)
output: C ∈ R^{n×t} such that t = 32c(k log k + k log(1/δ)/ǫ) and each column c_i is equal to
column a_j with probability p_j ∈ [ (1/2)·τ̃_j c(k log k + k log(1/δ)/ǫ)/t , τ̃_j c(k log k + k log(1/δ)/ǫ)/t ] and 0 otherwise,
where τ̃_j ≥ 2τ̄_j(A) for all j and ∑_{j=1}^n τ̃_j ≤ 16k.
 1: count := 1, C := 0^{n×t}, D := 0^{n×t}, frobA := 0                            ⊲ Initialize storage
 2: [τ̃_1^old, ..., τ̃_t^old] := 1                                                 ⊲ Initialize sampling probabilities
 3: for i := 1, . . . , d do                                                      ⊲ Process column stream
 4:   B := FreqDirUpdate(B, a_i)
 5:   if count ≤ t then                                                           ⊲ Collect t new columns
 6:     d_count := a_i
 7:     frobA := frobA + ‖a_i‖_2^2                                                ⊲ Update ‖A‖_F^2
 8:     count := count + 1
 9:   else                                                                        ⊲ Prune columns
10:     [τ̃_1, ..., τ̃_t] := min([τ̃_1^old, ..., τ̃_t^old], ApproximateRidgeScores(B, C, frobA))
11:     [τ̃_1^D, ..., τ̃_t^D] := ApproximateRidgeScores(B, D, frobA)
12:     for j := 1, . . . , t do                                                  ⊲ Rejection sample
13:       if c_j ≠ 0 then
14:         With probability 1 − τ̃_j/τ̃_j^old set c_j := 0 and set τ̃_j^old := 1.
15:         Otherwise set τ̃_j^old := τ̃_j.
16:       end if
17:       if c_j = 0 then                                                         ⊲ Sample from new columns in D
18:         for ℓ := 1, . . . , t do
19:           With probability τ̃_ℓ^D c(k log k + k log(1/δ)/ǫ)/t set c_j := d_ℓ and set τ̃_j^old := τ̃_ℓ^D
20:         end for
21:       end if
22:     end for
23:     count := 0
24:   end if
25: end for

 1: function ApproximateRidgeScores(B, M ∈ R^{n×t}, frobA)
 2:   for i := 1, . . . , t do
 3:     τ̃_i := 4 m_i^T (BB^T + ((frobA − ‖B_k‖_F^2)/k) I)^+ m_i
 4:   end for
 5:   return [τ̃_1, ..., τ̃_t]
 6: end function
To prove the correctness of Algorithm 2, we first note that, if our output C has columns
belonging to the claimed distribution, then with probability (1 − δ), C is a (1 + ǫ) error column
subset for A satisfying the guarantees of Theorem 7. Our procedure is not quite equivalent to
the sampling procedure from Theorem 7 because we have some positive probability of choosing a
0 column (in fact, since ∑_{j=1}^n τ̃_j ≤ 16k, by our choice of t that probability is greater than 1/2 for
each column). However, Algorithm 2 samples from a distribution that is equivalent to sampling
from A with an all zeros column 0 tacked on and assigned a high ridge leverage score overestimate.
Furthermore, by inspecting Algorithm 2, we can see that each column is sampled independently, as
all ridge leverage score estimates are computed using the deterministic sketch B. Thus, we obtain
a column subset for [A ∪ 0], which is clearly also a column subset for A.
So, we just need to argue that we obtain an output according to the claimed distribution.
Consider the state of the algorithm after each set of t “Process column stream” iterations, or
equivalently, after each time the “Prune columns” else statement is entered. Denote A’s first t
columns as A(1) , its first 2t columns as A(2) , and in general, its first m · t columns as A(m) . These
submatrices represent the columns of A processed by the end of each epoch of t “Process column
stream” iterations. Let’s take as an inductive assumption that after every prior set of t steps, each
column in C equals:
    c = { a_j ∈ [A^{(m)}] with probability p_j ∈ [ (1/2)·τ̃_j c(k log k + k log(1/δ)/ǫ)/t , τ̃_j c(k log k + k log(1/δ)/ǫ)/t ],
        { 0 with probability (1 − ∑_j p_j),                                                          (37)

where τ̃_j ≥ 2τ̄_j(A^{(m)}) for all j and ∑_j τ̃_j ≤ 16k. This is simply equivalent to our claimed output
property of C once all columns have been processed.
(37) holds for A^{(1)} because all of its columns are initially stored in the buffer D and each
c is set to d_j with probability p_j = τ̃_j c(k log k + k log(1/δ)/ǫ)/t (see line 19). From Lemma 17 and our
chosen scaling by 4 (line 3 of ApproximateRidgeScores), we know that τ̃_j ≥ 2τ̄_j(A^{(1)}). Additionally,
τ̃_j ≤ 8τ̄_j(A^{(1)}), so it follows from Lemma 4 that ∑_j τ̃_j ≤ 16k.
For future iterations, A^{(m)} equals [A^{(m−1)}, D]. Consider the columns in A^{(m−1)} first. By our
inductive assumption each column in C has already been set to a_j ∈ A^{(m−1)} with probability
p_j ∈ [ (1/2)·τ̃_j^old c(k log k + k log(1/δ)/ǫ)/t , τ̃_j^old c(k log k + k log(1/δ)/ǫ)/t ]. Our “Rejection sample” step additionally
filters each such column, keeping it with probability τ̃_j/τ̃_j^old, meaning that in total a_j is sampled with
the desired probability from (37). We note that τ̃_j/τ̃_j^old is trivially ≤ 1 since τ̃_j was set to the
minimum of τ̃_j^old and the ridge leverage score of a_j computed with respect to our updated Frequent
Directions sketch (see line 10).
If it was set based on the updated Frequent Directions sketch, then the argument that τ̃j ≥
2τ̄j (A(m) ) is the same as for A(1) . On the other hand, if τ̃j was set to equal τ̃jold , then we can apply
Lemma 9: from the inductive assumption, τ̃j = τ̃jold ≥ 2τ̄j (A(m−1) ) and τ̄j (A(m−1) ) ≥ τ̄j (A(m) )
from the monotonicity property so τ̃j ≥ 2τ̄j (A(m) ).
Next consider any aj ∈ D. Each column c is set to aj with the correct probability for (37),
but only conditioned on the fact that c = 0 before the “Sample from new columns in D” if
statement is reached. This conditioning should mean that we effectively sample each aj ∈ D
with lower probability. However, the probability cannot be much lower: by our choice of t and
the inductive assumption on ∑_j τ̃_j, every column is set to 0 with at minimum 1/2 probability.
Accordingly, c is available at least half the time, meaning that we at least sample a_j with probability
p_j = (1/2)·τ̃_j c(k log k + k log(1/δ)/ǫ)/t, which satisfies (37).
All that is left to argue is that ∑_j τ̃_j ≤ 16k for A^{(m)}. The argument is the same as for A^{(1)}, the
only difference being that for some values of j, we could have set τ̃j = τ̃jold , which only decreases
the total sum. We conclude by induction that (37) holds for A itself, and thus C is a (1 + ǫ) error
column subset (Theorem 7). Algorithm 2 requires O(nk) space to store B and maintains at most
t = O(k log k + k log(1/δ)/ǫ) sampled columns. It thus proves Theorem 18:
Theorem 18 (Streaming Column Subset Selection). There exists a streaming algorithm that uses
just a single pass over A’s columns to compute a (1 + ǫ) error column subset C with O(k log k +
k log(1/δ)/ǫ) columns. The algorithm uses O(nk) space in addition to the space required to store
C and succeeds with probability 1 − δ.
We note that, by using the stronger containment condition of Theorem 7 and the streaming
projection-cost preserving sampling algorithm described below we can easily modify the above
algorithm to output an optimally sized column subset with O(k/ǫ) columns. In order to select this
subset, we require a Frequent Directions sketch with ǫ error, so that we can evaluate each O(k/ǫ)
sized subset in our set of O(k log(1/δ)/ǫ) ‘adaptively sampled’ columns and return one giving ǫ
error. The higher accuracy Frequent Directions sketch incurs space overhead O(nk/ǫ).
6.4 Streaming Projection-Cost Preserving Samples
Our single-pass streaming procedure for projection-cost preserving samples is similar to Algorithm
2, although with one important difference. When constructing column subsets, we sampled new
columns in the buffer D while ignoring the fact that “available slots” in C (i.e. columns currently
set to 0) had already been consumed with some probability. This decision was deliberate, rather
than a convenience for analysis. We could not account for the probability of slots being unavailable
because calculating that probability precisely would require knowing the ridge leverage scores of
already discarded columns.
Fortunately, the probability of a column not being set to 0 was bounded by 1/2 and our
procedure hits its sampling target up to this factor. However, while a constant factor approximation
to sampling probabilities is also sufficient for our Theorem 6 projection-cost preservation result,
the fact that columns need to be reweighted by the inverse of their sampling probability adds a
complication: we do not know the true probability with which we sampled each column!
Unfortunately, approximating the reweighting up to a constant factor is insufficient. We need
to reweight columns by a factor within (1 ± ǫ) of 1/√(t p_i) for Theorem 5 and Lemma 20 to hold
(which are both required for Theorem 6). This is easily checked by noting that such a reweighting
is equivalent to replacing CC^T with CWC^T where (1 − ǫ)I_{d×d} ⪯ W ⪯ (1 + ǫ)I_{d×d}.
We achieve this accuracy by modifying our algorithm so that it maintains an even higher “open
rate” within C. Specifically, we choose t so that each column c has at least a (1 − ǫ) probability of
equaling 0 at any given point in our stream. The procedure is given as Algorithm 3. The constant
c is the required oversampling parameter from Theorem 7.
Algorithm 3 Streaming Projection-Cost Preserving Samples
input: A ∈ Rn×d , accuracy ǫ, success probability (1 − δ)
output: C ∈ R^{n×t} such that t = (1/(16ǫ)) · ck log(k/δ)/ǫ^2 and each column c is equal to column
(1/√(τ̃_j ck log(k/δ)/ǫ^2)) · a_j with probability p_j ∈ [ (1 − ǫ)·τ̃_j ck log(k/δ)/ǫ^2 / t , τ̃_j ck log(k/δ)/ǫ^2 / t ] and 0 otherwise,
where τ̃_j ≥ 2τ̄_j(A) for all j and ∑_{j=1}^n τ̃_j ≤ 16k.
 1: count := 1, C := 0^{n×t}, D := 0^{n×t}, frobA := 0                            ⊲ Initialize storage
 2: [τ̃_1^old, ..., τ̃_t^old] := 1                                                 ⊲ Initialize sampling probabilities
 3: for i := 1, . . . , d do                                                      ⊲ Process column stream
 4:   B := FreqDirUpdate(B, a_i)
 5:   if count ≤ t then                                                           ⊲ Collect t new columns
 6:     d_count := a_i
 7:     frobA := frobA + ‖a_i‖_2^2                                                ⊲ Update ‖A‖_F^2
 8:     count := count + 1
 9:   else                                                                        ⊲ Prune columns
10:     [τ̃_1, ..., τ̃_t] := min([τ̃_1^old, ..., τ̃_t^old], ApproximateRidgeScores(B, C, frobA))
11:     [τ̃_1^D, ..., τ̃_t^D] := ApproximateRidgeScores(B, D, frobA)
12:     for j := 1, . . . , t do
13:       if c_j ≠ 0 then                                                         ⊲ Rejection sample
14:         With probability 1 − τ̃_j/τ̃_j^old set c_j := 0 and set τ̃_j^old := 1.
15:         Otherwise multiply c_j by √(τ̃_j^old/τ̃_j) and set τ̃_j^old := τ̃_j.
16:       end if
17:       if c_j = 0 then                                                         ⊲ Sample from new columns in D
18:         for ℓ := 1, . . . , t do
19:           With probability τ̃_ℓ^D ck log(k/δ)/ǫ^2 / t set c_j := (1/√(τ̃_ℓ^D ck log(k/δ)/ǫ^2)) · d_ℓ and set τ̃_j^old := τ̃_ℓ^D
20:         end for
21:       end if
22:     end for
23:     count := 0
24:   end if
25: end for

 1: function ApproximateRidgeScores(B, M ∈ R^{n×t}, frobA)
 2:   for i := 1, . . . , t do
 3:     τ̃_i := 4 m_i^T (BB^T + ((frobA − ‖B_k‖_F^2)/k) I)^+ m_i
 4:   end for
 5:   return [τ̃_1, ..., τ̃_t]
 6: end function
The analysis of Algorithm 3 is equivalent to that of Algorithm 2, along with the additional
observation that our true sampling probability, p_j, is within an ǫ factor of the sampling probability
used for reweighting, τ̃_j ck log(k/δ)/ǫ^2 / t. Note that while C contains just O(k log(k/δ)/ǫ^2) non-zero
columns in expectation, during the course of the column stream it could contain as many as
O(k log(k/δ)/ǫ^3) columns. Regardless, it is always possible to resample from C after running
Algorithm 3 to construct an optimally sized sample for A with error (1 + 2ǫ). Overall we have:
Theorem 19 (Streaming Projection-Cost Preserving Sampling). There exists a streaming algorithm that uses just a single pass over A’s columns to compute a (1 + ǫ) error projection-cost
preserving sample C with O(k log(k/δ)/ǫ^2) columns. The algorithm requires a fixed O(nk) space
overhead along with space to store O(k log(k/δ)/ǫ^3) columns of A. It succeeds with probability 1 − δ.
References
[AHPV04] Pankaj K. Agarwal, Sariel Har-Peled, and Kasturi R. Varadarajan. Approximating
extent measures of points. J. ACM, 51(4):606–635, 2004.
[AM15]
Ahmed El Alaoui and Michael W. Mahoney. Fast randomized kernel methods with statistical guarantees. In Advances in Neural Information Processing Systems 28 (NIPS),
2015. Full version at arXiv:1411.0306v1.
[AMS01]
Dimitris Achlioptas, Frank Mcsherry, and Bernhard Schölkopf. Sampling techniques
for kernel methods. In Advances in Neural Information Processing Systems 14 (NIPS),
2001.
[Bac13]
Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Proceedings
of the 26th Annual Conference on Computational Learning Theory (COLT), 2013.
[BDN15]
Jean Bourgain, Sjoerd Dirksen, and Jelani Nelson. Toward a unified theory of sparse dimensionality reduction in euclidean space. Geometric and Functional Analysis (GAFA),
25(4):1009–1088, 2015. Preliminary version in the 47th Annual ACM Symposium on
Theory of Computing (STOC).
[BJS15]
Srinadh Bhojanapalli, Prateek Jain, and Sujay Sanghavi. Tighter low-rank approximation via sampling the leveraged element. In Proceedings of the 26th Annual ACM-SIAM
Symposium on Discrete Algorithms (SODA), 2015.
[BK13]
Wei Bi and James Kwok. Efficient multi-label classification with many labels. In
Proceedings of the 30th International Conference on Machine Learning (ICML), pages
405–413, 2013.
[BS80]
Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: Static-todynamic transformation. Journal of Algorithms, 1(4):301–358, 1980.
[BW09a]
Mohamed-Ali Belabbas and Patrick J. Wolfe. On landmark selection and sampling
in high-dimensional data analysis. Philosophical Transactions of the Royal Society A,
367:4295–4312, 2009.
[BW09b]
Mohamed-Ali Belabbas and Patrick J. Wolfe. Spectral methods in machine learning:
New strategies for very large datasets. Proceedings of the National Academy of Sciences
of the USA, 106:369–374, 2009.
[BW14]
Christos Boutsidis and David P. Woodruff. Optimal CUR matrix decompositions. In
Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC),
pages 353–362, 2014.
[BZMD15] Christos Boutsidis, Anastasios Zouzias, Michael W. Mahoney, and Petros Drineas.
Randomized dimensionality reduction for k-means clustering. IEEE Transactions on
Information Theory, 61(2):1045–1062, Feb 2015.
[CBSW15] Yudong Chen, Srinadh Bhojanapalli, Sujay Sanghavi, and Rachel Ward. Coherent
matrix completion. Journal of Machine Learning Research, 2015. Preliminary version
in the 31st International Conference on Machine Learning (ICML).
[CEM+ 15] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina
Persu. Dimensionality reduction for k-means clustering and low rank approximation.
In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC),
pages 163–172, 2015.
[CLM+ 15] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng,
and Aaron Sidford. Uniform sampling for matrix approximation. In Proceedings of the
6th Conference on Innovations in Theoretical Computer Science (ITCS), pages 181–
190, 2015.
[CMP15]
Michael B. Cohen, Cameron Musco, and Jakub Pachocki. Online row sampling. In Proceedings of the 19th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2015.
[Coh16]
Michael B. Cohen. Nearly tight oblivious subspace embeddings by trace inequalities.
In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA), pages 278–287, 2016.
[CW09]
Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming
model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing
(STOC), pages 205–214, 2009.
[CW13]
Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression
in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory
of Computing (STOC), pages 81–90, 2013.
[DFK+ 04] Petros Drineas, Alan Frieze, Ravi Kannan, Santosh Vempala, and V Vinay. Clustering
large graphs via the singular value decomposition. Machine Learning, 56(1-3):9–33,
2004. Preliminary version in the 10th Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA).
[DKM06a] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms
for matrices II: Computing a low-rank approximation to a matrix. SIAM J. Comput.,
36(1):158–183, 2006.
[DKM06b] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms
for matrices III: Computing a compressed approximate matrix decomposition. SIAM
J. Comput., 36(1):184–206, 2006. Preliminary version in the 14th Annual ACM-SIAM
Symposium on Discrete Algorithms (SODA).
[DMM06]
Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Sampling algorithms
for ℓ2 regression and applications. In Proceedings of the 17th Annual ACM-SIAM
Symposium on Discrete Algorithms (SODA), pages 1127–1136, 2006.
[DMM08]
Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR
matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–
881, 2008.
[DRVW06] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing,
2(1):225–247, 2006. Preliminary version in the 17th Annual ACM-SIAM Symposium
on Discrete Algorithms (SODA).
[DV06]
Amit Deshpande and Santosh Vempala. Adaptive sampling and fast low-rank matrix
approximation. In Proceedings of the 10th International Workshop on Randomization
and Computation (RANDOM), pages 292–303, 2006.
[FKV04]
Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for
finding low-rank approximations. J. ACM, 51(6):1025–1041, November 2004. Preliminary version in the 39th Annual IEEE Symposium on Foundations of Computer Science
(FOCS).
[FS02]
Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. The Journal of Machine Learning Research, 2:243–264, 2002.
[FSS13]
Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data:
Constant-size coresets for k-means, PCA, and projective clustering. In Proceedings
of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages
1434–1453, 2013.
[GLPW15] Mina Ghashami, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. Frequent
Directions: Simple and deterministic matrix sketching. arXiv:1501.01711, 2015.
[GM13]
Alex Gittens and Michael Mahoney. Revisiting the Nyström method for improved
large-scale machine learning. In Proceedings of the 30th International Conference on
Machine Learning (ICML), pages 567–575, 2013.
[HI15]
John T. Holodnak and Ilse C. F. Ipsen. Randomized approximation of the Gram matrix:
Exact computation and probabilistic bounds. SIAM Journal on Matrix Analysis and
Applications, 36(1):110–137, 2015.
[HKZ12]
Daniel Hsu, Sham Kakade, and Tong Zhang. Tail inequalities for sums of random
matrices that depend on the intrinsic dimension. Electron. Commun. Probab., 17:1–13,
2012.
[HPM04]
Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing
(STOC), pages 291–300, 2004.
[HRZ+ 09] David Hall, Daniel Ramage, Jason Zaugg, Alexander Lehmann, Jonathan Merritt,
Keith Stevens, Jason Baldridge, Timothy Hunter, Dave DeCaprio, Daniel Duckworth,
Eric Christiansen, Marc Millstone, Mérő László, Alexey Noskov, Devon Bryant, Kentaroh Takagaki, Sam Halliday, Chris Stucchio, and Xiangrui Meng. ScalaNLP: Breeze.
http://www.scalanlp.org/, 2009.
[IBM14]
IBM Research Division, Skylark Team. libskylark: Sketching-based Distributed Matrix
Computations for Machine Learning. IBM Corporation, Armonk, NY, 2014.
[JK16]
Gorav Jindal and Pavel Kolev. An efficient parallel algorithm for spectral sparsification
of laplacian and SDDM matrix polynomials. arXiv:1507.07497, 2016.
[KL13]
Jonathan A. Kelner and Alex Levin. Spectral sparsification in the semi-streaming
setting. Theory of Computing Systems, 53(2):243–262, 2013. Preliminary version in the
28th International Symposium on Theoretical Aspects of Computer Science (STACS).
[KLM+ 14] Michael Kapralov, Yin Tat Lee, Cameron Musco, Christopher Musco, and Aaron Sidford. Single pass spectral sparsification in dynamic streams. In Proceedings of the
55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages
561–570, 2014.
[KLP+ 16] Rasmus Kyng, Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman.
Sparsified cholesky and multigrid solvers for connection laplacians. In Proceedings of
the 48th Annual ACM Symposium on Theory of Computing (STOC), pages 842–850,
2016.
[Lib13]
Edo Liberty. Simple and deterministic matrix sketching. In Proceedings of the 19th
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
(KDD), pages 581–588, 2013.
[Liu14]
Antoine Liutkus. Randomized SVD. http://www.mathworks.com/matlabcentral/fileexchange/478
2014. MATLAB Central File Exchange.
[LJS16]
Chengtao Li, Stefanie Jegelka, and Suvrit Sra. Fast DPP sampling for nyström with
application to kernel methods. In Proceedings of the 33rd International Conference on
Machine Learning (ICML), 2016.
[LMP13]
Mu Li, Gary L. Miller, and Richard Peng. Iterative row sampling. In Proceedings of
the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages
127–136, 2013. Preliminary version at arXiv:1211.2713v1.
[LPS15]
Yin Tat Lee, Richard Peng, and Daniel A. Spielman. Sparsified cholesky solvers for
SDD linear systems. arXiv:1506.08204, 2015.
[LS14]
Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving
linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In
Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science
(FOCS), pages 424–433, 2014.
[LS15]
Yin Tat Lee and Aaron Sidford. Efficient inverse maintenance and faster algorithms
for linear programming. In Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2015.
[LSW15]
Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong. A faster cutting plane method
and its implications for combinatorial and convex optimization. In Proceedings of the
56th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2015.
[MA77]
George A. Milliken and Fikri Akdeniz. A theorem on the difference of the generalized inverses of two nonnegative matrices. Communications in Statistics - Theory and
Methods, 6(1):73–79, 1977.
[MD05]
Michael W. Mahoney and Petros Drineas. On the Nyström method for approximating a gram matrix for improved kernel-based learning. Journal of Machine Learning
Research, 6:2153–2175, 2005. Preliminary version in the 18th Annual Conference on
Computational Learning Theory (COLT).
[MD09]
Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved
data analysis. Proceedings of the National Academy of Sciences of the USA, 106(3):697–
702, 2009.
[Min13]
Stanislav Minsker. On some extensions of Bernstein’s inequality for self-adjoint operators. arXiv:1112.5448, 2013.
[MM13]
Michael W. Mahoney and Xiangrui Meng. Low-distortion subspace embeddings in
input-sparsity time and applications to robust linear regression. In Proceedings of the
45th Annual ACM Symposium on Theory of Computing (STOC), pages 91–100, 2013.
[MM15]
Cameron Musco and Christopher Musco. Randomized block krylov methods for
stronger and faster approximate singular value decomposition. In Advances in Neural Information Processing Systems 28 (NIPS), 2015. Full version at arXiv:1504.05477.
[MM16]
Cameron Musco and Christopher Musco. Provably useful kernel matrix approximation
in linear time. arXiv:1605.07583, 2016.
[Mus15]
Cameron Musco. Dimensionality reduction for k-means clustering. Master’s thesis,
Massachusetts Institute of Technology, 2015.
[NN13]
Jelani Nelson and Huy L. Nguyen. OSNAP: Faster numerical linear algebra algorithms
via sparser subspace embeddings. In Proceedings of the 54th Annual IEEE Symposium
on Foundations of Computer Science (FOCS), pages 117–126, 2013.
[Oka10]
Daisuke Okanohara. redsvd: RandomizED SVD. https://code.google.com/p/redsvd/, 2010.
[PVG+ 11] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel,
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau,
M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research, 12:2825–2830, 2011.
[PZB+ 07]
Peristera Paschou, Elad Ziv, Esteban G. Burchard, Shweta Choudhry, William
Rodriguez-Cintron, Michael W. Mahoney, and Petros Drineas. PCA-correlated SNPs
for structure identification in worldwide human populations. PLoS Genet, 3(9):1672–
1686, 2007.
[RR07]
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In
Advances in Neural Information Processing Systems 20 (NIPS), pages 1177–1184, 2007.
[Sar06]
Támas Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 143–152, 2006.
[SS11]
Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances.
SIAM Journal on Computing, 40(6):1913–1926, 2011. Preliminary version in the 40th
Annual ACM Symposium on Theory of Computing (STOC).
[Str14]
Martin Tobias Strauch. Column subset selection with applications to neuroimaging
data. PhD thesis, Universität Konstanz, 2014.
[Tro15]
Joel A. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1–230, 2015. Preliminary version at
arXiv:1501.01571.
[VM15]
Sergey Voronin and Per-Gunnar Martinsson. RSVDPACK: Subroutines for computing
partial singular value decompositions via randomized sampling on single core, multi
core, and GPU architectures. arXiv:1502.05366, 2015.
[WS01]
Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed
up kernel machines. In Advances in Neural Information Processing Systems 13 (NIPS),
pages 682–688, 2001.
A Trace Bound for Ridge Leverage Score Sampling
Lemma 20. For i ∈ {1, . . . , d}, let τ̃_i ≥ τ̄_i(A) be an overestimate for the ith ridge leverage score.
Let p_i = τ̃_i / ∑_i τ̃_i. Let t = (c log(k/δ)/ǫ^2) · ∑_i τ̃_i, for some sufficiently large constant c. Construct C by
sampling t columns of A, each set to (1/√(t p_i)) a_i with probability p_i. Let m be the index of the smallest
singular value with σ_m^2 ≥ ‖A_{\k}‖_F^2/k. With probability 1 − δ, C satisfies:

    | tr(A_{\m} A_{\m}^T) − tr(U_{\m} U_{\m}^T CC^T U_{\m} U_{\m}^T) | ≤ ǫ‖A_{\k}‖_F^2.     (38)
Proof. Letting P_{\m} = U_{\m} U_{\m}^T, we can rewrite (38) as:

    | ‖P_{\m} C‖_F^2 − ‖A_{\m}‖_F^2 | ≤ ǫ‖A_{\k}‖_F^2.

We can write ‖P_{\m} C‖_F^2 as a sum over column norms:

    ‖P_{\m} C‖_F^2 = ∑_{j=1}^t ‖P_{\m} c_j‖_2^2.

Now, for some i ∈ {1, . . . , d} and recalling our definition Σ̄_{i,i}^2 = σ_i^2(A) + ‖A_{\k}‖_F^2/k, we have:

    ‖P_{\m} c_i‖_2^2 = ‖P_{\m} a_i‖_2^2 / (t p_i)
                    ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / τ̄_i(A)
                    = (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (a_i^T U Σ̄^{-2} U^T a_i)
                    ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (a_i^T P_{\m} U_{\m} Σ̄^{-2} U_{\m}^T P_{\m} a_i)
                    ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (‖P_{\m} a_i‖_2^2 · k/(2‖A_{\k}‖_F^2))
                    ≤ (2ǫ^2/(c log(k/δ))) · ‖A_{\k}‖_F^2,

where the second to last inequality follows from the fact that Σ̄_{i,i}^2 = σ_i^2(A) + ‖A_{\k}‖_F^2/k ≤ 2‖A_{\k}‖_F^2/k for
i ≥ m. Therefore, U_{\m} Σ̄^{-2} U_{\m}^T ⪰ (k/(2‖A_{\k}‖_F^2)) P_{\m}.
So, (c log(k/δ)/(2ǫ^2 ‖A_{\k}‖_F^2)) · ‖P_{\m} c_i‖_2^2 ∈ [0, 1]. We have E[∑_{j=1}^t ‖P_{\m} c_j‖_2^2] = ‖A_{\m}‖_F^2, so by a Chernoff
bound:

    P[ ‖P_{\m} C‖_F^2 ≥ ‖A_{\m}‖_F^2 + ǫ‖A_{\k}‖_F^2 ]
        = P[ (c log(k/δ)/(2ǫ^2 ‖A_{\k}‖_F^2)) ∑_{j=1}^t ‖P_{\m} c_j‖_2^2 ≥ (1 + ǫ‖A_{\k}‖_F^2/‖A_{\m}‖_F^2) · c log(k/δ)‖A_{\m}‖_F^2/(2ǫ^2 ‖A_{\k}‖_F^2) ]
        ≤ e^{−c log(k/δ)/4} ≤ δ/2,

if we set c sufficiently large. In the second to last step we use the fact that ‖A_{\k}‖_F^2/‖A_{\m}‖_F^2 ≥ 1/2 by the
definition of m. We can similarly prove that P[ ‖P_{\m} C‖_F^2 ≤ ‖A_{\m}‖_F^2 − ǫ‖A_{\k}‖_F^2 ] ≤ δ/2. Union
bounding gives the result.
B Independent Sampling Bounds
In this section we give analogues of Theorem 5 and Lemma 20 when columns are sampled independently using their ridge leverage scores rather than sampled with replacement.
Lemma 21. For i ∈ {1, . . . , d}, given τ̃_i ≥ τ̄_i(A) for all i, let p_i = min{τ̃_i · c log(k/δ)/ǫ^2, 1} for
some sufficiently large constant c. Construct C by independently sampling each column a_i from
A with probability p_i and scaling selected columns by 1/√p_i. With probability 1 − δ, C has
O(log(k/δ)/ǫ^2 · ∑_i τ̃_i) columns and satisfies:

    (1 − ǫ)CC^T − (ǫ/k)‖A − A_k‖_F^2 I_{n×n} ⪯ AA^T ⪯ (1 + ǫ)CC^T + (ǫ/k)‖A − A_k‖_F^2 I_{n×n}.     (7)
Proof. Again we rewrite the ridge leverage score definition using A's singular value decomposition:

    τ̄_i(A) = a_i^T (UΣ^2 U^T + (‖A_{\k}‖_F^2/k) UU^T)^+ a_i = a_i^T (UΣ̄^2 U^T)^+ a_i = a_i^T U Σ̄^{-2} U^T a_i,

where Σ̄_{i,i}^2 = σ_i^2(A) + ‖A_{\k}‖_F^2/k. For each i ∈ 1, . . . , d define the matrix valued random variable:

    X_i = { (1/p_i − 1) Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-1}   with probability p_i,
          { −Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-1}              with probability (1 − p_i).

Let Y = ∑ X_i. We have E Y = 0. Furthermore, CC^T = UΣ̄ Y Σ̄U^T + AA^T. Showing ‖Y‖_2 ≤ ǫ
gives −ǫI ⪯ Y ⪯ ǫI, and since UΣ̄^2 U^T ⪯ AA^T + (‖A_{\k}‖_F^2/k) I, this would give:

    (1 − ǫ)AA^T − (ǫ‖A_{\k}‖_F^2/k) I ⪯ CC^T ⪯ (1 + ǫ)AA^T + (ǫ‖A_{\k}‖_F^2/k) I.

After rearranging and adjusting constants on ǫ, this statement is equivalent to (7).
To prove that ‖Y‖_2 ≤ ǫ we use the same stable rank matrix Bernstein inequality used for our
with replacement results [Tro15]. If p_i = 1 (i.e. τ̃_i · c log(k/δ)/ǫ^2 ≥ 1) then X_i = 0 so ‖X_i‖_2 = 0.
Otherwise, we use the fact that (1/τ̄_i(A)) a_i a_i^T ⪯ AA^T + (‖A_{\k}‖_F^2/k) I, which lets us bound:

    (1/τ̃_i) · Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-1} ⪯ Σ̄^{-1} U^T (AA^T + (‖A_{\k}‖_F^2/k) I) U Σ̄^{-1} = I.

So we have X_i ⪯ (1/p_i) Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-1} ⪯ (ǫ^2/(c log(k/δ))) I and hence ‖X_i‖_2 ≤ ǫ^2/(c log(k/δ)).
Next we bound the variance of Y.

    E(Y^2) = ∑_i E(X_i^2) = ∑_i [ p_i (1/p_i − 1)^2 + (1 − p_i) ] · Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-2} U^T a_i a_i^T U Σ̄^{-1}
           ⪯ ∑_i (1/p_i) · τ̄_i(A) · Σ̄^{-1} U^T a_i a_i^T U Σ̄^{-1}
           ⪯ (ǫ^2/(c log(k/δ))) Σ̄^{-1} U^T AA^T U Σ̄^{-1}
           = (ǫ^2/(c log(k/δ))) Σ^2 Σ̄^{-2} ⪯ (ǫ^2/(c log(k/δ))) D,     (39)

where again we set D_{i,i} = 1 for i ∈ 1, . . . , k and D_{i,i} = σ_i^2/(σ_i^2 + ‖A_{\k}‖_F^2/k) for all i ∈ k + 1, . . . , n. By the
stable rank matrix Bernstein inequality given in Theorem 7.3.1 of [Tro15], for ǫ < 1,

    P[‖Y‖ ≥ ǫ] ≤ (4 tr(D)/‖D‖_2) · exp( −(ǫ^2/2) / ( (ǫ^2/(c log(k/δ))) (‖D‖_2 + ǫ/3) ) ).     (40)

Clearly ‖D‖_2 = 1. Furthermore,

    tr(D) = k + ∑_{i=k+1}^d σ_i^2/(σ_i^2 + ‖A_{\k}‖_F^2/k) ≤ k + ∑_{i=k+1}^d σ_i^2/(‖A_{\k}‖_F^2/k) = k + (∑_{i=k+1}^d σ_i^2)/(‖A_{\k}‖_F^2/k) ≤ k + k.

Plugging into (40), we see that

    P[‖Y‖ ≥ ǫ] ≤ 8k e^{−c log(k/δ)/2} ≤ δ/2,

if we choose the constant c large enough. So we have established (7).
All that remains to note is that the expected number of columns in C is at most (c log(k/δ)/ǫ^2) · ∑_{i=1}^d τ̃_i.
Accordingly, C has at most O(log(k/δ)/ǫ^2 · ∑_i τ̃_i) columns with probability > 1 − δ/2 by a
standard Chernoff bound. Union bounding over failure probabilities gives the lemma.
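The sampling procedure analyzed in Lemma 21 is straightforward to express in code. The sketch below covers the construction only (independent Bernoulli selection plus 1/√p_i rescaling), with the score overestimates supplied by the caller and the constant c left as a parameter; it is an illustration, not the analyzed algorithm with its specific constants.

    import numpy as np

    def independent_ridge_sample(A, tau_tilde, k, eps, delta, c=10.0, rng=None):
        """Construct C as in Lemma 21: keep column i with probability
        p_i = min(tau_tilde_i * c * log(k/delta) / eps^2, 1), rescaled by 1/sqrt(p_i)."""
        rng = np.random.default_rng() if rng is None else rng
        p = np.minimum(1.0, np.asarray(tau_tilde) * c * np.log(max(k, 2) / delta) / eps ** 2)
        keep = rng.random(A.shape[1]) < p
        return A[:, keep] / np.sqrt(p[keep])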
Lemma 22. For i ∈ {1, . . . , d}, given τ̃_i ≥ τ̄_i(A) for all i, let p_i = min{τ̃_i · c log(k/δ)/ǫ^2, 1} for
some sufficiently large constant c. Construct C by independently sampling each column a_i from
A with probability p_i and scaling selected columns by 1/√p_i. Let m be the index of the smallest
singular value with σ_m^2 ≥ ‖A_{\k}‖_F^2/k. With probability 1 − δ, C satisfies:

    | tr(A_{\m} A_{\m}^T) − tr(U_{\m} U_{\m}^T CC^T U_{\m} U_{\m}^T) | ≤ ǫ‖A_{\k}‖_F^2.     (41)

Proof. We need to show tr(A_{\m} A_{\m}^T) − tr(U_{\m} U_{\m}^T CC^T U_{\m} U_{\m}^T) ≥ −ǫ‖A_{\m}‖_F^2. Letting P_{\m} =
U_{\m} U_{\m}^T, we can rewrite this as:

    ‖P_{\m} C‖_F^2 − ‖A_{\m}‖_F^2 ≤ ǫ‖A_{\m}‖_F^2.

We can write ‖P_{\m} C‖_F^2 as a sum over column norms:

    ‖P_{\m} C‖_F^2 = ∑_{i=1}^d I_i (1/p_i) ‖P_{\m} a_i‖_2^2,

where I_i is an indicator random variable equal to 1 with probability p_i and 0 otherwise.
We have:

    (1/p_i)‖P_{\m} a_i‖_2^2 = (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / τ̃_i
                           ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (a_i^T U Σ̄^{-2} U^T a_i)
                           ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (a_i^T P_{\m} U_{\m} Σ̄^{-2} U_{\m}^T P_{\m} a_i)
                           ≤ (ǫ^2/(c log(k/δ))) · ‖P_{\m} a_i‖_2^2 / (‖P_{\m} a_i‖_2^2 · k/(2‖A_{\k}‖_F^2))
                           ≤ (2ǫ^2/(c log(k/δ))) · ‖A_{\k}‖_F^2,

where the second to last inequality follows from the fact that Σ̄_{i,i}^2 = σ_i^2(A) + ‖A_{\k}‖_F^2/k ≤ 2‖A_{\k}‖_F^2/k for
i ≥ m. Therefore, U_{\m} Σ̄^{-2} U_{\m}^T ⪰ (k/(2‖A_{\k}‖_F^2)) P_{\m}.
So (c log(k/δ)/(2ǫ^2 ‖A_{\m}‖_F^2)) · (1/p_i)‖P_{\m} a_i‖_2^2 ∈ [0, 1] and by a Chernoff bound we have:

    P[ ‖P_{\m} C‖_F^2 ≥ (1 + ǫ)‖A_{\m}‖_F^2 ]
        = P[ (c log(k/δ)/(2ǫ^2 ‖A_{\m}‖_F^2)) ∑_{i=1}^d I_i (1/p_i) ‖P_{\m} a_i‖_2^2 ≥ (1 + ǫ) · c log(k/δ)/(2ǫ^2) ]
        ≤ e^{−c log(k/δ)/4} ≤ δ/2,

if we set c sufficiently large.
Acylindrical group actions on quasi-trees
arXiv:1602.03941v3 [] 26 Mar 2016
S. Balasubramanya
Abstract
A group G is acylindrically hyperbolic if it admits a non-elementary acylindrical
action on a hyperbolic space. We prove that every acylindrically hyperbolic group G
has a generating set X such that the corresponding Cayley graph Γ is a (non-elementary)
quasi-tree and the action of G on Γ is acylindrical. Our proof utilizes the notions of
hyperbolically embedded subgroups and projection complexes. As an application, we
obtain some new results about hyperbolically embedded subgroups and quasi-convex
subgroups of acylindrically hyperbolic groups.
1 Introduction
Recall that an isometric action of a group G on a metric space (S, d) is acylindrical if for
every ε > 0 there exist R, N > 0 such that for every two points x, y with d(x, y) ≥ R,
there are at most N elements g ∈ G satisfying
d(x, gx) ≤ ε and d(y, gy) ≤ ε.
Obvious examples are provided by geometric (i.e., proper and cobounded) actions; note,
however, that acylindricity is a much weaker condition.
A group G is called acylindrically hyperbolic if it admits a non-elementary acylindrical action on a hyperbolic space. Over the last few years, the class of acylindrically hyperbolic groups has received considerable attention. It is broad enough to include many
examples of interest, e.g., non-elementary hyperbolic and relatively hyperbolic groups,
all but finitely many mapping class groups of punctured closed surfaces, Out(Fn ) for
n ≥ 2, most 3-manifold groups, and finitely presented groups of deficiency at least 2.
On the other hand, the existence of a non-elementary acylindrical action on a hyperbolic space is a rather strong assumption, which allows one to prove non-trivial results.
In particular, acylindrically hyperbolic groups share many interesting properties with
non-elementary hyperbolic and relatively hyperbolic groups. For details we refer to
[5, 10, 11, 12] and references therein.
The main goal of this paper is to answer the following.
Question 1.1. Which groups admit non-elementary cobounded acylindrical actions on
quasi-trees?
In this paper, by a quasi-tree we mean a connected graph which is quasi-isometric to
a tree. Quasi-trees form a very particular subclass of the class of all hyperbolic spaces.
From the asymptotic point of view, quasi-trees are exactly “1-dimensional hyperbolic
spaces”.
The motivation behind our question comes from the following observation. If instead
of cobounded acylindrical actions we consider cobounded proper (i.e., geometric) ones,
then there is a crucial difference between the groups acting on hyperbolic spaces and
quasi-trees. Indeed a group G acts geometrically on a hyperbolic space if and only if G is
a hyperbolic group. On the other hand, Stallings' theorem on groups with infinitely many
ends and Dunwoody's accessibility theorem imply that groups admitting geometric
actions on quasi-trees are exactly virtually free groups. Yet another related observation
is that acylindrical actions on unbounded locally finite graphs are necessarily proper.
Thus if we restrict to quasi-trees of bounded valence in Question 1.1, we again obtain the
class of virtually free groups. Other known examples of groups having non-elementary,
acylindrical and cobounded actions on quasi-trees include groups associated with special
cube complexes and right-angled Artin groups (see [1], [6], [8]).
Thus one could expect that the answer to Question 1.1 would produce a proper
subclass of the class of all acylindrically hyperbolic groups, which generalizes virtually
free groups in the same sense as acylindrically hyperbolic groups generalize hyperbolic
groups. Our main result shows that this does not happen.
Theorem 1.2. Every acylindrically hyperbolic group admits a non-elementary
cobounded acylindrical action on a quasi-tree.
In other words, being acylindrically hyperbolic and admitting a non-elementary acylindrical action on a quasi-tree are equivalent. Although this result does
not produce any new class of groups, it can be useful in the study of acylindrically hyperbolic groups and their subgroups. In this paper we concentrate on proving Theorem
1.2 and leave applications for future papers to explore (for some applications, see [10]).
It was known before that every acylindrically hyperbolic group admits a nonelementary cobounded action on a quasi-tree satisfying the so-called weak proper discontinuity property, which is weaker than acylindricity. Such a quasi-tree can be produced
by using projection complexes introduced by Bestvina-Bromberg-Fujiwara in [2]. To
the best of our knowledge, whether the corresponding action is acylindrical is an open
question. The main idea of the proof of Theorem 1.2 is to combine the Bestvina-Bromberg-Fujiwara approach with an ‘acylindrification’ construction from [11] in order
to make the action acylindrical. An essential role in this process is played by the notion
of a hyperbolically embedded subgroup introduced in [5] - this fact is of independent
interest since it provides a new setting for the application of the Bestvina-Bromberg-Fujiwara construction.
The above mentioned construction has been applied in the setting of geometrically
separated subgroups (see [5]), but hyperbolically embedded subgroups do not necessarily
satisfy this condition. Nevertheless, it is possible to employ them in this construction,
possibly with interesting applications. In fact, we prove much stronger results in terms
of hyperbolically embedded subgroups (see Theorem 3.1) of which Theorem 1.2 is an
easy consequence, and derive an application in this paper which is stated below (see
Corollary 3.23).
Corollary 1.3. Let G be a group. If H ≤ K ≤ G , H is countable and H is hyperbolically embedded in G, then H is hyperbolically embedded in K.
We would like to note that the above result continues to hold even when we have
a finite collection {H1 , H2 , ..., Hn } of hyperbolically embedded subgroups in G such
that Hi ≤ K for all i = 1, 2, ..., n. Interestingly, A.Sisto obtains a similar result in [14],
2
Corollary 6.10. His result does not require H to be countable, but under the assumption
that H ∩ K is a virtual retract of K, it states that H ∩ K ,→h K. Although similar,
these two theorems are distinct in the sense that neither follows from the other.
Another application of Theorem 3.1 is to the case of finitely generated subgroups,
as stated below (see Corollary 3.26).
Corollary 1.4. Let H be a finitely generated subgroup of an acylindrically hyperbolic
group G. Then there exists a subset X ⊂ G such that
(a) Γ(G, X) is hyperbolic, non-elementary and acylindrical
(b) H is quasi-convex in Γ(G, X)
The above result indicates that in order to develop a theory of quasi-convex subgroups in acylindrically hyperbolic groups, the notion of quasi-convexity is not sufficient,
i.e., a stronger set of conditions is necessary in order to prove results similar to those
known for quasi-convex subgroups in hyperbolic groups. For example, using Rips’ construction from [13] and the above corollary, one can easily construct an example of an
infinite, infinite index, normal subgroup in an acylindrically hyperbolic group, which is
quasi-convex with respect to some non-elementary acylindrical action.
Acknowledgements: My heartfelt gratitude to my advisor Denis Osin for his guidance and support, and to Jason Behrstock and Yago Antolin Pichel for their remarks.
My sincere thanks to Bryan Jacobson for his thorough proof-reading and comments on
this paper.
2 Preliminaries
We recall some definitions and theorems which we will need to refer to.
2.1 Relative Metrics on subgroups
Definition 2.1 (Relative metric). Let G be a group and {Hλ }λ∈Λ a fixed collection of
subgroups of G. Let X ⊂ G be such that G is generated by X along with the union of all {Hλ }λ∈Λ . Let H = ⊔λ∈Λ Hλ . We denote the corresponding Cayley graph of G (whose
edges are labeled by elements of X t H) by Γ(G, X t H).
Remark 2.2. It is important that the union in the definition above is disjoint. This
disjoint union leads to the following observation : for every h ∈ Hi ∩ Hj , the alphabet
H will have two letters representing h in G, one from Hi and another from Hj . It may
also be the case that a letter from H and a letter from X represent the same element of
the group G. In this situation, the corresponding Cayley graph Γ(G, X t H) has bigons
(or multiple edges in general) between the identity and the element, one corresponding
to each of these letters.
We think of Γ(Hλ , Hλ ) as a complete subgraph in Γ(G, X t H). A path p in Γ(G, X t H) is said to be λ-admissible if it contains no edges of the subgraph Γ(Hλ , Hλ ). In other words, the path p does not travel through Hλ in the Cayley graph. Using this notion, we can define a metric, known as the relative metric d̂λ : Hλ × Hλ → [0, +∞], by setting d̂λ (h, k) for h, k ∈ Hλ to be the length of the shortest λ-admissible path in Γ(G, X t H) that connects h to k. If no such path exists, we define d̂λ (h, k) = +∞. It is easy to check that d̂λ satisfies all the conditions to be a metric.
Definition 2.3. Let q be a path in the Cayley graph Γ(G, X t H) and p be a nontrivial subpath of q. p is said to be an Hλ -subpath if the label of p (denoted Lab(p)) is a
word in the alphabet Hλ . Such a subpath is further called an Hλ -component if it is not
contained in a longer Hλ -subpath of q. If q is a loop, we must also have that p is not
contained in a longer Hλ -subpath of any cyclic shift of q. We refer to an Hλ -component
of q (for some λ ∈ Λ) simply by calling it a component of q. We note that on a geodesic,
Hλ -components must be single Hλ -edges.
Let p1 , p2 be two Hλ components of a path q for some λ ∈ Λ. p1 and p2 are said
to be connected if there exists a path p in Γ(G, X t H) such that Lab(p) is a word
consisting only of letters from Hλ , and p connects some vertex of p1 to some vertex of
p2 . In algebraic terms, this means that all vertices of p1 and p2 belong to the same
(left) coset of Hλ . We refer to a component of a path q as isolated if it is not connected
to any other component of q.
If p is a path, we denote its initial point by p− and its terminating point by p+ .
Lemma 2.4 ([5], Proposition 4.13). Let G be a group and {Hλ }λ∈Λ a fixed collection
of subgroups in G. Let X ⊂ G such that G is generated by X together with the union
of all {Hλ }λ∈Λ . Then there exists a constant C > 0 such that for any n-gon p with
geodesic sides in Γ(G, X t H), any λ ∈ Λ, and any isolated Hλ component a of p,
d̂λ (a− , a+ ) ≤ Cn.
2.2 Hyperbolically embedded subgroups
Hyperbolically embedded subgroups will be our main tool in constructing the quasi-tree.
The notion has been taken from [5]. We recall the definition here.
Definition 2.5 (Hyperbolically embedded subgroups). Let G be a group. Let X be a
(not necessarily finite) subset of G and let {Hλ }λ∈Λ be a collection of subgroups of G.
We say that {Hλ }λ∈Λ is hyperbolically embedded in G with respect to X (denoted by
{Hλ }λ∈Λ ,→h (G, X) ) if the following conditions hold :
(a) The group G is generated by X together with the union of all {Hλ }λ∈Λ .
(b) The Cayley graph Γ(G, X t H) is hyperbolic, where H = ⊔λ∈Λ Hλ .
(c) For every λ ∈ Λ, the metric space (Hλ , d̂λ ) is proper, i.e., every ball of finite radius
has finite cardinality.
Further we say that {Hλ }λ∈Λ is hyperbolically embedded in G (denoted by
{Hλ }λ∈Λ ,→h G) if {Hλ }λ∈Λ ,→h (G, X) for some X ⊆ G. The set X is called a
relative generating set.
Since the notion of a hyperbolically embedded subgroup plays a crucial role in this
paper, we include two examples borrowed from [5].
Example 2.6. Let G = H × Z and Z = ⟨x⟩. Let X = {x}. Then Γ(G, X t H) is
quasi-isometric to a line and is hence hyperbolic. The corresponding relative metric
satisfies d̂(h1 , h2 ) ≤ 3 for every h1 , h2 ∈ H, which is easy to see from the Cayley graph (see Fig. 1). Indeed, if ΓH denotes the Cayley graph Γ(H, H), then in its shifted copy xΓH there is an edge e connecting xh1 to xh2 (labeled by h1 −1 h2 ∈ H). There is thus an admissible path of length 3 connecting h1 to h2 . We conclude that if H is infinite, then H is not hyperbolically embedded in (G, X), since the relative metric will not be proper. In this example, one can also note that the admissible path from h1 to h2 contains an H-subpath, namely the edge e, which is also an H-component of this path.
Figure 1: H × Z
Figure 2: H ∗ Z
Example 2.7. Let G = H ∗ Z and Z = ⟨x⟩. As in the previous example, let X = {x}. In this case Γ(G, X t H) is quasi-isometric to a tree (see Fig. 2) and it is easy to see that d̂(h1 , h2 ) = ∞ unless h1 = h2 . This means that every ball of finite radius in the relative metric has cardinality 1. We can thus conclude that H ,→h (G, X).
2.3 A slight modification to the relative metric
The aim of this section is to modify the relative metric on countable subgroups that
are hyperbolically embedded, so that the resulting metric takes values only in R, i.e., is
finite valued. This will be of importance in section 3. The main result of this section is
the following.
Theorem 2.8. Let G be a group. Let H < G be countable, such that H ,→h G. Then there exists a metric d̃ : H × H → R such that
(a) d̃ ≤ d̂
(b) d̃ is proper, i.e., every ball of finite radius has finitely many elements.
Proof. There exists a collection of finite, symmetric (closed under inverses) subsets {Fi } of H such that H = ∪i≥1 Fi and {1} ⊆ F1 ⊆ F2 ⊆ ...
Let d̂ be the relative metric on H. Let H0 = {h ∈ H | d̂(1, h) < ∞}. Define a function w : H → N as
w(h) = d̂(1, h) if h ∈ H0 , and w(h) = min{i | h ∈ Fi } otherwise.
Since the Fi 's are symmetric, w(h) = w(h−1 ) for all h ∈ H. Define a function l on H as follows: for every word u = x1 x2 ...xk in the elements of H, set
l(u) = w(x1 ) + w(x2 ) + ... + w(xk ).
Set a length function on H as
|g|w = min{l(u) | u is a word in the elements of H that represents g},
for each g in H. We can now define a metric dw : H × H → N as
dw (g, h) = |g −1 h|w .
It is easy to check that dw is a (finite valued) well-defined metric, which satisfies dw (1, h) ≤ w(h) for all h ∈ H.
It remains to show that dw is proper. Let N ∈ N. Suppose h ∈ H is such that w(h) ≤ N . If h ∈ H0 , then d̂(1, h) ≤ N , which implies that there are finitely many choices for h, since d̂ is proper. If h ∉ H0 , then h ∈ Fi for some minimal i, and this minimal i is at most N ; since each Fi is a finite set, there are again finitely many choices for h. Thus |{h ∈ H | w(h) ≤ N }| < ∞ for all N ∈ N. This implies dw is proper.
Indeed, if y ≠ 1 is such that |y|w ≤ n, then there exists a word u, written without the identity element (which has weight zero), representing y in the alphabet H such that u = x1 x2 ...xr and w(x1 ) + ... + w(xr ) ≤ n. Since w(xi ) ≥ 1 for every xi ≠ 1, we have r ≤ n. Further, w(xi ) ≤ n for all i. Thus xi ∈ {x ∈ H | w(x) ≤ n} for all i. So there are only finitely many choices for each xi , which implies there are finitely many choices for y.
By definition, dw ≤ d̂. So we can set d̃ = dw .
2.4 Acylindrically Hyperbolic Groups
In the following theorem ∂ represents the Gromov boundary.
Theorem 2.9. For any group G, the following are equivalent.
(AH1 ) There exists a generating set X of G such that the corresponding Cayley graph
Γ(G,X) is hyperbolic, |∂Γ(G, X)| ≥ 2, and the natural action of G on Γ(G,X) is
acylindrical.
(AH2 ) G admits a non-elementary acylindrical action on a hyperbolic space.
(AH3 ) G contains a proper infinite hyperbolically embedded subgroup.
It follows from the definitions that (AH1 ) ⇒ (AH2 ). The implication (AH2 ) ⇒
(AH3 ) is non-trivial and was proved in [5]. The implication (AH3 ) ⇒ (AH1 ) was
proved in [11].
Definition 2.10. We call a group G acylindrically hyperbolic if it satisfies any of the
equivalent conditions (AH1 )-(AH3 ) from Theorem 2.9.
Lemma 2.11 ([5], Corollary 4.27). Let G be a group, {Hλ }λ∈Λ a collection of subgroups
of G, and X1 and X2 be relative generating sets. Suppose that |X1 ∆X2 | < ∞. Then
{Hλ }λ∈Λ ,→h (G, X1 ) if and only if {Hλ }λ∈Λ ,→h (G, X2 ).
Theorem 2.12 ([11], Theorem 5.4). Let G be a group, {Hλ }λ∈Λ a finite collection of
subgroups of G, X a subset of G. Suppose that {Hλ }λ∈Λ ,→h (G, X). Then there exists
Y ⊂ G such that the following conditions hold.
(a) X ⊂ Y
(b) {Hλ }λ∈Λ ,→h (G, Y ). In particular, the Cayley graph Γ(G, Y t H) is hyperbolic.
(c) The action of G on Γ(G, Y t H) is acylindrical.
Definition 2.13. Let (X, dX ) and (Y, dY ) be two metric spaces. A map φ : X → Y is
said to be a (λ,C)-quasi-isometry if there exist constants λ > 1, C > 0 such that
(a) (1/λ) dX (a, b) − C ≤ dY (φ(a), φ(b)) ≤ λ dX (a, b) + C, for all a, b ∈ X, and
(b) Y is contained in the C-neighborhood of φ(X).
The spaces X and Y are said to be quasi-isometric if such a map φ : X → Y exists.
It is easy to check that being quasi-isometric is an equivalence relation. If the map φ
satisfies only condition (a), then it is said to be a (λ,C)-quasi-isometric embedding.
Definition 2.14. A graph Γ with the combinatorial metric dΓ is said to be a quasi-tree
if it is quasi-isometric to a tree T .
Definition 2.15. A quasi-geodesic is a quasi-isometric embedding of an interval
(bounded or unbounded) I ⊆ R into a metric space X. Note that geodesics are (1, 0)-quasi-geodesics. By slight abuse of notation, we may identify the map that defines a
quasi-geodesic with its image in the space.
Theorem 2.16 ([9], Theorem 4.6, Bottleneck property). Let Y be a geodesic metric
space. The following are equivalent.
(a) Y is quasi-isometric to some simplicial tree Γ
(b) There is some µ > 0 so that for all x, y ∈ Y, there is a midpoint m = m(x, y) with
d(x, m) = d(y, m) = (1/2) d(x, y) and the property that any path from x to y must pass
within less than µ of the point m.
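On a finite graph, the bottleneck property can be tested directly, which gives a concrete feel for why trees pass and long cycles fail. The sketch below is a crude finite check (not from [9] or from this paper): it picks a middle vertex of one shortest path as a stand-in for the midpoint m and verifies that no path between the endpoints avoids the open µ-ball around it. The graphs and parameter values are hypothetical toy data.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """BFS distances from s in an unweighted graph given as an adjacency dict."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def connected_avoiding(adj, x, y, banned):
    """Is there a path from x to y avoiding the vertex set `banned`?"""
    if x in banned or y in banned:
        return False
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        if u == y:
            return True
        for v in adj[u]:
            if v not in seen and v not in banned:
                seen.add(v)
                stack.append(v)
    return False

def satisfies_bottleneck(adj, mu):
    """Crude check of Theorem 2.16 (b): for every pair x, y, take a middle vertex m of one
    shortest path and require that every x-y path meets the open mu-ball around m."""
    for x, y in combinations(adj, 2):
        dist_x = bfs_dist(adj, x)
        path, u = [y], y                 # walk one shortest path back from y to x
        while u != x:
            u = next(v for v in adj[u] if dist_x.get(v, -1) == dist_x[u] - 1)
            path.append(u)
        m = path[len(path) // 2]
        ball = {v for v, d in bfs_dist(adj, m).items() if d < mu}
        if connected_avoiding(adj, x, y, ball):
            return False
    return True

# Hypothetical examples: a path graph (a tree) versus a 12-cycle.
path_graph = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 8] for i in range(9)}
cycle = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
print(satisfies_bottleneck(path_graph, mu=1))   # True
print(satisfies_bottleneck(cycle, mu=1))        # False: the second arc avoids the midpoint
```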
We remark that if m is replaced with any point p on a geodesic between x and y,
then the property that any path from x to y passes within less than µ of the point p still
follows from (a), as proved below in Lemma 2.18. We will need the following lemma.
Lemma 2.17 ([4], Proposition 3.1). For all λ ≥ 1, C ≥ 0, δ ≥ 0, there exists an R = R(δ, λ, C) such that if X is a δ-hyperbolic space, γ is a (λ, C)-quasi-geodesic in X, and γ′ is a geodesic segment with the same end points, then γ′ and γ are at Hausdorff distance less than R from each other.
Lemma 2.18. If Y is a quasi-tree, then there exists µ > 0 such that for any point z
on a geodesic connecting two points, any other path between the same end points passes
within µ of z.
Proof. Let T be a tree and q : Y → T a (λ, C)-quasi-isometry. Let dY and dT denote
the metrics in the spaces Y and T respectively. Note that since T is 0-hyperbolic, Y is
δ-hyperbolic for some δ.
Let x, y be two points in Y , joined by a geodesic γ. Let z be any point of γ, and
let α be another path from x to y. Let V denote the vertex set of α, ordered according
to the geodesic γ. Take its image q(V ) and connect consecutive points by geodesics (of
length at most λ + C) to get a path β in T from q(x) to q(y). Then the unique geodesic σ in T must be a subset of β. Since q(V ) ⊂ q ◦ α, we get that any point of σ is at distance at most λ + C from q ◦ α. Also, q ◦ γ is a (λ, C)-quasi-isometric embedding of an interval, and hence a (λ, C)-quasi-geodesic. Thus, by Lemma 2.17, the distance from q(z) to σ is less than R = R(0, λ, C).
Let p be the point on σ closest to q(z). There is a point w ∈ Y on α such that d(q(w), p) ≤ λ + C. Since d(p, q(z)) < R, we have d(q(w), q(z)) ≤ λ + C + R. Thus d(z, w) ≤ λ² + 2λC + Rλ.
Thus α must pass within µ = λ² + 2λC + Rλ of the point z.
Figure 3: Corresponding to Lemma 2.20
2.5 A modified version of Bowditch’s lemma
In this section, Nk (x) denotes the closed k-neighborhood of a point x in a metric space.
The following theorem will be used in Section 3. Part (a) is a simplified form of a result taken from [7], which is in fact derived from a hyperbolicity criterion developed
by Bowditch in [3].
Theorem 2.19. Let Σ be a hyperbolic graph, and ∆ be a graph obtained from Σ by
adding edges.
(a) [3] Suppose there exists M > 0 such that for all vertices x, y ∈ Σ joined by an
edge in ∆ and for all geodesics p in Σ between x and y, all vertices of p lie in an
M-neighborhood of x, i.e., p ⊆ NM (x) in ∆. Then ∆ is also hyperbolic, and there
exists a constant k such that for all vertices x,y ∈ Σ, every geodesic q between x
and y in Σ lies in a k-neighborhood in ∆ of every geodesic in ∆ between x and y.
(b) If, under the assumptions of (a), we additionally assume that Σ is a quasi-tree,
then ∆ is also a quasi-tree.
Lemma 2.20. Let p,q be two paths in a metric space S between points x and y, such
that p is a geodesic and q ⊆ Nk (p). Then p ⊆ N2k (q).
Proof. Let z be any point on p. Let p1 , p2 denote the segments of the geodesic p with
end points x, z and z, y respectively.
Figure 4: Corresponding to Theorem 2.19
Define a function f : q → R as f (s) = d(s, p1 ) − d(s, p2 ). Then f is a continuous
function. Further, f (x) < 0 and f (y) > 0. By the intermediate value theorem, there
exists a point w on q such that f (w) = 0. Thus d(w, p1 ) = d(w, p2 ) (see Fig.3). Let z1
(resp. z2 ) be a point of p1 (resp. p2 ) such that d(pi , w) = d(zi , w) for i = 1, 2. Then
d(z1 , w) = d(z2 , w). By the hypothesis, d(w, p) = min{d(w, p1 ), d(w, p2 )} ≤ k. So we get that d(w, p1 ) = d(w, p2 ) ≤ k. Thus d(z1 , z2 ) ≤ 2k; since z lies on the geodesic segment between z1 and z2 , we have d(z, zi ) ≤ k for some i ∈ {1, 2}, and hence d(z, w) ≤ d(z, zi ) + d(zi , w) ≤ 2k.
Proof of Theorem 2.19. We proceed with the proof of part (b).
We prove that ∆ is a quasi-tree by verifying the bottleneck property from Theorem
2.16. Let dΣ (resp. d∆ ) denote the distance in the graph Σ (resp. ∆). Note that the
vertex sets of the two graphs are equal.
Let x, y be two vertices. Let m be the midpoint of a geodesic r in ∆ connecting
them. Let s be any path from x to y in ∆. The path s consists of edges of two types
(i) edges of the graph Σ;
(ii) edges added in transforming Σ to ∆ (marked as bold edges on Fig.4).
Let p be a geodesic in Σ between x and y. By Part (a), there exists k such that p is in
the k-neighborhood of r in ∆. Applying Lemma 2.20 , we get a point n on p such that
d∆ (m, n) ≤ 2k.
Let s′ be the path in Σ between x and y, obtained from s by replacing every edge e of type (ii) by a geodesic path t(e) in Σ between its end points (marked by dotted lines in Fig. 4). Since Σ is a quasi-tree, by Lemma 2.18 there exist µ′ > 0 and a point z on s′ such that
dΣ (z, n) ≤ µ′ .
Case 1: If z lies on an edge of s of type (i), then
d∆ (z, m) ≤ d∆ (z, n) + d∆ (n, m) ≤ dΣ (z, n) + d∆ (n, m) ≤ µ′ + 2k.
Case 2: If z lies on a path t(e) that replaced an edge e of type (ii), then by Part (a),
d∆ (e− , m) ≤ d∆ (e− , z) + d∆ (z, n) + d∆ (n, m) ≤ k + µ′ + 2k = µ′ + 3k.
Thus the bottleneck property holds for µ = µ′ + 3k > 0.
3 Proof of the main result
Our main result is the following theorem, from which Theorem 1.2 and other corollaries
stated in the introduction can be easily derived (see Section 3.5).
Theorem 3.1. Let {H1 , H2 , ..., Hn } be a finite collection of countable subgroups of a
group G such that {H1 , H2 , ..., Hn } ,→h (G, Z) for some Z ⊂ G. Let K be a subgroup
of G such that Hi ≤ K for all i. Then there exists a subset Y ⊂ K such that:
(a) {H1 , H2 , ..., Hn } ,→h (K, Y )
(b) Γ(K, Y t H) is a quasi-tree, where H = H1 t H2 t ... t Hn
(c) The action of K on Γ(K, Y t H) is acylindrical
(d) Z ∩ K ⊂ Y
3.1 Outline of the proof
Step 1: In order to prove Theorem 3.1, we first prove the following proposition. It is
distinct from Theorem 3.1 since it does not require the action of K on the Cayley graph
Γ(K, X t H) to be acylindrical.
Proposition 3.2. Let {H1 , H2 , ..., Hn } be a finite collection of countable subgroups of
a group G such that {H1 , H2 , ..., Hn } ,→h G with respect to a relative generating set Z.
Let K be a subgroup of G such that Hi ≤ K for all i. Then there exists X ⊂ K such
that
(a) {H1 , H2 , ..., Hn } ,→h (K, X)
(b) Γ(K, X t H) is a quasi-tree, where H = H1 t H2 t ... t Hn
(c) Z ∩ K ⊂ X
Step 2: Once we have proved Proposition 3.2, we will utilize an ’acylindrification’
construction from [11] to make the action acylindrical, which will prove Theorem 3.1.
The details of this step are as follows.
Proof. By Proposition 3.2, there exists X ⊆ K such that
(a) {H1 , H2 , ..., Hn } ,→h (K, X)
(b) Γ(K, X t H) is a quasi-tree
(c) Z ∩ K ⊂ X
By applying Theorem 2.12 to the above, we get that there exists Y ⊂ K such that
(a) X ⊆ Y
(b) {H1 , H2 , ..., Hn } ,→h (K, Y ). In particular, the Cayley Graph Γ(K, Y t H) is
hyperbolic
(c) The action of K on Γ(K, Y t H) is acylindrical.
From the proof of Theorem 2.12 (see [11] for details), it is easy to see that the
Cayley graph Γ(K, Y t H) is obtained from Γ(K, X t H) in a manner that satisfies the
assumptions of Theorem 2.19, with M = 1. Thus by Theorem 2.19, Γ(K, Y t H) is also
a quasi-tree. Further
K ∩ Z ⊂ X ⊂ Y.
Thus Y is the required relative generating set.
We will thus now focus on proving Proposition 3.2. In order to prove this proposition, we will use a construction introduced by Bestvina, Bromberg and Fujiwara in [2]. We
describe the construction below.
3.2 The projection complex
Definition 3.3. Let Y be a set. Suppose that for each Y ∈ Y we have a function
dπY : (Y\{Y } × Y\{Y }) → [0, ∞)
called a projection on Y , and a constant ξ > 0 that satisfy the following axioms for all
Y and all A, B, C ∈ Y\{Y } :
(A1) dπY (A, B) = dπY (B, A)
(A2) dπY (A, B) + dπY (B, C) ≥ dπY (A, C)
(A3) min {dπY (A, B), dπB (A, Y )} < ξ
(A4) #{Y | dπY (A, B) ≥ ξ} is finite
Let J be a positive constant. Then associated to this data we have the projection
complex PJ (Y), which is a graph constructed in the following manner : the set of
vertices of PJ (Y) is the set Y. To specify the set of edges, one first defines a new function
dY : (Y\{Y } × Y\{Y }) → [0, ∞), which can be thought of as a small perturbation of
dπY . The exact definition of dY can be found in [2]. An essential property of the new
function is the following inequality, which is an immediate corollary of [2], Proposition
3.2.
For every Y ∈ Y and every A, B ∈ Y\{Y }, we have
|dπY (A, B) − dY (A, B)| ≤ 2ξ.
(1)
The set of edges of the graph PJ (Y) can now be described as follows: two vertices
A, B ∈ Y are connected by an edge if and only if for every Y ∈ Y\{A, B}, dY (A, B) ≤ J.
This construction strongly depends on the constant J. Complexes corresponding to
different J are not isometric in general.
We would like to mention that if Y is endowed with an action of a group G that
preserves projections, i.e., dπg(Y ) (g(A), g(B)) = dπY (A, B), then the action of G can be
extended to an action on PJ (Y). We also mention the following proposition, which has
been proved under the assumptions of Definition 3.3.
Proposition 3.4 ([2], Theorem 3.16). For a sufficiently large J > 0, PJ (Y) is connected
and quasi-isometric to a tree.
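To make the definition concrete, the following sketch (not taken from [2] or from this paper) assembles the edge set of PJ (Y) from a table of perturbed projection distances dY (A, B): two vertices are joined exactly when no third vertex sees them as far apart. The vertex set and the distance function are hypothetical toy data.

```python
from itertools import combinations

def projection_complex_edges(vertices, d, J):
    """Edges of P_J(Y): A and B are joined iff d_Y(A, B) <= J for every Y != A, B.
    `d(Y, A, B)` returns the (perturbed) projection distance of A and B on Y."""
    edges = []
    for A, B in combinations(vertices, 2):
        if all(d(Y, A, B) <= J for Y in vertices if Y not in (A, B)):
            edges.append((A, B))
    return edges

# Hypothetical toy data: four "cosets" on a line; the projection of A and B onto Y is
# large only when Y lies strictly between them.
verts = [0, 1, 2, 3]
def d(Y, A, B):
    lo, hi = min(A, B), max(A, B)
    return 10 if lo < Y < hi else 1

print(projection_complex_edges(verts, d, J=5))
# -> [(0, 1), (1, 2), (2, 3)]: only neighbouring vertices are joined, giving a path (a tree)
```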
Definition 3.5. [Nearest point projection] In a metric space (S, d), given a set Y and
a point a ∈ S, we define the nearest point projection as
projY (a) = {y ∈ Y | d(Y, a) = d(y, a)}.
If A, Y are two sets in S, then projY (A) = ∪a∈A projY (a).
We note that in our case, since elements of Y will come from a Cayley graph, which is
a combinatorial graph, the nearest point projection will exist. This is because distances
on a combinatorial graph take discrete values in N ∪ {0}. Since this set is bounded
below, we cannot have an infinite strictly decreasing sequence of distances. This may
not be the case if we have a real tree. For example, on the real line, the point 0 has no
nearest point projection to the open interval (0, 1).
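On a combinatorial graph the nearest point projection is not only well defined but easy to compute: take BFS distances from a and keep the minimizers inside Y. A minimal sketch follows (the graph and the sets are hypothetical toy data; none of this is from the paper).

```python
from collections import deque

def graph_distances(adj, source):
    """BFS distances from `source` in an unweighted graph given as an adjacency dict."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def proj(adj, Y, a):
    """Nearest point projection of the vertex a onto the vertex set Y (Definition 3.5)."""
    dist = graph_distances(adj, a)
    dY = min(dist[y] for y in Y)
    return {y for y in Y if dist[y] == dY}

# Hypothetical toy graph: a 6-cycle; project vertex 0 onto the set {2, 3, 4}.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(proj(adj, {2, 3, 4}, 0))   # -> {2, 4}: both realize the minimal distance 2
```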
We make all geometric considerations in the Cayley graph Γ(G, Z t H). Let dZtH
denote the metric on this graph. Since {H1 , H2 , ..., Hn } ,→h G under the assumptions
of Proposition 3.2, by Remark 4.26 of [5], Hi ,→h G for all i = 1, 2, ..., n. By Theorem
2.8, we can define a finite valued, proper metric d̃i on Hi , for all i = 1, 2, ..., n, satisfying
d̃i (x, y) ≤ d̂i (x, y) for all x, y ∈ Hi and for all i = 1, 2, ..., n.    (2)
We can extend both d̂i and d̃i to all cosets gHi of Hi by setting d̃i (gx, gy) = d̃i (x, y) and d̂i (gx, gy) = d̂i (x, y) for all x, y ∈ Hi . Let diam̂ (resp. diam̃) denote the diameter of a subset of Hi or a coset of Hi with respect to the d̂i (resp. d̃i ) metric.
Let
Y = {kHi | k ∈ K, i = 1, 2, ..., n}
be the set of cosets of all Hi in K. We think of cosets of Hi as a subset of vertices of
Γ(G, Z t H).
For each Y ∈ Y, and A, B ∈ Y\{Y }, define
dπY (A, B) = diam̃(projY (A) ∪ projY (B)),    (3)
where projY (A) is defined as in Definition 3.5. The fact that (3) is well-defined will
follow from Lemma 3.6 and Lemma 3.7, which are proved below. We will also proceed
to verify the axioms (A1) − (A4) of the Bestvina-Bromberg-Fujiwara construction in
the above setting.
Lemma 3.6. For any Y ∈ Y and any x ∈ G, diam̃(projY (x)) is bounded.
Proof. By (2), it suffices to prove that diam̂(projY (x)) is bounded. Let y, y′ ∈ projY (x). Then dZtH (x, y) = dZtH (x, y′ ) = dZtH (x, Y ). Without loss of generality, x ∉ Y , else the diameter is zero.
Let Y = gHi . Let e denote the edge connecting y and y′ , labelled by an element of Hi . Let p and q denote the geodesics between the points x and y, and x and y′ , respectively (see Fig. 5).
Figure 5: The bold red edge denotes a single edge labelled by an element of H
Consider the geodesic triangle T with sides e, p, q. Since p and q are geodesics
between the point x and Y , e is an isolated component in T , i.e., e cannot be connected
to either p or q. Indeed if e is connected to, say, a component of p, then that would
imply that e+ and e− are in Y , i.e., the geodesic p passes through a point of Y before y.
But then y is not the nearest point from Y to x, which is a contradiction. By Lemma
2.4, d̂i (y, y′ ) ≤ 3C. Hence
diam̂(projY (x)) ≤ 3C.
Lemma 3.7. For every pair of distinct elements A, Y ∈ Y, diam̂(projY (A)) ≤ 4C, where C is the constant as in Lemma 2.4. As a consequence, diam̃(projY (A)) is bounded.
Proof. Let Y = gHi and A = f Hj . Let y1 , y2 ∈ projY (A). Then there exist a1 , a2 ∈ A
such that dZtH (a1 , y1 ) = dZtH (a1 , Y ) and dZtH (a2 , y2 ) = dZtH (a2 , Y ). Now y1 and
y2 are connected by a single edge e, labelled by an element of Hi , and similarly, a1 and
a2 are connected by an edge f , labelled by an element of Hj (see Fig.6). Let p and q
denote geodesics that connect y1 , a1 and y2 , a2 respectively. We note that p and/or q
may be trivial paths (consisting of a single point), but this does not alter the proof.
Consider e in the quadrilateral Q with sides p, f, q, e. As argued in the previous
lemma, since p (respectively q) is a path of minimal length between the points a1
(respectively a2 ) and Y, e cannot be connected to a component of p or q.
If i = j, then e cannot be connected to f since A ≠ Y . If i ≠ j, then obviously e and f cannot be connected. Thus e is isolated in this quadrilateral Q. By Lemma 2.4, d̂i (y1 , y2 ) ≤ 4C. Thus
diam̂(projY (A)) ≤ 4C.
Corollary 3.8. The function dπY defined by (3) is well-defined.
Proof. Since the d̃i metric takes finite values for all i = 1, 2, ..., n, using Lemma 3.7, we
have that dπY also takes only finite values.
Figure 6: Lemma 3.7
Lemma 3.9. The function dπY defined by (3) satisfies conditions (A1) and (A2) in
Definition 3.3
Proof. (A1) is obviously satisfied. For any Y ∈ Y and any A, B, C ∈ Y\{Y }, by the
triangle inequality, we have that
dπY (A, C) = diam̃(projY (A) ∪ projY (C))
≤ diam̃(projY (A) ∪ projY (B)) + diam̃(projY (B) ∪ projY (C))
= dπY (A, B) + dπY (B, C).
Thus (A2) also holds.
Lemma 3.10. The function dπY from (3) satisfies condition (A3) in Definition 3.3 for
any ξ > 14C, where C is the constant from Lemma 2.4
Proof. By (2), it suffices to prove that
min{diam̂(projY (A) ∪ projY (B)), diam̂(projB (A) ∪ projB (Y ))} < ξ.
Let A, B ∈ Y\{Y } be distinct. Let Y = gHi , A = f Hj and B = tHk . If
diam̂(projY (A) ∪ projY (B)) ≤ 14C, then we are done. So let
diam̂(projY (A) ∪ projY (B)) > 14C.    (4)
Choose a ∈ A, b ∈ B, and x, y ∈ Y such that dZtH (A, Y ) = dZtH (a, x) and dZtH (B, Y ) = dZtH (b, y). In particular,
x ∈ projY (A), y ∈ projY (B)    (5)
and b ∈ projB (Y ). Let p, q denote the geodesics connecting a, x and b, y respectively. Let h1 denote the edge connecting x and y, which is labelled by an element of Hi .
Figure 7: Condition (A3)
By (5), we have that
diam̂(projY (A) ∪ projY (B)) ≤ diam̂(projY (A)) + diam̂(projY (B)) + d̂i (x, y).
Combining this with (4) and Lemma 3.7, we get
d̂i (x, y) ≥ diam̂(projY (A) ∪ projY (B)) − diam̂(projY (A)) − diam̂(projY (B)) > 14C − 8C = 6C.
Choose any a′ ∈ A and b′ ∈ projB (a′ ), i.e., dZtH (a′ , B) = dZtH (a′ , b′ ) (see Fig. 7). (Note that if a′ = a, the following arguments still hold.) Let h2 and h3 denote the edges
Figure 8: Estimating the distance between arbitrary points b and c of projB (A) and projB (Y ), respectively
Figure 9: Condition (A4)
connecting a, a′ and b, b′ , which are labelled by elements of Hj and Hk respectively. Let r denote the geodesic connecting a′ and b′ . Consider the geodesic hexagon W with sides p, h1 , q, h3 , r, h2 . Then h1 is not isolated in W , else by Lemma 2.4, d̂i (x, y) ≤ 6C, a contradiction.
Thus h1 is connected to another Hi -component in W . Arguing as in Lemma 3.7, h1 cannot be connected to a component of p or q. Since A, B, Y are all distinct, h1 cannot be connected to h2 or h3 . So h1 must be connected to an Hi -component on the geodesic r. Let this edge be h′ with end points u and v as shown in Fig. 7. Let s denote the edge (labeled by an element of Hi ) that connects y and v. Let r′ denote the segment of r that connects v to b′ . Then r′ is also a geodesic.
Consider the quadrilateral Q with sides r′ , h3 , q, s. As argued before, h3 cannot be connected to r′ , q or s. Thus h3 is isolated in Q. By Lemma 2.4,
d̂k (b, b′ ) ≤ 4C.
Since the above argument holds for any a′ ∈ A and for b′ ∈ projB (A), we have that d̂k (b, b′ ) ≤ 4C. Using Lemma 3.7 (see Fig. 8), we get that
diam̂(projB (Y ) ∪ projB (A)) ≤ 4C + 4C = 8C < ξ.
Lemma 3.11. The function dπY defined by (3) satisfies condition (A4) in Definition
3.3, for ξ > 14C, where C is the constant from Lemma 2.4
Proof. If dπY (A, B) ≥ ξ, then by (2), diam̂(projY (A) ∪ projY (B)) ≥ dπY (A, B) ≥ ξ. Thus it suffices to prove that the number of elements Y ∈ Y satisfying
diam̂(projY (A) ∪ projY (B)) ≥ ξ    (6)
is finite. Let A, B ∈ Y, A = f Hj and B = tHk . Let Y ∈ Y\{A, B}, Y = gHi . Let a′ ∈ A, b′ ∈ projB (a′ ). Arguing as in Lemma 3.10, if Y is such that diam̂(projY (A) ∪ projY (B)) ≥ ξ, then for any a ∈ A, b ∈ B, x ∈ projY (a), y ∈ projY (b), we have that d̂i (x, y) > 6C.
Let h1 denote the edge connecting x, y, which is labelled by an element of Hi (see Fig. 9). Let h2 denote the edge connecting a, a′ , which is labelled by an element of Hj , and h3 denote the edge connecting b, b′ , which is labelled by an element of Hk . Let p be a geodesic between a, x, let q be a geodesic between b, y, and let r be a geodesic between a′ , b′ . As argued in Lemma 3.10, we can show that h1 cannot be isolated in the hexagon W with sides p, h1 , q, h2 , r, h3 and must be connected to an Hi -component of r, say the edge h′ .
We claim that the edge h′ uniquely identifies Y . Indeed, let Y ′ be a member of Y, with elements x′ , y′ connected by an edge e (labelled by an element of the corresponding subgroup). Suppose that e is connected to h′ . Then we must have that Y ′ is also a coset of Hi . But cosets of a subgroup are either disjoint or equal, so Y = Y ′ . Thus, the number of Y ∈ Y satisfying (6) is bounded by the number of distinct Hi -components of r, which is finite.
3.3 Choosing a relative generating set
We now have the necessary details to choose a relative generating set X which will satisfy
conditions (a) and (b) of Proposition 3.2. This set will later be altered slightly to obtain
another relative generating set which will satisfy all three conditions of Proposition 3.2.
We will repeat arguments similar to those from pages 60-63 of [5].
Recall that H = H1 t H2 t ... t Hn , and Z is the relative generating set such that
{H1 , H2 , ..., Hn } ,→h (G, Z). Let PJ (Y) be the projection complex corresponding to
the vertex set Y as specified in Section 3.2, where the constant J is as in Proposition
3.4, i.e., PJ (Y) is connected and a quasi-tree. Let dP denote the combinatorial metric on PJ (Y). Our definition of projections is K-equivariant and hence the action of K on
Y extends to a cobounded action of K on PJ (Y).
In what follows, by considering Hi to be vertices of the projection complex PJ (Y),
we denote by star(Hi ), the set
{kHj ∈ Y |dP (Hi , kHj ) = 1 }.
We choose the set X in the following manner. For all i = 1, 2, ..., n and each edge
e in star(Hi ) in PJ (Y) that connects Hi to kHj , choose an element xe ∈ Hi kHj such
that
dZtH (1, xe ) = dZtH (1, Hi kHj ).
We say that such an xe has type (i, j). Since Hi ≤ K for all i, xe ∈ K. We observe
the following:
(a) For each xe of type (i, j) as above, there is an edge in PJ (Y) connecting Hi and
xe Hj . Indeed if xe = h1 kh2 , for h1 ∈ Hi , h2 ∈ Hj , then
dP (Hi , xe Hj ) = dP (Hi , h1 kh2 Hj ) = dP (Hi , h1 kHj )
= dP (h1 −1 Hi , kHj ) = dP (Hi , kHj ) = 1.
(b) For each edge e connecting Hi and kHj , there is a dual edge f connecting Hj
and k −1 Hi . We will choose the elements xe and xf to be mutually inverse. In
particular, the set given by
X = {xe ≠ 1 | e ∈ star(Hi ), i = 1, 2, ..., n}    (7)
is symmetric, i.e., closed under taking inverses. Obviously, X ⊂ K.
(c) If xe ∈ X is of type (i, j), then xe is not an element of Hi or Hj . Indeed if
xe = h1 kh2 ∈ Hi for some h1 ∈ Hi and some h2 ∈ Hj , then k = hf for some
h ∈ Hi and some f ∈ Hj . Consequently
dZtH (1, Hi kHj ) = dZtH (1, Hi Hj ) = 0 = dZtH (1, xe ),
which implies xe = 1, which is a contradiction to (7).
Lemma 3.12 (cf. Lemma 4.49 in [5]). The subgroup K is generated by X together
with the union of all Hi ’s. Further, the Cayley graph Γ(K, X t H) is quasi-isometric to
PJ (Y), and hence a quasi-tree.
Proof. Let Σ = {H1 , H2 , ..., Hn } ⊆ Y. Let diam(Σ) denote the diameter of the set Σ
in the combinatorial metric dP . Since Σ is a finite set, diam(Σ) is finite. Define
φ : K → Y as φ(k) = kH1
By Property (a) above, if xe ∈ X is of type (i, j),
dP (xe H1 , H1 ) ≤ dP (xe H1 , xe Hj ) + dP (xe Hj , Hi ) + dP (Hi , H1 )
= dP (H1 , Hj ) + 1 + dP (Hi , H1 ) ≤ 2diam(Σ) + 1.
Further, for h ∈ Hi ,
dP (hH1 , H1 ) ≤ dP (hH1 , hHi ) + dP (hHi , H1 )
= dP (H1 , Hi ) + dP (Hi , H1 ) ≤ 2diam(Σ).
Thus for all g ∈ ⟨X ∪ H1 ∪ H2 ∪ ... ∪ Hn ⟩, we have
dP (φ(1), φ(g)) ≤ (2diam(Σ) + 1)|g|XtH ,    (8)
where |g|XtH denotes the length of g in the generating set X ∪ H1 ∪ H2 ∪ ... ∪ Hn . (We use this notation for the sake of uniformity.)
Now let g ∈ K and suppose dP (φ(1), φ(g)) = r, i.e., dP (H1 , gH1 ) = r. If r = 0, then
H1 = gH1 , thus g ∈ H1 and |g|XtH ≤ 1. If r > 0, consider the geodesic p in PJ (Y)
connecting H1 and gH1 . Let
v0 = H1 = g0 H1 (g0 = 1), v1 = g1 Hλ1 , v2 = g2 Hλ2 , ..., vr−1 = gr−1 Hλr−1 , vr = gH1
Figure 10: The geodesic p
be the sequence of vertices of p, for some λj ∈ {1, 2, ..., n}, and some gi ∈ K (see Fig.10).
Now gi Hλi is connected by a single edge to gi+1 Hλi+1 . Thus dP (gi Hλi , gi+1 Hλi+1 ) =
1, which implies dP (Hλi , gi−1 gi+1 Hλi+1 ) = 1. Then there exists x ∈ X such that
x ∈ Hλi gi−1 gi+1 Hλi+1
and
dZtH (1, x) = dZtH (1, Hλi gi−1 gi+1 Hλi+1 ).
Thus x = hgi−1 gi+1 k for some h ∈ Hλi and some k ∈ Hλi+1 which implies gi−1 gi+1 =
h−1 xk −1 . So |gi−1 gi+1 |XtH ≤ 3, which implies
|g|XtH = | ∏_{i=1}^{r} g_{i−1}^{−1} g_i |XtH ≤ ∑_{i=1}^{r} | g_{i−1}^{−1} g_i |XtH ≤ 3r = 3dP (φ(1), φ(g)).    (9)
The above argument also provides a representation for every element g ∈ K as a
product of elements from X ∪ H1 ∪ H2 ∪ ... ∪ Hn . Thus K is generated by the union of
X and all Hi ’s. By (8) and (9), φ is a quasi-isometric embedding of (K, |.|XtH ) into
(PJ (Y), dP ) satisfying
(1/3)|g|XtH ≤ dP (φ(1), φ(g)) ≤ (2diam(Σ) + 1)|g|XtH .
Since Y is contained in the closed diam(Σ)-neighborhood of φ(K), φ is a quasi-isometry.
This implies that Γ(K, X t H) is a quasi-tree.
Let d̃i denote the modified relative metric on Hi associated with the Cayley graph Γ(G, Z t H) from Theorem 2.8. Let d̂i^X denote the relative metric on Hi associated with the Cayley graph Γ(K, X t H). We will now show that d̂i^X is proper for all i = 1, 2, ..., n. We will use the fact that d̃i is proper and derive a relation between d̃i and d̂i^X .
Lemma 3.13 (cf. Lemma 4.50 in [5]). There exists a constant α such that for any
Y ∈ Y and any x ∈ X t H, if
diam̃(projY {1, x}) > α,
then x ∈ Hj and Y = Hj for some j.
Proof. We prove the result for
α = max{J + 2ξ, 6C}.
Suppose that diam̃(projY {1, x}) > α and x ∈ X has type (k, l), i.e., there exists an edge connecting Hk and gHl in PJ (Y), where g ∈ K. We consider three possible cases and arrive at a contradiction in each case.
Case 1: Hk ≠ Y ≠ xHl . Then
diam̃(projY {1, x}) ≤ dπY (Hk , xHl ) ≤ dY (Hk , xHl ) + 2ξ ≤ J + 2ξ ≤ α,
using (1) and the fact that Hk and xHl are connected by an edge in PJ (Y), which is a contradiction.
Case 2: Hk = Y . Since x ∉ Hk , let y ∈ projY (x), i.e., dZtH (x, y) = dZtH (x, Hk ) = dZtH (x, Y ).
Figure 11: Case 2
By Lemma 3.6, if d̂k (1, y) ≤ 3C, then
diam̂(projY {1, x}) ≤ diam̂(projY (1)) + diam̂(projY (x)) + d̂k (projY (1), projY (x)) ≤ 0 + 3C + d̂k (1, y) ≤ 6C ≤ α.
Then by (2), we have
diam̃(projY {1, x}) ≤ α,
which is a contradiction. Thus d̂k (1, y) > 3C. This implies that 1 ∉ projY (x) (see Fig. 11). By definition of the nearest point projection, dZtH (1, x) > dZtH (y, x), which implies dZtH (1, x) > dZtH (1, y −1 x). Since y −1 x ∈ Hk gHl , we obtain dZtH (1, x) > dZtH (1, Hk gHl ), which is a contradiction to the choice of x.
Case 3: Y = xHl , Hk ≠ Y . This case reduces to Case 2, since we can translate everything by x−1 .
Thus we must have x ∈ Hj for some j. Suppose that Hj ≠ Y . But then
diam̃(projY {1, x}) ≤ diam̃(projY (Hj )) ≤ 4C ≤ α, by Lemma 3.7, which is a contradiction.
Figure 12: The cycle ep
Lemma 3.14 (cf. Lemma 4.45 in [5]). If Hi = f Hj , then Hi = Hj and f ∈ Hi .
Consequently, if gHi = f Hj , then Hi = Hj and g −1 f ∈ Hi .
Proof. If Hi = f Hj , then 1 = f k for some k ∈ Hj . Then f = k −1 ∈ Hj , which implies
Hi = Hj .
Lemma 3.15 (cf. Theorem 4.42 in [5]). For all i = 1, 2, ..., n and any h ∈ Hi , we have
α d̂i^X (1, h) ≥ d̃i (1, h),
where α is the constant from Lemma 3.13. Thus d̂i^X is proper.
Proof. Let h ∈ Hi such that d̂i^X (1, h) = r. Let e denote the Hi -edge in the Cayley graph Γ(K, X t H) connecting h to 1, labeled by h−1 . Let p be an admissible (see Definition 2.1) geodesic path of length r in Γ(K, X t H) connecting 1 and h. Then ep forms a cycle. Since p is admissible, e is isolated in this cycle.
Let Lab(p) = x1 x2 ...xr for some x1 , x2 , ..., xr ∈ X t H. Let
v0 = 1, v1 = x1 , v2 = x1 x2 , ..., vr = x1 x2 ...xr = h.
Since these are also elements of G, for all k = 1, 2, ..., r we have
diam̃(projHi {vk−1 , vk }) = diam̃(projHi {x1 x2 ...xk−1 , x1 x2 ...xk−1 xk }) = diam̃(projY {1, xk }),
where Y = (x1 x2 ...xk−1 )−1 Hi .
If diam̃(projY {1, xk }) > α for some k, then by Lemma 3.13, xk ∈ Hj and Y = Hj for some j. By Lemma 3.14, Hi = Hj and x1 x2 ...xk−1 ∈ Hj . But then e is not isolated in the cycle ep, which is a contradiction.
Hence
diam̃(projHi {vk−1 , vk }) ≤ α
for all k = 1, 2, ..., r, which implies
d̃i (1, h) ≤ diam̃(projHi {v0 , vr }) ≤ ∑_{j=1}^{r} diam̃(projHi {vj−1 , vj }) ≤ rα = α d̂i^X (1, h).
Figure 13: Dealing with elements of Z ∩ K that represent elements of H
3.4 Proof of Proposition 3.2
The goal of this section is to alter our relative generating set X from Section 3.3, so that
we obtain another relative generating set that satisfies all the conditions of Proposition
3.2. To do so, we need to establish a relation between the set X and the set Z. We will
need the following obvious lemma.
Lemma 3.16. Let X and Y be generating sets of G such that supx∈X |x|Y < ∞ and
supy∈Y |y|X < ∞. Then Γ(G, X) is quasi-isometric to Γ(G, Y ). In particular Γ(G, X)
is a quasi-tree if and only if Γ(G, Y ) is a quasi-tree.
Remark 3.17. The above lemma implies that if we change a generating set by adding
finitely many elements, then the property that the Cayley graph is a quasi-tree still
holds.
We also need to note that from (1) in Definition 3.3, it easily follows that
dY (A, B) ≤ dπY (A, B) + 2ξ.
(10)
Lemma 3.18. For a large enough J, the set X constructed in Section 3.3 satisfies
the following property : If z ∈ Z ∩ K does not represent any element of Hi for all
i = 1, 2, ..., n, then z ∈ X.
Proof. Recall that dZtH denotes the combinatorial metric on Γ(G, Z t H). Let z ∈
Z ∩ K be as in the statement of the lemma. Then z ∈ Hi zHi for all i and 1 ∉ Hi zHi .
Thus
dZtH (1, Hi zHi ) ≥ 1 = dZtH (1, z) ≥ dZtH (1, Hi zHi ),
which implies
dZtH (1, Hi zHi ) = dZtH (1, z) for all i.
In order to prove z ∈ X, we must show that Hi and zHi are connected by an edge
in PJ (Y). By Definition 3.3, this is true if
dY (Hi , zHi ) ≤ J for all Y ≠ Hi , zHi .
Figure 14: Bigons in the Cayley graph
In view of (10), we will estimate dπY (Hi , zHi ).
Let dZtH (h, x) = dZtH (Hi , Y ) and dZtH (f, y) = dZtH (zHi , Y ) for some h ∈
Hi , f ∈ zHi and for some x, y ∈ Y = gHj . Let p be a geodesic connecting h and x;
and q be a geodesic connecting y and f . Let h2 denote the edge connecting x and y,
labelled by an element of Hj . Similarly, let s, t denote the edges connecting h, 1 and
z, f respectively, that are labelled by elements of Hi . Let e denote the edge connecting
1 and z, labelled by z. Consider the geodesic hexagon W with sides p, h2 , q, t, e, s (see
Fig.13).
Arguing as in Lemma 3.10, we can show that h2 cannot be connected to q, p, s or
t. Since z does not represent any element of Hi for all i, h2 cannot be connected to e.
Thus, h2 is isolated in W . By Lemma 2.4, d̂j (x, y) ≤ 6C. By Lemma 3.7,
dY (Hi , zHi ) ≤ dπY (Hi , zHi ) ≤ 14C + 2ξ.
So we conclude that by taking the constant J to be sufficiently large so that Proposition
3.4 holds and J exceeds 14C + 2ξ, we can ensure that z ∈ X and the arguments of the
previous section still hold.
Lemma 3.19. There are only finitely many elements of Z ∩ K that can represent an
element of Hi for some i ∈ {1, 2, ..., n}.
Proof. Let z ∈ Z ∩ K represent an element of Hi for some i = 1, 2, ..., n. Then in the
Cayley graph Γ(G, Z t H), we have a bigon between the elements 1 and h, where one
edge is labelled by z, and the other edge is labelled by an element of Hi , say h1 (see
Rem. 2.2 and Fig.14).
This implies that d̂i (1, z) ≤ 1, so d̃i (1, z) ≤ 1. But then z ∈ B̃i (1, 1), i.e., the ball of radius 1 in the subgroup Hi in the relative metric, centered at the identity. But this is a finite ball. Take
ρ = ∪_{i=1}^{n} B̃i (1, 1).
Then z has at most |ρ| choices, which is finite.
By Lemma 3.19 and by selecting the constant J as specified in Lemma 3.18, we conclude that the set X from Section 3.3 omits at most finitely many elements of Z ∩ K. By adding these finitely many remaining elements of Z ∩ K to X, we obtain a new relative generating set X′ such that |X′ ∆X| < ∞. By Lemma 2.11, {H1 , H2 , ..., Hn } ,→h (K, X′ ) and Z ∩ K ⊂ X′ . By Remark 3.17, Γ(K, X′ t H) is also a quasi-tree. Thus X′ is the required set in the statement of Proposition 3.2, which completes the proof.
3.5 Applications of Theorem 3.1
In order to prove Theorem 1.2, we first need to recall the following definitions.
Definition 3.20 (Loxodromic element). Let G be a group acting on a hyperbolic space
S. An element g ∈ G is called loxodromic if the map Z → S defined by n ↦ g^n s is a
quasi-isometric embedding for some (equivalently, any) s ∈ S.
Definition 3.21 (Elementary subgroup, Lemma 6.5 in [5]). Let G be a group acting
acylindrically on a hyperbolic space S, g ∈ G a loxodromic element. Then g is contained
in a unique maximal elementary subgroup E(g) of G given by
E(g) = {h ∈ G |dHau (l, h(l)) < ∞ },
where l is a quasi-geodesic axis of g in S.
Corollary 3.22. A group G is acylindrically hyperbolic if and only if G has an acylindrical and non-elementary action on a quasi-tree.
Proof. If G has an acylindrical and non-elementary action on a quasi-tree, by Theorem
2.9, G is acylindrically hyperbolic. Conversely, let G be acylindrically hyperbolic, with
an acylindrical non-elementary action on a hyperbolic space X. Let g be a loxodromic
element for this action. By Lemma 6.5 of [5] the elementary subgroup E(g) is virtually
cyclic and thus countable. By Theorem 6.8 of [5], E(g) is hyperbolically embedded in G.
Taking K = G and E(g) to be the hyperbolically embedded subgroup in the statement
of Theorem 3.1 now gives us the result. Since E(g) is non-degenerate, by [11], Lemma
5.12, the resulting action of G on the associated Cayley graph Γ(G, X t E(g)) is also
non-elementary.
The following corollary is an immediate consequence of Theorem 3.1.
Corollary 3.23. Let {H1 , H2 , ..., Hn } be a finite collection of countable subgroups of a
group G such that {H1 , H2 , ..., Hn } ,→h G. Let K be a subgroup of G. If Hi ≤ K for
all i = 1, 2, ..., n, then {H1 , H2 , ..., Hn } ,→h K.
Definition 3.24. Let (M, d) be a geodesic metric space, and ε > 0 a fixed constant. A subset S ⊂ M is said to be ε-coarsely connected if for any two points x, y in S, there exist points x0 = x, x1 , x2 , ..., xn−1 , xn = y in S such that for all i = 0, ..., n − 1,
d(xi , xi+1 ) ≤ ε.
Further, we say that S is coarsely connected if it is ε-coarsely connected for some ε > 0.
Recall that we denote the closed σ-neighborhood of S by S +σ .
Definition 3.25. Let (M, d) be a geodesic metric space, and σ > 0 a fixed constant. A
subset S ⊂ M is said to be σ-quasi-convex if for any two points x, y in S, any geodesic
connecting x and y is contained in S +σ . Further, we say that S is quasi-convex if it is
σ-quasi-convex for some σ > 0.
Corollary 3.26. Let H be a finitely generated subgroup of an acylindrically hyperbolic
group G. Then there exists a subset X ⊂ G such that
(a) Γ(G, X) is hyperbolic, non-elementary and acylindrical
(b) H is quasi-convex in Γ(G, X)
To prove the above corollary, we need the following two lemmas.
Lemma 3.27. Let T be a tree, and let Q ⊂ T be ε-coarsely connected. Then Q is ε-quasi-convex.
Proof. Let ε > 0 be the constant from Definition 3.24. Let x, y be two points in Q, and p be any geodesic between them. Then there exist points x0 = x, x1 , x2 , ..., xn−1 , xn = y in Q such that for all i = 0, ..., n − 1, d(xi , xi+1 ) ≤ ε. Let pi denote the geodesic segment between xi and xi+1 for all i = 0, 1, ..., n − 1. Since T is a tree, we must have that
p ⊆ ∪_{i=0}^{n−1} pi .
By definition, for all i = 0, 1, ..., n − 1, pi ⊆ B(xi , ε), the ball of radius ε centered at xi . Since xi ∈ Q for all i = 0, 1, ..., n − 1, we obtain pi ⊆ Q+ε . This implies p ⊆ Q+ε .
Lemma 3.28. Let Γ be a quasi-tree, and S ⊂ Γ be coarsely connected. Then S is
quasi-convex.
Proof. Let T be a tree, and dΓ and dT denote distances in Γ and T respectively. Let
δ > 0 be the hyperbolicity constant of Γ. Let q : T → Γ be a (λ, C)-quasi-isometry, i.e.,
−C + (1/λ) dT (a, b) ≤ dΓ (q(a), q(b)) ≤ λ dT (a, b) + C.
Let ε > 0 be the constant from Definition 3.24 for S. Set Q = q −1 (S). Then Q ⊂ T . It is easy to check that Q is ρ-coarsely connected with constant ρ = λ(ε + C). By Lemma 3.27, Q is ρ-quasi-convex.
Let x, y be two points in S, and p be a geodesic between them. Choose points a, b in
Q such that q(a) = x and q(b) = y. Let r denote the (unique) geodesic in T between a
and b. Since Q is ρ-quasi-convex, we have
r ⊆ Q+ρ .
Set σ = λρ + C. Then
q(r) ⊆ S +σ .
Further q ◦ r is a quasi-geodesic between x and y. By Lemma 2.17, there exists a
constant R (= R(λ, C, δ)) such that q(r) and p are at Hausdorff distance less than R from
each other. This implies that p ⊆ S +(R+σ) . Thus S is quasi-convex.
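Coarse connectivity, which drives the proof of Corollary 3.26 below, is simple to test on finite data: a finite set S is ε-coarsely connected exactly when the graph on S with an edge between any two points at distance at most ε is connected. The following sketch (hypothetical data and helper names, not from the paper) implements that check.

```python
def is_coarsely_connected(S, d, eps):
    """Check that the finite set S is eps-coarsely connected (Definition 3.24):
    the graph on S with an edge whenever d(x, y) <= eps must be connected."""
    S = list(S)
    if not S:
        return True
    seen, stack = {S[0]}, [S[0]]
    while stack:
        x = stack.pop()
        for y in S:
            if y not in seen and d(x, y) <= eps:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(S)

# Hypothetical example on the real line with d(x, y) = |x - y|:
S = [0, 1.5, 3, 4.5, 10]
d = lambda x, y: abs(x - y)
print(is_coarsely_connected(S, d, eps=2))    # False: the point 10 is isolated
print(is_coarsely_connected(S, d, eps=6))    # True
```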
Proof of Corollary 3.26. By Corollary 3.22, there exists a generating set X of G such
that Γ(G, X) is a quasi-tree (hence hyperbolic), and the action of G on Γ(G, X) is
acylindrical and non-elementary. Let dX denote the metric on Γ(G, X) induced by the
generating set X. Let H = ⟨x1 , x2 , ..., xn ⟩. Set
ε = max{dX (1, xi ±1 ) | i = 1, 2, ..., n}.
We claim that H is coarsely connected with constant ε. Indeed, if u, v are elements of H, then u−1 v = w1 w2 ...wk , where wj ∈ {x1 ±1 , ..., xn ±1 }. Set
z0 = u, z1 = uw1 , ..., zk−1 = uw1 w2 ...wk−1 , zk = v.
Clearly zi ∈ H for all i = 0, 1, ..., k. Further,
dX (zi , zi+1 ) = dX (1, wi+1 ) ≤ ε
for all i = 0, 1, 2, ..., k − 1. By Lemma 3.28, H is quasi-convex in Γ(G, X).
References
[1] J. Behrstock, M. F. Hagen, A. Sisto; Hierarchically hyperbolic spaces I: curve
complexes for cubical groups, arXiv:1412.2171v3.
[2] M. Bestvina, K. Bromberg, K. Fujiwara; Constructing group actions on quasi-trees
and applications to mapping class groups, arXiv:1006.1939v5.
[3] B. Bowditch; Intersection numbers and the hyperbolicity of the curve complex, J.
Reine Angew. Math. 598 (2006), 105-129.
[4] M. R. Bridson and A. Haefliger; Metric Spaces of Non-Positive Curvature, volume
319 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin,
1999.
[5] F. Dahmani, V. Guirardel, D. Osin; Hyperbolically embedded subgroups and rotating families in groups acting on hyperbolic spaces, Memoirs AMS, To appear.
[6] Mark F. Hagen; Weak hyperbolicity of cube complexes and quasi-arboreal groups,
J. Topol. 7, no. 2 (2014), 385-418.
[7] I. Kapovich, K. Rafi; On hyperbolicity of free splitting and free factor complexes,
Groups, Geometry and Dynamics 8, no. 2 (2014), 391-414.
[8] Sang-hyun Kim, T. Koberda; The geometry of the curve graph of a right-angled
Artin group, Int. J. Algebra Comput. 24, no. 2 (2014), 121-169.
[9] J. F. Manning; Geometry of pseudocharacters, Geometry and Topology 9 (2005)
1147-1185.
[10] A. Minasyan, D. Osin; Small subgroups of acylindrically hyperbolic groups,
Preprint 2016.
[11] D. Osin; Acylindrically hyperbolic groups, Trans. Amer. Math. Soc. 368, no. 2
(2016), 851-888.
[12] D. Osin; On acylindrical hyperbolicity of groups with positive first ℓ2 -Betti number,
Bull. London Math. Soc. 47, no. 5 (2015), 725-730.
[13] E. Rips; Subgroups of small cancellation groups, Bull. London Math. Soc. 14, no.
1(1982), 45-47.
[14] A. Sisto; On metric relative hyperbolicity, arXiv:1210.8081.
Sahana Balasubramanya
Department of Mathematics, Vanderbilt University
Nashville, TN -37240, U.S.A
Email : [email protected]
Covert Communication over Classical-Quantum
Channels
Azadeh Sheikholeslami∗† , Boulat A. Bash† , Don Towsley‡ , Dennis Goeckel∗ , Saikat Guha†
∗ Electrical and Computer Engineering Department, University of Massachusetts, Amherst, MA
† Quantum Information Processing Group, Raytheon BBN Technologies, Cambridge, MA
‡ College of Information and Computer Sciences, University of Massachusetts, Amherst, MA
arXiv:1601.06826v4 [quant-ph] 25 Jun 2017
Abstract—The square root law (SRL) is the fundamental limit of covert communication over classical memoryless channels (with a classical adversary) and quantum lossy-noisy bosonic channels (with a quantum-powerful adversary). The SRL states that O(√n) covert bits, but no more, can be reliably transmitted in n channel uses with O(√n) bits of secret pre-shared between the communicating parties. Here we investigate covert communication over general memoryless classical-quantum (cq) channels with fixed finite-size input alphabets, and show that the SRL governs covert communications in typical scenarios. We characterize the optimal constants in front of √n for the reliably communicated covert bits, as well as for the number of the pre-shared secret bits consumed. We assume a quantum-powerful adversary that can perform an arbitrary joint (entangling) measurement on all n channel uses. However, we analyze the legitimate receiver that is able to employ a joint measurement as well as one that is restricted to performing a sequence of measurements on each of n channel uses (product measurement). We also evaluate the scenarios where covert communication is not governed by the SRL.
I. INTRODUCTION
Security is important for many types of communication, ranging from electronic commerce to diplomatic
missives. Preventing the extraction of information from
a message by an unauthorized party has been extensively
studied by the cryptography and information theory
communities. However, the standard setting to analyze
secure communications does not address the situation
when not only must the content of the signal be protected, but also the detection of the occurrence of the
communication itself must be prevented. This motivates
the exploration of the information-theoretic limits of
This research was funded by the National Science Foundation
(NSF) under grants ECCS-1309573 and CNS-1564067, and DARPA
under contract number HR0011-16-C-0111. This document does not
contain technology or technical data controlled under either the
U.S. International Traffic in Arms Regulations or the U.S. Export
Administration Regulations. This work was presented in part at the
IEEE International Symposium on Information Theory, July 2016 [1].
Fig. 1. Covert communication setting. Alice has a noisy channel to
legitimate receiver Bob and adversary Willie. Alice encodes message
W with blocklength n code, and chooses whether to transmit. Willie
observes his channel from Alice to determine whether she is quiet
(null hypothesis H0 ) or not (alternate hypothesis H1 ). Alice and
Bob’s coding scheme must ensure that any detector Willie uses is
close to ineffective (i.e., a random guess between the hypotheses),
while allowing Bob to reliably decode the message (if one is
transmitted). Alice and Bob may share a secret prior to transmission.
covert communications, i.e., communicating with low
probability of detection/interception (LPD/LPI).
We consider a broadcast channel setting in Figure 1
typical in the study of the fundamental limits of secure
communications, where the intended receiver Bob and
adversary Willie receive a sequence of input symbols
from Alice that are corrupted by noise. We label one of
the input symbols (say, zero) as the “innocent symbol”
indicating “no transmission by Alice”, whereas the other
symbols correspond to transmissions, and are, therefore,
“non-innocent”. In a covert communications scenario,
Willie’s objective is to estimate Alice’s transmission status, while Bob’s objective is to decode Alice’s message,
given their respective observations. Thus, the transmitter
Alice must hide her transmissions in channel noise from
Willie while ensuring reliable decoding by Bob. The
properties of the noise in the channels from Alice to
Willie and Bob result in the following “six” scenarios:
(A) covert communication is governed by the square root law (SRL): O(√n) covert bits (but no more) can be reliably transmitted over n channel uses,
(B) corner cases:
1. covert communication is impossible,
2. O(1) covert bits can be reliably transmitted over n channel uses,
3. covert communication is governed by the logarithmic law: O(log n) covert bits can be reliably transmitted over n channel uses,
4. constant-rate covert communication is possible,
5. covert communication is governed by the square root logarithmic law: O(√n log n) covert bits (but no more) can be reliably transmitted over n channel uses.
The research on the fundamental limits of covert communications in the setting described in Figure 1 has focused
on scenario (A), whereas scenarios in (B) are, arguably,
corner cases. The authors of [2], [3] examined covert
communications when Alice has additive white Gaussian
noise (AWGN) channels to both Willie and Bob. They
found that the SRL governs covert communications, and
that, to achieve it, Alice and Bob may have to share
a secret prior to communicating. The follow-on work
on the SRL for binary symmetric channels (BSCs) [4]
showed its achievability without the use of a pre-shared
secret, provided that Bob has a better channel from Alice
than Willie. The SRL was further generalized to the
entire class of discrete memoryless channels (DMCs) [5], [6] with [6] finding that O(√n) pre-shared secret bits were sufficient. However, the key contribution of [5], [6] was the characterization of the optimal constants in front of √n in the SRL for both the communicated
bits as well as the pre-shared secret bits consumed in
terms of the channel transition probability p(y, z|x). We
note that, while zero is the natural innocent symbol for
channels that take continuously-valued input (such as the
AWGN channel), in the analysis of the discrete channel
setting an arbitrary input is designated as innocent. A
tutorial overview of this research can be found in [7].
It was recently shown that the SRL governs the fundamental limits of covert communications over a lossy
thermal-noise bosonic channel [8], which is a quantum
description of optical communications in many practical
scenarios (with vacuum being the innocent input). Notably, the SRL is achievable in this setting even when
Willie captures all the photons that do not reach Bob,
performs an arbitrary measurement that is only limited
by the laws of quantum mechanics, and has access to
unlimited quantum storage and computing capabilities.
Furthermore, the SRL cannot be surpassed even if Alice and Bob employ an encoding/measurement/decoding
scheme limited only by the laws of quantum mechanics,
including the transmission of codewords entangled over
many channel uses and making collective measurements.
Successful demonstration of the SRL for a particular
quantum channel in [7] motivates a generalization to
arbitrary quantum channels, which is the focus of this
article. We study the memoryless classical-quantum (cq)
channel: a generalization of the DMC that maps a finite
set of discrete classical inputs to quantum states at the
output. This allows us to prove achievability of the SRL
for an arbitrary memoryless quantum channel, since a cq
channel can be induced by a specific choice of modulation at Alice. Our main result is the following theorem
that establishes the optimal sizes log M and log K (in
bits) of the reliably-transmissible covert message and the
required pre-shared secret when the cq channel is used
n times:
Theorem 1. Consider a stationary memoryless
classical-quantum channel that takes input x ∈ X
at Alice and outputs the quantum states σx and
ρx at Bob and Willie, respectively, with x = 0
designating the innocent state. If, ∀x ∈ X , the supports
supp(σx ) ⊆ supp(σ0 ) and supp (ρx ) ⊆ supp (ρ0 ) such
that ρ0 is not a mixture of {ρx }x∈X \{0} , then there
exists a coding scheme that meets the covertness and
reliability criteria
$$\lim_{n\to\infty} D\!\left(\bar{\rho}^n \,\middle\|\, \rho_0^{\otimes n}\right) = 0 \quad\text{and}\quad \lim_{n\to\infty} P_e^B = 0,$$
with optimal scaling coefficients of message length and key length,
$$\lim_{n\to\infty} \frac{\log M}{\sqrt{n\,D\!\left(\bar{\rho}^n \middle\| \rho_0^{\otimes n}\right)}} = \frac{\sum_{x\in\mathcal{X}\setminus\{0\}} \tilde{p}(x)\, D(\sigma_x\|\sigma_0)}{\sqrt{\tfrac{1}{2}\chi^2(\tilde{\rho}\|\rho_0)}},$$
and
$$\lim_{n\to\infty} \frac{\log K}{\sqrt{n\,D\!\left(\bar{\rho}^n \middle\| \rho_0^{\otimes n}\right)}} = \frac{\left[\sum_{x\in\mathcal{X}\setminus\{0\}} \tilde{p}(x)\left(D(\rho_x\|\rho_0) - D(\sigma_x\|\sigma_0)\right)\right]^{+}}{\sqrt{\tfrac{1}{2}\chi^2(\tilde{\rho}\|\rho_0)}},$$
where $\bar{\rho}^n$ is the average state at Willie when a transmission occurs, $P_e^B$ is Bob's decoding error probability, $\tilde{p}(x)$ is a distribution on the non-innocent input symbols ($\sum_{x\in\mathcal{X}\setminus\{0\}}\tilde{p}(x) = 1$), $\tilde{\rho}$ is the average non-innocent state at Willie induced by $\tilde{p}(x)$, $[c]^{+} = \max\{c,0\}$, $D(\rho\|\sigma) \equiv \mathrm{Tr}\{\rho(\log\rho - \log\sigma)\}$ is the quantum relative entropy, and $\chi^2(\rho\|\sigma) \equiv \mathrm{Tr}\{(\rho-\sigma)^2\sigma^{-1}\}$ is the quantum $\chi^2$ divergence.
Theorem 1 generalizes [6] by assuming that both
Willie and Bob are limited only by the laws of quantum
mechanics, and, thus, can perform arbitrary joint measurement over all n channel uses. While it is reasonable
to consider a quantum-powerful Willie, a practical Bob
would perform a symbol-by-symbol measurement. In
this case, we show that Theorem 1 still holds with quantum relative entropy D(σx kσ0 ) (characterizing Bob’s cq
channel from Alice) replaced by the classical relative
entropy characterizing the classical DMC induced by
Bob’s choice of measurement. We also develop explicit
conditions that differentiate covert communication corner cases for cq channels given in (B) above.
The scaling coefficients in Theorem 1 are optimal for
the discrete-input cq channels, as we prove both the
achievability and the converse in this setting. We leave
open the general converse, which should account for
Alice being able to encode her message in a codebook
comprising arbitrary quantum states that are entangled
over all n channel uses and for Bob to employ an
arbitrary joint measurement. Such a converse would
show that, for an arbitrary quantum channel, no more than $O(\sqrt{n})$ bits can be sent both reliably and covertly in $n$ channel uses.
This paper is organized as follows: in the next section
we present the basic quantum information theory background, our system model, and metrics. In Section III we
state our main results, which we prove in the following
sections: we show the achievability of Theorem 1 in
Section IV, show the converse (that is limited to cq
channels) in Section V, and characterize in Section VI
the square-root law covert communications when Bob is
restricted to product measurement while Willie remains
quantum-powerful. In Section VII we show the converse
(that is again limited to cq channels) of the square root
logarithmic law, and in Section VIII, we prove that covert
communication is impossible if codewords with support
contained in the innocent state support at Willie are
unavailable. We conclude in Section IX with a discussion
of future work.
II. BACKGROUND, SYSTEM MODEL, AND METRICS
A. Quantum Information Theory Background
Here we provide basic background on quantum information theory necessary for understanding the paper.
We refer the reader to [9]–[11] and other books for a
comprehensive treatment of quantum information. The
classical statistical description of a communication channel p(y|x) stems from the physics-based description of
the underlying quantum channel (e.g., the physical electromagnetic propagation medium) along with a choice
of the quantum states of the transmitted signals used
to modulate the information, and the specific chosen
receiver measurement. The most general quantum description of a point-to-point memoryless channel is given
by a trace-preserving completely-positive (TPCP) map
NA→B from Alice’s quantum input A to Bob’s quantum
output, $B$. In the $i$th channel use, Alice can transmit a quantum state $\phi_i^A \in \mathcal{D}(\mathcal{H}_A)$, where $\mathcal{D}(\mathcal{H}_A)$ is the set of unit-trace positive Hermitian operators (called "density operators") in a finite-dimensional Hilbert space $\mathcal{H}_A$ at the channel's input. This results in Bob receiving the state $\sigma_i^B = \mathcal{N}_{A\to B}(\phi_i^A) \in \mathcal{D}(\mathcal{H}_B)$ as the $i$th output of the channel. Then, over $n$ independent and identical uses of the channel, Alice transmits a product state $\bigotimes_{i=1}^{n}\phi_i^A \in \mathcal{D}(\mathcal{H}_A^{\otimes n})$ and Bob receives a product state $\bigotimes_{i=1}^{n}\sigma_i^B = \bigotimes_{i=1}^{n}\mathcal{N}_{A\to B}(\phi_i^A) \in \mathcal{D}(\mathcal{H}_B^{\otimes n})$. However, Alice, in general, can transmit an entangled state $\phi^{A^n} \in \mathcal{D}(\mathcal{H}_A^{\otimes n})$ over the $n$ channel uses, resulting in a potentially entangled state $\sigma^{B^n} = \mathcal{N}_{A\to B}^{\otimes n}(\phi^{A^n}) \in \mathcal{D}(\mathcal{H}_B^{\otimes n})$ at the channel's output.¹ Entangled states are more general since they do not necessarily decompose into product states.
One must measure a quantum state to obtain information from it. The most general quantum description of a
measurement is given by a set of positive operator-valued measure (POVM) operators $\{\Pi_j\}$, where, $\forall j$, $\Pi_j \ge 0$ and $\sum_j \Pi_j = I$. When acting on a state $\sigma$, $\{\Pi_j\}$
produces outcome j with probability p(j) = Tr (σΠj ). A
sequence of measurements acting individually on each of
n channel uses (followed by classical post-processing) is
called a product measurement. However, Bob, in general,
can employ a joint (entangling) measurement on the output state $\sigma^{B^n}$ that cannot be realized by any product
measurement over n channel uses. Transmitting states
that are entangled over multiple channel uses and/or
employing joint measurements over multiple blocks of
channel uses at the output can increase the reliable
communication rate (in bits per channel use), even if
the underlying quantum channel acts independently and
memorylessly on each channel use.
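As a concrete illustration of the Born-rule computation $p(j) = \mathrm{Tr}(\sigma\Pi_j)$ described above, the following minimal numpy sketch evaluates the outcome probabilities of a two-outcome qubit POVM; the state and operators are arbitrary examples for illustration, not quantities from the paper.

```python
import numpy as np

# Minimal illustration (hypothetical example): outcome probabilities of a POVM
# acting on a density operator, p(j) = Tr(sigma @ Pi_j).

sigma = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)   # unit-trace, positive
Pi_0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)     # projector onto |0>
Pi_1 = np.eye(2, dtype=complex) - Pi_0                       # completeness: Pi_0 + Pi_1 = I

povm = [Pi_0, Pi_1]
probs = [np.real(np.trace(sigma @ Pi)) for Pi in povm]
assert abs(sum(probs) - 1.0) < 1e-12    # outcome probabilities sum to one
print(probs)                            # e.g. [0.8, 0.2]
```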
Now consider the case when Alice uses a product
state for transmission, where she maps a classical index
$x \in \mathcal{X}$, $|\mathcal{X}| < \infty$, to a transmitted quantum state $\phi_x^A$ in each channel use. The states transmitted in each channel use are drawn from a predetermined input alphabet that is a finite discrete subset of $\mathcal{D}(\mathcal{H}_A)$. Bob receives $\sigma_x^B = \mathcal{N}_{A\to B}(\phi_x^A) \in \mathcal{D}(\mathcal{H}_B)$, $x \in \mathcal{X}$.
Suppose Bob is not restricted to a product measurement
for his receiver. This simplified description of a quantum
channel x → σxB is known as a classical-quantum (cq)
channel. The maximum classical communication rate
allowed by the cq channel is the Holevo capacity
$$C = \max_{p(x)} \left[ H\!\left(\sum_{x\in\mathcal{X}} p(x)\,\sigma_x^B\right) - \sum_{x\in\mathcal{X}} p(x)\, H\!\left(\sigma_x^B\right) \right]$$
bits per channel use, where $H(\sigma) = -\mathrm{Tr}(\sigma\log_2\sigma)$ is the von Neumann entropy of the quantum state $\sigma$ [12], [13].

¹ The most general channel model $\mathcal{N}_{A^n\to B^n}$ takes a state $\sigma^{A^n} \in \mathcal{H}_A^{\otimes n}$ to $\sigma^{B^n} \in \mathcal{H}_B^{\otimes n}$, allowing the output of an entangled state for an input product state. While we consider such a channel in Section VIII, such generality is usually unnecessary.
Restricting Bob to identical measurements on each channel output, described by the POVM $\{\Pi_y\}$, $y \in \mathcal{Y}$, induces a DMC $p(y|x) = \mathrm{Tr}\{\sigma_x^B \Pi_y\}$. The Shannon capacity of this DMC, $\max_{p(x)} I(X;Y)$, induced by any choice of product-measurement POVM, is generally strictly less than the Holevo capacity $C$ of the cq channel $x \to \sigma_x^B$. Furthermore, despite the fact that the transmitted and received states $\phi^{A^n}$ and $\sigma^{B^n}$ are product states over $n$ uses of a memoryless cq channel, a joint measurement on $\sigma^{B^n}$ is in general needed to achieve the Holevo capacity.
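To make the induced-DMC construction concrete, the short numpy sketch below builds the transition matrix $p(y|x) = \mathrm{Tr}\{\sigma_x^B\Pi_y\}$ for a hypothetical two-symbol cq channel with qubit outputs; the particular states and POVM are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Hypothetical qubit output states at Bob for inputs x = 0 (innocent) and x = 1.
sigma = {
    0: np.array([[0.95, 0.0], [0.0, 0.05]], dtype=complex),
    1: np.array([[0.70, 0.2], [0.2, 0.30]], dtype=complex),
}
# A symbol-by-symbol (product) measurement: projective POVM in the computational basis.
povm = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# Induced classical DMC: rows indexed by input x, columns by measurement outcome y.
p_y_given_x = np.array(
    [[np.real(np.trace(sigma[x] @ Pi)) for Pi in povm] for x in (0, 1)]
)
print(p_y_given_x)   # each row sums to 1
```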
A practically important quantum channel is the lossy
bosonic channel subject to additive thermal noise. It is a
quantum-mechanical model of optical communications.
This channel, when paired with an ideal laser light
(coherent state) transmitter and a heterodyne detection
receiver, induces a p(y|x) of a classical AWGN channel,
where x, y ∈ C and C denotes the set of complex
numbers. The same lossy thermal-noise bosonic channel
when paired with an ideal laser light transmitter and an
ideal photon counting receiver induces a Poisson channel
p(y|x), with x ∈ C and y ∈ N0 , where N0 denotes
the set of non-negative integers. The Holevo capacity
of the lossy thermal-noise bosonic channel without any
restrictive assumptions on the transmitted signals and
the receiver measurement is greater than the Shannon
capacities of both of the above channels, and those of all
simple classical channels induced by pairing the quantum channel with specific conventional transmitters and
receivers [14]. It is known that entangled inputs do not
help attain any capacity advantage for Gaussian bosonic
channels. In fact, transmission of a product state achieves
Holevo capacity: it is sufficient to send individually modulated laser-light pulses of complex amplitude α on
each channel use with α drawn i.i.d. from a complex
Gaussian distribution p(α) [15]. On the other hand, using
joint measurements at the receiver increases the capacity
of the lossy thermal-noise bosonic channel over what is
achievable using any standard optical receiver, each of which acts on the received codeword by detecting a single
channel use at a time [16, Chapter 7].
It was recently shown that the SRL governs the
fundamental limits of covert communications over the
lossy thermal-noise bosonic channel [8], which motivates generalization to an arbitrary memoryless broadcast
quantum channel NA→BW from Alice to Bob and Willie.
Here we focus on the scenario depicted in Figure 2a,
where a TPCP map $\mathcal{N}_{A\to BW}$ and Alice's product-state modulation $x \to \phi_x^A$, $x \in \mathcal{X}$, induce a cq channel $x \to \tau_x^{BW}$. However, if Bob and Willie use product measurements described by POVMs $\{\Pi_y\}^{\otimes n}$ and $\{\Gamma_z\}^{\otimes n}$, as depicted in Fig. 2c, then covert communication over
the induced classical DMC p(y, z|x) is governed by the
SRL [5], [6]. Therefore, our main goal is to characterize
the fundamental limits of covert communication on the
underlying cq channel x → τxBW in the following
scenarios:
1) no restrictions are assumed on Bob’s and Willie’s
measurement choices (depicted in Figure 2a), and,
2) a more practically important scenario when Bob is
given a specific product measurement {Πy } but no
assumptions are made on Willie’s receiver measurement (as depicted in Figure 2b).
B. System Model
As depicted in Figure 2, a transmitter Alice maps a classical input $x \in \mathcal{X}$, $\mathcal{X} = \{0, 1, \ldots, N\}$, to a quantum state $\phi_x^A$ and sends it over a quantum channel $\mathcal{N}_{A\to BW}$. The induced cq channel is the map $x \to \tau_x^{BW} \in \mathcal{D}(\mathcal{H}_{BW})$, where $\tau_x^{BW} = \mathcal{N}_{A\to BW}(\phi_x^A)$. The cq channel from Alice to Bob is the map $x \to \sigma_x^B \in \mathcal{D}(\mathcal{H}_B)$, where $\sigma_x^B = \mathrm{Tr}_W\{\tau_x^{BW}\}$ is the state that Bob receives; the cq channel from Alice to Willie is the map $x \to \rho_x^W \in \mathcal{D}(\mathcal{H}_W)$, where $\rho_x^W = \mathrm{Tr}_B\{\tau_x^{BW}\}$ is the state that Willie receives; and $\mathrm{Tr}_C\{\cdot\}$ is the partial trace over system $C$. The symbol $0$ is taken to be the innocent symbol, which is the notional channel input corresponding to when no communication occurs, and symbols $1, \ldots, N$, the non-innocent symbols, comprise Alice's modulation alphabet. For simplicity of notation, we drop the system-label superscripts, i.e., we denote $\tau^{BW}$ by $\tau$, $\sigma^B$ by $\sigma$, $\rho^W$ by $\rho$, $\sigma^{B^n}$ by $\sigma^n$, and $\rho^{W^n}$ by $\rho^n$. We consider communication over a memoryless cq channel. Hence, the output state corresponding to the input sequence $\mathbf{x} = (x_1, \ldots, x_n) \in \mathcal{X}^n$, $x_i \in \{0, 1, \ldots, N\}$, at Bob is given by:
$$\sigma^n(\mathbf{x}) = \sigma_{x_1} \otimes \cdots \otimes \sigma_{x_n} \in \mathcal{D}(\mathcal{H}_B^{\otimes n}),$$
and at Willie is given by:
$$\rho^n(\mathbf{x}) = \rho_{x_1} \otimes \cdots \otimes \rho_{x_n} \in \mathcal{D}(\mathcal{H}_W^{\otimes n}).$$
The innocent input sequence is $\mathbf{0} = (0, \ldots, 0)$, with the corresponding outputs $\sigma_0^{\otimes n} = \sigma_0 \otimes \cdots \otimes \sigma_0$ and $\rho_0^{\otimes n} = \rho_0 \otimes \cdots \otimes \rho_0$ at Bob and Willie, respectively.
Alice intends to transmit to Bob reliably, while keeping Willie oblivious of the transmission attempt. We thus
consider encoding of transmissions next.
Fig. 2. Classical-quantum channel scenarios. Alice encodes message $W$ into an $n$-symbol codeword $\mathbf{x} = (x_1, \ldots, x_n)$, $x_i \in \mathcal{X}$. As each symbol $x \in \mathcal{X}$ is mapped to the input quantum state $\phi_x^A$, codeword $\mathbf{x}$ is mapped to the product-state input $\phi^A(\mathbf{x}) = \phi_{x_1}^A \otimes \cdots \otimes \phi_{x_n}^A$. The $i$th use of a quantum memoryless broadcast channel $\mathcal{N}_{A\to BW}$ from Alice to Bob and Willie produces the joint state $\tau_{x_i}^{BW}$ at the output (with the product joint state over $n$ uses of the channel being $\tau^{BW}(\mathbf{x}) = \tau_{x_1}^{BW} \otimes \cdots \otimes \tau_{x_n}^{BW}$). The corresponding marginal product states at Bob's and Willie's receivers are $\sigma^B(\mathbf{x}) = \sigma_{x_1}^B \otimes \cdots \otimes \sigma_{x_n}^B$ and $\rho^W(\mathbf{x}) = \rho_{x_1}^W \otimes \cdots \otimes \rho_{x_n}^W$, respectively. In (a) Bob and Willie use joint quantum measurements over $n$ channel uses, while (b) depicts a more practically important scenario where Bob is restricted to using a specific product measurement (that induces a DMC $p(y|x)$ on his channel from Alice) while Willie is unrestricted. In (c) both Bob and Willie are restricted to using a specific product measurement, which reduces the cq channel $x \to \tau_x^{BW}$ to a classical discrete memoryless broadcast channel $p(y,z|x)$.
C. Codebook Construction
Denote the message set by M = {1, . . . , M }. Covertness and reliability are fundamentally conflicting requirements: on one hand, the codewords must be “close” to
the innocent sequence to be undetected by Willie, while,
on the other hand, they must be “far enough apart” from
each other (and the innocent sequence) to be reliably
discriminated by Bob. Willie’s objective is fundamentally
easier than Bob’s, as he has to distinguish between
two simple hypotheses in estimating Alice’s transmission
state, while Bob must distinguish between at least M
codewords. Therefore, as is the case for other channels,
to ensure covert and reliable communications, Bob has
to have an advantage over Willie in the form of a preshared secret with Alice (though the pre-shared secret is
unnecessary if Bob’s channel from Alice is better than
Willie’s channel from Alice).
Alice and Bob employ a codebook in which messages in $\mathcal{M}$ are randomly mapped to the set of $n$-symbol codewords $\{\mathbf{x}(m,k)\}_{m=1,\ldots,M;\,k\in\mathcal{K}}$ based on the pre-shared secret $k \in \mathcal{K} = \{1, \ldots, K\}$. Formally, $\mathcal{M} \xrightarrow{\;k\;} \mathcal{X}^n$. We assume that each message is equiprobable. Given a pre-shared secret $k \in \mathcal{K}$, Bob performs decoding via the POVM $\Lambda = \{\Lambda_{m,k}\}_{m=1}^{M}$ on the codeword received over $n$ channel uses, such that $\sum_m \Lambda_{m,k} \le I$, where $I - \sum_m \Lambda_{m,k}$ corresponds to decoding failure.
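As a rough illustration of this keyed random-codebook construction (a sketch under assumed parameters, not the paper's actual construction), the snippet below draws each codeword symbol i.i.d. with a small non-innocent probability $\alpha_n$, using the pair $(m,k)$ to seed the draw so that Alice and Bob, who share the secret $k$, generate identical codebooks.

```python
import numpy as np

def build_codebook(M, K, n, alpha_n, nonzero_symbols=(1,)):
    """Return codebook[(m, k)] = length-n codeword over {0} plus nonzero_symbols.

    Each symbol is drawn i.i.d.: innocent symbol 0 with probability 1 - alpha_n,
    otherwise a uniformly chosen non-innocent symbol. Seeding the generator with
    (m, k) lets Alice and Bob, who share the secret k, regenerate identical
    codewords without exchanging them."""
    codebook = {}
    for m in range(M):
        for k in range(K):
            rng = np.random.default_rng(seed=m * 100003 + k)
            mask = rng.random(n) < alpha_n
            symbols = rng.choice(nonzero_symbols, size=n)
            codebook[(m, k)] = np.where(mask, symbols, 0)
    return codebook

# Example: n = 10^4 channel uses with alpha_n on the order of gamma_n / sqrt(n).
cb = build_codebook(M=4, K=2, n=10_000, alpha_n=0.01)
print(int((cb[(0, 0)] != 0).sum()), "non-innocent symbols in codeword (0, 0)")
```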
D. Reliability
The average probability of decoding error at Bob is:
$$P_e^B = \frac{1}{M}\sum_{m=1}^{M}\left(1 - \mathrm{Tr}\{\Lambda_{m,k}\,\sigma^n(m,k)\}\right), \qquad (1)$$
where $\sigma^n(m,k)$ is shorthand for $\sigma^n(\mathbf{x}(m,k))$.
Definition 1. A coding scheme is called reliable if it guarantees that, for any $\delta > 0$ and sufficiently large $n$, $P_e^B \le \delta$.
E. Covertness
We denote the state received by Willie over n channel
uses when message m is sent and the value of the shared
secret is k by ρn (m, k). Willie must distinguish between
the state that he receives when no communication occurs
(null hypothesis H0 ):
$$\rho_0^{\otimes n} = \rho_0 \otimes \cdots \otimes \rho_0, \qquad (2)$$
and the average state that he receives when Alice transmits (alternate hypothesis $H_1$):
$$\bar{\rho}^n = \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\rho^n(m,k). \qquad (3)$$
Willie fails by either accusing Alice of transmitting
when she is not (false alarm), or missing Alice’s transmission (missed detection). Denoting the probabilities
of these errors by PFA = P(choose H1 |H0 is true)
and PMD = P(choose H0 |H1 is true), respectively,
and assuming that Willie has no prior knowledge of
Alice’s transmission state (i.e., uninformative priors
$\mathbb{P}(H_0\ \text{is true}) = \mathbb{P}(H_1\ \text{is true}) = \tfrac{1}{2}$), Willie's probability of error is:
$$P_e^W = \frac{P_{\mathrm{FA}} + P_{\mathrm{MD}}}{2}. \qquad (4)$$
Randomly choosing whether or not to accuse Alice yields an ineffective detector with $P_e^W = \tfrac{1}{2}$. Therefore, a transmission is covert when Willie's detector is forced to be arbitrarily close to ineffective:
$$P_e^W \ge \frac{1}{2} - \xi, \qquad (5)$$
for any $\xi > 0$ and sufficiently large $n$.

Definition 2. A coding scheme is called covert if it ensures that $P_e^W \ge \tfrac{1}{2} - \xi$, for any $\xi > 0$ and for $n$ large enough.

The trace distance between quantum states $\rho$ and $\sigma$ is $\|\rho-\sigma\|_1 \equiv \mathrm{Tr}\sqrt{(\rho-\sigma)(\rho-\sigma)^\dagger}$. The minimum $P_e^W$ is related to the trace distance between the states $\bar{\rho}^n$ and $\rho_0^{\otimes n}$ as follows [17], [18]:
$$\min P_e^W = \frac{1}{2}\left(1 - \frac{1}{2}\left\|\bar{\rho}^n - \rho_0^{\otimes n}\right\|_1\right). \qquad (6)$$
By the quantum Pinsker's inequality [9, Theorem 11.9.2],
$$\frac{1}{2\ln 2}\left\|\bar{\rho}^n - \rho_0^{\otimes n}\right\|_1^2 \le D\!\left(\bar{\rho}^n\middle\|\rho_0^{\otimes n}\right), \qquad (7)$$
where $D(\rho\|\sigma) \equiv \mathrm{Tr}\{\rho(\log\rho - \log\sigma)\}$ is the quantum relative entropy.

Combining (6) and (7), we have that $D(\bar{\rho}^n\|\rho_0^{\otimes n}) < \epsilon$ implies the covertness criterion in (5) with $\xi \triangleq \sqrt{2\epsilon\ln 2}/4$. We employ quantum relative entropy in the analysis that follows because of its convenient mathematical properties, such as additivity for product states. Combining reliability and covertness metrics, we define $(\delta,\epsilon)$-covertness as follows.

Definition 3. We call a scheme $(\delta,\epsilon)$-covert if, for sufficiently large $n$, $P_e^B \le \delta$ for any $\delta > 0$, and $D(\bar{\rho}^n\|\rho_0^{\otimes n}) < \epsilon$ for any $\epsilon > 0$.

In some situations Alice can tolerate a (small) chance of detection. That is, instead of ensuring covertness for every $\epsilon > 0$, she must only ensure a relaxed covertness condition $\epsilon \ge \epsilon_0$, where $\epsilon_0 > 0$ is a constant. This is a weaker covertness criterion, and we define it as the weak covertness condition. A coding scheme is called weak covert if it ensures that $D(\bar{\rho}^n\|\rho_0^{\otimes n}) < \epsilon$ for any $\epsilon \ge \epsilon_0$, where $\epsilon_0 > 0$ is a small constant, and for sufficiently large $n$.

III. MAIN RESULTS

Here we present our main results, deferring the formal proofs to later sections. The properties of the quantum channels from Alice to Bob and Willie dictate the fundamental limits of covert communications, as discussed in the scenarios below (we follow the labeling of the scenarios from the introduction). Table I summarizes our results using the asymptotic notation [19, Ch. 3.1] that we employ throughout this paper, where:
• $f(n) = O(g(n))$ denotes an asymptotic upper bound on $f(n)$ (i.e., there exist constants $m, n_0 > 0$ such that $0 \le f(n) \le m\,g(n)$ for all $n \ge n_0$),
• $f(n) = o(g(n))$ denotes an upper bound on $f(n)$ that is not asymptotically tight (i.e., for any constant $m > 0$, there exists a constant $n_0 > 0$ such that $0 \le f(n) < m\,g(n)$ for all $n \ge n_0$),
• $f(n) = \Omega(g(n))$ denotes an asymptotic lower bound on $f(n)$ (i.e., there exist constants $m, n_0 > 0$ such that $0 \le m\,g(n) \le f(n)$ for all $n \ge n_0$),
• $f(n) = \omega(g(n))$ denotes a lower bound on $f(n)$ that is not asymptotically tight (i.e., for any constant $m > 0$, there exists a constant $n_0 > 0$ such that $0 \le m\,g(n) < f(n)$ for all $n \ge n_0$), and
• $f(n) = \Theta(g(n))$ denotes an asymptotically tight bound on $f(n)$ (i.e., there exist constants $m_1, m_2, n_0 > 0$ such that $0 \le m_1 g(n) \le f(n) \le m_2 g(n)$ for all $n \ge n_0$); $f(n) = \Theta(g(n))$ implies that $f(n) = \Omega(g(n))$ and $f(n) = O(g(n))$.
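Before turning to the results themselves, the following toy numpy check (with made-up qubit states, not data from the paper) revisits the covertness metrics of Section II-E: it computes the trace distance and the quantum relative entropy for an example pair of states and numerically verifies the quantum Pinsker inequality (7) in its natural-log form.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def trace_distance(rho, sigma):
    # ||rho - sigma||_1 = Tr sqrt((rho - sigma)(rho - sigma)^dagger)
    delta = rho - sigma
    return float(np.real(np.trace(sqrtm(delta @ delta.conj().T))))

def rel_entropy(rho, sigma):
    # D(rho||sigma) = Tr{rho (log rho - log sigma)}, natural logarithm here
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

rho0 = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)        # "innocent" output
rho_bar = np.array([[0.88, 0.02], [0.02, 0.12]], dtype=complex)  # slightly perturbed state

td = trace_distance(rho_bar, rho0)
D = rel_entropy(rho_bar, rho0)
# Quantum Pinsker inequality (natural-log form): ||rho - sigma||_1^2 / 2 <= D(rho||sigma)
print(td, D, td**2 / 2 <= D + 1e-12)
```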
TABLE I
SCALING LAWS FOR RELIABLE COVERT TRANSMISSION OF A log M-BIT MESSAGE OVER n CQ CHANNEL USES

Columns (conditions on Willie's states):
(i) ∀x ≠ 0, supp(ρ_x) ⊄ supp(ρ_0), with δ > 0;
(ii) ∀x ≠ 0, supp(ρ_x) ⊄ supp(ρ_0), with δ ≥ δ_0 (a);
(iii) ∃x ≠ 0 s.t. supp(ρ_x) ⊆ supp(ρ_0), with ρ_0 ≠ Σ_{x≠0} p_m(x)ρ_x (b);
(iv) ∃x ≠ 0 s.t. supp(ρ_x) ⊆ supp(ρ_0), with ρ_0 = Σ_{x≠0} p_m(x)ρ_x (b).

Rows (conditions on Bob's states) and the resulting scaling of log M:
∀x ≠ 0, supp(σ_x) ⊆ supp(σ_0):           (i) 0   (ii) 0          (iii) Θ(√n)        (iv) Θ(n)
∃x ≠ 0 s.t. supp(σ_x) ⊄ supp(σ_0):        (i) 0   (ii) O(1) (c)   (iii) Θ(√n log n)  (iv) Θ(n)
∃x ≠ 0 s.t. supp(σ_x) ∩ supp(σ_0) = ∅:    (i) 0   (ii) O(log n)   (iii) Θ(√n log n)  (iv) Θ(n)

(a) Where δ_0 > 0 is a constant. (b) Where Σ_{x≠0} p_m(x) = 1. (c) If ∃x ≠ 0, x′ ≠ 0 s.t. supp(σ_x) ∩ supp(σ_{x′}) = ∅.
A. Square-root law covert communications
Consider the case when the supports of all noninnocent symbols are contained in the support of the
innocent symbol, i.e., ∀x ∈ X , supp(ρx ) ⊆ supp(ρ0 ).
The central theorem of this paper establishes the optimum number of transmissible $(\delta,\epsilon)$-covert information
bits and the optimum number of required key bits over
n uses of a classical-quantum channel that satisfies the
conditions described above. We re-state it here from the
introduction:
Theorem 1. Consider a stationary memoryless
classical-quantum channel that takes input x ∈ X
at Alice and outputs the quantum states σx and
ρx at Bob and Willie, respectively, with x = 0
designating the innocent state. If, ∀x ∈ X , the supports
supp(σx ) ⊆ supp(σ0 ) and supp (ρx ) ⊆ supp (ρ0 ) such
that ρ0 is not a mixture of {ρx }x∈X \{0} , then there
exists a coding scheme that meets the covertness and
reliability criteria
$$\lim_{n\to\infty} D\!\left(\bar{\rho}^n \,\middle\|\, \rho_0^{\otimes n}\right) = 0 \quad\text{and}\quad \lim_{n\to\infty} P_e^B = 0,$$
with optimal scaling coefficients of message length and key length,
$$\lim_{n\to\infty} \frac{\log M}{\sqrt{n\,D\!\left(\bar{\rho}^n \middle\| \rho_0^{\otimes n}\right)}} = \frac{\sum_{x\in\mathcal{X}\setminus\{0\}} \tilde{p}(x)\, D(\sigma_x\|\sigma_0)}{\sqrt{\tfrac{1}{2}\chi^2(\tilde{\rho}\|\rho_0)}},$$
and
$$\lim_{n\to\infty} \frac{\log K}{\sqrt{n\,D\!\left(\bar{\rho}^n \middle\| \rho_0^{\otimes n}\right)}} = \frac{\left[\sum_{x\in\mathcal{X}\setminus\{0\}} \tilde{p}(x)\left(D(\rho_x\|\rho_0) - D(\sigma_x\|\sigma_0)\right)\right]^{+}}{\sqrt{\tfrac{1}{2}\chi^2(\tilde{\rho}\|\rho_0)}},$$
where $\bar{\rho}^n$ is the average state at Willie when a transmission occurs, $P_e^B$ is Bob's decoding error probability, $\tilde{p}(x)$ is a distribution on the non-innocent input symbols ($\sum_{x\in\mathcal{X}\setminus\{0\}}\tilde{p}(x) = 1$), $\tilde{\rho}$ is the average non-innocent state at Willie induced by $\tilde{p}(x)$, $[c]^{+} = \max\{c,0\}$, $D(\rho\|\sigma) \equiv \mathrm{Tr}\{\rho(\log\rho - \log\sigma)\}$ is the quantum relative entropy, and $\chi^2(\rho\|\sigma) \equiv \mathrm{Tr}\{(\rho-\sigma)^2\sigma^{-1}\}$ is the quantum $\chi^2$ divergence.
We can pick an input distribution on non-innocent
symbols p̃(x) to maximize the scaling coefficient of the
message length, or to minimize the scaling coefficient
of the key length, and, of course, those distributions
would not necessarily be the same. In fact, p̃(x) can
be optimized over some function of the two scaling
coefficients. Our main result generalizes [6]: indeed, if Bob and Willie both employ symbol-by-symbol measurements as in Figure 2c, then the cq channels from Alice reduce to DMCs, and replacing the quantum relative entropy and $\chi^2$-divergence between states by their classical counterparts between the induced probability distributions recovers the results of [6]. In a practical setting, Bob is likely to be limited to a product measurement; one cannot, however, make such an assumption about Willie. Nevertheless, we show that
the expressions in Theorem 1 still hold in this setting
as long as D(σx kσ0 ) (characterizing Bob’s cq channel
from Alice) is replaced by the classical relative entropy
characterizing the classical channel induced by Bob’s
choice of measurement.
In Section IV, we present the proof of achievability of
Theorem 1. For simplicity of exposition, first we present
the proof of achievability for two symbols (one innocent
and one non-innocent symbol) and then we discuss the
required adjustments to extend it to the case of multiple
non-innocent symbols. In Section V, we present the
proof of converse of Theorem 1 for the case of multiple
non-innocent symbols.
B. Corner Cases
1) No covert communications: When the support for
all states received by Willie corresponding to Alice’s
codewords is not contained in the support of the innocent
sequence, then reliable covert communication is impossible. Denoting the support of state ρ by supp(ρ), this
is formally stated as follows:
Theorem 2. If, $\forall m \in \mathcal{M}$ and $k \in \mathcal{K}$, $\mathrm{supp}(\rho^n(m,k)) \not\subseteq \mathrm{supp}(\rho_0^{\otimes n})$, then $(\delta,\epsilon)$-covert communication is impossible.
We prove this theorem in Section VIII by showing that
for any n there exists a region where (δ, )-covertness is
not achievable, i.e., ensuring any level of covertness implies that transmission cannot be made reliable. Theorem
2 generalizes [8, Theorem 1] for lossy bosonic channels
to arbitrary quantum channels. Unlike other theorems in
this paper, this result is fully general, as it places no
restrictions on Alice’s transmitted states (they could be
entangled across n channel uses) and the channel (which
does not have to be memoryless).
2) Transmission of O(1) covert bits in n channel uses:
Now let’s return to the cq channel setting, and consider
the case when the support of every non-innocent state
at Willie is not contained in the support of the innocent
state, i.e., ∀x ∈ X \{0}, supp(ρx ) * supp(ρ0 ). Let’s
also assume that the support of every non-innocent state
at Willie is not orthogonal to the support of the innocent
state (i.e., ∀x ∈ X \{0}, supp(ρx ) ∩ supp(ρ0 ) 6= ∅),
precluding trivial errorless detection by Willie. Even so, by Theorem 2, a $(\delta,\epsilon)$-covert communication scheme does not exist in such a setting. However, we can have a
weak-covert scheme as described in Section II-E. The
trace distance between the average received state at Willie given in (3) and the innocent state $\rho_0^{\otimes n}$ over $n$ channel uses can be written as
$$\left\|\bar{\rho}^n - \rho_0^{\otimes n}\right\|_1 = \left\|\frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\rho^n(m,k) - \rho_0^{\otimes n}\right\|_1 \overset{(a)}{\le} \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\left\|\rho^n(m,k) - \rho_0^{\otimes n}\right\|_1 \overset{(b)}{\le} \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\sum_{i=1}^{n}\left\|\rho(x_i(m,k)) - \rho_0\right\|_1 \overset{(c)}{\le} \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K} L_{m,k}\,\|\rho_{x^*} - \rho_0\|_1 = \bar{L}\,\|\rho_{x^*} - \rho_0\|_1, \qquad (8)$$
where (a) follows from the convexity of the trace distance [9, Eq. (9.9)], and (b) follows from the fact that $\rho^n$ is a tensor-product state and the telescoping property of the trace distance [9, Eq. (9.15)]. In (c), $L_{m,k}$ is the number of non-innocent symbols in the $(m,k)$th codeword, and $x^*$ is the symbol such that $\forall x \in \mathcal{X}$, $\|\rho_x - \rho_0\|_1 \le \|\rho_{x^*} - \rho_0\|_1$; in (8) we denote the average number of non-innocent symbols by $\bar{L} = \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K} L_{m,k}$. Hence, employing the relaxed covertness condition, rearranging (5), substituting (8), employing the quantum Pinsker's inequality, and solving for $\bar{L}$ yields:
$$\bar{L} \le \frac{4\epsilon_0}{\|\rho_{x^*} - \rho_0\|_1}, \qquad (9)$$
which implies that Alice may be able to transmit $\bar{L}$ non-innocent symbols on average and still meet the relaxed covertness criterion. This allows us to consider two corner cases: transmission of $O(1)$ covert bits in $n$ channel uses, and logarithmic-law covert communication, which is discussed next.

Under the weak covertness condition above, suppose that $\forall x \in \mathcal{X}\setminus\{0\}$, $\mathrm{supp}(\sigma_x) \cap \mathrm{supp}(\sigma_0) \neq \emptyset$, but there exist at least two non-innocent symbols $x, x' \in \mathcal{X}\setminus\{0\}$ with non-overlapping supports, i.e., $\mathrm{supp}(\sigma_x) \cap \mathrm{supp}(\sigma_{x'}) = \emptyset$. Alice can meet the relaxed covertness condition by transmitting $\bar{L}$ non-innocent symbols on average. Choosing $x$ or $x'$ equiprobably conveys a single bit of information to Bob, as $\sigma_x$ and $\sigma_{x'}$ are perfectly distinguishable. Since $\bar{L}$ is a constant, on average $O(1)$ covert bits of information can thus be conveyed from Alice to Bob in $n$ channel uses in this scenario.

3) Logarithmic law covert communication: Under the relaxed covertness condition above, suppose there exists at least one $x \in \mathcal{X}\setminus\{0\}$ such that the support of the corresponding symbol at Bob does not overlap with that of the innocent state, i.e., $\mathrm{supp}(\sigma_x) \cap \mathrm{supp}(\sigma_0) = \emptyset$. Then, Alice can use $\bar{L}$ non-innocent symbols $x$ to indicate positions within a block of $n$ symbols to Bob while meeting the relaxed covertness condition. Since Bob can perfectly distinguish between the innocent state $\sigma_0$ and the non-innocent state $\sigma_x$, this conveys $O(\log n)$ bits of information on average in $n$ channel uses.

4) Constant rate covert communication: Consider the case when Willie's state is such that $\rho_0$ is a mixture of $\{\rho_x\}_{x\in\mathcal{X}\setminus\{0\}}$, i.e., there exists a distribution $\pi(\cdot)$ with $\sum_{x\in\mathcal{X}\setminus\{0\}}\pi(x) = 1$ such that $\rho_0 = \sum_{x\in\mathcal{X}\setminus\{0\}}\pi(x)\rho_x$, but $\pi(\cdot)$ on the non-innocent symbols does not induce $\sigma_0$ at Bob, i.e., $\sigma_0 \neq \sum_{x\in\mathcal{X}\setminus\{0\}}\pi(x)\sigma_x$. Define the distribution
$$p(x) = \begin{cases} \alpha\pi(x) & \text{if } x \neq 0, \\ 1-\alpha & \text{if } x = 0, \end{cases}$$
where $0 < \alpha \le 1$ is the probability of using a non-innocent symbol. Using $\{p(x)\}$ on the input symbols results in an ensemble $\{p(x), \sigma_x\}$ at Bob that has positive Holevo information by the Holevo-Schumacher-Westmoreland (HSW) theorem [9, Chapter 19]. Thus, Alice can simply draw her codewords from the set of
states using the probability distribution {p(x)} and transmit at the positive rate undetected by Willie. Therefore,
in the results that follow, we assume that ρ0 is not a
mixture of the non-innocent symbols.
5) $O(\sqrt{n}\log n)$ covert communication: Now suppose
there exists xs ∈ X \{0} such that supp(σxs ) *
supp(σ0 ) and supp(ρxs ) ⊆ supp(ρ0 ). That is, part of
the support of the output state corresponding to xs lies
outside of the innocent state support at Bob while lying
inside the innocent state support at Willie. Also suppose
Alice only uses {0, xs } for transmission. Let Bob use
a POVM {(I − P0 ), P0 } on each of his n received
states, where P0 is the projection onto the innocent
state support. This measurement results in a perfect
identification of the innocent symbol, and an error in
identification of $x_s$ with probability $\mathrm{Tr}\{P_0\sigma_{x_s}\}$. Since specifying a POVM induces a classical DMC, we can use the achievability part of the proof of [6, Theorem 7] (with the probability that $x_s$ is identified by Bob being $\kappa = 1 - \mathrm{Tr}\{P_0\sigma_{x_s}\}$) to show that $O(\sqrt{n}\log n)$ $(\delta,\epsilon)$-covert bits are achievable in $n$ channel uses. However, for a converse in the cq channel setting we must show that exceeding this limit is impossible in $n$ uses of such a channel even when Bob uses an arbitrary decoding POVM. We provide this proof in Section VII.
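To see where the $O(\log n)$ and $O(\sqrt{n}\log n)$ scalings above come from, the toy calculation below counts the bits conveyed by placing a small number of perfectly distinguishable non-innocent symbols among $n$ channel uses, i.e., $\log_2\binom{n}{L}$; the specific numbers are illustrative only and not taken from the paper.

```python
from math import comb, log2

def position_bits(n, L):
    """Bits conveyed by choosing the positions of L perfectly distinguishable
    non-innocent symbols within a block of n channel uses."""
    return log2(comb(n, L))

n = 1_000_000
print(position_bits(n, 1))                    # ~log2(n): logarithmic-law regime (constant L)
L = int(n ** 0.5)                             # ~sqrt(n) non-innocent symbols
print(position_bits(n, L), L * log2(n) / 2)   # ~ (1/2) sqrt(n) log2(n): sqrt(n) log n regime
```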
IV. ACHIEVABILITY OF THE SRL
In this section we prove the achievability of the
square root scaling stated in Theorem 1. As mentioned
earlier, for simplicity first we provide a proof for the
case of two symbols, i.e., X = {0, 1}, where 0 is the
innocent symbol and 1 is the non-innocent symbol. The
achievability is formally stated as follows:
Theorem 3. For any stationary memoryless classical-quantum channel with $\mathrm{supp}(\sigma_1) \subseteq \mathrm{supp}(\sigma_0)$ and $\mathrm{supp}(\rho_1) \subseteq \mathrm{supp}(\rho_0)$, there exists a coding scheme such that, for $n$ sufficiently large and $\gamma_n = o(1) \cap \omega\!\left(\tfrac{1}{\sqrt{n}}\right)$,
$$\log M = (1-\varsigma)\gamma_n\sqrt{n}\,D(\sigma_1\|\sigma_0),$$
$$\log K = \gamma_n\sqrt{n}\left[(1+\varsigma)D(\rho_1\|\rho_0) - (1-\varsigma)D(\sigma_1\|\sigma_0)\right]^{+},$$
$$P_e^B \le e^{-\varsigma_1\gamma_n\sqrt{n}},$$
$$D(\bar{\rho}^n\|\rho_0^{\otimes n}) - D(\rho_{\alpha_n}^{\otimes n}\|\rho_0^{\otimes n}) \le e^{-\varsigma_2\gamma_n\sqrt{n}},$$
$$D(\rho_{\alpha_n}^{\otimes n}\|\rho_0^{\otimes n}) \le \varsigma_3\gamma_n^2,$$
where $\varsigma \in (0,1)$, $\varsigma_1 > 0$, $\varsigma_2 > 0$, and $\varsigma_3 > 0$ are constants, and $[c]^{+} = \max\{c,0\}$.

Before we proceed to the proof, we state important definitions and lemmas.

A. Prerequisites

1) Prior Probability Distribution: We consider the following distribution on $\mathcal{X} = \{0, 1\}$:
$$p(x) = \begin{cases} \alpha_n & \text{if } x = 1, \\ 1-\alpha_n & \text{if } x = 0, \end{cases} \qquad (10)$$
where $1$ is the non-innocent symbol, $0$ is the innocent symbol, and $\alpha_n$ is the probability of transmitting $1$. The output of the classical-quantum channel corresponding to this input distribution in a single channel use is denoted by
$$\tau_{\alpha_n} = \sum_{x\in\mathcal{X}} p(x)\tau_x = (1-\alpha_n)\tau_0 + \alpha_n\tau_1. \qquad (11)$$
Hence, the state corresponding to this input distribution that Bob receives is $\sigma_{\alpha_n} = \mathrm{Tr}_W\{\tau_{\alpha_n}\}$, and that Willie receives is $\rho_{\alpha_n} = \mathrm{Tr}_B\{\tau_{\alpha_n}\}$, respectively. From the linearity of the trace,
$$\sigma_{\alpha_n} = \sum_{x\in\mathcal{X}} p(x)\sigma_x = (1-\alpha_n)\sigma_0 + \alpha_n\sigma_1, \qquad (12)$$
and
$$\rho_{\alpha_n} = \sum_{x\in\mathcal{X}} p(x)\rho_x = (1-\alpha_n)\rho_0 + \alpha_n\rho_1. \qquad (13)$$

2) Characterization of $\alpha_n$: In this section we show that, for a specific choice of $\alpha_n$, the quantum relative entropy between Willie's state induced by $p(x)$ over $n$ channel uses, $\rho_{\alpha_n}^{\otimes n}$, and the state induced by the innocent symbol over $n$ channel uses, $\rho_0^{\otimes n}$, vanishes as $n$ tends to infinity. This is the generalization of a similar concept introduced in [6] to classical-quantum systems.

First consider the following lemmas:

Lemma 1 ([20]). For any positive semi-definite operators $A$ and $B$ and any number $c \ge 0$,
$$D(A\|B) \ge \frac{1}{c}\,\mathrm{Tr}\{A - A^{1-c}B^{c}\} \qquad (14)$$
and
$$D(A\|B) \le \frac{1}{c}\,\mathrm{Tr}\{A^{1+c}B^{-c} - A\}. \qquad (15)$$

Lemma 2. For $\alpha_n = \frac{\gamma_n}{\sqrt{n}}$ and $\gamma_n = o(1) \cap \omega\!\left(\frac{\log n}{\sqrt{n}}\right)$,
$$D(\rho_{\alpha_n}^{\otimes n}\|\rho_0^{\otimes n}) \le \varsigma_3\gamma_n^2, \qquad (16)$$
where $\varsigma_3 > 0$ is a constant.

Proof. From the memoryless property of the channel and the additivity of relative entropy,
$$D(\rho_{\alpha_n}^{\otimes n}\|\rho_0^{\otimes n}) = n\,D(\rho_{\alpha_n}\|\rho_0). \qquad (17)$$
Using (15) in Lemma 1 with $c = 1$ and some algebraic manipulations, we obtain:
$$D(A\|B) \le \mathrm{Tr}\{(A-B)^2 B^{-1}\}. \qquad (18)$$
Substituting $A = \rho_0 + \alpha_n(\rho_1 - \rho_0)$ and $B = \rho_0$ in (18) we obtain:
$$D(\rho_{\alpha_n}\|\rho_0) \le \mathrm{Tr}\{(\rho_0 + \alpha_n(\rho_1-\rho_0) - \rho_0)^2\rho_0^{-1}\} = \alpha_n^2\,\mathrm{Tr}\{(\rho_1-\rho_0)^2\rho_0^{-1}\} = \alpha_n^2\,\chi^2(\rho_1\|\rho_0), \qquad (19)$$
where $\chi^2(\rho\|\sigma)$ is the $\chi^2$-divergence between $\rho$ and $\sigma$ [21]. Combining (17) and (19), and choosing $\alpha_n = \frac{\gamma_n}{\sqrt{n}}$,
$$D(\rho_{\alpha_n}^{\otimes n}\|\rho_0^{\otimes n}) \le \varsigma_3\gamma_n^2,$$
where $\varsigma_3 > 0$ is a constant.
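The bound (19) says that for a sparse prior the per-use relative entropy at Willie is governed by the quantum $\chi^2$-divergence. The toy numpy check below (illustrative states only, not from the paper) compares $D(\rho_{\alpha}\|\rho_0)$ with $\alpha^2\chi^2(\rho_1\|\rho_0)$ for a small $\alpha$.

```python
import numpy as np
from scipy.linalg import logm, inv

def rel_entropy(rho, sigma):
    # D(rho||sigma) = Tr{rho (log rho - log sigma)}, natural logarithm
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def chi2(rho, sigma):
    # chi^2(rho||sigma) = Tr{(rho - sigma)^2 sigma^{-1}}
    d = rho - sigma
    return float(np.real(np.trace(d @ d @ inv(sigma))))

rho0 = np.diag([0.9, 0.1]).astype(complex)                  # innocent output at Willie
rho1 = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)     # non-innocent output
alpha = 0.01
rho_a = (1 - alpha) * rho0 + alpha * rho1

print(rel_entropy(rho_a, rho0))        # roughly (alpha^2 / 2) * chi2 for small alpha
print(alpha**2 * chi2(rho1, rho0))     # upper bound of the form used in (19)
```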
We prove Theorem 3 by first establishing the reliability of the coding scheme, and then its covertness.
B. Reliability Analysis
We restate [22, Lemma 2] and use it in the analysis
of the error probability:
Lemma 3. For operators $0 < A < I$ and $B > 0$, we have
$$I - (A+B)^{-1/2}A(A+B)^{-1/2} \le (1+c)(I-A) + (2+c+c^{-1})B,$$
where $c > 0$ is a real number and $I$ is an identity operator.

Next, we provide a lemma that is useful for proving both the reliability and the covertness. First, consider a self-adjoint operator $A$ and its spectral decomposition $A = \sum_i \lambda_i |a_i\rangle\langle a_i|$, where $\{\lambda_i\}$ are eigenvalues, and $|a_i\rangle\langle a_i|$ are the projectors on the associated eigenspaces. Then, the non-negative spectral projection of $A$ is defined as in [22]:
$$\{A \ge 0\} = \sum_{i:\,\lambda_i\ge 0} |a_i\rangle\langle a_i|, \qquad (20)$$
which is the projection onto the eigenspace corresponding to the non-negative eigenvalues of $A$. The projections $\{A > 0\}$, $\{A \le 0\}$, and $\{A < 0\}$ are defined similarly.

Lemma 4. For any Hermitian matrix $A$ and positive-definite matrix $B$,
$$\mathrm{Tr}\{BA\{A<0\}\} \le 0, \qquad (21)$$
and
$$\mathrm{Tr}\{BA\{A>0\}\} \ge 0. \qquad (22)$$
Proof. See Appendix B.

Consider the encoding map $\{1, \ldots, M\} \to \mathbf{x} \in \mathcal{X}^n$ and the square-root measurement decoding POVM for $n$ channel uses,
$$\Lambda_m^n = \left(\sum_{k=1}^{M}\Pi_k\right)^{-1/2}\Pi_m\left(\sum_{k=1}^{M}\Pi_k\right)^{-1/2}, \qquad (23)$$
where we define the projector $\Pi_m$ as
$$\Pi_m = \{\hat{\sigma}^n(m) - e^{a}\sigma_0^{\otimes n} > 0\}. \qquad (24)$$
Here $\hat{\sigma}^n(m) = \mathcal{E}_{\sigma_0^{\otimes n}}(\sigma^n(m))$ is the pinching of $\sigma^n(m)$ as defined in Appendix A, and $a > 0$ is a real number to be determined later. For compactness of notation, we denote summations over $\mathbf{x} \in \mathcal{X}^n$ by $\sum_{\mathbf{x}}$. Bob's average decoding error probability over the random codebook is characterized by the following lemma:

Lemma 5. For any $a > 0$,
$$\mathbb{E}\!\left[P_e^B\right] \le 2\sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\sigma^n(\mathbf{x})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} \le 0\}\} + 4Me^{-a}\exp\!\left(\gamma_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\}\right). \qquad (25)$$

Proof. Bob's average decoding error probability is:
$$P_e^B = \frac{1}{M}\sum_{m=1}^{M}\left(1 - \mathrm{Tr}\{\sigma^n(m)\Lambda_m^n\}\right) \le \frac{1}{M}\sum_{m=1}^{M}\mathrm{Tr}\left\{\sigma^n(m)\left[2(I-\Pi_m) + 4\sum_{j\ne m}\Pi_j\right]\right\},$$
where the inequality follows from Lemma 3 with $c = 1$, $A = \Pi_m$, and $B = \sum_{j\ne m}\Pi_j$. Hence,
$$\mathbb{E}\!\left[P_e^B\right] \le \mathbb{E}\!\left[\frac{2}{M}\sum_{m=1}^{M}\mathrm{Tr}\{\sigma^n(m)\{\hat{\sigma}^n(m) - e^{a}\sigma_0^{\otimes n} \le 0\}\}\right] + \mathbb{E}\!\left[\frac{4}{M}\sum_{m=1}^{M}\sum_{j\ne m}\mathrm{Tr}\{\sigma^n(m)\{\hat{\sigma}^n(j) - e^{a}\sigma_0^{\otimes n} > 0\}\}\right] = 2\sum_{\mathbf{x}\in\mathcal{X}^n} p(\mathbf{x})\,\mathrm{Tr}\{\sigma^n(\mathbf{x})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} \le 0\}\} + 4(M-1)\sum_{\mathbf{x}\in\mathcal{X}^n} p(\mathbf{x})\,\mathrm{Tr}\{\sigma_{\alpha_n}^{\otimes n}\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\}. \qquad (26)$$
We can upper-bound the second sum of (26) as follows:
$$\sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\sigma_{\alpha_n}^{\otimes n}\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \overset{(a)}{=} \sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\hat{\sigma}_{\alpha_n}^{\otimes n}\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \overset{(b)}{=} \sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{(\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}\sigma_0^{\otimes n}\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \overset{(c)}{\le} \sum_{\mathbf{x}} p(\mathbf{x})\,e^{-a}\,\mathrm{Tr}\{(\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}\hat{\sigma}^n(\mathbf{x})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \overset{(d)}{\le} \sum_{\mathbf{x}} p(\mathbf{x})\,e^{-a}\,\mathrm{Tr}\{(\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}\hat{\sigma}^n(\mathbf{x})\} = e^{-a}\,\mathrm{Tr}\{(\sigma_0^{\otimes n})^{-1}(\hat{\sigma}_{\alpha_n}^{\otimes n})^{2}\} = e^{-a}\left(\mathrm{Tr}\{\sigma_0^{-1}\hat{\sigma}_{\alpha_n}^{2}\}\right)^{n} \overset{(e)}{=} e^{-a}\left(\mathrm{Tr}\{\sigma_0^{-1}\sigma_{\alpha_n}^{2}\}\right)^{n}, \qquad (27)$$
where (a) follows from the second property of pinching in Appendix A and the fact that $\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}$ commutes with $\sigma_0^{\otimes n}$; (b) follows from the fact that $\hat{\sigma}_{\alpha_n}^{\otimes n}$ commutes with $\sigma_0^{\otimes n}$; (c) follows by applying Lemma 4 with $A = \hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n}$ and $B = (\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}$ to obtain
$$\mathrm{Tr}\{(\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}(\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \ge 0,$$
and then using the linearity of the trace; (d) follows since $(\sigma_0^{\otimes n})^{-1}$, $\hat{\sigma}_{\alpha_n}^{\otimes n}$, and $\hat{\sigma}^n(\mathbf{x})$ commute, which implies that $(\sigma_0^{\otimes n})^{-1}\hat{\sigma}_{\alpha_n}^{\otimes n}\hat{\sigma}^n(\mathbf{x})$ is positive-definite; and, finally, (e) follows from the second property of pinching in Appendix A.

Now, $\mathrm{Tr}\{\sigma_0^{-1}\sigma_{\alpha_n}^2\}$ can be simplified and upper-bounded as follows:
$$\mathrm{Tr}\{\sigma_0^{-1}\sigma_{\alpha_n}^2\} = \mathrm{Tr}\{\sigma_0^{-1}((1-\alpha_n)\sigma_0 + \alpha_n\sigma_1)^2\} = 1 - \alpha_n^2 + \alpha_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\} \le 1 + \alpha_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\} \le \exp\!\left(\alpha_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\}\right). \qquad (28)$$
Substituting (28) into (27) yields:
$$\sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\sigma_{\alpha_n}^{\otimes n}\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} > 0\}\} \le e^{-a}\exp\!\left(n\alpha_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\}\right) = e^{-a}\exp\!\left(\gamma_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\}\right). \qquad (29)$$

Now we evaluate the first term of the right-hand side of (25). In [23, Section 3] it is shown that for any tensor-product states $A^n$ and $B^n$ and any numbers $t > 0$ and $0 \le r \le 1$,
$$\mathrm{Tr}\{A^n\{\hat{A}^n - tB^n \le 0\}\} \le (n+1)^{rd}\, t^{r}\,\mathrm{Tr}\{A^n (B^n)^{r/2}(A^n)^{-r}(B^n)^{r/2}\}, \qquad (30)$$
where $\hat{A}^n = \mathcal{E}_{B^n}(A^n)$ and $d = \dim\mathcal{H}_B$. Applying this to the states $A^n = \sigma^n(\mathbf{x})$ and $B^n = \sigma_0^{\otimes n}$ and setting $t = e^{a}$ yields
$$\sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\sigma^n(\mathbf{x})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} \le 0\}\} \le \sum_{\mathbf{x}} p(\mathbf{x})\,(n+1)^{rd}\, e^{ar + \log\mathrm{Tr}\{\sigma^n(\mathbf{x})(\sigma_0^{\otimes n})^{r/2}(\sigma^n(\mathbf{x}))^{-r}(\sigma_0^{\otimes n})^{r/2}\}} = (n+1)^{rd}\sum_{\mathbf{x}} p(\mathbf{x})\exp\!\left(ar - \sum_{i=1}^{n}\varphi(\sigma(x_i), r)\right), \qquad (31)$$
where the equality follows from the memoryless property of the channel and we define the function
$$\varphi(\sigma(x_i), r) = -\log\mathrm{Tr}\{\sigma(x_i)\,\sigma_0^{r/2}(\sigma(x_i))^{-r}\sigma_0^{r/2}\}.$$
Since $\varphi(\sigma_0, r) = 0$, only terms with $x_i = 1$ contribute to the sum in (31). Define the random variable $L$ indicating the number of non-innocent symbols in $\mathbf{x}$. We define the set, similar to the one used in [6],
$$\mathcal{C}_\mu^n = \{l \in \mathbb{N} : |l - \gamma_n\sqrt{n}| < \mu\gamma_n\sqrt{n}\}, \qquad (32)$$
describing the values that the random variable $L$ takes, where $0 < \mu < 1$ is a constant. Using a Chernoff bound,
$$\mathbb{P}(L \notin \mathcal{C}_\mu^n) \le 2e^{-\mu^2\gamma_n\sqrt{n}/2}. \qquad (33)$$
Hence,
$$\sum_{\mathbf{x}} p(\mathbf{x})\exp\!\left(ar - \sum_{i=1}^{n}\varphi(\sigma(x_i), r)\right) = \mathbb{E}_L\!\left[\exp\!\left(ar - L\,\varphi(\sigma_1, r)\right)\right] \le \sum_{l\in\mathcal{C}_\mu^n} \mathbb{P}(L=l)\exp\!\left(ar - l\varphi(\sigma_1, r)\right) + \mathbb{P}(L\notin\mathcal{C}_\mu^n) \le \exp\!\left(ar - (1-\mu)\gamma_n\sqrt{n}\,\varphi(\sigma_1, r)\right) + 2e^{-\mu^2\gamma_n\sqrt{n}/2}. \qquad (34)$$
Appendix C shows that $\frac{\partial}{\partial r}\varphi(\sigma_1, r)$ is uniformly continuous and
$$\frac{\partial}{\partial r}\varphi(\sigma_1, 0) = D(\sigma_1\|\sigma_0).$$
Moreover, we have $\varphi(\sigma_1, 0) = 0$. Now let $\varepsilon > 0$ be an arbitrary constant. Because $\frac{\partial}{\partial r}\varphi(\sigma_1, r)$ is uniformly continuous, there exists $0 < \kappa < 1$ such that
$$\left|\frac{\varphi(\sigma_1, r) - \varphi(\sigma_1, 0)}{r - 0} - D(\sigma_1\|\sigma_0)\right| < \varepsilon \quad\text{for } 0 < r \le \kappa. \qquad (35)$$
Substituting (34) and (35) into (31), letting $a = (1-\nu)(1-\mu)\gamma_n\sqrt{n}\,D(\sigma_1\|\sigma_0)$, where $\nu > 0$ is a constant, and using $r \le \kappa$, yields
$$\sum_{\mathbf{x}} p(\mathbf{x})\,\mathrm{Tr}\{\sigma^n(\mathbf{x})\{\hat{\sigma}^n(\mathbf{x}) - e^{a}\sigma_0^{\otimes n} \le 0\}\} \le (n+1)^{\kappa d}\, e^{-\nu\kappa(1-\mu)\gamma_n\sqrt{n}} + 2e^{-\mu^2\gamma_n\sqrt{n}/2}. \qquad (36)$$
Consequently, substituting (36) into (25) yields
$$\mathbb{E}\!\left[P_e^B\right] \le 2(n+1)^{\kappa d}\, e^{-\nu\kappa(1-\mu)\gamma_n\sqrt{n}} + 2e^{-\mu^2\gamma_n\sqrt{n}/2} + 4M\, e^{-(1-\nu)(1-\mu)\gamma_n\sqrt{n}\,D(\sigma_1\|\sigma_0)}\, e^{\gamma_n^2\,\mathrm{Tr}\{\sigma_0^{-1}\sigma_1^2\}}. \qquad (37)$$
Hence, if
$$\log M = (1-\varsigma)\gamma_n\sqrt{n}\,D(\sigma_1\|\sigma_0), \qquad (38)$$
where $1-\varsigma = (1-\varsigma_5)(1-\mu)(1-\nu)$ for some constant $\varsigma_5 > 0$, then, for sufficiently large $n$, there must exist a constant $\varpi > 0$ such that the expected error probability is upper-bounded as
$$\mathbb{E}\!\left[P_e^B\right] \le e^{-\varpi\gamma_n\sqrt{n}}. \qquad (39)$$
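The decoder in (23)-(24) is a square-root ("pretty good") measurement built from threshold projectors. The following minimal numpy sketch (a generic illustration with made-up projectors, not the construction from the proof) shows the normalization step $\Lambda_m = S^{-1/2}\Pi_m S^{-1/2}$ with $S = \sum_m \Pi_m$.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def square_root_measurement(projectors):
    """Form Lambda_m = S^{-1/2} Pi_m S^{-1/2}, with S = sum_m Pi_m, as in (23).
    A pseudo-inverse square root is used so the construction also works when S
    is singular (i.e., on the span of the projectors only)."""
    S = sum(projectors)
    S_inv_sqrt = np.linalg.pinv(fractional_matrix_power(S, 0.5))
    return [S_inv_sqrt @ Pi @ S_inv_sqrt for Pi in projectors]

# Toy example: two non-orthogonal rank-1 projectors on a qubit.
v0 = np.array([1.0, 0.0])
v1 = np.array([np.cos(0.3), np.sin(0.3)])
Pi = [np.outer(v0, v0), np.outer(v1, v1)]
Lam = square_root_measurement(Pi)
print(np.round(sum(Lam), 6))   # sums to the identity on the span of the projectors
```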
C. Covertness Analysis
The goal is now to show that the average state that Willie receives over $n$ channel uses when communication occurs, $\bar{\rho}^n = \frac{1}{MK}\sum_{m=1}^{M}\sum_{k=1}^{K}\rho^n(m,k)$, is close to the state he receives when no communication occurs, i.e., $\rho_0^{\otimes n}$. In order to show this, we first prove the following lemma.
Lemma 6. For sufficiently large n there exists a coding
scheme with
√
log M + log K = (1 + ς)γn nD(ρ1 kρ0 ),
(40)
M X
K
o
1 X
−1
−
1
ρn (m0 , k 0 ) ρ⊗n
αn
MK 0
0
m =1 k =1
M X
K
n 1 X
1
= E Tr
ρn (m, k)
ρn (m, k)
MK
MK
1
+
MK
m=1 k=1
M
K
X X
o
⊗n −1
ρ (m , k ) ραn
−1
0
n
0
m0 =1 k0 =1
(m0 ,k0 )6=(m,k)
n
1
ρn (x)
= Ex Ex0 Tr ρn (x)
MK
M K − 1 n 0 ⊗n −1 o
+
ρ (x ) ραn
−1
MK
n
1
ρn (x)
= Ex Tr ρn (x)
MK
M K − 1 ⊗n ⊗n −1 o
+
ραn ραn
−1
MK
n
o
−1
1
MK − 1
=
+
Ex Tr (ρn (x))2 ρ⊗n
−1
αn
MK
MK
n
o n
−1 o
1
1
−
≤
Ex Tr (ρn (x))2 Tr ρ⊗n
,
αn
MK
MK
(42)
where the inequality is because both (ρn (x))2 and
−1
are positive-definite, and for any positiveρ⊗n
αn
definite matrices A and B we have:
p
Tr{AB} ≤ Tr{A2 } Tr{B 2 }
p
≤ Tr{A}2 Tr{B}2
≤ Tr{A} Tr{B}.
(43)
We upper-bound each trace in (42) in turn. First, let the
ordered sets of eigenvalues of ραn , ρ0 , and ρ1 be denoted
by a1 ≥ a2 ≥ · · · ≥ ad , b1 ≥ b2 ≥ · · · ≥ bd and
c1 ≥ c2 ≥ · · · ≥ cd , respectively. Then,
n
−1 o
Tr ρ⊗n
= n Tr ρ−1
αn
αn
=n
d
X
a−1
i
i=1
≤ nda−1
d
such that,
−ζγn
D(ρ̄n kρ⊗n
αn ) ≤ e
√
n
,
(41)
√n .
where ζ > 0 is a constant and γn = o(1) ∩ ω log
n
Proof. Using Lemma 1 with S = ρ̄n , T = ρ⊗n
αn and
c = 1, the expected quantum relative entropy can be
upper-bounded as:
E D(ρ̄n kρ⊗n
αn )
n
o
−1
≤ Tr (ρ̄n )2 ρ⊗n
−
1
αn
M X
K
n 1 X
ρn (m, k)
= E Tr
MK
m=1 k=1
(a)
≤ nd((1 − αn )bd + αn cd )−1
≤ nd((1 − αn )bd )−1
(b)
2nd
≤
,
(44)
bd
where (a) follows from Weyl’s inequalities for Hermitian
matrices [24] and the fact that ραn = (1 − αn )ρ0 + αn ρ1 ,
while (b) follows from the assumption that n is large
enough for αn < 12 .
To upper-bound the second trace in (42), let us define
the projector
n
o
Υnb = ρn (x) − eb ρ⊗n
(45)
0 ≤0 ,
with b > 0 a constant. Then
n
o
Tr (ρn (x))2
n
o
n
o
= Tr (ρn (x))2 Υnb + Tr (ρn (x))2 (I − Υnb ) .
(46)
Let us define the function
n
o
ψ(ρ(xi ), r) = log Tr (ρ(xi ))1+r (ρ0 )−r .
Since ψ(ρ0 , r) = 0, terms with xi = 0 vanish and only
terms with xi = 1 contribute to the summation. Let the
random variable L indicate the number of non-innocent
symbols in x, and, as in the previous section,
√
√
Cµn = {l ∈ N : |l − µγn n| < γn n}.
(51)
In what follows, we find an upper-bound for each term
in the right-hand side of (46).
Applying Lemma 4 with B = ρn (x) and A = ρn (x)− is a set of the values that random variable L takes. Using
eb ρ⊗n
0 yields:
a Chernoff bound, we have:
√
n
n
oo
2
n
b ⊗n
P (L ∈
/ Cµn ) ≤ 2e−µ γn n/2 .
(52)
Ex Tr ρn (x) ρn (x) − eb ρ⊗n
ρ
(x)
−
e
ρ
≤
0
0
0
≤ 0. Hence,
!
n
X
X
p(x) exp −br +
ψ(ρ(xi ), r)
Hence, the expected value of the first term in right-hand
x
i=1
side of (46) can be upper-bounded as:
!
L
n
o
X
X
2 n
n
Ex Tr (ρ (x)) Υb
ψ(ρ1 , r)
= EL
p(x) exp −br +
n
o
x
i=1
n
X
≤ Ex Tr ρn (x)eb ρ⊗n
0 Υb
≤
p(L = l) exp (−br + lψ(ρ1 , r)) + P (L ∈
/ Cµn )
r
n
o
n
o
(a)
2 n
l∈Cµn
Υb
≤ eb Ex Tr (ρn (x))2 Tr ρ⊗n
√
√
0
2
≤ exp −br + (1 + µ)γn nψ(ρ1 , r) + 2e−µ γn n/2 .
r
n
o n
2 o
(53)
≤ eb Ex Tr (ρn (x))2 Tr ρ⊗n
0
∂
ψ(ρ1 , r) is uniformly continuous and
By Appendix C, ∂r
= eb ,
(47) ∂ ψ(ρ1 , 0) = D(σ1 kσ0 ). Let ε > 0 be an arbitrary
∂r
∂
constant.
By the uniform continuity of ∂r
ψ(ρ1 , r), there
where (a) follows from the Cauchy-Schwartz inequality:
exists 0 < κ < 1 such that, for 0 < r ≤ κ, we have:
p
Tr{AB} = Tr{A2 } Tr{B 2 }
ψ(ρ1 , r) − ψ(ρ1 , 0)
− D(ρ1 kρ0 ) < ε,
(54)
r−0
n.
for density operators A = ρn (x) and B = ρ⊗n
Υ
0
b
(53) and (54)
Now consider the second term in the right-hand side where ψ(ρ1 , 0) = 0. Thus, substituting √
n
in
(50)
and
setting
b
=
(1
+
ν)(1
+
µ)γ
n nD (ρ1 kρ0 ),
of (46). Since ρ (x) is positive-definite and unit-trace,
all of its eigenvalues are positive and not greater than where ν > 0 is a constant, we obtain:
one, and, thus,
Ex Tr {ρn (x) (I − Υnb )}
n
o
√
√
2
Tr (ρn (x))2 (I − Υnb ) ≤ Tr {ρn (x) (I − Υnb )} .
≤ e(−κν(1+µ)γn nD(ρ1 kρ0 )) + 2e−µ γn n/2 . (55)
(48) Combining (42)-(54), we have:
In [25, Section 2] it is shown that for any states A
E D(ρ̄n kρ⊗n
αn )
and B and any numbers t > 0 and 0 ≤ r ≤ 1,
√
1
2nd b
(−κν(1+µ)qγn nD(ρ1 kρ0 ))
≤
e
+
e
MK
bd
Tr {A {A − tB > 0}} ≤ t−r Tr A1+r B −r . (49)
√
−µ2 γn n/2
⊗n
n
+2e
.
(56)
Applying this result with A = ρ (x) and B = ρ0 and
letting t = eb , we obtain
Hence, we should choose
Ex Tr {ρn (x) (I − Υnb )}
X
=
p(x) Tr{ρn (x){ρn (x) − eb ρ0 ⊗n > 0}}
x
≤
X
≤
X
1+r
n
⊗n −r
p(x)e(−br+log Tr{(ρ (x)) (ρ0 ) })
x
x
p(x)e(−br+
Pn
i=1
log Tr{(ρ(xi ))1+r (ρ0 )−r })
. (50)
log M + log K
√
= (1 + ς5 )(1 + ν)(1 + µ)γn nD (ρ1 kρ0 ) ,
(57)
and with this choice of M and K , there exists a constant
ζ > 0 such that for sufficiently large n,
√
−ζγn n
D ρ̄n kρ⊗n
.
(58)
αn ≤ e
D. Identification of a Specific Code
We choose ς , ζ and $, M , and K such that both (38)
and (40) are satisfied. In Appendix D we use Markov’s
inequality to show that, for a constants ς1 > 0 and
sufficiently large n, there exists at least one coding
scheme such that:
PB
e
≤e
√
−ς1 γn n
n
and D(ρ̄
kρ⊗n
αn )
√
−ζγn n
≤e
The quantum relative entropy between
ρ̄n
and
.
ρ⊗n
0
(59)
is:
D(ρ̄n kρ⊗n
0 )
⊗n
= D(ρ̄n kρ⊗n
) + D(ρ⊗n
αn kρ0 )
αnn
⊗n
+ Tr ρ̄ − ρ⊗n
log ρ⊗n
. (60)
αn
αn − log ρ0
To show that the last term in right-hand side of (60)
vanishes as n tends to infinity, let the eigenvalues of
⊗n
⊗n
A = ρ̄n −ρ⊗n
αn and B = log ραn −log ρ0 be enumerated
in decreasing order as ϑ1 ≥ ϑ2 ≥ · · · ≥ ϑd and κ1 ≥
κ2 ≥ · · · ≥ κd , respectively. Then:
⊗n
Tr ρ̄n − ρ⊗n
log ρ⊗n
αn
αn − log ρ0
(a)
≤
d
X
ϑi κi
d
X
≤
! 12
d
X
ϑ2i
! 12
κ2i
,
(61)
where (a) follows from von Neumann’s trace inequality
[26], and (b) follows from the Cauchy-Schwarz inequality. The first summation on the right-hand side of (61)
is upper-bounded as follows:
ϑ2i = Tr
n
ρ̄n − ρ⊗n
αn
2 o
i=1
≤ Tr
q
n
ρ̄n
−
2
ρ⊗n
αn
≤e
√
− ζγn n
1
2
i=1
κ2i
≤
d
X
i=1
d
X
(log (ani ) − log (bnd ))2
ai 2
=
n log
bd
i=1
2
a1
.
≤ n2 d log
bd
2
(63)
Substituting (62) and (63) into (61) yields:
⊗n
Tr ρ̄n − ρ⊗n
log ρ⊗n
αn
αn − log ρ0
√
√
a1
≤ n d log
e−ζγn n/2 .
bd
(64)
Re-arranging (60), substituting (64) and the result of
Lemma 6, and appropriately choosing a constant ς2 > 0
yields:
√
n
.
(65)
Application of Lemma 2 completes the proof of
Theorem 3, the achievability of the SRL for covert
communication over a cq channel.
,
E. Multiple Symbols
The proof of achievability with a single non-innocent
symbol described above can be used mutatis mutandis to
prove achievability with multiple non-innocent symbols.
Following the notation of Section IV-A1, with multiple
non-innocent symbols, the average state at Bob can be
written as:
X
σαn = (1 − αn )σ0 + αn
p̃(x)σx
x∈X \{0}
ρ⊗n
αn 1
= (1 − αn )σ0 + αn σ̃
P
where p̃(.) is defined such that x∈X \{0} p̃(x) = 1, i.e.,
= ρ̄ −
r
(a)
1
≤
D ρ̄n kρ⊗n
αn
2
(b)
d
X
i=1
i=1
d
X
Hence, setting j = 1,
−ς2 γn
⊗n ⊗n
D(ρ̄n kρ⊗n
0 ) − D(ραn kρ0 ) ≤ e
i=1
(b)
− log(bnd ) ≥ · · · ≥ − log(bn2 ) ≥ − log(bn1 ). Using Weyl’s
inequalities [24] we obtain
κi+j−1 ≤ log(ani ) − log bnd−j+1 .
(62)
where (a) follows from the quantum Pinsker’s inequality
[9, Ch. 11] and (b) follows from (59).
To upper-bound the second summation on the righthand side of (61) denote the ordered sets of eigenvalues of ραn and ρ0 by a1 ≥ a2 ≥ · · · ≥ ad
and b1 ≥ b2 ≥ · · · ≥ bd , respectively.
Hence, the
respective eigenvalues of log ρ⊗n
and
−
log
ρ⊗n
are
αn
0
n
n
n
enumerated as log(a1 ) ≥ log(a2 ) ≥ · · · ≥ log(ad ) and
p̃(x) = p(x)
αn , and thus σ̃ is the average non-innocent state
at Bob. Similarly, the average state at Willie is,
ραn = (1 − αn )ρ0 + αn ρ̃,
where ρ̃ is the average non-innocent state at Willie,
X
ρ̃ =
p̃(x)ρx .
x∈X \{0}
By replacing σ1 with σ̃ in (28)-(29), D(σ1 kσ0 ) with
P
x∈X
\{0} p̃(x)D(σx kσ0 ) in (37)-(38), and D(ρ1 kρ0 )
P
with x∈X \{0} p̃(x)D(ρx kρ0 ) in (56)-(57), and making
Theorem 4. For any stationary memoryless cq channel
where, ∀u ∈ X , supp(σx ) ⊆ supp(σ0 ) and supp (ρx ) ⊆
supp (ρ0 ) such that ρ0 is not a mixture of
{ρx }x∈X
\{0} ,
for n sufficiently large and γn = o(1) ∩ ω
√
log M = (1 − ς)γn n
X
log
√n
n
,
P
=
X
−ς2 γn
D(ρ̄n kρ⊗n
0 ) ≥ nD(ραn kρ0 ) − e
x∈X \{0}
,
−ς1 γn
PB
e ≤e
n
⊗n
D ρ̄ kρ0
n
,
⊗n
− D ρ⊗n
αn kρ0
≤ e−ς2 γn
√
n
,
Now, consider the following lemma which quantifies
the quantum relative entropy between ραn and ρ0 .
Lemma 7. Let A = αC + (1 − α)B , where B and C
are states, and α satisfies 0 ≤ α ≤ min{1, kB −1 (C −
B)k−1 }. Then,
Thus, since γn = o(1) ∩ ω
Theorem 5. For any stationary memoryless cq channel,
where, ∀u ∈ X , supp(σx ) ⊆ supp(σ0 ) and supp (ρx ) ⊆
supp (ρ0 ) such that ρ0 is not a mixture of {ρx }x∈X \{0} ,
there exists a coding scheme such that,
.
√1
n
lim q
n→∞
log M
nD(ρ̄n kρ⊗n
0 )
and,
lim q
n→∞
log K
nD(ρ̄n kρ⊗n
0 )
+
√ P
γn n
p̃(x)((1 + ς)D(ρx kρ0 ) − (1 − ς)D(σx kσ0 ))
lim PB
e = 0,
x∈X
x6=0
=
n→∞
q
2
γn
2
2 χ (ρ̃kρ0 )
+
P
nD ρ̄n kρ0
and,
log K
nD(ρ̄n kρ⊗n
0 )
x∈X \{0} p̃(x)D (σx kσ0 )
q
1 2
2 χ (ρ̃kρ0 )
,
√ P
(1 − ς)γn n x∈X \{0} p̃(x)D(σx kσ0 )
q
=
γn2 2
2 χ (ρ̃kρ0 )
P
(1 − ς) x∈X \{0} p̃(x)D(σx kσ0 )
q
=
,
1 2
χ
(ρ̃kρ
)
0
2
lim D(ρ̄n kρ⊗n
0 ) = 0,
=
⊗n
(66)
Using Theorem 4 and (66),
n→∞
lim q
n
lim D(ρ̄n kρ⊗n
0 ) = 0.
Using Theorem 4 and Lemma 7, it follows that the
following specific scaling coefficients are achievable.
n→∞
,
n→∞
α2 2
D(AkB) =
χ (CkB) + O(α3 ),
2
Proof. See Appendix E.
n→∞
n
αn2 2
χ (ρ̃kρ0 ) + O(nαn3 )
2
3
γ2
γ
= n χ2 (ρ̃kρ0 ) + O √n
2
n
where ς ∈ (0, 1), ς1 > 0, ς2 > 0, and ς3 > 0 are
constants, and [c]+ = max{c, 0}.
log M
√
D(ρ̄n kρ⊗n
0 )=n
⊗n
D ρ⊗n
≤ ς3 γn2 ,
αn kρ0
lim q
√
Hence, using Lemma 7,
and,
√
1 2
2 χ (ρ̃kρ0 )
−ς2 γn
D(ρ̄n kρ⊗n
0 ) ≤ nD(ραn kρ0 ) + e
p̃(x) (1 + ς)D (ρx kρ0 )
i+
q
Proof. From (65),
p̃(x)D (σx kσ0 ) ,
− (1 − ς)D (σx kσ0 )
p̃(x) (D (ρx kρ0 ) − D (σx kσ0 ))
x∈X \{0}
where ρ̃ is the average non-innocent state at Willie
induced by p̃(x), and [x]+ = max{x, 0}.
x∈X \{0}
√ h
log K = γn n
#+
"
the required adjustments, we prove the following theorem:
,
=
P
p̃(x)((1 + ς)D(ρx kρ0 ) − (1 − ς)D(σx kσ0 ))
x∈X
x6=0
q
.
1 2
2 χ (ρ̃kρ0 )
Since ς > 0 is arbitrary, the statement of the theorem
follows.
n
V. C ONVERSE OF THE SRL FOR CQ CHANNELS
σ̄ =
In this section, we prove that the limiting values of
M and K given in Theorem 1 are optimal for the cq
channels. The proof adapts [6, Section VI] based on [27].
Theorem 6. For any stationary memoryless cq channel
with ∀x ∈ X , supp(σx ) ⊆ supp(σ0 ) and supp (ρx ) ⊆
supp (ρ0 ) such that ρ0 is not a mixture of {ρx }x∈X \{0} ,
if
lim PB
e = 0,
n→∞
lim D(ρ̄n kρ⊗n
0 ) = 0,
and,
(69)
i=1
and (e) follows because Holevo information is concave
in the input distribution. Rearranging (67) we have,
1
log M ≤
(nχ(p̄, σ̄) + 1).
(70)
1 − δn
Generalizing [28, Section 5.2.3] to cq channels, we
obtain:
log M + log K
n→∞
= H(X n )
then,
n
(71)
n
≥ I(X ; ρ̄ )
lim q
n→∞
1X
σ(xi );
n
log M
P
x∈X \{0} p̃(x)D(σx kσ0 )
q
,
1 2
χ
(ρ̃kρ
)
0
2
≤
nD(ρ̄n kρ⊗n
0 )
and,
log M + log K
≥
lim q
n→∞
nD(ρ̄n kρ⊗n
)
0
P
x∈X \{0} p̃(x)D(ρx kρ0 )
q
.
≥ I(X n ; ρ̄n ) + D(ρ̄n kρ⊗n
0 ) − n
= H(ρ̄n ) − H(ρ̄n |X n ) − H(ρ̄n )
− Tr{ρ̄n log ρ⊗n
0 } − n
n
n
X
X
=−
H(ρ(xi )|Xi ) −
Tr{ρ(xi ) log ρ0 } − n
i=1
1 2
2 χ (ρ̃kρ0 )
n ⊗n
Proof. Let us define PB
e ≤ δn and D(ρ̄ kρ0 ) ≤ n
for a length n code, where limn→∞ δn = 0 and
limn→∞ n = 0 are the reliability and covertness criteria,
respectively. Let Y n be the classical random variable
describing the output of the channel at Bob, and S the
random variable describing the pre-shared secret. We
have:
log M = H(W )
= I(W ; Y n S) + H(W |Y n S)
p(u)H(ρ(xi )|Xi = u)
i=1 u∈X
n
X
Tr{ρ(xi ) log ρ0 } − n
i=1
=
n X
X
p(u) [Tr{ρu (xi ) log ρu (xi )}
i=1 u∈X
− Tr{ρu (xi ) log ρ0 }] − n
X
=n
p̄(u) (Tr{ρu log ρu } − Tr{ρu log ρ0 }) − n
≥n
= I(W ; Y n |S) + 1 + δn log M
X
p̄(u) (Tr{ρu log ρu } − Tr{ρu log ρ0 })
u∈X
−n − nD(ρ̄kρ0 )
X
=n
p̄(u) Tr{ρu log ρu } − Tr{ρ̄ log ρ0 }
= I(W S; Y n ) + 1 + δn log M
(b)
≤ I(X n ; Y n ) + 1 + δn log M
u∈X
−n Tr{ρ̄(log ρ̄ − log ρ0 )} − n
X
= −n Tr{ρ̄ log ρ̄} + n
p̄(u) Tr{ρu log ρu } − n
(c)
≤ I(X n ; σ n ) + 1 + δn log M
≤ χ(pX n , σ n ) + 1 + δn log M
n
(d) X
=
χ(pi , σ(xi )) + 1 + δn log M
u∈X
= nχ(p̄, ρ̄) − n ,
i=1
(e)
(67)
where (a) follows from Fano’s inequality; (b) follows
from the data processing inequality; (c) is the Holevo
bound; (d) is due to σ n being a product state; p̄ and σ̄
are defined as follows:
n
1X
p̄(u) =
p(xi = u)
n
−
i=1
n X
X
u∈X
≤ I(W ; Y n S) + 1 + δn log M
≤ nχ(p̄, σ̄) + 1 + δn log M
=−
(b)
(a)
i=1
(a)
(68)
(72)
where (a) follows from the covertness condition
D(ρ̄n kρ⊗n
0 ) ≤ n , ρ̄ is the average output state at Willie,
n
ρ̄ =
1X
ρ(xi ),
n
(73)
X
(74)
i=1
or equivalently,
ρ̄ =
p̄(x)ρx ,
x∈X
and (b) follows because D(ρ̄kρ0 ) ≥ 0.
= µn
As in [27],
X
n ≥ D(ρ̄n kρ⊗n
0 )
n
(a) X
D(ρ(xi )kρ0 )
=
− Tr{σµn log σµn }
X
p̃(x)σx + (1 − µn )σ0 ) log σ0 }
+ Tr{(µn
n=1
x∈X
x6=0
(b)
≥ nD(ρ̄kρ0 )
(75)
where (a) follows from the memoryless property of
the channel and (b) follows from the convexity of the
quantum relative entropy. Using the quantum Pinsker’s
inequality,
= µn
(76)
n→∞
(77)
Denote the average probability
of transmitting a nonP
innocent state by µn =
x∈X \{0} p̄(x). Similarly to
(13), the average state induced by p̄(x) at Willie is:
ρ̄ = ρµn = (1 − µn )ρ0 + µn ρ̃,
(78)
where ρ̃ is the average non-innocent state at Willie,
X
p(x)
.
ρ̃ =
p̃(x)ρx , and p̃(x) =
µn
x∈X \{0}
Since we are limited to cq channels, the set of classical
inputs X at Alice maps to a fixed set of output states at
Willie (and Bob). This implies that (77) holds (and thus
from (76) the covertness criterion is maintained) only
when:
lim µn = 0.
n→∞
(79)
− Tr{σµn log σµn } + Tr{σµn log σ0 }
X
p̃(x)D(σx kσ0 ) − D(σµn kσ0 )
= µn
x∈X
x6=0
x∈X \{0}
Expanding the Holevo information of the average state
σ̄ = σµn at Bob we have:
χ(p̄, σµn )
X
p̃(x)H(σx )
x∈X
= − Tr{σµn log σµn } + (1 − µn ) Tr{σ0 log σ0 }
X
+µn
p̃(x) Tr{σx log σx }
x∈X
x6=0
p̃(x)D(σx kσ0 ).
(82)
Similarly, expanding the Holevo information of the average state ρ̄ = ρµn at Willie yields:
X
χ(p̄, ρµn ) = µn
p̃(x)D(ρx kρ0 ) − D(ρµn kρ0 ). (83)
x∈X
x6=0
By Lemma 7 we have:
µ2n 2
χ (ρ̃kρ0 ) + O(µ3n ).
(84)
2
Again, the assumption of a cq channel implies that
Alice’s classical inputs in X are mapped to a fixed set of
output states at Willie, which means that χ2 (ρ̃kρ0 ) > 0.
Thus, the covertness condition in the right-hand side of
(76) can only be maintained by ensuring that
√
lim nµn = 0.
(85)
D(ρµn kρ0 ) ≥
n→∞
From (70) and (82) we have,
X
nµn
p̃(x)D(σx kσ0 ) ≥ nχ(p̄, σµn )
≥ (1 − δn ) log M − 1.
(80)
where σ̃ is the average non-innocent state at Bob,
X
p(x)
.
(81)
σ̃ =
p̃(x)σx , and p̃(x) =
µn
= H(σµn ) −
≤ µn
X
x∈X
x6=0
The state induced by p̄(x) at Bob is
σ̄ = σµn = (1 − µn )σ0 + µn σ̃,
p̃(x) Tr{σx (log σx − log σ0 )}
x∈X
x6=0
Hence, by (76), the covertness criterion implies:
lim ρ̄ = ρ0 .
X
x∈X
x6=0
2
kρ̄ − ρ0 k
n
≤ D(ρ̄kρ0 ) ≤ .
2 log 2
n
p̃(x) Tr{σx (log σx − log σ0 )}
x∈X
x6=0
Since we assume that supp(σx ) ⊆ supp(σ0 ),
P
x∈X \{0} p̃(x)D(σx kσ0 ) < ∞. However, we know that
limn→∞ log M = ∞ is achievable. Thus, we require
lim nµn = ∞.
n→∞
(86)
Now, for n large enough, log M is upper-bounded as
follows:
(a)
nχ(p̄, σµn ) + 1
log M
q
p
≤
(1 − δn ) n2 D(ρµn kρ0 )
nD(ρ̄n kρ⊗n
)
0
P
x∈X p̃(x)D(σx kσ0 ) + 1
(b) nµn
x6=0
q
≤
,
2 2
(1 − δn ) n 2µn χ2 (ρ̃kρ0 )
(87)
where (a) follows from (70) and (75), and (b) follows
from (82) and (84). Thus, using (86) and applying the
reliability criteria we obtain:
P
x∈X p̃(x)D(σx kσ0 )
log M
x6=0
q
. (88)
lim q
≤
n→∞
1 2
⊗n
n
χ
(ρ̃kρ
)
nD(ρ̄ kρ0 )
0
2
Recall from Theorem 5 that there exists a sequence of
codes such that
P
x∈X p̃(x)D(σx kσ0 )
log M
x6=0
q
lim q
. (89)
=
n→∞
1 2
⊗n
χ
(ρ̃kρ
)
nD(ρ̄n kρ0 )
0
2
From (70) and (82) we have:
1
(nχ(p̄, σ̄) + 1)
log M ≤
1 − δn
X
1
≤
(nµn
p̃(x)D(σx kσ0 ) + 1)
1 − δn
x∈X
(90)
x6=0
Combining (89) and (90) for arbitrary β > 0 yields:
P
nµn p̃(x)D(σx kσ0 )
u6=0
lim
q
n→∞
nD(ρ̂n kρ⊗n
0 )
P
(1 − β) p̃(x)D(σx kσ0 )
u6=0
≥
q
.
(91)
1 2
2 χ (ρ̃kρ0 )
Now we can find a lower bound for log M + log K ,
log M + log K (a) nχ(p̄, ρ̄) − n
q
≥q
nD(ρ̄n kρ⊗n
)
nD(ρ̄n kρ⊗n
0
0 )
P
nµn p̃(x)D(ρx kρ0 ) − nD(ρµn kρ0 ) − n
(b)
≥
≥
When Bob applies a specific symbol-by-symbol measurement described by POVM {Πy } and observes the
classical output of the channel Y , the channel between
Alice and Bob is classical with transition probability
pY |X (y|x) = Tr{σx Πy }.
q
P
Theorem 7. For any covert communication scenario
when the channel from Alice to Bob is a stationary memoryless classical channel, where, ∀x ∈ X , supp(Px ) ⊆
supp(P0 ) and the channel from Alice to Willie is a cq
channel with ∀x ∈ X , supp(ρx ) ⊆ supp(ρ0 ) such that
ρ0 is not a mixture of {ρx }x∈X \{0} , there exists a coding
scheme such that,
lim D(ρ̄n kρ⊗n
0 ) = 0,
n→∞
lim PB
e = 0,
nD(ρ̄n kρ⊗n
0 )
p̃(x)D(ρx kρ0 ) −
u6=0
q
(96)
This implies that Bob is not able to perform joint
measurement and, thus, the capacity of the classical
channel between Alice and Bob is in general less than
the capacity of the cq channel considered in Sections IV
and V. On the other hand, when Willie is not restricted
to a specific detection scheme, he has a cq channel from
Alice. We aim to show that, if certain conditions are
maintained, the SRL for reliable covert communication
applies to this scenario. Denoting by Px the probability
distribution for the classical output of the channel at
Bob conditioned on Alice transmitting x ∈ X and by
supp(Px ) the support of the distribution Px , we prove
the following theorem:
u6=0
(1 − β)(
(c)
VI. B OB RESTRICTED TO PRODUCT MEASUREMENT
n→∞
1
µn D(ρµn kρ0 )
−
n
nµn )
,
1 2
2 χ (ρ̃kρ0 )
(92)
where (a) follows from (72), (b) follows from (83), and
(c) follows from (91) for any β > 0.
Let us take the limit of right-hand side of (92) as n
tends to ∞. By Lemma 7, we have:
1
µn 2
D(ρµn kρ0 ) = lim
χ (ρ̃kρ0 ) = 0, (93)
lim
n→∞ 2
n→∞ µn
and from (86),
n
lim
= 0.
(94)
n→∞ nµn
Hence, since β > 0 is arbitrary,
P
x∈X p̃(x)D(ρx kρ0 )
log M + log K
x6=0
q
lim q
≥
. (95)
n→∞
1 2
⊗n
n
χ
(ρ̃kρ
)
nD(ρ̄ kρ0 )
0
2
lim q
n→∞
log M
nD(ρ̄n kρ⊗n
0 )
P
=
x∈X \{0} p̃(x)D(Px kP0 )
q
,
1 2
2 χ (ρ̃kρ0 )
and,
lim q
n→∞
log K
nD(ρ̄n kρ⊗n
0 )
hP
i+
x∈X \{0} p̃(x)(D(ρx kρ0 ) − D(Px kP0 ))
q
=
1 2
2 χ (ρ̃kρ0 )
where ρ̃ is the average non-innocent state at Willie
induced by p̃(x), and [c]+ = max{c, 0}.
Proof. First, consider the achievability of the limits
stated in the theorem. For reliability analysis, since the
channel between Alice and Bob is classical, we can
consider a typical set similar to the typical set defined
in [6, Section V] and follow the steps in the proof of
[6, Theorem 2]. Since the channel between Alice and
Willie is a cq channel similar to the channel considered
in previous sections of the paper, the covertness analysis
of the achievability is the same as in Section IV-C.
Now we consider the converse. The proof follows the
proof of Theorem 6, and we just mention the necessary
changes here. Denoting by Y n the classical random
variable that describes the output of the channel at Bob,
and applying Fano’s and data processing inequalities as
in (67), we have:
log M = H(W )
+ µn
= −EPµn [log Pµn ]
+ (1 − µn )EP0 [log P0 ] + µn
+ µn
X
p̃(x)EPx [log Px ] − µn
(98)
i=1
and Ȳ is output of the channel between Alice and
Bob induced by X̄ . The last inequality follows from
the concavity of the mutual information in the input
distribution. From (97),
(99)
X
p̃(x)D(Px kP0 ) − D(Pµn kP0 )
x∈X
x6=0
≤ µn
X
p̃(x)D(Px kP0 ).
(102)
x∈X
x6=0
Recalling (83), the Holevo information of the average
state ρ̄ = ρµn is upper bounded by
X
χ(p̄, ρµn ) = µn
p̃(x)D(ρx kρ0 ) − D(ρµn kρ0 ).
x∈X
x6=0
(103)
From (99) and (102) we have,
X
nµn
p̃(x)D(Px kP0 ) ≥ nI(X̄, Ȳ )
x∈X
x6=0
≥ (1 − δn ) log M − 1.
(100)
where, as in Section V, ρ̄ is the average output state at
Willie.
The probability distribution P̄ of Bob’s average output
(induced by the average input distribution p̄(u)) is:
(101)
where P̃ is the average probability distribution of noninnocent symbols at Bob,
X
p̄(x)
.
P̃ =
p̃(x)Px , and p̃(x) =
µn
x∈X \{0}
Expanding the mutual information of the average probability distribution P̄ = Pµn at Bob yields,
X
I(X̄, Ȳ ) = H(Ȳ ) −
p̃(x)H(Ȳ |X̄ = x)
x∈X
= −EPµn [log Pµn ] − (1 − µn )H(Ȳ |X̄ = 0)
X
− µn
p̃(u)H(Ȳ |X̄ = u)
u6=0
= −EPµn [log Pµn ] + (1 − µn )EP0 [log P0 ]
p̃(x)EPx [log P0 ]
= −EPµn [log Pµn ] + EPµn [log P0 ]
X
+ µn
p̃(x)D(Px kP0 )
Repeating the steps of (72), we have,
P̄ = Pµn = (1 − µn )P0 + µn P̃ ,
X
x∈X
x6=0
x∈X
x6=0
(97)
n
log M + log K ≥ nχ(p̄, ρ̄) − n ,
p̃(x)EPx [log P0 ]
x∈X
x6=0
= µn
where X̄ is the average input symbol with distribution,
1
log M ≤
(nI(X̄, Ȳ ) + 1).
1 − δn
X
x∈X
x6=0
≤I(X n ; Y n ) + 1 + δn log M
1X
p(xi = u)
p̄(u) =
n
p̃(x)EPx [log Px ]
x∈X
x6=0
= I(W S; Y n ) + 1 + δn log M
≤nI(X̄, Ȳ ) + 1 + δn log M
X
P
As supp(Px ) ⊆ supp(P0 ), x∈X \{0} p̃(x)D(Px kP0 ) <
∞. Thus, in order for limn→∞ M = ∞, we require
lim nµn = ∞.
n→∞
(104)
For n sufficiently large,
log M
nI(X̄, Ȳ ) + 1
q
p
≤
(1 − δn ) n2 D(ρµn kρ0 )
nD(ρ̄n kρ⊗n
)
0
P
nµn x∈X p̃(x)D(Px kP0 ) + 1
x6=0
q
≤
,
n2 µ2n 2
(1 − δn )
2 χ (ρ̃kρ0 )
(105)
Thus, using (104) and applying the reliability criteria we
obtain
P
x∈X p̃(x)D(Px kP0 )
log M
x6=0
q
lim q
≤
. (106)
n→∞
1 2
⊗n
χ
(ρ̃kρ
)
nD(ρ̄n kρ0 )
0
2
Finally, using the same steps as in Section V yields
P
x∈X p̃(x)D(ρx kρ0 )
log M + log K
x6=0
q
lim q
≥
. (107)
n→∞
1 2
⊗n
n
χ
(ρ̃kρ
)
nD(ρ̄ kρ0 )
0
2
20
√
VII. O( n log n) COVERT COMMUNICATION
+ µn
X
p̃(x) Tr{P0 σx (log σx − log σµn )}
x∈X
x6=0
In Section III-B5 we argue that, if there exists
xs ∈ X \{0} such that supp(σxs ) * supp(σ0 ) and
√
supp(ρxs ) ⊆ supp(ρ0 ), then O( n log n) (δ, )-covert
bits are achievable in n channel uses. We specify a
POVM for Bob that induces a classical DMC and use
[6, Theorem 7] to argue achievability. Here we prove
the converse result, demonstrating that, even when Bob
uses an arbitrary decoding
POVM, it is not possible to
√
convey more than O( n log n) (δ, )-covert bits in n
channel uses.
Since we are interested in the converse, let’s assume
that, for all x ∈ X \{0}, supp(σx ) * supp(σ0 ) and
supp(ρx ) ⊆ supp(ρ0 ). As in the proof of Theorem 6,
n ⊗n
suppose PB
e ≤ δn and D(ρ̄ kρ0 ) ≤ n for a length n
code, where limn→∞ δn = 0 and limn→∞ n = 0 are
the reliability and covertness criteria, respectively. We
can apply the results and notation in (67)-(81) here, as
they do not rely on the supports of the received states at
Bob. However, since supp(σx ) 6⊆ supp(σ0 ), the bound
on the Holevo information of the average state at Bob
in (82) cannot be used. Instead we expand the Holevo
information as follows, denoting the projection into the
support of σ0 as P0 :
χ(p̄, σµn )
= H(σµn ) −
X
p(x)H(σx )
n
o
X
− µn log µn Tr (1 − P0 )
p̃(x)σx
x∈X
x6=0
= (1 − µn ) Tr{P0 σ0 log σ0 }
X
p̃(x) Tr{P0 σx log σx } − Tr{P0 σµn log σµn }
+ µn
x∈X
x6=0
n
o
X
− µn log µn Tr (1 − P0 )
p̃(x)σx
x∈X
x6=0
(a)
= Tr{P0 σµn (log σ0 − log σµn )}
X
p̃(x)D(P0 σx kσ0 )
+ µn
x∈X
x6=0
n
o
X
− µn log µn Tr (1 − P0 )
p̃(x)σx
x∈X
x6=0
(b)
≤ log
X
1
+ µn
p̃(x)D(P0 σx kσ0 ) − κµn log µn
1 − µn
x∈X
x6=0
(108)
where (a)P follows from adding and subtracting
µn Tr P0 x∈X \{0} p̃(x)σx log σ0 , and (b) follows
from the fact that the logarithmic function is operative
monotone, and since quantum states are positive definite,
x∈X
Hence, log M is upper-bounded as,
x∈X
x6=0
n
(1 − µn )σ0 + µn
p̃(x)σx > (1 − µn )σ0 .
x∈X \{0}
= − Tr{σµn log σµn } + (1 − µn ) Tr{σ0 log σ0 }
X
+ µn
p̃(x) Tr{σx log σx }
= − Tr
X
σµn = (1 − µn )σ0 + µn
X
o
p̃(x)σx log σµn
x∈X
x6=0
+ (1 − µn ) Tr{σ0 log σ0 } + µn
X
p̃(x) Tr{σx log σx }
log M
q
nD(ρ̄n kρ⊗n
0 ) log n
(a)
nχ(p̄, σµn ) + 1
p
(1 − δn ) n2 D(ρµn kρ0 ) log n
P
1
log 1−µ
+ µn p̃(x)D(P0 σx kσ0 ) − κµn log µn +
n
≤
x∈X
x6=0
= (1 − µn ) Tr{σ0 (log σ0 − log σµn )}
X
+ µn
p̃(x) Tr{σx (log σx − log σµn )}
(b)
≤
=
− log(1−µn )
µn log n
+
(1 − δn )
q
1
nµn log n
(109)
where (a) is from (70) and (75), and (b) follows from
(108) and (84). Recalling (85) from Section V,
x∈X
x6=0
= (1 − µn ) Tr{P0 σ0 (log σ0 − log σµn )}
+
1 2
2 χ (ρ̃kρ0 )
x∈X
x6=0
+ (1 − µn ) Tr{(1 − P0 )σ0 (log σ0 − log σµn )}
X
+ µn
p̃(x) Tr{(1 − P0 )σx (log σx − log σµn )}
1
n
q
µ2n 2
2 χ (ρ̃kρ0 ) log n
P
p̃(x)D(P0 σx kσ0 )
u6=0
− κµnloglogn µn
log n
(1 − δn )
x∈X
x6=0
= (1 − µn ) Tr{P0 σ0 (log σ0 − log σµn )}
X
+ µn
p̃(x) Tr{P0 σx (log σx − log σµn )}
u6=0
lim
n→∞
√
nµn = 0.
(110)
,
21
Hence, µn can be written as µn = √ιnn where ιn = o(1). the performance (since that is equivalent to transmitting
a randomly chosen pure state from an ensemble and
From (70) and (108) we have,
X
discarding the knowledge of that choice).
1
n log
+ µn
p̃(x)D(P0 σx kσ0 ) − κµn log µn
A TPCP map NAn →W n describes the quantum chan1 − µn
x∈X
nel
from Alice to Willie acting on n channel uses (not
x6=0
necessarily
memorylessly). Thus, the innocent state at
(111)
≥ nχ(p̄, σµn ) ≥ (1 − δn ) log M − 1.
Willie is expressed as ρn0 ≡ NAn →W n (|0ih0|). When
The term κµn log µn is the asymptotically dominant Wm is transmitted, Willie’s hypothesis test reduces to
term on the left-hand side of (111). Thus, in order for discriminating between the states ρn and ρn , where
m
0
limn→∞ M = ∞,
ρnm = NAn →W n (φnm ). Let Willie use a detector that is
√
1
given by the positive operator-valued measure (POVM)
lim nµn log µn = lim nιn ( log n + log ιn−1 ) = ∞,
n→∞
n→∞
{P0n , I − P0n }, where P0n is the projection onto the
2
support of the innocent state ρn0 . Thus, Willie’s average
which requires that ι = ω √n 1log n . Hence, we have
error probability is:
ι = o(1) ∩ ω √n 1log n . Applying this and the reliability
M
1 X
criteria to (109), in the limit as n → ∞,
PW
=
Tr {P0n ρnm } ,
(112)
e
2M
−1
m=1
ι
κ( 12 + limn→∞ log
log M
log n )
q
≤
,
lim q
since messages are sent equiprobably. Note that the error
n→∞
1 2
χ
(ρ̃kρ
)
nD(ρ̄n kρ⊗n
is entirely because of missed codeword detections, as
0
0 ) log n
2
n P
o
Willie’s receiver never raises a false alarm because the
where κ = 1 − Tr P0 x∈X \{0} p̃(x)σx .
support of the innocent state at Willie is a strict subset
of the supports of each of the non-innocent states. Now,
VIII. P ROOF OF T HEOREM 2
Tr {P0n ρnm }
Here we prove that (δ, )-covert communication is
= Tr {P0n NAn →W n (φnm )}
(113)
impossible when there are no input states available
whose supports are contained within the support of the
= Tr P0n NAn →W n |a0 (m)|2 |0ih0|
innocent state at Willie. Unlike other proofs in this
paper, this proof is for a general input state that may be
entangled over n channel uses and a general quantum
X
channel from Alice to Willie NAn →W n that may not
†
0
b
b
+
a
(m)a
(m)
b
b0
be memoryless across n channel uses. Since this is a
b6=0 or
b0 6=0
converse, to simplify the analysis, we assume that Bob’s
channel from Alice is identity. This generalizes the proof
(a)
of [8, Theorem 1] to arbitrary channels.
= Tr P0n |a0 (m)|2 ρn0
Proof. Alice sends one of M (equally likely) log M
bit messages by choosing an element from an arbi
n
A
X
trary codebook {φm , m = 1, . . . , M }, where a state
†
0
n
n
n
+ NAn →W n
ab (m)ab0 (m) b b
A A
A
φm = |ψx i
hψm | encodes a log M -bit message
b6=0 or
An
b0 6=0
Wm . State |ψm i
∈ H is a general pure state for n
n
o
(b)
channel uses, where H is an infinite-dimensional Hilbert
= Tr P0n |a0 (m)|2 ρn0 + 1 − |a0 (m)|2 ρnm0̄
space corresponding to a single channel use. Denoting
the set of non-negative integers by N0 and a complete
= |a0 (m)|2 + 1 − |a0 (m)|2 (1 − cm ),
(114)
orthonormal basis (CON) ofPH by B = {|bi , b ∈ N0 },
n
n
of TPCP map NAn →W n
we can express |ψm iA = b∈Nn0 ab (m) |biA , where where (a) is by the linearity
n
of ρ0 , (b) follows from the substi|bi ≡ |b1 i ⊗ |b2 i ⊗ · · · ⊗ |bn i is a tensor product of and the definition
n , which is a quantum state that satisfies
An
tution
of
ρ
m0̄
n states drawn from CON B . We designate |0i
=
2
2
|0i ⊗ |0i ⊗ · · · ⊗ |0i as the innocent state. As in the |a0 (m)| ρn0 + 1 − |a0 (m)| ρnm0̄ = ρnm and correrest of the paper, for simplicity of notation, we drop sponds to the part of ρnm that is not an innocent state.
n
the system label superscripts, i.e., we denote |biA by Since part of the support of ρnm is outside the support
|bi. We limit our analysis to pure input states since, by of the innocent state ρn0 , part of the support of ρnm0̄
convexity, using mixed states as inputs can only degrade has to lie outside the innocent state support. Thus, in
22
(114) we denote by cm = Tr (I − P n )ρnm0̄ > 0
the constant corresponding to the “amount” of support
that ρnm0̄ has outside of the innocent state support. Let
cmin = minm cm , and note that cmin > 0. This yields an
upper-bound for (112):
!
M
X
1
c
1
min
2
1−
|a0 (m)| .
−
PW
e ≤
2
2
M
m=1
1
2
PW
e
Thus, to ensure
≥ − , Alice must use a codebook
with the probability of transmitting the innocent state:
M
1 X
2
.
|a0 (m)|2 ≥ 1 −
M
cmin
(115)
m=1
Equation (115) can be restated as an upper bound on
the probability of transmitting one or more non-innocent
states:
M
1 X
2
1 − |a0 (m)|2 ≤
.
(116)
M
cmin
m=1
Now we show that there exists an interval (0, 0 ], 0 > 0
such that if ∈ (0, 0 ], Bob’s average decoding error
probability PB
e ≥ 0 where 0 > 0, thus making covert
communication over a pure-loss channel unreliable.
Analysis of Bob’s decoding error follows that in the
proof of [8, Theorem 1] with minor substitutions. Denote
by Em→l the event that the transmitted message Wm
is decoded by Bob as Wv 6= Wm . Given that Wm
is transmitted, the decoding error probability is the
probability of the union of events ∪M
l=0,l6=m Em→l . Let
Bob choose a POVM {Λ∗j } that minimizes the average
probability of error over n channel uses:
M
1 X
P ∪M
l=0,l6=m Em→l .
{Λj } M
PB
e = inf
(117)
m=1
Now consider a codebook that meets the necessary condition for covert communication given in equation (116).
Define
{φnm , u ∈ A} where
n the subset of this codebook
o
4
. We lower-bound (117)
A = u : 1 − |a0 (m)|2 ≤ cmin
as follows:
1 X
P ∪M
PB
=
e
l=0,l6=m Em→l
M
u∈Ā
1 X
+
P ∪M
(118)
l=0,l6=m Em→l
M
u∈A
1 X
≥
P ∪M
(119)
l=0,l6=m Em→l ,
M
u∈A
where the probabilities in equation (118) are with respect
to the POVM {Λ∗j } that minimizes equation (117) over
the entire codebook. Without loss of generality, let’s
assume that |A| is even, and split A into two equalsized non-overlapping subsets A(left) and A(right) (formally, A(left) ∪ A(right) = A, A(left) ∩ A(right) = ∅, and
|A(left) | = |A(right) |). Let g : A(left) → A(right) be a
bijection. We can thus re-write (119):
M
P
∪
E
l=0,l6=m m→l
1 X
PB
2
e ≥
M
2
u∈A(left)
M
P ∪l=0,l6=g(m) Eg(m)→l
+
2
!
P Eg(m)→m
P Em→g(m)
1 X
≥
,
2
+
M
2
2
(left)
u∈A
(120)
where the second lower bound is because the events
Em→g(m) and Eg(m)→m are contained in the unions
M
∪M
l=0,l6=m Em→l and ∪l=0,l6=g(m) Eg(m)→l , respectively.
The summation term in equation (120),
P Em→g(m)
P Eg(m)→m
Pe (m) ≡
+
,
(121)
2
2
is Bob’s average probability of error when Alice only
sends messages Wm and Wg(m) equiprobably. We thus
reduce the analytically intractable problem of discriminating between many states in equation (117) to a
quantum binary hypothesis test.
The lower bound on the probability of error in discriminating two received codewords is obtained by lowerbounding the probability of error in discriminating two
codewords before they are sent (this is equivalent to
Bob having an unattenuated unity-transmissivity channel from Alice). Recalling that φnm = |ψm ihψm | and
φng(m) = ψg(m) ψg(m) are pure states, the lower bound
on the probability of error in discriminating between
|ψm i and ψg(m) is [18, Ch. IV.2 (c), Eq. (2.34)]:
q
Pe (m) ≥ 1 − 1 − F |ψm i , ψg(m)
2 , (122)
where F (|ψi , |φi) = | hψ|φi |2 is the fidelity between the pure states |ψi and |φi. Lower-bounding
F |ψm i , ψg(m) lower-bounds the RHS of equation
(122). For pure states |ψi
2 and |φi, F (|ψi , |φi) = 1 −
1
2 k |ψi hψ| − |φi hφ| k1 , where kρ − σk1 is the trace
distance [9, Equation (9.134)]. Thus,
F |ψm i , ψg(m)
2
1 n
n
=1−
kφ − φg(m) k1
2 m
!2
n
kφnm − |0ih0| k1 kφg(m) − |0ih0| k1
≥1−
+
2
2
23
=1−
q
2
1 − |h0|ψm i| +
q
1−
0 ψg(m)
2
2
,
(123)
where the inequality is from the triangle inequality for
trace distance. Substituting (123) into (122) yields:
q
q
2
1 − 1 − |h0|ψm i|2 − 1 − 0 ψg(m)
.
Pe (m) ≥
2
(124)
Since |h0|ψm i|2 = |a0 (m)|2 and, by the construction of
4
4
A, 1 − |a0 (m)|2 ≤ cmin
and 1 − |a0 (g(m))|2 ≤ cmin
, we
have:
r
1
Pe (m) ≥ − 2
.
(125)
2
cmin
Recalling the definition of Pe (m) in equation (121), we
substitute (125) into (120) to obtain:
r
|A| 1
B
Pe ≥
−2
,
(126)
M 2
cmin
Now, re-stating the condition for covert communication
(116) yields:
1 X
2
≥
1 − |a0 (m)|2
cmin
M
u∈A
≥
(M − |A|) 4
M
cmin
(127)
4
for
with inequality (127) because 1 − |a0 (m)|2 > cmin
all codewords in A by the construction of A. Solving
inequality in (127) for |A|
M yields the lower bound on the
fraction of the codewords in A,
|A|
1
≥ .
(128)
M
2
Combining equations (126) and (128) results in a positive lower bound
q on Bob’s probability of decoding error
cmin
1
PB
≥
−
and any n, and
e
4
cmin for ∈ 0, 16
demonstrates that (δ, )-covert communication when the
support of the innocent state at Willie is a strict subset
of the supports of each of the non-innocent states is
impossible.
IX. D ISCUSSION
In this section we put our results in the context
of research in quantum-secure covert communication.
Theorem 3 proves the achievability of the square root
scaling law for covert communication over an arbitrarily non-trivial memoryless quantum channel. This
is true notwithstanding the restriction to a specific set
of the input states imposed by our classical-quantum
channel model. Achievability shows a lower bound on
the covert communication performance, as relaxing the
classical-quantum channel restriction and allowing Alice
to choose arbitrary codewords from the entire n-fold
d-dimensional Hilbert space H⊗n could only improve
the system. However, the extent of such improvement
is an important open problem that is outside the scope
of this work. Even showing the square root scaling law
for arbitrary non-trivial quantum channels is an open
challenge. Our converse in Theorem 6 is limited to
classical-quantum channels. In fact, the assumption that
Alice’s set of input classical states maps to a set of fixed
quantum states, which in turn maps to a set of fixed
output states at Bob and Willie plays a critical role in
its proof: meeting the covertness criterion in this setting
requires that the fraction µn of non-innocent
states in an
√
n-state codeword scales as µn = O(1/ n). This greatly
simplifies the proof of the converse. This assumption
can be slightly relaxed by allowing Alice to vary a set
of input states with n. This implies that Alice could
meet the covertness criteria without ever transmitting
the innocent state by using states that get progressively
closer (in relative entropy or trace norm) to the innocent
state. However, even this small change complicates the
analysis, precluding our proof from proceeding. That
being said, a general converse for the square root law
that allows the use of arbitrary codewords from H⊗n has
been proven for the bosonic channel [8, Theorem 5]. We
conjecture that the square root scaling indeed holds for
all non-trivial quantum channels.
ACKNOWLEDGMENT
The authors thank Mark M. Wilde for pointing out
[20], a lemma from which was instrumental to proving
the main result of this paper.
R EFERENCES
[1] A. Sheikholeslami, B. A. Bash, D. Towsley, D. Goeckel, and
S. Guha, “Covert communication over classical-quantum channels,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), 2016,
pp. 2064–2068.
[2] B. A. Bash, D. Goeckel, and D. Towsley, “Square root law for
communication with low probability of detection on AWGN
channels,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT),
Cambridge, MA, Jul. 2012.
[3] B. Bash, D. Goeckel, and D. Towsley, “Limits of reliable
communication with low probability of detection on AWGN
channels,” IEEE J. Select. Areas Commun., vol. 31, no. 9, pp.
1921–1930, 2013.
[4] P. H. Che, M. Bakshi, and S. Jaggi, “Reliable deniable
communication: Hiding messages in noise,” in Proc. IEEE
Int. Symp. Inform. Theory (ISIT), Istanbul, Turkey, Jul. 2013,
arXiv:1304.6693.
[5] L. Wang, G. W. Wornell, and L. Zheng, “Limits of lowprobability-of-detection communication over a discrete memoryless channel,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT),
2015, pp. 2525–2529.
24
[6] M. R. Bloch, “Covert communication over noisy channels: A
resolvability perspective,” IEEE Trans. Inf. Theory, vol. 62,
no. 5, pp. 2334–2354, 2016.
[7] B. A. Bash, D. Goeckel, S. Guha, and D. Towsley, “Hiding
information in noise: Fundamental limits of covert wireless
communication,” IEEE Commun. Mag., vol. 53, no. 12, 2015.
[8] B. A. Bash, A. H. Gheorghe, M. Patel, J. L. Habif, D. Goeckel,
D. Towsley, and S. Guha, “Quantum-secure covert communication on bosonic channels,” Nature Commun., vol. 6, 2015.
[9] M. M. Wilde, Quantum information theory. Cambridge Univ.
Press, 2013, arXiv:1106.1445v5 [quant-ph].
[10] A. S. Holevo, Quantum Systems, Channels, Information: A
Mathematical Introduction. Berlin, Boston: De Gruyter, 2012.
[11] M. A. Nielsen and I. L. Chuang, Quantum Computation and
Quantum Information.
New York, NY, USA: Cambridge
University Press, 2000.
[12] A. Holevo, “The capacity of quantum channel with general
signal states,” IEEE Trans. Inf. Theory, vol. 44, 1998.
[13] B. Schumacher and M. D. Westmoreland, “Sending classical
information via noisy quantum channels,” Phys. Rev. A, vol. 56,
p. 131, 1997.
[14] M. Takeoka and S. Guha, “Capacity of optical communication
in loss and noise with general quantum Gaussian receivers,”
Phys. Rev. A, vol. 89, no. 4, p. 042309, 2014.
[15] V. Giovannetti, R. Garcı́a-Patrón, N. Cerf, and A. Holevo,
“Ultimate classical communication rates of quantum optical
channels,” Nature Photonics, vol. 8, no. 10, pp. 796–800, 2014.
[16] S. Guha, “Classical capacity of the free-space quantum-optical
channel,” Master’s thesis, Massachusetts Institute of Technology, 2004.
[17] C. W. Helstrom, “Quantum detection and estimation theory,” J.
Stat. Phys., vol. 1, no. 2, pp. 231–252, 1969.
[18] ——, Quantum detection and estimation theory. Academic
press, 1976.
[19] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein,
Introduction to Algorithms, 2nd ed. Cambridge, Massachusetts:
MIT Press, 2001.
[20] M. Ruskai and F. H. Stillinger, “Convexity inequalities for
estimating free energy and relative entropy,” J. Phys. A, vol. 23,
no. 12, p. 2421, 1990.
[21] K. Temme, M. J. Kastoryano, M. Ruskai, M. M. Wolf, and
F. Verstraete, “The χ2-divergence and mixing times of quantum
Markov processes,” J. Math. Phys., vol. 51, no. 12, p. 122201,
2010.
[22] M. Hayashi and H. Nagaoka, “General formulas for capacity of
classical-quantum channels,” IEEE Trans. Inf. Theory, vol. 49,
no. 7, pp. 1753–1768, 2003.
[23] T. Ogawa and M. Hayashi, “On error exponents in quantum
hypothesis testing,” IEEE Trans. Inf. Theory, vol. 50, no. 6, pp.
1368–1372, 2004.
[24] R. Bhatia, “Linear algebra to quantum cohomology: the story of
Alfred Horn’s inequalities,” Amer. Math. Monthly, pp. 289–318,
2001.
[25] T. Ogawa and H. Nagaoka, “Strong converse and Stein’s
lemma in quantum hypothesis testing,” IEEE Trans. Inf. Theory,
vol. 46, no. 7, pp. 2428–2433, 2000.
[26] L. Mirsky, “A trace inequality of John von Neumann,” Monatsh.
für Math., vol. 79, no. 4, pp. 303–306, 1975.
[27] L. Wang, “Optimal throughput for covert communication over
a classical-quantum channel,” arXiv:1603.05823, 2016.
[28] J. Hou et al., “Coding for relay networks and effective secrecy for wire-tap channels,” Ph.D. dissertation, Univ. der TU
München, 2014.
[29] D. Petz, “Quasi-entropies for finite quantum systems,” Rep.
math. phys., vol. 23, no. 1, pp. 57–65, 1986.
A PPENDIX A
D EFINITION OF THE P INCHING M AP
In this section we briefly define the pinching of an
operator.PLet spectral decomposition of an operator A
A
be A = ni=1
λi Ei , where nA is the number of distinct
eigenvalues of A, and Ei are the projectors onto their
corresponding eigenspaces. The following map is called
the pinching [23]:
EA : B → EA (B) =
nA
X
Ei BEi
(129)
i=1
Some of the properties of pinching of an operator that
we use are:
1) EA (B) commutes with A.
2) For any operator C commuting with A, Tr{BC} =
Tr{EA (B)C}.
A PPENDIX B
P ROOF OF L EMMA 4
In this section we present the proof of Lemma 4.
Consider the spectral decompositions of A and B ,
X
X
A=
λi |ai ihai | , and, B =
µj |bj ihbj | ,
i
j
where µj > 0 because B is positive-definite. Hence,
X
X
µj |bj ihbj |
λi |ai ihai |
Tr{BA {A < 0}}= Tr
j
i:λi <0
X X
=
µj λi | hai |bi i |2 ≤ 0.
j
i:λi <0
The second inequality in the lemma (equation 22) follows by replacing λi < 0 with λi > 0 and applying the
same reasoning.
A PPENDIX C
D ERIVATIVES
In this section, we evaluate the matrix derivatives used
in Section IV-B and Section IV-C. First, note for matrices
A and B and scalars x and c,
∂ cx log A
∂ cx
A =
e
= c(log A)Acx .
(130)
∂x
∂x
Now, consider the matrix derivative in Section IV-B.
n
o
∂
∂
r/2
r/2
ϕ(σ1 , r) =
− log Tr σ1 σ0 σ1−r σ0
∂r
∂r
n
o
r/2 −r r/2
∂
Tr
σ
σ
σ
σ
1 0
1
0
∂r
n
o .
=−
(131)
r/2
r/2
Tr σ1 σ0 σ1−r σ0
25
We have,
A PPENDIX D
∂ x −x x
B2A B2
∂x
x
x
x
∂ x
∂ −x
=
B 2 A−x B 2 + B 2
A
B2
∂x
∂x
x
∂ x
+ B 2 A−x
B2
∂x
x
x
x
x
1
= (log B)B 2 A−x B 2 − B 2 (log A)A−x B 2
2
x
1 x
(132)
+ B 2 A−x (log B)B 2 .
2
Applying this to (131) with A = σ1 , B = σ0 , and x = r
yields,
Suppose that we choose δ , ζ and $, M , and K such
that,
−$γn
E[PB
e ]≤e
√
n
,
(138)
and,
−ζγn
E[D(ρ̄n kρ⊗n
αn )] ≤ e
√
n
.
(139)
Thus, for sufficiently large n and any 1 > 0 and 2 > 0
there exists at least one coding scheme such that,
n
p PB
e < 1 ∩ D(ρ̄ kραn ) < 2
n
≥ 1 − p(PB
e < 1 ) − p(D(ρ̄ kραn ) < 2 )
√
√
∂
(a)
e−$γn n e−ζγn n
ϕ(σ1 , r) =
≥1−
−
,
(140)
∂rn
1
2
o
r
r
r
r
r
r
Tr σ1−rσ02 σ1 σ02 log σ1 − 12 σ02 σ1−r σ02 σ1 +σ02 σ1 σ02 σ1−r log σ0
n
o
, where (a) is from Markov’s inequality. Thus, for any
r/2
r/2
ς1 < $ and ς3 < ζ ,
Tr σ1 σ0 σ1−r σ0
√
√
(133)
p PB < e−ς1 γn n ∩ D(ρ̄n kρ ) < e−ς3 γn n
which is uniformly continuous with respect to r ∈ [0, 1],
and we have,
∂
ϕ(σ1 , 0) = D(σ1 kσ0 ).
∂r
Next, consider the matrix derivative in Section IV-C,
∂
∂
−r
ψ(ρ1 , r) =
log Tr{ρ1+r
1 ρ0 }
∂r
∂r
−r
∂
Tr{ρ1+r
1 ρ0 }
= ∂r
.
−r
Tr{ρ1+r
1 ρ0 }
αn
e
≥1−e
√
−($−ς1 )γn n
− e−(ζ−ς3 )γn
√
n
→ 1 as n → ∞.
(141)
A PPENDIX E
P ROOF OF L EMMA 7
First recall from Lemma 1 that, for any quantum states
A and B , and a real number c > 0,
(134)
We have,
∂ 1+x −x
∂ −x
∂ 1+x
A
B = A1+x
B
+
A
B −x
∂x
∂x
∂x
= A1+x (− log B)B −x + A(log A)Ax B −x .
(135)
Applying this to (134) with A = ρ1 , B = ρ0 , and x = r
yields,
−r
r −r
Tr{ρ1+r
∂
1 (− log ρ0 )ρ0 +ρ1 (log ρ1 )ρ1 ρ0 }
ψ(ρ1 , r)=
−r
∂r
Tr{ρ1+r
1 ρ0 }
1+r
Tr{ρ−r
0 ρ1 (log ρ1 − log ρ0 )}
=
,
(136)
−r
Tr{ρ1+r
1 ρ0 }
which is uniformly continuous with respect to r ∈ [0, 1],
and we have,
∂
ψ(ρ1 , 0) = D(ρ1 kρ0 ).
(137)
∂r
The development of (134)-(137) is known as the convergence of the Rényi relative entropy to the quantum
relative entropy [29].
1
Tr{A − A1−c B c }
(142)
c
1
D(AkB) ≤ Tr A1+c B −c − A .
(143)
c
Let X be a Hermitian matrix, I an identity matrix, and
r a real number. Provided that kXk ≤ 1, where k.k is
any submultiplicative norm (e.g., trace norm), we have,
∞
X
r
r
(I + X) =
Xi
(144)
i
D(AkB) ≥
i=0
We have A = αC+(1−α)B = B+α(C−B), where 0 ≤
α ≤ 1 to make A a quantum state. By (143), D(AkB)
can be upper-bounded as follows:
D(AkB) ≤ c−1 (Tr{(B + α(C − B))1+c B −c } − 1)
= c−1 (Tr{B 1+c (I + αB −1 (C − B))1+c B −c } − 1)
∞
X
1+c
(a) −1
= c (Tr{B
(αB −1 (C − B))i } − 1)
i
i=0
∞
X
1+c i
−1
=c (
α Tr{B(B −1 (C − B))i } − 1)
i
i=0
−1
=c
(Tr{B} + (1 + c)α Tr{(C − B)}
26
(1 + c)c 2
α Tr{(C − B)2 B −1 }
2
(1 + c)c(−1 + c) 3
+
α Tr{B(B −1 (C − B))3 }
6
∞
X
1+c i
+
α Tr{B(B −1 (C − B))i } − 1)
i
+
i=4
=
(1 + c) 2
α Tr{(C − B)2 B −1 }
2
(1 − c2 ) 3
−
α Tr{B(B −1 (C − B))3 }
6
∞
X
1+c i
−1
+c
α Tr{B(B −1 (C − B))i } (145)
i
i=4
where (a) follows from (144) when α ≤ kB −1 (C −
B)k−1 .
By (142), D(AkB) can be lower-bounded as follows:
D(AkB) ≥ c−1 Tr{A − (B + α(C − B))1−c B c }
∞
X
1 − c i 1−c −1
(a) −1
= c (1 − Tr{
α B (B (C − B))i B c })
i
i=0
∞
X
1−c i
−1
= c (1 −
α Tr{B(B −1 (C − B))i })
i
i=0
−1
(1 − Tr{B} − (1 − c)α Tr{(C − B)}
(1 − c)(−c) 2
α Tr{B −1 (C − B)2 }
−
2
(1 − c)(−c)(−1 − c) 3
α Tr{B(B −1 (C − B))3 }
−
6
∞
X
1−c i
−
α Tr{B(B −1 (C − B))i })
i
=c
i=4
(1 − c) 2
=
α Tr{(C − B)2 B −1 }
2
1 − c2 3
−
α Tr{B(B −1 (C − B))3 }
6
∞
X
1−c i
−1
−c
α Tr{B(B −1 (C − B))i } (146)
i
i=4
where again (a) follows from (144) when α ≤ kB −1 (C−
B)k−1 .
By (145) and (146) we have:
D(AkB) ≤
(1 + c) 2
α Tr{(C − B)2 B −1 } + O(α3 )
2
and,
(1 − c) 2
α Tr{(C − B)2 B −1 } + O(α3 ).
2
Since c > 0 is arbitrary, we conclude:
D(AkB) ≥
α2
Tr{(C − B)2 B −1 } + O(α3 ).
2
for 0 ≤ α ≤ min{1, kB −1 (C − B)k−1 }.
D(AkB) =
| 7 |
A short note about diffuse Bieberbach groups
arXiv:1703.04972v3 [] 26 Sep 2017
Rafał Lutowski∗, Andrzej Szczepański∗ and Anna Gąsior†
Abstract
We consider low dimensional diffuse Bieberbach groups. In particular we classify diffuse Bieberbach
groups up to dimension 6. We also answer a question from [7, page 887] about the minimal dimension of
a non-diffuse Bieberbach group which does not contain the three-dimensional Hantzsche-Wendt group.
1
Introduction
The class of diffuse groups was introduced by B. Bowditch in [2]. By definition a group Γ is diffuse, if every
finite non-empty subset A ⊂ Γ has an extremal point, i.e. an element a ∈ A such that for any g ∈ Γ \ {1}
either ga or g −1 a is not in A. Equivalently (see [7]) a group Γ is diffuse if it does not contain a non-empty
finite set without extremal points.
The interest in diffuse groups follows from Bowditch’s observation that they have the unique product
property 1 . Originally unique products were introduced in the study of group rings of discrete, torsion-free
groups. More precisely, it is easily seen that if a group Γ has the unique product property, then it satisfies
Kaplansky’s unit conjecture. In simple terms this means that the units in the group ring C[Γ] are all trivial,
i.e. of the form λg with λ ∈ C∗ and g ∈ Γ. For more information about these objects we refer the reader to
[1], [9, Chapter 10] and [7]. In part 3 of [7] the authors prove that any torsion-free crystallographic group
(Bieberbach group) with trivial center is not diffuse. By definition a crystallographic group is a discrete and
cocompact subgroup of the group O(n) n Rn of isometries of the Euclidean space Rn . From Bieberbach’s
theorem (see [12]) the normal subgroup T of all translations of any crystallographic group Γ is a free abelian
group of finite rank and the quotient group (holonomy group) Γ/T = G is finite.
In [7, Theorem 3.5] it is proved that for a finite group G:
1. If G is not solvable then any Bieberbach group with holonomy group isomorphic to G is not diffuse.
2. If every Sylow subgroup of G is cyclic then any Bieberbach group with holonomy group isomorphic to
G is diffuse.
3. If G is solvable and has a non-cyclic Sylow subgroup then there are examples of Bieberbach groups
with holonomy group isomorphic to G which are and examples which are not diffuse.
Using the above the authors of [7] classify non-diffuse Bieberbach groups in dimensions ≤ 4. One of the
most important non-diffuse groups is the 3-dimensional Hantzsche-Wendt group, denoted in [11] by ∆P . For
the following presentation
∆P = hx, y | x−1 y 2 x = y −2 , y −1 x2 y = x−2 i
the maximal abelian normal subgroup is generated by x2 , y 2 and (xy)2 (see [6, page 154]). At the end of part
3.4 of [7] the authors ask the following question.
∗ Institute
of Mathematics, University of Gdańsk, ul. Wita Stwosza 57, 80-952 Gdańsk, Poland
of Mathematics, Maria Curie-Skłodowska University, Pl. Marii Curie-Skłodowskiej 1, 20-031 Lublin, Poland
E-mail addresses: [email protected], [email protected], [email protected]
2010 Mathematics Subject Classification: Primary 20E40; Secondary 20H15, 20F65.
Key words and phrases: unique product property; diffuse groups, Bieberbach groups.
1 The group Γ is said to have the unique product property if for every two finite non-empty subsets A, B ⊂ Γ there is an
element in the product x ∈ AḂ which can be written uniquely in the form x = ab with a ∈ A and b ∈ B.
† Institute
c 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license
http://creativecommons.org/licenses/by-nc-nd/4.0/
Question 1. What is the smallest dimension d0 of a non-diffuse Bieberbach group which does not contain
∆P ?
The answer for the above question was the main motivation for us. In fact we prove, in the next section,
that d0 = 5. Moreover, we extend the results of part 3.4 of [7] and with support of computer, we present the
classification of all Bieberbach groups in dimension d ≤ 6 which are (non)diffuse.
2
(Non)diffuse Bieberbach groups in dimension ≤ 6.
We use the computer system CARAT [10] to list all Bieberbach groups of dimension ≤ 6.
Our main tools are the following observations:
1. The property of being diffuse is inherited by subgroups (see [2, page 815]).
2. If Γ is a torsion-free group, N C Γ such that N and Γ/N are both diffuse then Γ is diffuse (see [2,
Theorem 1.2 (1)]).
Now let Γ be a Bieberbach group of dimension less than or equal to 6. By the first Betti number β1 (Γ)
we mean the rank of the abelianization Γ/[Γ, Γ]. Note that we are only interested in the case when β1 (Γ) > 0
(see [7, Lemma 3.4]). Using a method of E. Calabi [12, Propostions 3.1 and 4.1], we get an epimorphism
f : Γ → Zk , where k = β1 (Γ).
(1)
From the assumptions ker f is a Bieberbach group of dimension < 6. Since Zk is a diffuse group our problem
is reduced to the question about the group ker f.
Remark 1. Up to conjugation in GL(n + 1, R), Γ is a subgroup GL(n, Z) n Zn ⊂ GL(n + 1, Q), i.e. it is a
group of matrices of the form
A a
,
0 1
where A ∈ GL(n, Z), a ∈ Qn . If p : Γ → GL(n, Z) is a homomorphism which takes the linear part of every
element of Γ
A a
A a
p
= A for every
∈ Γ,
0 1
0 1
then there is an isomorphism ρ : G → p(Γ) ⊂ GL(n, Z). It is known that that the rank of the center of a
Bieberbach group equals the first Betti number (see [5, Proposition 1.4]). By [12, Lemma 5.2], the number
of trivial constituents of the representation ρ is equal to k. Hence without lose of generality we can assume
that the matrices in Γ are of the form
A B a
0 I b
0 0 1
where A ∈ GL(n − k, Z), I is the identity matrix of
a ∈ Qn−k and b ∈ Qk . Then f may be defined by
A
f 0
0
degree k, B is an integral matrix of dimension n − k × k,
B
I
0
a
b = b
1
and one can easily see that the map F : ker f → GL(n − k + 1, Q) given by
A B a
A a
F 0 I 0 =
0 1
0 0 1
is a monomorphism and hence its image is a Bieberbach group of rank n − k.
2
Now if Γ has rank 4 we know that the only non-diffuse Bieberbach group of dimension less than or equal
to 3 is ∆P . Using the above facts we obtain 17 non-diffuse groups. Note that the list from [7, section 3.4]
consists of 16 groups. The following example presents the one which is not in [7] and illustrates computations
given in the above remark.
Example 1. Let Γ be a crystallographic group denoted by "05/01/06/006" in [3] as a subgroup of GL(5, R).
Its non-lattice generators are as follows
0 1 −1 0 1/2
0 −1 1 0 1/2
0 1 0
−1 0 1 0
0 1/2
0
0
0
0 1 0 1/2 and B = −1 1 0
A= 0
.
0 0 0 −1 0
0
0 0 −1 1/2
0 0 0
0
1
0
0 0 0
1
Conjugating the above matrices by
1
0
Q=
1
0
0
1
1
0
0
0
0
0
0
1
0
0
0
1
0
0
0
0
0
∈ GL(5, Z)
0
1
one gets
1
0
AQ =
0
0
0
0
−1
0
0
0
0
0
−1
0
0
−1
0 1/2
0
1 0
Q
0 1/2
and B = 0
0
1 0
0
0 1
0 0
1 0
0 −1
0 0
0 0
−1 0
0 1/2
0
0
.
1
0
0
1
Now its easy to see that the rank of the center of Γ equals 1 and the kernel of the epimorphism Γ → Z is
isomorphic to a 3-dimensional Bieberbach group Γ0 with the following non-lattice generators:
−1 0 0
0
1 0
0 1/2
0 1 0 1/2
0 −1 0
0
0
A0 =
0 0 −1 1/2 and B = 0 0 −1 0
0 0 0
1
0 0
0
1
Clearly the center of Γ0 is trivial, hence it is isomorphic to the group ∆P .
Now we formulate our main result.
Theorem 1. The following table summarizes the number of diffuse and non-diffuse Bieberbach groups of
dimension ≤ 6.
Dimension
1
2
3
4
5
6
Total
1
2
10
74
1060
38746
Non-diffuse
0
0
1
17
352
19256
Diffuse
1
2
9
57
708
19490
Proof. If a group has a trivial center then it is not diffuse. In other case we use the Calabi (1) method and
induction. A complete list of groups was obtained using computer algebra system GAP [4] and is available
here [8].
Before we answer Question 1 from the introduction, let us formulate the following lemma:
3
Lemma 1. Let α, β be any generators of the group ∆P . Let γ = αβ, a = α2 , b = β 2 , c = γ 2 . Then the
following relations hold:
[a, b] = 1
aβ = a−1 aγ = a−1
α
−1
[a, c] = 1 b = b
bγ = b−1
(2)
α
−1
β
−1
[b, c] = 1 c = c
c =c
where xy := y −1 xy denotes the conjugation of x by y.
The proof of the above lemma is omitted. Just note that the relations are easily checked if consider the
following representation of ∆P as a matrix group
−1 0 0
0 +
1 0
0 1/2
*
0 1 0 1/2
0 −1 0 1/2
x=
0 0 −1 0 , y = 0 0 −1 1/2 ⊂ GL(4, Q).
0 0 0
1
0 0
0
1
Proposition 1. There exists an example of a five dimensional non-diffuse Bieberbach group which does not
contain any subgroup isomorphic to ∆P .
Proof. Let Γ be the Bieberbach group enumerated in CARAT
elements γ1 , γ2 , l1 , . . . , l5 where
1
0 1 0 0
0
0
0
1 0 0 0
0
0
0 0 1 0
0 1/2
and γ2 = 0
γ1 =
0
0 0 0 −1 0 1/4
0
0 0 0 0 −1 0
0
0 0 0 0
0
1
as "min.88.1.1.15". It generated by the
0
−1
0
0
0
0
0
0
−1
0
0
0
0 0 1/2
0 0 0
0 0 0
−1 0 0
0 1 1/2
0 0 1
and l1 , . . . , l5 generate the lattice L of Γ:
li :=
I5
0
ei
1
where ei is the i-th column of the identity matrix I5 . Γ fits into the following short exact sequence
π
1 −→ L −→ Γ −→ D8 −→ 1
where π takes the linear part of every element of Γ:
A a
7 A
→
0 1
and the image D8 of π is the dihedral group of order 8.
Now assume that Γ0 is a subgroup of Γ isomorphic to ∆P . Let T be its maximal normal abelian subgroup.
Then T is free abelian group of rank 3 and Γ0 fits into the following short exact sequence
1 −→ T −→ Γ0 −→ C22 −→ 1,
where Cm is a cyclic group of order m. Consider the following commutative diagram
1
T ∩L
T
H = π(T )
1
1
L
π −1 (H)
H
1
We get that H must be an abelian subgroup of D8 = π(Γ) and T ∩ L is a free abelian group of rank 3 which
lies in the center of π −1 (H) ⊂ Γ. Now if H is isomorphic to either to C4 or C22 then the center of π −1 (H) is
4
of rank at most 2. Hence H must be the trivial group or the cyclic group of order 2. Note that as Γ0 ∩ L is
a normal abelian subgroup of Γ0 it must be a subgroup of T :
T ∩ L ⊂ Γ0 ∩ L ⊂ T ∩ L,
hence T ∩ L = Γ0 ∩ L. We get the following commutative diagram with exact rows and columns
1
1
1
1
T ∩L
T
H
1
1
Γ0 ∩ L
Γ0
G
1
1
1
C22
C22
1
1
1
1
where G = π(Γ0 ). Consider two cases:
1. H is trivial. In this case G is one of the two subgroups of D8 isomorphic to C22 . Since the arguments
for both subgroups are similar, we present only one of them. Namely, let
G = hdiag(1, −1, −1, −1, 1), diag(−1, −1, 1, 1, 1).i
In this case Γ0 is generated by the matrices of the form
−1
1 0
0
0 0 x1 − 21
0 −1 0
0
0
0
x
2
0 0 −1 0 0
x3
and β = 0
α=
0 0
0
0 −1 0
x4
1
0 0
0
0
0 1 x5 − 2
0 0
0
0 0
1
0
where xi , yi ∈ Z for i = 1, . . . , 5. If c = (αβ)2 then
0 0 0
0 0 0
0 0 0
cα − c−1 =
0 0 0
0 0 0
0 0 0
0
−1
0
0
0
0
0 0 0 y1 + 21
0 0 0 y2 − 12
1 0 0
y3
,
0 1 0 y4 + 12
0 0 1
y5
0 0 0
1
by Lemma 1 cα = c−1 , but
0 0
0
0 0
0
0 0
0
.
0 0
0
0 0 4y5 + 4x5 − 2
0 0
0
Obviously solutions of the equation 4y5 + 4x5 − 2 = 0 are never integral and we get a contradiction.
2. H is of order 2. Then G
γ1 γ2 L and γ2 L, hence
0 −1
1 0
0 0
α=
0 0
0 0
0 0
= D8 and H is the center of G. The generators α, β of Γ0 lie in the cosets
0
0
−1
0
0
0
0 0
0 0
0 0
1 0
0 −1
0 0
x1
x2 −
x3 −
x4 +
x5 +
1
1
1
0
2
1
2 and β = 0
1
0
4
1
0
2
0
5
0
−1
0
0
0
0
0
0
−1
0
0
0
0 0 y1 −
0 0
y2
0 0
y3
−1 0
y4
0 1 y5 −
0 0
1
1
2
1
2
where xi , yi ∈ Z for i = 1, . . . , 5, as before. Setting a = α2 , b = β 2 we get
0 0 0 0 0 2 − 4y1
0 0 0 0 0
0
0 0 0 0 0
0
ab − ba =
0
0 0 0 0 0
0 0 0 0 0
0
0 0 0 0 0
0
and again the equation 2 − 4y1 = 0 does not have an integral solution.
The above considerations show that Γ does not have a subgroup which is isomorphic to ∆P .
Acknowledgements
This work was supported by the Polish National Science Center grant 2013/09/B/ST1/04125.
Bibliography
References
[1] A. Bartels, W. Lück, and H. Reich. On the Farrell-Jones conjecture and its applications. J. Topol.,
1(1):57–86, 2008.
[2] B. H. Bowditch. A variation on the unique product property. J. London Math. Soc. (2), 62(3):813–826,
2000.
[3] H. Brown, R. Bülow, J. Neubüser, H. Wondratschek, and H. Zassenhaus. Crystallographic groups of
four-dimensional space. Wiley-Interscience [John Wiley & Sons], New York-Chichester-Brisbane, 1978.
Wiley Monographs in Crystallography.
[4] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.8.3, 2016. http://www.
gap-system.org/.
[5] H. Hiller and C.-H. Sah. Holonomy of flat manifolds with b1 = 0. Quart. J. Math. Oxford Ser. (2),
37(146):177–187, 1986.
[6] J. A. Hillman. Four-manifolds, geometries and knots, volume 5 of Geometry & Topology Monographs.
Geometry & Topology Publications, Coventry, 2002.
[7] S. Kionke and J. Raimbault. On geometric aspects of diffuse groups. Doc. Math., 21:873–915, 2016.
With an appendix by Nathan Dunfield.
[8] R. Lutowski. Diffuse property of low dimensional Bieberbach groups.
~rlutowsk/diffuse/
https://mat.ug.edu.pl/
[9] W. Lück. L2 -invariants: theory and applications to geometry and K-theory, volume 44 of Ergebnisse
der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results
in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. SpringerVerlag, Berlin, 2002.
[10] J. Opgenorth, W. Plesken and T. Schultz. CARAT - Crystallographic Algorithms and Tables, Version
2.0, 2003. http://wwwb.math.rwth-aachen.de/CARAT/
[11] S.D. Promislow. A simple example of a torsion-free, nonunique product group. Bull. London Math. Soc.,
20(4):302–304, 1988.
[12] A. Szczepański. Geometry of crystallographic groups, volume 4 of Algebra and Discrete Mathematics.
World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2012.
6
| 4 |
AN EXOTIC GROUP AS LIMIT OF FINITE SPECIAL LINEAR GROUPS
arXiv:1603.00691v3 [] 29 May 2017
ALESSANDRO CARDERI AND ANDREAS THOM
Abstract. We consider the Polish group obtained as the rank-completion of an inductive limit
of finite special linear groups. This Polish group is topologically simple modulo its center, it
is extremely amenable and has no non-trivial strongly continuous unitary representation on a
Hilbert space.
Contents
Introduction
1. No non-trivial unitary representations
2. A second route to extreme amenability
3. Topological simplicity of the central quotient
4. On 1-discrete subgroups of A(q)
Acknowledgements
References
1
3
7
10
11
12
13
Introduction
Let Fq be a finite field with q = ph elements and let SLn (q) be the special linear group over
Fq . We denote by r(k) the rank of a matrix k ∈ Mn (Fq ). We equip the groups SLn (q) with the
(normalized) rank-distance, d(1, h) := n1 r(1 − h) ∈ [0, 1]. Note that d is a bi-invariant metric,
which means that it is a metric such that d(1h, 1k) = d(h, k) = d(h1, k1) for every 1, h, k ∈ SLn (q).
For every n ∈ N, we consider the diagonal embedding
1 0
.
ϕn : SL2n (q) → SL2n+1 (q), defined by ϕn (1) :=
0 1
Observe that for every n, ϕn is an isometric homomorphism. We denote by A0 (q) the
countable group arising as the inductive limit of the family {(SL2n (q), ϕn )}n and observe that
we can extend the rank-metric d canonically to A0 (q). Let A(q) be the metric-completion of
A0 (q) with respect to d, i.e., A(q) is a Polish group and the natural extension of the rank-metric
is complete and bi-invariant. The purpose of this note is to study the topological group A(q).
Date: April 10, 2018.
1
2
ALESSANDRO CARDERI AND ANDREAS THOM
In many ways one can think of A(q) as a finite characteristic analogue of the unitary
group of the hyperfinite II1 -factor, which arises as a certain metric inductive limit of the
sequence of unitary groups U(2) ⊂ U(4) ⊂ U(8) ⊂ · · · , analogous to the construction above.
However, the group A(q) reflects at the same time in an intrinsic way asymptotic properties
of the sequence of finite groups SLn (q) – in particular of the interplay of group structure,
normalized counting measure, and normalized rank-metric. In this note we want to develop
this analogy and prove various results that show similarities but also differences to the II1 factor case. Some of the techniques that we are using are inspired by the theory of von
Neumann algebras whereas others are completely independent.
Our main result is the following theorem.
Theorem. The Polish group A(q) has the following properties:
•
•
•
•
every strongly continuous unitary representation of A(q) on a Hilbert space is trivial,
the group A(q) is extremely amenable,
the center of A(q) is isomorphic to F×q and the quotient by its center is topologically simple,
it contains every countable amenable group and, in case q is odd, the free group on two
generators as discrete subgroups.
Before we start recalling some of the concepts that are used in the statement of the theorem,
let us make some further remarks. In a similar way, one can form an inductive limit of unital
rings
Fq ⊂ M2 (Fq ) ⊂ M4 (Fq ) ⊂ · · · ⊂ M0 (Fq ),
where again, the normalized rank-metric allows to complete M0 (Fq ) to a complete von
Neumann regular ring M(Fq ). Following von Neumann, we can view M(Fq ) as the coordinatization of a continuous geometry, see [27] for more details. The unit group of this ring is
naturally isomorphic to A(q). Indeed, any invertible element in M(Fq ) must be the limit of
invertible elements and a rank-one perturbation takes care of the determinant. Note that the
algebra M(Fq ) does not depend on the special choice of inclusions. Indeed, Halperin showed
in [19] that (in complete analogy to the II1 -factor situation) any choice whatsoever yields the
same algebra. We thank Gábor Elek for pointing out Halperin’s work.
Unitary representability. We recall that a group is unitarily representable if it embeds in the
unitary group of a Hilbert space as a topological group and it is exotic if every continuous
unitary representation on a Hilbert space is trivial, see [3]. While all second countable locally
compact groups are unitarily representable – via the regular representation – there are several
examples of Polish groups that are not unitarily representable. For example, the Banach space
ℓp is unitarily representable if and only if 1 ≤ p ≤ 2, [25], see also [12, 13]. The first example
of an exotic group was found by Herer and Christensen in [20] and a surprising result of
Megrelishvili, [26], states that the group of orientation preserving homeomorphisms of the
interval has no non-trivial unitary representation (not even a representation on a reflexive
AN EXOTIC GROUP AS LIMIT OF FINITE SPECIAL LINEAR GROUPS
3
Banach space). However, most of the known examples of exotic groups are either abelian or
do not have a compatible bi-invariant metric.
Amenability and extreme amenability. A topological group is said to be amenable if there exists an invariant mean on the commutative C∗ -algebra of left-uniformly continuous complexvalued functions on the group. It is a standard fact that any topological group that admits a
dense locally finite subgroup is amenable – in particular, the group A(q) is amenable for any
prime-power q. See Appendix G of [4] for more details.
A topological group is said to be extremely amenable if each continuous action of the group
on a compact topological space admits a fixed point. See [29] for more details. Let us remark
that every amenable exotic group is extremely amenable. Indeed, by amenability, any action
on a compact space preserves a probability measure; since the Koopman representation of
the action is by hypothesis trivial, the measure has to be a Dirac measure. The first example
of an extremely amenable group was found by Herer and Christensen and they proved
extreme amenability exactly by showing that the group is amenable and exotic. Nowadays,
several examples of extremely amenable groups are known, for example the unitary group of
a separable Hilbert space [18], the automorphism group of a standard probability space [14],
and the full group of a hyperfinite equivalence relation [15]. However as for the previous
property, most of the known examples do not have a compatible bi-invariant metric.
Generalizations and Open problems. The proof of the theorem is robust and can be generalized to other (inductive) sequences of non-abelian finite (quasi-)simple groups of increasing
rank. Let us finish the introduction by listing a number of open problems that we find
interesting and challenging.
Is A(q) contractible?
Is A(q)/F×q simple?
Does it have a unique Polish group topology?
More generally, is every homomorphism from A(q) to another Polish group automatically continuous?
• Does it have (isometric) representations on a reflexive Banach space?
• Does A(q)/F×q have the bounded normal generation property, see [6]?
•
•
•
•
The first question has a positive answer in the case of the hyperfinite II1 -factors by work
of Popa-Takesaki [31], but we were unable to generalize the methods to our setting. A more
detailed study of the more basic properties of the algebras M(Fq ) and their unit groups A(q)
will be subject of another study.
1. No non-trivial unitary representations
In this first section, we will prove that A(q) has no non-trivial continuous representation
on a Hilbert space, that is we will show that A(q) is exotic.
4
ALESSANDRO CARDERI AND ANDREAS THOM
Definition 1.1. A complex continuous function ψ on a Polish group G is positive definite if
ψ(1G ) = 1 and for every a1 , . . . , an ∈ C and all 11 , . . . , 1n ∈ G, we have
n
X
ai ā j ψ(1−1
j 1i ) ≥ 0.
i, j=1
A positive definite function χ is a character if it is conjugation invariant, that is for every
1, h ∈ G, we have χ(h1h−1 ) = χ(1).
We will use some easy facts about positive definite functions which are covered in the
Appendix C of [4]. For example the Cauchy-Schwarz inequality for positive functions
implies that such functions have their maximal value at the identity so that any positive
definite function is bounded in absolute value by 1. We say that a positive definite function
is trivial if ψ(1) = 1 for every 1 ∈ G. Every positive definite function ψ also satisfies the
following standard inequality, which can be found in [4, Proposition C.4.2]:
(⋆)
|ψ(1) − ψ(h)|2 ≤ 2(1 − Re(ψ(1−1 h))).
There is an important relation between positive definite functions and unitary representations: the GNS construction; any positive definite function gives rise to a unitary representation and the diagonal matrix coefficients of any unitary representation are positive
definite functions. In particular, a group without any non-trivial continuous positive definite
function has no non-trivial strongly continuous unitary representation on a Hilbert space,
see Appendix C of [4] for more informations.
The following useful proposition will be needed in the course of the proof.
Proposition 1.2. Let n, m ∈ N. The canonical inclusion SL2n (q) ֒→ A0 (q) extends to an isometric
m
homomorphism SL2n (q2 ) ֒→ A0 (q).
Proof. For every k, the field Fq2k+1 is an algebraic extension of degree 2 of Fq2k , hence there
exists σk ∈ Fq2k+1 such that {1, σk } is a Fq2k -base of Fq2k+1 and there are αk , βk ∈ Fq2k such that
k+1
σ2k = αk σk + βk . For every k, and for every 1 ∈ SL2n (q2 ) there are are 10 , 11 ∈ M2n (Fq2k ) such
that 1 = 10 + σk 11 . We define
k+1
k
1
β
1
0
1
k
.
In,k : M2n q2
→ M2n+1 q2 , as In,k (1) :=
11 10 + αk 11
It is a straightforward computation to show that for every 1, h ∈ M2n (Fq2k+1 ), we have that
In,k (1h) = In,k (1)In,k (h). Moreover, if 1 ∈ SL2n (q), then det(In,k (1)) = 1:
10 + σk 11 σk 10 + (αk σk + βk )11
10
βk 11
det
= det
11
10 + αk 11
11 10 + αk 11
10 + σk 11
0
= det
11
10 + (αk − σk )11
AN EXOTIC GROUP AS LIMIT OF FINITE SPECIAL LINEAR GROUPS
5
by hypothesis det(10 + σk 11 ) = 1 and αk − σk is the Galois-conjugate of σk , hence det(10 +
k
k+1
(αk − σk )11 ) = 1. This proves that In,k : SL2n (q2 ) → SL2n+1 (q2 ) is a well-defined group
homomorphism. The homomorphism In,k is also isometric, in fact v = v0 + σk v1 ∈ ker(1) if
and only if (v0 , v1 ) ∈ ker(In,k (1)). Composing the maps
m
Im+n−1,0 ◦ · · · ◦ In+1,m−2 ◦ In,m−1 : SL2n (q2 ) → SL2n+m (q)
m
we obtain an inclusion SL2n (q2 ) ֒→ A0 (q) that extends the standard inclusion SL2n (q) ֒→
A0 (q). This finishes the proof.
Remark 1.3. Using the idea in the proof of Proposition 1.2, one can even show that the
m
∞
inclusion SL2n (q) ֒→ A0 (q) extends to SL2n (q2 ) := ∪n SL2n (q2 ) – a special linear group over
∞
an infinite field. The characters of the group SL2n (q2 ) were completely classified by Kirillov
[21] for n > 1 and by the work of Peterson and the second author [30] for n = 1. Any
non-trivial irreducible character of SL2n (q∞ ) is induced by its center, that is, χ(1) = χ(h) for
every non-central 1, h ∈ SL2n (q∞ ). This can be used in a straightforward way to show that the
group A(q) has no non-trivial continuous character. Note that Alt(2n ) ⊂ SL2n (q) and that also
the work of Dudko and Medynets [7] can be invoked to study characters on A(q).
Now, Theorem 2.22 of [1] states that any amenable Polish group which admits a complete
bi-invariant metric is unitary representable if and only if its characters separate the points.
Whence, our observation from above implies readily that A(q) cannot be embedded into
any unitary group of a Hilbert space as a topological group. Note that only recently, nonamenable polish groups with a complete bi-invariant metric were found, which are unitarily
representable but fail to have sufficiently many characters, see [2].
In order to show that A(q) does not have any unitary representation on a Hilbert space,
we will show that every continuous positive definite function is trivial. For this, we need the
following lemma.
Lemma 1.4. Let q be a prime power and let ψ : SLn (q) → C be a positive definite function. If there
exists a non-central element 1 ∈ SLn (q) such that |1 − ψ(x−1 1x)| < ε for some ε ∈ (0, 1) and for all
x ∈ SLn (q), then
|1 − ψ(h)| < 9(2ε + 16/q)1/2 , ∀h ∈ SLn (q).
Proof. We set
χ(h) :=
X
1
ψ(x−1 hx)
|SLn (q)|
x∈SLn (q)
and note that χ : SLn (q) → C is a character. Hence, we can write
X
λπ χπ (h), h ∈ SLn (q),
χ(h) = λ +
π
where π runs through the non-trivial irreducible representations π of SLn (q), χπ denotes the
P
normalized character of π, and λ + π λπ = 1, λ ≥ 0 and λπ ≥ 0 for all π. By a result
6
ALESSANDRO CARDERI AND ANDREAS THOM
of Gluck [16, Theorem 3.4 and Theorem 5.3], for every non-central element h ∈ SLn (q) and
every non-trivial, normalized, and irreducible character χπ we have |χπ (h)| < 8/q. By our
assumption |1 − χ(1)| < ε and thus
X
λπ χπ (1) > 1 − ε − 8/q.
λ = χ(1) −
π
We conclude that
|1 − χ(h)| ≤ |χ(h) − λ| + (1 − λ) ≤ 2(1 − λ) < 2ε + 16/q,
∀h ∈ SLn (q).
For fixed h, the preceding inequality, Markov’s inequality and the fact that ϕ is bounded in
absolute value by 1 imply that
n
o
|SLn (q)|
x ∈ SLn (q) |1 − ψ(x−1 hx)| ≥ 3(2ε + 16/q) ≤
3
and hence that at least 2/3 of all elements in the conjugacy class of h satisfy
|1 − ψ(k)| < 3(2ε + 16/q).
Since this holds for all conjugacy classes, we can set
A := k ∈ SLn (q) | |1 − ψ(k)| < 3(2ε + 16/q)
and conclude that |A| ≥ 2/3 · |SLn (q)|. Therefore for any h ∈ SLn (q), the set A ∩ hA−1 , ∅ and
thus there exist k1 , k2 ∈ A such that h = k1 k2 . So the inequality (⋆) yields
|ψ(k1 ) − ψ(h)|2 ≤ 2(1 − Re(ψ(k2 ))) ≤ 6(2ε + 16/q)
and hence
|1 − ψ(h)| ≤ |ψ(1) − ψ(k1 )| + |ψ(k1 ) − ψ(h)| ≤ 3(2ε + 16/q) + (6(2ε + 16/q))1/2 ≤ 9(2ε + 16/q)1/2 .
This proves the claim.
Let us now fix a positive definite function ψ : A(q) → C. Let ε > 0 and choose δ > 0 such
that d(1A(q) , x) < δ implies |1 − ψ(x)| < ε. For n large enough, the group SL2n (q) will contain
a non-central element 1 with d(1A(q) , 1) < δ. By conjugation invariance of the metric, we
conclude that |1 − ψ(x1x−1 )| < ε for all x ∈ A(q). Moreover, SL2n (q) ⊂ SL2n (qm ) ⊂ A(q) as
explained in Proposition 1.2. So applying the previous lemma to SL2n (qm ), we get that the
restriction of ψ to SL2n (q) satisfies
|1 − ψ(h)| < 9(2ε + 16/qm )1/2 ,
∀h ∈ SL2n (q).
Since this holds for any m ∈ N, we conclude that |1 − ψ(h)| ≤ 9(2ε)1/2 for all h ∈ SL2n (q) and n
large enough. Since ε > 0 was arbitrary, we must have that the restriction of ψ to any SL2n (q)
is trivial, and hence by density of A0 (q) in A(q) we can conclude the proof.
AN EXOTIC GROUP AS LIMIT OF FINITE SPECIAL LINEAR GROUPS
7
Remark 1.5. For the proof of our main theorem, it is not necessary to apply Gluck’s concrete
estimates. Indeed, it follows already from Kirillov’s work on characters of SLn (k) for infinite
fields k [21] that character values of non-central elements in SLn (q) have to be uniformly small
as q tends to infinity. This is enough to conclude the proof. One can see the existence of small
uniform bounds either by going through the techniques in Kirillov’s proof (see also the proof
of Theorem 2.4 in [30]) or by applying Kirillov’s character rigidity to a suitable ultra-product
of finite groups SLn (q) (for n fixed and q variable) which can be identified with SLn (k), where
k is the associated ultraproduct of finite fields, which is itself an infinite field.
2. A second route to extreme amenability
2.1. Lévy groups. As outlined in the beginning, every amenable and exotic group is extremely amenable. However, there is a different route to extreme amenability using the
phenomenon of measure-concentration. In this section, we want to show that A(q) is a Lévy
group and hence extremely amenable. See [29] for more background on this topic. A similar approach was recently carried out in [5] to give a direct proof that the unitary group
of the hyperfinite II1 -factor is extremely amenable. As a by-product, we can give explicit
bounds on the concentration function that are useful to give quantitative bounds in various
non-commutative Ramsey theoretic applications.
Definition 2.1. A metric measure space (X, d, µ) consists in a set X equipped with a distance
d and measure µ which is Borel for the topology induced by the metric, see [29]. In the
following we will always assume that µ is a probability measure. Given a subset A ⊂ X we
will denote by Nr (A) the r-neighbourhood of A, i.e., Nr (A) := {x ∈ X | ∃y ∈ A, d(x, y) < r}. The
concentration function of (X, d, µ) is defined as
1
(1)
α(X,d,µ) (r) := sup 1 − µ(Nr (A)) | A ⊂ X, µ(A) ≥
, r > 0.
2
Definition 2.2. A sequence of metric measure spaces (Xn , dn , µn ) with diameter constant
equal to 1, is a Lévy family if for every r ≥ 0,
α(Xn ,dn ,µn ) (r) → 0.
A Polish group G (with compatible metrid d) is called a Lévy group if there exists a sequence
(Gn )n of compact subgroups of G equipped with their normalized Haar measure µn , such
that (Gn , d|Gn , µn ) is a Lévy family. If this is the case, we say that the measure concentrates
along the sequence of subgroups.
The relationship with extreme amenability is given by the following theorem which can
be found in [29, Theorem 4.1.3].
Theorem 2.3. Every Lévy group is extremely amenable.
8
ALESSANDRO CARDERI AND ANDREAS THOM
Our second route to extreme amenability is therefore to show that (SL2n (q))n is a Lev́y
family with respect to the normalized counting measure and the normalized rank-metric.
Theorem 2.4. The normalized counting measure on the groups SL2n (q) concentrates with respect to
the rank-metric. In particular, A(q) is a Lévy group and hence extremely amenable.
The proof of the theorem is a straightforward application of the following theorem, whose
proof can be found in [29, Theorem 4.5.3] or [22, Theorem 4.4]
Theorem 2.5. Let G be a compact metric group, metrized by a bi-invariant metric d, and let
{1} = H0 < H1 < H2 < · · · < Hn = G
be a chain of subgroups. Denote by ai the diameter of the homogenous space Hi /Hi−1 , i = 1, . . . , n, with
regard to the factor metric. Then the concentration function of the metric-measure space (G, d, µ),
where µ is the normalized Haar measure, satisfies
2
r
αG (r) ≤ 2exp − Pn 2 .
16 i=1 ai
Proof of Theorem 2.4. Let us fix a basis {e1, . . . , en} of F_q^n. Using the previous theorem, it is enough to show that the diameter of SLn(q)/SLn−1(q) is smaller than 2/n, where SLn−1(q) < SLn(q) is the subgroup fixing en. For this, it is enough to show that for every g ∈ SLn(q), there exists h ∈ SLn−1(q) ⊂ SLn(q), such that r(hg − 1) ≤ 2. Let V be a 2-dimensional vector space containing en and g·en and let V′ be a complement. There exists h′ ∈ SL(V) with h′(g·en) = en. Hence, (h′ ⊕ 1_{V′})g·en = en and thus (h′ ⊕ 1_{V′})g ∈ SLn−1(q). Moreover,

d(g, (h′ ⊕ 1_{V′})g) = (1/n) r(g − (h′ ⊕ 1_{V′})g) = (1/n) r((h′ − 1_V) ⊕ 0_{V′}) ≤ 2/n.

This proves the claim.
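For later reference (the exponent ε²n/64 reappears in the proof of Theorem 2.8 below), it may help to record the explicit bound that this computation yields when fed into Theorem 2.5; this is only a restatement of the estimates above, not an additional assumption:
\[
\alpha_{(SL_n(q),d,\mu_n)}(r)\;\le\;2\exp\!\Big(-\frac{r^2}{16\sum_{i=1}^{n} a_i^2}\Big)\;\le\;2\exp\!\Big(-\frac{n r^2}{64}\Big),
\qquad\text{since } a_i\le \tfrac{2}{n}\ \text{implies}\ \sum_{i=1}^{n} a_i^2\le\tfrac{4}{n}.
\]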
2.2. Ramsey theory. The crucial concept behind the strategy in the previous proof is the notion of length of a metric measure space, which is bounded by the value (Σ_i a_i²)^{1/2} as above and is essentially the infimum over these quantities. The formal definition is quite technical and we invite the interested reader to check Definition 4.3.16 in [29]. The following lemma is immediate from our computation.
Lemma 2.6. The length of the metric measure space (SLn(q), d, µn) is at most 2n^{−1/2}.
Any estimate on the length of a metric measure space can be used directly to get explicit
bounds on the concentration phenomena, such as in the following standard lemma, whose
proof is inspired by the proof of Theorem 4.3.18 in [29].
Lemma 2.7. Let (G, d, µ) be a metric measure space of length L and let ε be a positive real. Then for any measurable subset A ⊂ G satisfying µ(A) > 2e^{−ε²/L²} we have
µ(N_{4ε}(A)) ≥ 1 − 2e^{−ε²/L²}.
Proof. Consider the function dA : G → R defined by dA(x) := d(A, x) = inf{d(a, x) | a ∈ A} and set λ := ∫ dA dµ. Since dA is 1-Lipschitz, Lemma 4.3.17 in [29] implies that
µ({x ∈ G | |dA(x) − λ| ≥ 2ε}) ≤ 2e^{−ε²/L²}.
If λ ≥ 2ε, then for every x ∈ A we have that |dA(x) − λ| = λ ≥ 2ε and the above inequality would give us a contradiction. Therefore we must have that λ ≤ 2ε and
1 − µ(N_{4ε}(A)) = µ({x ∈ G | dA(x) ≥ 4ε}) ≤ µ({x ∈ G | |dA(x) − λ| ≥ 2ε}) ≤ 2e^{−ε²/L²}.
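To connect this with the exponent used in the next proof: for (SLn(q), d, µn), Lemma 2.6 gives L ≤ 2n^{−1/2}, and hence (a routine substitution, recorded here only for convenience)
\[
e^{-\varepsilon^2/L^2}\;\le\;e^{-\varepsilon^2 n/4},
\qquad\text{and, applying Lemma 2.7 with }\varepsilon/4\text{ in place of }\varepsilon,\qquad
e^{-(\varepsilon/4)^2/L^2}\;\le\;e^{-\varepsilon^2 n/64}.
\]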
Let us finish this section by applying the previously obtained bounds to deduce an explicit
metric Ramsey theoretic result for the finite groups SLn (q).
As usual, we say that a covering U of a metric space X has Lebesgue number ε > 0, if for
every point x ∈ X there exists an element of the cover U ∈ U such that the ε-neighborhood
of x is contained in U. A covering that admits a positive Lebesgue number ε will be called
a uniform covering or ε-covering. Uniform coverings have been studied in the context of
amenability, extreme amenability, and Ramsey theory in [32, 33]. Our main result in this
section is a quantitative form of the metric Ramsey property that is satisfied by the finite
groups SLn(q) – resembling the fact that A(q) is extremely amenable.
Theorem 2.8. Let ε > 0, q be a prime power, and k, m ∈ N. If we set
N := 64ε−2 max{log(2k), log(2m)}
then for any n > N and any ε-cover U of SLn (q) of cardinality at most m the following holds: for every
subset F ⊂ SLn(q) of cardinality at most k, there exists g ∈ SLn(q) and U ∈ U such that gF ⊂ U.
In order to illustrate the result, consider the case k = 3, m = 2. We may think of a covering by two sets as a coloring of SLn(q) with two colors, where some group elements get both colors. This covering is uniform if for every element g ∈ SLn(q) the ε-neighborhood of g can be colored by one of the two colors. Now, if n > 128 · ε−2, any subset of three elements {a, b, c} ⊂ SLn(q) has a translate {ga, gb, gc} colored in the same color. This is in contrast to (R/Z, +) equipped with its usual metric, since it is easy to see that this group does not admit any uniform covering.
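For the record, the constant 128 in this illustration is just a crude upper bound for the threshold of Theorem 2.8 with k = 3 and m = 2:
\[
N \;=\; 64\,\varepsilon^{-2}\max\{\log 6,\log 4\}\;=\;64\,\varepsilon^{-2}\log 6\;<\;128\,\varepsilon^{-2},
\qquad\text{since }\log 6<2.
\]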
Proof of Theorem 2.8. Let U be the covering and for every U ∈ U we set U0 := {x ∈ SLn(q) | B(x, ε) ⊂ U}. By assumption, the collection V := {U0 | U ∈ U} still forms a covering of SLn(q). Since the cardinality of the covering is at most m, there must be one element V ∈ V in the new covering with µn(V) ≥ 1/m. Take U ∈ U such that U0 = V and observe that the ε-neighborhood of V is also contained in U. If n > 64 log(2m)ε−2 then 1/m > 2e^{−ε²n/64} and hence Lemma 2.7 implies that

µ(U) ≥ µ(Nε(V)) ≥ 1 − 2e^{−ε²/(16L²)} ≥ 1 − 2e^{−ε²n/64}.
For a subset F ⊂ SLn(q) we have

{g ∈ SLn(q) | gF ⊂ U} = ⋂_{h∈F} {g ∈ SLn(q) | gh ∈ U} = ⋂_{h∈F} U h^{−1}

and thus if |F| ≤ k, we get that µ({g ∈ SLn(q) | gF ⊂ U}) ≥ 1 − 2k exp(−ε²n/64). Hence, some element g ∈ SLn(q) as desired will exist as soon as n > 64ε−2 max{log(2k), log(2m)}.
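For completeness, the arithmetic behind this last step is simply
\[
n \;>\; 64\,\varepsilon^{-2}\log(2k)\;\Longrightarrow\;2k\,e^{-\varepsilon^2 n/64}\;<\;2k\cdot\tfrac{1}{2k}\;=\;1,
\]
so the set {g ∈ SLn(q) | gF ⊂ U}, having measure at least 1 − 2k e^{−ε²n/64} > 0, is non-empty.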
3. Topological simplicity of the central quotient
In this last section, we will determine the center of A(q) and we will prove that A(q) is
topologically simple modulo its center.
As we remarked in the introduction A(q) is isomorphic to the group of invertible elements
of M(Fq ). Therefore for every α ∈ F×q the “diagonal matrix” whose non-zero entries are equal
to α is an element of A(q) and it is in the center. Let us denote by Z ⊂ A(q) the group of
diagonal matrices with constant values, Z ≅ F×q. We claim that Z is actually the center of A(q)
and that A(q)/Z is topologically simple. Note that the quotient A(q)/Z can be understood as
the completion of the metric inductive limit of the corresponding projective linear groups
with respect to the projective rank-metric as studied in [35].
To prove the claim, we will need the following theorem of Liebeck and Shalev which
follows from [23, Lemma 5.4], see also [35] for a discussion of the results of Liebeck-Shalev
in the context of length functions on quasi-simple groups.
Theorem 3.1 (Liebeck-Shalev). There exists c ∈ N such that for every n ∈ N and for every g ∈ SLn(q) with δ := min{d(g, z) | z ∈ F×q} > 0, we have C_{SLn(q)}(g)^{⌈c/δ⌉} = SLn(q).
For an element g of a group H, we denote by C_H(g) its conjugacy class. As a corollary of this theorem we obtain the following.
Proposition 3.2. For every g ∈ A(q) such that δ = min{d(g, z) | z ∈ F×q} > 0, the set C_{A(q)}(g)^{⌈c/δ⌉} ⊂ A(q) is dense.
Before proving the proposition, let us remark that the proposition implies our claim. Let N < A(q) be a closed normal subgroup which strictly contains Z and take x ∈ N \ Z. By the previous proposition, there exists m ∈ N such that C_{A(q)}(x)^m is dense, which implies that N is dense and therefore N = A(q).
Proof of Proposition 3.2. For every ε < δ/2, we consider an element g′ ∈ A0(q) with d(g, g′) ≤ ε. Let n0 be such that g′ ∈ SL_{2^{n0}}(q). By Theorem 3.1, for every n ≥ n0 we have C_{SL_{2^n}(q)}(g′)^m = SL_{2^n}(q) for m := ⌈c/δ⌉, whence
C_{A(q)}(g′)^m ⊃ A0(q).
This means that for any element k ∈ A(q), there exist h1, . . . , hm ∈ A(q) such that
d(h1 g′ h1^{−1} · · · hm g′ hm^{−1}, k) ≤ ε.
Since the rank-metric is bi-invariant we obtain that

d(h1 g′ h1^{−1} · · · hm g′ hm^{−1}, h1 g h1^{−1} · · · hm g hm^{−1}) ≤ m · d(g, g′),

which implies that d(C_{A(q)}(g)^m, k) ≤ (m + 1)ε.
Remark 3.3. Since the center of A(q) is Z ≅ F×q, we can conclude that A(q) is isomorphic to A(q′) if and only if q = q′. One could also ask if the quotient group A(q)/Z depends on q. The question seems harder, but one can study centralizers of elements in A(q)/Z as in [37] to deduce that this group depends at least on the characteristic of the field.
4. On 1-discrete subgroups of A(q)
In this section we want to study discrete subgroups of A(q). In fact, we will focus on subgroups Γ ⊂ A(q) such that d(1, g) = 1 for all non-identity elements g of Γ – we call such a subgroup 1-discrete in the metric group (A(q), d). Since the diameter of (A(q), d) is equal to 1,
1-discrete subgroups should play a special role in the study of the metric group (A(q), d). Let
us start with amenable groups.
Proposition 4.1. Every countable amenable group is isomorphic to a 1-discrete subgroup of A(q).
Proof. Let Γ be a countable amenable group and suppose at first that there is a sequence of Følner sets (Fn) such that Fn+1 = Fn Dn where Dn ⊂ Γ is a finite subset, so that the family {Fn c | c ∈ Dn} consists of pairwise disjoint subsets. For every h ∈ Γ and n ∈ N, we define L_n^h := {g ∈ Fn | hg ∈ Fn} and we observe that for every ε > 0 and h ∈ Γ, there exists N ∈ N such that for every n ≥ N, we have |L_n^h| > (1 − ε)|Fn|. We define elements hn ∈ M_{|Fn|}(Fq) as the usual action by permutation on the left on L_n^h and 0 on the complement. It is now easy to observe that (hn)n is a Cauchy sequence for every h ∈ Γ and hence we can define a maximal discrete embedding of Γ into the group of invertible elements of the completion of the inductive limit of matrices of size |Fn|, which, as explained in the introduction, by the result of Halperin [19], is isomorphic to A(q).
For a general countable amenable group we will use Ornstein-Weiss theory [28]. In fact the proposition follows from a similar argument using Følner sets which form quasi-tilings [28, page 24, Theorem 6]. Or one can also observe that A(q) contains the inductive limit of symmetric groups, which is the full group of the hyperfinite equivalence relation and which, by [28], contains any amenable group as a maximal discrete subgroup.
The aim of this section is to show the following theorem.
Theorem 4.2. If q is odd, A(q) contains a non-abelian free group as a 1-discrete subgroup.
Theorem 4.2 follows from Elek's work [11] and [10]. In fact, Elek showed that any amenable skew-field embeds as a discrete sub-algebra of M(Fq) and by [17], there exist amenable skew-fields with free subgroups. However we do not need the full strength of Elek's results and we will construct an explicit embedding of a skew-field containing a free group into M(Fq).
Note that the analogous result is not true in the setting of II1-factors, i.e., the free group on two generators cannot be a 1-discrete subgroup (which in this case should mean that different group elements are orthogonal with respect to the trace) of the unitary group of the hyperfinite II1-factor. Moreover, the preceding result seems to yield the first example of a discrete non-amenable subgroup of an amenable Polish group whose topology is given by a bi-invariant metric. It is an open problem whether the free group on two generators (or in fact any discrete non-amenable group) can be a discrete subgroup of the unitary group of the hyperfinite II1-factor or the topological full group of the hyperfinite equivalence relation.
The proof of Theorem 4.2 is based on the following lemma, which is inspired by Elek’s
canonical rank function, see [9].
Lemma 4.3. Let Γ be a finitely generated amenable group and suppose that there is a sequence of
Følner sets (Fn ) such that Fn+1 = Fn Dn where Dn ⊂ Γ is a finite subset. Consider the embedding
Φ : Γ → M(Fq ) given in Proposition 4.1. Then for every element a of the group algebra Fq Γ which is
not a zero-divisor, the element Φ(a) ∈ M(Fq ) is invertible.
Proof. Let us fix a non zero-divisor a ∈ Fq Γ. Let us denote by S ⊂ Γ the support of a. For every n, as in Proposition 4.1, we define L_n^a := {g ∈ Fn | Sg ⊂ Fn} and we observe that for every ε > 0, there exists N ∈ N such that for every n ≥ N, we have |L_n^a| > (1 − ε)|Fn|. Observe also that Φ(a) is the limit of the elements an which act on L_n^a as left translation by a. We claim that the vectors {an g}_{g∈L_n^a} are linearly independent. In fact, if g1, . . . , gk ∈ L_n^a and α1, . . . , αk ∈ Fq are such that Σ_i a(αi gi) = 0, then a(Σ_i αi gi) = 0, whence a is a zero-divisor in Fq Γ, a contradiction. So the rank of an is at least 1 − ε and hence the rank of Φ(a) is 1 and therefore it is invertible.
Proof of Theorem 4.2. Let Γ be an elementary amenable, torsion free group which satisfies
the hypothesis of Lemma 4.3. By [24, Theorem 2.3], every non-zero element of the group
algebra Fq Γ is not a zero-divisor. By Tamari [36, Example 8.16], we know that group rings
of amenable groups which do not contain zero-divisors satisfy the Ore condition, for more
about the Ore localization see [34]. Hence the Ore completion of Fq Γ, denoted by Q(Fq Γ)
is a skew-field. Consider the embedding Φ : Γ → M(Fq ) defined in Proposition 4.1 and
extend it to Fq Γ. Observe that by the universality of the Ore completion and by Lemma 4.3,
Φ extends to a map Φ : Q(Fq Γ) → M(Fq ). When Γ is also nilpotent, for example when Γ is
the Heisenberg group, Q(Fq Γ) has free subgroups by Theorem 5 of [17]. This concludes the
proof.
It is plausible that for every maximal discrete embedding of a non-amenable group into M(Fq) the conclusion of Lemma 4.3 cannot hold.
Acknowledgements
This research was supported by ERC Starting Grant No. 277728 and ERC Consolidator
Grant No. 681207. Part of this work was written during the trimester on Random Walks and
AN EXOTIC GROUP AS LIMIT OF FINITE SPECIAL LINEAR GROUPS
13
Asymptotic Geometry of Groups in 2014 at the Institut Henri Poincaré in Paris – we are grateful
to this institution for its hospitality. We thank Gábor Elek for interesting remarks that led to
the results in Section 4. We also thank the unknown referee for remarks that improved the
exposition.
References
[1] Hiroshi Ando and Yasumichi Matsuzawa. On Polish groups of finite type. Publ. Res.
Inst. Math. Sci., 48:389–408, 2012. 5
[2] Hiroshi Ando, Yasumichi Matsuzawa, Andreas Thom, and Asger Törnquist.
Unitarizability, Maurey–Nikishin factorization, and Polish groups of finite type.
arXiv:1605.06909, submitted for publication. 5
[3] Wojciech Banaszczyk. Additive subgroups of topological vector spaces, volume 1466 of
Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1991. 2
[4] Bachir Bekka, Pierre de la Harpe, and Alain Valette. Kazhdan’s Property (T). Cambridge
University Press, 2008. 3, 4
[5] Philip Dowerk and Andreas Thom. A new proof of extreme amenability of the unitary
group of the hyperfinite II1 factor. Bull. Belg. Math. Soc. Simon Stevin, 22(5):837–841, 2015.
7
[6] Philip Dowerk and Andreas Thom. Bounded Normal Generation and Invariant Automatic Continuity. arXiv:1506.08549, submitted for publication. 3
[7] Artem Dudko and Konstantin Medynets. On characters of inductive limits of symmetric
groups. J. Funct. Anal., 264(7):1565–1598, 2013. 5
[8] Gertrude Ehrlich. Characterization of a Continuous Geometry Within the Unit Group.
Trans. AMS, 83(2):397–416, 1956.
[9] Gábor Elek. The rank of finitely generated modules over group algebras. Proc. Amer.
Math. Soc. 131(11):3477–3485, 2011. 12
[10] Gábor Elek. Infinite dimensional representations of finite dimensional algebras and
amenability. arXiv:1512.03959. 11
[11] Gábor Elek. Convergence and limits of linear representations of finite groups. J. Algebra
450: 588–615, 2016. 11
[12] Jorge Galindo. On unitary representability of topological groups. Math. Z., 263(1):
211–220, 2009. 2
[13] Su Gao. Unitary group actions and hilbertian polish metric spaces. In Logic and its
applications, volume 380 of Contemp. Math., page 53–72. Amer. Math. Soc., Providence,
RI, 2005. 2
[14] Thierry Giordano and Vladimir Pestov. Some extremely amenable groups. C. R. Math.
Acad. Sci. Paris, 334(4):273–278, 2002. 3
[15] Thierry Giordano and Vladimir Pestov. Some extremely amenable groups related to
operator algebras and ergodic theory. J. Inst. Math. Jussieu, 6(02):279–315, 2007. 3
[16] David Gluck. Sharper character value estimates for groups of Lie type. J. Algebra, 174:
229–266, 1995. 6
[17] Jairo Z. Gonçalves, Arnaldo Mandel, and Mazi Shirvani. Free products of units in
algebras. I. Quaternion algebras. J. Algebra 214:301–316, 1999. 11, 12
[18] Misha Gromov. Asymptotic invariants of infinite groups. In Geometric group theory, Vol. 2
(Sussex, 1991), volume 182 of London Math. Soc. Lecture Note Ser., page 1–295. Cambridge
Univ. Press, Cambridge, 1993. 3
[19] Israel Halperin. von Neumann’s manuscript on inductive limits of regular rings. Canad.
J. Math., 20:477–483, 1968. 2, 11
[20] Wojciech Herer and Jens Peter Reus Christensen. On the existence of pathological
submeasures and the construction of exotic topological groups. Math. Ann., 213(3):
203–210, 1975. 2
[21] Alexander A. Kirillov. Positive-definite functions on a group of matrices with elements
from a discrete field. Dokl. Akad. Nauk SSSR, 162:503–505, 1965. 5, 7
[22] Michel Ledoux. The concentration of measure phenomenon, volume 89 of Mathematical
Surveys and Monographs. American Mathematical Society, Providence, RI, 2001. 8
[23] Martin W. Liebeck and Aner Shalev. Diameters of finite simple groups: sharp bounds
and applications. Ann. Math., 154(2):383–406, 2001. 10
[24] Peter A. Linnell, Noncommutative localization in group rings. Non-commutative localization in algebra and topology, Cambridge Univ. Press, Cambridge, 40–59, 2006. 12
[25] Michael G. Megrelishvili. Reflexively but not unitarily representable topological groups.
In Topology Proceedings, volume 25, 615–625 (2002), 2000. 2
[26] Michael G. Megrelishvili. Every semitopological semigroup compactification of the
group H+ [0, 1] is trivial. Semigroup Forum, 63(3):357–370, 2001. 2
[27] John von Neumann. Continuous geometry (foreword by Israel Halperin). Princeton Mathematical Series 25, 1960. 2
[28] Donald S. Ornstein and Benjamin Weiss. Entropy and isomorphism theorems for actions
of amenable groups. J. Analyse Math., 48:1–141, 1987. 11
[29] Vladimir Pestov. Dynamics of infinite-dimensional groups, volume 40 of University Lecture
Series. American Mathematical Society, Providence, RI, 2006. 3, 7, 8, 9
[30] Jesse Peterson and Andreas Thom. Character rigidity for special linear groups. J. Reine
Angew. Math. 716 (2016), 207–228. 5, 7
[31] Sorin Popa and Masamichi Takesaki. The topological structure of the unitary and
automorphism groups of a factor. Comm. Math. Phys., 155(1):93–101, 1993. 3
[32] Friedrich Martin Schneider and Andreas Thom. Topological matchings and amenability.
arXiv:1502.02293, to appear in Fundamenta Mathematicae. 9
[33] Friedrich Martin Schneider and Andreas Thom. On Følner sets in topological groups.
arXiv:1608.08185, submitted for publication. 9
[34] Bo Stenström. Rings of quotients. Die Grundlehren der Mathematischen Wissenschaften,
Band 217, An introduction to methods of ring theory, Springer-Verlag, New York, 1975.
12
[35] Abel Stolz and Andreas Thom. On the lattice of normal subgroups in ultraproducts of
compact simple groups. Proc. London Math. Soc., 108:73–102, 2014. 10
[36] Dimitri Tamari. A refined classification of semi-groups leading to generalized polynomial rings with a generalized degree concept. Proceedings of the ICM, Amsterdam 3,
439–440, 1954. 12
[37] Andreas Thom and John S. Wilson. Metric ultraproducts of finite simple groups. C. R.
Math. Acad. Sci. Paris, 352(6):463-466, 2014. 11
A.C., Institut für Geometrie, TU Dresden, 01062 Dresden, Germany
E-mail address: [email protected]
A.T., Institut für Geometrie, TU Dresden, 01062 Dresden, Germany
E-mail address: [email protected]
Extracting Ontological Knowledge from Textual Descriptions
Kevin Alex Mathews
P Sreenivasa Kumar
Indian Institute of Technology Madras
[email protected]
Indian Institute of Technology Madras
[email protected]
KEYWORDS
Text Analytics, Ontology Learning, Knowledge Extraction
the input sentence without substantial background knowledge of
the domain. Most current authoring systems produce one formalization of a sentence. We find that the best way to handle these
challenges is to construct axioms corresponding to alternative formalizations of the sentence so that the end-user can make an appropriate choice.
In this paper, we propose a novel ontology authoring system
that simplifies the authoring process for the users. Our system
takes as input English text that conforms to a newly proposed CNL called TEDEI and generates corresponding OWL axioms. We scope
ontology authoring to building the schema (TBox) of an ontology.
Within this context, we focus on OWL class expression axioms.
We have outlined the architecture of our system in Figure 1.
The five main modules of the system are lexical ambiguity handler, TEDEI parser, semantic ambiguity handler, syntactic transformation and ACE-to-OWL translation. Lexical ambiguity handler
accepts the input sentences, expressed in English, and generates
possible lexicalizations of the sentence using POS tag patterns. Lexicalization is the process of breaking the given sentence into tokens such that these tokens can be identified as various ontology
elements, namely, classes, individuals, properties and concept constructors. Then TEDEI parser parses the lexicalizations on the basis
of TEDEI grammar rules. Only valid TEDEI lexicalizations are processed further. Sentences not conforming to TEDEI are indicated as
such to the user so that they can be reformulated. Semantic ambiguity handler generates possible interpretations of the TEDEI lexicalizations using specific sentence patterns. The interpretations
are converted to ACE in the syntactic transformation module using rules of transformation. Finally ACE sentences are converted
to OWL by the existing ACE parser.
The output of the system is compared against human-authored
axioms and in a substantial number of cases, the human-authored axiom
is indeed one of the alternatives given by the system. Our framework clearly outperforms ACE in terms of the number and types
of sentences the system can handle. In comparison with existing
systems, due to the use of TEDEI, our framework is a robust way
to generate ontological axioms from text. TEDEI helps in clearly
defining the scope of the system and provides the ability to reject
a sentence and ask for reformulation. Also, employing ACE as an
intermediate language aids formalization and reduces the complexity of the system.
Our contributions in this paper are as follows: (1) an ontology authoring process, (2) an ontology authoring language whose
grammar reflects the constructs of OWL and which has better expressivity than ACE, (3) extraction of OWL axioms from sentences
of the language, and (4) handling ambiguity associated with formalization.
The remainder of the paper is structured as follows: in Section 2,
we discuss the related works. In Section 3, we discuss the important
arXiv:1709.08448v3 [] 28 Sep 2017
ABSTRACT
Authoring of OWL-DL ontologies is intellectually challenging and
to make this process simpler, many systems accept natural language text as input. A text-based ontology authoring approach
can be successful only when it is combined with an effective
method for extracting ontological axioms from text. Extracting axioms from unrestricted English input is a substantially challenging task due to the richness of the language. Controlled natural
languages (CNLs) have been proposed in this context and these
tend to be highly restrictive. In this paper, we propose a new CNL
called TEDEI (TExtual DEscription Identifier) whose grammar is
inspired by the different ways OWL-DL constructs are expressed
in English. We built a system that transforms TEDEI sentences
into corresponding OWL-DL axioms. Now, ambiguity due to different possible lexicalizations of sentences and semantic ambiguity present in sentences are challenges in this context. We find that
the best way to handle these challenges is to construct axioms corresponding to alternative formalizations of the sentence so that
the end-user can make an appropriate choice. The output is compared against human-authored axioms and in a substantial number of cases, the human-authored axiom is indeed one of the alternatives
given by the system. The proposed system substantially enhances
the types of sentence structures that can be used for ontology authoring.
1 INTRODUCTION
The key to building large and powerful AI systems is knowledge
representation. An ontology is a knowledge representation mechanism. It provides a vocabulary describing a domain of interest, and
a specification of the terms in that vocabulary. However, the manual creation of ontologies is intellectually challenging and time-consuming [3]. Also, the knowledge in an ontology is generally expressed in ontology languages such as RDFS [1] or OWL [6], which
are based on DL [17]. As a result, it is difficult for non-logicians
to create, edit or manage ontologies. So a user-friendly format for
communication of ontological content is required for ontology authoring. Many systems accept natural language (English) text as input. Now a text-based ontology authoring system can be successful
only when it is combined with an effective method for extracting
ontological axioms from text. Extracting axioms from English is
challenging due to its unrestricted and ambiguous nature. CNLs,
which are restricted unambiguous variants of natural languages,
have been proposed in this context.
Now, ambiguity due to different possible lexicalizations of sentences and semantic ambiguity present in sentences are challenges
in the context of ontology authoring. It is difficult to disambiguate
Figure 1: Architecture. English Text → Lexical Ambiguity Handler (uses POS Tag Patterns) → Lexicalizations → TEDEI Parser (uses the TEDEI CNL; Non-TEDEI Lexicalizations are rejected) → TEDEI Lexicalizations → Semantic Ambiguity Handler (uses Sentence Patterns) → Interpretations → Syntactic Transformation (uses Transformation Rules) → ACE Text → ACE-to-OWL Translation (uses the ACE Parser) → OWL Axioms.
grammar rules of TEDEI. In Section 4, we discuss ACE and transformation of TEDEI text to ACE. In Section 5, we describe ambiguity
in formalization and how it is handled by the proposed system. Section 6 describes the results and evaluation and Section 7 concludes
the paper.
exploitation of lexico-syntactic patterns [15], utilization of
knowledge-rich resources such as linked data or ontologies [13],
and syntactic transformation [24]. Statistics-based techniques involve relevance analysis [2], co-occurrence analysis, clustering,
formal concept analysis, association rule mining [8] and deep
learning [20]. Wong et al. [26] presents a study on prominent
ontology learning systems such as OntoLearn Reloaded [23],
Text2Onto [4], and OntoGain [8]. Most of the current systems employ a combination of the aforementioned techniques.
In comparison with existing systems, the proposed system has
a streamlined approach to the generation of axioms from text. Existing systems attempt to convert any English sentence and the
process would either fail or generate an incorrect axiom in several
cases. However, in the proposed approach, since it is guided by a
grammar, the system can identify a sentence it can not handle and
give an error signal. Although existing systems might be using a
grammar, since it is not explicitly mentioned, it is difficult to define the scope of the input. In our framework, the use of grammar
clearly defines the set of sentences the system can handle. Our approach gives the end-user an opportunity to rewrite the sentence
and possibly convey the same information in a different way.
In addition, by using ACE we incorporate the advantages of a CNL into our framework. Also, existing systems generate only one formalization per sentence without taking into account the impact of ambiguity in formalization.
2 RELATED WORKS
In this section, we discuss the works related to ontology authoring and how the proposed system is an improvement over the existing systems. Section 2.1 discusses ontology authoring based on
CNLs. Note that a text-based ontology authoring approach needs
to have a corresponding solution to the problem of extracting ontology from text. The success of the authoring tool depends on
an effective technique for extraction. Extracting ontology is also
termed as ontology learning in the literature and we review some
of the relevant works in Section 2.2.
2.1 CNL-based ontology authoring
Some CNLs that were developed specifically for creating OWL ontologies are ACE [11], Rabbit [7], CLOnE [12] and SOS [5]. However ACE is the most commonly used CNL for Semantic Web.
There are numerous tools based on ACE such as ACEWiki1 and
APE2 . ACE is designed to be unambiguous and less complex as
compared to standard English. In the context of ontologies, ACE
offers a simple platform for domain experts who are not comfortable with ontology languages to author ontologies. ACE-OWL, a
sublanguage of ACE, has a bidirectional mapping with OWL.
Although ACE is used for ontology authoring, ACE has two
main limitations. Firstly, ACE is a subset of standard English. As a
result, many English sentences are not present in ACE. Secondly,
in some cases, even though a sentence is valid in ACE, the ACE
parser, named Attempto Parsing Engine (APE), fails to axiomatize
it because only a subset of ACE sentences map to OWL axioms.
Thus we find that though ACE is good for authoring ontologies, it
restricts the user to a limited subset of English.
However, we noted that with suitable transformation, it is possible to make sentences ACE-compliant or OWL-compliant. The
proposed system addresses both the limitations of ACE in the syntactic transformation phase. Syntactic transformation carries out
the necessary transformation and as a result, it extends ACE for
the purpose of ontology authoring.
3 TEXT DESCRIPTION IDENTIFIER (TEDEI)
Only sentences having at least one lexicalization that is valid according to TEDEI rules are handled by our system. The expressivity of DL covered by the language defined by TEDEI is ALCQ.
We use ANTLR [19] parser generator to generate the parser for
TEDEI. ANTLR takes TEDEI grammar as input and generates a recognizer for the language. We employ the recognizer to read the
input stream and check whether it conforms to the syntax specified by TEDEI.
We developed TEDEI keeping in mind the rich set of primitives
of OWL 2 [14]—the W3C recommended and widely adopted ontology language. We also mined Brown corpus [10] to identify various ways in which OWL primitives can be expressed in natural
language. We used various regular expressions to extract the patterns. Brown corpus, being a prominent text corpus, is rich enough
to contain various kinds of such patterns. Those patterns that correspond to some OWL primitive are encoded as rules in the grammar.
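As an illustration of this mining step, the following is a minimal sketch of the general idea, not the authors' actual regular expressions; the use of NLTK's copy of the Brown corpus and the small indicator list are assumptions made only for the example. It counts VERB–indicator–NOUN triples of the kind that TEDEI later maps to property restrictions.

# Sketch: mine VERB + indicator + NOUN patterns from the Brown corpus.
# The indicator list below is illustrative, not the paper's full set.
import nltk
from collections import Counter
from nltk.corpus import brown

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)

INDICATORS = {"only", "some", "exclusively", "several"}

def mine_patterns(max_sents=5000):
    """Count VERB-indicator-NOUN triples in POS-tagged Brown sentences."""
    counts = Counter()
    for sent in brown.tagged_sents(tagset="universal")[:max_sents]:
        for (w1, t1), (w2, _), (w3, t3) in zip(sent, sent[1:], sent[2:]):
            if t1 == "VERB" and w2.lower() in INDICATORS and t3 == "NOUN":
                counts[(w1.lower(), w2.lower(), w3.lower())] += 1
    return counts

if __name__ == "__main__":
    for triple, freq in mine_patterns().most_common(10):
        print(freq, " ".join(triple))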
The non-terminals of the grammar correspond to OWL primitives. In this section, we describe only the important rules of the
2.2 Ontology learning
The main techniques in ontology learning from text are linguistics-based and statistics-based. Linguistics-based techniques involve
1 http://attempto.ifi.uzh.ch/acewiki/
2 http://attempto.ifi.uzh.ch/ape/
grammar. The primary rule in TEDEI corresponds to a class expression axiom. A class expression axiom defines a concept. The textual
description corresponding to a class expression axiom consists of a
concept and a class expression. An example is the sentence “Every
square contains right angles,” which consists of the concept square
and the class expression contains right angles.
A class expression can either be an atomic class or a complex
class. An atomic class denotes a single concept. A complex class
can be constructed in various ways. It is possible to employ set
operations (union, intersection, and complement) and property restrictions (existential restriction, universal restriction, cardinality
restriction, value restriction, and self-restriction.) It is also possible to construct complex classes by enumerating all the instances
in the class.
The natural language indicators for various OWL primitives are
given in the production rules of the grammar. Brown corpus has
been beneficial in identifying these indicators. We consider the relative pronouns which, that, and who as the natural language indicators for intersection of classes. An example is the sentence “Every
square is a quadrilateral that has 4 right angles.” Sometimes, the absence of any such relative pronoun also indicates an intersection,
as demonstrated by the sentence “Every rectangle is a quadrilateral
having 4 right angles.” The natural language indicator for union
of classes is the conjunction or. An example is the sentence “Every polygon is concave or convex.” The determiner some indicates
existential restriction. An example is the sentence “Every rectangle
contains some right angles.” The determiner only indicates universal restriction. An example is the sentence “Every rectangle contains
only right angles.”
The important rules of TEDEI are outlined below in BNF notation. For the sake of readability, we provide only an abstract version of the rules originally written in ANTLR. The non-terminals
of the grammar are shown in italics and terminals are shown in
uppercase. CLASS, INDIVIDUAL, and PROPERTY are three special terminals of the grammar, and they correspond to ontology elements
class, individual and property respectively. The actual text tokens
that correspond to these terminals are identified during the lexicalization process. Sometimes, there may be multiple lexicalizations
for a sentence and we discuss the details of the lexicalization process in Section 5.1. Rest of the terminals such as THAT and WHICH
correspond to words or phrases of the input sentence.
start ::= lexpr rexpr
complement ::=
      preComplementInd PROPERTY classComb
    | PROPERTY postComplementInd classComb
uniRes ::=
      PROPERTY universalInd classComb
    | universalInd PROPERTY classComb
existRes ::=
      PROPERTY existentialInd classComb
    | PROPERTY classComb
exactCard ::=
PROPERTY exactCardInd CARDINALITY
| PROPERTY ambiExactCardInd CARDINALITY
minCard ::=
PROPERTY preMinCardInd CARDINALITY
| PROPERTY CARDINALITY postMinCardInd
maxCard ::=
PROPERTY preMaxCardInd CARDINALITY
| PROPERTY CARDINALITY postMaxCardInd
qualExactCard ::=
PROPERTY exactCardInd CARDINALITY classComb
| PROPERTY ambiExactCardInd CARDINALITY classComb
qualMinCard ::=
PROPERTY preMinCardInd CARDINALITY classComb
| PROPERTY CARDINALITY postMinCardInd classComb
qualMaxCard ::=
PROPERTY preMaxCardInd CARDINALITY classComb
| PROPERTY CARDINALITY postMaxCardInd classComb
indValueRes ::= PROPERTY INDIVIDUAL
selfValueRes ::= PROPERTY selfInd
classComb ::= CLASS (clsExpInd classComb)*
clsExpInd ::= AND | OR | ,
unionInd ::= OR
intersectionInd ::= THAT | WHICH | WHO | WHOSE
preComplementInd ::= DOES NOT | DO NOT | DID NOT
postComplementInd ::= NOT | NO
lexpr ::= CLASS | INDIVIDUAL
rexpr ::= union
universalInd ::=
EXCLUSIVELY | NOTHING BUT | NOTHING EXCEPT | ONLY
union ::= (intersection | complement) (unionInd union)*
intersection ::= clsExpComb (intersectionInd intersection)*
existentialInd ::=
A | AN | ALL | ANY | FEW | MANY | SOME | SEVERAL
clsExpComb ::= clsExp (clsExpInd? clsExpComb)*
exactCardInd ::= EXACTLY | JUST | MAY BE | ONLY
clsExp ::= complement
    | uniRes        //universal restriction
    | existRes      //existential restriction
    | exactCard     //exact cardinality
    | minCard       //minimum cardinality
    | maxCard       //maximum cardinality
    | qualExactCard //qualified exact cardinality
    | qualMinCard   //qualified minimum cardinality
    | qualMaxCard   //qualified maximum cardinality
    | indValueRes   //individual-value restriction
    | selfValueRes  //self-value restriction
    | classComb
ambiExactCardInd ::=
ABOUT | ALMOST | APPROXIMATELY | AROUND | CLOSE TO
preMinCardInd ::=
ATLEAST | LEAST | MORE THAN | NOT LESS THAN
postMinCardInd ::= OR MORE
preMaxCardInd ::=
ATMOST | MOST | LESS THAN | NOT MORE THAN | WITHIN
postMaxCardInd ::= OR LESS
selfInd ::= MYSELF | OURSELVES | YOURSELF | YOURSELVES | HIMSELF |
HERSELF | ITSELF | THEMSELVES
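A hypothetical usage sketch of the generated recognizer follows; the class names TEDEILexer/TEDEIParser and the Python target are assumptions made for illustration only, and presuppose that ANTLR has been run on a grammar file (say TEDEI.g4) whose top-level rule is start, as above.

# Sketch: accept or reject a lexicalization with an ANTLR-generated recognizer.
from antlr4 import InputStream, CommonTokenStream
from TEDEILexer import TEDEILexer      # generated by ANTLR (assumed)
from TEDEIParser import TEDEIParser    # generated by ANTLR (assumed)

def is_valid_tedei(lexicalization: str) -> bool:
    """Return True if the lexicalization parses under the TEDEI start rule."""
    lexer = TEDEILexer(InputStream(lexicalization))
    parser = TEDEIParser(CommonTokenStream(lexer))
    parser.start()  # invoke the top-level grammar rule
    return parser.getNumberOfSyntaxErrors() == 0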
4 SYNTACTIC TRANSFORMATION
The proposed system employs syntactic transformation to convert
TEDEI text (i.e. sentences having at least one TEDEI lexicalization)
to ACE text. We implemented syntactic transformation as actions
that are attached to grammar elements in TEDEI. We describe here
the important steps in this transformation process.
We refer to the phrases in a sentence that correspond to OWL
concepts as concept-terms and the phrases that correspond to
OWL relations as relation-terms. For example, in the sentence “Every adenine is a purine base found in DNA”, purine base is a conceptterm, and found in is a relation-term.
In a sentence, a concept-term or a relation-term may contain
more than one word. As per the rules of ACE, multi-word terms
are required to have hyphens. We handle such terms by inserting
a hyphen between consecutive words in the term. For example, the
concept-term purine base is transformed to purine-base.
If a concept-term or a relation-term is not present in the ACE
lexicon, we add it to the lexicon dynamically by tagging it with the
prefix that indicates its word class. All noun phrases having common nouns are identified as OWL concepts and they are tagged
with the prefix n. All verb phrases are identified as OWL relations and they are tagged with the prefix v. All noun phrases having proper nouns are identified as OWL individuals and they are
tagged with the prefix p. For example, the concept-term purine base
is tagged with n: to denote that it is an OWL concept. These prefixes are as per requirement of the ACE parser.
According to the ACE construction rules, every noun should be
preceded by an article. In the absence of an article, we insert an
appropriate article. We consider the determiner a to be the default
article for OWL atomic concepts and the determiner some to be the
default article for OWL role fillers.
We handle missing role fillers by adding something as the role
filler. This transformation is semantics-preserving because something denotes the top concept in OWL. For example, the sentence
all kids play is transformed to all kids play something.
In ACE, all coordinated noun phrases have to agree in number and verb form (either finite or infinite). In this context, noun
phrases denote OWL role fillers. In the case of a role filler that is
coordinated by and and or, we distribute the individual role fillers
to the relation according to ACE semantics. For example, the coordinated noun phrase likes cats and dogs is transformed to likes cats
and likes dogs.
In ACE, all coordinated verb phrases have to agree in number
and verb form (either finite or infinite). In this context, verb phrases
denote OWL relations. In the case of a relation that is coordinated
by and and or, we distribute the individual relations to the role
filler according to ACE semantics. For example, the coordinated
verb phrase seizes and detains a victim is transformed to seizes a
victim and detains a victim.
Table 1 lists down a few English sentences that are not ACE-compliant, along with the reasons for their non-compliance. We then show corresponding sentences that are made ACE-compliant through syntactic transformation. The transformed sentences become valid ACE sentences. Table 2 lists down a few ACE sentences that are not OWL-compliant, along with the reasons for their non-compliance. We then show corresponding sentences that are made OWL-compliant through suitable transformation. The transformed sentences can successfully be formalized into OWL.
Note that it might be possible to apply syntactic transformation either by modifying the grammar rules of ACE or by improving APE. However, ACE is a formal language that was developed with a focus on knowledge representation. Since the focus of our work is ontology authoring, we chose to have a separate module of syntactic transformation, which is independent of ACE/APE. This module will be unaffected by any update which would be made to ACE/APE. Hence, we decided to use ACE only as an intermediate language to generate axioms, instead of modifying ACE/APE.
5 AMBIGUITY IN FORMALIZATION
A natural language sentence can be ambiguous due to various reasons. We note that there are different types of ambiguity such as
lexical ambiguity, scope ambiguity, syntactic ambiguity and semantic ambiguity. The concept of ambiguity is well-studied in the context of NLP tasks such as POS tagging, word sense disambiguation
and sentence parsing [18] [16]. For instance, syntactic parsing of a
sentence can often lead to multiple parse trees. Various algorithms
are used to identify the most-probable parse tree. However, as far
as we know, there is no concrete study of ambiguity in the context
of formalization of sentences into OWL axioms. We investigate
the impact of two types of ambiguities, namely lexical ambiguity
and semantic ambiguity. In the following section, we describe each
type in detail and how both are handled by the system.
5.1 Lexical Ambiguity
Lexical ambiguity is a major type of ambiguity associated with formalization of sentences. An ontology element (i.e. a class, relation
or instance) can be lexicalized in more than one way due to various
modeling possibilities. It depends on factors such as the domain, individual preferences, and application. As a result, it is possible to
generate multiple formalizations for the same sentence, by combining different lexicalizations of its elements in various ways. For
example, consider the following sentence: Every vegetable pizza is
a tasty pizza. Note the adjective tasty that describes the noun pizza.
While formalizing the sentence, this adjective can either be lexicalized as a separate concept or associated with the corresponding
noun. This lexical ambiguity with respect to the adjective results
in two different formalizations, as shown below:
(1) VegetablePizza ⊑ TastyPizza
(2) VegetablePizza ⊑ Tasty ⊓ Pizza
Lexically ambiguous sentences can be disambiguated by using
a point of reference such as an existing ontology [9]. In that case,
the lexicalization that best fits the reference can be chosen. However, in the absence of suitable points of reference, the best possible
way is to generate all possible lexicalizations. Our system identifies
lexical ambiguity associated with formalization and generates all
possible formalizations of the sentence.
Williams [25] studies the syntax of identifiers from a corpus of
over 500 ontologies. The identifiers chosen for the study are class
Table 1: Making English sentences ACE-compliant through syntactic transformation

English Sentence: Every battery produces electricity.
Reason for non-compliance: Every noun should be prefixed by a determiner.
Transformed Sentence: Every battery produces some electricity.

English Sentence: An adenine is a purine base.
Reason for non-compliance: Multi-term expressions should be hyphenated.
Transformed Sentence: An adenine is a purine-base.

English Sentence: An abdomen exists between thorax and pelvis.
Reason for non-compliance: Coordinated noun phrases have to agree in number and verb form.
Transformed Sentence: An abdomen exists between thorax and exists between pelvis.

English Sentence: A kidnapper seizes and detains a victim.
Reason for non-compliance: Coordinated verb phrases have to agree in number and verb form.
Transformed Sentence: A kidnapper seizes a victim and detains a victim.

English Sentence: Every binomial consists of two terms.
Reason for non-compliance: Prepositional phrases should be attached to the corresponding verb.
Transformed Sentence: Every binomial consists-of two terms.
Table 2: Making ACE sentences OWL-compliant through syntactic transformation

ACE Sentence: All kids play.
Reason for non-compliance: Intransitive verbs are not supported by APE OWL generator.
Transformed Sentence: All kids play something.

ACE Sentence: Every person should learn some maths.
Reason for non-compliance: Modal verbs are not supported by APE OWL generator.
Transformed Sentence: Every person should-learn some maths.

ACE Sentence: Every abacus efficiently performs some arithmetic.
Reason for non-compliance: Adverbs are not supported by APE OWL generator.
Transformed Sentence: Every abacus efficiently-performs some arithmetic.

ACE Sentence: A console houses some electronic instruments.
Reason for non-compliance: Adjectives are not supported by APE OWL generator.
Transformed Sentence: A console houses some electronic-instruments.
identifiers, named individual identifiers, object property identifiers
and data property identifiers. According to the study, identifiers
follow simple syntactic patterns and each type of identifier can
be expressed through relatively few syntactic patterns. These patterns are expressed using Penn POS tag set [21]. We adapt these
syntactic patterns in our approach.
First, we POS-tag each word in the input text using Stanford
part-of-speech tagger [22]. Then, we use the aforementioned syntactic patterns to extract all possible identifiers from sentences. By
properly combining the identifiers, we generate all the formalizations of the sentence. In case of the above example, the following
phrases/words are extracted as identifiers: vegetable pizza, is, tasty
pizza, tasty and pizza. From these identifiers, our system generates
both the given formalizations.
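The following minimal sketch (not the paper's code; the CamelCase naming of classes is an assumption made for readability) shows how the identifiers extracted from the example above can be recombined into the two alternative formalizations:

# Sketch: combine extracted identifiers into alternative class expressions.
def to_class(term: str) -> str:
    """Turn a (possibly multi-word) identifier into a CamelCase class name."""
    return "".join(w.capitalize() for w in term.split())

def alternative_formalizations(subject: str, adjective: str, noun: str):
    """Yield the two readings of an adjective+noun phrase on the right-hand side."""
    lhs = to_class(subject)
    yield f"{lhs} ⊑ {to_class(adjective + ' ' + noun)}"          # adjective folded in
    yield f"{lhs} ⊑ {to_class(adjective)} ⊓ {to_class(noun)}"    # separate classes

for axiom in alternative_formalizations("vegetable pizza", "tasty", "pizza"):
    print(axiom)
# VegetablePizza ⊑ TastyPizza
# VegetablePizza ⊑ Tasty ⊓ Pizza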
5.2 Semantic Ambiguity
Another type of ambiguity associated with formalization is semantic ambiguity. Due to semantic ambiguity, a sentence can have
more than one interpretation. For example, consider the following sentence: Every driver drives a car. There are 3 possible interpretations
of this sentence. The formalizations corresponding to each interpretation are:
(1) Driver ⊑ ∃drives.Car
(2) Driver ⊑ ∀drives.Car
(3) Driver ⊑ ∃drives.Car ⊓ ∀drives.Car
Note the quantifications associated with each axiom. The first axiom denotes the correct interpretation (and hence the correct formalization) of the given sentence. The existential quantification is
appropriate due to the fact that a person should be driving at least one vehicle to become a driver. A universal quantification is inappropriate due to the fact that a driver might drive any vehicle, not necessarily a car.
Now consider a structurally similar sentence: Every vegetable pizza is made of vegetable items. There are 3 possible interpretations of this sentence. The corresponding formalizations are:
(1) VegPizza ⊑ ∃madeOf.VegItems
(2) VegPizza ⊑ ∀madeOf.VegItems
(3) VegPizza ⊑ ∃madeOf.VegItems ⊓ ∀madeOf.VegItems
However, in this case, according to world knowledge and common sense, the correct interpretation is denoted by the third formalization.
It is necessary to have access to sufficient background knowledge to disambiguate such sentences, as demonstrated above. However, in the absence of background knowledge, the best possible way is to generate multiple interpretations of the semantically ambiguous sentence. We studied various sentence patterns and then manually identified the ones that are semantically ambiguous. We also record how to map them to corresponding interpretations. In the online phase, the system checks whether the new sentence makes use of any one of the patterns that were identified beforehand. In such cases, the system generates all the corresponding interpretations of the sentence. In the case of sentences that do not contain any of the patterns, we generate only one interpretation of the sentence. The list of sentence patterns that indicate ambiguity is extendable.
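A minimal sketch (illustrative only, not the paper's implementation) of how one such pattern can be expanded into all three interpretations shown above:

# Sketch: expand a semantically ambiguous pattern into its candidate readings.
def interpretations(concept: str, role: str, filler: str):
    """Return the existential, universal and combined readings for C, R, D."""
    return [
        f"{concept} ⊑ ∃{role}.{filler}",
        f"{concept} ⊑ ∀{role}.{filler}",
        f"{concept} ⊑ ∃{role}.{filler} ⊓ ∀{role}.{filler}",
    ]

for axiom in interpretations("Driver", "drives", "Car"):
    print(axiom)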
6 RESULTS AND DISCUSSION
The details of the evaluation of our framework are given below. All
the datasets and evaluation results are available online3 .
6.1 Datasets
We use various types of datasets in our evaluation covering multiple domains. This facilitates concrete evaluation of our approach and also ensures that our approach is neither dataset-oriented nor domain-dependent. The datasets used for evaluation are PIZZA, SSN, VSAO, PP1, PP2 and LEXO. PIZZA, SSN and VSAO are obtained from Emani et al. [9]. Each dataset consists of 25 sentences that are text descriptions. Each dataset is based on a domain ontology. PIZZA is based on Pizza ontology, SSN is based on Semantic Sensor Network ontology and VSAO is based on Vertebrate Skeletal Anatomy ontology. PP1 and PP2 are built from scratch. We identified 40 classes from people-pets ontology4 and collected their descriptions from Wikipedia and WordNet resulting in PP1 and PP2 datasets respectively. LEXO is obtained from Völker et al. [24]. This dataset contains 115 sentences covering a variety of domains.
6.2 Axiom Generation
We use the ACE parser, Attempto Parsing Engine (APE), to translate ACE text into OWL axioms. The target ontology language chosen for formalization is OWL 2, in OWL/XML syntax.
Few examples to demonstrate syntactic transformation and axiom generation are given in Table 3. The table presents sentences
of various types and illustrates how the proposed system handles each type by generating the appropriate axiom. Each example
shows an input sentence and a formalization produced from the
sentence by the system. Here from the set of axioms produced by
the system we only show the most appropriate one. In Figure 2
we present a parse tree for the input sentence of the first example, “Every adenine is a purine base found in DNA”. The sentence is
translated through syntactic transformation in a bottom-up fashion resulting in an equivalent ACE sentence. The transformations
performed on the sentence are shown at respective locations in
the parse tree. In this case, the resultant ACE sentence is shown
at the root of the parse tree. The corresponding OWL-DL axiom is
Adenine ⊑ PurineBase ⊓ ∃foundIn.DNA.
Few examples to demonstrate handling of ambiguity are given
in Table 4. The given sentences are both lexically and semantically
ambiguous. Each example shows an input sentence and the various
axioms produced from the sentence by the system.
Figure 2: Parse tree for the input sentence “Every adenine is a purine base found in DNA”. The ACE sentence at the root of the tree is “Every n:adenine is a n:purine-base and v:found-in a n:DNA.”
Text Every adenine is a purine base found in DNA.
Axm Adenine ⊑ PurineBase ⊓ ∃foundIn.DNA
Text Sloppy giuseppe pizza is topped with mozzarella and parmesan.
Axm SloppyGiuseppePizza ⊑ ∃toppedWith.Mozzarella ⊓ ∃toppedWith.Parmesan
Text An interesting pizza is a pizza that has at least 3 toppings.
Axm InterestingPizza ⊑ Pizza ⊓ ≥3 has.Toppings
Text Every abdication is the act of abdicating.
Axm Abdication ⊑ ∃actOfAbdicating.⊤
Text Every exotic species is a species that is not native to a region.
Axm ExoticSpecies ⊑ Species ⊓ ¬∃isNativeTo.Region
Table 3: Syntactic transformation and axiom generation
6.3 Grammatical Coverage Analysis
We evaluated the quality of TEDEI by analyzing its grammatical
coverage. We compare the coverage of ACE and TEDEI. Our analysis shows that TEDEI has larger grammatical coverage as compared
to ACE.
The analysis is reported, in detail, in Table 5. IS refers to the
number of input sentences that are present in the dataset. AS refers
to the number of ACE sentences i.e. sentences that conform to
ACE grammar. TS refers to the number of TEDEI sentences, i.e. sentences that have at least one TEDEI lexicalization. Our system is guaranteed to generate at least one valid OWL axiom from every TEDEI sentence. For every dataset, the number of TEDEI sentences is significantly more than the number of ACE sentences. For instance, out of the 25 sentences in the PIZZA dataset, none of the sentences are valid according to ACE, whereas 19 sentences are TEDEI sentences. We can observe a similar pattern in every other dataset.
6.4 Formalization
We evaluated the number of formalizations generated by the proposed system. We chose the datasets having at least 50% TEDEI sentences for this evaluation. As can be observed from Table 5, PIZZA, SSN, and VSAO have at least 50% TEDEI sentences; hence, these datasets are used for this evaluation.
The results are reported in Table 6. IS denotes the number of input sentences in the dataset. LX denotes the total number of lexicalizations generated by the lexical ambiguity handler for all the input sentences. TLX denotes the total number of valid TEDEI lexicalizations, i.e. lexicalizations that conform to TEDEI. INP denotes the total number of interpretations generated by the semantic ambiguity handler. Each interpretation is the result of syntactic transformation of the TEDEI lexicalizations. VAX denotes the total number of valid ACE axioms, i.e. interpretations that conform to ACE.
As we can observe from the table, the proposed system generates many lexicalizations from a given input sentence, out of which many are valid TEDEI sentences. Based on various sentence patterns, we identify semantically ambiguous sentences, which have more than one interpretation. A major portion of the complete set of interpretations are valid ACE sentences, each of which can be converted to an equivalent OWL axiom. For instance, from the 25 sentences in the PIZZA dataset, 677 lexicalizations are generated, out of which 157 are valid according to TEDEI. Subsequently, 260 interpretations are generated, out of which 206 are valid according to ACE.
We also evaluated the correctness of the axioms. We chose the PIZZA dataset for this evaluation. We manually authored a gold standard set of axioms and compared it with the axioms generated by the system. Out of the 25 sentences in the dataset, for 17 sentences, the human-authored axiom is indeed one of the alternatives given by the system.
3 https://anon.to/SuLSzZ
4 http://sadi-ontology.semanticscience.org/people-pets.owl
DATASET   IS   LX     TLX   INP   VAX
PIZZA     25   677    157   260   206
SSN       25   5964   297   594   180
VSAO      25   560    93    186   72

Table 6: Evaluation of formalization on various datasets (IS denotes the number of input sentences, LX denotes the total number of lexicalizations, TLX denotes the total number of valid TEDEI lexicalizations, INP denotes the total number of interpretations, and VAX denotes the total number of valid ACE axioms)
Text Quarks possess color charge.
Axm Quark ⊑ ∃possess.ColorCharge
Axm Quark ⊑ ∃possess.Color ⊓ ∃possess.Charge
Axm Quark ⊓ ∃possess.ColorCharge ⊑ ⊤
Axm Quark ⊓ ∃possess.Color ⊓ ∃possess.Charge ⊑ ⊤
Text A vegetarian pizza is an interesting pizza.
Axm VegetarianPizza ⊑ InterestingPizza
Axm VegetarianPizza ⊑ Interesting ⊓ Pizza
Axm VegetarianPizza ⊓ InterestingPizza ⊑ ⊤
Axm VegetarianPizza ⊓ Interesting ⊓ Pizza ⊑ ⊤
Table 4: Handling of ambiguity
DATASET   PIZZA   SSN   VSAO   PP1   PP2   LEXO
IS        25      25    25     40    40    115
AS        0       0     0      1     2     1
TS        19      19    16     7     22    22

Table 5: Comparison of grammatical coverage of TEDEI and ACE on various datasets (IS refers to the number of input sentences, AS refers to the number of ACE sentences, and TS refers to the number of TEDEI sentences.)
7 CONCLUSIONS AND FUTURE WORKS
In this paper, we propose an ontology authoring framework that
extracts class expression axioms from natural language sentences
through grammar-based syntactic transformation. The input sentences that conform to the grammar are transformed into ACE.
We then use ACE parser to generate OWL axioms from the transformed text. We also explore the effect of ambiguity on formalization and construct axioms corresponding to alternative formalizations of a sentence.
Our framework clearly outperforms ACE in terms of the number and types of sentences the system can handle. In comparison
with LExO and other ontology learning systems, our framework
is a robust way to generate ontological axioms from text. TEDEI
helps in clearly defining the scope of the system and provides the
ability to reject a sentence and ask for reformulation. Employing
ACE as an intermediate language aids formalization and reduces
the complexity of the system. The output of the system is compared against human-authored axioms and in a substantial number
of cases, the human-authored axiom is indeed one of the alternatives
given by the system.
Our future works include enhancing the grammar so that the
framework can handle a larger range of sentences and investigating the impact of other types of ambiguities such as scope ambiguity on formalization.
REFERENCES
[1] Dan Brickley and R.V. Guha. 2004.
RDF Vocabulary Description Language 1.0: RDF Schema.
W3C February (2004), 1–15.
https://doi.org/10.1002/9780470773581
[2] L Bühmann and Daniel Fleischhacker. 2014.
Inductive Lexical Learning of Class Expressions.
Knowledge Engineering . . . (2014), 42–53.
http://link.springer.com/chapter/10.1007/978-3-319-13704-9
[3] Paul Buitelaar, Philipp Cimiano, and Bernardo Magnini. 2004.
Ontology Learning from Text : An Overview.
Learning (2004), 1–10.
https://doi.org/10.1.1.70.3041
[4] Philipp Cimiano and Johanna Völker. 2005. Text2Onto: A Framework for Ontology Learning and Data-Driven Change Discovery. Natural Language Processing
and Information Systems (2005), 227–238. http://dx.doi.org/10.1007/11428817
[5] Anne Cregan, Rolf Schwitter, and Thomas Meyer. 2007. Sydney OWL syntax Towards a controlled natural language syntax for OWL 1.1. In CEUR Workshop
Proceedings, Vol. 258.
[6] Deborah L. McGuinness and Frank van Harmelen. 2004. OWL Web Ontology Language Overview. W3C Recommendation 10, February (2004), 1–12.
https://doi.org/10.1145/1295289.1295290
[7] Cathy Dolbear, Glen Hart, Katalin Kovacs, John Goodwin, and Sheng Zhou. 2007.
The Rabbit language: description, syntax and conversion to OWL. Ordnance
Survey Research Labs Technical Report (2007).
7
[8] Euthymios Drymonas, Kalliopi Zervanou, and Euripides G M Petrakis. 2010. Unsupervised ontology acquisition from plain texts: The OntoGain system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 6177 LNCS. 277–287.
[9] Cheikh Kacfah Emani, Catarina Ferreira Da Silva, Bruno Fiés, and Parisa
Ghodous. 2015. BEAUFORD: A Benchmark for Evaluation of Formalisation of
Definitions in OWL. Open Journal of Semantic Web (OJSW) 2, 1 (2015), 4–15.
http://www.ronpub.com/publications/OJSW
[10] W Nelson Francis and Henry Kucera. 1979. The Brown Corpus: A Standard
Corpus of Present-Day Edited American English. (1979).
[11] Norbert E. Fuchs, Kaarel Kaljurand, and Tobias Kuhn. 2008. Attempto controlled
english for knowledge representation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 5224 LNCS. 104–124.
[12] Adam Funk, Valentin Tablan, Kalina Bontcheva, Hamish Cunningham, Brian
Davis, and Siegfried Handschuh. 2007. CLOnE: Controlled language for ontology editing. In Lecture Notes in Computer Science (including subseries Lecture
Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 4825
LNCS. 142–155.
[13] Aldo Gangemi, Valentina Presutti, Diego Reforgiato Recupero, Andrea Giovanni
Nuzzolese, Francesco Draicchio, and Misael Mongiovi. 2016. Semantic Web Machine Reading with FRED. Semantic-Web-Journal (2016).
[14] Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter PatelSchneider, and Ulrike Sattler. 2008. OWL 2: The next step for OWL. Web Semantics 6, 4 (nov 2008), 309–322.
[15] Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics-Volume
2. Association for Computational Linguistics, 539–545.
[16] Dan Jurafsky. 2000. Speech & language processing. Pearson Education India.
[17] C.J.H. Mann. 2007. The Description Logic Handbook âĂŞ Theory, Implementation and Applications. Kybernetes 32, 9/10 (dec 2007), k.2003.06732iae.006.
https://doi.org/10.1108/k.2003.06732iae.006
[18] Christopher D Manning, Hinrich Schütze, and Others. 1999. Foundations of statistical natural language processing. Vol. 999. MIT Press.
[19] T J Parr and R W Quong. 1995. ANTLR: A predicated-LL(k) parser generator.
Software: Practice and Experience 25, 7 (1995), 789–810.
[20] Giulio Petrucci, Chiara Ghidini, and Marco Rospocher. 2016. Ontology Learning
in the Deep. Knowledge Engineering and Knowledge Management: 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings
(2016), 480–495. https://doi.org/10.1007/978-3-642-33876-2
[21] Beatrice Santorini. 1990. Part-of-Speech Tagging Guidelines for the Penn Treebank Project (3rd Revision). University of Pennsylvania 3rd Revision 2nd Printing 53, MS-CIS-90-47 (1990), 33. https://doi.org/10.1017/CBO9781107415324.004
arXiv:arXiv:1011.1669v3
[22] Kristina Toutanova, Dan Klein, and Christopher D Manning. 2003. Feature-rich
part-of-speech tagging with a cyclic dependency network. In Proceedings of
the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1 (NAACL ’03),
(2003), 252–259. https://doi.org/10.3115/1073445.1073478
[23] Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. Ontolearn Reloaded:
A Graph-Based Algorithm for Taxonomy Induction. Computational Linguistics
39, 3 (2013), 665–707.
[24] Johanna Völker, Pascal Hitzler, and Philipp Cimiano. 2007. Acquisition of OWL
DL axioms from lexical resources. The Semantic Web: Research and Applications
(2007), 670–685.
[25] Sandra Williams. An analysis of POS tag patterns in ontology identifiers and labels.
Technical Report.
[26] Wilson Wong, Wei Liu, and Mohammed Bennamoun. 2012. Ontology Learning
from Text: A Look Back and into the Future. ACM Comput. Surv. 44, 4 (sep 2012),
20:1—-20:36. https://doi.org/10.1145/2333112.2333115
8
| 2 |
arXiv:0910.5833v1 [] 30 Oct 2009
From Single-thread to Multithreaded: An Efficient Static Analysis Algorithm
Jean-Loup Carré
Charles Hymans
November 2, 2009

Abstract
A great variety of static analyses that compute safety properties of single-thread programs have now been developed. This paper presents a systematic method to extend a class of such static analyses, so that they handle programs with multiple POSIX-style threads. Starting from a pragmatic operational semantics, we build a denotational semantics that expresses reasoning à la assume-guarantee. The final algorithm is then derived by abstract interpretation. It analyses each thread in turn, propagating interferences between threads, in addition to other semantic information. The combinatorial explosion that would ensue from the explicit consideration of all interleavings is thus avoided. The worst case complexity is only increased by a factor n compared to the single-thread case, where n is the number of instructions in the program. We have implemented prototype tools, demonstrating the practicality of the approach.
1 Introduction
Many static analyses have been developed to check safety properties of sequential programs [1, 2, 3, 4, 5], while more and more software applications are multithreaded. Naive approaches to analyze such applications would run by exploring all possible interleavings, which is impractical. Some previous proposals avoid this combinatorial explosion (see Related Work). Our contribution is to show that every static analysis framework for single-thread programs extends to one that analyzes multithreaded code with dynamic thread creation and with only a modest increase in complexity. We ignore concurrency-specific bugs, e.g., race conditions or deadlocks, as do some other authors [6]. If any, such bugs can be detected using orthogonal techniques [7, 8].
Outline. We describe in Section 2 a toy imperative language. It contains essential features of C with POSIX threads [9], with a thread creation primitive. The main feature of multithreaded code is that parallel threads may interfere, i.e., side-effects of one thread may change the value of variables in other threads. To take interference between threads into account, we model the behavior of a program by an infinite transition system: this is the operational semantics of our language, which we describe in Section 2.3. It is common practice in abstract interpretation to go from the concrete to the abstract semantics through an intermediate so-called collecting semantics [10]. In our case a different but similar concept is needed, which we call the G-collecting semantics and which we introduce in Section 3. This semantics will discover states, accumulate transitions encountered in the current thread and collect interferences from other threads. The main properties of this semantics (Proposition 2 and Theorem 1) are the technical core of this paper. These properties allow us to overapproximate the G-collecting semantics by a denotational semantics. Section 4 then derives an abstract semantics from the G-collecting semantics through abstract interpretation. We discuss algorithmic issues, implementation, questions of precision, and possible extensions in Section 5, examine the complexity of our analysis technique in Section 6, and conclude in Section 7.

Related Work
A great variety of static analyses that compute safety properties of single-thread programs have been developed, e.g., intervals [4], points-to graphs [11, 3], non-relational stores [1, 2] or relational stores such as octagons [5].
Our approach is similar to Rugina and Rinard [12, 13], in the sense that we also use an abstract semantics that derives tuples containing information about current states, transitions of the current thread, and interference from other threads. While their main parallel primitive is par, which runs two threads and waits for their completion before resuming computation, we are mostly interested in the more challenging thread creation primitive create, which spawns a thread that can survive its father. In Section 6.3, we handle par to show how it can be dealt with using our techniques.
Some authors present generalizations of specific analyses to multithreaded code, e.g., Venet and Brat [14] and Lammich and Müller-Olm [6], while our framework extends any single-threaded code analysis.
Our approach also has some similarities with Flanagan and Qadeer [15]. They use a model-checking approach to verify multi-threaded programs. Their algorithm computes a guarantee condition for each thread; one can see our static analysis framework as computing a guarantee, too. Furthermore, both analyses abstract away both the number and the ordering of interferences from other threads. Flanagan and Qadeer's approach still keeps some concrete information, in the form of triples containing a thread id, and concrete stores before and after transitions. They claim that their algorithm takes polynomial time in the size of the computed set of triples. However, such sets can have exponential size in the number of global variables of the program. When the nesting depth of loops and thread creation statements is bounded, our algorithm works in polynomial time. Moreover, we demonstrate that our analysis is still precise on realistic examples. Finally, while Flanagan and Qadeer assume a given, static, set of threads created at program start-up, we handle dynamic thread creation. The same restriction is required in Malkis et al. [16].
The 3VMC tool [17] has a more general scope. It is an extension of TVLA designed to do shape analysis and to detect specific multithreaded bugs. However, even without multithreading, TVLA already runs in doubly exponential time [18].
Other papers focus on bugs that arise because of multithreading primitives. This is orthogonal to our work. See [19, 20] for atomicity properties, the Locksmith and Goblint tools [7, 21, 22] for data races, and [8] for deadlock detection using geometric ideas.

2 Syntax and Operational Semantics
2.1 Simplified Language.
The syntax of our language is given in Fig. 1. It is decomposed into two parts: commands (cmd) and statements (stmt). A statement cmd, ℓ′ is a command with a return label where it should go after completion. E.g., in Fig. 2a, a thread at label ℓ3 will execute ℓ3 create(ℓ4 x := x + 1), ℓ2.

lv   ::=                                      left value
      | x                                     variable
      | ∗e                                    pointer dereference
e    ::=                                      expression
      | c                                     constant
      | lv                                    left value
      | &x                                    address
      | o(e1, e2)                             operator
cond ::=                                      condition
      | x                                     variable
      | ¬cond                                 negation
cmd  ::=                                      command
      | ℓ lv := e                             assignment
      | cmd1 ; cmd2                           sequence
      | ℓ if(cond)then{cmd1}else{cmd2}        if
      | ℓ while(cond){cmd}                    while
      | ℓ create(cmd)                         new thread
stmt ::=                                      statement
      | cmd, ℓ′                               command
      | ℓ guard(cond), ℓ′                     guard
      | ℓ spawn(ℓ″), ℓ′                       new thread

Figure 1: Syntax

(a) ℓ1 example1, ℓ∞:
    ℓ1 x := 0;
    ℓ2 while(true){ ℓ3 create(ℓ4 x := x + 1) }, ℓ∞

(b) ℓ5 example2, ℓ∞:
    ℓ5 x := 0; ℓ6 y := 0;
    ℓ7 create(ℓ8 x := x + y);
    ℓ9 y := 3, ℓ∞

(c) ℓ1 example3, ℓ∞:
    ℓ1 y := 0; ℓ2 z := 0;
    ℓ3 create(ℓ4 y := y + z);
    ℓ5 z := 3, ℓ∞

(d) ℓ10 example4, ℓ∞:
    ℓ10 y := 0; ℓ11 z := 0;
    ℓ12 create(ℓ13 y := 3);
    ℓ14 y := 1; ℓ15 z := y, ℓ∞

Figure 2: Program Examples

Commands and statements are labeled, and we denote by Labels the set of labels. Labels represent the control flow: the statement ℓ stmt, ℓ′ begins at label ℓ and terminates at label ℓ′; e.g., in Fig. 2b, a thread at label ℓ2 will execute the assignment x := x + 1 and go to label ℓ3. It is assumed that in a given command or statement each label appears only once. Furthermore, to represent the end of the execution, we assume a special label ℓ∞ which never appears in a command, but may appear as the return label of a statement. Intuitively, this label represents the termination of a thread: a thread at this label will not be able to execute any statement.
Notice that sequences cmd1 ; cmd2 are not labeled. Indeed, the label of a sequence is implicitly the label of its first command; e.g., the program of Fig. 2b is a sequence labeled by ℓ5. We write ℓ cmd when the label of cmd is ℓ, and we write ℓ stmt, ℓ′ for the statement stmt labeled by ℓ and ℓ′. A program is represented by a statement of the form ℓ cmd, ℓ∞. Other statements represent a partial execution of a program. The statements create, while and if are not atomic: they are composed of several basic steps, e.g., to enter a while loop. To model these basic steps, we introduce the statements ℓ1 spawn(ℓ2), ℓ3 and ℓ1 guard(cond), ℓ2. The semantics of create, while and if will then be defined using the semantics of ℓ1 spawn(ℓ2), ℓ3 and ℓ1 guard(cond), ℓ2. Local variables are irrelevant to our work, so all variables in our language are global. Nevertheless, local variables have been implemented as a stack (see Section 5).
This is a toy imperative language with dynamic thread creation. It can easily be extended to handle real-world languages like C or Ada, see Sections 2.4 and 5.
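To make the grammar of Fig. 1 concrete, here is a minimal sketch of how the abstract syntax could be represented in OCaml. The paper does not describe its implementation at this level, so every type and function name below is an illustrative assumption, not the authors' code.

(* Illustrative OCaml encoding of the abstract syntax of Fig. 1. *)
type label = int
let l_inf : label = -1          (* stands for the special label l_infinity *)

type lv =
  | Var of string               (* x         *)
  | Deref of expr               (* *e        *)
and expr =
  | Const of int                (* c         *)
  | Lval of lv                  (* lv        *)
  | Addr of string              (* &x        *)
  | Op of string * expr * expr  (* o(e1, e2) *)

type cond =
  | CVar of string              (* x         *)
  | Not of cond                 (* not cond  *)

type cmd =
  | Assign of label * lv * expr                 (* l lv := e                 *)
  | Seq of cmd * cmd                            (* cmd1 ; cmd2               *)
  | If of label * cond * cmd * cmd              (* l if(cond){..}else{..}    *)
  | While of label * cond * cmd                 (* l while(cond){..}         *)
  | Create of label * cmd                       (* l create(cmd)             *)

type stmt =
  | Cmd of cmd * label                          (* cmd, l'                   *)
  | Guard of label * cond * label               (* l guard(cond), l'         *)
  | Spawn of label * label * label              (* l spawn(l''), l'          *)

(* The label of a sequence is implicitly the label of its first command. *)
let rec label_of = function
  | Assign (l, _, _) | If (l, _, _, _) | While (l, _, _) | Create (l, _) -> l
  | Seq (c1, _) -> label_of c1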
2.2 Description of the system.
To represent threads, we use a set Ids of thread identifiers. During an execution of a program, each thread is represented by a different identifier. We assume a distinguished identifier main ∈ Ids, and take it to denote the initial thread.
When a program is executed, threads go from one label to another independently. A control point is a partial function P that maps thread identifiers to labels and that is defined on main. A control point associates each thread with its current label. The domain of P is the set of created threads; the other identifiers may be used later in the execution, for new threads. Let 𝒫 be the set of control points. We write Dom(P) for the domain of P and let P[i ↦ ℓ] be the partial function defined by:
P[i ↦ ℓ](j) = ℓ if j = i,  P(j) if j ∈ Dom(P) ∖ {i},  undefined otherwise.
Furthermore, threads may create other threads at any time. A genealogy of threads is a finite sequence of tuples (i, ℓ, j) ∈ Ids × Labels × Ids such that (a) any two tuples (i1, ℓ1, j1) and (i2, ℓ2, j2) have distinct third components (i.e., j1 ≠ j2), and (b) main is never the third component of a tuple. Such a tuple (i, ℓ, j) means that thread i created thread j at label ℓ. We say that j has been created in g when a tuple (i, ℓ, j) appears in g. Let Genealogies be the set of genealogies. We write g · g′ for the concatenation of the genealogies g and g′. Hypothesis (a) means that a thread is never created twice; hypothesis (b) means that the thread main is never created: it already exists at the beginning of the execution.
We let Stores be the set of stores. We leave the precise semantics of stores undefined for now, and only require two primitives write_{lv:=e}(σ) and bool(σ, cond). Given a store σ, write_{lv:=e} returns the store modified by the assignment lv := e. The function bool evaluates a condition cond in a store σ, returning true or false.
A tuple (i, P, σ, g) ∈ Ids × 𝒫 × Stores × Genealogies is a state if (a) i ∈ Dom(P), and (b) Dom(P) is the disjoint union of {main} and the set of threads created in g. Let States be the set of states. In a state (i, P, σ, g), i is the currently running thread, P states where we are in the control flow, σ is the current store and g is the genealogy of thread creations. Dom(P) is the set of existing threads. Hypothesis (a) means that the current thread exists; hypothesis (b) means that the only threads that exist are the initial thread and the threads created in the past.
In the single-threaded case, only the store and the control point of the unique thread are needed. In the case of several threads, the control point of each thread is needed: this is P.
There are two standard ways to model interferences between threads:
• either all threads are active, and at any time any thread can fire a transition,
• or, in each state there is an active thread, a.k.a. a current thread, and some so-called schedule transitions can change the active thread.
Our model rests on the latter choice: this allows us to keep track of a thread during execution. Thread ids do not carry information as to how threads were created; this is the role of the g component of states.
Given a program ℓ0 cmd, ℓ∞, the set Init of initial states is the set of tuples (main, P0, σ, ε) where Dom(P0) = {main}, P0(main) = ℓ0, σ is an arbitrary store, and ε is the empty word.
A transition is a pair of states τ = ((i, P, σ, g), (i′, P′, σ′, g · g′)) such that ∀j ∈ Dom(P) ∖ {i}, P(j) = P′(j), and if (j, ℓ, j′) is a letter of g′, then j = i and P(i) = ℓ.
We denote by Tr the set of all transitions and by Schedule ≝ {((i, P, σ, g), (j, P, σ, g)) ∈ Tr | i ≠ j} the set of transitions that may appear in the conclusion of the rule schedule. A transition in Schedule only changes the identifier of the current thread.

assign:       σ′ = write_{lv:=e}(σ)  ⟹  ℓ1 lv := e, ℓ2 ⊢ (ℓ1, σ) → (ℓ2, σ′)
guard:        bool(σ, cond) = true  ⟹  ℓ1 guard(cond), ℓ2 ⊢ (ℓ1, σ) → (ℓ2, σ)
while entry:  ℓ1 guard(cond), ℓ2 ⊢ t  ⟹  ℓ1 while(cond){ℓ2 cmd}, ℓ3 ⊢ t
while exit:   ℓ1 guard(¬cond), ℓ3 ⊢ t  ⟹  ℓ1 while(cond){ℓ2 cmd}, ℓ3 ⊢ t
then:         ℓ1 guard(cond), ℓ2 ⊢ t  ⟹  ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4 ⊢ t
else:         ℓ1 guard(¬cond), ℓ3 ⊢ t  ⟹  ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4 ⊢ t
parallel:     P(i) = ℓ  and  ℓ1 stmt, ℓ2 ⊢ (ℓ, σ) → (ℓ′, σ′)  ⟹  ℓ1 stmt, ℓ2 ⊨ (i, P, σ, g) → (i, P[i ↦ ℓ′], σ′, g)
spawn:        P(i) = ℓ1,  j is fresh in (i, P, σ, g),  P′ = P[i ↦ ℓ3][j ↦ ℓ2]  ⟹  ℓ1 spawn(ℓ2), ℓ3 ⊨ (i, P, σ, g) → (i, P′, σ, g · (i, ℓ2, j))
then body:    ℓ2 cmd1, ℓ4 ⊨ τ  ⟹  ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4 ⊨ τ
else body:    ℓ3 cmd2, ℓ4 ⊨ τ  ⟹  ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4 ⊨ τ
while body:   ℓ2 cmd, ℓ1 ⊨ τ  ⟹  ℓ1 while(cond){ℓ2 cmd}, ℓ3 ⊨ τ
create:       ℓ1 spawn(ℓ2), ℓ3 ⊨ τ  ⟹  ℓ1 create(ℓ2 cmd), ℓ3 ⊨ τ
child:        ℓ2 cmd, ℓ∞ ⊨ τ  ⟹  ℓ1 create(ℓ2 cmd), ℓ3 ⊨ τ
sequence 1:   ℓ1 cmd1, ℓ2 ⊨ τ  ⟹  ℓ1 cmd1 ; ℓ2 cmd2, ℓ3 ⊨ τ
sequence 2:   ℓ2 cmd2, ℓ3 ⊨ τ  ⟹  ℓ1 cmd1 ; ℓ2 cmd2, ℓ3 ⊨ τ
schedule:     P(j) is defined,  i ≠ j  ⟹  ℓ stmt, ℓ′ ⊨ (i, P, σ, g) → (j, P, σ, g)

Figure 3: Operational semantics rules (⊢ denotes the local judgment and ⊨ the global judgment defined in Section 2.3)
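The following is a small OCaml sketch of the structures of this section (control points, genealogies, states) and of the freshness side condition of the spawn rule. The representation choices and names are assumptions made for illustration only.

(* Sketch of states and genealogies from Section 2.2. *)
module IdMap = Map.Make (Int)
module StrMap = Map.Make (String)

type label = int
type tid = int
let main : tid = 0

type store = int StrMap.t                      (* variables to integer values *)
type control_point = label IdMap.t             (* partial map Ids -> Labels   *)
type genealogy = (tid * label * tid) list      (* (i, l, j): i created j at l *)

type state = {
  current : tid;              (* i : the currently running thread *)
  cp      : control_point;    (* P : label of every existing thread *)
  sigma   : store;            (* current store *)
  gen     : genealogy;        (* history of thread creations *)
}

(* P[i -> l] : update the control point of thread i. *)
let set_label (p : control_point) (i : tid) (l : label) : control_point =
  IdMap.add i l p

(* "j is fresh in (i, P, sigma, g)": j differs from i, has no label yet,
   and never appears in the genealogy. *)
let is_fresh (s : state) (j : tid) : bool =
  j <> s.current
  && not (IdMap.mem j s.cp)
  && List.for_all (fun (a, _, b) -> a <> j && b <> j) s.gen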
2.3 Evolution.
To model interleavings, we use a small-step semantics: each statement gives rise to an infinite transition system over states, where edges s1 → s2 correspond to elementary computation steps from state s1 to state s2. We define the judgment ℓ1 stmt, ℓ2 ⊨ s1 → s2 to state that s1 → s2 is one of the global computation steps that arise when the statement is executed, returning to its return label on termination. To simplify the semantic rules, we use an auxiliary judgment ℓ1 stmt, ℓ2 ⊢ (ℓ, σ) → (ℓ′, σ′) to describe evolutions that are local to a given thread.
Judgments are derived using the rules of Fig. 3. The rule parallel transforms local transitions into global transitions. The while body and sequence rules are global because while loops and sequences may contain global sub-commands, e.g., ℓ1 while(x){ℓ2 create(ℓ3 x := 0)}. In spawn, the expression "j is fresh in (i, P, σ, g)" means that i ≠ j, P(j) is not defined and j never appears in g, i.e., in g there is no tuple (i, ℓ, i′) with i or i′ equal to j. Intuitively, a fresh identifier is an identifier that has never been used (we keep track of used identifiers in g).
We define the set of transitions generated by the statement ℓ stmt, ℓ′:
Tr_{ℓ stmt, ℓ′} = {(s, s′) | ℓ stmt, ℓ′ ⊨ s → s′}.
Notice that, unlike Flanagan and Qadeer [15], an arbitrary number of threads may be spawned, e.g., with the program ℓ1 example1, ℓ∞ of Fig. 2a. Therefore, Ids is infinite, and so are 𝒫 and Tr_{ℓ stmt, ℓ′}. Furthermore, Stores may be infinite, e.g., if stores map variables to integers. Therefore, we cannot have a complexity depending on the cardinal of Tr_{ℓ stmt, ℓ′}.

Example. Let us consider stores that are maps from a unique variable to an integer. We write [x = n] for the store that maps x to the integer n. The transitions generated by the statements extracted from Fig. 2a are:
Tr_{ℓ1 x:=0, ℓ2} = {((i, P, [x = n], g), (i, P[i ↦ ℓ2], [x = 0], g)) | P(i) = ℓ1 ∧ i ∈ Ids ∧ n ∈ Z}.
Tr_{ℓ4 x:=x+1, ℓ∞} = {((i, P, [x = n], g), (i, P[i ↦ ℓ∞], [x = n + 1], g)) | P(i) = ℓ4 ∧ i ∈ Ids ∧ n ∈ Z}.
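The local judgment used by the assign and guard rules of Fig. 3 is easy to execute. The following OCaml sketch shows it over single-integer-variable stores as in the Example above; the types and helper names are assumptions, not part of the paper.

(* Sketch of the local judgment  l1 basic, l2 |- (l, s) -> (l', s'). *)
type label = int
type store = int                 (* the store [x = n] is just n *)

type basic =
  | Assign of label * (store -> store) * label   (* l1 x := e, l2, e as a function *)
  | Guard  of label * (store -> bool) * label    (* l1 guard(cond), l2             *)

let step_local (b : basic) (l, s) : (label * store) option =
  match b with
  | Assign (l1, write, l2) when l = l1 -> Some (l2, write s)        (* rule assign *)
  | Guard  (l1, cond,  l2) when l = l1 && cond s -> Some (l2, s)    (* rule guard  *)
  | _ -> None

(* l1 x := 0, l2  and  l4 x := x + 1, l_infinity  from Fig. 2a: *)
let l_inf = -1
let assign_zero = Assign (1, (fun _ -> 0), 2)
let incr_x      = Assign (4, (fun n -> n + 1), l_inf)
let _ = step_local assign_zero (1, 7)      (* = Some (2, 0)     *)
let _ = step_local incr_x (4, 0)           (* = Some (l_inf, 1) *)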
2.4 Properties of the language
Let Labs(ℓ cmd, ℓ∞) be the set of labels of the statement ℓ cmd, ℓ∞. We also define, by induction on commands, the set of labels of subthreads Labs_child(·):
Labs_child(ℓ1 create(ℓ2 cmd), ℓ3) = Labs(ℓ2 cmd, ℓ∞),
Labs_child(ℓ1 cmd1 ; ℓ2 cmd2, ℓ3) = Labs_child(ℓ1 cmd1, ℓ2) ∪ Labs_child(ℓ2 cmd2, ℓ3),
Labs_child(ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4) = Labs_child(ℓ2 cmd1, ℓ4) ∪ Labs_child(ℓ3 cmd2, ℓ4),
Labs_child(ℓ1 while(cond){ℓ2 cmd}, ℓ3) = Labs_child(ℓ2 cmd, ℓ1),
and, for basic commands, Labs_child(ℓ1 basic, ℓ2) = ∅.
A statement generates only transitions from its labels and to its labels; this is formalized by the following lemma:

Lemma 1. If (s, s′) ∈ Tr_{ℓ stmt, ℓ′} ∖ Schedule then label(s) ∈ Labs(ℓ stmt, ℓ′) ∖ {ℓ′}, label(s′) ∈ Labs(ℓ stmt, ℓ′) and thread(s) = thread(s′).

As a consequence of Lemma 1, we have the following lemma:

Lemma 2. If label(s) ∉ Labs(ℓ stmt, ℓ′) ∖ {ℓ′} then for all states s′, (s, s′) ∉ Tr_{ℓ stmt, ℓ′} ∖ Schedule.

If, during the execution of a statement ℓ stmt, ℓ′, a thread creates another thread, then the subthread is at a label of the command; furthermore, this label is in Labs_child(ℓ stmt, ℓ′).

Lemma 3. If (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ Tr_{ℓ stmt, ℓ′} ∖ Schedule and j ∈ Dom(P′) ∖ Dom(P) then P′(j) ∈ Labs_child(ℓ stmt, ℓ′) ⊆ Labs(ℓ stmt, ℓ′).

Lemma 4. If (s, s′) ∈ Tr_{ℓ stmt, ℓ′} ∖ Schedule and label(s) ∈ Labs_child(ℓ stmt, ℓ′) ∖ {ℓ′} then label(s′) ∈ Labs(ℓ stmt, ℓ′). Furthermore ℓ ∉ Labs_child(ℓ stmt, ℓ′) and ℓ′ ∉ Labs_child(ℓ stmt, ℓ′).

Notice that in Fig. 3 some statements are atomic. We call these statements basic statements. Formally, a basic statement is a statement of the form ℓ1 lv := e, ℓ2, ℓ1 guard(cond), ℓ2 or ℓ1 spawn(ℓ3), ℓ2. On basic statements, we have a more precise lemma on labels:

Lemma 5. Let ℓ1 basic, ℓ2 be a basic statement. If (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ Tr_{ℓ1 basic, ℓ2} ∖ Schedule then thread(s) = thread(s′), label(s) = ℓ1 and label(s′) = ℓ2.
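The inductive definitions of Labs and Labs_child translate directly into recursive functions. Here is a sketch over a reduced command type in OCaml; the constructors mirror Fig. 1, the return-label convention (the second argument) follows Section 2.1, and all names are assumptions of this note rather than the authors' code.

(* Sketch of Labs and Labs_child (Section 2.4). *)
module LabelSet = Set.Make (Int)

type label = int
type cmd =
  | Basic of label                      (* l lv := e (details irrelevant here) *)
  | Seq of cmd * cmd
  | If of label * cmd * cmd
  | While of label * cmd
  | Create of label * cmd

let rec first_label = function
  | Basic l | If (l, _, _) | While (l, _) | Create (l, _) -> l
  | Seq (c1, _) -> first_label c1

(* Labs(l cmd, ret): every label occurring in the command plus the return label. *)
let rec labs (c : cmd) (ret : label) : LabelSet.t =
  let open LabelSet in
  match c with
  | Basic l -> of_list [l; ret]
  | Seq (c1, c2) -> union (labs c1 (first_label c2)) (labs c2 ret)
  | If (l, c1, c2) -> add l (union (labs c1 ret) (labs c2 ret))
  | While (l, body) -> add l (add ret (labs body l))
  | Create (l, body) -> add l (add ret (labs body (-1)))   (* -1 plays l_infinity *)

(* Labs_child: labels reachable by threads created inside the statement. *)
let rec labs_child (c : cmd) (ret : label) : LabelSet.t =
  let open LabelSet in
  match c with
  | Basic _ -> empty
  | Seq (c1, c2) -> union (labs_child c1 (first_label c2)) (labs_child c2 ret)
  | If (_, c1, c2) -> union (labs_child c1 ret) (labs_child c2 ret)
  | While (l, body) -> labs_child body l
  | Create (_, body) -> labs body (-1)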
3 G-collecting Semantics

3.1 Basic Concepts
To prepare the grounds for abstraction, we introduce an intermediate semantics, called the G-collecting semantics, which associates a function on configurations with each statement. The aim of this semantics is to associate with each statement a transfer function that will be abstracted (see Section 4) as an abstract transfer function.

thread(i, P, σ, g) ≝ i
label(i, P, σ, g) ≝ P(i)
after(i, P, σ, g) ≝ {(j, P′, σ′, g · g′) ∈ States | j ∈ desc_{g′}({i})}
For X ⊆ Ids:
desc_ε(X) ≝ X
desc_{(i,ℓ,j)·g}(X) ≝ desc_g(X ∪ {j}) if i ∈ X, and desc_g(X) otherwise.

Figure 4: Auxiliary definitions

A concrete configuration is a tuple Q = ⟨S, G, A⟩: 1. S is the current state of the system during an execution, 2. G, for guarantee, represents what the current thread and its descendants can do, and 3. A, for assume, represents what the other threads can do. Formally, S is a set of states, and G and A are sets of transitions containing Schedule. The set of concrete configurations is a complete lattice for the ordering ⟨S1, G1, A1⟩ ≤ ⟨S2, G2, A2⟩ ⇔ S1 ⊆ S2 ∧ G1 ⊆ G2 ∧ A1 ⊆ A2. Proposition 4 will establish the link between the operational and the G-collecting semantics.
Figure 5 illustrates the execution of a whole program. Each vertical line represents the execution of a thread from top to bottom, and each horizontal line represents the creation of a thread. At the beginning (top of the figure), there is only the thread main = j0. During execution, each thread may execute transitions. At state s0, thread(s0) denotes the currently running thread (or current thread), see Fig. 4. In Fig. 5, the current thread of s0 is j0 and the current thread of s is j2.

[Figure 5: States. Each vertical line is the execution of one thread (j0 = main and its descendants j1, …, j6); horizontal lines mark thread creations; two states s0 and s are highlighted.]

During the program execution given in Fig. 5, j0 creates j1, which creates j3. We say that j1 is a child of j0 and that j0 is the parent of j1. We then introduce the concept of descendant: j3 is a descendant of j0 because it has been created by j1, the thread which has been created by j0. More precisely, descendants depend on genealogies. Consider the state s0 = (j0, P0, σ0, g0) with g0 = [(j0, ℓ1, j1)]: the set of descendants of j0 from g0 (written desc_{g0}({j0}), see Fig. 4) is just {j0, j1}. The set of descendants of a given thread increases during the execution of the program. In Fig. 5, the genealogy of s is of the form g0 · g for some g, here g = [(j0, ℓ2, j2), (j1, ℓ3, j3), (j2, ℓ4, j4)]. When the execution of the program reaches the state s, the set of descendants of j0 from g0 · g is desc_{g0·g}({j0}) = {j0, j1, j2, j3, j4}.
In a genealogy, there are two important pieces of information. First, there is a tree structure: a thread creates children that may create children and so on. Second, there is a global time, e.g., in g, the thread j2 has been created before the thread j3.
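The descendant computation desc of Fig. 4 is a simple left-to-right walk over the genealogy. Here is a hypothetical OCaml sketch of it, reusing the representation assumed earlier; the example in the comment is the one from the text above.

(* Sketch of desc_g(X) from Fig. 4. *)
module IdSet = Set.Make (Int)

type tid = int
type label = int
type genealogy = (tid * label * tid) list

let rec desc (g : genealogy) (x : IdSet.t) : IdSet.t =
  match g with
  | [] -> x                                            (* desc_epsilon(X) = X *)
  | (i, _l, j) :: rest ->
      if IdSet.mem i x then desc rest (IdSet.add j x)  (* i in X: j joins X   *)
      else desc rest x

(* s' = (j, P', sigma', g.g') belongs to after(i, P, sigma, g)
   iff j is in desc_{g'}({i}); only the suffix g' and the two ids matter. *)
let in_after ~(anchor : tid) ~(suffix : genealogy) ~(current : tid) : bool =
  IdSet.mem current (desc suffix (IdSet.singleton anchor))

(* With g0.g = [(j0,l1,j1); (j0,l2,j2); (j1,l3,j3); (j2,l4,j4)] one gets
   desc (g0.g) {j0} = {j0; j1; j2; j3; j4}, while desc g {j0} = {j0; j2; j4}. *)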
Lemma 6. Let g · g′ be a genealogy and i, j threads which are not created in g′. Then either desc_{g′}({j}) ⊆ desc_{g·g′}({i}) or desc_{g′}({j}) ∩ desc_{g·g′}({i}) = ∅.

Proof. We prove this lemma by induction on g′. If g′ = ε, then desc_ε({j}) = {j}, and the claim holds since either j ∈ desc_g({i}) or j ∉ desc_g({i}).
Let us consider the case g′ = g″ · (i′, ℓ′, j′). By induction hypothesis, either desc_{g″}({j}) ⊆ desc_{g·g″}({i}) or desc_{g″}({j}) ∩ desc_{g·g″}({i}) = ∅.
In the first case, if i′ ∈ desc_{g″}({j}), then j′ ∈ desc_{g″·(i′,ℓ′,j′)}({j}) and j′ ∈ desc_{g·g″·(i′,ℓ′,j′)}({i}); otherwise j′ ∉ desc_{g″·(i′,ℓ′,j′)}({j}).
In the second case, let us consider the subcase i′ ∈ desc_{g″}({j}). Then i′ ∉ desc_{g·g″}({i}). In addition, j′ is not created in g · g″ (a thread cannot be created twice in a genealogy), therefore j′ ∉ desc_{g·g″}({i}). Hence j′ ∈ desc_{g″·(i′,ℓ′,j′)}({j}) and j′ ∉ desc_{g·g″·(i′,ℓ′,j′)}({i}).
The subcase i′ ∈ desc_{g·g″}({i}) is similar. Finally, consider the subcase i′ ∉ desc_{g″}({j}) ∪ desc_{g·g″}({i}). Then desc_{g·g″·(i′,ℓ′,j′)}({i}) = desc_{g·g″}({i}) and desc_{g″·(i′,ℓ′,j′)}({j}) = desc_{g″}({j}). □

We also need to consider sub-genealogies such as g. In this partial genealogy, j1 has not been created by j0. Hence desc_g({j0}) = {j0, j2, j4}. Notice that j3 ∉ desc_g({j0}) even though the creation of j3 is in the genealogy g.
During an execution, after having encountered a state s0 = (j0, P0, σ0, g0), we distinguish two kinds of descendants of j0: (i) those which already exist in state s0 (except j0 itself) and their descendants, (ii) j0 and its other descendants. Each thread of kind (i) has been created by a statement executed by j0. We call after(s0) the states from which a thread of kind (ii) can execute a transition. In Fig. 5, the thick lines describe all the states encountered while executing the program that fall into after(s0).
The following lemma states some properties of after:

Lemma 7. Let T be a set of transitions and let (s0, s1) ∈ T⋆. Then:
1. If thread(s0) = thread(s1) then s1 ∈ after(s0).
2. If s1 ∈ after(s0) then after(s1) ⊆ after(s0).

Proof. Let (i0, P0, σ0, g0) = s0 and (i1, P1, σ1, g1) = s1. By the definition of transitions, there exists g1′ such that g1 = g0 · g1′. Because i0 ∈ desc_ε({i0}), i0 ∈ desc_{g1′}({i0}). Therefore, if thread(s0) = thread(s1), i.e., i1 = i0, then s1 ∈ after(s0) (by definition of after).
Let us assume that s1 ∈ after(s0). Let s2 = (i2, P2, σ2, g2) ∈ after(s1). Then there exists g2′ such that g2 = g1 · g2′ = g0 · g1′ · g2′ and i2 ∈ desc_{g2′}({i1}). Because s1 ∈ after(s0), by definition, i1 ∈ desc_{g1′}({i0}). Therefore i1 ∈ desc_{g2′}({i1}) ∩ desc_{g1′·g2′}({i0}). According to Lemma 6, desc_{g2′}({i1}) ⊆ desc_{g1′·g2′}({i0}). Hence i2 ∈ desc_{g1′·g2′}({i0}) and therefore s2 ∈ after(s0). □

When a schedule transition is executed, the current thread changes. The future descendants of the past current thread and of the new current thread are disjoint. This is formalized by the following lemma:

Lemma 8. If (s1, s2) ∈ Schedule then after(s1) ∩ after(s2) = ∅.

Proof. Let (i1, P1, σ1, g1) = s1 and i2 = thread(s2). Then (i2, P1, σ1, g1) = s2. Let s = (i, P, σ, g) ∈ after(s1) ∩ after(s2). By definition of after, there exists g′ such that g = g1 · g′, i ∈ desc_{g′}({i1}) and i ∈ desc_{g′}({i2}). Furthermore i1 and i2 are in Dom(P1). Therefore i1 and i2 are either created in g1, or are main. Hence, i1 and i2 cannot be created in g′. Therefore, i2 ∉ desc_{g′}({i1}), so desc_{g′}({i2}) ⊄ desc_{ε·g′}({i1}). Using Lemma 6 we conclude that desc_{g′}({i1}) ∩ desc_{g′}({i2}) = ∅. This contradicts i ∈ desc_{g′}({i1}) and i ∈ desc_{g′}({i2}). □

In the following, for a set of states S, we write ∁S = States ∖ S for its complement, and R|S for the restriction of a relation R to sources in S (these notations are recalled in Section 3.2). During the execution of a set of transitions T that do not create threads, the set of descendants does not increase:

Lemma 9. Let T be a set of transitions such that for all (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ T, g = g′. Let s0 = (i0, P0, σ0, g0), s = (i, P, σ, g0 · g) and s′ = (i′, P′, σ′, g0 · g · g′). If (s, s′) ∈ (A|∁after(s0) ∪ T)⋆ then desc_{g·g′}({i0}) = desc_g({i0}).

Proof. Let s1, …, sn be a sequence of states such that s1 = s, sn = s′ and for all k ∈ {1, …, n−1}, (sk, sk+1) ∈ A|∁after(s0) ∪ T. Let (ik, Pk, σk, g0 · g · gk) = sk. If gk ≠ gk+1 then (sk, sk+1) ∈ A|∁after(s0), hence ik ∉ desc_{g·gk}({i0}) and then desc_{g·gk}({i0}) = desc_{g·gk+1}({i0}). Therefore, in all cases desc_{g·gk}({i0}) = desc_{g·gk+1}({i0}) and, by a straightforward induction, desc_{g·g′}({i0}) = desc_g({i0}). □

Lemma 10. Let T be a set of transitions such that for all (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ T, g = g′. Let s = (i, P, σ, g) and s′ = (i′, P′, σ′, g · g′). If (s, s′) ∈ (A|∁after(s) ∪ T)⋆ then desc_{g′}({i}) = {i}.

Proof. Apply Lemma 9 with s0 = s. □

These lemmas have a consequence on after:

Lemma 11. Let T be a set of transitions such that for all (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ T, g = g′. If (s0, s1) ∈ (A|∁after(s0) ∪ T)⋆ and s1 ∈ after(s0) then thread(s1) = thread(s0).

Proof. Let (i0, P0, σ0, g0) = s0 and (i1, P1, σ1, g0 · g1) = s1. By Lemma 10, desc_{g1}({i0}) = {i0}, and by definition of after, i1 ∈ desc_{g1}({i0}). □

Lemma 12. Let T1 be a set of transitions such that for all (s, s′) = ((i, P, σ, g), (i′, P′, σ′, g′)) ∈ T1, g = g′, and let T2 be a set of transitions. Let s0, s1, s be three states such that (s0, s1) ∈ T1⋆, thread(s0) = thread(s1) and (s1, s) ∈ T2⋆. If s ∈ after(s0) then s ∈ after(s1).

Proof. Let (i0, P0, σ0, g0) = s0, (i1, P1, σ1, g0 · g1) = s1 and (i, P, σ, g0 · g1 · g) = s. By Lemma 10, desc_{g1}({i0}) = {i0}, and by definition of after, i1 ∈ desc_{g1}({i0}). Therefore desc_{g1·g}({i0}) = desc_g(desc_{g1}({i0})) = desc_g({i0}). Because s ∈ after(s0), i ∈ desc_{g1·g}({i0}), therefore i ∈ desc_g({i0}) = desc_g({i1}). Hence s ∈ after(s1). □

[Figure 6: G-collecting semantics. Four execution diagrams over threads j0, …, j6, with states s0 and s1 marked: (a) Reach, (b) Reach, (c) Par, (d) Sub.]
3.2 Definition of the G-collecting Semantics
Let us recall some classical definitions. For any binary relation R on states, let R|S = {(s, s′) ∈ R | s ∈ S} be the restriction of R to S and R⟨S⟩ = {s′ | ∃s ∈ S : (s, s′) ∈ R} be the application of R to S. R; R′ = {(s, s″) | ∃s′ ∈ States : (s, s′) ∈ R ∧ (s′, s″) ∈ R′} is the composition of R and R′. Let R⋆ = ⋃_{k∈N} R^k where R^0 = {(s, s) | s ∈ States} and R^{k+1} = R; R^k. Finally, for any set of states S, let ∁S = States ∖ S be the complement of S.
The definition of the G-collecting semantics ⟦ℓ stmt, ℓ′⟧ of a statement ℓ stmt, ℓ′ requires some intermediate relations and sets, gathered in a quintuple written ⦃ℓ stmt, ℓ′⦄. The formal definition is the following:

Definition 1.
⟦ℓ stmt, ℓ′⟧⟨S, G, A⟩ ≝ ⟨S′, G ∪ Self ∪ Par ∪ Sub, A ∪ Par ∪ Sub⟩
where ⦃ℓ stmt, ℓ′⦄⟨S, G, A⟩ ≝ [Reach, Ext, Self, Par, Sub] and:
Reach = {(s0, s1) | (s0, s1) ∈ ((G|after(s0) ∩ Tr_{ℓ stmt, ℓ′}) ∪ A|∁after(s0))⋆ ∧ thread(s0) = thread(s1) ∧ label(s0) = ℓ}
S′ = {s1 | s1 ∈ Reach⟨S⟩ ∧ label(s1) = ℓ′}
Self = {(s, s′) ∈ Tr_{ℓ stmt, ℓ′} | s ∈ Reach⟨S⟩}
Par = {(s, s′) ∈ Tr_{ℓ stmt, ℓ′} | ∃s0 ∈ S : (s0, s) ∈ Reach; Schedule ∧ s ∈ after(s0)}
Ext(s0, s1) = ((G|after(s0) ∩ Tr_{ℓ stmt, ℓ′}) ∪ A|∁after(s0) ∪ G|after(s1))⋆
Sub = {(s, s′) | ∃(s0, s1) ∈ S × S′ : (s0, s1) ∈ Reach ∧ (s1, s) ∈ Ext(s0, s1) ∧ s ∈ after(s0) ∖ after(s1)}

Let us read this definition together, on the special cases shown in Fig. 6. This will explain the rather intimidating Definition 1 step by step, introducing the necessary complications as they come along. The statement is executed between states s0 = (j0, P, σ, g) and s1 = (j0, P′, σ′, g · g′).
Figure 6(a) describes the single-thread case: there is no thread interaction during the execution of ℓ stmt, ℓ′. The thread j5 is spawned after the execution of the statement. E.g., in Fig. 2b, ℓ6 y := 0, ℓ7. In this simple case, a state s is reachable from s0 if and only if there exists a path from s0 to s using only transitions done by the unique thread (these transitions should be in the guarantee G) and that are generated by the statement. S′ represents the final states reachable from S. Finally, in this case:
Reach = {(s0, s1) ∈ (G ∩ Tr_{ℓ stmt, ℓ′})⋆ | label(s0) = ℓ}
S′ = {s1 | s1 ∈ Reach⟨S⟩ ∧ label(s1) = ℓ′}
Self = {(s, s′) ∈ Tr_{ℓ stmt, ℓ′} | s ∈ Reach⟨S⟩}
Par = Sub = ∅
⟦ℓ stmt, ℓ′⟧⟨S, G, Schedule⟩ = ⟨S′, G ∪ Self, Schedule⟩
Figure 6(b) is more complex: threads j1 and j3 interfere with j0. These interferences are assumed to be in A. Some states can be reached only with such interference transitions. E.g., consider the statement ℓ14 y := 1; ℓ15 z := y, ℓ∞ in Fig. 2d: at the end of this statement, the value of z may be 3, because the statement ℓ13 y := 3, ℓ∞ may be executed when the thread main is at label ℓ15. Therefore, to avoid missing some reachable states, transitions of A are taken into account in the definition of Reach. In Fig. 6(b), the statement ℓ stmt, ℓ′ is executed by descendants of j0 of kind (ii) (i.e., in after(s0)), and the interferences come from j1 and j3, which are descendants of kind (i) (i.e., in ∁after(s0)). Finally, we find the complete formula of Definition 1:
Reach = {(s0, s1) | (s0, s1) ∈ ((G|after(s0) ∩ Tr_{ℓ stmt, ℓ′}) ∪ A|∁after(s0))⋆ ∧ thread(s0) = thread(s1) ∧ label(s0) = ℓ}.
In Fig. 6(c), when j0 executes the statement ℓ stmt, ℓ′ it creates subthreads (j2 and j4) which execute transitions in parallel with the statement. The guarantee G is not supposed to contain only the transitions executed by the current thread but also these transitions. These transitions, represented by thick lines in Fig. 6(c), are collected into the set Par. Consider such a transition: it is executed in parallel with the statement, i.e., from a state of Schedule ∘ Reach({s0}).

interfere_A(S) ≝ {s′ | ∃s ∈ S : (s, s′) ∈ (A|∁after(s) ∪ Schedule)⋆ ∧ thread(s) = thread(s′)}
post(ℓ) ≝ {s′ | ∃s = (i, P, σ, g · (i, ℓ, j)) ∈ States : s′ ∈ after(s)}
schedule-child(S) ≝ {(j, P, σ, g′) | ∃i, g : (i, P, σ, g′) ∈ S ∧ g′ = g · (i, ℓ, j)}
init-child_ℓ(⟨S, G, A⟩) ≝ ⟨interfere_{A ∪ (G|post(ℓ))}(schedule-child(S)), Schedule, A ∪ (G|post(ℓ))⟩
combine_{⟨S,G,A⟩}(G′) ≝ ⟨interfere_{A ∪ G′}(S), G ∪ G′, A ∪ G′⟩
execute-thread_{f,S,A}(G) ≝ G′ where ⟨S′, G′, A′⟩ = f⟨S, G, A⟩
guarantee_f⟨S, G, A⟩ ≝ execute-thread↑ω_{f,S,A}(G)

Figure 7: Basic semantic functions

Furthermore, this transition came from the statement, and not from an earlier thread, hence from after(s0):
Par = {(s, s′) ∈ Tr_{ℓ stmt, ℓ′} | ∃s0 ∈ S : (s0, s) ∈ Schedule ∘ Reach ∧ s ∈ after(s0)}.
The threads created by j0 when it executes the statement ℓ stmt, ℓ′ may survive when this statement returns in s1, as shown in Fig. 6(d). Such a thread i (here, i is j4 or j5 or j6) can execute transitions that are not in Par. The creation of i results from a create statement executed between s0 and s1. Hence, such a transition (s, s′) is executed from a state in after(s0) ∖ after(s1). The path from s1 to s is comprised of transitions in (G|after(s0) ∩ Tr_{ℓ stmt, ℓ′}) ∪ A|∁after(s0) (similarly to Reach) and of transitions of j0 or j5 under the dotted line, i.e., transitions in G|after(s1). Sub collects these transitions.
3.3 Properties of the G-collecting Semantics
To prepare for our static analysis we provide a compositional analysis of the G-collecting semantics in Theorem 1 below. To this end, we introduce a set of helper functions, see Fig. 7. We define, for any extensive¹ function f, f↑ω(X) ≝ ⋃_{n∈N} f^n(X).
¹ A function f of domain D is extensive if and only if for every set X ⊆ D, X ⊆ f(X).
The function interfere_A(S) returns the states that are reachable from S by applying interferences in A. Notice that these interferences do not change the label of the current thread:

Lemma 13. Let s = (i, P, σ, g) and s′ = (i′, P′, σ′, g′). If (s, s′) ∈ (A|∁after(s) ∪ Schedule)⋆ then P(i) = P′(i), i.e., label(s) = P′(thread(s)). If furthermore thread(s) = thread(s′) then label(s) = label(s′).

Proof. There exists a sequence of states s0, …, sn such that s0 = s, sn = s′ and, for all k ∈ {0, …, n−1}, (sk, sk+1) ∈ A|∁after(s) ∪ Schedule. Let (ik, Pk, σk, gk) = sk. Let us prove by induction that Pk(i) = P(i). If (sk, sk+1) ∈ Schedule and Pk(i) = P(i) then Pk+1(i) = P(i). If (sk, sk+1) ∈ A|∁after(s) and Pk(i) = P(i), then sk ∉ after(s), hence ik ≠ i and Pk+1(i) = Pk(i) = P(i). □

The function post(ℓ) computes the set of states that may be reached after having created a thread at label ℓ; schedule-child applies a schedule transition to the last child of the current thread. The function init-child_ℓ computes a configuration for the last child created at ℓ, taking into account interferences with its parent using post(ℓ); notice that we need genealogies here to define post(ℓ) and then to obtain Theorem 1.
The function execute-thread computes a part of the guarantee (an under-approximation), given the semantics of a command represented as a function f from configuration to configuration. And guarantee iterates execute-thread to compute the whole guarantee.
During the execution of a statement ℓ stmt, ℓ′, some interference transition may be fired at any time. Nevertheless, the labels of the thread(s) executing the statement are still labels of the statement:

Lemma 14. If (s0, s) ∈ (Tr_{ℓ stmt, ℓ′} ∪ A|∁after(s0))⋆, label(s0) ∈ Labs(ℓ stmt, ℓ′) and s ∈ after(s0), then label(s) ∈ Labs(ℓ stmt, ℓ′). Furthermore, if label(s) = ℓ′ or label(s) = ℓ then thread(s0) = thread(s).

Proof. There exists a path s1, …, sn such that sn = s and, for all k ∈ {0, …, n−1}, (sk, sk+1) ∈ Tr_{ℓ stmt, ℓ′} ∪ A|∁after(s0). Let (i0, P0, σ0, g0) = s0 and, for k ≥ 1, let (ik, Pk, σk, g0 · gk) = sk.
Let us prove by induction on k that Pk(i0) ∈ Labs(ℓ stmt, ℓ′) and that, for all j ∈ desc_{gk}({i0}) ∖ {i0}, Pk(j) ∈ Labs_child(ℓ stmt, ℓ′). Assume that k satisfies the induction property; let us show that k+1 satisfies it.
In the case (sk, sk+1) ∈ A|∁after(s0), ik ∉ desc_{gk}({i0}), and then for all j ∈ desc_{gk}({i0}) = desc_{gk+1}({i0}), Pk(j) = Pk+1(j).
In the case (sk, sk+1) ∈ Tr_{ℓ stmt, ℓ′} and ik = i0, by Lemma 1, Pk+1(ik) ∈ Labs(ℓ stmt, ℓ′). Furthermore, if j ∈ desc_{gk}({i0}) then Pk(j) = Pk+1(j). If j ∈ desc_{gk+1}({i0}) ∖ desc_{gk}({i0}), then j ∈ Dom(Pk+1) ∖ Dom(Pk) and, by Lemma 3, Pk+1(j) ∈ Labs_child(ℓ stmt, ℓ′).
In the case (sk, sk+1) ∈ Tr_{ℓ stmt, ℓ′} and ik ≠ i0, we conclude similarly by Lemma 4.
If s ∈ after(s0), then in ∈ desc_{gn}({i0}) and therefore label(s) ∈ Labs(ℓ stmt, ℓ′). If label(s) = ℓ′ or label(s) = ℓ, then, because by Lemma 4 ℓ and ℓ′ are not in Labs_child(ℓ stmt, ℓ′), we have thread(s0) = thread(s). □

The following lemma summarizes the consequences of Lemmas 7 and 14 on Reach:

Lemma 15. Let [Reach, Ext, Self, Par, Sub] = ⦃ℓ stmt, ℓ′⦄⟨S, G, A⟩. If (s0, s) ∈ Reach then s ∈ after(s0), after(s) ⊆ after(s0) and label(s) ∈ Labs(ℓ stmt, ℓ′).

Proof. (s0, s) ∈ ((G|after(s0) ∩ Tr_{ℓ stmt, ℓ′}) ∪ A|∁after(s0))⋆, so by Lemma 7, s ∈ after(s0) and after(s) ⊆ after(s0). Furthermore, by Lemma 14, label(s) ∈ Labs(ℓ stmt, ℓ′). □

The following proposition shows that guarantee collects all transitions generated by a statement.

Proposition 1 (Soundness of guarantee). Let ⟨S, G, A⟩ be a concrete configuration, ℓ stmt, ℓ′ a statement and G∞ = guarantee_{⟦ℓ stmt, ℓ′⟧}⟨S, G, A⟩. Let s0 ∈ S and s ∈ after(s0) such that (s, s′) ∈ Tr_{ℓ stmt, ℓ′}. If (s0, s) ∈ ((Tr_{ℓ stmt, ℓ′})|after(s0) ∪ A|∁after(s0))⋆ then (s, s′) ∈ G∞.

Proof. Let Gm = execute-thread^m_{⟦ℓ stmt, ℓ′⟧, S, A}(G), let [Reachm, Extm, Selfm, Parm, Subm] = ⦃ℓ stmt, ℓ′⦄⟨S, Gm, A⟩, and let T = Tr_{ℓ stmt, ℓ′}.
Let s0, …, sn+1 be a path such that sn = s, sn+1 = s′ and, for all k, (sk, sk+1) ∈ T|after(s0) ∪ A|∁after(s0). Let m be an arbitrary integer and let k0 be the smallest k (if it exists) such that (sk, sk+1) ∈ T|after(s0) ∖ Gm. Then, by definition, (sk0, sk0+1) ∈ Selfm ∪ Parm ⊆ Gm+1 ⊆ G∞. □
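The fixpoint structure of guarantee in Fig. 7 (iterate execute-thread until the guarantee is stable) can be sketched as follows in OCaml. This is only an illustration under assumptions: the transfer function f, the equality test and the representation of transition sets are abstract parameters, and in the concrete semantics the iteration ↑ω need not terminate; it does in the abstract setting of Section 4 (e.g., with widening).

(* Sketch of guarantee_f <S,G,A> = execute-thread^w_{f,S,A}(G) from Fig. 7. *)
type ('s, 'g, 'a) config = { s : 's; g : 'g; a : 'a }

let execute_thread ~(f : ('s, 'g, 'a) config -> ('s, 'g, 'a) config)
                   ~(s : 's) ~(a : 'a) (g : 'g) : 'g =
  (f { s; g; a }).g                        (* keep only the new guarantee G' *)

(* Iterate an extensive step function until it stabilises. *)
let rec lfp ~(equal : 'g -> 'g -> bool) (step : 'g -> 'g) (g : 'g) : 'g =
  let g' = step g in
  if equal g g' then g else lfp ~equal step g'

let guarantee ~f ~equal (q : ('s, 'g, 'a) config) : 'g =
  lfp ~equal (execute_thread ~f ~s:q.s ~a:q.a) q.g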
3.4 Basic Statements
Basic statements have common properties, so we study them all at once. Proposition 2 explains how to overapproximate the semantics of a basic statement. It will be used in the abstract semantics.
An execution path of a basic statement can be decomposed into interferences, then one transition of the basic statement, and then some other interferences. The following lemma shows this; it will allow us to prove Proposition 2.

Lemma 16. Let ℓ1 basic, ℓ2 be a basic statement and [Reach, Ext, Self, Par, Sub] = ⦃ℓ1 basic, ℓ2⦄⟨S, G, A⟩. Let (s0, s) ∈ Reach. Then:
• either s ∈ interfere_A({s0}) and label(s) = ℓ1,
• or s ∈ interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A({s0})⟩) and label(s) = ℓ2.

Proof. Let us consider the case (s0, s) ∈ (A|∁after(s0) ∪ Schedule)⋆. By definition of Reach, thread(s0) = thread(s). Therefore s ∈ interfere_A({s0}). By Lemma 13, label(s0) = label(s), hence label(s) = ℓ1.
Let us consider the case (s0, s) ∉ (A|∁after(s0) ∪ Schedule)⋆. Because (s0, s) ∈ Reach, (s0, s) ∈ ((G|after(s0) ∩ Tr_{ℓ1 basic, ℓ2}) ∪ A|∁after(s0))⋆. So (s0, s) ∈ (A|∁after(s0) ∪ Schedule)⋆; [G|after(s0) ∩ Tr_{ℓ1 basic, ℓ2} ∖ Schedule]; [(G|after(s0) ∩ Tr_{ℓ1 basic, ℓ2}) ∪ A|∁after(s0)]⋆. Let s1, s2, …, sn be a sequence of states such that sn = s, (s0, s1) ∈ (A|∁after(s0) ∪ Schedule)⋆, (s1, s2) ∈ G|after(s0) ∩ Tr_{ℓ1 basic, ℓ2} ∖ Schedule and, for all k ∈ {2, …, n−1}, (sk, sk+1) ∈ (G|after(s0) ∩ Tr_{ℓ1 basic, ℓ2}) ∪ A|∁after(s0).
Notice that (s1, s2) ∈ G|after(s0), therefore s1 ∈ after(s0). By Lemma 11, thread(s0) = thread(s1). Therefore s1 ∈ interfere_A({s0}). By Lemma 5, label(s2) = ℓ2.
Let k0 be the smallest k > 2 (if it exists) such that (sk, sk+1) ∈ Tr_{ℓ1 basic, ℓ2} ∖ Schedule. Then (s2, sk0) ∈ (A|∁after(s0) ∪ Schedule)⋆ and, by Lemma 13, label(sk0) = label(s2) = ℓ2. According to Lemma 5, this is a contradiction. Therefore, for all k ∈ {2, …, n−1}, (sk, sk+1) ∈ Schedule ∪ A|∁after(s0).
By Lemma 1, thread(s1) = thread(s2), hence thread(s2) = thread(s). Therefore s ∈ interfere_A({s2}) ⊆ interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A({s0})⟩). □

Now, we introduce some claims on the semantics of basic statements. Claims 1 and 2 say that when a basic statement is executed, only one thread is executed. Notice that spawn creates a subthread, but does not execute it. Claim 3 characterizes the transitions done by the current thread. Claim 4 gives an overapproximation of S′, the set of states reached at the end of the execution of a basic statement.

Claim 1. Let ℓ1 basic, ℓ2 be a basic statement and [Reach, Ext, Self, Par, Sub] = ⦃ℓ1 basic, ℓ2⦄⟨S, G, A⟩. Then Par = ∅.

Proof. Let (s, s′) ∈ Par. Then (s, s′) ∈ Tr_{ℓ1 basic, ℓ2} and there exist s0 ∈ S and s1 such that (s0, s1) ∈ Reach, (s1, s) ∈ Schedule and s ∈ after(s0). Hence, by Lemma 11, thread(s) = thread(s0). Given that (s0, s1) ∈ Reach, thread(s0) = thread(s1). But, because (s1, s) ∈ Schedule, thread(s) ≠ thread(s1). This is a contradiction. Hence Par = ∅. □

Claim 2. Let ℓ1 basic, ℓ2 be a basic statement and [Reach, Ext, Self, Par, Sub] = ⦃ℓ1 basic, ℓ2⦄⟨S, G, A⟩. Then Sub = ∅.

Proof. Let (s, s′) ∈ Sub. There exist s0 ∈ S and s1 such that (s0, s1) ∈ Reach, (s1, s) ∈ Ext(s0, s1) and s ∈ after(s0) ∖ after(s1). Let (i0, P0, σ0, g0) = s0 and (i1, P1, σ1, g0 · g1) = s1. Because (s0, s1) ∈ Reach, thread(s0) = thread(s1). Let j ∈ desc_{g1}({i0}) and let s1′ = (j, P1, σ1, g0 · g1). Then s1′ ∈ after(s0) and (s0, s1′) ∈ (Tr_{ℓ1 basic, ℓ2} ∪ A|∁after(s0))⋆; Schedule⋆. By Lemma 11, j = thread(s1′) = thread(s0) = i0. Hence desc_{g1}({i0}) = {i0}.
Let (i, P, σ, g0 · g1 · g) = s. By definition of desc and a straightforward induction on g, desc_{g1·g}({i0}) = desc_g({i0}). Because s ∈ after(s0), i ∈ desc_{g1·g}({i0}) = desc_g({i0}) = desc_g({i1}), hence s ∈ after(s1). This contradicts s ∈ after(s0) ∖ after(s1). Hence Sub = ∅. □

Claim 3. Let ℓ1 basic, ℓ2 be a basic statement and [Reach, Ext, Self, Par, Sub] = ⦃ℓ1 basic, ℓ2⦄⟨S, G, A⟩. Then Self ⊆ {(s, s′) ∈ Tr_{ℓ1 basic, ℓ2} | s ∈ interfere_A(S)} ∪ Schedule.

Proof. Let (s, s′) ∈ Self ∖ Schedule. Then (s, s′) ∈ Tr_{ℓ1 basic, ℓ2} and s ∈ Reach⟨S⟩, so there exists s0 ∈ S such that (s0, s) ∈ Reach. Because (s, s′) ∈ Tr_{ℓ1 basic, ℓ2} ∖ Schedule, by Lemma 5, label(s) = ℓ1 ≠ ℓ2. By Lemma 16, s ∈ interfere_A({s0}) ⊆ interfere_A(S). Hence (s, s′) ∈ {(s, s′) ∈ Tr_{ℓ1 basic, ℓ2} | s ∈ interfere_A(S)}. □

Claim 4. Let ℓ1 basic, ℓ2 be a basic statement, ⟨S′, G′, A′⟩ = ⟦ℓ1 basic, ℓ2⟧⟨S, G, A⟩ and [Reach, Ext, Self, Par, Sub] = ⦃ℓ1 basic, ℓ2⦄⟨S, G, A⟩. Then S′ ⊆ interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A(S)⟩).

Proof. Let s ∈ S′. Then label(s) = ℓ2 and there exists s0 ∈ S such that (s0, s) ∈ Reach. Because label(s) = ℓ2 ≠ ℓ1, according to Lemma 16, s ∈ interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A({s0})⟩) ⊆ interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A(S)⟩). □

Proposition 2 (Basic statements). Let ℓ1 basic, ℓ2 be a basic statement. Then:
⟦ℓ1 basic, ℓ2⟧⟨S, G, A⟩ ≤ ⟨S″, G ∪ Gnew, A⟩
where S″ = interfere_A(Tr_{ℓ1 basic, ℓ2} ∖ Schedule⟨interfere_A(S)⟩) and Gnew = {(s, s′) ∈ Tr_{ℓ1 basic, ℓ2} | s ∈ interfere_A(S)}.

Proof. This proposition is a straightforward consequence of Claims 1, 2, 3 and 4. □
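Proposition 2 suggests the shape of the abstract transfer function for a basic statement: apply interferences, then the effect of the one basic transition while recording it in the guarantee, then interferences again. The OCaml sketch below shows that shape over an unspecified abstract domain; the module interface and names are assumptions of this note, not the abstract semantics of Section 4.

(* Shape of the abstract transfer suggested by Proposition 2. *)
module type DOMAIN = sig
  type states                      (* abstraction of sets of states      *)
  type transitions                 (* abstraction of sets of transitions *)
  val interfere : transitions -> states -> states    (* interfere_A(S)   *)
  val apply_basic : states -> states * transitions
        (* abstract effect of Tr_basic without Schedule: post-states and
           the transitions that were fired *)
  val union_tr : transitions -> transitions -> transitions
end

module Transfer (D : DOMAIN) = struct
  (* <S, G, A>  |->  <S'', G u Gnew, A>   (Proposition 2) *)
  let basic (s, g, a) =
    let s_in = D.interfere a s in                 (* interfere_A(S)       *)
    let s_post, g_new = D.apply_basic s_in in     (* one basic transition *)
    let s_out = D.interfere a s_post in           (* interferences again  *)
    (s_out, D.union_tr g g_new, a)
end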
3.5 Overapproximation of the G-collecting Semantics
The next theorem shows how the G-collecting semantics can be over-approximated by a denotational semantics, and is the key point in defining the abstract semantics.

Theorem 1.
1. ⟦ℓ1 cmd1 ; ℓ2 cmd2, ℓ3⟧(Q) ≤ ⟦ℓ2 cmd2, ℓ3⟧ ∘ ⟦ℓ1 cmd1, ℓ2⟧(Q)
2. ⟦ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ4 cmd2}, ℓ3⟧(Q) ≤ ⟦ℓ2 cmd1, ℓ3⟧ ∘ ⟦ℓ1 guard(cond), ℓ2⟧(Q) ⊔ ⟦ℓ4 cmd2, ℓ3⟧ ∘ ⟦ℓ1 guard(¬cond), ℓ4⟧(Q)
3. ⟦ℓ1 while(cond){ℓ2 cmd}, ℓ3⟧(Q) ≤ ⟦ℓ1 guard(¬cond), ℓ3⟧ ∘ loop↑ω(Q), with loop(Q′) = ⟦ℓ2 cmd, ℓ1⟧ ∘ ⟦ℓ1 guard(cond), ℓ2⟧(Q′) ⊔ Q′
4. ⟦ℓ1 create(ℓ2 cmd), ℓ3⟧(Q) ≤ combine_{Q′} ∘ guarantee_{⟦ℓ2 cmd, ℓ∞⟧} ∘ init-child_{ℓ2}(Q′), with Q′ = ⟦ℓ1 spawn(ℓ2), ℓ3⟧(Q)

While points 1 and 3 are as expected, the overapproximation of the semantics of ℓ1 create(ℓ2 cmd), ℓ3 (point 4) computes the interferences which will arise from executing the child and its descendants with guarantee, and then combines this result with the configuration of the current thread. This theorem will be proved later.
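To show how Theorem 1 turns into a recursive evaluator, here is a schematic OCaml rendering of the four points against an assumed abstract-configuration interface. Everything below is hypothetical: the interface, the names, and the fixpoint iteration (which terminates only under the abstract-domain assumptions of Section 4, e.g., with widening). In particular, guarantee is approximated here by iterating the child's transfer function, whereas Fig. 7 keeps only the guarantee component of each iterate.

(* Sketch of the denotational overapproximation of Theorem 1. *)
module type CONFIG = sig
  type t
  val join : t -> t -> t
  val leq  : t -> t -> bool
  val assign : label:int -> lv:string -> e:string -> ret:int -> t -> t
  val guard  : label:int -> cond:string -> neg:bool -> ret:int -> t -> t
  val spawn  : label:int -> child:int -> ret:int -> t -> t
  val init_child : child_label:int -> t -> t
  val combine : t -> guarantee:t -> t
end

module Eval (Q : CONFIG) = struct
  type cmd =
    | Assign of int * string * string
    | Seq of cmd * cmd
    | If of int * string * cmd * cmd
    | While of int * string * cmd
    | Create of int * cmd

  let rec first_label = function
    | Assign (l, _, _) | If (l, _, _, _) | While (l, _, _) | Create (l, _) -> l
    | Seq (c1, _) -> first_label c1

  (* Post-fixpoint of an extensive function, as in loop^w and guarantee. *)
  let rec lfp f q = let q' = f q in if Q.leq q' q then q else lfp f q'

  let rec eval (c : cmd) ~(ret : int) (q : Q.t) : Q.t =
    match c with
    | Assign (l, x, e) -> Q.assign ~label:l ~lv:x ~e ~ret q
    | Seq (c1, c2) ->                                          (* point 1 *)
        eval c2 ~ret (eval c1 ~ret:(first_label c2) q)
    | If (l, cond, c1, c2) ->                                  (* point 2 *)
        let q1 =
          eval c1 ~ret
            (Q.guard ~label:l ~cond ~neg:false ~ret:(first_label c1) q) in
        let q2 =
          eval c2 ~ret
            (Q.guard ~label:l ~cond ~neg:true ~ret:(first_label c2) q) in
        Q.join q1 q2
    | While (l, cond, body) ->                                 (* point 3 *)
        let loop q' =
          Q.join q'
            (eval body ~ret:l
               (Q.guard ~label:l ~cond ~neg:false ~ret:(first_label body) q')) in
        Q.guard ~label:l ~cond ~neg:true ~ret (lfp loop q)
    | Create (l, child) ->                                     (* point 4 *)
        let lc = first_label child in
        let q' = Q.spawn ~label:l ~child:lc ~ret q in
        let g = lfp (eval child ~ret:(-1)) (Q.init_child ~child_label:lc q') in
        Q.combine q' ~guarantee:g
end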
The following proposition considers a statement ℓ stmt, ℓ′ and a set of transitions T. The only constraint on T is on the use of the labels of ℓ stmt, ℓ′. The proposition considers an execution of the statement from a state s0 to a state s1 and, afterwards, an execution s2, …, sn of other commands. The labels of ℓ stmt, ℓ′ may only be used:
• for interferences,
• or by the statement,
• or after having applied the statement, i.e., after s1.
This proposition ensures that any transition executed by a thread created during the execution of ℓ stmt, ℓ′ (i.e., between s0 and s1) is a transition generated by the statement ℓ stmt, ℓ′.

Proposition 3. Let ℓ stmt, ℓ′ be a statement and [Reach, Ext, Self, Par, Sub] = ⦃ℓ stmt, ℓ′⦄⟨S, G, A⟩. Let (s0, s1) ∈ Reach and let T be a set of transitions such that for all (s, s′) ∈ T, if label(s) ∈ Labs(ℓ stmt, ℓ′) then (s, s′) ∈ Tr_{ℓ stmt, ℓ′} or s ∈ after(s1) ∪ ∁after(s0).
Let s2, …, sn be a sequence of states such that for all k ∈ {1, …, n−1}, (sk, sk+1) ∈ T. Then, if sk ∈ after(s0), either sk ∈ after(s1) or (sk, sk+1) ∈ Tr_{ℓ stmt, ℓ′}.

Proof. For all k ≥ 1, let (ik, Pk, σk, g0 · gk) = sk.
Let us show by induction on k ≥ 1 that, for all j, if j ∈ desc_{g0·gk}({i1}) ∖ desc_{gk}({i1}) then Pk(j) ∈ Labs(ℓ stmt, ℓ′).
Let j0 ∈ desc_{g0·gk}({i1}) ∖ desc_{g0}({i1}) and s1′ = (j0, P1, σ1, g0 · g1). Then s1′ ∈ after(s0). Given that (s0, s1′) ∈ Reach; Schedule⋆, by Lemma 15, P1(j0) = label(s1′) ∈ Labs(ℓ stmt, ℓ′).
By induction hypothesis, for all j, if j ∈ desc_{g0·gk−1}({i1}) ∖ desc_{gk−1}({i1}) then Pk−1(j) ∈ Labs(ℓ stmt, ℓ′). Let j ∈ desc_{g0·gk}({i1}) ∖ desc_{gk}({i1}).
If thread(sk−1) = j, then sk−1 ∈ after(s0) ∖ after(s1). Furthermore, by the induction hypothesis, Pk−1(j) = label(sk−1) ∈ Labs(ℓ stmt, ℓ′). By definition of T, (sk−1, sk) ∈ Tr_{ℓ stmt, ℓ′}. By Lemma 1, Pk(j) = label(sk) ∈ Labs(ℓ stmt, ℓ′).
If j ∈ Dom(Pk) ∖ Dom(Pk−1), then thread(sk−1) ∈ desc_{g0·gk−1}({i1}) ∖ desc_{gk−1}({i1}). Hence, as above, (sk−1, sk) ∈ Tr_{ℓ stmt, ℓ′}. Hence, according to Lemma 3, Pk(j) = label(sk) ∈ Labs_child(ℓ stmt, ℓ′).
Otherwise, by the definition of a transition, Pk−1(j) = Pk(j).
Now let k be such that sk ∈ after(s0). Either sk ∈ after(s1), or sk ∉ after(s1). In the latter case ik ∈ desc_{g0·gk−1}({i1}) ∖ desc_{gk−1}({i1}), and therefore label(sk) ∈ Labs(ℓ stmt, ℓ′). Hence, by definition of T, (sk, sk+1) ∈ Tr_{ℓ stmt, ℓ′}. □
3.5.1 Proof of Property 1 of Theorem 1

Lemma 17. Tr_{ℓ1 cmd1 ; ℓ2 cmd2, ℓ3} = Tr_{ℓ1 cmd1, ℓ2} ∪ Tr_{ℓ2 cmd2, ℓ3}.

In this section, we consider an initial configuration Q0 = ⟨S0, G0, A0⟩ and a sequence ℓ1 cmd1 ; ℓ2 cmd2, ℓ3. We write Tr1 = Tr_{ℓ1 cmd1, ℓ2}, Tr2 = Tr_{ℓ2 cmd2, ℓ3} and Tr = Tr_{ℓ1 cmd1 ; ℓ2 cmd2, ℓ3}. Define:
Q′ = ⟨S′, G′, A′⟩ = ⟦ℓ1 cmd1 ; ℓ2 cmd2, ℓ3⟧(Q0)
K = [Reach′, Ext′, Self′, Par′, Sub′] = ⦃ℓ1 cmd1 ; ℓ2 cmd2, ℓ3⦄(Q0)
Q1 = ⟨S1, G1, A1⟩ = ⟦ℓ1 cmd1, ℓ2⟧(Q0)
K1 = [Reach1, Ext1, Self1, Par1, Sub1] = ⦃ℓ1 cmd1, ℓ2⦄(Q0)
Q2 = ⟨S2, G2, A2⟩ = ⟦ℓ2 cmd2, ℓ3⟧(Q1)
K2 = [Reach2, Ext2, Self2, Par2, Sub2] = ⦃ℓ2 cmd2, ℓ3⦄(Q1)

Lemma 18. If (s, s′) ∈ Tr and label(s) ∈ Labs(ℓ1 cmd1, ℓ2) ∖ {ℓ2} then (s, s′) ∈ Tr1. If (s, s′) ∈ Tr and label(s) ∈ Labs(ℓ2 cmd2, ℓ3) then (s, s′) ∈ Tr2.

Proof. Let us consider the case label(s) ∈ Labs(ℓ1 cmd1, ℓ2) ∖ {ℓ2}. Because the labels of ℓ1 cmd1 ; ℓ2 cmd2, ℓ3 are pairwise distinct, label(s) ∉ Labs(ℓ2 cmd2, ℓ3). By Lemma 2, (s, s′) ∉ Tr2. Hence, by Lemma 17, (s, s′) ∈ Tr1. The case label(s) ∈ Labs(ℓ2 cmd2, ℓ3) is similar. □

Lemma 19. Using the above notations, for every (s0, s) ∈ Reach′ such that s0 ∈ S0,
• either (s0, s) ∈ Reach1 and label(s) ≠ ℓ2,
• or there exists s1 ∈ S1 such that (s0, s1) ∈ Reach1, (s1, s) ∈ Reach2 and (s1, s) ∈ Ext1(s0, s1).

Proof. Let (s0, s) ∈ Reach′. Either (s0, s) ∈ Reach1 or (s0, s) ∉ Reach1.
In the first case, either label(s) ≠ ℓ2, or label(s) = ℓ2. If label(s) = ℓ2, then, by definition, s ∈ S1; moreover (s, s) ∈ Reach2 and (s, s) ∈ Ext1(s0, s), so we just have to choose s1 = s.
In the second case, (s0, s) ∉ Reach1. Let T0 = (G0|after(s0) ∩ Tr1) ∪ A0|∁after(s0). Since (s0, s) ∈ Reach′, thread(s0) = thread(s) and label(s0) = ℓ1. Furthermore (s0, s) ∉ Reach1, so (s0, s) ∉ T0⋆. Since (s0, s) ∈ Reach′ ⊆ [(G0|after(s0) ∩ Tr) ∪ A0|∁after(s0)]⋆, Tr = Tr1 ∪ Tr2 (using Lemma 17) and Tr1 ⊇ Schedule, we have (s0, s) ∈ [T0 ∪ (G0|after(s0) ∩ Tr2 ∖ Schedule)]⋆.
Recall that (s0, s) ∉ T0⋆, hence (s0, s) ∈ T0⋆; (G0|after(s0) ∩ Tr2 ∖ Schedule); [T0 ∪ (G0|after(s0) ∩ Tr2)]⋆. Therefore, there exist s1, s2 such that:
• (s0, s1) ∈ T0⋆,
• (s1, s2) ∈ G0|after(s0) ∩ Tr2 ∖ Schedule,
• (s2, s) ∈ [T0 ∪ (G0|after(s0) ∩ Tr2)]⋆.
Since s0 ∈ S0, label(s0) = ℓ1 ∈ Labs(ℓ1 cmd1, ℓ2). Since (s1, s2) ∈ G0|after(s0), s1 ∈ after(s0). Furthermore (s0, s1) ∈ T0⋆ ⊆ (Tr1 ∪ A0|∁after(s0))⋆, so, according to Lemma 14, label(s1) ∈ Labs(ℓ1 cmd1, ℓ2).
Given that (s1, s2) ∈ Tr2 ∖ Schedule, according to Lemma 2, label(s1) ∈ Labs(ℓ2 cmd2, ℓ3). Hence label(s1) ∈ Labs(ℓ2 cmd2, ℓ3) ∩ Labs(ℓ1 cmd1, ℓ2). Because the labels of ℓ1 cmd1 ; ℓ2 cmd2, ℓ3 are pairwise distinct, label(s1) = ℓ2. Using Lemma 14, we conclude that thread(s0) = thread(s1).
Given that thread(s0) = thread(s1), label(s0) = ℓ1 and (s0, s1) ∈ T0⋆, we conclude that (s0, s1) ∈ Reach1. Furthermore label(s1) = ℓ2 and s0 ∈ S0, therefore s1 ∈ S1.
(s1, s) ∈ [T0 ∪ (G0|after(s0) ∩ Tr2)]⋆, therefore, by Proposition 3, (s1, s) ∈ [T0 ∪ (G0|after(s1) ∩ Tr2)]⋆ ⊆ Ext1(s0, s1).
Recall that (s2, s) ∈ [T0 ∪ (G0|after(s0) ∩ Tr2)]⋆, so there exist s3, …, sn such that for all k ∈ {3, …, n−1}, (sk, sk+1) ∈ T0 ∪ (G0|after(s0) ∩ Tr2). By definition, if (sk, sk+1) ∈ G0|after(s0) ∩ Tr1, then (sk, sk+1) ∈ Sub1.
We show by induction on k that if (sk, sk+1) ∈ G0|after(s0) ∩ Tr1 ∖ Schedule, then sk ∉ after(s1). By the induction hypothesis, (s2, sk) ∈ [(G0|after(s0) ∩ Tr1)|∁after(s1) ∪ A0|∁after(s0) ∪ (G0|after(s0) ∩ Tr2)]⋆. Therefore, by Lemma 14, if sk ∈ after(s2), then label(sk) ∈ Labs(ℓ2 cmd2, ℓ3). Because labels are pairwise distinct, if sk ∈ after(s2), then label(sk) ∉ Labs(ℓ1 cmd1, ℓ2) ∖ {ℓ2}. Therefore, by Lemma 2, if sk ∈ after(s2), then (sk, sk+1) ∉ Tr1.
Hence, (s1, s) ∈ [Sub1|∁after(s1) ∪ A0|∁after(s0) ∪ (G0|after(s0) ∩ Tr2)]⋆. By Lemma 7, after(s1) ⊆ after(s0), hence (s1, s) ∈ [(Sub1 ∪ A0)|∁after(s1) ∪ (G1|after(s1) ∩ Tr2)]⋆ ⊆ [A1|∁after(s1) ∪ (G1|after(s1) ∩ Tr2)]⋆. Therefore (s1, s) ∈ Reach2. □
Lemma 20. Using the above notations, for every (s0 , s) ∈ Reach su h that
s0 ∈ S0 and s′ ∈ S′ , there exists s1 ∈ S1 su h that (s0 , s1 ) ∈ Reach1 , (s1 , s) ∈
Reach2 and (s1 , s) ∈ Ext1 (s0 , s1 ).
Proof.
label(s) ∈ Labs(ℓ1 cmd1 , ℓ2 ).
′
not possible be ause s ∈ S .
In this
Therefore, a ording to Lemma 19 there exists s1 ∈ S1 su h that (s0 , s1 ) ∈
Reach1 , (s1 , s) ∈ Reach2 and (s1 , s) ∈ Ext1 (s0 , s1 )
(s0 , s) ∈ Reach1 , then, a
ase label (s) 6= ℓ3 . This is
If
ording to Lemma 15,
Lemma 21. Using the notations of this se tion, let s0 ∈ S0 , s1 ∈ S1 , s2 ∈ S2 , s
su h that (s0 , s1 ) ∈ Reach1 , (s1 , s2 ) ∈ Reach2 ∩ Ext1 (s0 , s1 ) and (s2 , s) ∈
Ext(s0 , s2 ). Therefore (s1 , s) ∈ Ext1 (s0 , s1 ).
Proof.
Noti e that, by Lemma 7,
after(s2 ) ⊆ after (s1 ) ⊆ after (s0 ).
Re all that:
⋆
Ext(s0 , s2 ) = (G0|after(s0 ) ∩ Tr ) ∪ A0 |after(s0 ) ∪ G0|after(s2 )
⋆
Ext1 (s0 , s1 ) = (G0|after (s0 ) ∩ Tr 1 ) ∪ A0 |after (s0 ) ∪ G0|after(s1 )
By Lemma 17, Ext(s0 , s2 ) =
(G0 |after(s0 ) ∩ Tr 1 ) ∪ (G0 |after(s0 ) ∩ Tr 2 ) ∪
⋆
A0 |after (s0 ) ∪ G0|after (s2 ) . Let T = (G0|after(s0 ) ∩ Tr 2 ) ∪ G0|after(s2 ) . Therefore,
be ause after (s2 ) ⊆ after (s0 ), Ext(s0 , s2 ) = (G0|after(s0 ) ∩ Tr 1 ) ∪ A0 |after (s ) ∪
0
⋆
T|after (s0 ) .
⋆
By Proposition 3, (s2 , s) ∈ (G0|after (s0 ) ∩ Tr 1 ) ∪ A0 |after(s ) ∪ T|after(s1 ) .
0
Be ause after (s2 ) ⊆ after(s1 ) ⊆ after (s0 ), T|after(s1 ) = (G0|after (s1 ) ∩ Tr 2 ) ∪
G0|after (s2 ) . Hen e (s2 , s) ∈ Ext1 (s0 , s1 ). Hen e (s1 , s) ∈ Ext1 (s0 , s1 ); Ext1 (s0 , s1 ) =
Ext1 (s0 , s1 ).
Lemma 22. Using the notations of this section, let s0 ∈ S0, s1 ∈ S1, s2 ∈ S2 and s such that (s0, s1) ∈ Reach1, (s1, s2) ∈ Reach2 ∩ Ext1(s0, s1) and (s2, s) ∈ Ext(s0, s2). Then (s2, s) ∈ Ext2(s1, s2).

Proof. Notice that, by Lemma 7, after(s2) ⊆ after(s1) ⊆ after(s0). Recall that:
Ext(s0, s2) = [(G0|after(s0) ∩ Tr) ∪ A0|after(s0) ∪ G0|after(s2)]⋆
Ext2(s1, s2) = [(G1|after(s1) ∩ Tr2) ∪ A1|after(s1) ∪ G1|after(s2)]⋆
Since (s2, s) ∈ Ext(s0, s2), A0 ⊆ A1, G0 ⊆ A1, and after(s1) ⊆ after(s0), there exist s3, . . . , sn such that sn = s and for all k ∈ {3, . . . , n − 1}, (sk, sk+1) ∈ (G1|after(s0) ∩ Tr) ∪ A1|after(s1) ∪ G1|after(s2).

Due to Lemma 17, for all k ∈ {3, . . . , n − 1}, (sk, sk+1) ∈ (G1|after(s0) ∩ Tr1) ∪ (G1|after(s0) ∩ Tr2) ∪ A1|after(s1) ∪ G1|after(s2).

Because (s1, s2) ∈ Reach2, (s1, s2) ∈ [(G1|after(s1) ∩ Tr2) ∪ A1|after(s1)]⋆ ⊆ [(G1|after(s0) ∩ Tr2) ∪ (G1|after(s0) ∩ Tr2) ∪ A1|after(s1) ∪ G1|after(s2)]⋆. Hence, by Proposition 3 applied to the statement ℓ1 cmd1, ℓ2, for all k ∈ {3, . . . , n − 1}, (sk, sk+1) ∈ (G1|after(s0) ∩ Tr1) ∪ (G1|after(s1) ∩ Tr2) ∪ A1|after(s1) ∪ G1|after(s2).

Given that (G1|after(s0) ∩ Tr1) = (G1|after(s0)∖after(s1) ∩ Tr1) ∪ (G1|after(s1) ∩ Tr1) and G1|after(s2) ∩ Tr1 ⊆ G1|after(s2), by Proposition 3 applied to the statement ℓ2 cmd2, ℓ3, we conclude that for all k ∈ {3, . . . , n − 1}, (sk, sk+1) ∈ (G1|after(s0)∖after(s1) ∩ Tr1) ∪ (G1|after(s1) ∩ Tr2) ∪ A1|after(s1) ∪ G1|after(s2). Let k0 such that (sk0, sk0+1) ∈ (G1|after(s0)∖after(s1) ∩ Tr1) ∖ G1|after(s2). By Lemma 21, (s1, sk0) ∈ Ext1(s0, s1). Therefore (sk0, sk0+1) ∈ Sub1.

Hence (s2, s) ∈ [Sub1|after(s0)∖after(s1) ∪ (G1|after(s1) ∩ Tr2) ∪ A1|after(s0) ∪ G1|after(s2)]⋆. Because Sub1|after(s0)∖after(s1) ⊆ A1|after(s1), we conclude that (s2, s) ∈ Ext2(s1, s2).
To prove Property 1 of Theorem 1, we have to prove that Q2 ⩾ Q′. We claim that (a) S′ ⊆ S2, (b) Self′ ⊆ Self1 ∪ Self2, (c) Par′ ⊆ Par1 ∪ Par2 ∪ Sub1, (d) Sub′ ⊆ Sub1 ∪ Sub2. Using these claims and the definition of the semantics, we conclude that Q2 ⩾ Q′.

Now, we prove these claims:
Claim 5. Using the notations of this section, S′ ⊆ S2.

Proof. Let s ∈ S′, so there exists s0 ∈ S such that (s0, s) ∈ Reach′ and label(s) = ℓ3. According to Lemma 20 there exists s1 ∈ S1 such that (s1, s) ∈ Reach2. Therefore s ∈ S2.
Claim 6. Using the notations of this section, Self′ ⊆ Self1 ∪ Self2.

Proof. Let (s, s′) ∈ Self′. So (s, s′) ∈ Tr, and there exists s0 ∈ S such that (s0, s) ∈ Reach′. According to Lemma 19 either (s0, s) ∈ Reach1 and label(s) ≠ ℓ2, or there exists s1 ∈ S1 such that (s0, s1) ∈ Reach1 and (s1, s) ∈ Reach2.

In the first case, according to Lemma 15, label(s) ∈ Labs(ℓ1 cmd1, ℓ2). Since label(s) ≠ ℓ2 and by Lemma 18, (s, s′) ∈ Tr1. Hence, by definition, (s, s′) ∈ Self1.

In the second case, by Lemma 14, label(s) ∈ Labs(ℓ2 cmd2, ℓ3). Since (s, s′) ∈ Tr, by Lemma 18 (s, s′) ∈ Tr2. Given that s ∈ Reach2⟨S1⟩ and (s, s′) ∈ Tr2, we conclude that (s, s′) ∈ Self2.
Claim 7. Using the notations of this section, Par′ ⊆ Par1 ∪ Par2 ∪ Sub1.

Proof. Let (s, s′) ∈ Par′. Therefore, (s, s′) ∈ Tr and there exist s0 ∈ S0 and s2 such that (s0, s2) ∈ Reach′, (s2, s) ∈ Schedule and s ∈ after(s0). According to Lemma 19 there are two cases:

First case: (s0, s2) ∈ Reach1 and label(s2) ≠ ℓ2. Then, using the fact that Schedule ⊆ Tr1, (s0, s) ∈ (Tr1 ∪ A0|after(s0))⋆. Because s ∈ after(s0), by Lemma 14, label(s) ∈ Labs(ℓ1 cmd1, ℓ2) ∖ {ℓ2}. Hence, according to Lemma 18, (s, s′) ∈ Tr1. We conclude that (s, s′) ∈ Par1.

Second case: There exists s1 ∈ S1 such that (s0, s1) ∈ Reach1, (s1, s2) ∈ Reach2 and (s1, s2) ∈ Ext1(s0, s1). Hence (s1, s) ∈ Ext1(s0, s1); Schedule = Ext1(s0, s1).

If s ∈ after(s1), then, because (s1, s) ∈ Reach2; Schedule, by Lemma 14, label(s) ∈ Labs(ℓ2 cmd2, ℓ3). So, in this case, by Lemma 18, (s, s′) ∈ Tr2 and then (s, s′) ∈ Par2.

Let us consider the case s ∉ after(s1). Given that (s0, s1) ∈ Reach, (s1, s) ∈ Ext1(s1, s2), so by Proposition 3, (s, s′) ∈ Tr1. Hence, (s, s′) ∈ Sub1.
Claim 8. Using the notations of this section, Sub′ ⊆ Sub1 ∪ Sub2.

Proof. Let (s, s′) ∈ Sub′. Then, there exist s0 and s2 such that (s0, s2) ∈ Reach and (s2, s) ∈ Ext(s0, s2). According to Lemma 20, there exists s1 ∈ S1 such that (s0, s1) ∈ Reach1, (s1, s2) ∈ Reach2 and (s1, s2) ∈ Ext1(s0, s1). By Lemma 21 and Lemma 22, (s1, s) ∈ Ext1(s0, s1) and (s2, s) ∈ Ext2(s1, s2).

Let us consider the case s ∉ after(s1). Because s ∈ after(s0), then s ∈ after(s0) ∖ after(s1). Furthermore, given that (s0, s1) ∈ Reach1 and (s1, s) ∈ Reach2, by Proposition 3, (s, s′) ∈ Tr1. We conclude that (s, s′) ∈ Sub1.

Let us consider the case s ∈ after(s1). Because s ∈ after(s0) ∖ after(s2), s ∈ after(s1) ∖ after(s2). By Lemma 14, label(s) ∈ Labs(ℓ2 cmd2, ℓ3). Hence, by Lemma 18, (s, s′) ∈ Tr2 and therefore, (s, s′) ∈ Sub2.
3.5.2 Proof of Property 2 of Theorem 1

In this section, we consider a command ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4 and an initial configuration Q0 = ⟨S0, G0, A0⟩.
Let ⟨S′, G′, A′⟩ = ⟦ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4⟧⟨S, G, A⟩.
Let ⟨S+, G+, A+⟩ = ⟦ℓ1 guard(cond), ℓ2⟧⟨S, G, A⟩.
Let ⟨S1, G1, A1⟩ = ⟦ℓ2 cmd1, ℓ4⟧⟨S+, G+, A+⟩.
Let ⟨S¬, G¬, A¬⟩ = ⟦ℓ1 guard(¬cond), ℓ3⟧⟨S, G, A⟩.
Let ⟨S2, G2, A2⟩ = ⟦ℓ3 cmd2, ℓ4⟧⟨S¬, G¬, A¬⟩.
Let Tr = Tr_{ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4}.

Lemma 23. Tr_{ℓ1 if(cond)then{ℓ2 cmd1}else{ℓ3 cmd2}, ℓ4} = Tr_{ℓ1 guard(cond), ℓ2} ∪ Tr_{ℓ2 cmd1, ℓ4} ∪ Tr_{ℓ1 guard(¬cond), ℓ3} ∪ Tr_{ℓ3 cmd2, ℓ4}.
Lemma 24. If (s0, s) ∈ Reach and s0 ∈ S0, then one of the three following properties holds:
1. s ∈ interfere_{A0}({s0}),
2. or there exists s1 ∈ S+ such that (s1, s) ∈ Reach1 ∩ Ext+(s0, s1),
3. or there exists s1 ∈ S¬ such that (s1, s) ∈ Reach2 ∩ Ext¬(s0, s1).

Proof. Let us consider the case s ∉ interfere_{A0}({s0}). Because (s0, s) ∈ Reach, (s0, s) ∈ [(G0|after(s0) ∩ Tr) ∪ A0|after(s0)]⋆.

Therefore, there exist s′0 and s1 such that (s0, s′0) ∈ (A0|after(s0) ∪ Schedule)⋆, (s′0, s1) ∈ G0|after(s0) ∩ Tr and (s1, s) ∈ [(G0|after(s0) ∩ Tr) ∪ A0|after(s0)]⋆. Because (s′0, s1) ∈ G0|after(s0) ∩ Tr, s′0 ∈ after(s0). By Lemma 11, thread(s0) = thread(s′0). By Lemma 13, label(s0) = label(s′0) = ℓ1. Therefore, due to Lemmas 1 and 23, (s′0, s1) ∈ Tr_{ℓ1 guard(cond), ℓ2} ∪ Tr_{ℓ1 guard(¬cond), ℓ3}. Either (s′0, s1) ∈ Tr_{ℓ1 guard(cond), ℓ2} or (s′0, s1) ∈ Tr_{ℓ1 guard(¬cond), ℓ3}.

In the first case, by Lemma 1, thread(s0) = thread(s1) and label(s1) = ℓ2. Therefore, (s0, s1) ∈ Reach+ and s1 ∈ S+. There exists a sequence s2, . . . , sn such that sn = s and ∀k ∈ {1, . . . , n − 1}, (sk, sk+1) ∈ (G0|after(s0) ∩ Tr) ∪ A0|after(s0). Let us prove by induction on k that ∀k ∈ {1, . . . , n}, (sk, sk+1) ∈ (G0|after(s1) ∩ Tr_{ℓ2 cmd, ℓ4}) ∪ A0|after(s0). Let us consider the case (sk, sk+1) ∈ G0|after(s0) ∩ Tr. By induction hypothesis (s1, sk) ∈ [(G0|after(s1) ∩ Tr_{ℓ2 cmd, ℓ4}) ∪ A0|after(s0)]⋆. Hence, by Proposition 3, either (sk, sk+1) ∈ Tr_{ℓ1 guard(cond), ℓ2} or sk ∈ after(s1). If (sk, sk+1) ∈ Tr_{ℓ1 guard(cond), ℓ2} and sk ∈ after(s1) then (sk, sk+1) ∈ Sub+, which is contradictory with Claim 2. Therefore sk ∈ after(s1). By Lemma 14, label(sk) ∈ Labs(ℓ2 cmd1, ℓ4). Hence, by Lemmas 1 and 23, (sk, sk+1) ∈ Tr_{ℓ2 cmd, ℓ4}.

We conclude that (s1, s) ∈ [(G0|after(s1) ∩ Tr_{ℓ2 cmd, ℓ4}) ∪ A0|after(s0)]⋆ ⊆ Reach1 ∩ Ext+(s0, s1).

The second case is similar.
Claim 9. S′ ⊆ S1 ∪ S2.

Proof. Let s ∈ S′. Therefore there exists s0 ∈ S0 such that (s0, s) ∈ Reach and label(s) = ℓ4 ≠ ℓ1. Hence, due to Lemma 13, s ∉ interfere_{A0}{s0}. According to Lemma 24, there exists s1 such that either (1) s1 ∈ S+ and (s1, s) ∈ Reach1 ∩ Ext+(s0, s1), or (2) s1 ∈ S¬ and (s1, s) ∈ Reach2 ∩ Ext¬(s0, s1). In the first case, by definition, s ∈ S1, and in the second case s ∈ S2.
Claim 10. Self ⊆ Self+ ∪ Self1 ∪ Self¬ ∪ Self2.

Proof. Let (s, s′) ∈ Self. Then, there exists s0 ∈ S0 such that (s0, s) ∈ Reach.

Let us consider the case s ∈ interfere_{A0}({s0}). By Lemma 13, label(s) = ℓ1. Hence, by Lemmas 1 and 23, (s, s′) ∈ Tr_{ℓ1 guard(cond), ℓ2} ∪ Tr_{ℓ1 guard(¬cond), ℓ3}. Hence, (s, s′) ∈ Self+ ∪ Self¬.

According to Lemma 24, if s ∉ interfere_{A0}({s0}), then there exists s1 such that either (1) s1 ∈ S+ and (s1, s) ∈ Reach1 ∩ Ext+(s0, s1), or (2) s1 ∈ S¬ and (s1, s) ∈ Reach2 ∩ Ext¬(s0, s1).

In the first case, by Lemma 14, label(s) ∈ Labs(ℓ2 cmd1, ℓ4). Hence, by Lemmas 1 and 23, (s, s′) ∈ Tr_{ℓ2 cmd, ℓ4} and therefore (s, s′) ∈ Self1.

In the second case, we similarly conclude that (s, s′) ∈ Self2.
Claim 11. Par ⊆ Par1 ∪ Par2.

Proof. Let (s, s′) ∈ Par. Therefore, there exist s0 ∈ S0 and s2 such that (s0, s2) ∈ Reach, (s2, s) ∈ Schedule and s ∈ after(s0). Notice that thread(s0) = thread(s2) ≠ thread(s).

Assume by contradiction that s2 ∈ interfere_{A0}({s0}). Hence, due to Lemma 11, thread(s) = thread(s0). This is contradictory.

Therefore, according to Lemma 24, there exists s1 such that either (1) s1 ∈ S+ and (s1, s) ∈ Reach1 ∩ Ext+(s0, s1), or (2) s1 ∈ S¬ and (s1, s) ∈ Reach2 ∩ Ext¬(s0, s1). In the two cases, by Lemma 12, s ∈ after(s1).

In the first case, by Lemma 14, label(s) ∈ Labs(ℓ2 cmd1, ℓ4) and therefore, by Lemmas 23 and 1, (s, s′) ∈ Tr_{ℓ2 cmd1, ℓ4}. Hence, (s, s′) ∈ Par1.

In the second case, we similarly conclude that (s, s′) ∈ Par2.
Claim 12. Sub ⊆ Sub1 ∪ Sub2.

Proof. Let (s, s′) ∈ Sub. Therefore, there exist s0 ∈ S0 and s2 ∈ S′ such that (s0, s2) ∈ Reach, (s2, s) ∈ Ext(s0, s2) and s ∈ after(s0) ∖ after(s2). Notice that thread(s0) = thread(s2) ≠ thread(s).

Assume by contradiction that s2 ∈ interfere_{A0}({s0}). Hence, due to Lemma 13, label(s2) = ℓ1. This is contradictory with s2 ∈ S′.

Therefore, according to Lemma 24, there exists s1 such that either (1) s1 ∈ S+ and (s1, s) ∈ Reach1 ∩ Ext+(s0, s1), or (2) s1 ∈ S¬ and (s1, s) ∈ Reach2 ∩ Ext¬(s0, s1). In the two cases, by Lemma 12, s ∈ after(s1).

In the first case, because s ∉ after(s2), by Proposition 3, (s, s′) ∈ Tr_{ℓ1 cmd1, ℓ2}. Hence, (s, s′) ∈ Sub1.

In the second case, we similarly conclude that (s, s′) ∈ Sub2.

Property 2 of Theorem 1 is a straightforward consequence of Claims 9, 10, 11 and 12.
3.5.3 Proof of Property 3 of Theorem 1

In this section, we consider a command ℓ1 while(cond){ℓ2 cmd}, ℓ3 and an initial configuration Q0 = ⟨S0, G0, A0⟩.
Let Q′ = ⟨S′, G′, A′⟩ = ⟦ℓ1 while(cond){ℓ2 cmd}, ℓ3⟧Q0.
Let Qω = ⟨Sω, Gω, Aω⟩ = loop↑ω(Q0).
Let Q′′ = ⟨S′′, G′′, A′′⟩ = ⟦ℓ1 while(cond){ℓ2 cmd}, ℓ3⟧Qω.
Let K = [Reach, Ext, Self, Par, Sub] = ⟦ℓ1 while(cond){ℓ2 cmd}, ℓ3⟧Qω.
Let Q+ = ⟨S+, G+, A+⟩ = ⟦ℓ1 guard(cond), ℓ2⟧(Qω).
Let K+ = [Reach+, Ext+, Self+, Par+, Sub+] = ⟦ℓ1 guard(cond), ℓ2⟧(Qω).
Let Kcmd = [Reachcmd, Extcmd, Selfcmd, Parcmd, Subcmd] = ⟦ℓ2 cmd, ℓ1⟧(Q+).
Let Q¬ = ⟨S¬, G¬, A¬⟩ = ⟦ℓ1 guard(¬cond), ℓ3⟧Qω.
Let K¬ = [Reach¬, Ext¬, Self¬, Par¬, Sub¬] = ⟦ℓ1 guard(¬cond), ℓ3⟧Qω.
Let Tr = Tr_{ℓ1 while(cond){ℓ2 cmd}, ℓ3}.

Lemma 25. Tr_{ℓ1 while(cond){ℓ2 cmd}, ℓ3} = Tr_{ℓ1 guard(¬cond), ℓ3} ∪ Tr_{ℓ1 guard(cond), ℓ2} ∪ Tr_{ℓ2 cmd, ℓ1}.

Notice that, by definition, Q0 ⩽ Qω.
Lemma 26. We use the above notations. Let s0, s1, . . . , sn, . . . , sm be a sequence of states such that (s0, sm) ∈ Reachω, (s0, sn) ∈ Reachω, sn ∈ Sω and for all k ∈ {0, . . . , m − 1}, (sk, sk+1) ∈ (Gω|after(s0) ∩ Tr) ∪ Aω|after(s0). Then (sn, sm) ∈ Reachω.

Proof. For all k, (sk, sk+1) ∈ (Gω|after(sn) ∩ Tr) ∪ (Gω|after(s0)∖after(sn) ∩ Tr) ∪ Aω|after(s0).

Let k0 ⩾ n such that (sk0, sk0+1) ∈ (Gω|after(s0)∖after(sn) ∩ Tr). Notice that (sn, sk0) ∈ Extω(s0, sn) and sk0 ∈ after(s0) ∖ after(sn). Hence, (sk0, sk0+1) ∈ Subω ⊆ Aω. Therefore (sk0, sk0+1) ∈ Aω|after(s1).

In addition to this, according to Lemma 15, after(sn) ⊆ after(s0), so, for all k ⩾ n, (sk, sk+1) ∈ (Gω|after(sn) ∩ Tr) ∪ Aω|after(s0).
Lemma 27. Using the notations of this section, if s ∈ Reach⟨S0⟩, then there exists s0 ∈ Sω such that:
1. either (s0, s) ∈ Reach¬,
2. or there exists s1 ∈ S+ such that (s0, s1) ∈ Reach+ and (s1, s) ∈ Reachcmd and label(s) ≠ ℓ1.

Proof. Let s ∈ Reach⟨S0⟩. We consider a sequence s0, . . . , sn of minimal length such that the following properties hold: (1) sn = s, (2) s0 ∈ Sω, (3) for all k ∈ {0, . . . , n − 1}, (sk, sk+1) ∈ (Gω|after(s0) ∩ Tr) ∪ Aω|after(s0). Such a sequence exists because S0 ⊆ Sω.

If for all k ∈ {0, . . . , n − 1}, (sk, sk+1) ∈ Schedule ∪ Aω|after(s0), then (s0, s) ∈ Reach+ ∩ Reach¬ ⊆ Reach¬.

Let us assume, from now on, that there exists k ∈ {0, . . . , n − 1} such that (sk, sk+1) ∈ Gω|after(s0) ∩ Tr_{ℓ1 while(cond){ℓ2 cmd}, ℓ3} ∖ Schedule. Let k0 be the smallest such k.

Therefore (sk0, sk0+1) ∈ Gω|after(s0), so sk0 ∈ after(s0). According to Lemma 11, thread(s0) = thread(sk0). By Lemma 13, label(s0) = label(sk0). But label(s0) = ℓ1, therefore, by Lemma 2, (sk0, sk0+1) ∉ Tr_{ℓ2 cmd, ℓ1}. Therefore, by Lemma 25, either (sk0, sk0+1) ∈ Tr_{ℓ1 guard(¬cond), ℓ3} or (sk0, sk0+1) ∈ Tr_{ℓ1 guard(cond), ℓ2}.

In the first case, by Lemma 5, label(sk0+1) = ℓ3. Let us prove by induction on k that for all k > k0, (sk, sk+1) ∈ Aω|after(s0) ∪ Schedule. By induction hypothesis (sk0, sk) ∈ [Aω|after(s0) ∪ Schedule]⋆. Let us consider the case (sk, sk+1) ∈ Gω|after(s0) ∩ Tr. Therefore sk ∈ after(s0), then by Lemma 11, thread(sk) = thread(sk0+1). By Lemma 13, label(sk) = label(sk0+1) = ℓ3. So, by Lemma 2, (sk, sk+1) ∈ Schedule. Hence (s0, s) ∈ Reach¬.

In the second case, (s0, sk0+1) ∈ Reach+ and therefore, by Lemma 5, sk0+1 ∈ S+. Either there exists k1 > k0 such that (sk1, sk1+1) ∈ G|after(s0) ∩ (Tr_{ℓ1 guard(¬cond), ℓ3} ∪ Tr_{ℓ1 guard(cond), ℓ3}) or there does not exist such a k1. Assume by contradiction that k1 exists. Therefore, by Lemma 5, label(sk0) = ℓ1. According to Lemma 14, thread(s) = thread(s0). Hence, (s0, sk1) ∈ Reachω. So, by Lemma 26, (sk1, sn) ∈ Reachω. This is contradictory with the minimality of the path s1, . . . , sn. Therefore k1 does not exist.

Hence, for all k > k0, (sk, sk+1) ∈ (Gω|after(s0) ∩ Tr_{ℓ2 cmd, ℓ1}) ∪ Aω|after(s0). According to Proposition 3, for all k > k0, (sk, sk+1) ∈ (Gω|after(s1) ∩ Tr_{ℓ2 cmd, ℓ1}) ∪ Aω|after(s0). Therefore, (sk0, s) ∈ Reachω.
Claim 13. Using the notation of this section, S′ ⊆ S¬.

Proof. Let s ∈ S′. Hence s ∈ Reach⟨S0⟩. Furthermore, label(s) = ℓ3; therefore, according to Lemma 15, for all s1, (s1, s) ∉ Reachω. Hence, according to Lemma 27, there exists s0 ∈ Sω such that (s0, s) ∈ Reach¬. Therefore s ∈ S¬.
Claim 14. Self ⊆ Self¬ ∪ Self+ ∪ Selfcmd.

Proof. Let (s, s′) ∈ Self. According to Lemma 25, (s, s′) ∈ Tr_{ℓ1 guard(¬cond), ℓ3} ∪ Tr_{ℓ1 guard(cond), ℓ2} ∪ Tr_{ℓ2 cmd, ℓ1}.

Let us consider the case (s, s′) ∈ Tr_{ℓ1 guard(¬cond), ℓ3} ∪ Tr_{ℓ1 guard(cond), ℓ2}. Due to Lemma 5, label(s) = ℓ1. Hence, according to Lemma 27, either (s0, s) ∈ Reach¬ or there exists s1 ∈ S+ such that (s1, s) ∈ Reachcmd (contradiction with Lemma 15 and label(s) = ℓ1). According to Lemma 16, either label(s) = ℓ2 ≠ ℓ1 (contradiction) or s ∈ interfere_{A0}(S0) ⊆ Reach¬⟨Sω⟩ ∩ Reach+⟨Sω⟩. Therefore either (s, s′) ∈ Self¬ or (s, s′) ∈ Self+.

Let us consider the case (s, s′) ∈ Tr_{ℓ2 cmd, ℓ1}. Therefore, according to Lemma 1, label(s) ∈ Labs(ℓ2 cmd, ℓ1) ∖ {ℓ1}. If s ∈ Reach¬⟨Sω⟩, then, by Lemma 16, label(s) ∈ {ℓ1, ℓ3}. Hence, s ∉ Reach¬⟨Sω⟩. So, by Lemma 27, there exist s0 ∈ S0 and s1 ∈ S+ such that (s0, s1) ∈ Reach+ and (s1, s) ∈ Reachcmd. According to Proposition 3, (s, s′) ∈ after(s1) and therefore (s, s′) ∈ Selfcmd.
Claim 15. Par ⊆ Parcmd.

Proof. Let (s, s′) ∈ Par. There exist s0 and s2 such that (s0, s2) ∈ Reachω. By Lemma 16, either (s0, s2) ∈ Reach¬ or there exists s1 ∈ S+ such that (s0, s1) ∈ Reach+ and (s1, s2) ∈ Reachcmd and label(s2) ≠ ℓ2.

In the first case, because s ∈ after(s0), by Lemma 11, thread(s) = thread(s0). But, by definition of Schedule and Reach¬, thread(s2) ≠ thread(s) and thread(s0) = thread(s2). This is contradictory.

In the second case, by Proposition 3, s ∈ after(s1). Because thread(s) ≠ thread(s0) = thread(s2), by Lemma 14, label(s) ∈ Labs(ℓ2 cmd, ℓ1) ∖ {ℓ2}. Therefore, by Lemmas 25 and 5, (s, s′) ∈ Tr_{ℓ2 cmd, ℓ1}. Hence (s, s′) ∈ Parcmd.
Claim 16. Sub ⊆ Sub¬.

Proof. Let (s, s′) ∈ Sub. Therefore, there exist s0 ∈ Sω and s1 ∈ S′ such that (s0, s1) ∈ Reach and (s1, s) ∈ Ext(s0, s1).

Notice that label(s1) = ℓ3, therefore, according to Lemma 15, s1 ∉ Reach+; Reachcmd⟨Sω⟩. Hence, by Lemma 27, (s0, s1) ∈ Reach¬.

(s1, s) ∈ Ext(s0, s1) ⊆ [(Gω|after(s0) ∩ Tr) ∪ Aω|after(s0) ∪ Gω|after(s1)]⋆. By Proposition 3, (s1, s) ∈ [(Gω|after(s0) ∩ Tr_{ℓ1 guard(¬cond), ℓ2}) ∪ (Gω|after(s1) ∩ Tr ∖ Tr_{ℓ1 guard(¬cond), ℓ2}) ∪ Aω|after(s0) ∪ Gω|after(s1)]⋆ = Ext¬(s1, s2).

Property 3 of Theorem 1 is a straightforward consequence of Claims 13, 14, 15 and 16.
3.5.4 Proof of Property 4 of Theorem 1

Let Q0 = ⟨S0, G0, A0⟩ be a configuration.
Let Q′ = ⟨S′, G′, A′⟩ = ⟦ℓ1 create(ℓ2 cmd), ℓ3⟧(Q0).
Let K = [Reach, Ext, Self, Par, Sub] = ⟦ℓ1 create(ℓ2 cmd), ℓ3⟧(Q0).
Let Q1 = ⟨S1, G1, A1⟩ = ⟦ℓ1 spawn(ℓ2), ℓ3⟧(Q0).
Let K1 = [Reach1, Ext1, Self1, Par1, Sub1] = ⟦ℓ1 spawn(ℓ2), ℓ3⟧(Q0).
Let Q2 = ⟨S2, G2, A2⟩ = init-child_{ℓ2}(Q1).
Let G∞ = guarantee_{ℓ2 cmd, ℓ∞}(Q2).
Let K3 = [Reach3, Ext3, Self3, Par3, Sub3] = ⟦ℓ2 cmd, ℓ∞⟧⟨S2, G∞, A2⟩.
Let Q3 = ⟨S3, G3, A3⟩ = combine_{Q0}(G∞).
Let Tr = Tr_{ℓ1 create(ℓ2 cmd), ℓ3}.

Lemma 28. Tr_{ℓ1 create(ℓ2 cmd), ℓ3} = Tr_{ℓ1 spawn(ℓ2), ℓ3} ∪ Tr_{ℓ2 cmd, ℓ∞}.
Lemma 29. Let T be a set of transitions. Let s0, s1, s2, s and s′ such that (s0, s1) ∈ Reach1, s2 ∈ schedule-child{s1}, label(s1) = ℓ3, (s2, s) ∈ T⋆ and s ∈ after(s0). Then s ∈ after(s1) ∪ after(s2).

Proof. According to Lemma 16, there exist s′0 and s′1 such that s′0 ∈ interfere_{A0}{s0}, (s′0, s′1) ∈ Tr_{ℓ1 spawn(ℓ2), ℓ3} ∖ Schedule, and s1 ∈ interfere_{A0}{s′1}. By Lemmas 11 and 1, thread(s0) = thread(s′0) = thread(s′1) = thread(s1).

Let i0 = thread(s0) and i = thread(s). Let g0, g′0, j, g1 and g such that, respectively, the genealogies of s0, s′0, s′1, s1, s2, s are g0, g0 · g′0, g0 · g′0 · (i0, ℓ2, j), g0 · g′0 · (i0, ℓ2, j) · g1, g0 · g′0 · (i0, ℓ2, j) · g1, g0 · g′0 · (i0, ℓ2, j) · g1 · g. Notice that s1 and s2 have the same genealogy.

Because (s0, s′0) ∈ [A0|after(s0) ∪ Schedule]⋆, by Lemma 10, desc_{g′0}{i0} = {i0}.

Because (s′1, s1) ∈ [A0|after(s0) ∪ Schedule]⋆, by Lemma 10, desc_{(i0,ℓ2,j)·g1}{i0} = desc_{(i0,ℓ2,j)}{i0} = {i0, j}.

By definition of desc, desc_{g′0·(i0,ℓ2,j)·g1·g}({i0}) = desc_g[desc_{(i0,ℓ2,j)·g1}(desc_{g′0}{i0})] = desc_g{i0, j}. By definition of desc, desc_{g′0·(i0,ℓ2,j)·g1·g}({i0}) = desc_g({i0}) ∪ desc_g({j}).

Because s ∈ after(s0), i ∈ desc_{g′0·(i0,ℓ2,j)·g1·g}({i0}). Therefore either i ∈ desc_g({i0}) or i ∈ desc_g({j}). If i ∈ desc_g({i0}) then s ∈ after(s1). If i ∈ desc_g({j}) then s ∈ after(s2).
Lemma 30. Let s0, s1, s2, s and s′ such that (s0, s1) ∈ Reach1, s2 ∈ schedule-child{s1}, label(s1) = ℓ3, (s2, s) ∈ (G0 ∪ A0)⋆|after(s1) and (s, s′) ∈ G0|after(s0) ∩ Tr. Then s ∈ after(s2) (i.e., (s, s′) ∈ G0|after(s2) ∩ Tr).

Proof. Due to Lemma 29, s ∈ after(s1) ∪ after(s2). Assume by contradiction that s ∈ after(s1). Therefore, by Lemma 11, thread(s) = thread(s1) and, by Lemma 13, label(s) = label(s1) = ℓ3. This is contradictory with Lemma 1, which implies label(s) ≠ ℓ3.
Lemma 31. If (s0, s) ∈ Reach then:
• either s ∈ interfere_{A0}(s0) and label(s) = ℓ1,
• or there exist s1, s2, s3 such that (s0, s1) ∈ Reach1, (s1, s2) ∈ Schedule, (s2, s3) ∈ Reach3 ∩ Ext1(s0, s1), (s3, s) ∈ Schedule and s2 ∈ schedule-child{s1}. Furthermore label(s1) = label(s) = ℓ3 and s ∈ interfere_{G0∪A0}{s1}.

Proof. If (s0, s) ∈ [A0|after(s0) ∪ Schedule]⋆ then s ∈ interfere_{A0}(s0) and, by Lemma 13, label(s) = ℓ1.

Then, let us consider the other case: (s0, s) ∉ [A0|after(s0) ∪ Schedule]⋆. Therefore, there exist s′0 and s1 such that (s0, s′0) ∈ [A0|after(s0) ∪ Schedule]⋆, (s′0, s1) ∈ (G0|after(s0) ∩ Tr) and (s1, s) ∈ [(G0|after(s0) ∩ Tr) ∪ A0|after(s0)]⋆.

Due to Lemma 11, because s′0 ∈ after(s0), thread(s′0) = thread(s0). According to Lemma 5, thread(s1) = thread(s′0) = thread(s0) and label(s1) = ℓ3. Therefore (s0, s1) ∈ Reach1.

Let (i1, P1, σ1, g1) = s1. Let g′1 and j such that g′1 · (i, ℓ2, j) = g1. Let s2 = (j, P1, σ1, g1). Therefore, s2 ∈ schedule-child{s1} and (s1, s2) ∈ Schedule.

Let (i, P, σ, g) = s and s3 = (j, P, σ, g). Therefore, (s3, s) ∈ Schedule.

Given that Schedule ⊆ A0 ∩ G0 ∩ Tr, we conclude that (s2, s3) ∈ [(G0|after(s0) ∩ Tr) ∪ A0|after(s0)]⋆. Using Lemma 30 and a straightforward induction, (s2, s3) ∈ [(G0|after(s2) ∩ Tr) ∪ A0|after(s0)]⋆. Then (s2, s3) ∈ Ext1(s0, s1). Furthermore, by Lemma 7, after(s2) ⊆ after(s0). Hence (s2, s3) ∈ [(G0|after(s2) ∩ Tr) ∪ A0|after(s2)]⋆. Therefore, by Proposition 1, (s2, s3) ∈ Reach3.
Claim 17. S′ ⊆ interfere_{G0∪A0}(S1).

Proof. Let s ∈ S′. Therefore there exists s0 ∈ S0 such that (s0, s) ∈ Reach and label(s) = ℓ3 ≠ ℓ1. According to Lemma 31 there exists s1 such that (s0, s1) ∈ Reach1, label(s1) = ℓ3 and s ∈ interfere_{G0∪A0}{s1}. Therefore s1 ∈ S1 and s ∈ interfere_{G0∪A0}(S1).
Claim 18. Self ⊆ Self1.

Proof. Let (s, s′) ∈ Self. According to Lemma 1, label(s) ≠ ℓ3. There exists s0 ∈ S0 such that (s0, s) ∈ Reach. Therefore, according to Lemma 31, s ∈ interfere_{A0}{s0}. Therefore (s0, s) ∈ Reach1 and, by Lemma 13, label(s) = ℓ1. Due to Lemmas 2 and 28, (s, s′) ∈ Tr_{ℓ1 spawn(ℓ2), ℓ3}. Hence (s, s′) ∈ Self1.
Claim 19. Par ⊆ Self3 ∪ Par3.

Proof. Let (s, s′) ∈ Par. Therefore, there exists s0 ∈ S0 such that (s0, s) ∈ Reach; Schedule and s ∈ after(s0). Notice that, by definition of Schedule, thread(s0) ≠ thread(s).

Assume by contradiction that s ∈ Schedule⟨interfere_{A0}{s0}⟩. Due to Lemma 11, thread(s0) = thread(s). This is contradictory.

Hence, by Lemma 31, there exist s1, s2, s3 such that (s0, s1) ∈ Reach1, (s1, s2) ∈ Schedule, (s2, s3) ∈ Reach3, (s3, s) ∈ Schedule, s2 ∈ schedule-child{s1}, and label(s1) = label(s) = ℓ3. Hence s1 ∈ S1 and s2 ∈ S2.

According to Lemma 8, after(s1) ∩ after(s2) = ∅. Given that (s2, s) ∈ Reach; Schedule; Schedule, (s2, s) ∈ (G0 ∪ A0)⋆|after(s1). Hence, due to Lemma 26, s ∈ after(s2).

If thread(s) = thread(s2), then (s2, s) ∈ Reach3 and (s, s′) ∈ Self3. If thread(s) ≠ thread(s2), then (s, s′) ∈ Par3.
Claim 20. Sub ⊆ Self3 ∪ Par3.

Proof. Let (s, s′) ∈ Sub. There exist s0, s4 such that (s0, s4) ∈ Reach, (s4, s) ∈ Ext(s0, s4) and s4 ∈ S′. By Lemma 31, there exist s1, s2, s3 such that (s0, s1) ∈ Reach1, s2 ∈ schedule-child({s1}), (s2, s3) ∈ Reach3 ∩ Ext1(s0, s1) and (s3, s4) ∈ Schedule.

Furthermore, s ∈ after(s0) ∖ after(s4). Due to Lemma 29, either s ∈ after(s1) ∖ after(s4) or s ∈ after(s2) ∖ after(s4). Assume by contradiction that s ∈ after(s1) ∖ after(s4). Therefore (s, s′) ∈ Sub1. But, by Claim 2, Sub1 = ∅. Therefore s ∈ after(s2) ∖ after(s4).

Given that (s4, s) ∈ Ext(s0, s4), (s4, s) ∈ [(G0|after(s0) ∩ Tr) ∪ A2|after(s0)]⋆ and, by Lemma 29, (s4, s) ∈ [(G0|after(s1)∪after(s2) ∩ Tr) ∪ A2|after(s0)]⋆.

By definition of post, after(s1) ⊆ post(ℓ2). Furthermore, by Lemma 8, after(s1) ∩ after(s2) = ∅. Therefore after(s1) ⊆ post(ℓ2) ∖ after(s2). Hence, (s4, s) ∈ [(G0|after(s2) ∩ Tr) ∪ A2|after(s0) ∪ G0|post(ℓ2)∖after(s2)]⋆. By Lemma 7, after(s2) ⊆ after(s), therefore (s4, s) ∈ [(G0|after(s2) ∩ Tr) ∪ (A2 ∪ G0|post(ℓ2))|after(s0)]⋆. By Proposition 1, (s4, s) ∈ [(G∞|after(s2) ∩ Tr) ∪ (A2 ∪ G0|post(ℓ2))|after(s0)]⋆.

Let (i, P, σ, g) = s and s5 = (thread(s2), P, σ, g). Therefore, (s2, s5) ∈ Reach3.

If i = thread(s2), then s5 = s and (s, s′) ∈ Self3. If i ≠ thread(s2), then (s5, s) ∈ Schedule and (s, s′) ∈ Par3.
3.6 Overapproximation of the Execution of a Program

Lemma 32. For all P and σ, after((main, P, σ, ǫ)) = States. In particular, if Init is the set of initial states of a program and s ∈ Init, then after(s) = States.

The following proposition shows the connection between the operational and the G-collecting semantics.

Proposition 4 (Connection with the operational semantics). Consider a program ℓ cmd, ℓ∞ and its set of initial states Init. Let:
⟨S′, G′, A′⟩ ≝ ⟦ℓ cmd, ℓ∞⟧⟨Init, G∞, Schedule⟩
with G∞ = guarantee_{ℓ cmd, ℓ∞}⟨Init, Schedule, Schedule⟩.
Then:
S′ = {(main, P, σ, g) ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩ | P(main) = ℓ∞}
G′ = G∞ = {(s, s′) ∈ Tr_{ℓ cmd, ℓ∞} | s ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩} ∪ Schedule
A′ = {(s, s′) ∈ Tr_{ℓ cmd, ℓ∞} | s ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩ ∧ thread(s) ≠ main} ∪ Schedule

Proof. We only have to prove that Reach = {s ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩ | thread(s) = main}.

Let s ∈ {s ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩ | thread(s) = main}. There exists s0 ∈ S such that (s0, s) ∈ Tr⋆_{ℓ cmd, ℓ∞}. By Proposition 1, (s0, s) ∈ [G∞ ∩ Tr_{ℓ cmd, ℓ∞}]⋆. By Lemma 32, (s0, s) ∈ [(G∞|after(s0) ∩ Tr_{ℓ cmd, ℓ∞}) ∪ Schedule|after(s0)]⋆. Hence (s0, s) ∈ Reach.

It is straightforward to check that Reach ⊆ {s ∈ Tr⋆_{ℓ cmd, ℓ∞}⟨Init⟩ | thread(s) = main}.

Recall that Tr⋆_{ℓ cmd, ℓ∞}(Init) is the set of states that occur on paths starting from Init. S′ represents all final states reachable by the whole program from an initial state. G′ represents all transitions that may be done during any execution of the program, and A′ represents the transitions of the children of main.
4 Abstract Semantics

4.1 Abstraction

Recall from the theory of abstract interpretation [4] that a Galois connection [23] between a concrete complete lattice X and an abstract complete lattice Y is a pair of monotonic functions α : X → Y and γ : Y → X such that ∀x ∈ X, ∀y ∈ Y, α(x) ⩽ y ⇔ x ⩽ γ(y); α is called the abstraction function and γ the concretization function. Product lattices are ordered by the product ordering, and sets of functions from X to a lattice L are ordered by the pointwise ordering f ⩽ g ⇔ ∀x ∈ X, f(x) ⩽ g(x). A monotonic function f♯ is an abstraction of a monotonic function f♭ if and only if α ∘ f♭ ∘ γ ⩽ f♯. It is a classical result [23] that in a Galois connection each adjoint uniquely determines the other; therefore, we sometimes omit the abstraction function (lower adjoint) or the concretization function (upper adjoint).

Our concrete lattices are the powersets P(States) and P(Tr) ordered by inclusion. Remember, our goal is to adapt any given single-thread analysis to a multithreaded setting. Accordingly, we are given an abstract complete lattice D of abstract states and an abstract complete lattice R of abstract transitions. These concrete and abstract lattices are linked by two Galois connections, respectively αD, γD and αR, γR. We assume that abstractions of states and transitions depend only on stores and that all the transitions that leave the store unchanged are in γR(⊥). This assumption allows us to abstract guard and spawn as the least abstract transition ⊥. We also assume we are given the abstract operators of Table 1, which are correct abstractions of the corresponding concrete functions.

We assume ℓ⋆ ∈ Labels is a special label which is never used in statements. Furthermore, we define post(ℓ⋆) ≝ States.

Table 1: Given abstractions.
  Concrete function                                                        Abstract function
  λ(i, P, σ, g).(i, P, write_{lv:=e}(σ), g)                                write_{lv:=e} : D → D
  λA, S. interfere_A(S)                                                    inter : R × D → D
  λS.{((i, P, σ, g), (i, P′, σ′, g′)) ∈ Tr | (i, P, σ, g) ∈ S ∧ σ′ ∈ write_{lv:=e}(S)}   write-inter_{lv:=e} : D → R
  λS.{(i, P, σ, h) ∈ S | bool(σ, cond) = true}                             enforce_cond : D → D

We define a Galois connection between P(States) and P(Labels): αL(S) = {ℓ ∈ Labels | S ∩ post(ℓ) ≠ ∅} and γL(L) = ⋂_{ℓ ∈ Labels∖L} post(ℓ) (by convention, this set is States when L = Labels). The set αL(S) represents the set of labels that may have been encountered before reaching this point of the program.

Note that we have two distinct ways of abstracting a state (i, P, σ, g): either by using αD, which only depends on the store σ, or by using αL, which only depends on the genealogy g and the current thread i. The latter is specific to the multithreaded case, and is used to infer information about possible interferences.

Just as αD was not enough to abstract states in the multithreaded setting, αR is not enough, and loses the information that a given transition is or is not in a given post(ℓ). This information is needed because G|post(ℓ) is used in Theorem 1 and Fig. 7. Let us introduce, to this end, the following Galois connection between the concrete lattice P(Tr) and the abstract lattice R^Labels, the product of |Labels| copies of R: αK(G) = λℓ.αR(G|post(ℓ)) and γK(K) = {(s, s′) ∈ Tr | ∀ℓ ∈ Labels, s ∈ post(ℓ) ⇒ (s, s′) ∈ γR(K(ℓ))}.
K = αK(G) is an abstraction of the guarantee condition: K(ℓ⋆) represents the whole set G, and K(ℓ) represents the interferences of a child with its parent, i.e., it abstracts G|post(ℓ).

Abstract configurations are tuples ⟨C, L, K, I⟩ ∈ D × P(Labels) × R^Labels × R such that inter_I(C) = C and ℓ⋆ ∈ L. The meaning of each component of an abstract configuration is given by the Galois connection αfg, γfg:

αfg⟨S, G, A⟩ ≝ ⟨inter_{αR(A)}(αD(S)), αL(S), αK(G), αR(A)⟩
γfg⟨C, L, K, I⟩ ≝ ⟨γD(C) ∩ γL(L), γK(K), γR(I)⟩

C abstracts the possible current stores S. L abstracts the labels encountered so far in the execution. I is an abstraction of the interferences A.
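To make the role of these Galois connections concrete, the following OCaml sketch shows one way to package a lattice, a Galois connection, and the soundness condition α ∘ f♭ ∘ γ ⩽ f♯ as module signatures. The module and function names (LATTICE, GALOIS, is_abstraction_of) are illustrative assumptions for this note, not the interface of the Parint/MT-Penjili implementation.

```ocaml
(* A complete lattice: the interface every single-thread domain
   D (abstract states) and R (abstract transitions) is assumed to provide. *)
module type LATTICE = sig
  type t
  val leq  : t -> t -> bool   (* the ordering <= *)
  val join : t -> t -> t      (* least upper bound *)
  val bot  : t
  val top  : t
end

(* A Galois connection between a concrete lattice C and an abstract
   lattice A: alpha and gamma are monotonic and satisfy
   alpha c <= a  <=>  c <= gamma a. *)
module type GALOIS = sig
  module C : LATTICE
  module A : LATTICE
  val alpha : C.t -> A.t
  val gamma : A.t -> C.t
end

(* Soundness of an abstract operator f_sharp w.r.t. a concrete f:
   alpha (f (gamma a)) <= f_sharp a for every abstract a.
   This is a runtime check usable in tests, not a proof. *)
module Soundness (G : GALOIS) = struct
  let is_abstraction_of ~(f : G.C.t -> G.C.t)
      ~(f_sharp : G.A.t -> G.A.t) (a : G.A.t) : bool =
    G.A.leq (G.alpha (f (G.gamma a))) (f_sharp a)
end
```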
4.2 Applications: Non-Relational Stores and Gen/Kill Analyses

As an application, we show some concrete and abstract stores that can be used in practice. We define a Galois connection αstore, γstore between concrete and abstract stores and encode both abstract states and abstract transitions as abstract stores, i.e., D = R. Abstract states are concretized by:
γD(σ♯) ≝ {(i, P, σ, g) | σ ∈ γstore(σ♯)}.

Non-relational store. Such a store is a map from the set Var of variables to some set V♭ of concrete values, and abstract stores are maps from Var to some complete lattice V♯ of abstract values. Given a Galois connection αV, γV between V♭ and V♯, the following is a classical, so-called non-relational abstraction of stores:
αstore(σ) ≝ λx.αV(σ(x))   and   γstore(σ♯) ≝ {σ | ∀x, σ(x) ∈ γV(σ♯(x))}.

Let val_C(e) and addr_C(lv) be the abstract value of the expression e and the set of variables that may be represented by lv, respectively, in the context C.

γR(σ♯) ≝ {((i, P, σ, h), (i′, P′, σ′, h′)) | ∀x, σ′(x) ∈ γV(σ♯(x)) ∪ {σ(x)}}
write_{x:=e}(C) ≝ C[x ↦ val_C(e)]
write_{lv:=e}(C) ≝ ⋃_{x ∈ addr_C(lv)} write_{x:=e}(C)
write-inter_{lv:=e}(C) ≝ λx. if x ∈ addr_C(lv) then val_C(e) else ⊥
inter_I(C) ≝ I ⊔ C
enforce_x(σ) ≝ σ[x ↦ true♯]   and   enforce_¬x(σ) ≝ σ[x ↦ false♯]

Gen/kill analyses. In such analyses [6], stores are sets, e.g., sets of initialized variables or sets of edges of a points-to graph. The set of stores is P(X) for some set X, D = R = P(X), and the abstraction is trivial: αstore = γstore = id. Each gen/kill analysis gives, for each assignment, two sets: gen(lv := e, σ) and kill(lv := e, σ). These sets may take the current store σ into account (e.g., Rugina and Rinard's strong flag [12, 13]); gen (resp. kill) is monotonic (resp. decreasing) in σ. We define the concretization of transitions and the abstract operators:
γR(σ♯) ≝ {(i, P, σ, h) → (i′, P′, σ′, h′) | σ′ ⊆ σ ∪ σ♯}
write_{lv:=e}(C) ≝ (C ∖ kill(lv := e, σ)) ∪ gen(lv := e, σ)
write-inter_{lv:=e}(C) ≝ gen(lv := e, σ)
inter_I(C) ≝ I ∪ C
enforce_x(σ) ≝ σ
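As an illustration of the non-relational case, here is a small OCaml sketch of an interval domain for V♯ together with the store operators write, write-inter, inter and enforce of Table 1, restricted to assignments x := e where e is a constant or a variable. The types and conventions (unbound variables in a state store mean [-oo,+oo], unbound variables in an interference store mean no interference) are assumptions made for this sketch; the real tools handle full C lvalues and expressions.

```ocaml
(* Interval abstract values V#: [lo, hi] with infinities, plus bottom. *)
type bound = NegInf | Fin of int | PosInf
type itv = Bot | Itv of bound * bound

let le a b = match a, b with
  | NegInf, _ | _, PosInf -> true
  | Fin x, Fin y -> x <= y
  | _ -> false

let join v w = match v, w with
  | Bot, x | x, Bot -> x
  | Itv (a, b), Itv (c, d) ->
    Itv ((if le a c then a else c), (if le d b then b else d))

(* Abstract stores: finite maps from variables to intervals. *)
module VMap = Map.Make (String)
type store = itv VMap.t

(* In a state store, an unbound variable means [-oo, +oo]. *)
let find x (c : store) =
  try VMap.find x c with Not_found -> Itv (NegInf, PosInf)

type expr = Const of int | Var of string

(* val_C(e): abstract value of e in context C. *)
let value (c : store) = function
  | Const n -> Itv (Fin n, Fin n)
  | Var x   -> find x c

(* write_{x:=e}(C) = C[x -> val_C(e)] *)
let write x e (c : store) : store = VMap.add x (value c e) c

(* write-inter_{x:=e}(C): the interference generated by the assignment,
   bottom everywhere except on x. *)
let write_inter x e (c : store) : store = VMap.add x (value c e) VMap.empty

(* inter_I(C) = I join C, computed pointwise.  I follows the
   interference convention (unbound = no interference), C the state
   convention (unbound = top). *)
let inter (i : store) (c : store) : store =
  VMap.merge
    (fun _ int_v st_v -> match int_v, st_v with
       | None, v -> v                       (* no interference on this variable *)
       | Some v, Some w -> Some (join v w)
       | Some _, None -> None)              (* C is already [-oo,+oo] on x *)
    i c

(* enforce_x / enforce_{not x} for a boolean-as-integer variable x. *)
let enforce_true  x (c : store) : store = VMap.add x (Itv (Fin 1, Fin 1)) c
let enforce_false x (c : store) : store = VMap.add x (Itv (Fin 0, Fin 0)) c
```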
4.3 Semantics of Commands

Lemma 33. αL(S) = αL(interfere_A(S)).

Lemma 34. αL(schedule-child(S)) = λℓ.⊥.

Lemma 35. Let G1 and G2 be two sets of transitions and S2 = {s | ∃s′ : (s, s′) ∈ G2}. Then αK(G1 ∪ G2) ⩽ λℓ. if ℓ ∈ αL(S2) then K(ℓ) ⊔ write-inter_{lv:=e}(C) else K(ℓ).

Figure 8: Basic abstract semantic functions.
assign_{lv:=e}⟨C, L, K, I⟩ ≝ ⟨inter_I ∘ write_{lv:=e}(C), L, K′′, I⟩
  with K′′ = λℓ. if ℓ ∈ L then K(ℓ) ⊔ write-inter_{lv:=e}(C) else K(ℓ)
guard_cond⟨C, L, K, I⟩ ≝ ⟨inter_I ∘ enforce_cond(C), L, K, I⟩
spawn_ℓ⟨C, L, K, I⟩ ≝ ⟨C, L ∪ {ℓ}, K, I⟩
child-spawn_ℓ⟨C, L, K, I⟩ ≝ ⟨inter_{I⊔K(ℓ)}(C), L, λℓ.⊥, I ⊔ K(ℓ)⟩
combine_{⟨C,L,K,I⟩}(K′) ≝ ⟨inter_{I⊔K′(ℓ⋆)}(C), L, K ⊔ K′, I ⊔ K′(ℓ⋆)⟩
execute-thread_{ℓ cmd, ℓ′, C, L, I}(K) ≝ K′ with ⟨C′, L′, K′, I′⟩ = Lℓ cmd, ℓ′M⟨C, L, K, I⟩
guarantee_{ℓ cmd, ℓ′}(⟨C, L, K, I⟩) ≝ execute-thread↑ω_{ℓ cmd, ℓ′, C, L, I}(K)

The functions of Fig. 8 abstract the corresponding functions of the G-collecting semantics (see Fig. 7).
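Continuing the interval-store sketch, the following OCaml fragment shows one possible representation of an abstract configuration ⟨C, L, K, I⟩ and of the assign, guard, spawn and child-spawn functions of Fig. 8. It is a simplified rendering under stated assumptions (labels are strings, K is a plain finite map, conditions are a single variable), not the code of the implementation.

```ocaml
type label = string
let l_star : label = "l*"        (* the special label never used in statements *)

module LMap = Map.Make (String)

(* Join of two stores under the interference convention (unbound = bottom). *)
let join_bot (a : store) (b : store) : store =
  VMap.union (fun _ v w -> Some (join v w)) a b

(* An abstract configuration <C, L, K, I>. *)
type config = {
  conf_c : store;          (* C: possible current stores *)
  conf_l : label list;     (* L: labels encountered so far, l_star included *)
  conf_k : store LMap.t;   (* K in R^Labels; here R = D = abstract stores *)
  conf_i : store;          (* I: abstraction of the interferences *)
}

let k_find l k = try LMap.find l k with Not_found -> VMap.empty  (* bottom *)

(* assign_{x:=e} <C, L, K, I> (Fig. 8): the new store is
   inter_I (write_{x:=e} C), and K(l) is enriched with
   write-inter_{x:=e}(C) for every l in L. *)
let assign (x : string) (e : expr) (q : config) : config =
  let delta = write_inter x e q.conf_c in
  let k' =
    List.fold_left
      (fun k l -> LMap.add l (join_bot delta (k_find l k)) k)
      q.conf_k q.conf_l
  in
  { q with conf_c = inter q.conf_i (write x e q.conf_c); conf_k = k' }

(* guard_cond for a condition "x" or "!x". *)
let guard_var ?(negated = false) (x : string) (q : config) : config =
  let c = (if negated then enforce_false else enforce_true) x q.conf_c in
  { q with conf_c = inter q.conf_i c }

(* spawn_l and child-spawn_l of Fig. 8. *)
let spawn (l : label) (q : config) : config =
  { q with conf_l = if List.mem l q.conf_l then q.conf_l else l :: q.conf_l }

let child_spawn (l : label) (q : config) : config =
  let kl = k_find l q.conf_k in
  { conf_c = inter (join_bot q.conf_i kl) q.conf_c;
    conf_l = q.conf_l;
    conf_k = LMap.empty;                (* lambda l. bottom *)
    conf_i = join_bot q.conf_i kl }
```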
Proposition 5. The abstract functions assign_{lv:=e}, guard_cond, spawn_{ℓ2}, child-spawn_{ℓ2}, combine and guarantee_{ℓ cmd, ℓ∞} are abstractions of the concrete functions ⟦ℓ lv := e, ℓ′⟧, ⟦ℓ guard(cond), ℓ′⟧, ⟦ℓ1 spawn(ℓ2), ℓ3⟧, init-child_{ℓ1} ∘ ⟦ℓ1 spawn(ℓ2), ℓ3⟧, combine and guarantee_{ℓ cmd, ℓ∞}, respectively.

Proof. The cases of combine and guarantee_{ℓ cmd} are straightforward. The case of child-spawn_{ℓ2} is a straightforward consequence of Lemma 34.

Let ⟨C, L, K, I⟩ be an abstract configuration and ⟨S, G, A⟩ = γfg⟨C, L, K, I⟩. Therefore S = interfere_A(S).

Let ⟨S′, G′, A′⟩ = ⟦ℓ lv := e, ℓ′⟧ and ⟨C′, L′, K′, I′⟩ = assign_{lv:=e}⟨C, L, K, I⟩. Therefore, by definition, C′ = inter_I ∘ write_{lv:=e} ∘ inter_I(C). By Proposition 2, S′ = interfere_A(Tr_{ℓ lv:=e, ℓ′} ∖ Schedule⟨interfere_A(S)⟩). Hence αD(S′) ⩽ C′.

According to Proposition 2, G′ ⊆ G ∪ Gnew with Gnew = {(s, s′) ∈ Tr_{ℓ1 basic, ℓ2} | s ∈ interfere_A(S)} = {(s, s′) ∈ Tr_{ℓ1 basic, ℓ2} | s ∈ S}. Hence αR(Gnew) ⩽ write-inter_{lv:=e}(C). Therefore, by Lemma 35:
αK(G′) ⩽ λℓ. if ℓ ∈ L then K(ℓ) ⊔ write-inter_{lv:=e}(C) else K(ℓ).

If (s, s′) ∈ Tr_{ℓ lv:=e, ℓ′} then s′ ∈ post(ℓ) ⇔ s ∈ post(ℓ). Therefore, by Lemma 33, αL(S) = αL(S′).

Hence αfg(⟨S′, G′, A′⟩) ⩽ ⟨C′, L′, K′, I′⟩. Given that αR(Tr_{ℓ guard(cond), ℓ′}) = ⊥ and ∀(s, s′) ∈ Tr_{ℓ guard(cond), ℓ′}, s′ ∈ post(ℓ) ⇔ s ∈ post(ℓ), we prove in the same way that guard_cond is an abstraction of ⟦ℓ guard(cond), ℓ′⟧.
Figure 9: Abstract semantics.
Lℓ lv := eMQ ≝ assign_{lv:=e}Q
Lℓ1 cmd1; ℓ2 cmd2MQ ≝ Lℓ2 cmd2M ∘ Lℓ1 cmd1MQ
Lℓ1 while(cond){ℓ2 cmd}MQ ≝ guard_{¬cond}(loop↑ω(Q))
  with loop(Q′) ≝ LcmdM ∘ guard_cond(Q′) ⊔ Q′
Lℓ1 create(ℓ2 cmd)MQ ≝ combine_{Q′} ∘ guarantee↑ω_{ℓ2 cmd} ∘ child-spawn_{ℓ2}(Q′)
  with Q′ ≝ spawn_{ℓ2}(Q)

Given that αR(Tr_{ℓ1 spawn(ℓ2), ℓ3}) = ⊥ and ∀(s, s′) ∈ Tr_{ℓ1 spawn(ℓ2), ℓ3}, s′ ∈ post(ℓ) ⇔ s ∈ post(ℓ) ∨ ℓ = ℓ2, we prove in the same way that spawn_{ℓ2} is an abstraction of ⟦ℓ1 spawn(ℓ2), ℓ3⟧.

The assign_{lv:=e} function updates K by adding the modification of the store to all labels encountered so far (those which are in L). It does not change L because no thread is created. Notice that in the case of a non-relational store, we can simplify the function assign using the fact that inter_I ∘ write_{x:=e}(C) = C[x ↦ val_C(e) ⊔ I(x)].

The abstract semantics is defined by induction on syntax, see Fig. 9, and, with Prop. 5, it is straightforward to check the soundness of this semantics:

Theorem 2 (Soundness). Lcmd, ℓM is an abstraction of ⟦cmd, ℓ⟧.
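A minimal OCaml sketch of the syntax-directed semantics of Fig. 9, built on the previous sketches, is given below. The toy command syntax, the bounded loop iteration and the omission of create are simplifying assumptions: a sound implementation iterates until a post-fixpoint is reached or applies the widening of Section 5, and the create case would follow the combine/guarantee pattern of Fig. 9.

```ocaml
(* Pointwise join of two state stores (unbound variable = top). *)
let join_state (a : store) (b : store) : store =
  VMap.merge
    (fun _ v w -> match v, w with
       | Some v, Some w -> Some (join v w)
       | _ -> None)                     (* top on either side stays top *)
    a b

let join_config (q1 : config) (q2 : config) : config =
  { conf_c = join_state q1.conf_c q2.conf_c;
    conf_l = List.sort_uniq compare (q1.conf_l @ q2.conf_l);
    conf_k = LMap.union (fun _ a b -> Some (join_bot a b)) q1.conf_k q2.conf_k;
    conf_i = join_bot q1.conf_i q2.conf_i }

(* Toy command syntax; conditions are a single variable, create omitted. *)
type cmd =
  | Assign of string * expr
  | Seq    of cmd * cmd
  | While  of string * cmd

(* Bounded Kleene iteration standing in for loop^{up omega}; a sound
   implementation checks for stabilization or widens (Section 5). *)
let rec iter ~fuel (f : config -> config) (q : config) : config =
  if fuel = 0 then q else iter ~fuel:(fuel - 1) f (f q)

let rec eval (c : cmd) (q : config) : config =
  match c with
  | Assign (x, e)   -> assign x e q
  | Seq (c1, c2)    -> eval c2 (eval c1 q)
  | While (x, body) ->
    (* loop Q' = [[cmd]](guard_cond Q') joined with Q', then guard_{not cond}. *)
    let loop q' = join_config (eval body (guard_var x q')) q' in
    guard_var ~negated:true x (iter ~fuel:100 loop q)
```

For instance, eval (Seq (Assign ("y", Const 0), Assign ("z", Const 0))) applied to an initial configuration reproduces, under these assumptions, step 2 of the example of Section 4.4 below.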
4.4 Example

Consider Fig. 10 and the non-relational store of ranges [4]. We will apply our algorithm on this example. Our algorithm computes execute-thread a first time; the fixpoint is not reached, and then execute-thread is computed another time.

1. Initial configuration: Q0 = ⟨C0, {ℓ⋆}, K0, ⊥⟩ where C0 = [y = ?, z = ?], L0 = {ℓ⋆}, K0 = λℓ.⊥ and I0 = ⊥.
2. The configuration Q1 = Lℓ1 y := 0; ℓ2 z := 0, ℓ3M(Q0) is computed. Q1 = ⟨C1, {ℓ⋆}, K1, ⊥⟩ where C1 = [y = 0, z = 0] and K1 = ℓ⋆ ↦ [y = 0, z = 0]. The L and I components are not changed because no new thread is created.
3. The configuration Q2 = child-spawn_{ℓ3}(Q1) is computed. Q2 = ⟨C2, {ℓ⋆}, K2, ⊥⟩ where C2 = C1 and K2 = λℓ.⊥. Notice that because K1(ℓ3) = ⊥ the equality C2 = C1 holds.
4. The configuration Q3 = Lℓ4 y := y + z, ℓ∞M(Q2) is computed. Q3 = ⟨C3, {ℓ⋆}, K3, ⊥⟩ where C3 = [y = 0, z = 0] and K3 = ℓ⋆ ↦ [y = 0].
5. The configuration Q4 = combine_{spawn_{ℓ3}(Q2)}(Q3) is computed. Q4 = ⟨C4, {ℓ⋆, ℓ3}, K4, I4⟩ with C4 = [y = 0, z = 0], K4 = [ℓ⋆ ↦ [y = 0, z = 0]] and I4 = [y = 0].
6. The configuration Q5 = Lℓ5 z := 3, ℓ∞MQ4 is computed. Q5 = ⟨C5, {ℓ⋆, ℓ3}, K5, I5⟩ with C5 = [y = 0, z = 3], K5 = [ℓ⋆ ↦ [y = 0, z = [0, 3]]] and I5 = I4.

Then, we compute execute-thread a second time, on a new initial configuration ⟨C0, L0, K5, I0⟩. Nothing changes, except at step 3, when child-spawn is applied. The configuration obtained is then Q2′ = ⟨C2′, {ℓ⋆}, K5, I2′⟩ where C2′ = [y = 0, z = [0, 3]] and I2′ = [z = 3]. Then, the algorithm discovers that the value of y may be 3.

Figure 10: Example.
ℓ1 y := 0; ℓ2 z := 0;
ℓ3 create(ℓ4 y := y + z);
ℓ5 z := 3, ℓ∞

The details of the execution of the algorithm are given in the following table:
                                    C                   L           K                                             I
First pass
  Initial configuration            y = ?, z = ?        {ℓ⋆}        λℓ.⊥                                          ⊥
  ℓ1 y := 0, ℓ2                    y = 0, z = ?        {ℓ⋆}        ℓ⋆ ↦ [y = 0]                                  ⊥
  ℓ2 z := 0, ℓ3                    y = 0, z = 0        {ℓ⋆}        ℓ⋆ ↦ [y = 0, z = 0]                           ⊥
  child-spawn_{ℓ3}                 y = 0, z = 0        {ℓ⋆}        λℓ.⊥                                          ⊥
  ℓ4 y := y + z, ℓ∞                y = 0, z = 0        {ℓ⋆}        ℓ⋆ ↦ [y = 0]                                  ⊥
  combine_{spawn_{ℓ3}(·)}          y = 0, z = 0        {ℓ⋆, ℓ3}    ℓ⋆ ↦ [y = 0, z = 0]                           y = 0
  ℓ5 z := 3, ℓ∞                    y = 0, z = 3        {ℓ⋆, ℓ3}    ℓ⋆ ↦ [y = 0, z = [0, 3]]; ℓ3 ↦ [z = 3]        y = 0
Second pass
  Initial configuration            y = ?, z = ?        {ℓ⋆}        ℓ⋆ ↦ [y = 0, z = [0, 3]]; ℓ3 ↦ [z = 3]        ⊥
  ℓ1 y := 0, ℓ2                    y = 0, z = ?        {ℓ⋆}        ℓ⋆ ↦ [y = 0, z = [0, 3]]; ℓ3 ↦ [z = 3]        ⊥
  ℓ2 z := 0, ℓ3                    y = 0, z = 0        {ℓ⋆}        ℓ⋆ ↦ [y = 0, z = [0, 3]]; ℓ3 ↦ [z = 3]        ⊥
  child-spawn_{ℓ3}                 y = 0, z = [0, 3]   {ℓ⋆}        λℓ.⊥                                          z = 3
  ℓ4 y := y + z, ℓ∞                y = [0, 3], z = [0, 3]  {ℓ⋆}    ℓ⋆ ↦ [y = [0, 3]]                             z = 3
  combine_{spawn_{ℓ3}(·)}          y = [0, 3], z = 0   {ℓ⋆, ℓ3}    ℓ⋆ ↦ [y = [0, 3], z = [0, 3]]; ℓ3 ↦ [z = 3]   y = [0, 3]
  ℓ5 z := 3, ℓ∞                    y = [0, 3], z = 3   {ℓ⋆, ℓ3}    ℓ⋆ ↦ [y = [0, 3], z = [0, 3]]; ℓ3 ↦ [z = 3]   y = [0, 3]

5 Practical Results

The abstract semantics is denotational, so we may compute it recursively. This requires computing fixpoints and may fail to terminate. For this reason, each time we have to compute f↑ω(X) we compute instead the over-approximation f↑▽(X), where ▽ is a widening operator, in the following way:
1. Assign X1 := X.
2. Compute X2 := f(X1).
3. If X2 ⩽ X1 then return X2, otherwise,
4. Assign X1 := X1 ▽ X2 and go back to 2.
Our final algorithm is to compute recursively guarantee_{ℓ cmd, ℓ∞} applied to the initial configuration ⟨⊤, {ℓ⋆}, λℓ.⊥, ⊥⟩, overapproximating all fixpoint computations.
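The four-step widening loop just described can be written directly as follows; the abstract order leq and the widening widen are assumed to be supplied by the abstract domain, matching the ⩽ and ▽ of the text.

```ocaml
(* f_up_nabla: over-approximate f^{up omega}(x) with a widening. *)
let fix_with_widening
    ~(leq : 'a -> 'a -> bool)
    ~(widen : 'a -> 'a -> 'a)
    (f : 'a -> 'a) (x : 'a) : 'a =
  let rec loop x1 =
    let x2 = f x1 in              (* step 2 *)
    if leq x2 x1 then x2          (* step 3: post-fixpoint reached *)
    else loop (widen x1 x2)       (* step 4: widen and go back to 2 *)
  in
  loop x                          (* step 1: X1 := X *)
```

Instantiated with a widening and an order on configurations (for instance the interval widening sketched in Section 6.2), this replaces the bounded iteration used in the earlier evaluation sketch.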
We have implemented two tools, Parint and MT-Penjili, in OCaml with the front-end C2newspeak, with two different abstract stores. The first one maps variables to integer intervals and computes an over-approximation of the values of the variables. The second one extends the analysis of Allamigeon et al. [2], which focuses on pointers, integers, C-style strings and structs and detects array overflows. It analyzes programs in full-fledged C (except for dynamic memory allocation library routines) that use the Pthreads multithread library. We ignore mutexes and condition variables in these implementations. This is sound because mutexes and condition variables only restrict possible transitions. We lose precision if mutexes are used to create atomic blocks, but not if they are used only to prevent data-races.

Table 2: Benchmarks.
             L.o.C.    Parint time   MT-Penjili time   false alarms
  Message    65        0.05s         0.20s             0
  Embedded   27 100    -             0.34s             7
  Test 12    342       -             3.7s              1
  Test 15    414       3.8s          -                 -

In Table 2 we show some results on benchmarks of different sizes. L.o.C. means Lines of Code. Message is a C file with 3 threads: one thread sends an integer message to another through a shared variable. Embedded is extracted from embedded C code with two threads. Test 12 and Test 15 are sets of 12 and 15 files respectively, each one focusing on a specific thread interaction.

To give an idea of the precision of the analysis, we indicate how many false alarms were raised. Our preliminary experiments show that our algorithm loses precision in two ways: 1. through the (single-thread) abstraction on stores, 2. by abstraction on interferences. Indeed, even though our algorithm takes the order of transitions into account for the current thread, it considers that interference transitions may be executed in an arbitrary order and arbitrarily many times. This does not cause any loss in Message, since the thread which sends the message never puts an incorrect value in the shared variable. Despite the fact that Embedded is a large excerpt of an actual industrial code, the loss of precision is moderate: 7 false alarms are reported on a total of 27 100 lines. Furthermore, because of this arbitrary order, our analysis straightforwardly extends to models with "relaxed-consistency" and "temporary" views of thread memory due to the use of cache, e.g., OpenMP.
6 Complexity

The complexity of our algorithm greatly depends on the widening and narrowing operators. Given a program ℓ0 prog, ℓ∞, the slowness of the widening and narrowing is an integer w such that the widening-narrowing always stops in at most w steps on each loop and whenever guarantee is computed (which also requires doing an abstract fixpoint computation). Let the nesting depth of a program be the nesting depth of while and of create which have a create subcommand.²

Proposition 6. Let d be the nesting depth, n the number of commands of our program, and w the slowness of our widening. The time complexity of our analysis is O(n·w^{d+1}), assuming operations on abstract stores are done in constant time.

This is comparable to the O(n·w^d) complexity of the corresponding single-thread analysis, and certainly much better than the combinatorial explosion of interleaving-based analyses. Furthermore, this is better than polynomial in an exponential number of states [15].

Proof. Let c(ℓ cmd, ℓ′), n(ℓ cmd, ℓ′), d(ℓ cmd, ℓ′) and w(ℓ cmd, ℓ′) be the complexity of analyzing ℓ cmd, ℓ′, the size of ℓ cmd, ℓ′, the nesting depth of ℓ cmd, ℓ′, and the slowness of the widening and narrowing on ℓ cmd, ℓ′, respectively.³ Let a and k be the complexity of assign and of reading K(ℓ), respectively.

Proposition 6 is a straightforward consequence of the following lemma:

Lemma 36. The complexity of computing Lℓ cmd, ℓ′MQ is O(a·n·(w + k)·w^{d−1}).

This lemma is proven by induction:
c(lv := e) = a
c(ℓ1 cmd1; ℓ2 cmd2, ℓ3) = c(ℓ1 cmd1, ℓ2) + c(ℓ2 cmd2, ℓ3)
c(ℓ1 while(cond){ℓ2 cmd}, ℓ3) ⩽ w(ℓ1 while(cond){ℓ2 cmd}, ℓ3) × c(ℓ2 cmd, ℓ1)
If ℓ2 cmd does not contain any subcommand create, then the fixpoint computation terminates in one step: c(ℓ1 create(ℓ2 cmd), ℓ3) = k + c(ℓ2 cmd).
Else: c(ℓ1 create(ℓ2 cmd), ℓ3) = k + w(ℓ1 create(ℓ2 cmd), ℓ3) × c(ℓ2 cmd).

² In our semantics, each create needs a fixpoint computation, except create with no subcommand create.
³ The function arguments are omitted in the name of simplicity.
6.1 Complexity of Operations on R^Labels

Notice that in Proposition 6 we have assumed that operations on R^Labels are done in constant time. This abstract store may be represented in different ways. The main problem is the complexity of the assign function, which computes a union for each element in L. The naive approach is to represent K ∈ R^Labels as a map from Labels to R. Assuming that operations on maps are done in constant time, this approach yields O(t·n·w^d) complexity, where t is the number⁴ of creates in the program.

We may also represent K ∈ R^Labels as a map K_M from P(Labels) to R such that K(ℓ) = ⊔_{L ∋ ℓ} K_M(L); the function assign is then done in constant time: assign_{lv:=e}⟨C, L, K, I⟩ ≝ ⟨inter_I ∘ write_{lv:=e}(C), L, K_M[L ↦ K_M(L) ⊔ write-inter_{lv:=e}(C)], I⟩. Nevertheless, accessing the value K(ℓ) may need up to t operations, which increases the complexity of child-spawn and combine. The complexity is then O(n·(w + t)·w^{d−1}).

⁴ This is different from the number of threads, since an arbitrary number of threads may be created at the same location.
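The two representations of K just discussed can be sketched in OCaml as follows; R stands for the abstract transition lattice and the module names (ByLabel, BySet) are assumptions made for this sketch.

```ocaml
module type RLATTICE = sig
  type t
  val bot  : t
  val join : t -> t -> t
end

module MakeK (R : RLATTICE) = struct
  module LabelSet = Set.Make (String)
  module ByLabel  = Map.Make (String)    (* naive: one entry per label *)
  module BySet    = Map.Make (LabelSet)  (* indexed by sets of labels *)

  (* Naive representation: assign updates every label currently in L
     (up to t+1 entries); reading K(l) is a single lookup. *)
  let assign_naive (l_set : LabelSet.t) (delta : R.t) (k : R.t ByLabel.t) =
    LabelSet.fold
      (fun l k ->
         let old = try ByLabel.find l k with Not_found -> R.bot in
         ByLabel.add l (R.join old delta) k)
      l_set k

  let read_naive l k = try ByLabel.find l k with Not_found -> R.bot

  (* Set-indexed representation K_M with K(l) = join of K_M(L) over all
     L containing l: assign touches a single entry, but reading K(l)
     folds over up to t entries (the cost moves to child-spawn/combine). *)
  let assign_indexed (l_set : LabelSet.t) (delta : R.t) (km : R.t BySet.t) =
    let old = try BySet.find l_set km with Not_found -> R.bot in
    BySet.add l_set (R.join old delta) km

  let read_indexed (l : string) (km : R.t BySet.t) =
    BySet.fold
      (fun l_set v acc -> if LabelSet.mem l l_set then R.join acc v else acc)
      km R.bot
end
```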
6.2 Complexity of Widening

The slowness of the widening and narrowing operators, w, depends on the abstraction. Nevertheless, a widening is supposed to be fast. Consider the naive widening on intervals: [x, x′] ▽ [y, y′] = [z, z′] where z = x if y ⩾ x and z = −∞ otherwise, and z′ = x′ if y′ ⩽ x′ and z′ = +∞ otherwise. This widening never widens more than two times on the same variable. Therefore this naive widening is linear in the worst case.
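The naive interval widening just described, written in OCaml and reusing the bound/itv types of the earlier store sketch:

```ocaml
(* [x,x'] widen [y,y'] : keep a bound that is not exceeded by the new
   iterate, and jump to infinity otherwise. Each variable can therefore
   be widened at most twice. *)
let widen_itv (v : itv) (w : itv) : itv =
  match v, w with
  | Bot, x | x, Bot -> x
  | Itv (x, x'), Itv (y, y') ->
    let lo = if le x y then x else NegInf in      (* y >= x : keep x  *)
    let hi = if le y' x' then x' else PosInf in   (* y' <= x': keep x' *)
    Itv (lo, hi)

(* Pointwise widening on abstract state stores. *)
let widen_store (a : store) (b : store) : store =
  VMap.merge
    (fun _ v w -> match v, w with
       | Some v, Some w -> Some (widen_itv v w)
       | _ -> None)
    a b
```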
Figure 11: Extended syntax.
Lℓ0 par{ℓ1 cmd1 | ℓ2 cmd2}M(Q) ≝ ⟨C1 ⊓ C2, L, K′, I1 ⊔ I2⟩
  with ⟨C1, L1, K1, I1⟩ = guarantee_{ℓ1 cmd1, ℓ∞} ∘ child-spawn_{ℓ1}(Q)
  and ⟨C2, L2, K2, I2⟩ = guarantee_{ℓ2 cmd2, ℓ∞} ∘ child-spawn_{ℓ2}(Q)
  and K′ = K[ℓ1 ↦ K2(ℓ⋆) ⊔ K(ℓ1)][ℓ2 ↦ K1(ℓ⋆) ⊔ K(ℓ2)]

6.3 Other Forms of Parallelism

Our technique also applies to other forms of concurrency; Fig. 11 displays how Rugina and Rinard's par constructor [12, 13] would be computed with our abstraction. Correctness is a straightforward extension of the techniques described in this paper.

Our model handles programs that use create and par. It can therefore handle OpenMP programs with parallel and task constructors.
7 Conclusion

We have described a generic static analysis technique for multithreaded programs, parametrized by a single-thread analysis framework and based on a form of rely-guarantee reasoning. To our knowledge, this is the first such modular framework: all previous analysis frameworks concentrated on a particular abstract domain. Such modularity allows us to leverage any static analysis technique to the multithreaded case. We have illustrated this by applying it to two abstract domains: an interval-based one, and a richer one that also analyzes array overflows, strings and pointers [2]. Both have been implemented.

We have shown that our framework only incurs a moderate (low-degree polynomial) amount of added complexity. In particular, we avoid the combinatorial explosion of all interleaving-based approaches.

Our analyses are always correct, and produce reasonably precise information on the programs we tested. Clearly, for some programs, taking locks/mutexes and conditions into account will improve precision. We believe that this is an orthogonal concern: the non-trivial part of our technique is already present without synchronization primitives, as should be manifest from the correctness proof of our G-collecting semantics. We leave the integration of synchronization primitives with our technique as future work. However, locks whose sole purpose is to prevent data races (e.g., ensuring that two concurrent accesses to the same variable are done in some arbitrary sequential order) have no influence on precision. Taking locks into account may be interesting to isolate atomic blocks.

8 Acknowledgment

We thank Jean Goubault-Larrecq for helpful comments.
References

[1] A. Miné, Field-sensitive value analysis of embedded C programs with union types and pointer arithmetics, in: ACM SIGPLAN LCTES'06, ACM Press, 2006, pp. 54-63, http://www.di.ens.fr/~mine/publi/article-mine-lctes06.pdf.
[2] X. Allamigeon, W. Godard, C. Hymans, Static Analysis of String Manipulations in Critical Embedded C Programs, in: K. Yi (Ed.), Static Analysis, 13th International Symposium (SAS'06), Vol. 4134 of Lecture Notes in Computer Science, Springer Verlag, Seoul, Korea, 2006, pp. 35-51.
[3] B. Steensgaard, Points-to analysis in almost linear time, in: POPL '96: Proceedings of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ACM Press, New York, NY, USA, 1996, pp. 32-41. doi:10.1145/237721.237727.
[4] P. Cousot, R. Cousot, Basic Concepts of Abstract Interpretation, Kluwer Academic Publishers.
[5] A. Miné, A new numerical abstract domain based on difference-bound matrices, in: PADO II, Vol. 2053 of LNCS, Springer-Verlag, 2001, pp. 155-172, http://www.di.ens.fr/~mine/publi/article-mine-padoII.pdf.
[6] P. Lammich, M. Müller-Olm, Precise fixpoint-based analysis of programs with thread-creation and procedures, in: L. Caires, V. T. Vasconcelos (Eds.), CONCUR, Vol. 4703 of Lecture Notes in Computer Science, Springer, 2007, pp. 287-302.
[7] P. Pratikakis, J. S. Foster, M. Hicks, Locksmith: context-sensitive correlation analysis for race detection, in: PLDI '06: Proceedings of the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation, ACM Press, New York, NY, USA, 2006, pp. 320-331. doi:10.1145/1133981.1134019.
[8] L. Fajstrup, E. Goubault, M. Raußen, Detecting deadlocks in concurrent systems, in: CONCUR '98: Proceedings of the 9th International Conference on Concurrency Theory, Springer-Verlag, London, UK, 1998, pp. 332-347.
[9] D. R. Butenhof, Programming with POSIX Threads, Addison-Wesley, 2006.
[10] P. Cousot, R. Cousot, Abstract interpretation and application to logic programs, Journal of Logic Programming, 1992.
[11] L. O. Andersen, Program analysis and specialization for the C programming language, Ph.D. thesis, DIKU, University of Copenhagen (May 1994). URL http://repository.readscheme.org/ftp/papers/topps/D-203.ps.gz
[12] R. Rugina, M. C. Rinard, Pointer analysis for multithreaded programs, in: PLDI, 1999, pp. 77-90. URL citeseer.ist.psu.edu/rugina99pointer.html
[13] R. Rugina, M. C. Rinard, Pointer analysis for structured parallel programs, ACM Trans. Program. Lang. Syst. 25 (1) (2003) 70-116. doi:10.1145/596980.596982.
[14] A. Venet, G. Brat, Precise and efficient static array bound checking for large embedded C programs, in: PLDI '04: Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language Design and Implementation, ACM Press, New York, NY, USA, 2004, pp. 231-242. doi:10.1145/996841.996869.
[15] C. Flanagan, S. Qadeer, Thread-modular model checking, in: T. Ball, S. K. Rajamani (Eds.), SPIN, Vol. 2648 of Lecture Notes in Computer Science, Springer, 2003, pp. 213-224.
[16] A. Malkis, A. Podelski, A. Rybalchenko, Precise thread-modular verification, in: H. R. Nielson, G. Filé (Eds.), SAS, Vol. 4634 of Lecture Notes in Computer Science, Springer, 2007, pp. 218-232.
[17] E. Yahav, Verifying safety properties of concurrent Java programs using 3-valued logic, in: POPL '01: Proceedings of the 28th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ACM Press, New York, NY, USA, 2001, pp. 27-40. doi:10.1145/360204.360206.
[18] T. W. Reps, Personal communication (2008).
[19] C. Flanagan, S. N. Freund, M. Lifshin, Type inference for atomicity, in: TLDI '05, ACM Press, 2005, pp. 47-58.
[20] R. J. Lipton, Reduction: a method of proving properties of parallel programs, Commun. ACM 18 (12) (1975) 717-721.
[21] V. Vojdani, V. Vene, Goblint: Path-sensitive data race analysis, in: SPLST, 2007.
[22] V. Vene, M. Müller-Olm, Global invariants for analyzing multi-threaded applications, in: Proc. of Estonian Academy of Sciences: Phys., Math., 2003, pp. 413-436.
[23] G. Gierz, K. Hofmann, K. Keimel, J. Lawson, M. Mislove, D. Scott, Continuous Lattices and Domains, Cambridge University Press, 2003.
Efficiently Decodable Non-Adaptive Threshold
Group Testing
Thach V. Bui∗ , Minoru Kuribayashi‡ , Mahdi Cheraghchi§, and Isao Echizen∗†
arXiv:1712.07509v3 [] 30 Jan 2018
∗ SOKENDAI (The Graduate University for Advanced Studies), Hayama, Kanagawa, Japan ([email protected])
† National Institute of Informatics, Tokyo, Japan ([email protected])
‡ Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan ([email protected])
§ Department of Computing, Imperial College London, UK ([email protected])

Abstract

We consider non-adaptive threshold group testing for identification of up to d defective items in a set of N items, where a test is positive if it contains at least u defective items, for 2 ≤ u ≤ d, and negative otherwise. The defective items can be identified using t = O( (d²/(d−u)²) · ((e(d−u))/u)^u · (u log(d/u) + log(1/ǫ)) · d² log N ) tests with probability at least 1 − ǫ for any ǫ > 0, or t = O( (d²/(d−u)²) · ((e(d−u))/u)^u · (d log(N/d) + u log(d/u)) · d² log N ) tests with probability 1. The decoding time is O( t/(d² log N) ) × poly(d² log N). This result significantly improves the best known results for decoding non-adaptive threshold group testing: O(N log N + N log(1/ǫ)) for probabilistic decoding, where ǫ > 0, and O(N^u log N) for deterministic decoding.
I. I NTRODUCTION
The goal of combinatorial group testing is to identify at most d defective items among a population of
N items (usually d is much smaller than N). This problem dates back to the work of Dorfman [1], who
proposed using a pooling strategy to identify defectives in a collection of blood samples. In each test, a
group of items are pooled, and the combination is tested. The result is positive if at least one item in the
group is defective and is otherwise negative. Damaschke [2] introduced a generalization of classical group
testing known as threshold group testing. In this variation, the result is positive if the corresponding group
contains at least u defective items, where u is a parameter, is negative if the group contains no more than
ℓ defective items, where 0 ≤ ℓ < u, and is arbitrary otherwise. When u = 1 and ℓ = 0, threshold group
testing reduces to classical group testing. We note that ℓ is always smaller than the number of defective
items; otherwise, every test would yield a negative outcome and no information would be gained from the tests.
There are two approaches for the design of tests. The first is adaptive group testing in which there are
several testing stages, and the design of each stage depends on the outcomes of the previous stages. The
second is non-adaptive group testing (NAGT) in which all tests are designed in advance, and the tests are
performed in parallel. NAGT is appealing to researchers in most application areas, such as computational
and molecular biology [3], multiple access communications [4] and data streaming [5] (cf. [6]). The focus
of this work is on NAGT.
In both threshold and classical group testing, it is desirable to minimize the number of tests and,
to efficiently identify the set of defective items (i.e., have an efficient decoding algorithm). For both
testings, one needs Ω(d log N) tests to identify all defective items [6], [7], [8] using adaptive schemes. In
adaptive schemes, the decoding algorithm is usually implicit in the test design. The number of tests and
the decoding time are significantly different between classical non-adaptive (CNAGT) and non-adaptive
threshold group testing (NATGT).
1
In CNAGT, Porat and Rothschild [9] first proposed explicit nonadaptive constructions using O(d2 log N)
tests. However, there is no efficient (sublinear-time) decoding algorithm associated with their schemes. For
exact identification, there are explicit schemes allowing defective items be identified using poly(d, log N)
tests in time poly(d, log N) [10], [11] (the number of tests can be as low as O(d1+o(1) log N) if false
positives are allowed in the reconstruction). To achieve a nearly optimal number of tests in adaptive group
testing and with low decoding complexity, Cai et al. [12] proposed using probabilistic schemes that need
O(d log d · log N) tests to find the defective items in time O(d(log N + log2 d)).
In
threshold group testing, Damaschke [2] showed that the set of positive items can be identified with
N
tests with up to g false positives and g false negatives, where g = u − ℓ − 1 is the gap parameter.
u
Cheraghchi [13] showed that it is possible to find the defective items with O(dg+2 log d · log(N/d)) tests,
and that this trade-off is essentially optimal. Recently, De Marco et al. [14] improved this bound to
O(d3/2 log(N/d)) tests under the extra assumption that the number of defective items is exactly d, which
is rather restrictive in application. Although the number of tests has been extensively studied, there have
been few reports that focus on the decoding algorithm as well. Chen andFu [15] proposed schemes
based
d−u
d u
d
N
on CNAGT for when g = 0 that can find the defective items using O u
d log d tests in
d−u
√
1
u
time O(N log N). Chan et al. [16] presented a randomized algorithm with O log ǫ · d u log N tests to
find the defective items in time O(N log N + N log 1ǫ ) given that g = 0 and u = o(d). The cost of these
decoding schemes increases with N. Our objective is to find an efficient decoding scheme to identify at
most d defective items in NATGT when g = 0.
Contributions: In this paper, we consider the case where g = 0, i.e., ℓ = u − 1 (u ≥ 2), and call
this model u-NATGT. We first propose an efficient scheme for identifying at most d defective items in
t
NATGT in time O(d2 log
× poly(d2 log N), where t is the number of tests. Our main idea is to create at
N)
least a specified number of rows in the test matrix such that the corresponding test in each row contains
exactly u defective items and such that the defective items in the rows are the defective items to be
identified. We “map” these rows using a special matrix constructed from a disjunct matrix (defined later)
and its complementary matrix, thereby converting the outcome in NATGT to the outcome in CNAGT.
The defective items in each row can then be efficiently identified.
Although Cheraghchi [13] and De Marco et al. [14] proposed nearly optimal bounds on the number of
tests, there are no decoding algorithms associated with their schemes. On the other hand, the scheme of
Chen et al. [15] requires an exponential number of tests, which is much larger than the number of tests
in our scheme. Moreover, the decoding complexity of their scheme is exponential in the number of items
N, which is impractical. Chan et al. [16] proposed a probabilistic approach to achieve a small number
of tests, which can be combinatorially better than our scheme. However, their scheme is only applicable
when the threshold u is much smaller than d (u = o(d)), and the decoding complexity remains high, namely
O(N log N + N log(1/ǫ)), where ǫ > 0 is the precision parameter.
We present a divide and conquer scheme which we then instantiate via deterministic and randomized
decoding. Deterministic decoding is a deterministic scheme in which all defective items can be found
with probability 1. Randomized decoding reduces the number of tests; all defective items can be found
with probability at least 1 − ǫ for any ǫ > 0. The decoding complexity is $\frac{t}{O(d^2 \log N)} \times \mathrm{poly}(d^2 \log N)$. A
comparison with existing work is given in Table I.
II. PRELIMINARIES
For consistency, we use capital calligraphic letters for matrices, non-capital letters for scalars, and bold
letters for vectors. All matrix and vector entries are binary. Here are some of the notations used:
1) N, d, x = (x1 , . . . , xN )T : number of items, maximum number of defective items, and binary representation of N items.
2) S = {j1 , j2 , . . . , j|S| }: the set of defective items; cardinality of S is |S| ≤ d.
TABLE I
COMPARISON WITH EXISTING WORK.

Scheme | Number of tests (t) | Decoding complexity | Decoding type
Cheraghchi [13] | $O\left(d^2 \log d \cdot \log\frac{N}{d}\right)$ | × | ×
De Marco et al. [14] | $O\left(d^2 \sqrt{\frac{d-u}{du}} \cdot \log\frac{N}{d}\right)$ | × | ×
Chen et al. [15] | $O\left(\left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} d \log\frac{N}{d}\right)$ | $O(N^u \log N)$ | Deterministic
Chan et al. [16] | $O\left(\log\frac{1}{\epsilon} \cdot d\sqrt{u}\,\log N\right)$ | $O\left(N \log N + N \log\frac{1}{\epsilon}\right)$ | Random
Deterministic decoding | $O\left(\frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u\left(d \log\frac{N}{d} + u \log\frac{d}{u}\right) d^2 \log N\right)$ | $\frac{t}{O(d^2 \log N)} \times \mathrm{poly}(d^2 \log N)$ | Deterministic
Randomized decoding | $O\left(\frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u \log\frac{d}{u} + \log\frac{1}{\epsilon}\right) d^2 \log N\right)$ | $\frac{t}{O(d^2 \log N)} \times \mathrm{poly}(d^2 \log N)$ | Random
3) ⊗, ⊙: operation related to u-NATGT and CNAGT, to be defined later.
4) T : t × N measurement matrix used to identify at most d defective items in u-NATGT, where integer
t ≥ 1 is the number of tests.
5) G = (gij ): h × N matrix, where h ≥ 1.
6) M = (mij ): a k × N (d + 1)-disjunct matrix used to identify at most u defective items in u-NATGT
and (d + 1) defective items in CNAGT, where integer k ≥ 1 is the number of tests.
7) M̄ = (m̄_{ij}): the k × N complementary matrix of M; m̄_{ij} = 1 − m_{ij}.
8) Ti,∗ , Gi,∗ , Mi,∗, Mj : row i of matrix T , row i of matrix G, row i of matrix M, and column j of
matrix M, respectively.
9) xi = (xi1 , . . . , xiN )T , Si : binary representation of items and set of indices of defective items in row
Gi,∗ .
10) diag(Gi,∗ ) = diag(gi1 , . . . , giN ): diagonal matrix constructed by input vector Gi,∗ .
A. Problem definition
We index the population of N items from 1 to N. Let [N] = {1, 2, . . . , N} and S be the defective set,
where |S| ≤ d. A test is defined by a subset of items P ⊆ [N]. (d, u, N)-NATGT is a problem in which
there are at most d defective items among N items. A test consisting of a subset of N items is positive
if there are at least u defective items in the test, and each test is designed in advance. Formally, the test
outcome is positive if |P ∩ S| ≥ u and negative if |P ∩ S| < u.
We can model (d, u, N)-NATGT as follows: A t×N binary matrix T = (tij ) is defined as a measurement
matrix, where N is the number of items and t is the number of tests. x = (x1 , . . . , xN )T is the binary
representation vector of N items, where |x| ≤ d. xj = 1 indicates that item j is defective, and xj = 0
indicates otherwise. The jth item corresponds to the jth column of the matrix. An entry tij = 1 naturally
means that item j belongs to test i, and tij = 0 means otherwise. The outcome of all tests is y =
(y1 , . . . , yt )T , where yi = 1 if test i is positive and yi = 0 otherwise. The procedure to get the outcome
vector y is called the encoding procedure. The procedure used to identify defective items from y is called
the decoding procedure. The outcome vector y is
$$\mathbf{y} = \mathcal{T} \otimes \mathbf{x} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \mathcal{T}_{1,*} \otimes \mathbf{x} \\ \vdots \\ \mathcal{T}_{t,*} \otimes \mathbf{x} \end{bmatrix} \stackrel{\mathrm{def}}{=} \begin{bmatrix} y_1 \\ \vdots \\ y_t \end{bmatrix} \qquad (1)$$
where ⊗ denotes the test operation in u-NATGT; namely, $y_i = \mathcal{T}_{i,*} \otimes \mathbf{x} = 1$ if $\sum_{j=1}^{N} x_j t_{ij} \geq u$,
and $y_i = \mathcal{T}_{i,*} \otimes \mathbf{x} = 0$ if $\sum_{j=1}^{N} x_j t_{ij} < u$, for i = 1, . . . , t. Our objective is to find an efficient decoding
scheme to identify at most d defective items in (d, u, N)-NATGT.
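To make the test operation in (1) concrete, here is a small Python sketch that simulates the u-NATGT encoding. It is illustrative only: the matrix, the defective set, and the threshold below are assumptions chosen for the example and are not taken from the paper.

```python
import numpy as np

def natgt_encode(T, x, u):
    """Simulate y = T (x) x for u-NATGT: y_i = 1 iff test i contains >= u defective items."""
    counts = T @ x                      # number of defective items in each test
    return (counts >= u).astype(int)    # threshold outcome

# Toy instance: t = 4 tests, N = 6 items, threshold u = 2, defective items {0, 3}.
T = np.array([[1, 0, 0, 1, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 0, 1]])
x = np.zeros(6, dtype=int)
x[[0, 3]] = 1
print(natgt_encode(T, x, u=2))          # [1 0 0 0]: only test 0 holds both defectives
```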
B. Disjunct matrices
When u = 1, u-NATGT reduces to CNAGT. To distinguish CNAGT and u-NATGT, we change notation
⊗ to ⊙ and use a k × N measurement matrix M instead of the t × N matrix T . The outcome vector y
in (1) is equal to
$$\mathbf{y} = \mathcal{M} \odot \mathbf{x} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \mathcal{M}_{1,*} \odot \mathbf{x} \\ \vdots \\ \mathcal{M}_{k,*} \odot \mathbf{x} \end{bmatrix} = \begin{bmatrix} \bigvee_{j=1}^{N} x_j \wedge m_{1j} \\ \vdots \\ \bigvee_{j=1}^{N} x_j \wedge m_{kj} \end{bmatrix} = \bigvee_{j=1,\, x_j=1}^{N} \mathcal{M}_j \qquad (2)$$
where ⊙ is the Boolean operator for vector multiplication in which multiplication is replaced with the
AND (∧) operator and addition is replaced with the OR (∨) operator, and $y_i = \mathcal{M}_{i,*} \odot \mathbf{x} = \bigvee_{j=1}^{N} x_j \wedge m_{ij} = \bigvee_{j=1,\, x_j=1}^{N} m_{ij}$ for i = 1, . . . , k.
The union of r columns of M is defined as $\bigvee_{i=1}^{r} \mathcal{M}_{j_i} = \left(\bigvee_{i=1}^{r} m_{1 j_i}, \ldots, \bigvee_{i=1}^{r} m_{k j_i}\right)^T$. A
column is said to not be included in another column if there exists a row such that the entry in the first
column is 1 and the entry in the second column is 0. If M is a (d + 1)-disjunct matrix satisfying the
property that the union of at most (d + 1) columns does not include any remaining column, x can always
be recovered from y. We need M to be a (d + 1)-disjunct matrix that can be efficiently decoded, as
in [11], [10], to identify at most d defective items in u-NATGT. A k × N strongly explicit matrix is a
matrix in which the entries can be computed in time poly(k). We can now state the following theorem:
Theorem 1. [10, Theorem 16] Let 1 ≤ d ≤ N. There exists a strongly explicit k × N (d + 1)-disjunct
matrix with k = O(d2 log N) such that for any k × 1 input vector, the decoding procedure returns the set
of defective items if the input vector is the union of at most d + 1 columns of the matrix in poly(k) time.
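The ⊙ operation and the disjunctness property can be checked directly from their definitions for tiny matrices. The following Python sketch is a brute-force illustration only; it does not scale and is not the efficient decoder of Theorem 1, and the identity-matrix example is an assumption chosen purely for simplicity.

```python
import numpy as np
from itertools import combinations

def cnagt_encode(M, x):
    """y = M (.) x: Boolean OR of the columns of M selected by x."""
    return (M @ x > 0).astype(int)

def is_disjunct(M, d):
    """Check d-disjunctness by brute force: no column is covered by the union of d other columns."""
    k, N = M.shape
    for j in range(N):
        others = [c for c in range(N) if c != j]
        for comb in combinations(others, min(d, N - 1)):
            union = M[:, list(comb)].max(axis=1)
            if np.all(M[:, j] <= union):   # column j is included in the union
                return False
    return True

# The 4x4 identity matrix is d-disjunct for every d < 4.
M = np.eye(4, dtype=int)
print(cnagt_encode(M, np.array([1, 0, 1, 0])))  # [1 0 1 0]
print(is_disjunct(M, 2))                        # True
```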
C. Completely separating matrix
We now introduce the notion of completely separating matrices which are used to get efficient decoding
algorithms for (d, u, N)-NATGT. A (u, w)-completely separating matrix is defined as follows:
Definition 1. An h × N matrix G = (gij )1≤i≤h,1≤j≤N is called a (u, w)-completely separating matrix if
for any pair of subsets I, J ⊂ [N] such that |I| = u, |J| = w, and I ∩ J = ∅, there exists row l such
that glr = 1 for any r ∈ I and gls = 0 for any s ∈ J. Row l is called a singular row to subsets I and J.
When u = 1, G is called a w-disjunct matrix.
This definition is slightly different from the one described by Lebedev [17]. It is easy to verify that, if
a matrix is a (u, w)-completely separating matrix, it is also a (u, v)-completely separating matrix for any
v ≤ w. Below we present the existence of such matrices.
Theorem 2. Given integers 1 ≤ u, w < N, there exists a (u, w)-completely separating matrix of size
h × N, where
$$h = \left( (u+w)\log\frac{eN}{u+w} + u\log\frac{e(u+w)}{u} \right) \cdot \frac{(u+w)^2}{w(2u+w)}\left(\frac{ew}{u}\right)^u + 1$$
and e is the base of the natural logarithm.
Proof: An h × N matrix G = (g_{ij})_{1≤i≤h,1≤j≤N} is generated randomly in which each entry g_{ij} is
assigned to 1 with probability p and to 0 with probability 1 − p. For any pair of subsets I, J ⊂ [N]
such that |I| = u and |J| = w, the probability that a given row is not singular to I and J is
$$1 - p^u (1-p)^w. \qquad (3)$$
Then, the probability that there is no singular row to subsets I and J is
$$f(p) = \left(1 - p^u (1-p)^w\right)^h. \qquad (4)$$
Using the union bound, the probability that some pair of subsets I, J ⊂ [N] with |I| = u, |J| = w does
not have a singular row, i.e., the probability that G is not a (u, w)-completely separating matrix, is at most
$$g(p, h, u, w, N) = \binom{N}{u+w}\binom{u+w}{u} f(p) = \binom{N}{u+w}\binom{u+w}{u}\left(1 - p^u(1-p)^w\right)^h. \qquad (5)$$
To ensure that there exists a G which is a (u, w)-completely separating matrix, one needs to find p and h such that
g(p, h, u, w, N) < 1. Choosing $p = \frac{u}{u+w}$, we have:
$$f(p) = \left(1 - p^u(1-p)^w\right)^h = \left(1 - \left(\frac{u}{w}\right)^u\left(1 - \frac{u}{u+w}\right)^{u+w}\right)^h \qquad (6)$$
$$\leq \exp\left(-h\left(\frac{u}{w}\right)^u\left(1 - \frac{u}{u+w}\right)^{u+w}\right), \quad \text{where } \exp(x) = e^x \qquad (7)$$
$$\leq \exp\left(-h\left(\frac{u}{w}\right)^u e^{-u}\left(1 - \frac{u^2}{(u+w)^2}\right)\right) = \exp\left(-h\left(\frac{u}{ew}\right)^u\frac{w(2u+w)}{(u+w)^2}\right). \qquad (8)$$
We get (7) because $1 - x \leq e^{-x}$ for any x > 0, and (8) because $\left(1+\frac{x}{n}\right)^n \geq e^x\left(1 - \frac{x^2}{n}\right)$ for n > 1,
|x| ≤ n. Then we have:
$$g(p,h,u,w,N) = \binom{N}{u+w}\binom{u+w}{u} f(p) \leq \left(\frac{eN}{u+w}\right)^{u+w}\left(\frac{e(u+w)}{u}\right)^{u} f(p) \qquad (9)$$
$$\leq \left(\frac{eN}{u+w}\right)^{u+w}\left(\frac{e(u+w)}{u}\right)^{u} \exp\left(-h\left(\frac{u}{ew}\right)^u \frac{w(2u+w)}{(u+w)^2}\right) \qquad (10)$$
$$< 1 \qquad (11)$$
$$\iff \left(\frac{eN}{u+w}\right)^{u+w}\left(\frac{e(u+w)}{u}\right)^{u} < \exp\left(h\left(\frac{u}{ew}\right)^u \frac{w(2u+w)}{(u+w)^2}\right) \qquad (12)$$
$$\iff h > \left((u+w)\log\frac{eN}{u+w} + u\log\frac{e(u+w)}{u}\right)\cdot\frac{(u+w)^2}{w(2u+w)}\left(\frac{ew}{u}\right)^u. \qquad (13)$$
We got (9) because $\binom{n}{k} \leq \left(\frac{en}{k}\right)^k$ and (10) by using (8). From (13), if we choose
$$h = \left((u+w)\log\frac{eN}{u+w} + u\log\frac{e(u+w)}{u}\right)\cdot\frac{(u+w)^2}{w(2u+w)}\left(\frac{ew}{u}\right)^u + 1 \qquad (14)$$
then g(p, h, u, w, N) < 1, i.e., there exists a (u, w)-completely separating matrix of size h × N.
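The probabilistic construction in this proof can be simulated for toy parameters. The Python sketch below is illustrative only: the sizes are chosen by hand rather than by the bound of Theorem 2, each entry is drawn with probability p = u/(u + w), and Definition 1 is verified by brute force, which is only feasible for very small N.

```python
import numpy as np
from itertools import combinations

def is_completely_separating(G, u, w):
    """Brute-force check of Definition 1 (only feasible for very small N)."""
    h, N = G.shape
    for I in combinations(range(N), u):
        rest = [c for c in range(N) if c not in I]
        for J in combinations(rest, w):
            ok = any(all(G[l, r] == 1 for r in I) and all(G[l, s] == 0 for s in J)
                     for l in range(h))
            if not ok:
                return False
    return True

rng = np.random.default_rng(0)
u, w, N, h = 2, 2, 8, 150          # toy sizes; h is chosen generously by hand
p = u / (u + w)                    # the success probability used in the proof of Theorem 2
G = (rng.random((h, N)) < p).astype(int)
print(is_completely_separating(G, u, w))   # expected to print True with high probability
```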
Suppose that G is an h × N (u, w)-completely separating matrix. If w is set to d − u, then every
h × d submatrix, which is constructed by its d columns, is a (u, d − u)-completely separating matrix.
This property is strict and makes the number of rows in G high. To reduce the number of rows, we
relax this property as follows: each h × d submatrix, which is constructed by d columns of G, is a
(u, d − u)-completely separating matrix with high probability. The following corollary describes this idea
in detail.
Corollary 1. Let u, d, N be any given positive integers such that 1 ≤ u < d < N. For any ǫ > 0, there
exists an h × N matrix such that each h × d submatrix, which is constructed by its d columns, is a
(u, d − u)-completely separating matrix with probability at least 1 − ǫ, where
$$h = \frac{d^2}{d^2 - u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u\log\frac{ed}{u} + \log\frac{1}{\epsilon}\right)$$
and e is the base of the natural logarithm.
Proof: An h × N matrix G = (g_{ij})_{1≤i≤h,1≤j≤N} is generated randomly in which each entry g_{ij} is
assigned to 1 with probability u/d and to 0 with probability 1 − u/d. Our task is now to prove that each
h × d matrix G′, which is constructed by d columns of G, is a (u, d − u)-completely separating matrix with
probability at least 1 − ǫ for any ǫ > 0. Specifically, we prove that
$$h = \frac{d^2}{d^2 - u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u\log\frac{ed}{u} + \log\frac{1}{\epsilon}\right)$$
is sufficient to achieve such a G′. Similar to the proof of Theorem 2, the probability that G′ is not a
(u, d − u)-completely separating matrix is at most
$$\binom{d}{u}\left(1 - \left(\frac{u}{d}\right)^u\left(1-\frac{u}{d}\right)^{d-u}\right)^h \leq \left(\frac{ed}{u}\right)^u \exp\left(-h\left(\frac{u}{d}\right)^u\left(1-\frac{u}{d}\right)^{d-u}\right) \qquad (15)$$
$$\leq \left(\frac{ed}{u}\right)^u \exp\left(-h\left(\frac{u}{e(d-u)}\right)^u\frac{(d-u)(d+u)}{d^2}\right). \qquad (16)$$
Requiring the right-hand side of (16) to be at most ǫ,
$$\left(\frac{ed}{u}\right)^u \exp\left(-h\left(\frac{u}{e(d-u)}\right)^u\frac{(d-u)(d+u)}{d^2}\right) \leq \epsilon \qquad (17)$$
$$\iff \frac{1}{\epsilon}\left(\frac{ed}{u}\right)^u \leq \exp\left(h\left(\frac{u}{e(d-u)}\right)^u\frac{d^2-u^2}{d^2}\right) \qquad (18)$$
$$\iff h \geq \frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u\log\frac{ed}{u} + \log\frac{1}{\epsilon}\right). \qquad (19)$$
We get (15) because $1 - x \leq e^{-x}$ for any x > 0 and $\binom{n}{k} \leq \left(\frac{en}{k}\right)^k$; (16) is derived because
$\left(1+\frac{x}{n}\right)^n \geq e^x\left(1-\frac{x^2}{n}\right)$ for n > 1, |x| ≤ n. This completes our proof.
III. PROPOSED SCHEME
The basic idea of our scheme, which uses a divide and conquer strategy, is to create at least κ rows,
e.g., i1 , i2 , . . . , iκ such that |Si1 | = · · · = |Siκ | = u and Si1 ∪ . . . ∪ Siκ = S. Then we “map” these rows by
using a special matrix that enables us to convert the outcome in NATGT to the outcome in CNAGT. The
defective items in each row can then be efficiently identified. We present a particular matrix that achieves
efficient decoding for each row in the following section.
A. When the number of defective items equals the threshold
In this section, we consider a special case in which the number of defective items equals the threshold,
i.e., |x| = u. Given a measurement matrix M and a representation vector of u defective items x (|x| = u),
what we observe is $\mathbf{y} = \mathcal{M} \otimes \mathbf{x} \stackrel{\mathrm{def}}{=} (y_1, \ldots, y_k)^T$. Our objective is to recover $\mathbf{y}' = \mathcal{M} \odot \mathbf{x} \stackrel{\mathrm{def}}{=} (y'_1, \ldots, y'_k)^T$
from y. Then x can be recovered if we choose M as a (d + 1)-disjunct matrix as described in Theorem 1.
To achieve this goal, we create a measurement matrix
$$\mathcal{A} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \mathcal{M} \\ \overline{\mathcal{M}} \end{bmatrix} \qquad (20)$$
where M = (m_{ij}) is a k × N (d + 1)-disjunct matrix as described in Theorem 1 and M̄ = (m̄_{ij}) is
the complement matrix of M, m̄_{ij} = 1 − m_{ij} for i = 1, . . . , k and j = 1, . . . , N. We note that M can
be decoded in time poly(k) = poly(d^2 log N) because k = O(d^2 log N). Let us assume that the outcome
vector is z. Then we have:
$$\mathbf{z} = \mathcal{A} \otimes \mathbf{x} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \mathcal{M} \otimes \mathbf{x} \\ \overline{\mathcal{M}} \otimes \mathbf{x} \end{bmatrix} \stackrel{\mathrm{def}}{=} \begin{bmatrix} \mathbf{y} \\ \overline{\mathbf{y}} \end{bmatrix} \qquad (21)$$
where $\mathbf{y} = \mathcal{M} \otimes \mathbf{x} = (y_1, \ldots, y_k)^T$ and $\overline{\mathbf{y}} = \overline{\mathcal{M}} \otimes \mathbf{x} = (\overline{y}_1, \ldots, \overline{y}_k)^T$. The following lemma shows that
$\mathbf{y}' = \mathcal{M} \odot \mathbf{x}$ can always be obtained from z; i.e., x can always be recovered.
Lemma 1. Given integers 2 ≤ u ≤ d < N, there exists a strongly explicit 2k × N matrix such that if
there are exactly u defective items among N items in u-NATGT, the u defective items can be identified
in time poly(k), where k = O(d2 log N).
Proof: We construct the measurement matrix A in (20) and assume that z is the observed vector as
in (21). Our task is to create vector y′ = M ⊙ x from z. One can get it using the following rules, where
l = 1, 2, . . . , k:
1) If y_l = 1, then y′_l = 1.
2) If y_l = 0 and ȳ_l = 1, then y′_l = 0.
3) If y_l = 0 and ȳ_l = 0, then y′_l = 1.
We now prove the correctness of the above rules. If y_l = 1, there are at least u defective items in
row M_{l,∗}; since |x| = u > 0, test l of M contains a defective item, so y′_l = 1. The first rule is thus implied.
If y_l = 0, there are fewer than u defective items in row M_{l,∗}. If, in addition, ȳ_l = 1, there must be at
least u defective items in row M̄_{l,∗}; because |x| = u, all u defective items belong to row M̄_{l,∗}. Since M̄_{l,∗}
is the complement of M_{l,∗}, there is no defective item in test l of M. Therefore, y′_l = 0, and the second
rule is implied.
If y_l = 0, there are fewer than u defective items in row M_{l,∗}. Similarly, if ȳ_l = 0, there are fewer than u
defective items in row M̄_{l,∗}. Because M̄_{l,∗} is the complement of M_{l,∗}, the number of defective items in
row M_{l,∗} cannot be zero and the number in row M̄_{l,∗} cannot be zero either; otherwise either y_l or ȳ_l
would equal 1. Since the number of defective items in row M_{l,∗} is not zero, the test outcome under ⊙ is
positive, i.e., y′_l = 1. The third rule is thus implied.
Since we obtain y′ = M ⊙ x, M is a (d + 1)-disjunct matrix, and u ≤ d, the u defective items can be identified
in time poly(k) by Theorem 1.
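The three rules translate directly into code. The following Python sketch is an illustration only (the random matrix and the defective positions are assumptions): it builds the threshold outcomes of M and its complement for a vector with exactly u defective items and checks that the rules recover M ⊙ x.

```python
import numpy as np

def threshold_outcome(M, x, u):
    """Threshold test outcome: entry l is 1 iff row l of M contains >= u defective items."""
    return (M @ x >= u).astype(int)

def recover_or_outcome(y, y_bar):
    """Rules 1-3 from Lemma 1: rebuild y' = M (.) x from y = M (x) x and y_bar = M_bar (x) x."""
    y_prime = np.zeros_like(y)
    for l in range(len(y)):
        if y[l] == 1:
            y_prime[l] = 1          # rule 1
        elif y_bar[l] == 1:
            y_prime[l] = 0          # rule 2
        else:
            y_prime[l] = 1          # rule 3
    return y_prime

# Toy instance with |x| = u = 2 defective items.
rng = np.random.default_rng(1)
M = (rng.random((6, 10)) < 0.4).astype(int)
M_bar = 1 - M
x = np.zeros(10, dtype=int)
x[[2, 7]] = 1
u = 2
y, y_bar = threshold_outcome(M, x, u), threshold_outcome(M_bar, x, u)
assert np.array_equal(recover_or_outcome(y, y_bar), (M @ x > 0).astype(int))
```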
Example: We demonstrate Lemma 1 by setting u = d = 2, k = 9, and N = 12, and defining a 9 × 12
2-disjunct matrix M as follows (the defective items will correspond to the first two columns):
$$\mathcal{M} = \begin{bmatrix}
0&0&0&0&0&0&1&1&1&1&0&0\\
0&0&0&1&1&1&0&0&0&1&0&0\\
1&1&1&0&0&0&0&0&0&1&0&0\\
0&0&1&0&0&1&0&0&1&0&1&0\\
0&1&0&0&1&0&0&1&0&0&1&0\\
1&0&0&1&0&0&1&0&0&0&1&0\\
0&1&0&1&0&0&0&0&1&0&0&1\\
0&0&1&0&1&0&1&0&0&0&0&1\\
1&0&0&0&0&1&0&1&0&0&0&1
\end{bmatrix},\quad
\mathbf{y} = \begin{bmatrix}0\\0\\1\\0\\0\\0\\0\\0\\0\end{bmatrix},\quad
\overline{\mathbf{y}} = \begin{bmatrix}1\\1\\0\\1\\0\\0\\0\\1\\0\end{bmatrix},\quad
\mathbf{y}' = \begin{bmatrix}0\\0\\1\\0\\1\\1\\1\\0\\1\end{bmatrix}. \qquad (22)$$
Assume that the defective items are 1 and 2, i.e., x = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]^T; then the observed vector
is $\mathbf{z} = [\mathbf{y}^T\ \overline{\mathbf{y}}^T]^T$. Using the three rules in the proof of Lemma 1, we obtain vector y′. We note that
$\mathbf{y}' = \mathcal{M}_1 \vee \mathcal{M}_2 = \mathcal{M} \odot \mathbf{x}$. Using a decoding algorithm (which is omitted in this example), we can
identify items 1 and 2 as the defective items from y′.
B. Encoding procedure
To implement the divide and conquer strategy, we need to divide the set of defective items into small
subsets such that the defective items in those subsets can be effectively identified. We define the integer
κ = ⌈|S|/u⌉ ≥ 1, and create an h × N matrix G containing κ rows, denoted i_1, i_2, . . . , i_κ, with probability
at least 1 − ǫ such that (i) |Si1 | = · · · = |Siκ | = u and (ii) Si1 ∪ . . . ∪ Siκ = S for any ǫ ≥ 0 where Si is
the set of indices of defective items in row Gi,∗ . For example, if N = 6, the defective items are 1, 2, and
3, and G1,∗ = (1, 0, 1, 0, 1, 1), then S1 = {1, 3}. These conditions guarantee that all defective items will
be included in the decoded set.
To achieve such a G, for any |S| ≤ d, a pruning matrix G ′ of size h×d after removing N −d columns Gx
for x ∈ [N]\S must be a (u, d−u)-completely separating matrix with high probability. From Definition 1,
G ′ is also a (u, |S| − u)-completely separating matrix. Then, the κ rows are chosen as follows. We choose
a collection of sets of defective items: P_l = {j_{(l−1)u+1}, . . . , j_{lu}} for l = 1, . . . , κ − 1. Let P′ be a set satisfying
$P' \subseteq \cup_{l=1}^{\kappa-1} P_l$ and $|P'| = \kappa u − |S|$. Then we pick the last set as $P_\kappa = \left(S \setminus \cup_{l=1}^{\kappa-1} P_l\right) \cup P'$. From
Definition 1, for any Pl , there exists a row, denoted il , such that gil x = 1 for x ∈ Pl and gil y = 0
for y ∈ S \ Pl , where l = 1, . . . , κ. Then, Sil = Pl and row il is singular to sets Sil and S \ Sil for
l = 1, . . . , κ. Condition (i) thus holds. Condition (ii) also holds because ∪κl=1 Sil = ∪κl=1 Pl = S. The
matrix G is specified in section IV.
After creating the matrix G, we generate the matrix A as in (20). Then the final measurement matrix T of
size (2k + 1)h × N is created as follows:
$$\mathcal{T} = \begin{bmatrix} \mathcal{G}_{1,*} \\ \mathcal{A} \times \mathrm{diag}(\mathcal{G}_{1,*}) \\ \vdots \\ \mathcal{G}_{h,*} \\ \mathcal{A} \times \mathrm{diag}(\mathcal{G}_{h,*}) \end{bmatrix}
= \begin{bmatrix} \mathcal{G}_{1,*} \\ \mathcal{M} \times \mathrm{diag}(\mathcal{G}_{1,*}) \\ \overline{\mathcal{M}} \times \mathrm{diag}(\mathcal{G}_{1,*}) \\ \vdots \\ \mathcal{G}_{h,*} \\ \mathcal{M} \times \mathrm{diag}(\mathcal{G}_{h,*}) \\ \overline{\mathcal{M}} \times \mathrm{diag}(\mathcal{G}_{h,*}) \end{bmatrix} \qquad (23)$$
The vector observed using u-NATGT after performing the tests given by the measurement matrix T is
$$\mathbf{y} = \mathcal{T} \otimes \mathbf{x}
= \begin{bmatrix} \mathcal{G}_{1,*} \otimes \mathbf{x} \\ \mathcal{A} \times \mathrm{diag}(\mathcal{G}_{1,*}) \otimes \mathbf{x} \\ \vdots \\ \mathcal{G}_{h,*} \otimes \mathbf{x} \\ \mathcal{A} \times \mathrm{diag}(\mathcal{G}_{h,*}) \otimes \mathbf{x} \end{bmatrix}
= \begin{bmatrix} \mathcal{G}_{1,*} \otimes \mathbf{x} \\ \mathcal{A} \otimes \mathbf{x}_1 \\ \vdots \\ \mathcal{G}_{h,*} \otimes \mathbf{x} \\ \mathcal{A} \otimes \mathbf{x}_h \end{bmatrix}
= \begin{bmatrix} y_1 \\ \mathcal{M} \otimes \mathbf{x}_1 \\ \overline{\mathcal{M}} \otimes \mathbf{x}_1 \\ \vdots \\ y_h \\ \mathcal{M} \otimes \mathbf{x}_h \\ \overline{\mathcal{M}} \otimes \mathbf{x}_h \end{bmatrix}
= \begin{bmatrix} y_1 \\ \mathbf{z}_1 \\ \vdots \\ y_h \\ \mathbf{z}_h \end{bmatrix} \qquad (24)$$
where $\mathbf{x}_i = \mathrm{diag}(\mathcal{G}_{i,*}) \times \mathbf{x}$, $y_i = \mathcal{G}_{i,*} \otimes \mathbf{x}$, $\mathbf{y}_i = \mathcal{M} \otimes \mathbf{x}_i = (y_{i1}, \ldots, y_{ik})^T$, $\overline{\mathbf{y}}_i = \overline{\mathcal{M}} \otimes \mathbf{x}_i = (\overline{y}_{i1}, \ldots, \overline{y}_{ik})^T$,
and $\mathbf{z}_i = [\mathbf{y}_i^T\ \overline{\mathbf{y}}_i^T]^T$ for i = 1, 2, . . . , h.
We note that xi is the vector representing the defective items corresponding to row Gi,∗ . If xi =
(xi1 , xi2 , . . . , xiN )T , Si = {l | xil = 1, l ∈ [N]}. We thus have |Si | = |xi | ≤ d. Moreover, yi = 1 if and
only if |xi | ≥ u.
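A minimal Python sketch of how the measurement matrix in (23) can be assembled is given below, under the assumption of toy sizes for G and M chosen by hand; it is not the construction used to obtain the bounds of Section IV, only an illustration of the stacking and of the layout of the outcome vector in (24).

```python
import numpy as np

def build_T(G, M):
    """Stack [G_i; M*diag(G_i); M_bar*diag(G_i)] for every row G_i, as in (23)."""
    M_bar = 1 - M
    blocks = []
    for i in range(G.shape[0]):
        D = np.diag(G[i])                      # zeroes out columns where G[i] = 0
        blocks += [G[i:i+1], M @ D, M_bar @ D]
    return np.vstack(blocks)

def encode(T, x, u):
    return (T @ x >= u).astype(int)

# Toy sizes only: h = 2, k = 3, N = 5.
G = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]])
M = np.array([[1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])
T = build_T(G, M)
print(T.shape)            # (h*(2k+1), N) = (14, 5)
x = np.array([1, 0, 0, 1, 0])
print(encode(T, x, u=2))  # outcome vector laid out as [y_1, z_1, y_2, z_2]
```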
C. The decoding procedure
The decoding procedure is summarized as Algorithm 1, where $\mathbf{y}'_i = (y'_{i1}, \ldots, y'_{ik})^T$ is presumed to be
$\mathcal{M} \odot \mathbf{x}_i$. The procedure is briefly explained as follows: Line 2 enumerates the h rows of G. Line 3 checks
if there are at least u defective items in row Gi,∗ . Lines 4 to 14 calculate yi′ , and Line 16 checks if all
items in Gi are truly defective and adds them into S.
Algorithm 1 Decoding procedure for u-NATGT
Input: Outcome vector y, M, M̄, T.
Output: The set of defective items S.
1: S = ∅.
2: for i = 1 to h do
3:   if y_i = 1 then
4:     for l = 1 to k do
5:       if y_{il} = 1 then
6:         y′_{il} = 1
7:       end if
8:       if y_{il} = 0 and ȳ_{il} = 1 then
9:         y′_{il} = 0
10:      end if
11:      if y_{il} = 0 and ȳ_{il} = 0 then
12:        y′_{il} = 1
13:      end if
14:    end for
15:    Decode y′_i using M to get the defective set G_i.
16:    if |G_i| = u and $\bigvee_{j \in G_i} \mathcal{M}_j \equiv \mathbf{y}'_i$ then
17:      S = S ∪ G_i.
18:    end if
19:  end if
20: end for
21: Return S.
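For reference, here is a compact Python transcription of Algorithm 1. It is a sketch under stated assumptions: the helper naive_disjunct_decode is a stand-in for the poly(k)-time decoder assumed in Theorem 1 (it simply tests column containment, which is correct for disjunct matrices but not sublinear), items are 0-indexed rather than 1-indexed, and the outcome y is assumed to be a 1-D numpy array laid out as in (24).

```python
import numpy as np

def naive_disjunct_decode(M, y_prime):
    """Stand-in decoder: with a disjunct matrix, report item j defective iff
    its column is contained in the outcome vector y_prime."""
    return {j for j in range(M.shape[1]) if np.all(M[:, j] <= y_prime)}

def decode(y, M, h, u):
    """Algorithm 1: y is laid out as [y_1, y^1, ybar^1, ..., y_h, y^h, ybar^h]."""
    k = M.shape[0]
    S = set()
    for i in range(h):
        off = i * (2 * k + 1)
        yi = y[off]
        yi_vec = np.asarray(y[off + 1: off + 1 + k])
        yi_bar = np.asarray(y[off + 1 + k: off + 1 + 2 * k])
        if yi != 1:
            continue
        # Rules 1-3 of Lemma 1.
        y_prime = np.where(yi_vec == 1, 1, np.where(yi_bar == 1, 0, 1))
        Gi = naive_disjunct_decode(M, y_prime)
        # Keep G_i only if it has exactly u items and reproduces y_prime (line 16).
        if len(Gi) == u and np.array_equal(np.max(M[:, sorted(Gi)], axis=1), y_prime):
            S |= Gi
    return S
```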
D. Correctness of the decoding procedure
Our objective is to recover x_i from y_i and z_i for i = 1, 2, . . . , h. Line 2 enumerates the h rows of G.
The bit y_i indicates whether there are at least u defective items in row G_{i,∗}. If y_i = 0, there are fewer
than u defective items in row G_{i,∗}. Since we only focus on rows G_{i,∗} that contain exactly u defective
items, z_i is not considered if y_i = 0. Line 3 does this task.
When yi = 1, it implies that there are at least u defective items in row Gi,∗ . If there are exactly u
defective items in this row, they are always identified as described in Lemma 1. Our task now is to
prevent falsely accusing non-defective items when decoding y′_i.
Lines 4 to 14 calculate y′_i from z_i. We do not know how many columns of M the vector y′_i is the union of,
i.e., how many defective items are in row G_{i,∗}. Therefore, our task is to decode y′_i using matrix M to get
the defective set G_i, and then to validate whether all items in G_i are defective.
There exist at least κ rows of G such that there are exactly u defective items in each row, and all
defective items in these rows together form the set of defective items we need to identify. Therefore, we
only consider the case when the number of defective items obtained from decoding y′_i equals u, i.e.,
|G_i| = u. Our task is now to prevent identifying false defective items, which is done in Line 16. There are two sets of
defective items corresponding to z_i: the first one is the true set S_i, which is unknown, and the second
one is G_i, which is expected to equal S_i (though this is not guaranteed) and satisfies |G_i| = u. If G_i ≡ S_i,
we can always identify the u defective items, and the condition in Line 16 always holds because of Lemma 1.
We need to consider the case G_i ≢ S_i, i.e., there are more than u defective items in row G_{i,∗}. We classify
this case into two categories:
1) |G_i \ S_i| = 0: in this case, all elements in G_i are defective items. We do not need to consider whether
$\bigvee_{j \in G_i} \mathcal{M}_j \equiv \mathbf{y}'_i$ holds. If this condition holds, we collect true defective items. If it does not
hold, we do not take G_i into the defective item set.
2) |G_i \ S_i| ≠ 0: in this case, we prove that $\bigvee_{j \in G_i} \mathcal{M}_j \equiv \mathbf{y}'_i$ does not hold, i.e., none of the elements
in G_i is added to the defective item set. Pick j_1 ∈ G_i \ S_i and j_2 ∈ G_i \ {j_1}. Since |S_i| ≤ d and M is
a (d + 1)-disjunct matrix, there exists a row, denoted τ, such that m_{τ j_1} = 1, m_{τ j_2} = 0, and m_{τ x} = 0
for all x ∈ S_i. On the other hand, because m_{τ x} = 0 for all x ∈ S_i, row τ of M contains no defective
item, so y_{iτ} = 0. Because $\sum_{x \in S_i} \overline{m}_{\tau x} = |S_i| \geq u$, we have ȳ_{iτ} = 1. That implies y′_{iτ} = 0. However,
$\bigvee_{x \in G_i} m_{\tau x} = \bigvee_{x \in G_i \setminus \{j_1\}} m_{\tau x} \vee m_{\tau j_1} = \bigvee_{x \in G_i \setminus \{j_1\}} m_{\tau x} \vee 1 = 1 \neq 0 = y'_{i\tau}$. Therefore, $\bigvee_{j \in G_i} \mathcal{M}_j \not\equiv \mathbf{y}'_i$.
Thus, line 16 eliminates all false defective items. Line 21 just returns the defective item set S.
E. The decoding complexity
Because T is constructed using G and M, the probability of successful decoding of y depends on these
choices. Given an input vector yi′ , we get the set of defective items from decoding of M. The probability
of successful decoding of y thus depends only on G. Since G has κ rows satisfying (i) and (ii) with
probability at least 1 − ǫ, all |S| defective items can be identified in h × poly(k) time using t = h(2k + 1)
tests with probability of at least 1 − ǫ for any ǫ ≥ 0. We summarize the divide and conquer strategy in
the following theorem:
Theorem 3. Let 2 ≤ u ≤ d < N be integers and S be the defective set. Suppose that an h × N matrix G
contains κ rows, denoted as i1 , . . . , iκ , such that (i) |Si1 | = · · · = |Siκ | = u and (ii) Si1 ∪ . . . ∪ Siκ = S,
where $S_{i_l}$ is the index set of defective items in row $\mathcal{G}_{i_l,*}$. Suppose further that a k × N matrix M is a
(d + 1)-disjunct matrix that can be decoded in time A. Then a (2k + 1)h × N measurement matrix T , as
defined in (23), can be used to identify at most d defective items in u-NATGT in time O(h × A).
The probability of successful decoding depends only on the event that G has κ rows satisfying (i) and
(ii). Specifically, if that event happens with probability at least 1−ǫ, the probability of successful decoding
is also at least 1 − ǫ for any ǫ ≥ 0.
IV. COMPLEXITY OF PROPOSED SCHEME
We specify the matrix G in Theorem 3 to get the desired number of tests and decoding complexity for
identifying at most d defective items. Specifying G leads to two approaches to decoding: deterministic
and randomized. Deterministic decoding is a deterministic scheme in which all defective items can be
found with probability 1. It is achievable when every h × d submatrix of G, constructed from d of its
columns, is a (u, d − u)-completely separating matrix. Randomized decoding reduces the number
of tests; all defective items can be found with probability at least 1 − ǫ for any ǫ > 0. It is achievable
when each h × d submatrix, constructed from d columns of G, is a (u, d − u)-completely separating
matrix with probability at least 1 − ǫ.
A. Deterministic decoding
The following theorem states that there exists a deterministic algorithm for identifying all defective
items by choosing G of size h × N to be a (u, d − u)-completely separating matrix in Theorem 2.
Theorem 4. Let 2 ≤ u ≤ d ≤ N. There exists a t × N matrix such that at most d defective items in
u-NATGT can be identified in time $\frac{t}{O(d^2 \log N)} \times \mathrm{poly}(d^2 \log N)$, where
$$t = O\left(\frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u \cdot \left(d \log\frac{N}{d} + u \log\frac{d}{u}\right) \cdot d^2 \log N\right).$$
Proof: On the basis of Theorem 3, a t × N measurement matrix T is generated as follows:
1) Choose an h × N (u, d − u)-completely separating matrix G as in Theorem 2, where
$$h = \frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u \cdot \left(d \log\frac{eN}{d} + u \log\frac{ed}{u}\right) + 1.$$
2) Choose a k × N (d + 1)-disjunct matrix M as in Theorem 1, where k = O(d^2 log N) and the
decoding time of M is poly(k).
3) T is defined as in (23).
Since G is an h × N (u, d − u)-completely separating matrix, for any |S| ≤ d, the h × d pruning matrix
G′, which is created by removing the N − d columns G_x for x ∈ [N] \ S, is also a (u, d − u)-completely
separating matrix (with probability 1). From Definition 1, G′ is also a (u, |S| − u)-completely separating
matrix. Then, there exist κ rows satisfying (i) and (ii) as described in Section III-B. From Theorem 3,
the at most d defective items can be recovered using t = h × O(d^2 log N) tests with probability 1, in time
h × poly(k).
B. Randomized decoding
For randomized decoding, G is chosen such that the h × d pruning matrix G′ created by removing the
N − d columns G_x of G for x ∈ [N] \ S is a (u, d − u)-completely separating matrix with probability at
least 1 − ǫ for any ǫ > 0. This results in an improved number of tests and decoding time compared to
Theorem 4:
Theorem 5. Let 2 ≤ u ≤ d ≤ N. For any ǫ > 0, at most d defective items in u-NATGT can be identified
using
$$t = O\left(\frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u \log\frac{d}{u} + \log\frac{1}{\epsilon}\right) \cdot d^2 \log N\right)$$
tests with probability at least 1 − ǫ. The decoding time is $\frac{t}{O(d^2 \log N)} \times \mathrm{poly}(d^2 \log N)$.
Proof: On the basis of Theorem 3, a t × N measurement matrix T is generated as follows:
1) Choose an h × N matrix G as in Corollary 1, where $h = \frac{d^2}{d^2-u^2}\left(\frac{e(d-u)}{u}\right)^u\left(u \log\frac{ed}{u} + \log\frac{1}{\epsilon}\right)$.
2) Generate a k × N (d + 1)-disjunct matrix M using Theorem 1, where k = O(d^2 log N) and the
decoding time of M is poly(k).
3) Define T as in (23).
Let G be an h × N matrix as described in Corollary 1. Then for any |S| ≤ d, an h × d pruning matrix
G ′ , which is created by removing N − d columns Gx for x ∈ [N] \ S, is a (u, d − u)-completely separating
matrix with probability at least 1 − ǫ. From Definition 1, G ′ is also a (u, |S| − u)-completely separating
matrix. Then, there exist κ rows satisfying (i) and (ii) as described in Section III-B with probability at
least 1 − ǫ. From Theorem 3, the |S| defective items can be recovered using t = h × O(d^2 log N) tests with
probability at least 1 − ǫ in time h × poly(k).
V. CONCLUSION
We introduced an efficient scheme for identifying defective items in NATGT. However, the algorithm
works only for g = 0. Extending the results to g > 0 is left for future work. Moreover, it would be
interesting to consider noisy NATGT as well, in which erroneous tests are present in the test outcomes.
VI. ACKNOWLEDGEMENT
The first author thanks SOKENDAI for supporting him via The Short-Stay Abroad Program 2017.
REFERENCES
[1] R. Dorfman, “The detection of defective members of large populations,” The Annals of Mathematical Statistics, vol. 14, no. 4, pp. 436–
440, 1943.
[2] P. Damaschke, “Threshold group testing,” in General theory of information transfer and combinatorics, pp. 707–718, Springer, 2006.
[3] M. Farach, S. Kannan, E. Knill, and S. Muthukrishnan, “Group testing problems with sequences in experimental molecular biology,”
in Compression and Complexity of Sequences 1997. Proceedings, pp. 357–367, IEEE, 1997.
[4] J. Wolf, “Born again group testing: Multiaccess communications,” IEEE Transactions on Information Theory, vol. 31, no. 2, pp. 185–
191, 1985.
[5] G. Cormode and S. Muthukrishnan, “What’s hot and what’s not: tracking most frequent items dynamically,” ACM Transactions on
Database Systems (TODS), vol. 30, no. 1, pp. 249–278, 2005.
[6] D. Du and F. Hwang, Combinatorial group testing and its applications, vol. 12. World Scientific, 2000.
[7] H.-B. Chen and A. De Bonis, “An almost optimal algorithm for generalized threshold group testing with inhibitors,” Journal of
Computational Biology, vol. 18, no. 6, pp. 851–864, 2011.
[8] H. Chang, H.-B. Chen, H.-L. Fu, and C.-H. Shi, “Reconstruction of hidden graphs and threshold group testing,” Journal of combinatorial
optimization, vol. 22, no. 2, pp. 270–281, 2011.
[9] E. Porat and A. Rothschild, “Explicit non-adaptive combinatorial group testing schemes,” Automata, languages and programming,
pp. 748–759, 2008.
[10] H. Q. Ngo, E. Porat, and A. Rudra, “Efficiently decodable error-correcting list disjunct matrices and applications,” in International
Colloquium on Automata, Languages, and Programming, pp. 557–568, Springer, 2011.
[11] M. Cheraghchi, “Noise-resilient group testing: Limitations and constructions,” Discrete Applied Mathematics, vol. 161, no. 1, pp. 81–95,
2013.
[12] S. Cai, M. Jahangoshahi, M. Bakshi, and S. Jaggi, “Grotesque: noisy group testing (quick and efficient),” in Communication, Control,
and Computing (Allerton), 2013 51st Annual Allerton Conference on, pp. 1234–1241, IEEE, 2013.
[13] M. Cheraghchi, “Improved constructions for non-adaptive threshold group testing,” Algorithmica, vol. 67, no. 3, pp. 384–417, 2013.
[14] G. De Marco, T. Jurdziński, M. Różański, and G. Stachowiak, “Subquadratic non-adaptive threshold group testing,” in International
Symposium on Fundamentals of Computation Theory, pp. 177–189, Springer, 2017.
[15] H.-B. Chen and H.-L. Fu, “Nonadaptive algorithms for threshold group testing,” Discrete Applied Mathematics, vol. 157, no. 7,
pp. 1581–1585, 2009.
[16] C. L. Chan, S. Cai, M. Bakshi, S. Jaggi, and V. Saligrama, “Stochastic threshold group testing,” in Information Theory Workshop
(ITW), 2013 IEEE, pp. 1–5, IEEE, 2013.
[17] V. S. Lebedev, “Separating codes and a new combinatorial search model,” Problems of Information Transmission, vol. 46, no. 1, pp. 1–6,
2010.
Verifying Buchberger’s Algorithm
in Reduction Rings
arXiv:1604.08736v1 [cs.SC] 29 Apr 2016
Alexander Maletzky∗
Doctoral College “Computational Mathematics” and RISC
Johannes Kepler University Linz, Austria
[email protected]
Abstract
In this paper we present the formal, computer-supported verification of a functional
implementation of Buchberger’s critical-pair/completion algorithm for computing Gröbner
bases in reduction rings. We describe how the algorithm can be implemented and verified
within one single software system, which in our case is the Theorema system.
In contrast to existing formal correctness proofs of Buchberger’s algorithm in other systems, e. g. Coq and ACL2, our work is not confined to the classical setting of polynomial
rings over fields, but considers the much more general setting of reduction rings; this, naturally, makes the algorithm more complicated and the verification more difficult.
The correctness proof is essentially based on some non-trivial results from the theory
of reduction rings, which we formalized and formally proved as well. This formalization
already consists of more than 800 interactively proved lemmas and theorems, making the
elaboration an extensive example of higher-order theory exploration in Theorema.
Keywords: Buchberger’s algorithm, Gröbner bases, reduction rings, Theorema
1 Introduction
Buchberger’s algorithm was first introduced in [1] for computing Gröbner bases of ideals in
polynomial rings over fields. Later, this setting was generalized to so-called reduction rings
[3, 10], which are essentially unital commutative rings, not necessarily free of zero divisors and
not necessarily possessing any polynomial structure. The algorithm the present investigations are
concerned with is a variant of Buchberger’s original critical-pair/completion algorithm adapted
to the reduction-ring setting. It should not come as a surprise that the increased generality of
the underlying domain makes the algorithm slightly more complicated, compared to the case of
polynomials over fields. The main differences will be explained in Section 2.
The theory of Gröbner bases, and in particular Buchberger’s algorithm, has already undergone
formal treatment of various kinds. For instance, the algorithm was proved correct, e. g., in Coq
[11] and ACL2 [9]. A formal analysis of its complexity in some special case was carried out by
the author of this paper in [8], and last but not least, the algorithm could even be synthesized
automatically from its specification in [6]. However, all of this was done only in the classical
setting 1 , and not in the far more general setting of reduction rings. The computational aspect of
reduction rings, without formal proofs and verification of any kind, was considered in [4].
∗ This research was funded by the Austrian Science Fund (FWF): grant no. W1214-N15, project DK1.
1 In this paper, the phrase “classical setting” always refers to the case of polynomials over fields.
The software system used both for implementing the algorithm, in a functional-programming
style involving pattern-matching and recursion, as well as verifying it, is the Theorema system
[5]. Theorema is a mathematical assistant system for all phases of theory exploration: introducing new notions, designing/verifying/executing algorithms, and creating nicely structured
documents.
The rest of the document is structured as follows: Section 2 first defines the most important
notions and then presents the algorithm in question. Section 3 outlines the main ideas behind
the computer-supported formal verification of the algorithm by means of interactive theorem
proving in Theorema. Section 4 puts the work described here in a broader context, reporting
on the underlying formal treatment of all of reduction ring theory in Theorema; readers only
interested in Buchberger’s algorithm may skip this section. Finally, Section 5 summarizes the
content of this paper and hints on possible extensions and future work.
2 The Algorithm
We now outline the algorithm under consideration. For this, let in the remainder of this paper
R be a reduction ring, i. e. a unital commutative ring, additionally endowed with a partial
Noetherian ordering (among other things, which go beyond the frame of this paper). Before
we can state the algorithm, we need to define the concepts of reduction and Gröbner basis:
Definition 1. Let C ⊆ R and a, b ∈ R. Then a reduces to b modulo C, written a →C b, iff
b = a − m c for some c ∈ C and m ∈ R, and in addition b ≺ a. →∗C and ↔∗C denote the
reflexive-transitive and the reflexive-symmetric-transitive closure of →C , respectively.
C is called a Gröbner basis iff →C is confluent.
Algorithm 1 is Buchberger’s critical-pair/completion algorithm in reduction rings, given in
a functional-programming style with pattern-matching and recursion, and following precisely
its implementation in Theorema this paper is concerned with. Sticking to Theorema notation,
function application is denoted by square brackets, tuples are denoted by angle brackets, the
length of a tuple T is denoted by |T |, and the i-th element of T by Ti . Variables suffixed with
three dots are so-called sequence variables which may be instantiated by sequences of terms of
any length (including 0).
As can be seen, function GB only initiates the recursion by calling GBAux with suitable
arguments. GBAux, in contrast, is the main function, defined recursively according to the three
equations (2) (base case), (3) and (4). Please note that termination of GBAux is by no means
obvious, since its second argument, which eventually has to become the empty tuple in order for
the function to terminate, may be enlarged by function update in the second case of (4).
Algorithm 1 Buchberger’s algorithm in reduction rings

GB[C] := GBAux[C, pairs[|C|], 1, 1, ⟨⟩]                                        (1)
GBAux[C, ⟨⟩, i, j, ⟨⟩] := C                                                    (2)
GBAux[C, ⟨⟨k, l⟩, r . . .⟩, i, j, ⟨⟩] := GBAux[C, ⟨r . . .⟩, k, l, cp[C_k, C_l]]      (3)
GBAux[C, P, i, j, ⟨⟨b, b̄⟩, t . . .⟩] :=
    let h = cpd[b, b̄, i, j, C] in
        GBAux[C, P, i, j, ⟨t . . .⟩]                            ⇐ h = 0
        GBAux[app[C, h], update[P, |C| + 1], i, j, ⟨t . . .⟩]    ⇐ h ≠ 0        (4)
The five arguments of function GBAux have the following meaning:
• The first argument, denoted by C, is the basis constructed so far, i. e. it serves as the
accumulator of the tail-recursive algorithm. As such, it is a tuple of elements of R that
is initialized by the original input-tuple in equation (1) and returned as the final result in
equation (2). Please note that the only place where it is modified is in the second case of
(4), where a new element h is added to it (function app).
• The second argument is a tuple of pairs of indices of the accumulator C. It contains precisely
those indices corresponding to pairs of elements of C that still have to be considered; hence,
it is initialized by all possible pairs of element-indices in (1), using pairs, and updated
whenever a new element is added to C in (4) using update.
• The third and fourth arguments, denoted by i and j, are the indices of the pair of elements
of C whose critical pairs are currently under investigation.
• The last argument is the tuple containing the critical pairs of Ci and Cj that have not
been considered so far. Once initialized by function cp in (3), it is simply traversed from
beginning to end without being enlarged by any new pairs.
The most important auxiliary function appearing in Algorithm 1 is function cpd in equation
(4). cpd[b, b̄, i, j, C] returns a ring element h, which is constructed by first finding g, ḡ with
b →*_C g, b̄ →*_C ḡ and both g and ḡ irreducible modulo C, and then setting h := g − ḡ. Note that
g and ḡ are not unique in general, but their concrete choice is irrelevant for the correctness of
GBAux.
Please note that “let” is a Theorema-built-in binder used for abbreviating terms in expressions. In equation (4), cpd[b, b̄, i, j, C] is abbreviated by h.
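To relate the Theorema equations (1)–(4) to a conventional functional program, the following rough Python transcription mirrors the recursion structure; it is a sketch, not the Theorema implementation. The functions pairs, cp, cpd and update are passed in as parameters because their definitions depend on the concrete reduction ring and are not spelled out here, and indices are 1-based as in the paper.

```python
def gb(C, pairs, cp, cpd, update):
    """Skeleton of GB/GBAux from equations (1)-(4); C is a list of ring elements."""
    return gb_aux(list(C), pairs(len(C)), 1, 1, [], cp, cpd, update)

def gb_aux(C, P, i, j, crit, cp, cpd, update):
    if not P and not crit:                     # equation (2): nothing left to check
        return C
    if not crit:                               # equation (3): fetch critical pairs of next index pair
        (k, l), rest = P[0], P[1:]
        return gb_aux(C, rest, k, l, cp(C[k - 1], C[l - 1]), cp, cpd, update)
    (b, b_bar), rest = crit[0], crit[1:]       # equation (4)
    h = cpd(b, b_bar, i, j, C)
    if h == 0:
        return gb_aux(C, P, i, j, rest, cp, cpd, update)
    return gb_aux(C + [h], update(P, len(C) + 1), i, j, rest, cp, cpd, update)
```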
Readers familiar with the classical setting might have noticed the following three differences
between GBAux and the classical Buchberger algorithm: Firstly, not only pairs of distinct
elements Ci , Cj have to be considered, but also pairs of identical constituents; this, in particular,
implies that in reduction rings one-element sets are not automatically Gröbner bases. Secondly,
one single pair Ci , Cj may give rise to more than one critical pair; this is why cp returns a
tuple of critical pairs. Thirdly, in function cpd it is not possible to reduce the difference b − b̄ to
normal form, even though this would be more efficient.
The algorithm as presented in this paper is not the most efficient one: as in the classical
setting there are some improvements that could be applied. For instance, the so-called chain
criterion [2] could be used to detect “useless” critical pairs, i. e. critical pairs for which h in
equality (4) will certainly be 0, without having to apply the (in general computation-intensive)
function cpd. Although the chain criterion was introduced only for the case of polynomials over
fields, it readily extends to reduction rings, too. Another possible improvement originating from
the classical setting consists of immediately auto-reducing each element of the current basis C
modulo h in the second case of equality (4); however, it is not yet clear whether employing this
improvement will always lead to correct results in the general case of arbitrary reduction rings –
this still requires further investigations.
3 The Verification
Function GB has to satisfy four requirements for being totally correct: For a given tuple C of
elements of R:
1. the function must terminate,
2. GB[C] has to be a tuple of elements of R,
3. the ideal (over R) generated by the elements of GB[C] has to be the same as the ideal
generated by the elements of C, and
4. GB[C] has to be a Gröbner basis.
The whole verification has been carried out in Theorema, using the proving capabilities of the
system. More precisely, each of the four proof obligations, as well as a range of auxiliary lemmata
(approx. 160; see Table 1), have been proved interactively in a GUI-dialog-based manner: for this,
one first needs to initiate a proof attempt by setting up the initial “proof situation”, composed
of the formula one wants to prove (“proof goal”) and the list of assumptions one wants to use
(“knowledge base”). The system then tries to perform some simple and obvious inference steps
(e. g. proving implications by assuming their premises and proving their conclusions), until no
more of these simple inferences are possible. Then, the user is asked to decide how to continue,
i. e. which of the more advanced (and maybe “unsafe”) inferences to apply, how to apply them in
case there are several possibilities (e. g. providing suitable terms when instantiating a universally
quantified assumption), and where to continue in the proof search in case the current alternative
does not look promising. This process is iterated until a proof is found or the proof search is
aborted. Summarizing, it is really the human user who conducts the proof, but under extensive
assistance by the system, which in particular ensures that all inference steps are really correct.
In the remainder of this paper, the term “Theorema-generated proof” always refers to precisely
this type of proof.
In the following subsections we address the four proof obligations in more detail.
3.1 Termination
As mentioned already in Section 2, termination is by no means obvious. In fact, if R were not
a reduction ring, termination could not even be proved, since one of the axioms characterizing
reduction rings is needed only for guaranteeing termination of Algorithm 1. The crucial point is
that the second case of (4) can occur only finitely often: this is guaranteed by requiring that in
reduction rings there are no infinite sequences of sets D1 , D2 , . . . with red[Di ] ⊂ red[Di+1 ] for all
i ≥ 1, where red[D] denotes the set of reducible elements modulo the set D. In the second case
of (4) it is easy to see that red[C] ⊂ red[app[C, h]], meaning that this may happen only finitely
many times.
Eventually, termination is proved by finding a Noetherian ordering on the set of all possible
argument-quintuples that is shown to decrease in each recursive call of the function. In fact, this
ordering is a lexicographic combination of the following two orderings:
1. “E1 ” is defined for subsets of R as A ⊳1 B :⇔ red[B] ⊂ red[A]. This ordering is Noetherian
because of the non-existence of certain infinite sequences in reduction rings, as sketched
above.
2. “E2 ” is defined for arbitrary tuples as S ⊳2 T :⇔ |S| < |T |. Since the length of a tuple is
a natural number, this ordering is clearly Noetherian.
For comparing two argument-quintuples (C1 , P1 , i1 , j1 , T1 ) and (C2 , P2 , i2 , j2 , T2 ), first C1 and
C2 are compared w. r. t. E1 ; if they are equal, P1 and P2 are compared w. r. t. E2 ; if they are
equal as well, T1 and T2 are also compared w. r. t. E2 (the indices i and j do not play any role
for termination and hence are ignored in the comparison). As one can easily see, the arguments
of every recursive call of GBAux always decrease w. r. t. this lexicographic ordering, which
furthermore is Noetherian because its constituents are.
Please note that the formal, Theorema-generated proofs of the remaining three obligations
proceed by Noetherian (or “well-founded”) induction on the set of argument-quintuples, based
on the Noetherian ordering.
3.2 Type and Ideal
The fact that GB[C] is a tuple of elements of R is obvious, since the accumulator C is only
modified by adding one new ring-element in the recursive call in the second case of (4). Apart
from that, it always remains unchanged. Furthermore, even the third requirement can be seen
to be fulfilled rather easily: The element h added to C in the second case of (4) is clearly an
R-linear combination of elements of C, and hence is contained in the ideal generated by C. This
further implies that the ideal does not change when adding h to C.
3.3 Gröbner Basis
The most important requirement is the fourth one. It describes the essential property the output
should have, namely being a Gröbner basis – that is why the function is called GB. Gröbner bases
play a very important role in computational ideal theory, since many non-trivial ideal-theoretic
questions can be answered easily as soon as Gröbner bases for the ideals in question are known.
Most importantly, ideal membership and ideal equality can simply be decided by reducing some
elements to their unique normal forms modulo the given Gröbner bases.
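As a small illustration of this use of Gröbner bases (independent of the Theorema formalization), the following Python sketch decides ideal membership by computing normal forms; reduce_step and zero are hypothetical parameters standing for one reduction step modulo the basis and the zero element of the ring.

```python
def normal_form(a, basis, reduce_step):
    """Reduce a modulo the basis until no further reduction step applies.

    reduce_step(a, basis) is assumed to return a strictly smaller element b with a -> b,
    or None if a is irreducible modulo the basis; termination relies on the ordering
    being Noetherian."""
    while True:
        b = reduce_step(a, basis)
        if b is None:
            return a
        a = b

def in_ideal(a, groebner_basis, reduce_step, zero):
    """With a Groebner basis, a lies in the generated ideal iff its normal form is zero."""
    return normal_form(a, groebner_basis, reduce_step) == zero
```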
For proving the fourth requirement we need one of the main results of reduction ring theory,
containing a finite criterion for checking whether reduction modulo a given set, or tuple, is
confluent. This result was proved formally in Theorema.
Theorem 1 (Main Theorem of Reduction Ring Theory). Let C ⊆ R. Then reduction modulo
C is confluent iff for all c, c ∈ C (not necessarily distinct) and all minimal non-trivial common
reducibles a of c and c there exists a critical pair hb, bi, with a →{c} b and a →{c} b, that can be
connected below a modulo C.
The statement of Theorem 1 is somewhat vague. For the precise definition of minimal nontrivial common reducible we have to refer the interested reader to [3, 10] or to our recent technical
report [7]. Only note that in the classical setting the minimal non-trivial common reducible of
two polynomials p and q is precisely the least common multiple of the leading monomials of p and
q. Finally, two elements b, b ∈ R are connectible below another element a modulo C iff b ↔∗C b
and every intermediate element in the chain of reductions is strictly smaller than a w. r. t. .
Now we can outline how function GB can be shown to satisfy the fourth requirement: In (4),
if the element h constructed by cpd is 0 then the critical pair hb, bi can be connected (modulo
C) below the minimal non-trivial common reducible a of Ci and Cj it corresponds to; otherwise,
b and b can certainly be connected below a modulo the enlarged tuple app[C, h] (second case).
Hence, in either case the critical situation corresponding to hb, bi is resolved, and the algorithm
proceeds with the next critical pair of Ci and Cj , unless all of them have already been dealt with;
in that case, the next pair of elements of C is considered (equation (3)). Termination guarantees
that at some point all pairs of elements of C have been dealt with (even those added in (4)), such
that in the end the criterion of Theorem 1 is fulfilled and the output returned by the algorithm
is indeed a Gröbner basis.
More formally, the crucial property of GBAux is the following:
Theorem 2. For all tuples C of elements in R, all index-pair tuples P , all indices i and j, and
all critical-pair tuples M : The result G of GBAux[C, P, i, j, M ] is again a tuple of elements of R
such that all critical pairs of all C_k, C_l, for ⟨k, l⟩ ∈ P, can be connected below their corresponding
minimal non-trivial common reducibles modulo G, and the same is true also for the critical pairs
in M .
As mentioned at the end of Section 3.1, the interactively generated Theorema-proof of Theorem 2 proceeds by Noetherian induction on the set of all input-quintuples, distinguishing four
cases based on the shape of the input arguments, according to the left-hand-sides of the three
equalities (2), (3) and (4) (where the case corresponding to (4) is split into two subcases depending on whether h = 0 or not).
The total effort for first formalizing and then verifying Algorithm 1, already knowing Theorem 1, was approximately 70 working hours. As can be seen in Table 1 of the next section, the
number of formulas that had to be proved for that purpose is 165.
4 The Formal Treatment of Reduction Ring Theory
What has been presented in the previous sections of this paper actually only constitutes a
small fragment of a much larger endeavor: The formalization and formal verification of the
theory of reduction rings in Theorema. This project was started two years ago with the aim of
representing all aspects of the theory, both theoretic and algorithmic, in a unified and – most
importantly – certified way in a computer system. At the moment the whole formalization
consists of eight individual components, each being a separate Theorema notebook containing
definitions, theorems and algorithms of a particular part of reduction ring theory. Figure 1, which
is taken from [7], shows the entire theory graph with all components and their dependencies on
each other. The algorithm this paper is concerned with, as well as its correctness proof as
described in the previous section, is contained in theory GroebnerRings, whereas Theorem 1
together with its proof are contained in theory ReductionRings. For more information on the
formalization the interested reader is referred to [7]; however, note that there the correctness proof of Buchberger’s algorithm is still labelled as “future work”, because the proof has been
completed only after writing the report.
As can be seen from the dashed arrows in Figure 1, the formal verification of some parts
of the theory still awaits its completion: the proofs that certain basic domains, namely fields,
the integers, integer quotient rings, and polynomials represented as tuples of monomials, are
reduction rings have not been carried out yet. This is not because these proofs turned out
to be extraordinarily difficult, but rather the opposite: we do not expect any major difficulties
there and instead focused on the far more involved proofs (in ReductionRings, Polynomials and
GroebnerRings) first, just to be sure that everything works out as it is supposed to. After all, the
correctness of Buchberger’s algorithm is absolutely independent of theories Fields, Integers,
etc.
Table 1 lists the sizes of the individual components of the formalization in terms of the
numbers of formulas, the numbers of proofs, and the average and maximum proof sizes. Summing
things up one arrives at almost 1700 formulas and more than 1100 interactively-generated proofs
in the formalization, making it an extensive piece of computerized mathematics.
5 Conclusion
On the preceding pages we described the implementation and formal verification of a non-trivial
algorithm of high relevance in computational ideal theory. Although the work is of interest
on its own, it also serves as a major case study in how program verification, including the
formal development of the underlying theories, can effectively be carried out in the Theorema
[Figure 1 shows the theory graph of the formalization, with nodes ElementaryTheories, ReductionRings, Polynomials, PolyTuples, Fields, Integers, IntegerQuotientRings, and GroebnerRings.]
Figure 1: The structure of the formalization. An arrow from A to B denotes dependency of B on A,
in the sense that formulas from A are used in B in proofs (gray) or computations (red). Dashed arrows
denote future dependencies.
Theory | Formulas | Proofs | Proof Size (avg./max.)
ElementaryTheories | 630 | 390 | 21.9 / 137
ReductionRings | 315 | 253 | 38.1 / 198
Polynomials | 397 | 341 | 45.8 / 322
GroebnerRings | 226 | 165 | 37.0 / 154
Fields | 17 | 0 | —
Integers | 20 | 0 | —
IntegerQuotientRings | 19 | 0 | —
PolyTuples | 66 | 0 | —
Total | 1690 | 1149 | 34.7 / 322

Table 1: Number of formulas and proofs in the formalization. The proof size refers to the number of
inference steps.
system. In addition, most of the elementary mathematical concepts formalized for the present
verification, like tuples, (lexicographic) orders and infinite sequences, can be reused for the
Theorema-verification of algorithms and programs in completely different areas in the future.
The work described in this paper also revealed a potential improvement of Theorema: Correctness proofs of functional programs are typically achieved following a fixed set of steps, consisting
of finding termination orders, proving specialized induction schemas, and using these schemas
to prove that certain properties hold for the function, provided they hold for each recursive call.
At present, these steps have to be carried out manually, but it is clearly possible to automate
the process at least in some way – just as in the well-known Isabelle system [12].
Acknowledgements I thank Bruno Buchberger and Wolfgang Windsteiger for many inspiring
discussions about Gröbner bases and Theorema, and I also thank the anonymous referees for their
valuable comments and suggestions.
This research was funded by the Austrian Science Fund (FWF): grant no. W1214-N15, project DK1.
References
[1] Bruno Buchberger. Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal. PhD thesis, Mathematical Institute,
University of Innsbruck, Austria, 1965. English translation in J. Symbolic Computation
41(3-4):475–511, 2006.
[2] Bruno Buchberger. A Criterion for Detecting Unnecessary Reductions in the Construction
of Gröbner Bases. In E. W. Ng, editor, Proceedings of the EUROSAM 79 Symposium on
Symbolic and Algebraic Manipulation, Marseille, June 26-28, 1979, volume 72 of Lecture
Notes in Computer Science, pages 3–21. Copyright: Springer, Berlin - Heidelberg - New
York, 1979.
[3] Bruno Buchberger. A Critical-Pair/Completion Algorithm for Finitely Generated Ideals in
Rings. In E. Boerger, G. Hasenjaeger, and D. Roedding, editors, Logic and Machines: Decision Problems and Complexity (Proceedings of the Symposium "Rekursive Kombinatorik",
Münster, May 23-28, 1983), volume 171 of Lecture Notes in Computer Science, pages 137–
161, 1984.
[4] Bruno Buchberger. Gröbner Rings in Theorema: A Case Study in Functors and Categories.
Technical Report 2003-49, Johannes Kepler University Linz, SFB F013, November 2003.
[5] Bruno Buchberger, Wolfgang Windsteiger, et al.
Theorema – A System for
Mathematical Theory Exploration.
RISC, Johannes Kepler University Linz.
http://www.risc.jku.at/research/theorema/software/.
[6] Adrian Craciun. Lazy Thinking Algorithm Synthesis in Gröbner Bases Theory. PhD thesis,
JKU Linz, 2008.
[7] Alexander Maletzky. Exploring Reduction Ring Theory in Theorema. Technical Report
15-11, Research Institute for Symbolic Computation (RISC), Johannes Kepler University
Linz, Schloss Hagenberg, 4232 Hagenberg, Austria, 2015.
[8] Alexander Maletzky and Bruno Buchberger. Complexity Analysis of the Bivariate Buchberger Algorithm in Theorema. In H. Hong and C. Yap, editors, Mathematical Software –
ICMS 2014 (Proceedings of ICMS’2014, August 5-9, Seoul, Korea), volume 8592 of Lecture
Notes in Computer Science, pages 41–48, 2014.
[9] Inmaculada Medina-Bulo, Francisco Palomo-Lozano, and Jose-Luis Ruiz-Reina. A verified
Common Lisp implementation of Buchberger’s algorithm in ACL2. Journal of Symbolic
Computation, 45(1):96–123, January 2010.
[10] Sabine Stifter. A Generalization of Reduction Rings. Journal of Symbolic Computation,
4(3):351–364, 1988.
[11] Laurent Thery. A Machine-Checked Implementation of Buchberger’s Algorithm. Journal of
Automated Reasoning, 26:107–137, 2001.
[12] Makarius Wenzel et al. The Isabelle/Isar Reference Manual, May 2015. https://isabelle.in.tum.de/.
Associative Array Model of
SQL, NoSQL, and NewSQL Databases
Jeremy Kepner1,2,3, Vijay Gadepally1,2, Dylan Hutchison4, Hayden Jananthan3,5,
Timothy Mattson6, Siddharth Samsi1, Albert Reuther1
1
2
MIT Lincoln Laboratory, MIT Computer Science & AI Laboratory, 3MIT Mathematics Department, 4University of Washington
Computer Science Department, 5Vanderbilt University Mathematics Department, 6Intel Corporation
Abstract—The success of SQL, NoSQL, and NewSQL databases
is a reflection of their ability to provide significant functionality
and performance benefits for specific domains, such as financial
transactions, internet search, and data analysis. The BigDAWG
polystore seeks to provide a mechanism to allow applications to
transparently achieve the benefits of diverse databases while
insulating applications from the details of these databases.
Associative arrays provide a common approach to the
mathematics found in different databases: sets (SQL), graphs
(NoSQL), and matrices (NewSQL). This work presents the SQL
relational model in terms of associative arrays and identifies the
key mathematical properties that are preserved within SQL.
These
properties
include
associativity,
commutativity,
distributivity, identities, annihilators, and inverses. Performance
measurements on distributivity and associativity show the impact
these properties can have on associative array operations. These
results demonstrate that associative arrays could provide a
mathematical model for polystores to optimize the exchange of
data and the execution of queries.
[Figure 1 is a timeline of the SQL, NoSQL, NewSQL, and polystore eras, with milestones for the relational model [Codd 1970], Google BigTable [Chang 2006], NewSQL [Cattell 2010], and the BigDAWG polystore [Elmore 2015].]
Figure 1. Evolution of SQL, NoSQL, NewSQL, and polystore
databases. Each class of database delivered new mathematics,
functionality, and performance focused on new application areas.
SQL, NoSQL, and NewSQL databases are designed for
specific applications, have distinct data models, and rely on
different underlying mathematics (see Figure 2). Because of
their differences, each database has unique strengths that are
well suited for particular workloads. It is now recognized that
special-purpose databases can be 100x faster for a particular
application than a general-purpose database [Kepner 2014]. In
addition, the availability of high performance data analysis
platforms, such as the MIT SuperCloud [Reuther 2013, Prout
2015], allows high performance databases to share the same
hardware platform without sacrificing performance.
Keywords-Associative Array Algebra; SQL; NoSQL; NewSQL;
Set Theory; Graph Theory; Matrices; Linear Algebra
I. INTRODUCTION
Relational or SQL (Structured Query Language) databases
[Codd 1970, Stonebraker 1976] such as PostgreSQL, MySQL,
and Oracle have been the de facto interface to databases since
the 1980s (see Figure 1) and are the bedrock of electronic
transactions around the world. More recently, key-value stores
(NoSQL databases) such as Google BigTable [Chang 2008],
Apache Accumulo [Wall 2015], and MongoDB [Chodorow
2013] have been developed for representing large sparse tables
to aid in the analysis of data for Internet search. As a result, the
majority of the data on the Internet is now analyzed using key-value stores [DeCandia et al 2007, Lakshman & Malik 2010,
George 2011]. In response to similar performance challenges,
the relational database community has developed a new class
of databases (NewSQL) such as C-Store [Stonebraker 2005],
H-Store [Kallman 2008], SciDB [Balazinska 2009], VoltDB
[Stonebraker 2013], and Graphulo [Hutchison 2015] to support
new analytics capabilities within a database. The SQL,
NoSQL, and NewSQL concepts have also been blended in
hybrid processing systems, such as Apache Pig [Olston 2008],
Apache Spark [Zaharia 2010], and HaLoop [Bu 2010]. An
effective mathematical model that encompasses the concepts of SQL, NoSQL, and NewSQL would enable their interoperability. Such a mathematical model is the primary goal of this paper.
Figure 2. Focus areas of SQL (example: PostgreSQL; transactions; relational tables; set theory; consistency), NoSQL (Accumulo; search; key-value pairs; graph theory; volume, velocity, variety), NewSQL (SciDB; analysis; sparse matrices; linear algebra; analytics), and Polystore (future: BigDAWG; all applications; associative arrays; associative algebra; usability) databases.
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1312831. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
The recognition of “one size does not fit all” [Stonebraker
& Çetintemel 2005] has led to the need for polystore databases,
such as BigDAWG [Duggan 2015, Elmore 2015], that can
contextualize queries and cast data between multiple databases
so that a user can employ the best database for a particular task
(see Figure 3). To achieve this goal, polystore databases need
to bridge SQL, NoSQL, and NewSQL databases. The
Dynamic Distributed Dimensional Data Model (D4M)
technology [Kepner 2012] was developed to provide a linear
algebraic interface to graphs stored in NoSQL databases [Byun
2012, Kepner 2013].
Subsequently, D4M has been
successfully used with both SQL [Wu 2014, Gadepally 2015]
and NewSQL [Samsi 2016] databases. The effectiveness of
D4M to seamlessly interact with these diverse databases rests
on its associative array algebra [Kepner & Chaidez 2013,
Kepner & Chaidez 2014, Kepner & Jansen 2016] that provides
a mathematics that spans sets, graphs, and matrices. The
ability of D4M (and Myria [Halperin 2014]) to bridge multiple
databases has laid the foundation for the polystore database
concept.
II. THE RELATIONAL MODEL
The relational model, based on set theory, is a key
mathematical foundation for SQL databases. The relational
model effectively consists of relational algebra, relational
calculus, and the structured query language (SQL) that balance
the theoretical, implementation, and systems design aspects of
databases. The relational model is well covered in the
literature [Maier 1983, Codd 1990, Abiteboul 1995]; only the
most relevant aspects of the model are reviewed here. Some of
the more significant mathematical contributions of the
relational model to databases include
(R1) Relations: a mathematical definition of database tables
sufficient for their representation without constraining
their implementation;
(R2) Query semantics: a mathematical definition of operations
on relations sufficient for proving the correctness of
database queries;
(R3) Proof of the equivalence of declarative and procedural
syntaxes over the above definitions that has enabled the
use of declarative semantics for database users and
procedural semantics for database builders [Codd 1972].
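As a toy illustration of (R1)-(R3), the following Python sketch (not part of the relational model itself; the attribute names and data are invented) represents a relation as a set of tuples and composes select and project procedurally to realise a query that could equally be stated declaratively in SQL:

rows = [
    {"out_vertex": "alice", "link": "cited", "in_vertex": "bob"},
    {"out_vertex": "bob",   "link": "cited", "in_vertex": "alice"},
]
relation = {tuple(sorted(r.items())) for r in rows}     # (R1): a relation as a set of tuples

def select(R, pred):                                    # (R2): operations defined on relations
    return {t for t in R if pred(dict(t))}

def project(R, attrs):
    return {tuple((a, dict(t)[a]) for a in attrs) for t in R}

# Procedural form of: SELECT out_vertex, in_vertex FROM relation WHERE link = 'cited'
result = project(select(relation, lambda r: r["link"] == "cited"),
                 ["out_vertex", "in_vertex"])
print(result)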
Of these results, (R3) has been enormously important, but
would not be possible without (R1) and (R2). (R3) has been
critical to the success of SQL databases that follow the
relational model. (R3) has enabled the successful coexistence
of separate interfaces and languages for users and
implementers, with the confidence that neither would create a
fundamental mathematical contradiction for the other.
The relational model is based on balancing mathematical
rigor with implementation practicality.
Too much
mathematical rigor burdens a database implementation with
unnecessary mathematics. Too little mathematical rigor makes
it difficult to know if a database implementation will work.
As with all good compromises, there have been advocates for
improvement on both sides. As cited earlier, many new
databases under the names of NoSQL and NewSQL differ from
the relational model to meet new performance and analysis
demands. Likewise, there is extensive mathematical work on
modifications to the relational model to increase its
mathematical rigor [Imieliński 1984a, Imieliński 1984b,
Kanellakis 1989, Tsalenko 1992, Plotkin 1998, Priss 2006, van
Emden 2006, Litak 2014, Hutchison 2016]. One motivation
for increasing the mathematical rigor [Kelly 2012] is to align
relations with well-established Zermelo-Fraenkel Choice (ZFC)
set theory [Zermelo 1908, Fraenkel 1922] that is the foundation
for a number of branches of mathematics.
The emerging diversity of databases has initiated a dialogue
regarding the traditional relational model and the newer graph
and matrix models. This dialogue is akin to the earlier
declarative and procedural conversation that culminated in the
Figure 3. BigDAWG polystore database architecture. Analytic translators contextualize queries to specific databases. Data translators cast data between databases.
Mathematics is one of the most important differences
among SQL, NoSQL, and NewSQL databases (see Figure 4).
The relational algebra found in SQL databases is based upon
selection, union, and intersection of special sets called
relations.
NoSQL is designed for analyzing sparse
relationships among data and relies on graph theory and graph
analysis. NewSQL databases use matrices and linear algebra to
look for patterns in numeric data.
Figure 4. Mathematics of breadth-first search for SQL (set operations, e.g. SELECT ∗ WHERE out vertex=alice on a table of out vertex, link, in vertex triples), NoSQL (graph operations), and NewSQL (matrix operations, ATv) databases.
The approach to developing an associative array model of
the above databases is as follows. First, the relevant aspects of
relations are summarized. Second, the sparse matrix operations
that encompass graph algorithms and matrix mathematics are
given. Third, the associative array model that describes
NoSQL and NewSQL databases is described. Fourth, relations and their corresponding operations are defined in terms of associative arrays. Fifth, the mathematical properties required by graph algorithms and matrix mathematics are confirmed for relational operations. Finally, performance results illustrating the impact of these properties are presented and discussed.
relational model. This work seeks similar progress by
demonstrating that an associative array model can provide
Given m×n matrices A, B, and C, element-wise matrix addition (and its graph equivalent, weighted graph union) is denoted
C = A ⊕ B
or more specifically
C(i,j) = A(i,j) ⊕ B(i,j)
where i ∈ {1,...,m} and j ∈ {1,...,n}. Element-wise matrix multiplication (and its graph equivalent, weighted graph intersection) is denoted
C = A ⊗ B
or more specifically
C(i,j) = A(i,j) ⊗ B(i,j)
For an m×l matrix A, l×n matrix B, and m×n matrix C, matrix multiplication (and its graph equivalent, multisource weighted breadth-first search) combines addition and multiplication and is written
C = A ⊕.⊗ B = A B
or more specifically
C(i,j) = ⊕k A(i,k) ⊗ B(k,j)
where k ∈ {1,...,l}.
(A1) Associative arrays: a mathematical definition of database
tables for SQL, NoSQL, and NewSQL databases that
accurately describes their implementation;
(A2) Associative array algebra: a mathematical definition of
database queries and computations that accurately
describes the operations performed by SQL, NoSQL, and
NewSQL databases;
(A3) Equivalence of relational and array syntaxes over the
above definitions that enables the use of either in a SQL,
NoSQL, or NewSQL database.
Of these results, (A3) has the most potential to impact
polystore databases. Likewise, (A3) would not be possible
without (A1) and (A2).
The mathematical challenge of creating an associative array
model encompassing SQL, NoSQL, and NewSQL is
reconciling their mathematical differences. SQL databases
focus on set operations (subsets, unions, intersections), and the
relational model is based on an elegant approach to set theory
that provides only those attributes of formal set theory that are
required for SQL databases. NoSQL and NewSQL databases
focus on high performance data analysis (graph algorithms and
matrix mathematics) that require mathematical properties such
as associativity, commutativity, distributivity, identities,
annihilators, and inverses. Reproducing the balance that led to
the success of the relational model in another model is difficult.
Detailed analysis of this balance leads down the same well-traveled path of those who have advocated for either more or
less mathematical rigor in the relational model. Instead, just as
Alexander solved the problem of the Gordian Knot, this paper
asserts the desired outcome (relations are associative arrays)
and the implications of this assertion are then addressed.
III. GRAPHS AND MATRICES
Finally, the matrix transpose (and its graph equivalent, graph edge reversal) is denoted
A(j,i) = Aᵀ(i,j)
The above operations have been found to enable a wide
range of graph algorithms and matrix mathematics while also
preserving the required vector-space properties [Heaviside
1887, Peano 1888] such as commutativity
A ⊕ B = B ⊕ A
A ⊗ B = B ⊗ A
(A B)ᵀ = Bᵀ Aᵀ
associativity
(A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)
(A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
(A B) C = A (B C)
distributivity
A ⊗ (B ⊕ C) = (A ⊗ B) ⊕ (A ⊗ C)
A (B ⊕ C) = (A B) ⊕ (A C)
and the additive and multiplicative identities
A ⊕ 0 = A
A ⊗ 1 = A
A I = A
where 0 is a matrix of all 0, 1 is a matrix of all 1, and I is a matrix with 1 along its diagonal. Furthermore, these matrices possess a multiplicative annihilator
A ⊗ 0 = 0
A 0 = 0
Their corresponding inverses may also exist
A ⊕ -A = 0
A(i,j) ⊗ A(i,j)⁻¹ = 1
A A⁻¹ = I
when (𝕍,⊕,0) and (𝕍,⊗,1) are groups (i.e., have inverses) [Galois 1832].
The duality between graph algorithms and matrix
mathematics (or sparse linear algebra) has been extensively
covered in the literature and is summarized in the cited text
[Kepner & Gilbert 2011]. This text has further spawned the
development of the GraphBLAS math library standard
(GraphBLAS.org)[Mattson 2013] that is described in the series
of proceedings [Mattson 2014a, Mattson 2014b, Mattson 2015,
Buluc 2015, Buluc 2016]. The essence of the graph algorithms
and matrix mathematics duality is three operations: element-wise addition, element-wise multiplication, and matrix
multiplication. In brief, an m×n matrix A is defined as a
mapping from pairs of integers to values
A: {1,...,m}×{1,...,n} → 𝕍
where 𝕍 is the set of values that form a semiring (𝕍,⊕,⊗,0,1) [Kepner & Jansen 2016] with addition operation ⊕, multiplication operation ⊗, additive identity/multiplicative annihilator 0, and multiplicative identity 1. The construction of a sparse matrix is denoted
A = (I,J,V)
where I, J, V are vectors of the rows, columns, and values of the nonzero elements of A.
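A minimal Python sketch (not the D4M or GraphBLAS interfaces) of these definitions, assuming a sparse matrix is stored as a dict from (row, column) pairs to its nonzero values and that op_add and op_mul stand in for ⊕ and ⊗:

from collections import defaultdict

def from_triples(I, J, V):
    """Construct A = (I, J, V): only nonzero elements are stored."""
    return {(i, j): v for i, j, v in zip(I, J, V) if v != 0}

def eadd(A, B, op_add=lambda x, y: x + y):
    """C(i,j) = A(i,j) (+) B(i,j); missing entries act as the additive identity 0."""
    C = dict(A)
    for k, v in B.items():
        C[k] = op_add(C[k], v) if k in C else v
    return C

def emult(A, B, op_mul=lambda x, y: x * y):
    """C(i,j) = A(i,j) (x) B(i,j); 0 annihilates, so only common keys survive."""
    return {k: op_mul(A[k], B[k]) for k in A.keys() & B.keys()}

def matmul(A, B, op_add=lambda x, y: x + y, op_mul=lambda x, y: x * y):
    """C(i,j) = (+)_k A(i,k) (x) B(k,j) over the chosen semiring."""
    C = {}
    Bk = defaultdict(list)
    for (k, j), v in B.items():
        Bk[k].append((j, v))
    for (i, k), a in A.items():
        for j, b in Bk.get(k, []):
            p = op_mul(a, b)
            C[(i, j)] = op_add(C[(i, j)], p) if (i, j) in C else p
    return C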
Most significantly, the properties of matrices are determined by the properties of the set of values 𝕍. In other words, the properties of 𝕍 determine the properties of the corresponding matrices.
The above properties are required for the
development and implementation of data analysis algorithms.
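Continuing the sketch above (and assuming its from_triples and matmul helpers), swapping the default arithmetic for min-plus operations turns the same array multiplication into a shortest-path step, illustrating how the choice of value set and operations determines what the matrices compute:

A = from_triples([1, 1, 2], [2, 3, 3], [4.0, 1.0, 2.0])    # edge weights of a tiny graph
hop2 = matmul(A, A, op_add=min, op_mul=lambda x, y: x + y) # exactly-two-hop shortest distances
print(hop2)                                                # {(1, 3): 6.0}: the 1->2->3 path, 4 + 2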
IV. ASSOCIATIVE ARRAYS
Associative arrays can be rigorously built up from ZFC set
theory, groups, and semirings, culminating with the
observation that linear algebra is a specialization of associative
array algebra. How associative arrays encompass graphs,
matrices, NoSQL, and NewSQL is described extensively in
[Kepner & Jansen 2016] and is only summarized here.
As described earlier, sparse matrices are a common
representation used for both graphs and linear algebra. The
standard definition of sparse matrices requires generalization to
encompass the tables found in SQL, NoSQL, and NewSQL
databases. The primary difference between a matrix and an
associative array is the specification of the row and column
indices. In a matrix, the row and column indices are drawn
from the sets of integers {1,...,m} and {1,...,n}. Associative
array row and column “keys” can be drawn from any strict,
totally ordered set (i.e., any uniquely sortable set). Associative
array row and column keys can be negative numbers, real
numbers, or character strings. The true dimensions of an
associative array are often very large (e.g., all possible finite
strings). Instead, the size of an associative array is more
commonly used and is defined as the number of nonzero rows,
m, and the number of nonzero columns, n. An equally
important quantity is the number of nonzeros in an associative
array, which is denoted by the function nnz(). The size and nnz
of an associative array can change through the course of a
calculation. There are no size constraints on associative array operations. Element-wise addition, element-wise multiplication, and array multiplication are valid for combinations of associative arrays of any size.
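A sketch of this generalisation, assuming an associative array is stored as a dict keyed by (row key, column key) pairs whose keys may be strings rather than integers; size counts the nonempty rows and columns and nnz the stored entries:

def size(A):
    rows = {r for r, _ in A}
    cols = {c for _, c in A}
    return len(rows), len(cols)

def nnz(A):
    return len(A)

A = {("082812ktnA1", "Artist"): "Kitten", ("082812ktnA1", "Genre"): "Pop"}
print(size(A), nnz(A))   # (1, 2) 2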
Associative arrays derive much of their power from their
ability to represent data intuitively in easily understandable
tables. Consider the list of songs and the various features of
those songs shown in Figure 5. The tabular arrangement of the
data shown in Figure 5 is an associative array (denoted A). This
arrangement is similar to those widely used in spreadsheets and
databases. Figure 5 illustrates two properties of associative
arrays that may differ from other two-dimensional
arrangements of data. First, each row key and each column key
in A is unique, to allow rows and columns to be queried
efficiently. Second, associative arrays do not store rows or
columns that are entirely empty, to allow insertion, selection,
and deletion of data to be performed by associative array
addition, multiplication, and products. These properties are
what makes A an associative array and allows A to be
manipulated as a spreadsheet, database, matrix, or graph.
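For instance, the array A of Figure 5 could be sketched in Python as a dict of rows (assuming the row-to-value pairing shown in the figure), after which single entries, rows, and columns are all direct lookups:

A = {
    "053013ktnA1": {"Artist": "Bandayde", "Date": "2013-05-30", "Duration": "5:14", "Genre": "Electronic"},
    "053013ktnA2": {"Artist": "Kastle",   "Date": "2013-05-30", "Duration": "3:07", "Genre": "Electronic"},
    "063012ktnA1": {"Artist": "Kitten",   "Date": "2010-06-30", "Duration": "4:38", "Genre": "Rock"},
    "082812ktnA1": {"Artist": "Kitten",   "Date": "2012-08-28", "Duration": "3:25", "Genre": "Pop"},
}
print(A["063012ktnA1"]["Artist"])                   # a single entry: 'Kitten'
print({r: row["Genre"] for r, row in A.items()})    # the Genre column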
V. RELATIONS AS ASSOCIATIVE ARRAYS
A first step in adapting the relational model to associative
arrays is to define a relation in terms of associative
arrays. This step is done by asserting a relation is an
associative array and considering the implications of the
assertion.
Operationally, asserting that relations are
associative arrays means that the row keys of an associative
array are arbitrary but distinct at the time of input and output of
a relational operation. Using this definition, some of the
implications can be illustrated by a series of common questions
about relations, specifically, whether or not relations are sets,
tuples, indices, ordered, multisets (bags), or sequences.
Are relations ZFC sets? Relations in the traditional
relational model require some, but not all, properties of ZFC set
semantics. The values of associative arrays are ZFC sets. The
keys of associative arrays are ZFC sets. Expressing relations as
associative arrays means that they adhere to ZFC set semantics.
Are relations tuples? A row of an associative array is
mathematically a row vector. Mathematically, tuples are also
vectors so relations are tuples.
Are relations indices? In the past, it has been efficient in
both space and time if a relation can be represented as a tuple
of integer indices that connect to values in a table. Today, this
implementation guidance is less important and it is
mathematically more flexible to treat relations as tuples of their
actual values, which is how they are defined in associative
arrays.
Are relations ordered sets? Mathematically, ordering of
rows or columns is not required for either relations or
associative arrays. However, as a practical matter, ordering is
required for real database tables, and there is no negative
mathematical consequence for requiring rows and columns to
be ordered sets. Thus, associative array rows and columns are
ordered sets.
Are relations multisets (bags)? Identical rows are a reality
in many databases, implying that relations are multi-sets. The
row key of an associative array distinguishes rows with
identical values.
Are relations sequences? A practical approach to
implementing multiple identical rows is to view relations as a
mathematical sequence instead of a set. In a sequence, each
row is paired with a number that sets the order of the rows;
hence, the term sequence ID in SQL databases. A sequence ID
is effectively equivalent to the row key in an associative array.
NoSQL databases embrace this view to the point of fully
exposing the unique sequence ID as a user-controlled
parameter.
Defining relations as associative arrays provides new
answers to the above questions. However, new questions arise
that also must be addressed. Primary amongst these are the
differences among 0, ∅, null, empty entries, and empty rows
and columns.
To provide the necessary mathematical
properties for matrix calculations, associative arrays follow the
conventions set by sparse matrices that define 0 as the non-stored element. More specifically, the value corresponding to
             Artist    Date        Duration  Genre
053013ktnA1  Bandayde  2013-05-30  5:14      Electronic
053013ktnA2  Kastle    2013-05-30  3:07      Electronic
063012ktnA1  Kitten    2010-06-30  4:38      Rock
082812ktnA1  Kitten    2012-08-28  3:25      Pop
Figure 5. Tabular arrangement of a list of songs and the various features of those songs into an associative array A. The array A is an associative array because each row label (or row key) and each column label (or column key) in A is unique. The size of the associative array is m = 4 and n = 4.
the ⊕ identity and the ⊗ annihilator is the non-stored element.
Because of its mathematical properties, the 0 element is unique
and there is no distinction between 0 and “no data” or null. As
a practical matter, when it is desired to distinguish between
these states, usually a workaround can be found. A unique 0 is
useful as it does not require that exceptions be defined for
every operation.
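A sketch of this convention, assuming a dict-backed associative array: reading an absent entry returns 0, and storing a 0 deletes the entry, so 0, "no data", and null coincide:

class Assoc(dict):
    def __missing__(self, key):
        return 0                     # absent entries read back as the additive identity
    def set(self, key, value):
        if value == 0:
            self.pop(key, None)      # storing 0 removes the entry instead of keeping it
        else:
            self[key] = value

A = Assoc()
A.set(("alice", "bob"), 1)
A.set(("alice", "bob"), 0)
print(A[("alice", "bob")], len(A))   # 0 0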
P(iA,iB) = &j (A(iA,j) = B(iB,j))
where iA ∈ IA and iB ∈ IB. Likewise, P can be computed as
P(iA,iB) = δ(A(iA,:),B(iB,:))
where δ(,) is the Kronecker delta function. If every nonzero
row in A has a nonzero row in P and if every row in B has a
nonzero column in P, then
A~B
Using the convention of restricting to the nonzero rows of A and B, P can also be computed simply as
P = A Bᵀ
where ⊕.⊗ is &.= or δ(,) is implied. Likewise, by the transpose identity
Pᵀ = B Aᵀ
The stronger version of equivalence can be obtained by imposing the further requirement that if P is stripped of its row and column keys, it forms a symmetric matrix where
Pᵀ = P
Using this definition of equivalence allows most relational
operations to be defined in terms of variations on the
construction of the permutation matrix P.
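A sketch of the equivalence test, assuming arrays are stored as dicts of rows with arbitrary row keys; P holds a 1 wherever a row of A equals a row of B:

def permutation_array(A, B):
    return {(ia, ib): 1 for ia, ra in A.items() for ib, rb in B.items() if ra == rb}

def equivalent(A, B):
    P = permutation_array(A, B)
    return (all(any((ia, ib) in P for ib in B) for ia in A) and
            all(any((ia, ib) in P for ia in A) for ib in B))

A = {"r1": {"x": 1}, "r2": {"x": 2}}
B = {"s9": {"x": 2}, "s7": {"x": 1}}
print(equivalent(A, B))   # True: same rows under different (arbitrary) row keys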
VI. QUERIES AS ASSOCIATIVE ARRAY ALGEBRA
Relational algebra and SQL have defined a wide range of
operations that are useful for executing queries on relations.
Some of these operations are union ∪, intersection ∩, set
difference \, Cartesian product ×, project Π, rename ρ, select σ,
natural join ⋈, equijoin ⋈k, theta join ⋈θ, left outer join ⟕,
right outer join ⟖, full outer join ⟗, antijoin ▷, extended
projection, and aggregation. In discussions of the relational
model, the list of operations most commonly discussed include
union ∪, intersection ∩, set difference \, project Π, rename ρ,
select σ, and theta join ⋈θ.
In practice, all computations are restricted to the nonzero
rows and nonzero columns of the associative array
representation of relations. Likewise, since the row keys in
an associative array representation of a relation are arbitrary, it
is assumed that wherever convenient the row keys of any
associative array can be made distinct. Thus, it is common for
there to be no operations that require the comparison of two
arbitrary values. In many computations, the only operations
that need to be specified are the identities
v ⊕ 0 = v
v ⊗ 1 = v
and the additive inverse and multiplicative annihilator
v ⊕ -v = 0
v ⊗ 0 = 0
where v ∈ 𝕍. Results that can be proven under the above
conditions will hold for a wide variety of ⊕ and ⊗ operations.
B. Project
The project operation picks sets of J columns from a
relation A and is typically written in relational algebra as
ΠJ(A)
The SQL equivalent is
SELECT J(1),...,J(n) FROM A
or simply
A.J(1),...,J(n)
In terms of associative array algebra, project can be
accomplished via many expressions, such as
A ⊕.⊗ 𝕀(J,J,1) or A 𝕀(J,J) or A 𝕀(J) or A(:,J)
given the shorthand notation for the identity array
𝕀(J,J,1) = 𝕀(J,J) = 𝕀(J)
and the Matlab notation A(:,J) for column selection.
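A sketch of project as plain column selection on a dict-of-rows representation (the A(:,J) form above):

def project(A, J):
    return {r: {c: row[c] for c in J if c in row} for r, row in A.items()}

A = {"r1": {"Artist": "Kitten", "Genre": "Pop", "Duration": "3:25"}}
print(project(A, ["Artist", "Genre"]))   # {'r1': {'Artist': 'Kitten', 'Genre': 'Pop'}}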
A. Equivalence
In dealing with any new data representation, the first step is
to define when two representations are equivalent [Howe
2005]. Relational equivalence for associative arrays is denoted
A~B
and implies every row in A has an identical row in B, and every
row in B has an identical row in A. This definition allows
multiple identical rows. A stronger version further requires
exactly the same number of identical rows in A and B.
Equivalence can be computed via the equivalency permutation
array P of the nonzero rows in A to the nonzero rows in B
where P(iA,iB) = 1 (and 0 otherwise) if row A(iA,:) is the same
as row B(iB,:). P can be computed by using a variety of
notational conventions
C. Rename
The rename operation picks columns J from a relation A
and assigns them new names J'. Rename is written in relational
algebra as
ρJ/J'(A)
The SQL equivalent is
SELECT J(1),...,J(n) AS J'(1),...,J'(n) FROM A
In associative array algebra, rename can be accomplished with
the many expressions, such as
A ⊕.⊗ 𝕀(J,J',1) or A 𝕀(J,J')
P = 𝕀A (A ⊕.⊗ Bᵀ) 𝕀B = 𝕀A (A &.= Bᵀ) 𝕀B
where ⊕ is &, ⊗ is =, and
IA is the set of nonzero rows in A
𝕀A = 𝕀(IA, IA, 1) is the identity array over IA
IB is the set of nonzero rows in B
𝕀B = 𝕀(IB, IB, 1) is the identity array over IB
D. Union
The union operation selects all the distinct rows in two relations A and B and is written in relational algebra as
A ∪ B
The SQL equivalent is
SELECT ∗ FROM A UNION SELECT ∗ FROM B
In associative array algebra, using the convention of distinct row keys for nonzero rows, union can be written as
A ⊕ B
where
P = 𝕀A (A(:,J) ⊕.⊗ B(:,J')ᵀ) 𝕀B = 𝕀A (A(:,J) θ B(:,J')ᵀ) 𝕀B
The function θ can be any function on two rows of an associative array that produces either a 0 or a 1 (i.e., θ ∈ {0,1}). If the operation is restricted to the nonzero rows of A and B, then the 𝕀A and 𝕀B terms can be dropped and the θ permutation array can be written as
P = A(:,J) θ B(:,J')ᵀ
E. Intersection
The intersection operation combines the common rows in
two relations A and B and is written in relational algebra as
A∩B
The SQL equivalent is
SELECT ∗ FROM A INTERSECT SELECT ∗ FROM B
In associative array algebra, using the equivalence permutation array, intersection can be computed with the following expressions
P B or Pᵀ A
I. Extended Projection
An extended projection applies a function ϕ on the subset
of columns J of a relation A and returns the output of that
function as a new relation with a column key j'. Extended
projection is written in relational algebra as
j'Πϕ(J)(A)
The SQL equivalent is
SELECT ϕ(A.J(1),...,J(n)) AS j' FROM A
In associative array algebra, extended projection can be written
as
A ⊕.⊗ (J,j')
where ⊕.⊗ = ϕ. The function ϕ can be any function on a row
of an associative array.
F. Set Difference
Set difference returns the rows in relation A that are not
found in relation B and is written in relational algebra as
A\B
The SQL equivalent is
SELECT ∗ FROM A EXCEPT SELECT ∗ FROM B
In associative array algebra, assuming the additive inverse v ⊕
-v = 0, set difference can be written using the equivalence
permutation array as
A ⊕ -PB
J. Aggregation
The aggregation operation applies an aggregate function ƒ
on all the values of column j' of relation A that share a common
value in column j. Aggregation is written in relational algebra
as
j'Gƒ(j)(A)
The SQL equivalent is
SELECT ƒj' FROM A GROUP BY j
In associative array algebra, aggregation can be written as
P ƒ.⊗ A(:,j')
where P is the permutation array defined by cross-correlating column j with itself
P = 𝕀A (A(:,j) ⊕.⊗ A(:,j)ᵀ) 𝕀A = 𝕀A (A(:,j) ⊕.= A(:,j)ᵀ) 𝕀A
The function ƒ can be any binary commutative function on a
column of an associative array.
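A sketch of aggregation on a dict-of-rows array; the Plays column is invented for the example, and f may be any binary commutative function such as addition:

from functools import reduce

def aggregate(A, j1, j2, f):
    groups = {}
    for row in A.values():
        groups.setdefault(row[j1], []).append(row[j2])
    return {k: reduce(f, vs) for k, vs in groups.items()}

A = {"r1": {"Genre": "Electronic", "Plays": 3},
     "r2": {"Genre": "Electronic", "Plays": 5},
     "r3": {"Genre": "Pop",        "Plays": 2}}
print(aggregate(A, "Genre", "Plays", lambda x, y: x + y))   # {'Electronic': 8, 'Pop': 2}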
G. Select
The select operation returns all rows in the relation A that
satisfy a function ϕ on the subset of columns J
σϕ(J)(A)
The SQL equivalent is
SELECT ∗ FROM A WHERE ϕ(A.J(1),...,J(n))
In associative array algebra, select can be written using the select permutation array
P A
where
P = (ϕ(A(:,J)) ϕ(A(:,J))ᵀ) ⊗ 𝕀A
or
P = 𝕀(ϕ(A(:,J)))
The function ϕ can be any function on the J columns of a row
of an associative array that produces either a 0 or a 1 (i.e., ⊕.θ
∈ {0,1}).
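A sketch of select on a dict-of-rows array, with ϕ supplied as a 0/1-valued function of the chosen columns:

def select(A, J, phi):
    return {r: row for r, row in A.items() if phi(*(row[c] for c in J)) == 1}

A = {"r1": {"Artist": "Kitten", "Genre": "Pop"},
     "r2": {"Artist": "Kastle", "Genre": "Electronic"}}
print(select(A, ["Genre"], lambda g: 1 if g == "Electronic" else 0))
# {'r2': {'Artist': 'Kastle', 'Genre': 'Electronic'}}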
VII. PROPERTIES AND PERFORMANCE
Having expressed the main relational operations in terms of
associative array algebra, the mathematical properties
necessary for graph and matrix computation can be checked.
The results of this analysis are summarized in Tables 1 and 2.
The derivation of all of these properties is beyond the scope of
this paper, but as an example, perhaps the most important
property, distributivity is derived in the context of relational
renaming over union and intersection. These operations are the
closest direct analogs to array multiplication and element-wise
addition.
Showing that renaming distributes over union is computed
as follows
H. Theta Join
A theta join returns the rows of two relations A and B
joined where they satisfy the function θ on the J columns of A
and J' columns of B and is written in relational algebra as
A ⋈θ(J,J') B
The SQL equivalent is
SELECT ∗ FROM A, B WHERE θ(A.J(1),...,J(n),B.J'(1),...,J'(n))
In associative array algebra, the theta join can be written using the θ permutation array as
P B ⊕ P Pᵀ A or Pᵀ A ⊕ Pᵀ P B
ρJ/J'(A ∪ B) ~ (A ⊕ B) 𝕀(J,J')
= A 𝕀(J,J') ⊕ B 𝕀(J,J')
~ ρJ/J'(A) ∪ ρJ/J'(B)
Showing that renaming distributes over intersection is computed as follows
ρJ/J'(A) ∩ ρJ/J'(B) ~ ((A 𝕀(J,J')) (B 𝕀(J,J'))ᵀ) B 𝕀(J,J')
= ((A 𝕀(J,J')) (𝕀(J,J')ᵀ Bᵀ)) B 𝕀(J,J')
= (A (𝕀(J,J') 𝕀(J',J)) Bᵀ) B 𝕀(J,J')
= (A 𝕀(J) Bᵀ) B 𝕀(J,J')
= (A Bᵀ) B 𝕀(J,J')
= (P B) 𝕀(J,J')
~ ρJ/J'(A ∩ B)
Because the above derivations use the corresponding properties
of associative arrays, the results can be more general than the
relational algebra would suggest. Specifically, distributivity
would still hold if rename modified the values in a manner
consistent with associative array multiplication. Likewise,
distributivity would still hold if union and intersection modified
the values in a manner consistent with element-wise addition.
One of the benefits of the properties in Table 1 is the ability
to eliminate operations if the appropriate identity can be
recognized.
Likewise, the properties in Table 2 allow
operations to be reordered to reduce execution time. These
properties are particularly useful in the polystore context when
selecting the optimal database to perform an operation. Figures
6 and 7 show the relative execution time impact of exploiting
associativity and distributivity as a function of the size of the
associative arrays. These experimental measurements were
conducted using the D4M (d4m.mit.edu) implementation of
associative arrays. A fixed 4096×4096 associative array A was
multiplied with square associative arrays B and C that varied in
size. All of the associative arrays were randomly generated
with an average of 8 nonzero entries per row or column, which
is consistent with many graph applications. The results show
the potential performance benefits of exploiting the distributive
and associative properties.
Thus, the kinds of query
optimizations that are found in many databases systems can be
applied to a broad set of computations. These optimizations
are important for polystores as they allow the movement of
computations and data to the appropriate databases.
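A rough way to reproduce the flavour of these measurements (a sketch using scipy.sparse with ordinary +, × rather than D4M, and with made-up sizes) is to time the two orderings directly; the measured ratio depends on the relative sizes of A, B, and C, as in Figures 6 and 7:

import time
import scipy.sparse as sp

def ratio(n_A=4096, n_BC=256):
    # random sparse arrays with roughly 8 nonzeros per row, as in the experiments above
    A = sp.random(n_A, n_BC, density=8 / n_BC, format="csr")
    B = sp.random(n_BC, n_BC, density=8 / n_BC, format="csr")
    C = sp.random(n_BC, n_BC, density=8 / n_BC, format="csr")
    t0 = time.perf_counter()
    _ = A @ (B + C)
    t1 = time.perf_counter()
    _ = A @ B + A @ C
    t2 = time.perf_counter()
    return (t1 - t0) / (t2 - t1)    # > 1 means distributing the product was cheaper here

print(ratio())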
Table 1. Identity, annihilator, and inverse properties of the relational operations (Project, Rename, Union, Intersection, Difference, Select, Theta Join, Extended Projection, Aggregation) in terms of associative arrays. Unary functions with parameters are treated as binary functions. The potential performance impact of the elimination of an operation via the recognition of one of these properties is typically O(nnz(A)).
Figure 6. Relative execution time of A (B ⊕ C) vs (A B) ⊕ (A C) as
a function of the size of B and C as compared to A. (A B) ⊕ (A C) is
much faster than A (B ⊕ C) when A is smaller than B and C.
Table 2. Commutativity, associativity, and distributivity properties of the relational operations (Projection, Rename, Union, Intersection, Difference, Select¹, Theta Join²) on associative arrays. Unary functions with parameters are treated as binary functions. The potential performance impact of elimination of an operation via the recognition of one of these properties is typically O(nnz(A)). ¹Assumes corresponding property in select function ϕ. ²Assumes corresponding property in join function θ.
[Chang 2008] F. Chang, J. Dean, S. Ghemawat, W. Hsieh, D. Wallach, M.
Burrows, T.Chandra, A. Fikes & R. Gruber, “Bigtable: A Distributed
Storage System for Structured Data,” ACM Transactions on Computer
Systems, Volume 26, Issue 2, June 2008.
[Chodorow 2013] K. Chodorow, “MongoDB: the definitive guide,” O'Reilly
Media, Inc.
[Codd 1970] E.F. Codd, “A Relational Model of Data for Large Shared Data
Banks,” Communications of the ACM, Vol. 13, No. 6, 377–387, June,
1970.
[Codd 1972] E.F. Codd, Relational completeness of data base sublanguages,
IBM Corporation, 1972.
[Codd 1990] E.F. Codd, The relational model for database management:
version 2, Addison-Wesley Longman Publishing Co., Inc., 1990.
[DeCandia et al 2007] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati,
A. Lakshman, Alex Pilchin, S. Sivasubramanian, P. Vosshall & W
Vogels, “Dynamo: Amazon’s Highly Available Key-value Store,”
Symposium on Operation Systems Principals (SOSP), 2007.
[Duggan 2015] J. Duggan, A. Elmore, M. Stonebraker, M. Balazinska, B.
Howe, J. Kepner, S. Madden, D. Maier, T. Mattson, & S. Zdonik, “The
BigDAWG Polystore System” ACM SIGMOD Record, 44(2), pp.11-16.
[Elmore 2015] Elmore, Kraska, Duggan, Madden, Stonebraker, Maier,
Balazinska, Mattson, Cetintemel, Papadopoulis, Gadepally, Parkhurst,
Heer, Tatbul, Howe, Vartek, Kepner & Zdonik, “A Demonstration of the
BigDAWG Multi-Database System,” VLDB 2015
[Fraenkel 1922] A. Fraenkel, “Zu den grundlagen der Cantor-Zermeloschen
mengenlehre,” Mathematische annalen 86.3 (1922): 230-237
[Gadepally 2015] D4M: Bringing Associative Arrays to Database Engines, V.
Gadepally, J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun, L.
Edwards, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, A. Rosa, C.
Yee, A. Reuther, IEEE High Performance Extreme Computing (HPEC)
conference, Sep 2015, Waltham, MA
[Galois 1832] E. Galois, Lettre a Auguste Chevalier, 29 mai 1832.
[George 2011] L. George, HBase: The Definitive Guide, O’Reilly, Sebastapol,
California, 2011.
[Halperin 2014] D. Halperin, V. Teixeira de Almeida, L. Choo, S. Chu, P.
Koutris, D. Moritz, J. Ortiz, V. Ruamviboonsuk, J. Wang, A. Whitaker,
S. Xu, M. Balazinska, B. Howe, & D. Suciu, "Demonstration of the
Myria big data management service," ACM SIGMOD international
conference on Management of data, 2014.
[Heaviside 1887] O. Heaviside, LXII, “On resistance and conductance
operators, and their derivatives, inductance and permittance, especially
in connexion with electric and magnetic energy,” The Lon-don,
Edinburgh, and Dublin Philosophical Magazine and Journal of Science
24.151 (1887): 479-502
[Howe 2005] B. Howe & D. Maier, "Algebraic manipulation of scientific
datasets," The VLDB journal 14, no. 4 (2005): 397-416.
[Hutchison 2015] D. Hutchison, J. Kepner, V. Gadepally, & A. Fuchs,
"Graphulo implementation of server-side sparse matrix multiply in the
Accumulo database," IEEE High Performance Extreme Computing
(HPEC) Conference, Waltham, MA, September 2015.
[Hutchison 2016] D. Hutchison, B. Howe & D. Suciu, “Lara: A Key-Value
Algebra
underlying
Arrays
and
Relations,”
https://arxiv.org/abs/1604.03607
[Imieliński 1984a] T. Imieliński & Witold Lipski, "The relational model of
data and cylindric algebras," Journal of Computer and System Sciences
28, no. 1 (1984): 80-102.
[Imieliński 1984b] T. Imieliński & Witold Lipski, "Incomplete information in
relational databases." Journal of the ACM (JACM) 31, no. 4 (1984): 761791.
[Kallman 2008] R. Kallman, H. Kimura, J. Natkins, A. Pavlo, A. Rasin, S.
Zdonik, E. Jones, S. Madden, M. Stonebraker, Y. Zhang, J. Hugg & D.
Abadi, “H-store: a high-performance, distributed main memory
transaction processing system,” Proceedings of the VLDB Endowment,
Volume 1 Issue 2, August 2008, pages 1496-1499.
[Kanellakis 1989] P. Kanellakis, “Elements of relational database theory,”
Brown University, Department of Computer Science, 1989.
Figure 7. Relative execution time of A (B C) vs (A B) C as a
function of the size of B and C as compared to A. (A B) C is much
faster than A (B C) when A is smaller than B and C.
VIII. SUMMARY AND FUTURE WORK
The success of SQL, NoSQL, and NewSQL databases is a
reflection of their ability to provide significant functionality
and performance benefits for specific domains: transactions,
internet search, and data analysis. The BigDAWG polystore
seeks to provide a mechanism to allow applications to
transparently achieve the benefits of diverse databases while
insulating applications from the details of these diverse
databases. Associative arrays provide a common approach to
the mathematics found in different databases: sets (SQL),
graphs (NoSQL), and matrices (NewSQL). This work presents
the SQL relational model in terms of associative arrays and
identifies the key mathematical properties of NoSQL and
NewSQL that are preserved within SQL. These properties
include associativity, commutativity, distributivity, identities,
annihilators, and inverses. Performance measurements on
distributivity and associativity show the impact these properties
can have on associative array operations. These results
demonstrate that associative arrays can provide a model for
polystores to leverage mathematical properties across databases
to optimize the exchange of data and queries.
Future work in this area will focus on a complete set of
proofs for the aforementioned relational operations, detailed
analysis of optimizations, and the potential application of
uncertainty quantification to database queries.
ACKNOWLEDGMENTS
The authors wish to acknowledge the following individuals
for their contributions: Michael Stonebraker, Sam Madden, Bill
Howe, David Maier, Chris Hill, Alan Edelman, Charles
Leiserson, Dave Martinez, Sterling Foster, Paul Burkhardt,
Victor Roytburd, Bill Arcand, Bill Bergeron, David Bestor,
Chansup Byun, Mike Houle, Matt Hubbell, Mike Jones, Anna
Klein, Pete Michaleas, Lauren Milechin, Julie Mullen, Andy
Prout, Tony Rosa, Sid Samsi, and Chuck Yee.
REFERENCES
[Abiteboul 1995] S. Abiteboul, R. Hull, & V. Vianu. Foundations of
databases. Vol. 8. Reading: Addison-Wesley, 1995.
[Balazinska 2009] M. Balazinska, J. Becla, D. Heath, D. Maier, M.
Stonebraker & S. Zdonik, “A Demonstration of SciDB: A ScienceOriented DBMS, Cell, 1, a2. (2009).
[Bu 2010] Y. Bu, B. Howe, M. Balazinska, & M. Ernst. "HaLoop: efficient
iterative data processing on large clusters," Proceedings of the VLDB
Endowment 3, no. 1-2 (2010): 285-296.
[Buluc 2015] A. Buluc, “GraphBLAS Special Session,” IEEE HPEC 2015,
Waltham, MA
[Buluc 2016] A. Buluc, “Workshop on Graph Algorithms Building Blocks,”
IPDPS 2016, Chicago, IL
[Byun 2012] C. Byun, W. Arcand, D. Bestor, B. Bergeron, M. Hubbell, J.
Kepner, A. McCabe, P. Michaleas, J. Mullen, D. O’Gwynn, A. Prout,
A. Reuther, A. Rosa, C. Yee, “Driving Big Data With Big Compute,”
IEEE High Performance Extreme Computing (HPEC) Conference, Sep
2012.
[Cattell 2011] R. Cattell, "Scalable SQL and NoSQL data stores," ACM
SIGMOD Record 39.4 (2011): 12-27.
[Kelly 2012] P. Kelly & M. H. van Emden, "Relational Semantics for
Databases and Predicate Calculus," arXiv preprint arXiv:1202.0474
(2012).
[Kepner 2012] J. Kepner, W. Arcand, W. Bergeron, N. Bliss, R. Bond, C.
Byun, G. Condon, K. Gregson, M. Hubbell, J. Kurz, A. McCabe, P.
Michaleas, A. Prout, A. Reuther, A. Rosa & C. Yee, “Dynamic
Distributed Dimensional Data Model (D4M) Database and Computation
System,” ICASSP (International Conference on Acoustics, Speech, and
Signal Processing), 2012, Kyoto, Japan
[Kepner 2013] J. Kepner, C. Anderson, W. Arcand, D. Bestor, B. Bergeron, C.
Byun, M. Hubbell, P. Michaleas, J. Mullen, D. O'Gwynn, A. Prout, A.
Reuther, A. Rosa, Charles Yee, “D4M 2.0 Schema: A General Purpose
High Performance Schema for the Accumulo Database,” IEEE High
Performance Extreme Computing (HPEC) conference, Sep 10-12, 2013,
Waltham, MA
[Kepner & Chaidez 2013] J. Kepner & J. Chaidez, “The Abstract Algebra of
Big Data,” Union College Mathematics Conference, Oct 2013,
Schenectady, NY
[Kepner & Chaidez 2014] J. Kepner & J. Chaidez, “The Abstract Algebra of
Big Data and Associative Arrays,” SIAM Meeting on Discrete Math, Jun
2014, Minneapolis, MN
[Kepner 2014] J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C. Byun, V.
Gadepally, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, A. Reuther,
A. Rosa, & C. Yee, “Achieving 100,000,000 database inserts per second
using Accumulo and D4M,” IEEE High Performance Extreme
Computing (HPEC) Conference, Waltham, MA, September 2014.
[Kepner & Jansen 2016] J. Kepner & H. Jansen,
[Lakshman & Malik 2010] A. Lakshman & P. Malik, “Cassandra: A
Decentralized Structured Storage System,” ACM SIGOPS Operating
Systems Review, Volume 44 Issue 2, April 2010.
[Litak 2014] T. Litak, S. Mikulás, & J Hidders, "Relational lattices,"
Relational and Algebraic Methods in Computer Science, pp. 327-343.
Springer International Publishing, 2014.
[Maier 1983] D. Maier, The theory of relational databases. Vol. 11.
Rockville: Computer science press, 1983.
[Mattson 2013] T. Mattson, D. Bader, J. Berry, A. Buluc, J. Dongarra, C.
Faloutsos, J. Feo, J. Gilbert, J. Gonzalez, B. Hendrickson, J. Kepner, C.
Leiserson, A. Lumsdaine, D. Padua, S. Poole, S. Reinhardt, M.
Stonebraker, S. Wallach, & A. Yoo, “Standards for Graph Algorithms
Primitives,” IEEE HPEC 2013, Waltham, MA
[Mattson 2014a] T. Mattson, “Workshop on Graph Algorithms Building
Blocks,” IPDPS 2014, Phoenix, AZ
[Mattson 2014b] T. Mattson, “GraphBLAS Special Session,” IEEE HPEC
2014, Waltham, MA
[Mattson 2015] T. Mattson, “Workshop on Graph Algorithms Building
Blocks,” IPDPS 2015, Hyderabad, India
[Olston 2008] C. Olston, B. Reed, U. Srivastava, R. Kumar, & A. Tomkins.
"Pig latin: a not-so-foreign language for data processing," ACM
SIGMOD international conference on Management of data, pp. 10991110. ACM, 2008.
[Peano 1888] G. Peano, “Calcolo geometrico,” secondo l’Ausdehnungslehre
di H. Grassmann, 1888
[Plotkin 1998] T. Plotkin, S. Kraus, & B. Plotkin, "Problems of equivalence,
categoricity of axioms and states description in databases," Studia
Logica 61, no. 3 (1998): 347-366.
[Priss 2006] U. Priss, "An FCA interpretation of relation algebra," Formal
Concept Analysis, pp. 248-263. Springer Berlin Heidelberg, 2006.
[Prout 2015] A. Prout et al, “MIT SuperCloud Database Management
System,” IEEE High Performance Extreme Computing (HPEC)
Conference, September 2015, submitted.
[Reuther 2013] A. Reuther, J. Kepner, W. Arcand, D. Bestor, B. Bergeron, C.
Byun, M. Hubbell, P. Michaleas, J. Mullen, A. Prout, & A. Rosa,
“LLSuperCloud: Sharing HPC Systems for Diverse Rapid Prototyping,”
IEEE High Performance Extreme Computing (HPEC) Conference,
September 2013.
[Samsi 2016] S. Samsi, L. Brattain, V. Gadepally, & J. Kepner "D4M and
Large Array Databases for Management and Analysis of Large
Biomedical Imaging Data," New England Database Summit, 2016
[Stonebraker 1976] M. Stonebraker, G. Held, E. Wong & P. Kreps, “The
design and implementation of INGRES,” ACM Transactions on
Database Systems (TODS), Volume 1 Issue 3, Sep 1976, Pages 189-222
[Stonebraker 2005] M. Stonebraker, D. Abadi, A. Batkin, X. Chen, M.
Cherniack, M. Fer- reira, E. Lau, A. Lin, S. Madden, E. O’Neil, P.
O’Neil, A. Rasin, N.Tran & S. Zdonik, “C-store: a column-oriented
DBMS,” Proceedings of the 31st International Conference on Very
Large Data Bases (VLDB ’05), 2005, pages 553 – 564.
[Stonebraker & Çetintemel 2005] M. Stonebraker & U. Çetintemel, ""One
size fits all": an idea whose time has come and gone," IEEE International
Conference on Data Engineering, ICDE 2005.
[Stonebraker & Weisberg 2013] M. Stonebraker & A. Weisberg, “The Volt
DB Main Memory DBMS,” IEEE Data Eng. Bull., Vol. 36, No. 2, 2013,
pages 21-27.
[Tsalenko 1992] M.Sh. Tsalenko, "Database theory in Russia (1979–1991)(an
overview)," Database Theory—ICDT'92, pp. 51-70. Springer Berlin
Heidelberg, 1992.
[van Emden 2006] M.H. van Emden, “Set-Theoretic Preliminaries for
Computer Scientists,” Research Report DCS-304-IR, Department of
Computer Science, University of Victoria, 2006
[Wall 2013] M. Wall, A. Cordova & B. Rinaldi, Accumulo Application
Development, Table Design, and Best Practices, O’Reilly, Sebastapol,
California, US, 2013.
[Wu 2014] S. Wu, V. Gadepally, A. Whitaker, J. Kepner, B. Howe, M.
Balazinska & S. Madden, “MIMICViz: Enabling Visualization of
Medical Big Data,” Intel Science & Technology Center retreat, Portland,
OR, August, 2014
[Zaharia 2010] M. Zaharia, M Chowdhury, M. J. Franklin, S. Shenker, & I.
Stoica, "Spark: Cluster Computing with Working Sets," HotCloud 10
(2010): 10-10.
[Zermelo 1908] E. Zermelo, “Untersuchungen uber die Grundlagen der
Mengenlehre,” I., Mathematische Annalen 65.2 (1908): 261-281
A critical analysis of some popular methods for the discretisation of
the gradient operator in finite volume methods
Alexandros Syrakos∗1, Stylianos Varchanis1, Yannis Dimakopoulos1, Apostolos Goulas2, and John Tsamopoulos1
arXiv:1606.05556v6 [cs.NA] 29 Dec 2017
1 Laboratory of Fluid Mechanics and Rheology, Dept. of Chemical Engineering, University of Patras, 26500 Patras, Greece
2 Laboratory of Fluid Mechanics and Turbomachinery, Dept. of Mechanical Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Abstract
Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the
gradient operator has received less attention despite its fundamental importance with regards to
the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or
Green-Gauss) scheme, and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient
is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general
unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively.
This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes
are then used within a FVM to solve a simple diffusion equation on unstructured grids generated
by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement.
On the other hand, use of the LS gradient leads to second-order accurate results, as does the use
of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the
common DT gradient consistent at almost no extra cost. The numerical tests are performed using
both an in-house code and the popular public domain PDE solver OpenFOAM.
This is the accepted version of the article published in:
Physics of Fluids 29, 127103 (2017); doi:10.1063/1.4997682
1 Introduction
Finite volume methods (FVMs) are used widely for the simulation of fluid flows; they are employed by
several popular general-purpose CFD (Computational Fluid Dynamics) solvers, both commercial (e.g.
ANSYS Fluent, STAR-CD) and open-source (e.g. OpenFOAM). One of the key components of FVMs
is the approximation of the gradient operator. Computing the gradient of the dependent variables is
needed for the FVM discretisation on non-Cartesian grids, where the fluxes across a face separating
two finite volumes cannot be expressed as a function of the values of the variables at the centres of
these two volumes alone, but additional terms involving the gradients must also be included [1]. The
gradient operator is even more significant when solving complex flow problems such as turbulent flows
modelled by the RANS and some LES methodologies [1] or non-Newtonian flows [2–5], where the
additional equations solved (turbulence closure, constitutive equation etc.) may directly involve the
velocity gradients. Apart from the main task of solving partial differential equations (PDEs), gradient
calculation may also be important in auxiliary activities such as post-processing [6].
∗ Corresponding author. E-mail address: [email protected], [email protected]
The two most popular methods for calculating the gradient on grids of general geometry are
based on the use of (a) the divergence theorem (DT) and (b) least-squares (LS) minimisation. They
have both remained popular for nearly three decades of application of the FVM; for example, the
DT method (also known as “Green–Gauss gradient”) has been used in [1, 7–11], and the LS method
in [10–15]. Often, general-purpose commercial or open-source CFD codes present the user with the
option of choosing between these two methods. Their popularity stems from the fact that they are
not algorithmically restricted to a particular grid cell geometry but can be applied to cells with an
arbitrary number of faces. This property is important, because in recent years the use of unstructured
grids is becoming standard practice in simulations of complex engineering processes. The tessellation
of the complex geometries typically associated with such processes by structured grids is an arduous
and extremely time-consuming procedure for the modeller, whereas unstructured grid generation is
much more automated. Therefore, discretisation schemes are sought that are easily applicable on
unstructured grids of arbitrary geometry whereas early FVMs either used Cartesian grids or relied on
coordinate transformations which are applicable only on smooth structured grids.
In the literature, usually the gradient discretisation is only briefly discussed within an overall
presentation of a FVM, with only a relatively limited number of studies devoted specifically to it
(e.g. [6, 16–20]). This suggests that existing gradient schemes are deemed satisfactory, and in fact
there seems to be a widespread misconception that the DT and LS schemes are second-order accurate
on any type of grid. For the DT gradient, a couple of studies have shown that it is potentially zeroth-order accurate, depending on the grid properties. Syrakos [21] noticed that it converged to wrong
values on composite Cartesian grids in the vicinity of the interfaces between patches of different
fineness. His analysis, given here in Sec. 5.4, concludes that this is due to grid skewness. Later,
Sozer et al. [20] tested a simpler variant of the DT gradient that uses arithmetic averaging instead
of linear interpolation and proved that in the one-dimensional case it converges to incorrect values
if the grid is not uniform. In numerical tests they also noticed that the scheme is inconsistent on
two-dimensional grids of arbitrary topology. In fact, as early as in [7] it was briefly mentioned that
the DT gradient fails to calculate exactly the gradients of linear functions, providing a hint to its
inconsistency. An important question is whether this inconsistency inhibits convergence of the FVM
to the correct solution. The results reported in [10] concerning the solution of a Poisson equation
suggest zeroth-order FVM accuracy when the DT gradient is employed on skewed meshes, although
the authors did not explicitly attribute this to inconsistency of the DT scheme.
The present paper presents mathematical analyses of the orders of accuracy of both methods on
various types of grid, backed by numerical experiments. The common DT method is proved to be
zeroth-order accurate on grids of arbitrary geometry and second-order accurate on smooth structured
meshes. This is shown to be true even if a finite number of “corrector” steps is performed (e.g. [10,22])
whereby an iterative procedure uses the gradient calculated at the previous iteration to improve the
accuracy. Theoretically, this procedure increases the order of accuracy only in the limit of infinite
iterations. Practically, the desired accuracy can be achieved with a finite number of iterations, but
this number is a priori unknown and increases with the grid fineness. Furthermore, corrector steps
make the DT gradient much more expensive than the LS. However, since the FVM solution typically
proceeds with a number of outer iterations, we exploit this fact to interweave the gradient iterations
with the FVM outer iterations to obtain a first-order accurate DT gradient at a cost that is nearly
the same as that of the uncorrected DT scheme. On the other hand, the LS method is shown to be
first-order accurate on arbitrary grids and second-order accurate on structured grids, while for the
default distance-based weighting scheme a particular non-integer exponent (−3/2) enlarges the set
of grid types for which the method is second-order accurate. The different order of accuracy of the
gradient schemes on structured versus unstructured grids is attributed to the fact that the former
become less skewed and more uniform as they are refined, whereas the latter usually do not.
First-order accurate gradients are, for most purposes, compatible with second-order accurate FVMs
because differentiation of a second-order accurate variable gives a first-order accurate derivative. In
Section 6 we proceed beyond the analysis of the gradient schemes themselves and test them as components of a FVM for the solution of a Poisson equation on various kinds of unstructured grids.
Experiments with both an in-house code and OpenFOAM show that the FVMs are zeroth-order accurate when they employ the common DT gradient, whereas they are second-order accurate if they employ instead the least-squares gradient, or the proposed iteratively corrected DT gradient, or an alternative DT gradient scheme, achieving accuracy that is in fact only marginally worse than that obtained on Cartesian grids.
Figure 1: Part of an unstructured grid, showing cell P and its neighbouring cells, each having a single
common face with P . The shaded area lies outside the grid. The faces and neighbours of cell P are numbered
in anticlockwise order, with face f separating P from its neighbour Nf . The geometric characteristics of its
face 1 which separates it from neighbouring cell N1 are displayed. The position vectors of the centroids of cells
P and Nf are denoted by the same characters but in boldface as P and Nf ; cf is the centroid of face f and
c0f is its closest point on the line segment connecting P and Nf ; mf is the midpoint between P and Nf ; nf
is the unit vector normal to face f , pointing outwards of P . The shown cell P also has a boundary face (face
5), with no neighbour on the other side; in this case c05 is the projection of P onto the boundary face.
2 Preliminary considerations
We will focus on two-dimensional problems, although the one- and three-dimensional cases are also
discussed where appropriate and the conclusions are roughly the same in all dimensions. Let x, y
denote the usual Cartesian coordinates, and let φ(x, y) be a function defined over a domain, whose
gradient ∇φ we wish to calculate. It is convenient to introduce a convention where subscripts beginning
with a dot (.) denote differentiation with respect to the ensuing variable(s), e.g. φ.x ≡ ∂φ/∂x, φ.xy ≡
∂ 2 φ/∂x∂y etc. If the variables x, y etc. are used as subscripts without a leading dot then they are used
simply as indices, without any differentiation implied. Therefore, we seek the gradient ∇φ = (φ.x , φ.y )
of the function φ. The domain over which the function is defined is discretised by a grid into a number
of non-overlapping finite volumes, or cells. A cell can be arbitrarily shaped, but its boundary must
consist of a number of straight faces, as in Fig. 1, each separating it from a single other cell or from the
exterior of the domain (the latter are called boundary faces). We assume a cell-centred finite volume
method, meaning that the values of φ are known only at the geometric centres of the cells and at the
centres of the boundary faces. The notation that is adopted in order to describe the geometry of the
grid is presented in Fig. 1. Also, we will denote vectors by boldface characters.
Our goal is to derive approximate algebraic gradient operators ∇a which return values ∇a φ(P ) ≈
∇φ(P ) at each cell centroid P , using information only from the immediate neighbouring cells and
boundary faces. The components of the approximate gradient are denoted as ∇a φ = (φa.x , φa.y ). The
operators ∇a must be capable of approximating the derivative on grids of arbitrary geometry. To
study the effect of grid geometry on the accuracy, we must define some indicators of grid irregularity.
With the present grid arrangement, we find it useful to define three kinds of such grid irregularities,
which we shall call here “non-orthogonality”, “unevenness” and “skewness” (other possibilities exist,
see e.g. [23]). We will define these terms with the aid of Fig. 1. With the nomenclature defined in that
figure, we will say that face f of cell P exhibits non-orthogonality if Nf − P is not parallel to nf ; a
Figure 2: Part of a grid formed by equispaced parallel grid lines. See the text for details.
measure of non-orthogonality is the angle between the vectors (Nf − P) and nf. Also, we will say that face f exhibits unevenness if the midpoint of the line segment joining P and Nf, mf = (P + Nf)/2, does not coincide with c0f (i.e. the cell centres are unequally spaced on either side of the face); a measure of unevenness is ‖c0f − mf‖/‖Nf − P‖. Finally, we will say that face f exhibits skewness if c0f does not coincide with cf (i.e. the line joining the cell centres does not pass through the centre of the face); a measure of the skewness is ‖cf − c0f‖/‖Nf − P‖.
Before discussing the DT and LS gradient schemes, it is useful to examine a simpler scheme which
is applicable on very plain grids that are formed from two families of equispaced parallel straight
lines intersecting at a constant angle, as in Fig. 2. In this case all the cells of the grid are identical
parallelograms (Cartesian grids with constant spacing are a special case of this category). Figure
2 shows a cell P belonging to such a grid, and its four neighbouring cells. The vectors δ ξ and δ η
are parallel to the grid lines and span the size of the cells. Due to the grid properties it holds that
δ ξ = c1 − c3 = N1 − P = P − N3 and δ η = c2 − c4 = N2 − P = P − N4 . It can be assumed that two
variables, ξ and η, are distributed in the domain, such that in the direction of δ ξ the variable ξ varies
linearly while η is constant, and in the direction of δ η the variable η varies linearly while ξ is constant.
Then the grid can be considered to be constructed by drawing lines of constant ξ and of constant η,
equispaced (in the ξ, η space) by ∆ξ and ∆η, respectively. Let us assume also that the grid density
can be increased by adjusting the spacings ∆ξ and ∆η, but their ratio must be kept constant, e.g. if
∆ξ = h then ∆η = αh with α being a constant, independent of h. The variable h determines the grid
fineness. Therefore, the direction of the grid vectors δ ξ and δ η remains constant with grid refinement,
but their lengths are proportional to the grid parameter h.
This idealized grid exhibits no unevenness or skewness. It possibly exhibits non-orthogonality,
but this poses no problem as far as the gradient calculation is concerned. Since points N3 , P and
N1 are collinear and equidistant, the rate of change of any quantity φ in the direction of δ ξ can
be approximated at point P from the values at N3 and N1 using second-order accurate central
differencing. In the same manner, the rate of change in the direction of δ η at P can be approximated
from the values at N4 and N2 . So, let the grid vectors be written in Cartesian coordinates as
δ ξ = (δxξ , δyξ ) and δ η = (δxη , δyη ), respectively. Then, by expanding the function φ in a two-dimensional
Taylor series along the Cartesian directions, centred at point P , and using that to express the values
at the points N1 = P + δ ξ , N3 = P − δ ξ , N2 = P + δ η and N4 = P − δ η we get
φ(N1 ) − φ(N3 ) = ∇φ(P ) · (2δ ξ ) + O(h3 )
φ(N2 ) − φ(N4 ) = ∇φ(P ) · (2δ η ) + O(h3 )
This system can be solved for ∇φ(P) ≡ (φ.x, φ.y) to get (the 2 × 2 matrix is written row-wise, with rows separated by a semicolon)

(φ.x(P), φ.y(P))ᵀ = (1/(2ΩP)) [ δyη  −δyξ ; −δxη  δxξ ] (φ(N1) − φ(N3), φ(N2) − φ(N4))ᵀ + (1/(2ΩP)) [ δyη  −δyξ ; −δxη  δxξ ] (O(h³), O(h³))ᵀ
where ΩP = δxξ δyη − δyξ δxη = kδ ξ × δ η k is the volume (area in 2D) of cell P . The last term in the above
equation, involving the unknown O(h3 ) terms, is of order O(h2 ) because ΩP = O(h2 ) and all the δ’s
Figure 3: Part of a grid (dashed straight lines) constructed from lines of constant ξ (red curves) and η (blue curves), where ξ, η are variables distributed smoothly in the domain. The lines are equispaced with constant spacings ∆ξ and ∆η, respectively. Grid node (i, j) is located at (ξi, ηj) = (ξ0 + i ∆ξ, η0 + j ∆η) in the computational space, where (ξ0, η0) is a predefined point, and at (xi,j, yi,j) in the physical space.
are O(h). So, carrying out the matrix multiplications we arrive at

∇φ(P) ≡ (φ.x(P), φ.y(P))ᵀ = (1/(2ΩP)) ( δyη (φ(N1)−φ(N3)) − δyξ (φ(N2)−φ(N4)) ,  δxξ (φ(N2)−φ(N4)) − δxη (φ(N1)−φ(N3)) )ᵀ + (O(h²), O(h²))ᵀ    (1)

where the first (explicit) term on the right-hand side is denoted ∇s φ(P).
Finally, we drop the unknown O(h2 ) terms in Eq. (1) and are left with a second-order approximation
to the gradient, ∇s ; the dropped terms are called the truncation error of the operator ∇s . The fact
that this formula has been derived for a grid constructed from equidistant parallel lines may seem too
restrictive, but in fact the utility of Eq. (1) goes beyond this narrow context, as will now be explained.
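Before turning to curvilinear grids, here is a minimal Python sketch (my own construction, not part of the paper) that evaluates the formula of Eq. (1) from the four neighbour values and the grid vectors, and checks it on a linear function, for which the result should be exact:

```python
import numpy as np

def parallelogram_gradient(phi_N1, phi_N2, phi_N3, phi_N4, d_xi, d_eta):
    """Gradient at cell P of a parallelogram grid (Fig. 2) via Eq. (1).

    d_xi, d_eta are the grid vectors (delta_xi, delta_eta) as length-2 arrays.
    """
    d_xi, d_eta = np.asarray(d_xi, float), np.asarray(d_eta, float)
    omega = d_xi[0] * d_eta[1] - d_xi[1] * d_eta[0]    # cell area Omega_P
    dphi_xi = phi_N1 - phi_N3                          # central difference along d_xi
    dphi_eta = phi_N2 - phi_N4                         # central difference along d_eta
    gx = ( d_eta[1] * dphi_xi - d_xi[1] * dphi_eta) / (2.0 * omega)
    gy = (-d_eta[0] * dphi_xi + d_xi[0] * dphi_eta) / (2.0 * omega)
    return np.array([gx, gy])

# check on the linear function phi(x, y) = 3x + 2y, whose gradient is (3, 2)
phi = lambda p: 3.0 * p[0] + 2.0 * p[1]
P, d_xi, d_eta = np.zeros(2), np.array([0.1, 0.02]), np.array([0.03, 0.1])
print(parallelogram_gradient(phi(P + d_xi), phi(P + d_eta),
                             phi(P - d_xi), phi(P - d_eta), d_xi, d_eta))  # -> [3. 2.]
```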
Consider again structured grid generation based on a pair of variables ξ, η distributed smoothly in
the domain, where curves of constant ξ and η are drawn at equal intervals of ∆ξ and ∆η, respectively.
This time ξ and η are not required to vary linearly nor to be constant along straight lines. Therefore,
a curvilinear grid such as that shown in Fig. 3 may result, constructed by joining the points of
intersection of these two families of curves by straight line segments (the dashed lines in Fig. 3).
There is a one-to-one correspondence of coordinates (x, y) of the physical space to coordinates
(ξ, η) of the computational space. Therefore, not only are the computational coordinates functions of
the physical coordinates (ξ = ξ(x, y) and η = η(x, y)), but the physical coordinates (x, y) can also be
regarded as functions of the computational coordinates (x = x(ξ, η) and y = y(ξ, η)). Since the latter
vary smoothly in the domain, (x, y) can be expanded in Taylor series around a reference point (ξ0 , η0 ).
For example, for the x coordinate,
x(ξ0 + δξ, η0 + δη) = x(ξ0, η0) + x.ξ δξ + x.η δη + O(h²)    (2)
where the derivatives are evaluated at point (ξ0 , η0 ). In this way, we can express the coordinates of
all the grid vertices (xi,j , yi,j ) shown in Fig. 3 as functions of (x0,0 , y0,0 ) ≡ (x(ξ0 , η0 ), y(ξ0 , η0 )) and of
the derivatives x.ξ etc. there. Using these expansions, it is easy to show that
lim_{∆ξ,∆η→0} (x1,1 − x1,0)/(x0,1 − x0,0) = lim_{∆ξ,∆η→0} (y1,1 − y1,0)/(y0,1 − y0,0) = 1    (3)

and

lim_{∆ξ,∆η→0} (x2,0 − x1,0)/(x1,0 − x0,0) = lim_{∆ξ,∆η→0} (y2,0 − y1,0)/(y1,0 − y0,0) = 1    (4)
Equation (3) implies that, as the grid is refined by reducing the ∆ξ, ∆η spacings, neighbouring
grid lines of the same family become more and more parallel, so that grid skewness tends to zero.
Equation (4) implies that, as the grid is refined, neighbouring cells tend to become of equal size, so
that grid unevenness tends to zero. The conclusion is that on structured grids which are constructed
from smooth distributions of auxiliary variables (ξ, η), grid refinement1 causes the geometry of a cell
and its neighbours to locally approach that depicted in Fig. 2. This has an important consequence:
any numerical scheme for computing the gradient that reduces to Eq. (1) on parallelogram grids (such
as that of Fig. 2) is of second-order accuracy on smooth structured grids (such as that of Fig. 3)2 . It
so happens that both the DT scheme and the LS scheme belong to this category. Unfortunately, grid
refinement does not engender such a quality improvement when it comes to unstructured meshes.
3 Gradient calculation using the divergence theorem
For grids of more general geometry, like that of Fig. 1, a more general method is needed. Let ΩP
denote the volume of cell P and SP its bounding surface. The DT gradient scheme is based on a
derivative of the divergence theorem, which can be expressed as follows:
∫_{ΩP} ∇φ dΩ = ∫_{SP} φ n ds
where n is the unit vector perpendicular to SP at each point, pointing outwards of the cell, while dΩ
and ds are infinitesimal elements of the volume and surface, respectively. The bounding surface SP
can be decomposed into F faces which are denoted by Sf , f = 1, . . . , F (F = 5 in Fig. 1). These faces
are assumed to be straight (planar, in 3 dimensions), as in Fig. 1, so that the normal unit vector n
has a constant value nf along each face f . Therefore, the above equation can be written as
∫_{ΩP} ∇φ dΩ = Σ_{f=1}^{F} nf ( ∫_{Sf} φ ds )    (5)
According to the midpoint integration rule [1,24], the mean value of a quantity over cell P (or face
f ) is equal to its value at the centroid P of the cell (or cf of the face), plus a second-order correction
term. Applying this to the mean values of ∇φ and φ over ΩP and Sf we get, respectively:
(1/ΩP) ∫_{ΩP} ∇φ dΩ = ∇φ(P) + O(h²)   ⇒   ∫_{ΩP} ∇φ dΩ = ∇φ(P) ΩP + O(h⁴)    (6)

(1/Sf) ∫_{Sf} φ ds = φ(cf) + O(h²)   ⇒   ∫_{Sf} φ ds = φ(cf) Sf + O(h³)    (7)
where h is a characteristic grid spacing (we used that ΩP = O(h2 ) and Sf = O(h)). Substituting
these expressions into the divergence theorem, Eq. (5), we get
∇φ(P) = (1/ΩP) Σ_{f=1}^{F} φ(cf) Sf nf + O(h)    (8)
The above formula is exact as long as the unknown O(h) term is retained, which consists mostly
of face contributions arising from Eq. (7), whereas the volume contribution from Eq. (6) is only
O(h2 ). Dropping this term would leave us with a first-order accurate approximation. However, this
approximation cannot be the final formula for the gradient because Eq. (8) contains φ(cf ), the φ
values at the face centres, whereas we need a formula that uses only the values at the cell centres.
The common practice is to approximate φ(cf ) by φ(c0f ), the exact values of φ at points c0f rather than
¹ It is stressed that grid refinement must be performed in the computational space (ξ, η), by simultaneously reducing the spacings ∆ξ and ∆η. Otherwise, if refinement is performed directly in the physical space (x, y), for example by joining the centroid of each cell to the centroids of its faces thus splitting it into four child cells, then equations (3) – (4) do not necessarily hold.
² To be precise, this depends on the skewness decreasing fast enough with grid refinement – see Section 3. For structured grids, using Taylor series such as Eq. (2), it can be shown that the skewness is O(h).
cf (see Fig. 1); these values also do not belong to the set of cell-centre values, but since c0f lies on the
line segment joining cell centres P and Nf , the value φ(c0f ) can in turn be approximated by linear
interpolation between φ(P ) and φ(Nf ), say φ̄(c0f ) (the overbar denotes linear interpolation):
φ̄(c0f) ≡ (‖c0f − Nf‖ / ‖Nf − P‖) φ(P) + (‖c0f − P‖ / ‖Nf − P‖) φ(Nf) = φ(c0f) + O(h²)    (9)
Linear interpolation is known to be second-order accurate, hence the O(h2 ) term in the above equation.
Thus, by using φ̄(c0f ) instead of φ(cf ) in the right hand side of Eq. (8) and dropping the unknown
O(h) term we obtain an approximation to the gradient which depends only on cell-centre values of φ:
∇d0 φ(P) ≡ (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf    (10)
This is called the “divergence theorem” (DT) gradient. It applies in both two and three dimensions.
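A minimal sketch of Eq. (10) for a single cell is given below. The data layout (a list of per-face dictionaries) is an assumption made for the example and not the paper's implementation; the quick check uses a unit square cell, where the result is exact for a linear field.

```python
import numpy as np

def dt_gradient(phi_P, faces, volume):
    """DT gradient, Eq. (10), for one 2D cell.

    faces: list of dicts with keys 'phi_N' (neighbour or boundary-face value),
    'S' (face area/length), 'n' (outward unit normal) and 'w_N' (interpolation
    weight of the neighbour at c'_f, i.e. |c'_f - P| / |N_f - P|, cf. Eq. (9)).
    """
    grad = np.zeros(2)
    for f in faces:
        phi_face = (1.0 - f["w_N"]) * phi_P + f["w_N"] * f["phi_N"]  # Eq. (9)
        grad += phi_face * f["S"] * np.asarray(f["n"], float)        # Eq. (10)
    return grad / volume

# check on a unit square cell centred at the origin with phi(x, y) = x + 2y
phi = lambda x, y: x + 2.0 * y
faces = [
    {"phi_N": phi( 1, 0), "S": 1.0, "n": [ 1, 0], "w_N": 0.5},
    {"phi_N": phi(-1, 0), "S": 1.0, "n": [-1, 0], "w_N": 0.5},
    {"phi_N": phi( 0, 1), "S": 1.0, "n": [ 0, 1], "w_N": 0.5},
    {"phi_N": phi( 0,-1), "S": 1.0, "n": [ 0,-1], "w_N": 0.5},
]
print(dt_gradient(phi(0, 0), faces, volume=1.0))   # -> [1. 2.]
```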
An important question is whether and how the replacement of φ(cf ) by φ̄(c0f ), that led from Eq.
(8) to the formula (10), has affected the accuracy. To answer this question we first need to deduce
how much φ̄(c0f ) differs from φ(cf ). This can be done by using the exact φ(c0f ) as an intermediate
value. We will consider structured grids and unstructured grids separately.
Structured grids
As discussed in the previous Section, the skewness of smooth structured grids diminishes with refinement, and in fact expressing the points involved in its definition as Taylor series of the form (2) it can
be shown that ‖cf − c0f‖/‖Nf − P‖ = O(h), or cf − c0f = O(h²). Therefore, expanding φ(cf) in a
Taylor series about point c0f gives φ(cf ) = φ(c0f ) + ∇φ(c0f ) · (cf − c0f ) + O(h2 ) = φ(c0f ) + O(h2 ). Next,
φ(c0f ) can be expressed in terms of φ̄(c0f ) according to Eq. (9). Putting everything together results in
the sought relationship:
φ(cf) = φ(c0f) + O(h²) = (φ̄(c0f) + O(h²)) + O(h²) = φ̄(c0f) + O(h²)    (11)
Substituting this into Eq. (8), and considering that ΩP = O(h2 ) and Sf = O(h), we arrive at
∇φ(P) = (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf + O(h)    (12)
The first term on its right-hand side is just the approximate gradient ∇d0 , Eq. (10). Therefore, the
truncation error of ∇d0 is τ ≡ ∇d0 φ(P ) − ∇φ(P ) = O(h). Actually, the situation is even better
because the leading terms of the contributions of opposite faces to the O(h) term in Eq. (12) cancel
out leaving a net O(h2 ) truncation error, making the method second-order accurate. The proof is
tedious and involves writing analytic expressions for the truncation error contributions of each face;
it can be performed more easily for a grid such as that shown in Fig. 2 where the divergence theorem
procedure described in the present Section is an alternative path to arrive at the exact same formula,
Eq. (1), ∇d0 ≡ ∇s , with its O(h2 ) truncation error.
The error cancellation between opposite faces does not always occur; sometimes the worst-case
scenario of O(h) truncation error predicted by Eq. (12) holds. An example is at boundary cells. Fig. 4
shows a boundary cell P belonging to a Cartesian grid which exhibits neither skewness nor unevenness
(in fact, it does not even exhibit non-orthogonality). Yet cell P has no neighbour on the boundary
side, and so the centre of its boundary face, c3 , is used instead of a neighbouring cell centre. This
introduces "unevenness" in the x- (horizontal) direction because the distances ‖N1 − P‖ = h and ‖c3 − P‖ = h/2 are not equal. The x-component of the divergence theorem gradient (10) reduces to

φ^{d0}_{.x}(P) = (1/(2h)) (φ(N1) + φ(P) − 2φ(c3))    (13)
Figure 4: A boundary cell P belonging to a Cartesian grid.
which is only first-order accurate, since expanding φ(N1 ) and φ(c3 ) in Taylor series about P gives
φ^{d0}_{.x}(P) = φ.x(P) + (1/8) φ.xx(P) h + O(h²)
This offers a nice demonstration of the effect of error cancellation between opposite faces. Equation
(10) is obtained from Eq. (8) by using interpolated values φ̄(c0f ) instead of the exact but unknown
values φ(cf ) at the face centres. Therefore, one might expect that since the cell of Fig. 4 has a
boundary face and the exact value at its centre, φ(c3 ), is used rather than an interpolated value, the
result would be more accurate; but the above analysis shows exactly the opposite: the error increases
from O(h2 ) to O(h). This is due to the fact that by dropping the interpolation error on the boundary
face the corresponding error on the opposite face 1 is no longer counterbalanced.
Another example where error cancellation does not occur and the formal O(h) accuracy predicted
by Eq. (12) holds is the common case of grid that consists of triangles that come from dividing each
cell of a smooth structured grid along the same diagonal. For example, application of this procedure
to the grid shown in Fig. 2 results in the grid of Fig. 5. The latter may be seen to exhibit neither
unevenness nor skewness as the face centres cf coincide with the midpoints of the line segments joining
cell centre P to its neighbours Nf . Thus Eq. (11) holds, leading to Eq. (12). This time, however,
a tedious but straightforward calculation where neighbouring φ(Nf ) values are expressed in Taylor
series about P and substituted in (12) shows that there is no cancellation and the truncation error
remains O(h). If the grid comes from triangulation of a curvilinear structured grid (Fig. 3) then the
skewness is not zero but it diminishes with refinement at an O(h) rate (Eq. (11)) and exactly the same
conclusions hold.
Figure 5: Grid of triangles constructed through bisection of the cells of the structured grid of Fig. 2 along their same diagonal.
In fact, from the considerations leading to Eq. (11) it follows that if a grid generation algorithm
is such that skewness diminishes as O(h^p) then the order of accuracy of the DT gradient is at least
min{p, 1}.
Unstructured grids
Unstructured grids usually consist of triangles (or tetrahedra in 3D), or of polygonal (polyhedral in
3D) cells which are also formed by a triangulation process. They are typically constructed using
algorithms that are based on geometrical principles that do not depend on grid fineness, so that there
is some self-similarity between coarse and fine grids and refinement does not reduce the skewness, i.e.
‖cf − c0f‖/‖Nf − P‖ = O(1). This means that cf − c0f = O(h) and therefore instead of Eq. (11) we
now have
φ(cf) = φ(c0f) + O(h) = (φ̄(c0f) + O(h²)) + O(h) = φ̄(c0f) + O(h)    (14)
Substituting this into the exact equation (8) we get
∇φ(P) = (1/ΩP) Σ_{f=1}^{F} ( φ̄(c0f) + O(h) ) Sf nf + O(h)
       = (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf + (1/ΩP) Σ_{f=1}^{F} O(h) Sf nf + O(h)
which, considering that ΩP = O(h2 ) and Sf = O(h), means that
∇φ(P) = (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf + O(1) + O(h)    (15)
Again, the first term on its right-hand side is the approximate gradient ∇d0 , Eq. (10). But this time,
unlike in the structured grid case, the truncation error of ∇d0 is τ ≡ ∇d0 φ(P ) − ∇φ(P ) = O(1).
Nor do the leading terms of the face contributions to the truncation error cancel out to increase its
order, because on unstructured grids the orientations and sizes of the faces are unrelated. This is
an unfortunate result, because it means that the approximation (10) is zeroth-order accurate, as the
error O(1) does not decrease with grid refinement, but instead ∇d0 φ(P ) converges to a value that is
not equal to ∇φ(P ) (see Section 5.4 for an example).
Acknowledging that the lack of accuracy is due to the bad representation of the φ(cf ) values in
Eq. (8) by φ̄(c0f ), application of the formula (10) is often followed by a “corrector step” where, instead
of the values φ̄(c0f ), the “improved” values φ̂(cf ) are used, defined as
φ̂(cf) ≡ φ̄(c0f) + ∇d0 φ(c0f)·(cf − c0f)    (16)

where ∇d0 φ(c0f) is obtained by linear interpolation (Eq. (9)) between ∇d0 φ(P) and ∇d0 φ(Nf) at point c0f (these were calculated in the previous step (10), the "predictor" step). This results in an
approximation which is hopefully more accurate than (10):
∇d1 φ(P) ≡ (1/ΩP) Σ_{f=1}^{F} φ̂(cf) Sf nf    (17)
The values φ̂(cf ) are expected to be better approximations to φ(cf ) than φ̄(c0f ) are, because Eq. (16)
tries to account for skewness by mimicking the Taylor series expansion
φ(cf) = φ(c0f) + ∇φ(c0f) · (cf − c0f) + O(h²)    (18)
Unfortunately, since in Eq. (16) only a crude approximation of ∇φ(c0f ) is used, namely ∇d0 φ(c0f ) =
∇φ(c0f ) + O(1), what we get by subtracting Eq. (16) from Eq. (18) is φ̂(cf ) = φ(cf ) + O(h) which
may have greater accuracy but not greater order of accuracy than the previous estimate φ̄(c0f ) =
φ(cf ) + O(h), Eq. (14). Substituting this into Eq. (8) we arrive again at an equation similar to (15)
which shows that the error of the approximation (17) is also of order O(1).
Further correction steps may be applied in the same manner; a fixed finite number of such steps
may increase the accuracy, but the order of accuracy with respect to grid refinement will remain
zero. But if this procedure is repeated until convergence to an operator ∇d∞ , say, then ∇d∞ would
simultaneously satisfy both Eqs. (16) and (17), or combined in a single equation:
∇d∞ φ(P) − (1/ΩP) Σ_{f=1}^{F} ∇d∞ φ(c0f) · (cf − c0f) Sf nf = (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf    (19)
where we have moved all terms involving the gradient to the left-hand side. An analogous equation
can be derived for the exact gradient, from Eqs. (8), (18), and (9):
∇φ(P) − (1/ΩP) Σ_{f=1}^{F} ∇φ(c0f) · (cf − c0f) Sf nf = (1/ΩP) Σ_{f=1}^{F} φ̄(c0f) Sf nf + O(h)    (20)
By subtracting Eq. (20) from Eq. (19) we get an expression for the truncation error τ ≡ ∇d∞ φ − ∇φ:
τ(P) − (1/ΩP) Σ_{f=1}^{F} τ(c0f) · (cf − c0f) Sf nf = O(h)    (21)
The left-hand side is a linear combination of not only τ (P ) but also τ (Nf ) at all neighbour points.
Since ΩP = O(h2 ), Sf = O(h) and cf − c0f = O(h), the coefficients of this linear combination are O(1)
i.e. they do not depend on the grid fineness but only on the grid geometry (skewness, unevenness etc.).
Expression (21) suggests that τ = O(h), i.e. that ∇d∞ is first-order accurate, although to ascertain
this one would have to assemble Eqs. (21) for all grid cells in a large linear system Aτ = b ⇒ τ = A⁻¹b (where τ stores the x- and y- (and z-, in 3D) components of τ at all cell centres and b = O(h)), select a suitable matrix norm ‖ · ‖, and ensure that ‖A⁻¹‖ remains bounded as h → 0.
So, iterating Eqs. (16) and (17) until convergence one would hope that a first-order accurate gradient ∇d∞ would be obtained. As reported in [19], convergence of this procedure is not guaranteed
and underrelaxation may be needed, leading to a large number of required iterations and great computational cost. In practice, the desired accuracy will be reached with a finite number of iterations,
but this number is not known a priori and must be determined by trials; furthermore, in order to
maintain the property that grid refinement improves the accuracy, the number of iterations should
increase with grid refinement in order to avoid accuracy stagnation. Considering that the main rival
of the DT gradient, the LS gradient, costs approximately as much as the DT gradient with a single
corrector step (see Sec. 6.1), it can be seen that this iterative procedure is impractical and may end
up consuming most of the computational time of a FVM solver. Alternatively, one could directly
solve all the equations (19) simultaneously in a large linear system; this is also proposed and tested
in [19], along with a second-order accurate method that additionally computes the Hessian matrix.
Obviously, this approach also involves a very large computational cost.
However, we would like to propose here a way to avoid the extra cost of the iterative procedure for
the gradient. The FVM generally employs outer iterations to solve a PDE (e.g. SIMPLE iterations,
as opposed to the inner iterations of the linear solvers employed). Outer iterations are necessary when
the equations solved are non-linear, but may be used also when solving linear problems if some terms
are treated with the deferred-correction approach. FVMs that use gradient schemes such as those
presently discussed are almost always iterative, with the gradients computed using the values of the
dependent variable from the previous outer iteration. The idea is then to exploit these outer iterations,
dividing the “gradient iterations” among them: at each outer iteration n a single “gradient iteration”
is performed according to
∇dn φ(P) = (1/ΩP) Σ_{f=1}^{F} [ φ̄^{n−1}(c0f) + ∇d(n−1) φ(c0f) · (cf − c0f) ] Sf nf    (22)
which means that one has to store not only the solution of the previous outer iteration φn−1 but also
the gradient of that iteration ∇d(n−1) φ, which many codes do already. Since only one calculation is
performed per outer iteration, the cost of this procedure is almost as low as that of the uncorrected
DT gradient (10), but it converges to the first-order accurate ∇d∞ gradient operator instead of the
zeroth-order accurate ∇d0 . For explicit time-dependent FVM methods, Eq. (22) could be applied
with n − 1 denoting the previous time step; in the very first time step it may be necessary to perform
several “gradient iterations” to obtain a consistent gradient to begin with.
Iterative schemes that are different from (22) can also be devised; a similar iterative technique
is used in [25] in order to determine the coefficients of reconstruction polynomials without solving
large linear systems. There, Jacobi, Gauss-Seidel, and SOR-type iterations are applied, with the
latter found to be the most efficient. The present scheme (22) is reminiscent of the Jacobi iterative
procedure (although not exactly equivalent, since ∇φ(P ) appears also at the right-hand side, within
the averaged gradient, evaluated from the previous iteration). A scheme reminiscent of the Gauss-Seidel iterative procedure would be one where in the right-hand side of Eq. (22) new values of the
gradient at neighbour cells (calculated at the current outer iteration) are used whenever available
instead of using the old values. However, in the present work only the scheme (22) is tested in Section
6.1.
At this point we would like to make a brief comment concerning the calculation of the gradient at
boundary cells. The above analysis has assumed that at boundary faces the values of φ at the face
centres are available. However, this is not always the case; in situations where these values are not
available, they must be approximated to at least second-order accuracy as otherwise the DT gradient
will be inconsistent, as the preceding analysis has shown. For example, in problems with Neumann
boundary conditions the directional derivative normal to the boundary, g say, is given rather than the
boundary values. If face 5 of Fig. 1 belongs to such a boundary, then expressing φ(P ) in a Taylor
series about point c05 gives
φ(c05) = φ(P) + g‖c05 − P‖ + O(h²)    (23)

(where g is measured in the direction pointing out of the domain). Equation (23) provides a second-order accurate approximation for φ(c05), but only a first-order accurate approximation for φ(c5). Therefore, if we just use the value φ̄(c05) ≡ φ(P) + g‖c05 − P‖ in the DT gradient formula then we will
get a zeroth-order accurate gradient. Note that this will hold even for structured grids if the grid
lines intersect the boundary at an angle (c05 ≠ c5). It is not difficult to show that a second-order approximation is φ(c5) ≈ φ̄(c05) + ∇φ(P) · (c5 − c05). This introduces ∇φ(P) also in the right-hand
side of the gradient expression (8), which poses no problem for the iterative procedure (22).
Finally, we note that alternative schemes can be derived from the Gauss divergence theorem that
are consistent on unstructured grids and are worth mentioning, although the present work focuses on
the standard method. For example, consider the application of the divergence theorem method not to
the actual cell P but to the auxiliary cell marked by dashed lines in Fig. 6. The endpoints of each face
of this cell are either cell centroids or boundary face centroids, and so the value of φ can be computed
at any point on the face to second-order accuracy, by linear interpolation between the two endpoints.
Therefore, in Eq. (8) the values φ(cf ) (cf being the face centroids of the auxiliary cell, marked by
empty circles in Fig. 6) are calculated to second-order accuracy rather than first-order, leading to
first-order accuracy of the computed gradient. Second-order accuracy is inhibited also by the fact that
the midpoint integration rule (Eq. (6)) requires that the gradient be computed at the centroid of the
auxiliary cell, which now does not, in general, coincide with point P (the situation changes if P and
the centroid of the auxiliary cell tend to coincide with grid refinement). This method, along with a
more complex variant, is mentioned in [7]; it is also tested in Section 6.1. Yet another method would
be to use cell P itself rather than an auxiliary cell, but, similarly to the previous method, to calculate
the values φ(cf ) in Eq. (8) by linear interpolation from the values at the face endpoints (vertices)
rather than from the values at the cell centroids straddling the face. An extra step must therefore
precede where φ is approximated at the cell vertices to second-order accuracy from its values at the cell
centroids. This adds to the computational cost and furthermore requires of the grid data structures
to contain lists relating each vertex to its surrounding cells. More information on this method and
further references can be found in [11, 26].
Figure 6: The divergence theorem method can approximate ∇φ(P ) to first-order accuracy on arbitrarily
irregular grids if applied to the auxiliary cell bounded by dashed lines, rather than on the actual cell P . This
cell is formed by joining the centroids of the neighbouring cells and of the boundary faces of cell P by straight
line segments. The open circles denote the centroids of these segments.
4 Gradient calculation using least squares minimisation
The starting point of the “least-squares” (LS) method for calculating the gradient of a quantity φ at
the centroid of a cell P is the expression of the values of φ at the centroids of all neighbouring cells as
Taylor series expansions about the centroid of P . For convenience, we decompose the position vectors
in Cartesian coordinates as P = (x0 , y0 ), Nf = (xf , yf ) and denote ∆xf ≡ xf − x0 , ∆yf ≡ yf − y0 .
Then, for each neighbour f it follows from the Taylor expansion that
φ(Nf) − φ(P) = ∇φ(P) · (Nf − P) + ςf    (24)

where

ςf = (1/2) φ.xx (∆xf)² + φ.xy ∆xf ∆yf + (1/2) φ.yy (∆yf)² + O(h³)    (25)
is an associated truncation error (the second derivatives are evaluated at P ). The basic idea of the
method is to drop the unknown ςf terms and solve the remaining linear system by least squares since,
in general, the number of equations (F = number of neighbouring cells) will be greater than the
number of unknowns (two, φ.x (P ) and φ.y (P )). Of course some of the “neighbours” may be boundary
face points such as the centroid c5 in Fig. 1, or, in the case of Neumann boundary conditions, the
projection point c05 . In the latter case φ(c05 ) can be approximated by Eq. (23). Furthermore, unlike for
the DT method, it is now easy to incorporate additional, more distant cells as neighbours, a practice
that may prove advantageous in some cases3 (see e.g. [26]), although in the present work we will
restrict ourselves to using only the cells that share a face with P .
It is advantageous to first multiply each equation f by a suitably selected weight wf (weighted
least squares). Then the system of all these equations can be written in matrix form as
diag(w1, …, wF) · (∆φ1, ∆φ2, …, ∆φF)ᵀ = diag(w1, …, wF) · ( A (φ.x(P), φ.y(P))ᵀ + (ς1, ς2, …, ςF)ᵀ )    (26)

with the factors denoted, from left to right, W, b, W, A, z and ς, where W is the diagonal matrix of weights, A is the F × 2 matrix whose f-th row is (∆xf, ∆yf), z is the unknown gradient and ς is the vector of truncation errors.
where ∆φf ≡ φ(Nf ) − φ(P ). The alternative matrix notation W b = W (Az + ς) is also introduced
above, to facilitate the discussions that follow. The above equations are exact, with the vector ς
³ It may even be necessary in some cases. For example, some of the triangular cells at the corners of the grid in Fig. 20(b) have only one neighbour cell; if the values of φ are not available at the boundary faces (e.g. if φ is the pressure in incompressible flows) then at least one more distant cell must be used, because the LS calculation requires at least two neighbour points.
ensuring that the system (26) has a unique solution, which is independent of the weights, despite
having more equations than unknowns. The redundant equations are just linear combinations of the
non-redundant ones. But once ς is dropped, the system will, in general, have no solution.
Systems with no solution can be solved in the “least squares” sense [27,28]. In matrix notation, if a
linear system Az = b with more equations than unknowns cannot be solved because the vector b does
not lie in the column space of the matrix A then the best that can be done is to find the vector z that
minimises the error b − Az. This error will be minimised when its projection onto the column space of
A is zero, i.e. when b − Az is perpendicular to each column of A, or AT (b − Az) = 0 ⇒ AT Az = AT b
(where T denotes the transpose). This latter system (called the normal equations) has a unique
solution provided that A has independent columns. The solution z is called the least squares solution
because it minimises the L2 norm ‖b − Az‖, i.e. the sum of the squares of the individual components of the error e ≡ b − Az (note that each individual error component ei is the error of the corresponding equation i, ei = bi − Σj aij zj).
In the weighted case W Az = W b the matrix A is replaced by the product W A and b by W b, so
that the corresponding solution to the normal equations is
z = (AᵀWᵀW A)⁻¹ AᵀWᵀW b    (27)
Now the quantity minimised is the norm of the error of the weighted system, W (b − Az) = W e. Thus,
if equation i is assigned a larger weight than equation j then the method will prefer to make ei small
at the expense of ej .
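A compact sketch of the weighted least squares gradient is given below (my own code, with assumed argument names, not the paper's implementation). numpy.linalg.lstsq is applied directly to the weighted system W A z = W b; its minimiser coincides with the solution of the normal equations (27).

```python
import numpy as np

def ls_gradient(phi_P, P, neighbour_points, neighbour_values, q=1.0):
    """Weighted LS gradient at cell centre P with weights w_f = dr_f^(-q)."""
    P = np.asarray(P, float)
    A = np.array([np.asarray(N, float) - P for N in neighbour_points])  # rows (dx_f, dy_f)
    b = np.asarray(neighbour_values, float) - phi_P                      # dphi_f
    w = np.linalg.norm(A, axis=1) ** (-q)                                # w_f
    grad, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)        # minimises ||W(Az - b)||
    return grad

# example: exact for the linear function phi = 1 + 2x - 3y on an arbitrary stencil
pts  = [(0.10, 0.01), (-0.04, 0.09), (-0.07, -0.06), (0.02, -0.11)]
vals = [1 + 2 * x - 3 * y for x, y in pts]
print(ls_gradient(1.0, (0.0, 0.0), pts, vals))     # -> approx [ 2. -3.]
```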
For now, we proceed with the least squares methodology on Eq. (26) without discarding ς in order
to assess its impact on the error. We therefore first left-multiply the system (26) by (W A)ᵀ, and then solve it to obtain

z = (AᵀWᵀW A)⁻¹ AᵀWᵀW b + (AᵀWᵀW A)⁻¹ AᵀWᵀW ς    (28)
The matrix AᵀWᵀW A is invertible provided that W A has independent columns, which requires that
A have independent columns because W is diagonal. The two columns of A will be independent if
there are also two independent rows, i.e. vectors Nf − P , so we need at least two neighbours Nf to
lie at different directions with respect to P , which is normally the case (in three dimensions A has
three columns and three independent neighbour directions are needed). Then, substituting for A, W ,
z, b and ς from Eq. (26) into Eq. (28) and performing the matrix multiplications and inversions, we
arrive at
z ≡ ∇φ(P) = ∇ls φ(P) + τ    (29)
where ∇ls φ(P ) is the first term on the right side of Eq. (28),
∇ls φ(P) ≡ (1/D) M βb    (30)

where

M = [ Σf (∆yf)² wf²    −Σf ∆xf ∆yf wf² ;  −Σf ∆xf ∆yf wf²    Σf (∆xf)² wf² ],    βb = ( Σf ∆xf ∆φf wf² ,  Σf ∆yf ∆φf wf² )ᵀ,

with (1/D) M = (AᵀWᵀW A)⁻¹ and βb = AᵀWᵀW b (all sums running over f = 1, …, F), and τ is the second term:
τ = (1/D) M · ( Σf ∆xf ςf wf² ,  Σf ∆yf ςf wf² )ᵀ = (1/D) M βς    (31)

with βς = AᵀWᵀW ς. In the 2D case, D = |M|, the determinant of M. Equation (29) gives the exact
gradient ∇φ(P ), but since the truncation error τ is unknown we drop it and use the expression (30)
alone as the approximate “least squares” gradient. In the 3D case the explicit formula is more involved
than Eq. (30), but again the least squares gradient is obtained by solving Eq. (27). If, just for the
purposes of discussing the 3D case, we denote the Cartesian components of the displacement vectors as Nf − P = (∆x^1_f, ∆x^2_f, ∆x^3_f), then the (i, j) entry of the matrix AᵀWᵀW A is Σf ∆x^i_f ∆x^j_f wf² (it is a symmetric matrix) and the i-th component of the vector AᵀWᵀW b is Σf ∆x^i_f ∆φf wf². The normal
equations (27) are sometimes ill-conditioned, and in such cases it is better to solve the least-squares
system by QR factorisation of A [27].
The above derivation has produced the explicit expression (31) for the truncation error τ . To
analyse it further, we can assume that all the weights share the same dependency on the grid spacing,
namely wf = O(h^q) for some real number q (independent of f), as is the usual practice. Then the factors of Eq. (31) have the following magnitudes: since ∆xf and ∆yf are O(h), the coefficients of M have magnitude O(h^{2+2q}). Consequently, M being 2 × 2, its determinant will have magnitude (O(h^{2+2q}))² ⇒ 1/D = O(h^{−2(2+2q)}). Finally, considering that ςf = O(h²), the components of βς are of O(h^{2q+3}). Multiplying all these together, Eq. (31) shows that τ = O(h), independently of q. This
is not surprising, given that the approximation is based on Eq. (24) which assumes a linear variation
of φ in the vicinity of point P . So, ∇ls is at least first order accurate, even on grids of arbitrary
geometry.
The order of accuracy of ∇ls may be higher than one if some cancellation occurs between the
components of τ for certain grid configurations, similarly to the DT operator. In particular, a tedious
but straightforward calculation shows that when applied to a parallelogram grid such as that shown
in Fig. 2, ∇ls again reduces to the second-order accurate formula (1), provided only that the weights
of parallel faces are equal: w1 = w3 and w2 = w4 . This will hold due to symmetry if the weights are
dependent only on the grid geometry. Of course, as for the DT gradient, this has the consequence
that the LS gradient is second-order accurate also on smooth curvilinear structured grids.
The choice of weights
According to the preceding analysis, the least squares method is first-order accurate on grids of
arbitrary geometry and second-order accurate on smooth structured grids. This holds irrespective of
the choice of weights, and in fact it holds even for the unweighted method (wf = 1). The question
then arises of whether a suitable choice of weights can offer some advantage.
The weights commonly used are of the form wf = (∆rf)^{−q} where ∆rf = ‖Nf − P‖ is the distance between the two cell centres and q is an integer, usually chosen as q = 1 [11, 12, 16] or q = 2 [6, 14]. The unweighted method amounts to q = 0. As noted, the least squares method finds the approximate gradient that minimises ‖W(b − Az)‖, which for the various choices of q amounts to minimising:
q = 0 :    Σf [ ∆rf ( ∆φf/∆rf − ∇ls φ(P) · df ) ]²    (32)

q = 1 :    Σf ( ∆φf/∆rf − ∇ls φ(P) · df )²    (33)

q = 2 :    Σf [ (1/∆rf) ( ∆φf/∆rf − ∇ls φ(P) · df ) ]²    (34)
where df = (Nf − P )/∆rf is the unit vector in the direction from P to Nf , and so ∇ls φ · df is the
least squares directional derivative ∂φ/∂rf in that direction. In the q = 1 case, expression (33) shows
that the least squares procedure shows no preference in trying to set the directional derivative in each
neighbour direction f equal to the finite difference ∆φf /∆rf . On the other hand, in the unweighted
method (32) the discrepancies between the directional derivatives and the finite differences are weighted
by the distances ∆rf so that the method prefers to reduce ∆φ/∆rf − ∇ls φ · df along the directions of
the distant neighbours at the expense of the directions of the closer neighbours. The exact opposite
holds for the q = 2 case (34) where the discrepancies are weighted by 1/∆rf so that the result is
determined mostly by the close neighbours. Intuitively, this latter choice seems more reasonable as
the linearity of the variation of φ is lost as one moves away from P and thus at distant neighbours
the finite differences ∆φ/∆rf are less accurate approximations of the directional derivatives. Thus it
is not surprising that usually the q > 0 methods outperform the unweighted method.
These are the common weight choices, but it so happens that the particular non-integer exponent
q = 3/2 confers enhanced accuracy compared to the other q choices under special but not uncommon
circumstances. This fact does not appear to be widely known in the literature; we have seen it only
briefly mentioned in [18, 29]. So, consider the vector βς in the expression (31) for the error, and
substitute for ς from Eq. (25). Then the first component of βς becomes

Σf ∆xf ςf wf² = (φ.xx/2) Σf (∆xf)³ wf² + φ.xy Σf (∆xf)² ∆yf wf² + (φ.yy/2) Σf ∆xf (∆yf)² wf² + Σf O(h⁴) wf²    (35)
Now, if two neighbours, i and j say, lie at opposite directions to point P but at the same distance then
wi = wj , ∆xi = −∆xj and ∆yi = −∆yj so that their contributions in each of the first three sums in
the right hand side of Eq. (35) cancel out. If all neighbour points are arranged in such pairs, like in the grid of Fig. 2, then these three sums become zero, leaving only the Σf O(h⁴) wf² term. The same holds for the second component of βς, so that βς = O(h^{4−2q}) overall because wf = O(h^{−q}). Then Eq. (31) gives τ = (1/D) M βς = O(h²), i.e. the method is second-order accurate for any exponent q.
The particular choice wf = (∆rf)^{−3/2} amounts to dropping the wf’s from the first three sums of
the right-hand side of Eq. (35) and replacing therein every instance of ∆xf by ∆xf /∆rf and every
instance of ∆yf by ∆yf /∆rf . These ratios are precisely cos θf and sin θf , respectively, where θf is
the angle that the direction vector Nf − P makes with the horizontal direction. If two neighbours,
i and j, lie at opposite directions then θi = θj + π so that cos θi = − cos θj and sin θi = − sin θj ,
and their contributions cancel out in the aforementioned three sums, irrespective of whether these two
neighbours lie at equal distances to P or not. The same holds for the second component of βς . Thus,
if all neighbour points are arranged in such pairs then again what remains of the right-hand side of
Eq. (35) is only the last term and so τ = O(h2 ). For example, the LS gradient with q = 3/2 is
second-order accurate at the boundary volume P of Fig. 4, whereas it is only first-order accurate with
any other choice of q. Furthermore, the same result will hold if the neighbours are not arranged in
pairs at opposite directions but tend to become so with grid refinement. This is the case with smooth
structured grids, as shown in Section 2, and therefore the LS gradient with q = 3/2 is second-order
accurate at boundary cells of all smooth structured grids.
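A quick numerical spot-check of this property (my own script, not from the paper): on the Fig. 4 boundary stencil {(h, 0), (−h/2, 0), (0, h), (0, −h)} the weighted LS gradient of a smooth test function should converge at second order for q = 3/2 but only at first order for the other exponents.

```python
import numpy as np

def ls_gradient(phi, P, neighbours, q):
    A, b, w = [], [], []
    for p in neighbours:
        d = np.asarray(p, float) - P
        A.append(d); b.append(phi(*p) - phi(*P)); w.append(np.linalg.norm(d) ** (-q))
    A, b, w = np.array(A), np.array(b), np.array(w)
    # normal equations of the weighted system, Eq. (27)
    return np.linalg.solve(A.T @ (A * w[:, None] ** 2), A.T @ (b * w ** 2))

phi  = lambda x, y: np.sin(x) * np.cos(2.0 * y)
dphi = lambda x, y: np.array([np.cos(x) * np.cos(2.0 * y), -2.0 * np.sin(x) * np.sin(2.0 * y)])
P = np.array([0.3, 0.4])
for q in (0.0, 1.0, 1.5, 2.0):
    errs = []
    for h in (0.1, 0.05):
        stencil = [P + (h, 0), P + (-h / 2, 0), P + (0, h), P + (0, -h)]
        errs.append(np.linalg.norm(ls_gradient(phi, P, stencil, q) - dphi(*P)))
    print(f"q = {q}: observed order ~ {np.log2(errs[0] / errs[1]):.2f}")
```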
Another property of the q = 3/2 LS gradient is that it is second order accurate if all neighbour
points are arranged at equal angles. Unfortunately, this property holds only with more than three
neighbour points, which limits its usefulness. A proof is provided in Appendix A.
Finally, a question that arises naturally is whether full 2nd-order accuracy can be achieved on
arbitrary grids by allowing non-diagonal entries in the weights matrix. It turns out that this is indeed
possible, but yields a method that is equivalent to the least squares solution of a system of Taylor
expansions (24) with terms higher than first-order included. The procedure is sketched in Appendix B.
We do not advocate it, because direct solution of the system of higher-order Taylor expansions would
have the added advantage of solving also for the second derivatives – see e.g. [30]. An alternative
second-order accurate method is described in [31]. The second-order accurate methods are much more
expensive than the present method.
5 Numerical tests on the accuracy of the gradient schemes

5.1 One-dimensional tests
The methods are first tested on a one-dimensional problem so as to examine the effect of unevenness,
isolated from skewness. The derivative of the single-variable function φ(x) = tanh x is calculated at
101 equispaced points spanning the x ∈ [0, 2] interval. The results are compared against the exact solution φ.x = 1 − (tanh x)² and the mean absolute error Σi |φ^{ls}_{.x}(x(i)) − φ.x(x(i))|/101 is recorded for each method. In order to introduce unevenness, the neighbours of point xi are not chosen from this set of equispaced points but are set at xi,f = xi + ∆xf/2^r where the ∆xf belong to a predetermined set
Figure 7: Mean errors of the calculation of the derivative of φ(x) = tanh(x) with various methods – see the text for details. Panels: (a) {∆x^0_f} = {0.05, 0.10}; (b) {∆x^0_f} = {−0.10, 0.05}; (c) {∆x^0_f} = {−0.10, 0.05, 0.15}; (d) {∆x^0_f} = {−0.20, −0.10, 0.05, 0.15}.
of displacements {∆xf}_{f=1}^{F} (common to all xi points) and the integer r is the level of grid refinement. For example, if the chosen set of displacements is {−0.05, 0.1} (F = 2 neighbours), then the derivative at point xi will be calculated using the values of φ at the three points {xi − 0.05/2^r, xi, xi + 0.1/2^r}. The order of accuracy of each method is determined by incrementing the level of refinement r. In order to test the methods thoroughly, several sets of initial displacements {∆x^0_f}_{f=1}^{F} were used.
In Fig. 7, each diagram corresponds to a different such set and the mean error is plotted as a function
of the number of displacement halvings, r. The slope of each curve reveals the order of accuracy of
the corresponding method. The methods tested are: (a) the DT method, denoted “d”, (b) the LS
methods with weight exponents q = 0, 1, 1.5, 2 and 3, indicated on each curve, (c) the non-diagonal
weights LS method of Appendix B denoted “ND”, and (d) a simpler variant of the DT method which
is sometimes used [20], denoted “da”, where the values at face centres are calculated by arithmetic
averaging, φ(c0f ) = (φ(P ) + φ(N ))/2, instead of linear interpolation (9). The DT methods are only
applicable in the case plotted in Fig. 7(b) because exactly two neighbours are required, one on each
side of xi . To apply the DT methods xi is regarded as the centroid of a cell of size equal to the
minimum distance between xi and any of its neighbours.
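A simplified re-implementation sketch of the least-squares part of this experiment is given below (my own code; the DT variants and the full set of stencils are omitted). It uses the displacement set of panel (c) and reports the mean error and the observed order between two refinement levels.

```python
import numpy as np

def ls_derivative_1d(f, x, displacements, q):
    """1D weighted LS slope: minimise sum_f w_f^2 (dphi_f - z dx_f)^2."""
    dx = np.asarray(displacements, float)
    w2 = np.abs(dx) ** (-2.0 * q)                       # squared weights w_f^2
    return np.sum(w2 * dx * (f(x + dx) - f(x))) / np.sum(w2 * dx ** 2)

f, dfdx = np.tanh, lambda x: 1.0 - np.tanh(x) ** 2
base = np.array([-0.10, 0.05, 0.15])                    # displacement set of Fig. 7(c)
xs = np.linspace(0.0, 2.0, 101)
for q in (0.0, 1.0, 1.5, 2.0):
    errs = [np.mean([abs(ls_derivative_1d(f, x, base / 2 ** r, q) - dfdx(x))
                     for x in xs]) for r in (3, 4)]
    print(f"q = {q}: mean error {errs[1]:.2e}, observed order {np.log2(errs[0] / errs[1]):.2f}")
```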
A result not shown is that when the displacements stencil is symmetric (e.g. {∆x0f } = {−0.1, 0.1})
all methods produce identical, second-order accurate results. This is consistent with the two-dimensional
case where on symmetric grids like that of Fig. 2 all methods reduce to the formula (1). When the
stencil is not symmetric, Fig. 7 shows that the LS methods with a diagonal weights matrix become
first-order accurate, except for the q = 3/2 method which retains second-order accuracy when there
are equal numbers of neighbours on either side (Figs. 7(b) and 7(d)). The unweighted method (q = 0)
is always the least accurate; the optimum accuracy is achieved with 1 ≤ q ≤ 2, while a further increase
of q is unprofitable (q = 3). The method of Appendix B is always second-order accurate. Concerning
the DT methods (Fig. 7(b)), the method of Sec. 3, indicated by “d” in the figure, gives identical results
with the q = 1 least squares method. On the other hand, the simplified method, indicated by “da”, is
zeroth-order accurate, which agrees with the findings reported in [20].
5.2 Uniform Cartesian grids
Next, the two-dimensional methods are used to calculate the gradient of the function φ(x, y) = tanh(x)·
tanh(y) on the unit square (x, y) ∈ [0, 1] × [0, 1] using uniform Cartesian grids of different fineness.
The exact gradient is φ.x = (1 − (tanh x)²) tanh y and φ.y = (1 − (tanh y)²) tanh x. All grid cells are geometrically identical squares of side h = 0.25/2^r where r is the level of refinement; however,
boundary cells are topologically different from interior cells because they have one or more boundary
faces where the function value at the face centre has to be used (Fig. 4). It so happens that the
function tanh has zero second derivative at the boundaries x = 0 and y = 0, which may artificially
increase the order of accuracy of the methods there. However, the general behaviour of the methods
at boundary cells can be observed at the x = 1 and y = 1 boundaries where no such special behaviour
of the tanh function applies.
On each grid the gradient of φ is calculated at all cell centres using the DT method ∇d0 (Eq.
(10)), and the LS methods ∇ls (Eq. (30)) with weight exponents q = 0, 1, 1.5 and 2. Since there is
no skewness (c0 = c in Eq. (16)), the application of corrector steps is meaningless. The methods are
evaluated by comparing the mean and maximum truncation errors, defined as
τmean ≡ (1/Mr) Σ_{P=1}^{Mr} ‖∇a φ(P) − ∇φ(P)‖    (36)

τmax ≡ max_{P=1,…,Mr} ‖∇a φ(P) − ∇φ(P)‖    (37)
where ‖ · ‖ denotes the L2 norm of a vector in a single cell, Mr is the number of cells of grid r, and ∇a
is any of the gradient schemes considered. These errors are plotted in Figs. 8(a) and 8(b), respectively.
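In code, the two measures amount to a per-cell vector norm followed by a mean and a maximum; a direct transcription (assuming the approximate and exact gradients are stored as (M, 2) arrays) is:

```python
import numpy as np

def error_norms(grad_approx, grad_exact):
    """tau_mean and tau_max of Eqs. (36)-(37) for (M, 2) arrays of gradients."""
    e = np.linalg.norm(np.asarray(grad_approx) - np.asarray(grad_exact), axis=1)
    return e.mean(), e.max()
```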
The theory predicts that all methods should be second-order accurate at interior cells, i.e. τ =
O(h2 ), because they all reduce to formula (1) there. At boundary cells (Fig. 4) the favourable conditions that are responsible for second-order accuracy are lost, and the methods should revert to
first-order accuracy except for the LS method with q = 3/2 of which second-order accuracy is still
expected. Figure 8(b) confirms that the maximum error, which occurs at some boundary cell, is
τmax = O(h) for all methods, except the q = 3/2 method for which τmax = O(h2 ). Of the other
methods, the unweighted LS method (q = 0) is the least accurate and the q = 2 method is slightly
better than the q = 1 method. The DT method gives identical results with the q = 1 LS method, as
they both revert to Eq. (13) at boundary cells.
Figure 8(a) shows the mean errors. Since all methods give the same results at interior cells any
differences are due to the different errors at the boundary cells. The slope of the curves is the same,
and corresponds to τmean = O(h2 ). This does not contradict with the O(h) errors at the boundary
cells: the number of such cells along each side of the domain equals 1/h, so that in total there are
O(1/h) boundary cells, each contributing an O(h) error. Their total contribution to τmean in Eq. (36)
is therefore O(1/h)·O(h)/Mr = O(h2 ) since Mr = 1/h2 . The interior cell contribution is O(h2 ) also,
since τ = O(h2 ) at each individual interior cell.
5.3 Smooth curvilinear grids
Next, we try the methods on smooth curvilinear grids. The same function φ = tanh(x) · tanh(y) is
differentiated, but the domain boundaries now have the shapes of two horizontal and two vertical
Figure 8: The mean (a) and maximum (b) errors (defined by Eqs. (36) and (37), respectively) of the various methods for calculating the gradient of the function φ = tanh(x) tanh(y) on uniform Cartesian grids. The abscissa r designates the grid; grid r has a uniform spacing of h = 0.25/2^r. The exponent q used for the LS weights is shown on each curve. The DT method produces identical results with the q = 1 LS method.
sinusoidal waves (Fig. 9), beginning and ending at the points (0, 0), (1, 0), (1, 1) and (0, 1). The grid
is generated using a very basic elliptic grid generation method [32]. In particular, smoothly varying
functions ξ(x, y) and η(x, y) are assumed in the domain, and the grid consists of lines of constant ξ
and of constant η. The left, right, bottom and top boundaries correspond to ξ = 0, ξ = 1, η = 0
and η = 1, respectively. In the interior of the domain ξ and η are assumed to vary according to the
following Laplace equations:
ξ.xx + ξ.yy = 0
η.xx + η.yy = 0
These equations guarantee that ξ and η vary smoothly in the domain, but their solution ξ =
ξ(x, y), η = η(x, y) is not much help in constructing the grid. Instead, we need the inverse functions
x = x(ξ, η), y = y(ξ, η) which explicitly set the locations of all grid nodes; node (i, j) is located at
(xi,j , yi,j ) ≡ (x(ξ = i ∆ξ, η = j ∆η), y(ξ = i ∆ξ, η = j ∆η)). With ξ, η ∈ [0, 1], the constant spacings
∆ξ and ∆η are adjusted according to the desired grid fineness. Therefore, using the chain rule of
partial differentiation it can be shown that the above equations can be expressed in inverse form as
g22 x.ξξ − 2g12 x.ξη + g11 x.ηη = 0
g22 y.ξξ − 2g12 y.ξη + g11 y.ηη = 0
where
g11 = x.ξ² + y.ξ²,    g22 = x.η² + y.η²,    g12 = x.ξ x.η + y.ξ y.η
In order to cluster the points near the boundaries in the physical domain, we accompany the above
equations with the following boundary conditions: at the bottom boundary we set x = 0.5+0.5 sin(π(ξ−
0.5)) and y = sin(2πx), and at the left boundary we set y = 0.5+0.5 sin(π(η−0.5)) and x = − sin(2πy).
At the top and right boundaries we set the same conditions, respectively, adding 1 to x at the right
boundary and 1 to y at the top boundary. Better results can be obtained by using a more elaborate
method such as described in [33], but this suffices for the present purposes.
Figure 9: The first four of the series of grids (panels (a)–(d): r = 0, 1, 2, 3) constructed on a domain with sinusoidal boundaries via elliptic grid generation. See the text for more details.
Now (x, y) have become the dependent variables while (ξ, η) are the independent variables, which
acquire values in the unit square [0, 1] × [0, 1]. The grid equations were solved numerically with a finite
difference method on a 513 × 513 point uniform Cartesian grid. Note that the dependent variables
(x, y) are stored at the grid nodes, i.e. at the intersection points of the grid lines, instead of at the cell
centres. The derivatives are approximated by second-order accurate central differences; for example,
at point (i, j), x.ξ ≈ (xi+1,j −xi−1,j )/2h, x.η ≈ (xi,j+1 −xi,j−1 )/2h, x.ξξ ≈ (xi+1,j −2xi,j +xi−1,j )/h2 etc.
where h = 1/512 is the grid spacing. The resulting system of nonlinear algebraic equations was solved
using a Gauss-Seidel iterative method where in the equations of the (i, j) node all terms are treated
as known from their current values except for xi,j and yi,j which are solved for. The convergence of
the method was accelerated using a minimal polynomial extrapolation technique [34], and iterations
were carried out until machine precision was reached.
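For orientation, a compact sketch of this kind of elliptic (Winslow-type) grid generation is given below. It is my own simplified version: the boundary shapes, amplitudes and grid size are assumptions chosen for brevity, the paper's exact boundary conditions and the extrapolation-based convergence acceleration are omitted, and a fixed number of Gauss-Seidel sweeps is used instead of iterating to machine precision.

```python
import numpy as np

n = 17                                       # nodes per direction (assumption)
xi = eta = np.linspace(0.0, 1.0, n)
x, y = np.meshgrid(xi, eta, indexing="ij")   # initial guess: the unit square

# boundary node positions (simplified sinusoidal boundaries, not the paper's exact ones)
amp = 0.1
x[:, 0],  y[:, 0]  = xi, amp * np.sin(2 * np.pi * xi)          # bottom (eta = 0)
x[:, -1], y[:, -1] = xi, 1.0 + amp * np.sin(2 * np.pi * xi)    # top    (eta = 1)
x[0, :],  y[0, :]  = amp * np.sin(2 * np.pi * eta), eta        # left   (xi = 0)
x[-1, :], y[-1, :] = 1.0 + amp * np.sin(2 * np.pi * eta), eta  # right  (xi = 1)

for sweep in range(800):                     # fixed number of Gauss-Seidel sweeps
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            x_xi,  y_xi  = (x[i+1, j] - x[i-1, j]) / 2, (y[i+1, j] - y[i-1, j]) / 2
            x_eta, y_eta = (x[i, j+1] - x[i, j-1]) / 2, (y[i, j+1] - y[i, j-1]) / 2
            g11, g22 = x_xi**2 + y_xi**2, x_eta**2 + y_eta**2
            g12 = x_xi * x_eta + y_xi * y_eta
            for u in (x, y):                 # same update for both coordinates
                cross = (u[i+1, j+1] - u[i+1, j-1] - u[i-1, j+1] + u[i-1, j-1]) / 4
                u[i, j] = (g22 * (u[i+1, j] + u[i-1, j]) + g11 * (u[i, j+1] + u[i, j-1])
                           - 2 * g12 * cross) / (2 * (g11 + g22))
```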
After obtaining x(ξ, η) and y(ξ, η), a series of successively refined grids in the physical domain were
constructed by drawing lines of constant ξ and lines of constant η at intervals ∆ξ = ∆η = 0.25/2^r, for
r = 0, 1, . . . 7. The first four grids are shown in Fig. 9. The gradient calculation methods were then
applied on each of these grids. The errors of each method are depicted in Fig. 10.
These grids exhibit all kinds of grid irregularity, but the unevenness and the skewness diminish with
grid refinement, as explained in Section 2. In particular, Table 1 shows that the measures of both these
grid qualities have magnitudes of O(h). These measures were defined in Sec. 2: ‖c0f − mf‖/‖Nf − P‖ for unevenness and ‖cf − c0f‖/‖Nf − P‖ for skewness (see Fig. 1 for definitions). The values listed in
Table 1 are the average values among all faces of each grid, excluding boundary faces. Therefore, it is
Figure 10: The mean (a) and maximum (b) errors (defined by Eqs. (36) and (37), respectively) of the gradient schemes applied on the function φ = tanh(x) tanh(y) on smooth curvilinear grids (Fig. 9). The abscissa r designates the grid; r = 0 is the coarsest grid (Fig. 9(a)), and grid r comes from subdividing every cell of grid r − 1 into 4 child cells in the computational space (see text). The blue solid lines correspond to the LS methods with q = 0, 1 and 2 as indicated on each curve; the blue dash-dot line corresponds to the LS method with q = 3/2; and the red dashed lines correspond to the DT methods dc where c is the number of corrector steps.
expected that eventually the methods will behave as in the Cartesian case. Indeed, Fig. 10(b) shows
that as the grid is refined τmax tends to decrease at a first-order rate for all methods, except for the
least squares method with q = 3/2, for which it decreases at a second-order rate. Accordingly, Fig.
10(a) shows that τmean tends to decrease at a second-order rate for all methods, with the second-order
rate attained earlier by the q = 3/2 method. Therefore, like on the Cartesian grids, the methods are
second-order accurate at interior cells but revert to first-order accuracy at boundary cells, except for
the q = 3/2 method.
Of the LS methods the least accurate is the unweighted method, followed by the q = 2 method.
The undisputed champion is the q = 3/2 method because, as mentioned, it retains its second-order
accuracy even at boundary cells. Concerning the DT methods, the method with no corrections (Eq.
10) performs similarly to the unweighted LS method. Application of a corrector step (Eq. (17)) now
does make a difference, since skewness is present at any finite grid density, bringing the accuracy of
the method on a par with the best weighted LS methods, except of course the q = 3/2 method. We
also tried a second corrector step but it did not bring any noticeable improvement.
Next, on the same grids we also applied discrete gradient operators to calculate the gradient of a
linear and of a quadratic function. The schemes tested are the DT gradient operators ∇d0 , ∇d1 and
∇d∞ (the latter approximated with 100 correction steps), and the LS gradient operators with q = 0,
1 and 3/2, denoted as ∇ls0 , ∇ls1 and ∇ls3/2 , respectively, in Tables 1 and 2. “Exactness” is sometimes
used as an aid to either determine the order of accuracy of a method or to design a method to achieve
a desired order of accuracy (e.g. Appendix B): a first-order accurate gradient scheme would normally
be exact for linear functions, and a second-order accurate gradient scheme would normally be exact
for quadratic functions. However, the results listed in Tables 1 and 2 show that in the present case the
DT gradient is not exact even for linear functions while the LS gradient is exact for linear functions
but not for quadratic functions, despite both methods being second-order accurate.
In particular, Table 1 shows that the DT gradient without corrector steps (∇d0 ) is not exact for the
linear function (the errors are not zero) but converges to the exact gradient at a rate that approaches
second-order as the grid is refined. Performing a corrector step (∇d1 ) brings a significant improvement
in accuracy, with an observed convergence rate order of between 2 and 3, but still the operator is not
exact. This inexactness is anticipated since the grids are skewed and the DT scheme cannot cope with
skewness. However, grid refinement causes skewness to diminish and the DT accuracy to improve at
a second-order rate. In the limit of many corrector steps (∇d∞ ) the operator becomes exact, with the
errors at machine precision levels even at the coarsest grid. All of the LS schemes are also exact for
the linear function.
Concerning the quadratic function (Table 2), none of the schemes is exact but they are all second-order accurate, with the q = 3/2 LS scheme being the most accurate, and the q = 0 LS and zero-correction DT schemes being the least accurate. A single corrector step in the DT scheme (∇d1) brings
the maximum attainable improvement, since the error levels of ∇d1 and ∇d∞ are nearly identical.
The second-order convergence rates are due to the improvement of grid quality with refinement, as
explained in Sections 3 and 4.
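To make the notion of "exactness" concrete, the following minimal Python sketch (not the code used for this work; the stencil coordinates and the helper name ls_gradient are hypothetical) evaluates a generic weighted least-squares gradient on a small irregular stencil for the two test functions of Tables 1 and 2: the linear field is reproduced to machine precision, while the quadratic one is not, even though the scheme is second-order accurate.

import numpy as np

def ls_gradient(P, neighbours, phi, q=0.0):
    # weighted least-squares gradient at P: minimise sum_f w_f^2 (dphi_f - d_f . g)^2
    d = np.asarray(neighbours, dtype=float) - np.asarray(P, dtype=float)
    dphi = np.array([phi(*N) for N in neighbours]) - phi(*P)
    w = np.linalg.norm(d, axis=1) ** (-q)          # inverse-distance weights w_f = |d_f|^(-q)
    g, *_ = np.linalg.lstsq(d * w[:, None], dphi * w, rcond=None)
    return g

P = (0.2, 0.1)
stencil = [(0.5, 0.15), (0.25, 0.4), (-0.1, 0.05), (0.15, -0.2)]   # hypothetical skewed stencil

linear    = lambda x, y: x + 2.0 * y + 0.5            # exact gradient (1, 2) everywhere
quadratic = lambda x, y: x**2 + 2.0 * x * y - y**2    # exact gradient at P: (0.6, 0.2)

print(ls_gradient(P, stencil, linear))      # reproduced to machine precision
print(ls_gradient(P, stencil, quadratic))   # close to (0.6, 0.2) but not exact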
Table 1: Mean errors (Eq. (36)) of various schemes for calculating the gradient ∇φ = (1, 2) of the linear
function φ(x, y) = x + 2y + 0.5 on the series of grids shown in Fig. 9. Also displayed are the measures of grid
skewness and unevenness (defined in Sec. 2), averaged over all faces of each grid excluding boundary faces.
Grid r   Skew.      Unev.      ∇d0        ∇d1        ∇d∞         ∇ls0        ∇ls1        ∇ls3/2
0        1.33·10−1  1.10·10−1  4.16·10−1  8.00·10−2  2.19·10−15  6.04·10−16  7.39·10−16  9.37·10−16
1        5.68·10−2  8.00·10−2  1.86·10−1  1.90·10−2  4.46·10−15  1.34·10−15  1.45·10−15  1.68·10−15
2        3.03·10−2  4.83·10−2  6.93·10−2  4.80·10−3  9.53·10−15  2.43·10−15  2.41·10−15  2.81·10−15
3        1.64·10−2  2.70·10−2  2.41·10−2  8.91·10−4  1.93·10−14  4.42·10−15  4.52·10−15  4.84·10−15
4        8.44·10−3  1.43·10−2  7.72·10−3  1.16·10−4  3.97·10−14  8.74·10−15  8.77·10−15  9.07·10−15
5        4.29·10−3  7.41·10−3  2.32·10−3  1.43·10−5  7.98·10−14  1.70·10−14  1.71·10−14  1.74·10−14
6        2.16·10−3  3.78·10−3  6.64·10−4  1.94·10−6  1.60·10−13  3.36·10−14  3.38·10−14  3.40·10−14
7        1.09·10−3  1.91·10−3  1.86·10−4  3.25·10−7  3.20·10−13  6.69·10−14  6.70·10−14  6.72·10−14
Table 2: Mean errors (Eq. (36)) of various schemes for calculating the gradient ∇φ = (2x + 2y, 2x − 2y) of the
quadratic function φ(x, y) = x2 + 2xy − y 2 on the series of grids shown in Fig. 9.
Grid r   ∇d0        ∇d1        ∇d∞        ∇ls0       ∇ls1       ∇ls3/2
0        4.95·10−1  2.28·10−1  2.29·10−1  3.54·10−1  1.94·10−1  1.47·10−1
1        2.20·10−1  1.27·10−1  1.29·10−1  1.72·10−1  8.16·10−2  5.81·10−2
2        8.38·10−2  4.48·10−2  4.47·10−2  8.09·10−2  3.13·10−2  2.24·10−2
3        2.84·10−2  1.46·10−2  1.46·10−2  3.20·10−2  1.12·10−2  7.43·10−3
4        8.99·10−3  4.63·10−3  4.64·10−3  1.14·10−2  3.76·10−3  2.05·10−3
5        2.66·10−3  1.40·10−3  1.40·10−3  3.75·10−3  1.19·10−3  5.28·10−4
6        7.45·10−4  4.00·10−4  4.01·10−4  1.15·10−3  3.50·10−4  1.35·10−4
7        2.07·10−4  1.10·10−4  1.10·10−4  3.29·10−4  9.74·10−5  3.41·10−5
5.4 Grids of localised high distortion
Structured grids that are constructed not by solving partial differential equations, as in Section 5.3,
but by algebraic methods may lack the property that unevenness and skewness diminish with grid
refinement. This is especially true if the domain boundaries include sharp corners at points other than
grid line endpoints. For example, the grid of Fig. 11 is structured, consisting of piecewise straight
lines. At the line joining the sharp corners, the intersecting grid lines change direction abruptly. This
causes significant skewness which is unaffected by grid refinement.
Figure 11: A structured grid where the grid lines belonging to one family change direction abruptly at the dashed line joining the pair of sharp corners; the resulting grid skewness does not diminish with grid refinement.
(a) r = 0
(b) r = 1
(c) r = 2
Figure 12: A series of multi-level, or composite, grids. Each grid r comes from the previous grid r − 1 by
evenly subdividing each cell into four child cells.
A similar situation may occur when adaptive mesh refinement is used, depending on the treatment
of the interaction between levels. Figure 12 shows multi-level grids, which consist of regions of different
fineness. Such grids are often called composite grids [35]. One possible strategy is to treat the cells at
the level interfaces as topologically polygonal [8, 29, 36]. For example, cell P of Fig. 13(a) has 6 faces,
each separating it from a single other cell. Its face f1 separates it from cell N1 which belongs to the
finer level. Faces such as f1 , which lie on grid level interfaces, exhibit non-orthogonality, unevenness,
and skewness. If the grid density is increased throughout the domain, as in the series of grids shown
in Fig. 12, then these interface distortions remain insensitive to the grid fineness, like for the marked
line in Fig. 11. Alternative schemes exist which avoid changing the topology of the cells by inserting
a layer of transitional cells between the coarse and the fine part of the grid (e.g. [37, 38]) but they also
lead to high, non-diminishing grid distortions at the interface.
We computed the gradient of the same function φ(x, y) = tanh(x) tanh(y) on a series of composite
grids the first three of which are shown in Fig. 12. Figure 14 shows how τmean and τmax vary with grid
refinement. This time, τmean is defined a little differently than Eq. (36) to account for the different grid
levels: the error of each individual cell is weighted by the cell’s volume (i.e. the area, in the present
two-dimensional setting):
\tau_{\mathrm{mean}} \equiv \frac{1}{\Omega} \sum_{P=1}^{M_r} \Omega_P \,\left\| \nabla_a \phi(P) - \nabla\phi(P) \right\| \qquad (38)
where Ω is the total volume of the domain and ΩP is the volume of cell P .
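As an illustration (not the authors' code), a possible implementation of these error measures is sketched below in Python; it assumes the numerical and exact gradients are stored row-wise per cell and takes Ω as the sum of the cell volumes.

import numpy as np

def gradient_error_norms(grad_num, grad_exact, volumes):
    # grad_num, grad_exact: (M, 2) arrays of cell-centred gradients; volumes: (M,) cell volumes
    err = np.linalg.norm(grad_num - grad_exact, axis=1)    # ||grad_a phi(P) - grad phi(P)||
    tau_mean = np.sum(volumes * err) / np.sum(volumes)     # volume-weighted mean error, Eq. (38)
    tau_max = err.max()                                    # maximum error, Eq. (37)
    return tau_mean, tau_max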
We can identify three classes of cells that are topologically different. Apart from the familiar
classes of interior and boundary cells, there is now also the class of cells that touch the level interfaces,
which shall be called interface cells (these belong to two sub-classes, coarse- and fine-level cells, as
in Figs. 13(a) and 13(b), respectively). Interface cells possess high skewness and unevenness that do
not diminish with grid refinement. The behaviour of the gradient-calculation methods at interior and
(a) coarse cell
(b) fine cell
Figure 13: Topological and geometrical characteristics of a coarse cell (a) and of a fine cell (b) adjacent to a
level interface, in a composite grid.
boundary cells has already been tested in Sections 5.2 and 5.3, so our interest now focuses on interface
cells. Skewness, the most detrimental grid distortion, is encountered only at those and therefore it is
there that the maximum errors (Fig. 14(b)) occur.
In Fig. 14(b) we observe that at the level interfaces all the LS methods converge to the correct
solution at a first-order rate, τmax being lowest for the q = 1 method, followed closely by the q = 3/2
and q = 2 methods. On the other hand, none of the DT methods converge to the correct solution
there, although nearly an order of magnitude accuracy improvement is obtained with each corrector
step. We can determine the operator to whom ∇d0 converges as follows. For the interface cell P of
Fig. 13(b), formula (10) amounts to the following series of approximations:
\phi_{.x}(P) \approx \frac{\phi(c_1) - \phi(c_3)}{h} \approx \frac{\phi(c'_1) - \phi(c_3)}{h} \approx \frac{\left[\alpha\,\phi(N_1) + (1-\alpha)\,\phi(P)\right] - \left[0.5\,\phi(P) + 0.5\,\phi(N_3)\right]}{h} \equiv \phi^{d0}_{.x}(P) \qquad (39)
where h is the length of the side of cell P . In the last step, the values of φ at points c01 and c3 were
approximated with linear interpolation between points N1 and P , and P and N3 , respectively; α is
an interpolation factor which equals α = 0.3 for the present geometry. We then substitute in (39)
φ(N1 ) and φ(N3 ) with their two-dimensional Taylor series about P , considering that if P = (x0 , y0 )
then N1 = (x0+3h/2, y0−h/2) and N3 = (x0−h, y0 ) – see Fig. 13(b). The following result is obtained:
\phi^{d0}_{.x}(P) = \frac{3\alpha + 1}{2}\,\phi_{.x}(P) - \frac{\alpha}{2}\,\phi_{.y}(P) + O(h)

Therefore, as h → 0, φ^{d0}_{.x} converges not to φ_{.x} but to an operator that involves both φ_{.x} and φ_{.y}.
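This limit can be checked symbolically. The short sympy sketch below is only an illustration: the polynomial coefficients c_x, c_y, c_xx, ... stand in for the Taylor derivatives of φ at P, and the geometry is that quoted in the text for the fine-level interface cell of Fig. 13(b).

import sympy as sp

h, a = sp.symbols('h alpha')
c0, cx, cy, cxx, cxy, cyy = sp.symbols('c0 c_x c_y c_xx c_xy c_yy')

# quadratic Taylor polynomial of phi about P = (0, 0); c_x, c_y play the role of phi_.x, phi_.y
phi = lambda X, Y: (c0 + cx*X + cy*Y + sp.Rational(1, 2)*cxx*X**2
                    + cxy*X*Y + sp.Rational(1, 2)*cyy*Y**2)

# numerator of the uncorrected DT estimate (39) at the interface cell
num = (a*phi(sp.Rational(3, 2)*h, -h/2) + (1 - a)*phi(0, 0)) \
      - (phi(0, 0) + phi(-h, 0)) / 2
print(sp.limit(sp.expand(num / h), h, 0))
# equivalent to c_x*(3*alpha + 1)/2 - alpha*c_y/2, i.e. the operator quoted above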
Next we examine the mean error in Fig. 14(a). The plot can be interpreted by considering separately the error contributions of each class of cells. The contributions of interior and boundary cells
to τmean are both O(h2 ), as discussed in Section 5.2 (for the q = 3/2 method the boundary cell
contribution is O(h3 )).
At interface cells the LS methods behave similarly as on boundary cells, because they produce
O(h) errors there as well due to unevenness and skewness. The total length of the level interfaces is
constant, O(1). The number of interface cells is O(h−1 ) because it equals this constant length divided
by the cell size which is O(h). Their contribution to the mean error in Eq. (38) is (number of cells) ×
(volume of one cell) × (error at a cell) = O(h−1 ) × O(h2 ) × O(h) = O(h2 ). This is confirmed by Fig.
14(a), where all the LS methods converge to the exact solution at a second-order rate.
(a) mean error, τmean
(b) maximum error, τmax
Figure 14: The mean (a) and maximum (b) errors (defined by Eqs. (38) and (37), respectively) of the gradient
calculation methods applied to the function φ = tanh(x) tanh(y) on locally refined grids (Fig. 12). The abscissa
r designates the grid; r = 0 is the coarsest grid (Fig. 12(a)), and grid r comes from subdividing every cell of
grid r − 1 into 4 identical child cells. The blue solid lines correspond to the LS methods with weight exponents
q = 0, 1 and 2, which are indicated on each curve; the blue dash-dot line corresponds to the LS method with
q = 3/2; and the red dashed lines correspond to the DT methods dc where c is the number of corrector steps.
On the other hand, for the DT methods the contribution of interface cells to τmean is (number of
cells) × (volume of one cell) × (error at a cell) = O(h−1 ) × O(h2 ) × O(1) = O(h). Figure 14(a) shows
that for the d0 method (no corrector steps) this O(h) component is so large that it dominates τmean
even at coarse grids. For the d1 method (one corrector step), at coarse grids this O(h) component
is initially small compared to the bulk O(h2 ) component that comes from all the other cells, so that
τmean appears to decrease at a second-order rate up to a refinement level of r = 3; but eventually it
becomes dominant and beyond r = 5 the d1 curve is parallel to the d0 curve, with a first-order slope.
With two corrector steps, the O(h) component is so small that up to r = 7 it is completely masked
by the O(h2 ) component and the method appears to be second-order accurate. More grid refinements
are necessary to reveal its asymptotic first-order accuracy.
An observation that raises some concern in Fig. 14(a) is that the DT methods outperform the LS
methods at those grids where they have not yet degraded to first order. This suggests that there may
be some room for improvement in the latter. A potential source of the problem is suggested by Fig.
13(a). Along the horizontal direction, cell P has one cell on its left side (N5 ) and two cells on its right
side (N1 and N2 ). Since points N1 and N2 are quite close to each other there is some overlap in the
information they convey. Yet the weights of the LS method depend only on the distance of Nf from
P , while any clustering of the Nf points in some direction is not taken into account. Thus, points N1
and N2, being closer to P than N5, may individually contribute equally (q = 1) or more (q > 1) to the
calculation of the gradient at P than point N5 does. Combined they contribute much more. So, the
horizontal component of the gradient is calculated using mostly information from the right of cell P ,
whereas information from its left is undervalued. The DT methods do not suffer from this deficiency
because they weigh the contribution of each point by the area of the respective face; faces f1 and f2 are half the size of f5, and so points N1 and N2 together contribute to the gradient approximately
as much as N5 alone does.
In order to seek a remedy, we investigate the contributions of points N1 , N2 and N5 (Fig. 13(a))
to the vector βς of the error expression (31). As for Eq. (35), we substitute for ςf from Eq. (25) into
the expression for βς , but then proceed to substitute ∆xf = ∆rf cos θf , ∆yf = ∆rf sin θf . For the
particular points under consideration we have, with reference to Fig. 13(a), θ1 = −ϑ, θ2 = ϑ and
θ5 = π, while ∆r2 = ∆r1 . Finally, concerning the weights, since cells P and N5 both belong to the
(a) mean error, τmean
(b) maximum error, τmax
Figure 15: As for Fig. 14, but with the weights (41) applied at level interfaces in the case of the LS methods.
same grid level we choose not to tamper with the weight w5 and simply use the possibly second-order
accurate q = 3/2 scheme: w5 = (∆r5 )−3/2 . Due to symmetry we set w2 = w1 and our goal is to
suitably select w1 to achieve a small error. Putting everything together, the joint contribution of these
three points to βς is, neglecting higher order terms,
\begin{pmatrix} \phi_{.xx}\left[(\cos\vartheta)^3 (\Delta r_1)^3 w_1^2 - \tfrac{1}{2}\right] + \phi_{.yy}\,(\sin\vartheta)^2 \cos\vartheta\, (\Delta r_1)^3 w_1^2 \\[4pt] 2\,\phi_{.xy}\cos\vartheta\,(\sin\vartheta)^2 (\Delta r_1)^3 w_1^2 \end{pmatrix} \qquad (40)
Since the values of the higher order derivatives can be arbitrary, it is obvious that the above contribution does not become zero for any choice of w1; thus, second-order accuracy cannot be achieved. The best that can be done is to choose w1 = (cos ϑ)−3/2 (∆r1)−3/2/√2 so that the term in parentheses
multiplied by φ.xx becomes zero. In fact, since this choice is not guaranteed to minimise the error
and since cos ϑ ≈ 1 for relatively small ϑ, we chose to drop the cos ϑ factor and applied the following
scheme to all least squares methods:
w_f = \begin{cases} \|N_f - P\|^{-q} & \text{face } f \text{ does not touch a finer level} \\[4pt] \dfrac{1}{\sqrt{2}}\,\|N_f - P\|^{-q} & \text{face } f \text{ touches a finer level} \end{cases} \qquad (41)
(In the three-dimensional case, cell P of Fig. 13(a) would have four fine-level neighbours on its right and the 1/√2 factor in (41) would become 1/√4 = 1/2.) Using this scheme, we obtained the results
shown in Fig. 15, which can be seen to be better than those of Fig. 14, especially for the q = 3/2 and
q = 2 methods which now rival the d2 method with respect to the mean error.
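A minimal Python sketch of how the weights (41) could be applied within a weighted LS gradient is given below. It is only an illustration under stated assumptions (two dimensions, hypothetical function names, and a list of flags `finer` marking faces that touch a finer level); it is not the code used for the results of Fig. 15.

import numpy as np

def interface_weight(P, Nf, q, touches_finer_level):
    # Eq. (41): inverse-distance weight, reduced by 1/sqrt(2) across a level interface
    w = np.linalg.norm(np.asarray(Nf, dtype=float) - np.asarray(P, dtype=float)) ** (-q)
    return w / np.sqrt(2.0) if touches_finer_level else w

def ls_gradient_interface(P, neighbours, phi_P, phi_N, q=1.5, finer=None):
    # weighted LS gradient at cell centre P (2D); finer[f] flags faces touching a finer level
    P = np.asarray(P, dtype=float)
    N = np.asarray(neighbours, dtype=float)
    finer = [False] * len(N) if finer is None else finer
    w = np.array([interface_weight(P, N[f], q, finer[f]) for f in range(len(N))])
    A = (N - P) * w[:, None]
    b = (np.asarray(phi_N, dtype=float) - phi_P) * w
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g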
The modification (41) is easy and inexpensive. More elaborate methods could be devised, applicable to more general grid configurations, in order to properly deal with the issue of point clustering at certain angles. For example, a point Nf could be assigned to span a “sector” of angle
∆θf = |θf +1 − θf −1 |/2 (assuming that the points are numbered in either clockwise or anti-clockwise
order); the sum of all these sectors would then equal 2π, and they could be incorporated into the
weights such that points with smaller sectors would have less influence over the solution. Alternatively, the face areas could be incorporated into the weights, as in the DT method [18].
5.5 Grids with arbitrary distortion
As mentioned in Section 3, general-purpose unstructured grid generation methods result in unevenness
and skewness that are insensitive to grid fineness throughout the domain, not just in isolated regions
(a) r = 0
(b) r = 1
(c) r = 2
Figure 16: A series of excessively distorted grids. Grid r is constructed by random perturbation of the nodes
of Cartesian grid r + 1 of Section 5.2.
as for the grids of Section 5.4. Therefore, in the present Section the methods are tested under these
conditions; grids of non-diminishing distortion were generated by randomly perturbing the vertices
of a Cartesian grid. Using such a process we constructed a series of grids, the first three of which
are shown in Fig. 16. The perturbation procedure is applied as follows: Suppose a Cartesian grid
with grid spacing h. If node (i, j) has coordinates (xij , yij ) then the perturbation procedure moves
it to a location (x′_ij , y′_ij) = (x_ij + δx_ij , y_ij + δy_ij), where δx_ij and δy_ij are random numbers in the interval
[−0.25h, 0.25h). Because all perturbations are smaller than h/4 in both x and y, it is ensured that all
grid cells remain simple convex quadrilaterals after all vertices have been perturbed. Grids based on
triangles as well as three-dimensional cases will be considered in Section 6.
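A possible implementation of this perturbation procedure is sketched below in Python (illustrative only; the function name, the random-number generator choice and the option that leaves boundary vertices unperturbed, used later for the grids of Sec. 6.1, are our own assumptions).

import numpy as np

def perturbed_grid(n, h, amp=0.25, keep_boundary_fixed=False, seed=0):
    # vertices of an n x n-cell Cartesian grid, each moved by a random offset in [-amp*h, amp*h)
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(n + 1) * h, np.arange(n + 1) * h, indexing='ij')
    dx = rng.uniform(-amp * h, amp * h, size=x.shape)
    dy = rng.uniform(-amp * h, amp * h, size=y.shape)
    if keep_boundary_fixed:             # variant with straight boundaries, as in Sec. 6.1
        for d in (dx, dy):
            d[0, :] = d[-1, :] = 0.0
            d[:, 0] = d[:, -1] = 0.0
    return x + dx, y + dy               # amp = 0.25 keeps all cells simple convex quadrilaterals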
The gradient of the same function φ = tanh(x) tanh(y) is calculated, and the errors are plotted in
Fig. 17. This time, all cells belong to a common category. Concerning the mean error, Fig. 17(a) is in
full agreement with the theory. The LS methods converge to the exact gradient at a first-order rate,
with the unweighted method being, as usual, the least accurate, while the differences between the
weighted methods are very slight. On the other hand, the DT methods, as expected, do not converge
to the exact gradient. Performing corrector steps improves things, but in every case the convergence
eventually stagnates at some grid fineness.
As far as the maximum errors are concerned, Fig. 17(b) shows that those of the LS methods
decrease at a first order rate, with the unweighted method being the least accurate. On the other
hand, the maximum errors of the DT methods actually increase with grid refinement. Presumably this
is due to the fact that as the number of grid nodes is increased the probability of encountering higher
degrees of skewness somewhere in the domain increases. Performing corrector steps reduces the error,
but it is interesting to note that grid refinement causes a somewhat larger error increase when more
corrector steps are performed. The deterioration of the ∇d0 method with grid refinement propagates
across the iterative correction procedure and eventually it is expected that the errors produced by
the ∇dc operator used as a predictor will become so large that it will provide less (rather than more)
accurate face centre values to the resulting “corrected” operator ∇d(c+1) , making the latter worse than
∇dc itself.
Similarly to Sec. 5.3 we also tested the gradient schemes on a linear function. The results are listed
in Table 3, together with grid quality metrics. Table 3 confirms that in the present case grid skewness
and unevenness are roughly independent of grid refinement, i.e. their measures are O(1), whereas they
were O(h) in the structured grid case (Table 1). As a result, the DT gradient schemes with a finite
number of corrector steps (∇d0 and ∇d1 in Table 3) now do not converge to the exact gradient. In
the limit of infinite corrector steps the DT gradient (∇d∞ ) becomes exact for linear functions, as are
all the LS gradient schemes (∇ls0 and ∇ls1 in Table 3).
(a) mean error, τmean
(b) maximum error, τmax
Figure 17: The mean (a) and maximum (b) errors (defined by Eqs. (36) and (37), respectively) of the gradient
calculation methods applied on the function φ = tanh(x) tanh(y) on the series of globally distorted grids (Fig.
16). The blue solid lines correspond to the least squares methods with weight exponents q = 0, 1 and 2, which
are indicated on each curve; the blue dash-dot line corresponds to the least squares method with q = 3/2; and
the red dashed lines correspond to the divergence theorem methods dc where c is the number of corrector steps.
6 Use of the gradient schemes within finite volume PDE solvers
So far we have examined the gradient schemes per se, assessing their truncation error through mathematical tools and numerical experiments. Although there are examples of independent use of a
gradient scheme such as in post-processing, the application of main interest is within finite volume
methods (FVMs) for the solution of partial differential equations (PDEs). The gradient scheme is
but a single component of the FVM and how it affects the overall accuracy depends also on the PDE
solved as well as on the rest of the FVM discretisation. In the present section we provide some general
comments and some simple demonstrations. The focus is on the effect of the gradient scheme on
unstructured grids, where the DT scheme was shown to be inconsistent; on such grids the tests of Sec.
5.5 showed that the weights exponent of the LS method plays a minor role and so we only test the
q = 1 LS variant.
We begin with the observation that the approximation formula (10) is very similar to the formulae
used by FVMs for integrating convective terms of transport equations over grid cells. Therefore,
Table 3: Mean errors (Eq. (36)) of various schemes for calculating the gradient ∇φ = (1, 2) of the linear
function φ(x, y) = x + 2y + 0.5 on the series of grids shown in Fig. 16. Also displayed are the measures of grid
skewness and unevenness (defined in Sec. 2), averaged over all faces of each grid excluding boundary faces.
Grid r   Skew.      Unev.      ∇d0        ∇d1        ∇d∞         ∇ls0        ∇ls1
0        4.79·10−2  4.94·10−2  1.99·10−1  1.55·10−2  3.83·10−15  9.84·10−16  1.05·10−15
1        5.46·10−2  5.32·10−2  2.43·10−1  1.96·10−2  7.29·10−15  1.55·10−15  1.59·10−15
2        4.97·10−2  5.14·10−2  2.17·10−1  1.61·10−2  1.48·10−14  3.07·10−15  3.15·10−15
3        5.16·10−2  5.16·10−2  2.29·10−1  1.81·10−2  2.94·10−14  6.11·10−15  6.18·10−15
4        5.22·10−2  5.17·10−2  2.32·10−1  1.89·10−2  5.87·10−14  1.22·10−14  1.23·10−14
5        5.18·10−2  5.17·10−2  2.32·10−1  1.86·10−2  1.18·10−13  2.45·10−14  2.45·10−14
6        5.18·10−2  5.14·10−2  2.32·10−1  1.87·10−2  2.34·10−13  4.86·10−14  4.86·10−14
according to the same reasoning as in Section 3, such formulae also imply truncation errors of order
O(1) on arbitrary grids; this is true even if an interpolation scheme other than (9) is used to calculate
φ̄(c0f ), or even if the exact values φ(c0f ) are known and used. The order of the truncation error can
be increased to O(h) by accounting for skewness through a correction such as (16), provided that the
gradient used is at least first-order accurate. These observations may raise concern about the overall
accuracy of the FVM; however, it is known that the order of reduction of the discretisation error with
grid refinement is often greater than that of the truncation error. Thus, O(1) truncation errors do
not necessarily imply O(1) discretisation errors; the latter can be of order O(h) or even O(h2 ). This
phenomenon has been observed by several authors (including the present ones [36]) and a literature
review can be found in [39]. The question then naturally arises of whether and how the accuracy of the
gradient scheme would affect the overall accuracy of the FVM that uses it. A general answer to this
question does not yet exist, and each combination of PDE / discretisation scheme / grid type must be
examined separately. For simple cases such as one-dimensional ones the problem can be tackled using
theoretical tools but on general unstructured grids this has not yet been achieved [39]. Thus we will
resort to some simple numerical experiments that amount to solving a Poisson equation.
6.1 Tests with an in-house solver
We solve the following Poisson equation:

\nabla \cdot (-k \nabla\phi) = b(x,y) \quad \text{on } \Omega = [0,1]\times[0,1] \qquad (42)

\phi = c(x,y) \quad \text{on } S_\Omega \qquad (43)

with k = 1, where S_Ω is the boundary of the domain Ω (the unit square), and

b(x,y) = 2\tanh(x)\tanh(y)\left[2 - \tanh^2(x) - \tanh^2(y)\right] \qquad (44)

c(x,y) = \tanh(x)\tanh(y) \qquad (45)
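The consistency of the source term (44) with the prescribed solution (45) can be verified symbolically, for example with the following sympy sketch (an illustration, not part of the original work):

import sympy as sp

x, y = sp.symbols('x y')
c = sp.tanh(x) * sp.tanh(y)                                            # Eq. (45)
b = 2*sp.tanh(x)*sp.tanh(y)*(2 - sp.tanh(x)**2 - sp.tanh(y)**2)        # Eq. (44)

# with k = 1, Eq. (42) reads -laplacian(phi) = b; check that phi = c satisfies it exactly
print(sp.simplify(-(sp.diff(c, x, 2) + sp.diff(c, y, 2)) - b))         # -> 0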
This is a heat conduction equation with a heat source term and Dirichlet boundary conditions. The
source term was chosen such that the exact solution of the above equation is precisely φ = c(x, y). The
domain was discretised by a series of progressively finer grids of 32 × 32, 64 × 64, . . . , 512 × 512 cells
denoted as grids 0, 1, . . . 4, respectively. The grids were generated by the same randomised distortion
procedure as those of Fig. 16, only that their boundaries are straight (Fig. 20(a)). Corresponding
undistorted Cartesian grids were also used for comparison. According to the FV methodology, we
integrate Eq. (42) over each cell, apply the divergence theorem and use the midpoint integration rule
to obtain for each cell P a discrete equation of the following form:
\sum_{f=1}^{F} D_f = b(P)\,\Omega_P \qquad (46)
where b(P ) is the value of the source term at the centre of cell P and
D_f = \int_{S_f} -k \nabla\phi \cdot n_f \,\mathrm{d}S \approx -S_f\, k\, \nabla\phi(c_f) \cdot n_f \qquad (47)
is the diffusive flux through face f of the cell. We will test here two alternative discretisations of the
fluxes Df , both of which utilise some discrete gradient operator.
The first is the “over-relaxed” scheme [8, 40] which, according to [41], is very popular and is the
method of choice in commercial and public-domain codes such as FLUENT, STAR-CD, STAR-CCM+
and OpenFOAM. This scheme splits the normal unit vector nf in (47) into two components, one in
the direction Nf − P and one tangent to the face. The unit vectors along these two directions are
denoted as df and tf , respectively (Fig. 18(a)). Thus, if nf = αdf + βtf then by taking the dot
product of this expression with nf itself and noting that tf and nf are perpendicular (nf · tf = 0) we
Figure 18: (a) Adopted notation for the “over-relaxed” diffusion scheme (Eq. (48)) and for the scheme for
boundary faces (Eq. (50)); (b) Notation for the custom scheme (49). Face 1 of cell P is an inner face, while its
face 5 is a boundary face.
obtain α = (d_f · n_f)^{−1}. We can therefore split n_f = d*_f + t*_f, where d*_f = d_f/(d_f · n_f) and t*_f = β t_f is calculated most easily as t*_f = n_f − d*_f. Then the flux (47) is approximated as

D_f \approx -S_f k\, \nabla\phi(c'_f)\cdot\underbrace{\frac{d_f}{n_f\cdot d_f}}_{d^*_f} \;-\; S_f k\, \nabla\phi(c'_f)\cdot t^*_f
\approx -S_f k\, \frac{\phi(N_f)-\phi(P)}{\|N_f-P\|\,(d_f\cdot n_f)} \;-\; S_f k\, \nabla_a\phi(c'_f)\cdot t^*_f
= -S_f k\, \frac{\phi(N_f)-\phi(P)}{(N_f-P)\cdot n_f} \;-\; S_f k\, \nabla_a\phi(c'_f)\cdot t^*_f \qquad (48)
where ∇a is a discretised gradient calculated at the cell centres (either a DT gradient ∇d or the LS
gradient ∇ls ) whose role we wish to investigate, and ∇a φ(c0f ) is its value calculated at point c0f by
linear interpolation (9). This scheme obviously deviates somewhat from the midpoint integration rule,
substituting ∇φ(c0f ) · nf instead of ∇φ(cf ) · nf in (47), i.e. it does not account for skewness.
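A minimal Python sketch of the flux formula (48) is given below; it is illustrative only, the function and argument names are hypothetical, and the interpolation of the cell-centred gradient to the face is assumed to have been done beforehand.

import numpy as np

def over_relaxed_flux(phi_P, phi_N, grad_face, P, Nf, Sf, nf, k=1.0):
    # diffusive flux through an inner face, Eq. (48); grad_face is the cell-centred gradient
    # interpolated to the face, nf the unit normal, Sf the face area
    P, Nf, nf, grad_face = (np.asarray(v, dtype=float) for v in (P, Nf, nf, grad_face))
    d = (Nf - P) / np.linalg.norm(Nf - P)        # unit vector from P to Nf
    d_star = d / np.dot(d, nf)                   # d*_f, chosen so that nf = d*_f + t*_f
    t_star = nf - d_star                         # tangential remainder t*_f
    implicit = -Sf * k * (phi_N - phi_P) / np.dot(Nf - P, nf)   # direct-difference part
    explicit = -Sf * k * np.dot(grad_face, t_star)              # non-orthogonal correction
    return implicit + explicit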
We also try an alternative scheme which is the standard scheme in our code [42] and is a slight
modification of a scheme proposed in [1]:
D_f \approx -S_f\, k\, \frac{\phi(N'_f) - \phi(P')}{\|N'_f - P'\|} \qquad (49)

where

\phi(P') \approx \phi(P) + \nabla_a\phi(P) \cdot (P' - P), \qquad \phi(N'_f) \approx \phi(N_f) + \nabla_a\phi(N_f) \cdot (N'_f - N_f)
and points P′ and N′_f (Fig. 18(b)) are such that the line segment joining these two points has length ‖N_f − P‖, is perpendicular to face f, and its midpoint is c_f. Thus this scheme tries to account for
skewness. We will refer to this as the “custom” scheme.
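Similarly, a sketch of the custom scheme (49) is shown below (again with hypothetical names); it constructs P′ and N′_f from the face centre and unit normal, assuming that n_f points from the P side towards N_f.

import numpy as np

def custom_flux(phi_P, phi_N, grad_P, grad_N, P, Nf, cf, Sf, nf, k=1.0):
    # diffusive flux by the 'custom' scheme (49)
    P, Nf, cf, nf = (np.asarray(v, dtype=float) for v in (P, Nf, cf, nf))
    L = np.linalg.norm(Nf - P)
    P_prime = cf - 0.5 * L * nf                  # segment P'N'_f is normal to the face,
    N_prime = cf + 0.5 * L * nf                  # has length |Nf - P| and midpoint cf
    phi_Pp = phi_P + np.dot(np.asarray(grad_P, dtype=float), P_prime - P)    # phi(P')
    phi_Np = phi_N + np.dot(np.asarray(grad_N, dtype=float), N_prime - Nf)   # phi(N'_f)
    return -Sf * k * (phi_Np - phi_Pp) / L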
Finally, irrespective of whether scheme (48) or (49) is used for the inner face fluxes, the boundary
face fluxes are always calculated as
D_f \approx -S_f\, k\, \frac{\phi(c_f) - \phi(P_f)}{\|c_f - P_f\|} \qquad (50)
where f is a boundary face, point Pf is the projection of P on the line which is perpendicular to face
f and passes through cf (see Fig. 18(a)), and
φ(Pf ) ≈ φ(P ) + ∇a φ(P ) · (Pf − P )
Grid non-orthogonality activates in all schemes, (48), (49) and (50), a component that involves the
approximate gradient ∇a (it is activated in (49) also by unevenness or skewness). It is not hard to show
that this component contributes O(1) to the truncation error if ∇a is first-order accurate, and O(1/h)
if it is zeroth-order accurate. On the undistorted Cartesian grids that were used for comparison the
∇a terms of the schemes are not activated; in fact, schemes (48) and (49) reduce to the same simple
formula there. The reason why we do not simply substitute ∇a φ(cf ) for ∇φ(cf ) in Eq. (47) instead
of using schemes such as (48) and (49) is that this would allow spurious high-frequency (cell-to-cell)
oscillations in the numerical solution φ. This occurs because the ∇a calculation is mostly based on
differences of φ values that are two cells apart, and so the gradient field ∇a φ can be smooth even
though φ oscillates from cell to cell. Schemes such as (48) and (49) express the diffusive flux mostly
as a function of the direct difference of φ across the face, φ(Nf ) − φ(P ), and thus do not allow such
oscillations to go undetected. The gradients ∇a are used in an auxiliary fashion, in order to make the
diffusive fluxes consistent. However, it has recently been shown [43–45] that schemes such as (48) and
(49) can also be interpreted as equivalent to substituting an interpolated value of ∇a φ for ∇φ(cf ) in
(47) and adding a damping term which is a function of the direct difference φ(Nf ) − φ(P ) and of the
gradients ∇a φ(P ) and ∇a φ(Nf ) and tends to zero with grid refinement.
The system of discrete equations (46) is linear, but for convenience it was solved with a fixed-point
iteration procedure where in each outer iteration a linear system is solved (by a few inner iterations
of a preconditioned conjugate gradient solver) whose equations involve only the unknowns at a cell
and at its immediate neighbours, thus avoiding extended stencils. The matrix of this linear system
is assembled only from the parts of Eq. (48) or (49) that directly involve φ(P ) and φ(Nf ), while the
terms involving the gradients are calculated using the estimate of φ from the previous outer iteration
and incorporated into the right-hand side of the linear system, as is customary. Outer iterations were
carried out until the magnitude of the residual per unit volume had fallen below 10−8 in every grid
cell, and the number of required such iterations for each diffusion flux scheme and gradient scheme are
listed in Table 4, where the operator ∇d∞ is obtained with the scheme (22), while the operator ∇dx is
the one obtained by applying the divergence theorem to the auxiliary cell of Fig. 6. Table 4 includes
the percentage of the total calculation time consumed in calculating the gradient. It may be seen that
the cost of corrector steps is quite significant, with the ∇d2 gradient requiring more than 22% of the
total computational effort. On the other hand, the scheme (22) is quite efficient, obtaining the ∇d∞
gradient at a cost that is almost as low as that of ∇d0 ; however, in the case of the over-relaxed scheme,
it also requires somewhat more outer iterations for convergence. The LS gradient costs about the same
as the ∇d1 gradient. The cost of the ∇dx gradient appears somewhat inflated due to a quick but not
very efficient implementation. Overall, it seems that the iterative convergence rate depends mostly on
the chosen flux discretisation scheme rather than on the choice of gradient scheme: in most cases the
over-relaxed scheme converges faster than the custom scheme, which is not surprising since the former
Table 4: Number of outer iterations for convergence (residual per unit volume below 10−8 ) of the over-relaxed
(48) and custom (49) schemes for solving the Poisson equation (42) – (45) on the 512 × 512 distorted grids,
employing various gradient schemes. Also shown is the percentage of CPU time spent on computing the gradient.
Scheme         Metric       ∇d0    ∇d1    ∇d2    ∇d∞    ∇dx    ∇ls
Over-relaxed   iterations   901    617    601    786    573    585
               CPU %        6.8    14.9   22.1   9.7    17.2   14.3
Custom         iterations   792    874    914    752    919    919
               CPU %        7.4    16.5   23.6   10.5   17.9   15.2
(a) mean error, τmean
(b) maximum error, τmax
Figure 19: The mean (a) and maximum (b) discretisation errors of various FVM schemes to solve the diffusion
problem (42) – (45), on a series of highly distorted quadrilateral grids (Fig. 20(a)). Grids 0, 1, . . . , 4 have 32×32,
64 × 64, . . . , 512 × 512 volumes, respectively. DT0, DT1, DT2, DT, DTX and LS (solid lines) denote the “over-relaxed” scheme (48) with the ∇d0 , ∇d1 , ∇d2 , ∇d∞ , ∇dx and ∇ls gradient schemes, respectively. DT1c, DTc,
DTXc and LSc (dash-dot lines) denote the “custom” scheme (49) with the respective gradient schemes. C
denotes results on Cartesian grids, where all methods are identical.
is known for its robustness [41]. Nevertheless, the choice of gradient scheme may significantly affect
the iterative convergence rate in the FVM solution of other kinds of problems (depending of course
also on the choice of FVM discretisation and iterative solver). In [26] it is reported that for convection
problems unweighted LS gradients can result in much better iterative convergence than weighted ones,
and that including more distant neighbours in the LS gradient computation also improves convergence.
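For clarity, the deferred-correction outer iteration described above Table 4 can be summarised by the following Python sketch. It is purely schematic: all callables are placeholders for the corresponding assembly and gradient routines, and a dense direct solve stands in for the preconditioned conjugate-gradient inner iterations used in the actual solver.

import numpy as np

def solve_poisson_deferred_correction(assemble_matrix, assemble_rhs, gradient,
                                      phi0, tol=1e-8, max_outer=2000):
    # outer fixed-point loop: the matrix couples only face neighbours, while the
    # gradient-dependent flux terms use the previous iterate and go to the right-hand side
    phi = phi0.copy()
    A = assemble_matrix()                        # compact, neighbour-only stencil
    for it in range(max_outer):
        grad = gradient(phi)                     # cell-centred gradients of the current iterate
        rhs = assemble_rhs(phi, grad)            # sources + explicit (gradient) flux parts
        if np.max(np.abs(A @ phi - rhs)) < tol:  # residual per cell below tolerance
            return phi, it
        phi = np.linalg.solve(A, rhs)            # stands in for the inner PCG iterations
    return phi, max_outer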
The discretisation errors with respect to grid refinement are plotted in Fig. 19 for various flux and
gradient scheme combinations. The discretisation error is defined as the difference between the exact
solution φ of the problem (42) – (43) and the numerical (FVM) solution; Fig. 19(a) plots the mean
absolute value of the discretisation error over all grid cells, and Fig. 19(b) plots the maximum absolute
value.
The first thing that one can notice from Figure 19 is that the common DT gradient (∇d0, ∇d1, ∇d2) leads to zeroth-order accuracy with respect to both the mean and maximum discretisation errors,
irrespective of whether the over-relaxed or custom flux scheme is used. Similar trends as those of Fig.
17 are observed: corrector steps reduce the error but are eventually unable to converge to the exact
solution. Although on coarse grids the DT1 and DT2 lines give the impression that they converge,
eventually grid refinement cannot reduce the error below a certain point. In fact, an increase of the
mean error can be observed on grid 4. Things are worse when the DT gradient is used in combination
with the custom scheme (line DT1c), presumably because the gradient ∇a plays a more significant
role in that scheme than in the over-relaxed scheme.
On the other hand, the ∇d∞ gradient obtained through the iterative procedure (22), the “auxiliary
cell” gradient ∇dx , and the LS gradient ∇ls , all lead to second-order convergence to the exact solution.
As long as the gradient scheme is consistent, the FVM accuracy seems to depend not on the gradient
scheme but on the flux scheme, i.e. lines DT, LS and DTX (over-relaxed scheme (48)) nearly coincide,
as do lines DTc, LSc and DTXc (custom scheme (49)). The mean discretisation error of the custom
scheme is about the same as that obtained on Cartesian grids (line C), or even marginally lower when
used in combination with the ∇d∞ gradient! The mean discretisation error of the over-relaxed scheme
is only about 40% higher, which is a very good performance given that it does not account for skewness
and that its iterative convergence is better (Table 4).
(a)
(b)
(c)
Figure 20: Samples from each grid category employed to solve the 2D Poisson problems using OpenFOAM:
(a) a grid of distorted quadrilaterals; (b) a grid constructed with the Netgen algorithm; (c) a grid constructed
with the Gmsh algorithm. All domains have unit length along each dimension.
6.2 Tests with OpenFOAM
In order to investigate further the severity of the problem associated with the use of the common DT gradient we also performed a set of experiments using the popular public-domain CFD solver OpenFOAM (https://openfoam.org), version 4.0 (see e.g. [46]), which we have been using recently in our
laboratory [47,48]. We solved again a Poisson problem (42) – (43) but instead of (45) and (44) we have
c(x, y) = sin(πx) sin(πy) and b(x, y) = 2π 2 c(x, y), respectively. The problem was solved on the same
set of grids composed of distorted quadrilaterals (Fig. 20(a)), but, as these are artificially distorted
grids, we also repeated the calculations using a couple of popular grid generation procedures which
are more typical of real-life engineering applications, namely the Netgen [49] (https://ngsolve.org)
and Gmsh Delaunay [50] (http://gmsh.info) algorithms implemented as plugins in the SALOME
preprocessor (www.salome-platform.org). Both generate grids of triangular cells, coarse samples of
which are shown in Fig. 20. The Netgen algorithm can be seen to be effective in producing a smooth grid, reminiscent of that shown in Fig. 5, in four distinct areas within the domain that meet at an
“X”-shaped interface where grid distortion is high. The Gmsh grids are less regular. The “LaplacianFoam” component of OpenFOAM was used to solve the equations with default options, which include
the DT gradient (10) as the chosen gradient scheme (option “gradSchemes” is set to “Gauss linear”).
We repeated the calculations with the gradient option set to LS (gradSchemes: leastSquares).
The discretisation errors are plotted in Fig. 21(a), against the mean cell size4 h. We note that
on the distorted quadrilateral grids and on the Gmsh grids the DT gradient (lines DTQ and DTG )
engenders zeroth-order accuracy, like the DT0 case of Fig. 19. On the other hand, the Netgen grids
are quite smooth (Fig. 20(b)), resembling grids that come from triangulation of structured grids in
most of the domain, and this has the consequence that even with the DT gradient (lines DTN ) the
mean error decreases at a second-order rate. With the LS gradient second-order accuracy is exhibited
on all grids, and furthermore the accuracy is nearly at the same level as on the Cartesian grids.
Three-dimensional simulations were also performed. The governing equations are again (42) – (43)
where we set c(x, y, z) = sin(πx) sin(πy) sin(πz) and b(x, y, z) = 3π² c(x, y, z). The domain is the unit cube x, y, z ∈ [0, 1] and the grids were generated with the Netgen and Gmsh algorithms, where in the
latter we chose the Delaunay method to construct the grid at the boundaries and the frontal Delaunay
algorithm to construct the grid in the interior. The discretisation errors are plotted in Fig. 21(b).
This time it is observed that the DT gradient engenders zeroth-order accuracy even on the Netgen
grids. Presumably, in the 3D case although the Netgen algorithm produces relatively smooth surface
grids on the bounding surfaces of the cube, in the bulk of the domain it packs the tetrahedra in a
way that the skewness is large throughout. On the other hand, the LS gradient again always results
in second-order accuracy.
4. Unlike in previous plots we use the mean cell size instead of the grid level in Fig. 21 because in the case of the Netgen and Gmsh grids each level does not have precisely 4 (in 2D) or 8 (in 3D) times as many cells as the previous level.
(a) mean error, 2D problem
(b) mean error, 3D problem
Figure 21: The mean errors of the OpenFOAM solutions of the 2D (a) and 3D (b) Poisson equations, for
various configurations. The abscissa h is the mean cell size, h = (Ω/M)^{1/d}, where Ω = 1 is the domain volume,
M is the total number of grid cells, and d = 2 (for 2D) or 3 (for 3D). The black solid line (C) corresponds to
the 2D problem solved on undistorted Cartesian grids. Red lines (DT) and blue lines (LS) correspond to results
obtained with the DT and LS gradients, respectively. Subscript Q (DTQ , LSQ – solid lines) corresponds to
grids of distorted quadrilaterals (Fig. 20(a)). Subscript N (DTN , LSN – dash-dot lines) corresponds to Netgen
grids (Fig. 20(b)). Subscript G (DTG , LSG – dashed lines) corresponds to Gmsh Delaunay grids (Fig. 20(c)).
7 Final remarks and conclusions
The previous sections showed that on arbitrary grids the DT and LS gradients are zeroth- and first-order accurate, respectively, but higher orders of accuracy, up to second, are obtained on particular
kinds of grids – a summary of the present findings is given in Table 5.
Unfortunately, general-purpose unstructured grid generation algorithms, such as the popular Netgen and Gmsh algorithms tested in Sec. 6, retain high skewness at all levels of refinement (except
in the 2D Netgen case) resulting in zeroth-order accuracy for both the DT gradient and the FVM
solver that employs it. In our tests, OpenFOAM using the DT gradient was unable to reduce the discretisation errors by grid refinement even though the problems solved were very fundamental, namely
linear Poisson problems with sinusoidal solutions on the simplest possible domains, the unit square
and cube. Zeroth-order accuracy is very undesirable because it deprives the modeller of his/her main
tool for estimating the accuracy of the solution, i.e. the grid convergence study. Therefore the common
DT gradient should be avoided unless the grid refinement algorithm is known to reduce the skewness
towards zero. This is especially important nowadays that automatic unstructured grid generation
algorithms are preferred over structured grids. On the other hand, if the LS gradient is employed
instead then the FVM for the problems of Sec. 6 proved to be second-order accurate on all grids, irrespective of whether their skewness diminishes or not with refinement. The same holds for consistent
DT gradient schemes, such as the proposed iterative scheme (22), the scheme employing the auxiliary
cell of Fig. 6, or any other consistent scheme mentioned in Sec. 3.
On grids where both methods exhibit the same order of accuracy the optimal method is problem-dependent. Usually schemes of the same order of accuracy will produce comparable errors on any
given grid but in some cases the errors can differ significantly. For example, in aerodynamics problems
concerning the boundary layer flow over a curved solid boundary, when structured grids consisting of
cells of very high aspect ratio are employed the DT method is known to significantly outperform the LS
method [16, 20] despite both being second-order accurate (or first-order, on triangulated grids). This
is because the DT method benefits from the alignment of the cell faces with the flow (see Appendix
C).
The present study has been restricted to the FVM solution of a Poisson problem. The effect of
Table 5: Order of the mean and maximum truncation errors of the gradient schemes on various grid types.
                              Common DT        LS q ≠ 3/2, ∇d∞, ∇dx     LS q = 3/2
Grid type                     mean    max.     mean    max.             mean    max.
Cartesian (§5.2)              2       1        2       1                2       2
Smooth Structured (§5.3)      2       1        2       1                2       2
Locally Distorted (§5.4)      1       0        2       1                2       1
Arbitrary (§5.5)              0       0        1       1                1       1
the choice of gradient discretisation scheme for the FVM solution of flow problems, including non-Newtonian or turbulent ones where the role of the gradient is more significant, will be investigated in
a future study.
Acknowledgements
This research has been funded by the LIMMAT Foundation under the Project “MuSiComPS”.
Appendix A
Accuracy of the least squares gradient when the neighbouring points are arranged at equal angles
Consider the case that all angles between two successive neighbours are equal; this means that they
are equal to 2π/F (F being the total number of neighbours). Therefore,
\theta_f = \theta_1 + (f-1)\,\frac{2\pi}{F} \qquad (\mathrm{A.1})
As per Eq. (35), we will express βς in the error expression (31) substituting for ςf from Eq. (25). We
also substitute ∆xf = ∆rf cos θf and ∆yf = ∆rf sin θf . In the special case that all neighbours are
equidistant from P , i.e. all ∆rf ’s are equal, the weights wf are also equal and the products (∆rf )3 wf2
can be factored out of the sums:
\beta_\tau = (\Delta r_f)^3 w_f^2 \begin{pmatrix} \tfrac{1}{2}\phi_{.xx}\sum_{f=1}^{F}(\cos\theta_f)^3 + \phi_{.xy}\sum_{f=1}^{F}(\cos\theta_f)^2\sin\theta_f + \tfrac{1}{2}\phi_{.yy}\sum_{f=1}^{F}\cos\theta_f(\sin\theta_f)^2 \\[6pt] \tfrac{1}{2}\phi_{.xx}\sum_{f=1}^{F}(\cos\theta_f)^2\sin\theta_f + \phi_{.xy}\sum_{f=1}^{F}\cos\theta_f(\sin\theta_f)^2 + \tfrac{1}{2}\phi_{.yy}\sum_{f=1}^{F}(\sin\theta_f)^3 \end{pmatrix} \qquad (\mathrm{A.2})
where higher-order terms are omitted. If the neighbours are not equidistant but the weight scheme
wf = (∆rf )−3/2 is used, then all products (∆rf )3 wf2 = 1 are again equal and (A.2) still holds.
To proceed further, it will be convenient to use complex arithmetic: cos θ_f = (e^{iθ_f} + e^{−iθ_f})/2 and sin θ_f = (e^{iθ_f} − e^{−iθ_f})/(2i), where i ≡ √−1. Then it is straightforward to show that the sums that appear in the right hand side of Eq. (A.2) are equal to
\sum_{f=1}^{F} (\cos\theta_f)^3 = \frac{1}{8}\sum_{f=1}^{F}\left( e^{i3\theta_f} + 3e^{i\theta_f} + 3e^{-i\theta_f} + e^{-i3\theta_f} \right) \qquad (\mathrm{A.3})

\sum_{f=1}^{F} (\cos\theta_f)^2 \sin\theta_f = \frac{1}{8i}\sum_{f=1}^{F}\left( e^{i3\theta_f} + e^{i\theta_f} - e^{-i\theta_f} - e^{-i3\theta_f} \right) \qquad (\mathrm{A.4})

\sum_{f=1}^{F} \cos\theta_f (\sin\theta_f)^2 = -\frac{1}{8}\sum_{f=1}^{F}\left( e^{i3\theta_f} - e^{i\theta_f} - e^{-i\theta_f} + e^{-i3\theta_f} \right) \qquad (\mathrm{A.5})

\sum_{f=1}^{F} (\sin\theta_f)^3 = -\frac{1}{8i}\sum_{f=1}^{F}\left( e^{i3\theta_f} - 3e^{i\theta_f} + 3e^{-i\theta_f} - e^{-i3\theta_f} \right) \qquad (\mathrm{A.6})
The above expressions involve the sums Σ_f e^{iθ_f}, Σ_f e^{−iθ_f}, Σ_f e^{i3θ_f} and Σ_f e^{−i3θ_f}; if these sums are zero then the sums on the left hand sides of (A.3) – (A.6) are also zero and it follows, in the same way as for Eq. (35), that the LS gradient is second-order accurate. So, we consider the first of these sums, substituting for θ_f from Eq. (A.1).
\sum_{f=1}^{F} e^{i\theta_f} = \sum_{f=1}^{F} e^{i\left(\theta_1 + (f-1)\frac{2\pi}{F}\right)} = e^{i\theta_1} \sum_{f=1}^{F} e^{i(f-1)\frac{2\pi}{F}} \qquad (\mathrm{A.7})
We denote ζ = e^{i2π/F} and note that it is the first F-th root of unity, i.e. ζ^F = e^{i2πF/F} = e^{i2π} = 1.
Then the sum that appears at the end of Eq. (A.7) is
\sum_{n=0}^{F-1} \zeta^n = 1 + \zeta + \zeta^2 + \cdots + \zeta^{F-1} = 0 \qquad (\mathrm{A.8})
Equation (A.8) holds for any F-th root of unity except ζ = 1, as follows from the identity 1 − ζ^F = (1 − ζ)(1 + ζ + ζ² + ··· + ζ^{F−1}): the left hand side is zero because ζ^F = 1, so the right hand side must be zero as well, and since ζ ≠ 1, it follows that 1 + ζ + ζ² + ··· + ζ^{F−1} = 0, i.e. Eq. (A.8) [28]. Thus Σ_f e^{iθ_f} = 0. In exactly the same manner it can be shown that Σ_f e^{−iθ_f} = 0 as well.
We proceed in the same way for the sum Σ_f e^{i3θ_f}:
\sum_{f=1}^{F} e^{i3\theta_f} = \sum_{f=1}^{F} e^{i3\left(\theta_1 + (f-1)\frac{2\pi}{F}\right)} = e^{i3\theta_1} \sum_{f=1}^{F} e^{i3(f-1)\frac{2\pi}{F}} \qquad (\mathrm{A.9})
This time we denote ζ = e^{i3(2π)/F}. It is also an F-th root of unity, since ζ^F = e^{i3(2π)F/F} = e^{i6π} = 1. So, it will also satisfy Eq. (A.8), unless ζ = 1. Now, ζ = e^{i2π(3/F)} will be ≠ 1 precisely if 3/F is not an integer, i.e. if F is neither 1 nor 3. Thus, for F ≠ 3 Eq. (A.8) still holds, and Eq. (A.9) says that Σ_f e^{i3θ_f} is zero. But if F = 3 then Σ_f e^{i3θ_f} is not zero. The same holds for the last sum, Σ_f e^{−i3θ_f}, as can be shown in exactly the same way.
So, to summarise, if F > 3 then all the terms in (A.3) – (A.6) become zero and the leading term
of the truncation error vanishes, granting second-order accuracy to the LS gradient, provided that
the q = 3/2 weights scheme is used (or that the neighbouring points are equidistant from P ). But if
F = 3 then not all terms vanish and the leading part of the truncation error does not reduce to zero
– the gradient remains first-order accurate. This is unfortunate, as the F = 3 case is very common,
corresponding to triangular cells.
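The conclusion is easy to verify numerically; the following short numpy check (an illustration, with an arbitrary θ_1 = 0.3) shows that Σe^{iθf} vanishes for every F while Σe^{i3θf} vanishes only for F ≠ 3.

import numpy as np

for F in (3, 4, 5, 6, 8):
    theta = 0.3 + 2 * np.pi * np.arange(F) / F     # equally spaced angles, arbitrary theta_1
    s1 = abs(np.sum(np.exp(1j * theta)))           # |sum of e^{i theta_f}|
    s3 = abs(np.sum(np.exp(3j * theta)))           # |sum of e^{i 3 theta_f}|
    print(F, s1, s3)   # s1 ~ 0 for every F; s3 ~ 0 except for F = 3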
Appendix B
Using a LS weights matrix with off-diagonal elements
Consider the one-dimensional case. For a general matrix W the weighted LS method returns the value of φ^{ls}_{.x}(P) which minimises the quantity
\|W(b - Az)\|^2 = \sum_i \left( \sum_{j=1}^{F} w_{ij}\left( \Delta\phi_j - \Delta x_j\, \phi^{ls}_{.x}(P) \right) \right)^2 \qquad (\mathrm{B.1})
It can be shown that if the method returns the exact derivative of any quadratic function φ then it is
second-order accurate. The Taylor expansion of a quadratic function φ is just
\Delta\phi_f = \phi_{.x}(P)\,\Delta x_f + \frac{1}{2}\phi_{.xx}(P)\,(\Delta x_f)^2 \qquad (\mathrm{B.2})

We can use this equation to substitute for ∆φ_j into Eq. (B.1) to get

\|W(b - Az)\|^2 = \sum_i \left( \left( \phi_{.x}(P) - \phi^{ls}_{.x}(P) \right) \sum_{j=1}^{F} w_{ij}\,\Delta x_j + \frac{1}{2}\phi_{.xx}(P) \sum_{j=1}^{F} w_{ij}\,(\Delta x_j)^2 \right)^2
Apparently, if we can choose the weights so that Σ_j w_{ij}(∆x_j)² = 0 for each row i of W, then the terms involving φ_{.xx}(P) will vanish and the remaining quantity will be minimised (actually, made zero) by the choice φ^{ls}_{.x}(P) = φ_{.x}(P). Thus the procedure will produce the exact result. The equation Σ_j w_{ij}(∆x_j)² = 0 can be written in matrix notation as X w_i = 0, where X is the 1 × F matrix with
elements X1j = (∆xj )2 and wi is the F × 1 column vector with elements wij . We can therefore set
each row wi of W equal to a basis vector of the null space of X. A simple choice is one where in
row f the first element is (∆x1 )−2 , the f +1 element is −(∆xf +1 )−2 , and the rest of the elements are
zero. W is no longer square but has size (F −1) × F , so the weighted system W Az = W b has one less
equation than the original system Az = b. Equation i of the system W Az = W b has the form
\left( \frac{1}{\Delta x_{i+1}} - \frac{1}{\Delta x_1} \right) \phi^{ls}_{.x}(P) = \frac{\Delta\phi_{i+1}}{(\Delta x_{i+1})^2} - \frac{\Delta\phi_1}{(\Delta x_1)^2} \qquad (\mathrm{B.3})
for i = 1, . . . , F − 1. A little manipulation shows that Eq. (B.3) is just Eq. (B.2) for f = i + 1, where
φ.xx (P ) has been eliminated by using again Eq. (B.2), but for f = 1, to express it as a function of ∆φ1
and ∆x1 . Therefore, this method is nearly equivalent to the unweighted LS solution of the system of
equations (B.2). Extension to the 2D or 3D cases follows the same lines.
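The construction is illustrated by the following numpy sketch for the 1D case (the neighbour offsets and the test quadratic are hypothetical); the weighted system recovers the exact derivative of the quadratic, as argued above.

import numpy as np

def null_space_weights(dx):
    # (F-1) x F matrix W whose rows are null-space vectors of X = [(dx_j)^2]:
    # first element (dx_1)^-2, element f+1 equal to -(dx_{f+1})^-2, zeros elsewhere
    F = len(dx)
    W = np.zeros((F - 1, F))
    W[:, 0] = dx[0] ** -2
    for i in range(F - 1):
        W[i, i + 1] = -dx[i + 1] ** -2
    return W

dx = np.array([-0.7, 0.4, 1.1, -0.2])          # hypothetical 1D neighbour offsets
dphi = 2 * dx - 5 * dx**2                      # phi(x) = 3 + 2x - 5x^2 about P = 0
W = null_space_weights(dx)
A = dx[:, None]                                # F x 1 coefficient matrix of the system Az = b
g, *_ = np.linalg.lstsq(W @ A, W @ dphi, rcond=None)
print(g)                                       # [2.0]: the exact derivative is recovered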
Appendix C
The gradient schemes in aerodynamics problems
It is known in aerodynamics that the DT gradient is more accurate than the LS gradient for the
computation of boundary layer flow over a curved solid boundary when grid cells of very high aspect
ratio are employed (typically in excess of 1000) [16, 20]. This is due to a fundamental difference
between the methods: the LS gradient uses the directions in which the neighbours lie, represented by
the unit vectors df = (Nf − P )/kNf − P k, while the DT gradient uses the directions normal to the
faces, represented by the unit vectors nf . This allows the possibility of assisting the DT gradient by
aligning the cell faces with the direction of the exact gradient, when this direction is known a priori.
So, consider a structured grid of high aspect ratio cells over a curved boundary as in Fig. 22 and
assume a boundary layer flow where the contours of the dependent variable φ follow the shape of the
boundary so that φ(P ) = φ(N1 ) = φ(N3 ) ⇒ ∆φ1 = ∆φ3 = 0 and the exact ∇φ is directed normal to
the boundary with approximate magnitude (φ(N2) − φ(N4))/(2∆y). The grid being structured, all of the examined methods are second-order accurate, but the unweighted LS gradient is known to perform
very poorly: From expression (32) it follows that because ∆r1 and ∆r3 are much larger than ∆r2 and
∆r4 this scheme relies on information mostly in the directions d1 and d3 , and since the directional
derivatives ∆φ1 /∆r1 and ∆φ3 /∆r3 in these directions appear to be zero, the predicted gradient is
also near zero although the true gradient may be very large. This gross underestimation is known
to occur when the ratio of y-displacements γ/∆y of neighbours N1 and N3 to neighbours N2 and N4
(Fig. 22) is larger than one (i.e. when the terms (∂φ/∂y)ls γ are larger than the terms (∂φ/∂y)ls ∆y in
the minimisation expression), and in practical applications it is typically around 50 [16]. As this ratio
is proportional to (∆x)2 /(R ∆y) [16], reducing the size ∆x of the cells in the streamwise direction
rapidly reduces the error. Using a weighted LS method dramatically improves the accuracy, but this
remedy does not work if the grid is triangulated as in Fig. 5.
Figure 22: Structured grid of high aspect ratio cells over a curved boundary (the aspect ratio ∆x/∆y is greatly
downplayed for clarity).
The inaccuracy of the LS gradients in this case is due to the invalidation of their fundamental
assumption that the variation of φ in the neighbourhood of the cell is linear, as the cell size is large
enough for the contours of φ to curve significantly across it. On the other hand, the DT gradient
benefits from the alignment of the normals n2 and n4 of the long faces (which dominate the calculation
due to weighting by face area) with the direction of the exact gradient. Thus, for this particular φ
distribution the given grid alignment assists this gradient scheme to achieve good accuracy. A similar
(somewhat less favourable) situation holds in the triangulated grid case.
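The gross underestimation described above is easy to reproduce numerically. The following Python sketch is an illustration with hypothetical values (curvature radius R = 10, aspect ratio ∆x/∆y = 1000, gradient magnitude 100), taking φ constant along circular arcs concentric with the boundary: the unweighted LS gradient collapses to nearly zero, while inverse-distance weighting (q = 3/2) essentially recovers the correct value.

import numpy as np

def ls_gradient(d, dphi, q=0.0):
    # weighted LS gradient from displacement vectors d (F x 2) and differences dphi (F,)
    w = np.linalg.norm(d, axis=1) ** (-q)
    g, *_ = np.linalg.lstsq(d * w[:, None], dphi * w, rcond=None)
    return g

R, dx, dy, g_true = 10.0, 1.0, 1e-3, 100.0     # wall radius, neighbour distances, |grad phi|
gamma = R - np.sqrt(R**2 - dx**2)              # wall-normal sag of N1, N3 (~ dx^2 / 2R >> dy)
d = np.array([[dx, -gamma], [-dx, -gamma], [0.0, dy], [0.0, -dy]])   # N1, N3, N2, N4 relative to P
centre = np.array([0.0, -R])                   # centre of the circular wall through P
dphi = g_true * (np.linalg.norm(d - centre, axis=1) - R)   # phi varies only in the wall-normal direction
print(ls_gradient(d, dphi, q=0.0))   # roughly (0, 0.04): severe underestimation of (0, 100)
print(ls_gradient(d, dphi, q=1.5))   # roughly (0, 100): weighting restores the correct value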
Another issue is related to numerical stability. Certain discretisation schemes use the values of
the dependent variable at face centres, evaluated as φ(cf ) ≈ φ(P ) + ∇a φ(P ) · (c − P ). Consider the
same boundary layer flow as before, but for the moment assume a straight boundary so that the grid
looks like that of Fig. 2, on which all the examined gradient schemes reduce to ∇s , Eq. (1). Since
φ(N1 ) = φ(N3 ) = φ(P ), Eq. (1) returns a gradient in the direction (−δyξ , δxξ ) i.e. perpendicular to
the vector δ ξ . Thus φ(c1 ) ≈ φ(P ) + ∇s φ(P ) · (c1 − P ) = φ(P ) = φ(N1 ), which is reasonable. Now
consider a curved boundary case where cell P has exactly the same shape as before but its neighbours
N1 and N3 are tilted to follow the curvature as in Fig. 22. The DT gradient depends only on the face
normals nf which have not changed for cell P and thus it gives the same result for φ(c1 ) as before.
On the other hand, the vectors d1 and d3 have now changed and thus the LS gradient is affected by
the curvature and may no longer be perpendicular to the direction c1 − P ; this leads to a predicted
value for φ(c1 ) that is either larger or smaller than both φ(P ) and φ(N1 ), introducing an oscillatory
variation in the direction d1 that may be detrimental to numerical stability [18]. In other cases this
issue can occur with both the LS and the DT gradients, and so some solvers such as OpenFOAM offer
versions of these gradient schemes that use limiters in order to avoid this problem.
References
[1] J. H. Ferziger and M. Peric, Computational methods for fluid dynamics. Springer, 3rd ed., 2002.
[2] P. J. Oliveira, F. T. Pinho, and G. A. Pinto, “Numerical simulation of non-linear elastic flows with
a general collocated finite-volume method,” J. Non-Newtonian Fluid Mech., vol. 79, pp. 1–43,
1998.
[3] A. M. Afonso, M. S. N. Oliveira, P. J. Oliveira, M. A. Alves, and F. T. Pinho, “The finite volume
method in computational rheology,” in Finite-Volume Methods – Powerful Means of Engineering
Design, ch. 7, pp. 141–170, In-Tech Open Publishers, 2012.
[4] A. Syrakos, G. Georgiou, and A. Alexandrou, “Solution of the square lid-driven cavity flow of
a Bingham plastic using the finite volume method,” J. Non-Newtonian Fluid Mech., vol. 195,
pp. 19–31, 2013.
[5] A. Jalali, M. Sharbatdar, and C. Ollivier-Gooch, “An efficient implicit unstructured finite volume
solver for generalised Newtonian fluids,” International Journal of Computational Fluid Dynamics,
vol. 30, no. 3, pp. 201–217, 2016.
[6] C. D. Correa, R. Hero, and K.-L. Ma, “A comparison of gradient estimation methods for volume
rendering on unstructured meshes,” IEEE Trans. Visual Comput. Graphics, vol. 17, pp. 305–319,
2011.
[7] T. J. Barth and D. C. Jespersen, “The design and application of upwind schemes on unstructured
meshes,” in AIAA Paper 89-0366, 1989.
[8] H. Jasak, Error Analysis and Estimation for the Finite Volume Method with Application to Fluid
Flows. PhD thesis, Imperial College, London, 1996.
[9] Ž. Lilek, S. Muzaferija, M. Perić, and V. Seidl, “An implicit finite-volume method using nonmatching blocks of structured grid,” Numer. Heat Transfer, vol. 32, pp. 385–401, 1997.
[10] J. Wu and P. Traoré, “Similarity and comparison of three finite-volume methods for diffusive
fluxes computation on nonorthogonal meshes,” Numer. Heat Transfer B, vol. 64, pp. 118–146,
2014.
[11] F. Moukalled, L. Mangani, and M. Darwish, The Finite Volume Method in Computational Fluid
Dynamics. Springer, 2016.
[12] T. J. Barth, “A 3-D upwind Euler solver for unstructured meshes,” in AIAA Paper 91-1548-CP,
1991.
[13] S. Muzaferija and D. A. Gosman, “Finite-volume CFD procedure and adaptive error control
strategy for grids of arbitrary topology,” J. Comput. Phys., vol. 138, pp. 766–787, 1997.
[14] C. Ollivier-Gooch and M. Van Altena, “A high-order-accurate unstructured mesh finite-volume
scheme for the advection–diffusion equation,” J. Comput. Phys., vol. 181, pp. 729–752, 2002.
[15] F. Bramkamp, P. Lamby, and S. Müller, “An adaptive multiscale finite volume solver for unsteady
and steady state flow computations,” J. Comput. Phys., vol. 197, pp. 460–490, 2004.
[16] D. J. Mavriplis, “Revisiting the least-squares procedure for gradient reconstruction on unstructured meshes,” in AIAA Paper 2003-3986, 2003.
[17] B. Diskin and J. L. Thomas, “Accuracy of gradient reconstruction on grids with high aspect
ratio,” tech. rep., NIA Report No. 2008-12, 2008.
[18] E. Shima, K. Kitamura, and K. Fujimoto, “New gradient calculation method for MUSCL type
CFD schemes in arbitrary polyhedra,” in AIAA Paper 2010-1081, 2010.
[19] L. J. Betchen and A. G. Straatman, “An accurate gradient and Hessian reconstruction method for
cell-centered finite volume discretizations on general unstructured grids,” Int. J. Numer. Methods
Fluids, vol. 62, pp. 945–962, 2010.
[20] E. Sozer, C. Brehm, and C. C. Kiris, “Gradient calculation methods on arbitrary polyhedral
unstructured meshes for cell-centered CFD solvers,” in AIAA Paper 2014-1440, 2014.
[21] A. Syrakos, Analysis of a finite volume method for the incompressible Navier-Stokes equations.
PhD thesis, Aristotle University of Thessaloniki, 2006.
[22] M. S. Karimian and A. G. Straatman, “Discretization and parallel performance of an unstructured
finite volume Navier–Stokes solver,” Int. J. Numer. Methods Fluids, vol. 52, pp. 591–615, 2006.
[23] Y. Kallinderis and C. Kontzialis, “A priori mesh quality estimation via direct relation between
truncation error and mesh distortion,” J. Comput. Phys., vol. 228, pp. 881–902, 2009.
[24] R. L. Burden and J. D. Faires, Numerical Analysis. Brooks/Cole, 9th ed., 2011.
[25] Q. Wang, Y.-X. Ren, J. Pan, and W. Li, “Compact high order finite volume method on unstructured grids III: Variational reconstruction,” Journal of Computational Physics, vol. 337, pp. 1–26,
may 2017.
[26] B. Diskin and J. L. Thomas, “Comparison of node-centered and cell-centered unstructured finite-volume discretizations: Inviscid fluxes,” AIAA Journal, vol. 49, pp. 836–854, apr 2011.
[27] L. N. Trefethen and D. Bau III, Numerical Linear Algebra. SIAM, 1997.
[28] G. Strang, Linear Algebra and its Applications. Brooks/Cole, 4th ed., 2006.
[29] S. Muzaferija, Adaptive Finite Volume Method for Flow Prediction using Unstructured Meshes
and Multigrid Approach. PhD thesis, Imperial College, London, 1994.
[30] Y. Liu and W. Zhang, “Accuracy preserving limiter for the high-order finite volume method on
unstructured grids,” Computers & Fluids, vol. 149, pp. 88–99, 2017.
[31] J.-M. Vaassen, D. Vigneron, and J.-A. Essers, “An implicit high order finite volume scheme for
the solution of 3D Navier–Stokes equations with new discretization of diffusive terms,” J. Comput.
Appl. Math., vol. 215, pp. 595–601, 2008.
[32] J. F. Thompson, Z. U. Warsi, and C. W. Mastin, Numerical grid generation: foundations and
applications. North-holland Amsterdam, 1985.
[33] Y. Dimakopoulos and J. Tsamopoulos, “A quasi-elliptic transformation for moving boundary
problems with large anisotropic deformations,” J. Comput. Phys., vol. 192, pp. 494–522, 2003.
[34] A. Sidi, “Review of two vector extrapolation methods of polynomial type with applications to
large-scale problems,” Journal of Computational Science, vol. 3, pp. 92–101, 2012.
[35] U. Trottenberg, C. Oosterlee, and A. Schüller, Multigrid. Academic Press, 2001.
[36] A. Syrakos, G. Efthimiou, J. G. Bartzis, and A. Goulas, “Numerical experiments on the efficiency
of local grid refinement based on truncation error estimates,” J. Comput. Phys., vol. 231, pp. 6725–
6753, 2012.
[37] R. Schneiders, “Refining quadrilateral and hexahedral element meshes,” in Proceedings of the 5th
International Conference on Numerical Grid Generation in Computational Fluid Simulations,
pp. 679–688, Mississippi State University, 1996.
[38] N. Chatzidai, A. Giannousakis, Y. Dimakopoulos, and J. Tsamopoulos, “On the elliptic mesh
generation in domains containing multiple inclusions and undergoing large deformations,” J.
Comput. Phys., vol. 228, pp. 1980–2011, 2009.
[39] B. Diskin and J. L. Thomas, “Notes on accuracy of finite-volume discretization schemes on irregular grids,” Appl. Numer. Math., vol. 60, pp. 224–226, 2010.
[40] P. Traoré, Y. M. Ahipo, and C. Louste, “A robust and efficient finite volume scheme for the
discretization of diffusive flux on extremely skewed meshes in complex geometries,” J. Comput.
Phys., vol. 228, pp. 5148–5159, 2009.
[41] I. Demirdžić, “On the discretization of the diffusion term in finite-volume continuum mechanics,”
Numer. Heat Transfer B, vol. 68, pp. 1–10, 2015.
39
[42] A. Syrakos and A. Goulas, “Estimate of the truncation error of finite volume discretization of the
Navier-Stokes equations on colocated grids,” Int. J. Numer. Methods Fluids, vol. 50, pp. 103–130,
2006.
[43] H. Nishikawa, “Robust and accurate viscous discretization via upwind scheme – i: Basic principle,” Computers & Fluids, vol. 49, pp. 62–86, oct 2011.
[44] A. Jalali, M. Sharbatdar, and C. Ollivier-Gooch, “Accuracy analysis of unstructured finite volume
discretization schemes for diffusive fluxes,” Computers & Fluids, vol. 101, pp. 220–232, sep 2014.
[45] H. Nishikawa, Y. Nakashima, and N. Watanabe, “Effects of high-frequency damping on iterative
convergence of implicit viscous solver,” Journal of Computational Physics, vol. 348, pp. 66–81,
nov 2017.
[46] E. Robertson, V. Choudhury, S. Bhushan, and D. Walters, “Validation of OpenFOAM numerical methods and turbulence models for incompressible bluff body flows,” Computers & Fluids,
vol. 123, pp. 122–145, 2015.
[47] N. K. Lampropoulos, Y. Dimakopoulos, and J. Tsamopoulos, “Transient flow of gravity-driven
viscous films over substrates with rectangular topographical features,” Microfluid. Nanofluid.,
vol. 20, p. 51, 2016.
[48] G. Karapetsas, N. K. Lampropoulos, Y. Dimakopoulos, and J. Tsamopoulos, “Transient flow of
gravity-driven viscous films over 3D patterned substrates: conditions leading to Wenzel, Cassie
and intermediate states,” Microfluid. Nanofluid., vol. 21, p. 17, 2017.
[49] J. Schöberl, “NETGEN an advancing front 2D/3D-mesh generator based on abstract rules,”
Computing and Visualization in Science, vol. 1, pp. 41–52, 1997.
[50] C. Geuzaine and J.-F. Remacle, “Gmsh: A 3-D finite element mesh generator with built-in preand post-processing facilities,” Int. J. Numer. Methods Eng., vol. 79, pp. 1309–1331, 2009.
40
| 5 |
Evol. Intel. manuscript No.
(will be inserted by the editor)
A semantic network-based evolutionary algorithm for
computational creativity
arXiv:1404.7765v2 [] 14 Jul 2014
Atılım Güneş Baydin · Ramon López de Mántaras · Santiago Ontañón
Received: date / Accepted: date
Abstract We introduce a novel evolutionary algorithm
(EA) with a semantic network-based representation. To enable this, we establish new formulations of the EA variation operators, crossover and mutation, adapted to work on semantic networks. The algorithm employs
commonsense reasoning to ensure all operations preserve
the meaningfulness of the networks, using ConceptNet
and WordNet knowledge bases. The algorithm can be
interpreted as a novel memetic algorithm (MA), given
that (1) individuals represent pieces of information that
undergo evolution, as in the original sense of memetics
as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has
been used as a synonym for local refinement after global
optimization. For evaluating the approach, we introduce
an analogical similarity-based fitness measure that is
computed through structure mapping. This setup enables the open-ended generation of networks analogous
to a given base network.
Keywords Evolutionary computation · Memetic
algorithms · Memetics · Analogical reasoning · Semantic
networks
Atılım Güneş Baydin (B)
Hamilton Institute & Department of Computer Science
National University of Ireland Maynooth
Co. Kildare, Ireland
E-mail: [email protected]
Ramon López de Mántaras
Artificial Intelligence Research Institute, IIIA - CSIC
Campus Universitat Autònoma de Barcelona
08193 Bellaterra, Spain
E-mail: [email protected]
Santiago Ontañón
Department of Computer Science
Drexel University
3141 Chestnut Street, Philadelphia, PA 19104, USA
E-mail: [email protected]
1 Introduction
We introduce an evolutionary algorithm (EA) that generates semantic networks under a fitness measure based
on information content and structure. This algorithm
is, to the best of our knowledge, the first instance in
literature where semantic networks are created via an
evolutionary optimization process and specially developed structural variation operators respect the semantics
of commonsense relations.
The algorithm works by fitness-based selection and
reproduction of networks undergoing gradual changes
introduced by variation operators. The initial generation
of networks, and the variation operators of mutation
and crossover, make use of randomly picked concepts
and relations that are associated with existing nodes in
a network, queried from commonsense knowledge bases.
We currently use ConceptNet and WordNet knowledge
bases for this purpose. The gradual changes in the algorithm are thus driven by randomness constrained by
commonsense knowledge.
We demonstrate the approach via a fitness function measuring analogical similarity to a given base
network. This is particularly interesting from an analogical reasoning perspective, because it enables us to
spontaneously generate analogical mappings and novel
analogous cases, in contrast with existing algorithms capable of generating only the mapping between two given
cases. Because it spontaneously generates novel networks that are analogous to a given network, this demonstration is relevant for computational creativity applications, where methods simulating analogical creativity are sought for tasks such as story generation.
Seeing the evolutionary optimization of information
represented within semantic networks as an implementation of the idea of “memes” in cultural evolution, this
algorithm can be interpreted as a novel type of memetic
algorithm (MA). In this designation, we use the term
“memetic” in a different technical sense from existing
models classified as MA, and with an implication closer
to the original meaning as it was first introduced as a
metaphor by Dawkins [9] in his book The Selfish Gene
and later popularized by Hofstadter and Dennett [22].
This is due to several reasons.
Within the existing field of MA, one models the
effects of cultural evolution as a local refinement process for each individual, running on top of a global,
population-based, optimization [32]. So, the emphasis is
on the local refinement of each individual due to memetic
evolutionary factors1 . In algorithmic terms, this results
in a combination of population-based global search with
a local search step run for each individual. Thus, the
only connection of the existing work in MA with the
idea of “memetics” is using this word as a synonym for
“local refinement of candidate solutions”.
In contrast, the emphasis in our approach is directly
on the memetic evolution itself, given
1. it is the units of information (represented as semantic networks) that are undergoing variation, reproduction, and selection, exactly as in the original
metaphor by Dawkins [9];
2. we have variation operators developed specifically for
this knowledge representation-based approach, respecting the semantics and commonsensical correctness of the evolving structures; and
3. the whole process is guided by a fitness measure
that is defined as a function of some selected set
of features of the knowledge represented by each
individual.
The article is organized as follows. In Sect. 2 we
provide background information on the subjects of evolution, creativity, and culture, followed by a brief review of existing models of graph-based EA, to enable a discussion of how our contribution relates to existing work in the field. In Sect. 3, we go over a detailed description
of our algorithm, including details of representation and the newly introduced variation operators specific to semantic networks. We introduce the analogical similarity-based fitness measure in Sect. 4, presenting results of
experiments with the spontaneous generation of analogies. Sect. 5 ends the article with concluding remarks and a discussion of limitations and future directions for our approach.
1
From a biological perspective, this sense emphasizes the effect of society, culture, and learning on the survival of individuals on top of their physical traits emerging through genetic evolution. An example would be the use of knowledge and technology by the human species to survive in diverse environments, far beyond the physical capabilities available to them solely by the human anatomy.
2 Background
2.1 Evolution, creativity, and culture
Following the success and explanatory power of evolutionary theory in biology, insights about the ubiquity of
evolutionary phenomena have paved the way towards
an understanding that these processes are not necessarily confined to biology. That is to say, whenever one
has a system capable of exhibiting a kind of variation,
heredity, and selection, one can formulate an evolutionary account of the complexity observed in almost any
scale and domain. This approach is termed Universal
Darwinism, generalizing the mechanisms and extending
the domain of evolutionary processes to systems outside
biology, including economics, psychology, physics, and
even culture [3, 10].
Within this larger framework, the concept of meme
first introduced as a metaphor by Dawkins [9] as an
evolving unit of culture2 analogous to a gene, hosted,
altered, and reproduced in minds, later formed the basis
of the approach called memetics3 .
Popularized by Hofstadter and Dennett [22], the
explanation found itself use in cultural and sociological studies. For example, Balkin [1] argues that ideologies can be explained using a meme-based description,
produced through processes of cultural evolution and
transcending the lives of individuals. This evolutionary
approach to ideology also enables genetic-inspired descriptions of cultural phenomena, such as ideological
drift. Similarly, creativity, as an integral part of culture, has also been addressed by these studies at the
intersection of evolution and culture [16]. At this point,
one has to clarify in which of the two highly related,
but conceptually different, ways one uses the notions of
“evolution” and “creativity” together.
The first point of view basically discusses the role of
arts and creativity in the general framework of classical
evolutionary biology, considering the provided advantages for adaptation and survival. All known societies
enjoy creative pursuits such as literature, music, and
visual arts; and there is evidence from the field of archaeology that this interest arose relatively early in the development of the human species.
2
Or, information, idea, or belief.
3
Quoting Dawkins [9]: "Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain..."
Combined with the
knowledge that a sense of aesthetic is also encountered
in several other species of the animal kingdom, there is
ample evidence to consider an “evolutionary basis” for
creativity [28].
Alternatively, inspired by the insight that evolutionary processes are not confined to biology, and using
evolutionary theories of sociocultural change, one can
consider that culture itself is possibly recreated through
evolutionary processes occurring in the abstract environment of thoughts, concepts, or ideas. An example
for this kind of interpretation in social sciences is the
evolutionary epistemology theory put forth by Campbell
[3].
Surely, a unifying approach considering all types of
evolution is also possible, studying it as a general phenomenon applying at different levels to both physical
and cultural systems. Within the creativity field, this
kind of approach is taken by Skusa and Bedau [40], who
study the processes occurring in systems exhibiting biological or cultural evolution. They base their insights on
their work on evolutionary activity statistics for visualizing and measuring diverse systems [2]. Similarly, Gabora
and Kaufman [16] investigate the issues at the intersection of creativity and evolution, from both biological
and cultural senses.
A similar dichotomy also exists in the interpretation
of the role of evolutionary algorithms in computational
creativity.
Researchers realize that EA can be applied to computational creativity problems, considering them as a
new area of complex and difficult technical problems
where they can employ the proven power of EA as a
black-box optimization tool.
Again, as in the case of sociocultural evolution, one
can also consider the creativity process itself as taking
place through evolutionary processes in an abstract “creativity space”. In cases where evidence for an underlying
evolutionary process can be spotted (as in the case of
cultural evolution), in addition to providing solutions
to difficult problems, one can also consider the simple
but powerful explanatory power of evolutionary theory
for the seemingly complex task of creativity.
The work that we present in this article is open to
both interpretations. In addition to being a technique
for the generation of semantic networks for a given
creativity task, we can also use it—due to its memetic
interpretation—to model the evolution of human culture
through passing generations.
2.2 Graph-based evolutionary algorithms
There are several existing algorithms using graph-based
representations for the encoding of candidate solutions
in EA [31]. The most notable work among these is genetic programming (GP) [23], where candidate solutions
are pieces of computer program represented in a tree
hierarchy. The trees are formed by functions and terminals, where the terminal set consists of variables and
constants, and the function set can contain mathematical functions, logical functions, or functions controlling
program flow, specific to the target problem.
In parallel distributed genetic programming (PDGP)
[39], the restrictions of the tree structure of GP are relaxed by allowing multiple outputs from a node, which
allows a high degree of parallelism in the evolved programs. In evolutionary graph generation (EGG) [6] the
focus is on evolving graphs with applications in electronic circuit design. Genetic network programming
(GNP) [26] introduces compact networks with conditional branching and action nodes; and similarly, neural
programming (NP) [43] combines GP with artificial neural networks for the discovery of network structures via
evolution.
The use of a graph-based representation makes the
design of variation operators specific to graphs necessary.
In works such as GNP, this is facilitated by using a
string-based encoding of node names, types, and connectivity, permitting operators very close to their counterparts in conventional EA; and in PDGP, the operations
are simplified by making nodes occupy points in a fixed-size two-dimensional grid.
Our approach in this article, on the other hand, is
closely related with how GP handles variation.
In GP crossover operation, two candidate solutions
are combined to form two new solutions as their offspring.
This is accomplished by randomly selecting crossover
fragments in both parents, deleting the selected fragment of the first parent and inserting the fragment from
the second parent. The second offspring is produced
by the same operation in reverse order. An important
advantage of GP is its ability to create nonidentical offspring even in the case where the same parent is selected
to mate with itself in crossover4.
In GP, there are two main types of mutations: the first one involves the random change of the type of a function or terminal at a randomly selected position in the candidate solution; while in the second one an entire subtree of the candidate solution can be replaced by a new randomly created subtree.
4
This is in stark contrast with approaches such as GA, where a crossover operation of identical parents would yield identical offspring due to the linear nature of the representation.
What is common within GP related algorithms is
that the output of each node in the graph can constitute
an input to another node. In comparison, for the semantic network-based representation that we will introduce,
the range of connections that can form a graph of a
given set of concepts is constrained by commonsense
knowledge, that is to say, the relations have to make
sense to be useful. To address this issue, we introduce
new crossover and mutation operations for memetic variation, making use of commonsense reasoning [19, 33]
and adapted to work on semantic networks.
Of the existing graph-based EAs, the implementation nicknamed McGonagall by Manurung [27] bears
similarities to our approach in that it uses a “flat semantic representation” that is essentially equivalent to what
we here call semantic networks. McGonagall uses an EA
approach to poetry generation, using fitness measures
involving poetic metre evaluations and semantic similarity to a given target poem. The system uses special
rule-based variation operators that ensure grammaticality and meaningfulness by exploiting domain knowledge.
This is comparable to our use of commonsense reasoning
to constrain variation operators to ensure meaningful
semantic networks.
3 The algorithm
Our algorithm, outlined in Algorithm 1, proceeds similarly to a conventional EA, with a relatively small set of
parameters.
Algorithm 1 Procedure for the novel semantic network-based memetic algorithm. Refer to Table 1 for an overview of involved parameters.
1: procedure MemeticAlgorithm
2:   P(t = 0) ← InitializePopulation(Sizepop, Sizenetwork, Scoremin, Counttimeout)
3:   repeat
4:     φ(t) ← EvaluateFitnesses(P(t))
5:     N(t) ← NextGeneration(P(t), φ(t), Sizepop, Sizetourn, Probwin, Probrec, Probmut, Scoremin, Counttimeout)
6:     P(t + 1) ← N(t)
7:     t ← t + 1
8:   until stop criterion
9: end procedure
Descriptions of initialization, fitness evaluation, selection, and memetic variation steps are presented in
detail in the following sections. The parameters affecting
each step of the algorithm, along with their explanations,
are summarized in Table 1.
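To make the control flow concrete, the following is a minimal Python sketch of the top-level loop of Algorithm 1. The three helper callables (initialize_population, evaluate_fitnesses, next_generation) and the params dictionary are illustrative placeholders for the components described in the remainder of this section, not the actual implementation.

# Minimal sketch of the top-level evolutionary loop (Algorithm 1).
# All helpers and parameter names are illustrative placeholders.
def memetic_algorithm(params, initialize_population, evaluate_fitnesses,
                      next_generation, max_generations=50):
    population = initialize_population(params)             # P(t = 0)
    fitnesses = evaluate_fitnesses(population)              # phi(t)
    for _ in range(max_generations):                        # stop criterion: generation budget
        population = next_generation(population, fitnesses, params)  # P(t + 1)
        fitnesses = evaluate_fitnesses(population)
    return population, fitnesses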
Fig. 1: A semantic network with 11 concepts and 11
relations.
3.1 Semantic networks
Semantic networks are graphs that represent semantic relations between concepts. In a semantic network,
knowledge is expressed in the form of directed binary relations, represented by edges, and concepts, represented
by nodes. This type of graph representation has found
use in many subfields of artificial intelligence, including
natural language processing, machine translation, and
information retrieval [41].
Figure 1 shows a graph representation of a simple semantic network. In addition to the graphical representation, we also adopt the notation of IsA(bird, animal)
to mean that the concepts bird and animal are connected by the directed relation IsA(·,·), i.e. “bird is an
animal”.
An important characteristic of a semantic network
is whether it is definitional or assertional: in definitional
networks the emphasis is on taxonomic relations (e.g.
IsA(human, mammal)) describing a subsumption hierarchy that is true by definition; in assertional networks,
the relations describe instantiations and assertions that
are contingently true (e.g. AtLocation(human, city))
[41]. In this study, we combine the two approaches
for increased expressivity. As such, semantic networks
provide a simple yet powerful means to represent the
“meme” metaphor of Dawkins as data structures that are
algorithmically manipulatable, allowing a procedural
implementation of memetic evolution.
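As a concrete illustration of this representation, a semantic network can be held as a set of labelled, directed triples. The minimal Python sketch below (the class name and methods are our own illustrative choices, not part of the described system) is sufficient for the operations discussed later.

# A semantic network as a set of directed, labelled triples
# (relation, source concept, target concept), e.g. ("IsA", "bird", "animal").
class SemanticNetwork:
    def __init__(self, triples=None):
        self.triples = set(triples or [])

    def add(self, relation, source, target):
        self.triples.add((relation, source, target))

    def concepts(self):
        return {c for (_, s, t) in self.triples for c in (s, t)}

    def relations_of(self, concept):
        return {t for t in self.triples if concept in (t[1], t[2])}

net = SemanticNetwork([("IsA", "bird", "animal"), ("CapableOf", "bird", "fly")])
assert "animal" in net.concepts() and len(net.relations_of("bird")) == 2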
3.2 Commonsense reasoning
A foundational issue that comes with our approach is the
problem of reconciling the intrinsically random nature
Table 1: Parameter set of the novel evolutionary algorithm (Algorithm 1).
Parameter              Interval        Explanation
Evolution
  Sizepop              [1, ∞)          Number of individuals forming the population
  Probrec              [0, 1]          Probability of applying crossover operation
  Probmut              [0, 1]          Probability of applying mutation operation
Semantic networks
  Sizenetwork          [1, ∞)          Maximum size of randomly created semantic networks in the initial population
  Scoremin             [−10, 10]       Minimum quality score of commonsense relations throughout the algorithm
  Counttimeout         [1, ∞)          Timeout value for the number of trials in commonsense retrieval operations
Tournament selection
  Sizetourn            [1, Sizepop]    Number of individuals randomly selected from the population for each tournament selection event
  Probwin              [0, 1]          The probability that the best individual in the ranked list of tournament participants wins the tournament
of evolutionary operations with the requirement that
the evolving semantic networks should be meaningful.
This is so because, unlike existing graph-based approaches such as GP or GNP, not every node in a semantic network graph can be connected to an arbitrary
other node through an arbitrary type of relation. This
issue is relevant in every type of modification operation that needs to be executed during the course of our
algorithm.
Simply put, the operations should be constrained by
commonsense knowledge: a relation such as IsA(bird,
animal) is meaningful, while Causes(bird, table) is
not.
We address this problem by utilizing the nascent
subfield of AI named commonsense reasoning [19, 33].
Within AI, since the pioneering work by McCarthy [29],
commonsense reasoning has been commonly regarded
as a key ability that a system must possess in order to
be considered truly intelligent [30].
Commonsense reasoning refers to the type of reasoning involved in everyday human thinking, based on commonsense knowledge that an ordinary person is expected
to know, or “the knowledge of how the world works” [33].
It comprises information such as HasA(human, brain),
IsA(sun, star), or CapableOf(ball, roll), which
are acquired and taken for granted by any adult human,
but which need to be introduced in a particular way to
a computational reasoning system.
Knowledge bases such as the Cyc project maintained
by Cycorp company5 , ConceptNet project of MIT Media
Lab6 , and the Never-Ending Language Learning (NELL)
project of Carnegie Mellon University7 are set up to
collect and classify commonsense information for the use of the research community. In our current implementation,
5
http://www.cyc.com/
6
http://conceptnet5.media.mit.edu/
7
http://rtw.ml.cmu.edu/rtw/
we make use of ConceptNet and the lexical database
WordNet8 to address the restrictions of processing commonsense knowledge.
3.2.1 Knowledge bases
The ConceptNet project is a part of the Open Mind
Common Sense (OMCS) initiative of the MIT Media
Lab, based on the input of commonsense knowledge from
general public through several ways, including parsed
natural language and semi-structured fill-in-the-blanks
type forms [19]. As of 2013, ConceptNet is in version
5 and, in addition to data collected through OMCS, it
has been extended to include other data sources such
as the Wikipedia and Wiktionary projects of the Wikimedia Foundation9 and the DBPedia project10 of the
University of Leipzig and the Freie Universität Berlin.
Access to the ConceptNet database is provided through
a web API using JavaScript Object Notation (JSON)
textual data format. Due to performance reasons, we
use the previous version of ConceptNet, version 4, in
our implementation. This is because of the high volume of queries to ConceptNet during the creation of
random semantic networks and the application of variation operators. ConceptNet 4 provides the complete
dataset in locally accessible and highly efficient SQLite
database format, enabling substantially faster access to
data compared with the web API and JSON format of
the current version.
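For illustration only, the sketch below shows how such a local lookup could be wrapped with Python's standard sqlite3 module. The table and column names (assertions, concept1, relation, concept2, score) are hypothetical stand-ins; the actual ConceptNet 4 dump uses its own schema, so the query would have to be adapted.

import sqlite3

# Hypothetical schema: assertions(concept1 TEXT, relation TEXT, concept2 TEXT, score REAL).
def involved_relations(db_path, concept, score_min=1):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT relation, concept1, concept2, score FROM assertions "
            "WHERE (concept1 = ? OR concept2 = ?) AND score >= ?",
            (concept, concept, score_min),
        ).fetchall()
    finally:
        conn.close()
    return [(rel, c1, c2, score) for rel, c1, c2, score in rows]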
According to the study by Diochnos [11], ConceptNet
version 4 includes 566,094 assertions and 321,993 concepts. The variety of assertions in ConceptNet, initially
contributed by volunteers from the general public, makes
it somewhat prone to noise. According to our experience, noise is generally due to charged statements about
8
http://wordnet.princeton.edu
9
http://www.wikimedia.org/
10
http://dbpedia.org/
Table 2: Set of correspondences we define between WordNet and ConceptNet relation types.
WordNet relation   WordNet example                      ConceptNet relation   ConceptNet example
Hypernym           canine is a hypernym of dog          IsA                   IsA(dog, canine)
Holonym            automobile is a holonym of wheel     PartOf                PartOf(wheel, automobile)
Meronym            wheel is a meronym of automobile     PartOf                PartOf(wheel, automobile)
Attribute          edible is an attribute of pear       HasProperty           HasProperty(pear, edible)
Entailment         to sleep is entailed by to snore     Causes                Causes(sleep, snore)
political issues, biased views about gender issues, or
attempts at making fun. We address the noise problem
by ignoring all assertions with a reliability score (determined by contributors’ voting) below a set minimum
Scoremin (Table 1)11 .
The lexical database WordNet [14] maintained by
the Cognitive Science Laboratory at Princeton University also has characteristics of a commonsense knowledge
base that make it attractive for our purposes. WordNet
is based on a grouping of words into synsets or synonym rings which hold together all elements that are
considered semantically equivalent12 .
In addition to these synset groupings, WordNet includes pointers that are used to represent relations between the words in different synsets. These include semantic pointers that represent relations between word
meanings and lexical pointers that represent relations
between word forms.
For treating WordNet as a commonsense knowledge
base compatible with ConceptNet, we define the set
of correspondences we outline in Table 2. Similar approaches have also been used by other researchers in the
field, such as by Kuo and Hsu [24].
In the implementation of our algorithm, we answer
the various types of queries to commonsense knowledge
bases (such as the RandomConcept() call in Algorithm 3) via ConceptNet or WordNet on a random basis.
11
The default reliability score for a statement is 1 [19]; and
zero or negative reliability scores are a good indication of
information that can be considered noise.
12
Another definition of synset is that it is a set of synonyms
that are interchangeable without changing the truth value of
any propositions in which they are embedded.
When the query is answered by information retrieved
from WordNet, we return the information formatted
in ConceptNet structure based on the correspondences
outlined in Table 2 and attach the maximum reliability
score of 10, since the information in WordNet is provided
by domain experts and virtually devoid of noise.
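As a sketch of how this correspondence can be realized in code, the snippet below reads hypernym links through NLTK's WordNet interface and rewrites them as ConceptNet-style IsA triples with the maximum score of 10 (Table 2, first row). It assumes NLTK and its WordNet corpus are installed and is an illustration of the mapping, not the system's actual query layer.

from nltk.corpus import wordnet as wn

def wordnet_isa_triples(word, score=10):
    # Hypernymy in WordNet corresponds to the IsA relation in ConceptNet (Table 2).
    triples = []
    for synset in wn.synsets(word):
        for hyper in synset.hypernyms():
            head = hyper.lemma_names()[0].replace("_", " ")
            triples.append(("IsA", word, head, score))
    return triples

# e.g. wordnet_isa_triples("dog") may include ("IsA", "dog", "canine", 10)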
In our implementation we use WordNet version 3,
contributing definitional relations involving around 117,000
synsets. Another thing to note here is that, in ConceptNet version 5, WordNet already constitutes one of the
main incorporated data sources. This means that, in
case we switch from ConceptNet version 4 to version 5,
our approach of accessing WordNet would be obsolete.
3.3 Initialization
At the start of a run, the population of size Sizepop is initialized (Algorithm 2) with individuals created through
a procedure that we call random semantic network generation (Algorithm 3), capable of assembling random
semantic networks of any given size.
Figure 2 presents an example of a random semantic
network created via this procedure. This works by starting from a network comprising a sole concept randomly
picked from commonsense knowledge bases and running
a semantic network expansion algorithm that
1. randomly picks a concept in the given network (e.g.
human);
2. compiles a list of relations, from commonsense knowledge bases, that the picked concept can be involved in
(e.g. CapableOf(human, think), Desires(human, eat),
. . . );
3. appends to the network a relation randomly picked
from this list, together with the other involved concept; and
4. repeats this process until a given number of concepts
have been appended to the network, or a set timeout
Counttimeout has been reached (as a failsafe for situations where there are not enough relations involving
the concepts in the network being created).
It is very important to note here that even though it is grown in a random manner, the generated network itself is entirely meaningful, because it is a combination of
meaningful pieces of information harvested from commonsense knowledge bases.
The initialization algorithm depends upon the parameters of Sizenetwork , the intended number of concepts in the randomly created semantic networks, and
Scoremin , the minimum ConceptNet relation score that
should be satisfied by the retrieved relations (Table 1).
Fig. 2: The process of random semantic network generation, starting with a single random concept in (a) and
proceeding with (b), (c), (d), (e), adding new random concepts from the set of concepts related to existing ones.
Algorithm 2 Procedure for the creation of the initial random population.
1: procedure InitializePopulation(Sizepop, Sizenetwork, Scoremin, Counttimeout)
2:   initialize P                                          ▷ The return array
3:   for Sizepop times do
4:     r ← RandomNetwork(Sizenetwork, Scoremin, Counttimeout)   ▷ Generate a new random network
5:     AppendTo(P, r)
6:   end for
7:   return P
8: end procedure
Algorithm 3 The random semantic network generation algorithm. The algorithm is presented here in a form simpler than the actual implementation, for the sake of clarity.
1: procedure RandomNetwork(Sizenetwork, Scoremin, Counttimeout)
2:   initialize net                                        ▷ Empty return network
3:   initialize c                                          ▷ Random initial seed concept
4:   for Counttimeout times do
5:     c ← RandomConcept(Scoremin)
6:     rels ← InvolvedRelations(c)
7:     if Size(rels) ≥ Sizenetwork then
8:       AppendTo(net, c)
9:       break for                                         ▷ Favor a seed with more than a few relations
10:    end if
11:  end for
12:  t ← 0
13:  repeat
14:    c ← RandomConceptIn(net)
15:    rels ← InvolvedRelations(c)                         ▷ The set of relations involving c
16:    r ← RandomRelationIn(rels)
17:    if Score(r) ≥ Scoremin then
18:      AppendTo(net, r)                                  ▷ Append to the network net the relation r and its involved concepts
19:    end if
20:    t ← t + 1
21:  until Size(net) ≥ Sizenetwork or t ≥ Counttimeout
22:  return net
23: end procedure
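A minimal Python rendering of the growth loop of Algorithm 3 is sketched below. The knowledge-base handle kb, with its random_concept(score_min) and relations_involving(concept) methods, is a hypothetical stand-in for the ConceptNet/WordNet queries described in Sect. 3.2; relations are assumed to be (relation, concept1, concept2, score) tuples.

import random

def random_network(kb, size_network, score_min, count_timeout):
    # Grow a meaningful network by repeatedly attaching a commonsense relation
    # of a randomly picked existing concept (Algorithm 3, simplified).
    triples, concepts = set(), set()
    concepts.add(kb.random_concept(score_min))               # seed concept
    for _ in range(count_timeout):
        if len(concepts) >= size_network:
            break
        focus = random.choice(sorted(concepts))               # existing concept to expand
        candidates = [r for r in kb.relations_involving(focus) if r[3] >= score_min]
        if not candidates:
            continue
        rel, c1, c2, _ = random.choice(candidates)             # attach one relation
        triples.add((rel, c1, c2))
        concepts.update((c1, c2))
    return triples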
3.4 Fitness measure
After the initial generation is populated by individuals
created by the random semantic network generation
algorithm that we outlined, the algorithm proceeds by
assigning fitness values to each individual. Since our approach constitutes the first instance of a semantic network-based EA, it falls on us to introduce fitness measures of
interest for its validation.
As an example for showcasing our approach, in
Sect. 4, we define a fitness measure based on analogical
similarity to an existing semantic network, giving rise to
spontaneous generation of semantic networks that are in
each generation more and more structurally analogous
to a given network.
In general terms, a direct and very interesting application of our approach would be to devise realistically
formed fitness functions modeling selectionist theories
of knowledge, which remain untested until this time.
One such theory is the evolutionary epistemology theory of Campbell [3], which describes the development
of human knowledge and creativity through selectionist principles, such as the blind variation and selective
retention (BVSR) principle.
It is also possible to make the inclusion of certain concepts in the evolving semantic networks a requirement,
allowing the discovery of networks formed around a given
set of seed concepts. This can also be achieved by
starting the initialization procedure (Algorithm 3) with
the given seed concepts.
After all the individuals in the current generation are
assigned fitness values, the algorithm proceeds with the
creation of the next generation of individuals through
variation operators (Algorithm 5). But before this, the
algorithm has to apply selection to pick individuals
from the current population that will be “surviving” to
produce offspring.
3.5 Selection
After the assignment of fitness values, individuals are
replaced with offspring generated via variation operators
applied on selected parents. We employ tournament
selection, because it is better at preserving population
diversity13 and allowing selection pressure to be adjusted
through simple parameters [38].
Tournament selection involves, for each selection
event, running “tournaments” among a group of Sizetourn
randomly selected individuals. Individuals in the tournament pool then challenge each other in pairs and the
individual with the higher fitness will win with probability Probwin. This method simulates biological mating
patterns in which two members of the same sex compete
to mate with a third one of a different sex for the recombination of genetic material. Individuals with higher fitness have a better chance of being selected, but an individual with low fitness still has a chance, however small,
to produce offspring. Adjusting parameters Sizetourn
and Probwin (Table 1) gives us an intuitive and straightforward way to adjust the selection pressure on both
strong and weak individuals.
In our implementation, we also allow reselection,
meaning that the same individual from a particular
generation can be selected more than once to produce
offspring in different combinations. Algorithm 4 gives an
overview of the selection procedure that we implement.
Algorithm 4 Implemented selection algorithm.
1: procedure Select(P(t), φ(t), Sizetourn, Probwin)
2:   w ← RandomMember(P(t))                                ▷ Current winner
3:   for Sizetourn − 1 times do
4:     o ← RandomMember(P(t))                              ▷ The next opponent
5:     if LookupFitness(φ(t), o) ≥ LookupFitness(φ(t), w) then
6:       if RandomReal(0, 1) ≤ Probwin then
7:         w ← o                                           ▷ Opponent defeats current winner
8:       end if
9:     end if
10:  end for
11:  return w
12: end procedure
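In Python, the same selection event can be sketched as follows, assuming the population is a list of individuals with a parallel list of fitness values; this is an illustrative rendering of Algorithm 4 rather than the actual code.

import random

def tournament_select(population, fitnesses, size_tourn, prob_win):
    # A random current winner is challenged by size_tourn - 1 random opponents;
    # a fitter (or equally fit) opponent takes over with probability prob_win.
    winner = random.randrange(len(population))
    for _ in range(size_tourn - 1):
        opponent = random.randrange(len(population))
        if fitnesses[opponent] >= fitnesses[winner] and random.random() <= prob_win:
            winner = opponent
    return population[winner]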
3.6 Memetic variation operators
Variation operators form the last step in the cycle of our algorithm by creating the next generation of individuals before going back to the step of fitness evaluation (Algorithm 1).
13
Diversity, in EA, is a measure of homogeneity of the individuals in the population. A drop in diversity indicates an increased number of identical individuals, which is not desirable for the progress of evolution.
As we mentioned in Sect. 3.2, our representation
does not permit arbitrary connections between different
nodes in the network and requires special variation operators that should respect the commonsense structure
of represented knowledge.
In the following sections, we present the commonsense crossover and commonsense mutation operators
that we set up specific to semantic networks.
Using these operators, the next step in the cycle of
our algorithm is the creation of the offspring through
variation (Algorithm 5). Crossover is applied to parents
selected from the population until Sizepop × Probrec
offspring are created (Table 1), where each crossover
event creates two offspring from two parents.
Following the tradition in the GP field [23], we design
the variation process such that the offspring created
by crossover do not undergo mutation. The mutation
operator is applied only to the rest of the individuals that
are copied, or “reproduced”, directly from the previous
generation.
For generating the remaining part of the population,
we reproduce Sizepop × (1 − Probrec) − 1 selected individuals, and make these subject to mutation. We employ elitism: the last individual (hence the
remaining −1 in the previous equation) is a copy of the
one with the current best fitness.
Algorithm 5 Procedure for generating the next generation of individuals.
1: procedure NextGeneration(P(t), φ(t), Sizepop, Sizetourn, Probwin, Probrec, Probmut, Scoremin, Counttimeout)
2:   initialize N                                          ▷ The return array
3:   c ← Sizepop Probrec / 2                               ▷ Number of crossover events
4:   r ← Sizepop − 2c                                      ▷ Number of reproduction events
5:   for c times do
6:     p1 ← Select(P(t), φ(t), Sizetourn, Probwin)
7:     p2 ← Select(P(t), φ(t), Sizetourn, Probwin)
8:     o1, o2 ← Crossover(p1, p2)                          ▷ Crossover the two parents
9:     AppendTo(N, o1)                                     ▷ Two offspring from each crossover
10:    AppendTo(N, o2)
11:  end for
12:  for r − 1 times do
13:    m ← Select(P(t), φ(t), Sizetourn, Probwin)
14:    m ← Mutate(m, Probmut)                              ▷ Mutate an individual
15:    AppendTo(N, m)
16:  end for
17:  AppendTo(N, BestIndividual(P(t), φ(t)))               ▷ Elitism
18:  return N
19: end procedure
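The generation-assembly step of Algorithm 5 can be sketched in Python as below; select, crossover and mutate stand for the operators of Sects. 3.5 and 3.6 and are assumed rather than defined here, with mutation applying its own probability internally.

def next_generation(population, fitnesses, params, select, crossover, mutate):
    # Crossover offspring, mutated reproductions, and one elite copy (Algorithm 5).
    new_pop = []
    n_cross = int(len(population) * params["prob_rec"]) // 2   # crossover events
    n_repro = len(population) - 2 * n_cross                     # reproduction events
    for _ in range(n_cross):
        p1 = select(population, fitnesses, params)
        p2 = select(population, fitnesses, params)
        new_pop.extend(crossover(p1, p2))                       # two offspring per event
    for _ in range(n_repro - 1):
        m = select(population, fitnesses, params)
        new_pop.append(mutate(m, params))                       # mutated reproduction
    best = population[max(range(len(population)), key=fitnesses.__getitem__)]
    new_pop.append(best)                                        # elitism
    return new_pop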
3.6.1 Commonsense crossover
We introduce two types of commonsense crossover that are tried in sequence by the variation algorithm. The first type attempts a sub-graph interchange between two selected parents similar to common crossover in standard GP; and where this is not feasible due to the commonsense structure of relations forming the parents, the second type falls back to a combination of both parents into a new offspring.
3.6.2 Type I Crossover (Subgraph Crossover)
Firstly, a pair of concepts, one from each parent, that
are interchangeable 14 are selected as crossover concepts,
picked randomly out of all possible such pairs.
For instance, for the parent networks in Figure 3 (a)
and (b), bird and airplane are interchangeable, since
they can replace each other in the relations CapableOf(·,
fly) and AtLocation(·, air).
In each parent, a subgraph is formed, containing:
1. the crossover concept;
2. the set of all relations, and associated concepts, that are not common with the other crossover concept.
For example, in Figure 3 (a), HasA(bird, feather) and AtLocation(bird, forest); and in Figure 3 (b), HasA(airplane, propeller), MadeOf(airplane, metal), and UsedFor(airplane, travel); and
3. the set of all relations and concepts connected to those found in the previous step, excluding the ones that are also one of those common with the other crossover concept.
For example, in Figure 3 (a), including PartOf(feather, wing) and PartOf(tree, forest); and in Figure 3 (b), including MadeOf(propeller, metal); but excluding the concept fly in Figure 3 (a), because of the relation CapableOf(·, fly).
This, in effect, forms a subgraph of information specific to the crossover concept, which is insertable into
the other parent. Any relations between the subgraph
and the rest of the network not going through the crossover concept are severed (e.g. UsedFor(wing, fly) in
Figure 3 (a)).
The two offspring are formed by exchanging these
subgraphs between the parent networks (Figure 3 (c)
and (d)).
14
We define two concepts from different semantic networks as interchangeable if both can replace the other in all, or part, of the relations the other is involved in, queried from commonsense knowledge bases.
3.6.3 Type II Crossover (Graph Merging Crossover)
Given two parent networks, such as Figure 4 (a) and (b), where no interchangeable concepts between these two can be located, the system falls back to the simpler type II crossover. A concept from each parent that is attachable15 to the other parent is selected as a crossover concept. The two parents are merged into an offspring by attaching a concept in one parent to another concept in the other parent, picked randomly out of all possible attachments (CreatedBy(art, human) in Figure 4 (c); another possibility is Desires(human, joy)). The second offspring is formed randomly in the same way. In the case that no attachable concepts are found, the parents are merged as two separate clusters within the same individual.
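The two feasibility tests underlying these crossover operators can be phrased over a triple-based network as in the sketch below, which assumes hypothetical knowledge-base queries kb.relation_exists(relation, c1, c2) and kb.relation_types(); it illustrates the definitions in footnotes 14 and 15, not the authors' implementation.

def can_replace(kb, network_triples, old, new):
    # Interchangeability (footnote 14): `new` can stand in for `old` in the
    # relations `old` participates in (here we require all of them, although
    # the definition also allows a partial replacement).
    involved = [t for t in network_triples if old in (t[1], t[2])]
    substituted = [(rel, new if c1 == old else c1, new if c2 == old else c2)
                   for rel, c1, c2 in involved]
    return bool(involved) and all(kb.relation_exists(*t) for t in substituted)

def is_attachable(kb, network_concepts, concept):
    # Attachability (footnote 15): at least one commonsense relation links the
    # concept to some concept already in the network.
    return any(kb.relation_exists(rel, a, b)
               for other in network_concepts
               for rel in kb.relation_types()
               for a, b in ((concept, other), (other, concept)))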
3.6.4 Commonsense mutation
We introduce several types of commonsense mutation
operators that modify a parent by means of information
from commonsense knowledge bases.
For each mutation to be performed, the type is picked
at random with uniform probability. If the selected type
of mutation is not feasible due to the commonsense
structure of the parent, another type is again picked.
In the case that a set timeout of Counttimeout trials
has been reached without any operation, the parent is
returned as it is.
3.6.5 Type I (Concept Attachment)
A new concept randomly picked from the set of concepts
attachable to the parent is attached through a new
relation to one of existing concepts (Figure 5 (a) and
(b)).
3.6.6 Type IIa (Relation Addition)
A new relation connecting two existing concepts in the
parent is added, possibly connecting unconnected clusters within the same network (Figure 5 (c) and (d)).
3.6.7 Type IIb (Relation Deletion)
A randomly picked relation in the parent is deleted,
possibly leaving unconnected clusters within the same
network (Figure 5 (e) and (f)).
15
We define a distinct concept as attachable to a semantic
network if at least one commonsense relation connecting the
concept to any of the concepts in the network can be discovered
from commonsense knowledge bases.
Fig. 3: Commonsense crossover type I (subgraph crossover). (a) Parent 1, centered on the concept bird; (b) Parent
2, centered on the concept airplane; (c) Offspring 1; (d) Offspring 2.
Fig. 4: Commonsense crossover type II (graph merging crossover). (a) Parent 1; (b) Parent 2; (c) Offspring, merging
by the relation CreatedBy(art, human). If no concepts attachable through commonsense relations are encountered,
the offspring is formed by merging the parent networks as two separate clusters within the same semantic network.
3.6.8 Type IIIa (Concept Addition)
A randomly picked new concept is added to the parent
as a new cluster (Figure 5 (g) and (h)).
3.6.9 Type IIIb (Concept Deletion)
A randomly picked concept is deleted with all the relations it is involved in, possibly leaving unconnected
clusters within the same network (Figure 5 (i) and (j)).
3.6.10 Type IV (Concept Replacement)
A concept in the parent, randomly picked from the set
of those with at least one interchangeable concept, is
replaced with one of its interchangeable concepts, again
randomly picked. Any relations left unsatisfied by the
new concept are deleted (Figure 5 (k) and (l)).
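The dispatch logic described above can be sketched as follows; the operators mapping and the kb handle are placeholders for the type I-IV operations of Sects. 3.6.5-3.6.10 and the commonsense knowledge-base wrapper, respectively.

import random

def mutate(individual, kb, prob_mut, count_timeout, operators):
    # `operators` maps a mutation type name to a function(individual, kb) that
    # returns a mutated copy, or None when that type is infeasible for this parent.
    if random.random() > prob_mut:
        return individual
    for _ in range(count_timeout):
        op = random.choice(list(operators.values()))      # uniform choice of type
        mutated = op(individual, kb)
        if mutated is not None:
            return mutated
    return individual                                      # timeout: return parent unchanged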
4 Analogy as a fitness measure
For experimenting with our approach, we select analogical reasoning as an initial application area, by using
analogical similarity as our fitness measure.
This constitutes an interesting choice for evaluating
our work, because it not only validates the viability of
the novel algorithm, but also produces results of interest
for the fields of analogical reasoning and computational
creativity.
4.1 Analogies and creativity
There is evidence that analogical reasoning is at the core
of higher-order cognition, and it enters into creative discovery, problem-solving, categorization, and learning
[21]. Analogy-making ability is extensively linked with
creative thought and regularly plays a role in creativity
expressed in arts and sciences. Boden [4] classifies analogy as a form of combinational creativity, noting that it
works by producing unfamiliar combinations of familiar
ideas.
In addition to literary use of metaphors and allegories in written language, analogies often constitute
the basis of composition in all art forms including visual
or musical. For example, in classical music, it is highly
common to formulate interpretations of a composer’s
work in terms of tonal allegories [5]. In visual arts, examples of artistic analogy abound, ranging from allegorical
compositions of Renaissance masters such as Albrecht
Dürer, to modern usage in film, such as the many layers
of allegory in Stanley Kubrick’s 2001: A Space Odyssey
[37].
In science, analogies have been used to convey revolutionary theories and models. A key example of analogybased explanations is Kepler’s explanation of the laws of
heliocentric planetary motion with an analogy to light
radiating from the Sun16 .
Another instance is Rutherford’s analogy between
the atom and the Solar System, where the internal
structure of the atom is explained by electrons circling
the nucleus in orbits like planets around the Sun. This
model, which was later improved by Bohr to give rise to
the Rutherford-Bohr model, was one of the “planetary
models” of the atom, where the electromagnetic force
between oppositely charged particles were presented
analogous to the gravitational force between planetary
bodies. Earlier models of the atom were, also notably,
explained using analogies, including “plum pudding”
model of Thomson and the “billiard ball” model of
Dalton (Figure 7).
In contemporary studies, analogical reasoning is
mostly seen through a structural point of view, framed
by the structure mapping theory based on psychology
[17].
Other approaches to analogical reasoning include the
view of Hofstadter [20] of analogy as a kind of high-level
perception, where one situation is perceived as another
one. Veale and Keane [44] extend the work in analogical reasoning to the more specific case of metaphors,
which describe the understanding of one kind of thing
in terms of another. A highly related cognitive theory is
the conceptual blending idea developed by Fauconnier
and Turner [13], which involves connecting several existing concepts to create new meaning, operating below
the level of consciousness as a fundamental mechanism
of cognition. An implementation of this idea is given
by Pereira [36] as a computational model of abstract
thought, creativity, and language.
Computational approaches within the analogical reasoning field have been mostly concerned with the mapping problem [15]. Put in a different way, models developed and implemented are focused on constructing
mappings between two given source and target domains
(Figure 6 (a)). This focus neglects the problem of retrieval or recognition of a new source domain, given a
target domain, or the other way round.
By combining our algorithm for the evolution of
semantic networks with a fitness measure based on analogical similarity, we can essentially produce a method
to address this creativity-related subproblem of analogical reasoning, which has remained, so far, virtually untouched.
16
Kepler argued, in his Astronomia Nova, that just as light can travel undetectably on its way between the source and destination, and yet illuminate the destination, so can motive force be undetectable on its way from the Sun to a planet, yet affect the planet's motion.
Fig. 5: Examples illustrating commonsense mutation. (a) Mutation type I (before); (b) Mutation type I (after);
(c) Mutation type IIa (before); (d) Mutation type IIa (after); (e) Mutation type IIb (before); (f) Mutation type
IIb (after); (g) Mutation type IIIa (before); (h) Mutation type IIIa (after); (i) Mutation type IIIb (before); (j)
Mutation type IIIb (after); (k) Mutation type IV (before); (l) Mutation type IV (after).
We accomplish this by
1. providing our evolutionary algorithm with a “reference” semantic network that will represent the input
to the system; and
2. running the evolutionary process under a fitness measure quantifying analogical similarity to the given
“reference” network.
This, in effect, creates a "survival of the fittest analogies" process where, starting from a random initial population of semantic networks, one obtains networks that become gradually more analogous to the given reference network.
In our implementation, we define the fitness measure
to take the reference semantic network as the base and
the individual whose fitness is just being evaluated as
the target. In other terms, this means that the system
produces structurally analogous target networks for a
given base network. From a computational creativity
perspective, an interpretation for this would be the
“imagining”, or creation, of a novel case that is analogous
to a case at hand.
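In code, this reduces to a thin wrapper that scores every individual against the fixed reference network; sme_similarity below is a placeholder for the structure-mapping score produced by the SME implementation of Sect. 4.2.

def analogical_fitness(base_network, sme_similarity):
    # Returns a fitness function that scores an individual (the target) by its
    # structural analogy to the fixed reference network (the base).
    def fitness(target_network):
        return sme_similarity(base=base_network, target=target_network)
    return fitness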
This designation of the base and target roles for the
two networks is an arbitrary choice, and it is straightforward to define the fitness function in the other direction.
So, if the system would be set up such that it would
produce base networks, given the target network, one
can then interpret this as the the classical retrieval process in analogical reasoning, where one is supposed to
retrieve a base case that is analogous to the currently
encountered case, for using it as a basis for solution.
If one subscribes to the “retrieval of a base case”
interpretation, since the ultimate source of all the information underlying the generated networks is the commonsense knowledge bases, one can treat this source of
knowledge as a part of the system’s memory, and see it
as a “generic case base” from which the base cases are
retrieved.
Fig. 6: Contribution to computational analogy-making. (a) Existing work in the field, restricted to finding analogical mappings between a given pair of domains. (b) Our novel approach, capable of creating novel analogies as well as the analogous case itself.
On the other hand, if we consider the “imagination of a novel case” interpretation, our system, in fact,
replicates a mode of behavior observed in psychology research where an analogy is not always simply recognized
between an original case and a retrieved analogous case
from memory, but the analogous case can sometimes be
created together with the analogy [7].
Considering the depth of commonsense knowledge
sources, this creation process is effectively open-ended;
and due to randomly performed queries, it produces
different analogous cases in each run. This capability of
open-ended creation of novel analogous cases is, to our
knowledge, the first of its kind and makes our approach
interesting for the analogical reasoning and computational creativity fields.
The random nature of population initialization and
the breadth of information in ConceptNet and WordNet
virtually ensure that the generated semantic networks
are in different domains from the one supplied as the
input. However, it is possible to formulate fitness measures that include a measure of semantic similarity 17
in addition to analogical similarity and to penalize networks that are semantically too similar to the source
network.
4.2 Structure mapping engine
The Structure Mapping Engine (SME) [12] is an analogical matching algorithm firmly based on the psychological
structure mapping theory of Gentner. It is a very robust
algorithm, having been used in many practical applications by a variety of research groups, and it has been
considered the most influential work on the modeling of
analogy-making [15].
An important characteristic of SME is that it ignores
surface features and it can uncover mappings between
potentially very distant domains, if they have a similar
representational structure.
A typical example given for illustrating the working
of SME is the analogy between the Rutherford-Bohr
atom model and the Solar System, which we already
mentioned. Using a predicate calculus representation,
Figure 8 illustrates a structural mapping between these
domains.
Here we make use of our own implementation of SME
based on the original description by Falkenhainer et al.
[12] and adapt it to the simple structure of semantic
networks.
Using SME in this way necessitates the introduction of a mapping between the concept–relation based
structure of semantic networks and the predicate calculus based representation traditionally used in SME
applications.
A highly versatile mapping of this kind is given by Larkey
and Love [25]. Given information such as “Jim (a man)
loves Betty (a woman)”, one can transform the predicate calculus representation of loves(Jim, Betty),
gender(Jim, male), gender(Betty, female) into a
semantic network representation by converting predicates into nodes such as gender and loves; and creating
argument nodes for each argument of a predicate. This
kind of mapping makes it possible, theoretically, to represent arbitrarily complex information within the simple
representation framework of semantic networks. As an
example, one can represent meta-information such as
“John knows that Jim loves Betty”.
However, the approach of Larkey and Love [25] requires the creation of ad hoc “relation nodes” for the
representation of relations between concepts and the
usage of unlabeled directed edges. On the other hand,
the existing structure of the commonsense knowledge
bases that we interface extensively, mainly ConceptNet,
are based on nodes representing concepts and labeled
directed edges representing relations. In this representation, nodes can have arbitrary names but the names of
edges come from a limited set of basic relation names18 .
17
Readily available by using WordNet [34].
18
For ConceptNet version 4: IsA, HasA, PartOf, UsedFor, AtLocation, CapableOf, MadeOf, CreatedBy, HasSubevent, HasFirstSubevent, HasLastSubevent, HasPrerequisite, MotivatedByGoal, Causes, Desires, CausesDesire, HasProperty, ReceivesAction, DefinedAs, SymbolOf, LocatedNear, ObstructedBy, ConceptuallyRelatedTo, InheritsFrom.
Fig. 7: The Dalton (1805), Thomson (1904), Rutherford (1911), and Bohr (1913) models of the atom with their
corresponding analogies.
Table 3: Correspondences between SME predicate calculus statements [12] and semantic network structure that
we define for applying structure mapping to semantic
networks.
Predicate calculus        Semantic networks
Entity                    Concept (node)
Relation                  Relation (edge)
Attribute                 IsA or HasProperty relation
Function                  Not employed
Fig. 8: Representation of the Rutherford-Bohr atom
model – Solar System analogy as graphs: (a) Predicates
about the two domains. (b) Analogy as a mapping of
structure between the two domains.
Because of this, we take another approach for mapping between semantic networks and predicate calculus: we define a basic list of correspondences between the two representation schemes, where we treat “entities” as concepts, relations as relations, and attributes as IsA relations, and we exclude functions. Table 3 gives the list of correspondences that we employ in our SME implementation.
4.3 Results
With the implementation of analogical similarity-based
fitness measure that we described so far, we carried out
numerous experiments with reference networks representing different domains. In this part, we present the
results from two such experiments.
Table 4 provides an overview of the parameter values
that we used for conducting these experiments.
The selection of crossover and mutation probabilities for a particular application has been a traditional subject of debate in the EA literature [42]. Since the foundation of the field, the arguments have mainly centered on the relative importance of the crossover and mutation operators in the progress of evolution. For our approach, we decided to follow the somewhat established consensus in the graph-based EA field [35], which is dominated by genetic programming (GP) and in which the selection of parameters follows the pioneering work of Koza.
Thus, we use a crossover probability of Probrec = 0.85, similar to the high crossover probabilities (typically ≥ 0.9) encountered in the GP literature [23].
However, unlike the typical GP mutation value of
≤ 0.1, we employ a somewhat-above-average mutation
rate of Probmut = 0.15.
Due to the fact that our algorithm is the first attempt at having a graph-based evolutionary model of
memetics, this mutation rate is somewhat arbitrary
Table 4: Parameter set used during experiments. Refer to Table 1 for an explanation of parameters.
Evolution: Sizepop = 200, Probrec = 0.85, Probmut = 0.15
Semantic networks: Sizenetwork = 5, Scoremin = 2, Counttimeout = 10
Tournament selection: Sizetourn = 8, Probwin = 0.8
and is dependent on our subjective interpretation of
the mutation events in memetic processes. Nonetheless,
there is preliminary support for a high mutation rate in
memetics, where it has been postulated, for example by
Gil-White [18], that memes would have a high tendency
of mutation.
We select a population size of Sizepop = 200 individuals, and subject this population to tournament
selection with a tournament size of Sizetourn = 8 and a
winning probability Probwin = 0.8.
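For concreteness, the sketch below shows one common way to implement probabilistic tournament selection with these parameter values (an illustrative sketch only: the cascading-win interpretation of Probwin and the placeholder fitness values are our assumptions, not a description of our actual implementation):

```python
import random

# Illustrative sketch of probabilistic tournament selection with the Table 4
# parameters: tournament size 8 and winner probability 0.8. Fitness values
# here are random placeholders standing in for the SME-based scores.
SIZE_POP, SIZE_TOURN, PROB_WIN = 200, 8, 0.8

population = [{"id": i, "fitness": random.random() * 3.0} for i in range(SIZE_POP)]

def tournament_select(population):
    contestants = random.sample(population, SIZE_TOURN)
    # rank from best to worst; the best wins with probability 0.8, otherwise
    # the chance falls through to the next-ranked contestant, and so on
    for individual in sorted(contestants, key=lambda ind: ind["fitness"], reverse=True):
        if random.random() < PROB_WIN:
            return individual
    return max(contestants, key=lambda ind: ind["fitness"])  # fallback

parent_a, parent_b = tournament_select(population), tournament_select(population)
print(parent_a["id"], parent_b["id"])
```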
Using this parameter set, here we present the results
from two runs of experiment:
1. analogies generated for a network describing some
basic astronomical knowledge, shown in Figure 9;
and
2. analogies generated for a network describing familial
relations, shown in Figure 11.
For the first reference base network (Figure 9), after
a run of the algorithm for 35 generations, the system
produced the target network shown in Figure 10.
The produced target network exhibits an almost
one-to-one structural correspondence with the reference
network, missing only one node (mass in the original network) and two relations both pertaining to this missing
node (HasA(planet, mass) and HasProperty(matter,
mass)). The discovered analogy is remarkably inventive,
and draws a parallel between the Earth and an apple:
Just as the Earth is like an apple, planets are like fruits
and the solar system is like a tree holding these fruits.
Just as the solar system is a part of the universe, a tree
is a part of a forest.
It is an intuitive analogy and leaves us with the impression that it is comparable with the classic analogy
between the atom and the Solar System that we mentioned in the beginning of this section. Table 5 gives a
full list of all the correspondences.
For the second reference network (Figure 11), in a
run after 42 generations, our algorithm produced the
network shown in Figure 12.
The produced analogy can be again considered “creative”, drawing a parallel between human beings and
musical instruments. It considers a mother as a clarinet
and a father as a drum; and just as a mother is a woman
and a father a man, a clarinet is an instance of wind
instrument and a drum is an instance of percussion instrument. The rest of the correspondences also follow
in a somewhat intuitive way. Again, Table 6 gives a list
of correspondences.
We should note here that each of these two examples was hand-picked from a collection of approximately one hundred runs with the corresponding reference network,
chosen because they represent interesting analogies suggesting possible creative value. It is evidently a subjective judgment of what would be “interesting” to present
to our audience. This is a common issue in computational creativity research, recognized, for example, by
Colton and Wiggins [8] who introduce the term curation
coefficient as an informal subjective measure of typicality, novelty, and quality of the output from generative
algorithms.
During our experiments, we observed that under
the selected parameter set, the evolutionary process
approaches equilibrium conditions after approximately
50 generations. This behavior is typical and expected
in EA approaches and manifests itself with an initial
exponential or logarithmic growth in fitness that asymptotically approaches a fitness plateau, after which fitness
increasing events will be sporadic and negligible.
Figure 13 shows the progression of the average fitness
of the population and the fitness of the best individual
for each passing generation, during the course of one
of our experiments with the reference network in Figure 9, which lasted for 50 generations. We observe that
the evolution process asymptotically reaches a fitness
plateau after about 40 generations.
Coinciding with the progression of fitness values, we
observe, in Figure 14, the sizes of individual semantic
networks both for the best individual and as a population average. Just as in the fitness values, there is
a pronounced stabilization of the network size for the
best individual in the population, occurring around the
40th generation. While the value stabilizes for the best
individual, the population average for the network size
keeps a trend of (gradually slowing) increase.
Our interpretation of this phenomenon is that, once
the size of the best network becomes comparable with
the size of the given reference network (Figure 9, comprising 10 concepts and 11 relations) and the analogies
considered by the SME algorithm have already reached
a certain quality, further increases in the network size would not cause substantial improvement on the SME structural evaluation score. This is because the analogical mapping from the reference semantic network to the current best individual is already highly optimized and very close to the ideal case of a structurally one-to-one mapping (cf. Figure 9, 10 concepts, 11 relations, and Figure 10, 9 concepts, 9 relations).

Table 5: Experiment 1: Correspondences between the base and target networks, after 35 generations.
Concepts (base): earth, moon, planet, solar system, galaxy, universe, spherical, matter, mass, large object
Concepts (target): apple, leave, fruit, tree, forest, forest, green, —, seed, source of vitamin
Relations (base): HasA(earth, moon); HasProperty(earth, spherical); HasProperty(moon, spherical); IsA(earth, planet); IsA(planet, large object); AtLocation(planet, solar system); AtLocation(solar system, galaxy); PartOf(solar system, universe); MadeOf(planet, matter); HasA(planet, mass); HasProperty(matter, mass)
Relations (target): HasA(apple, leave); HasProperty(apple, green); HasProperty(leave, green); IsA(apple, fruit); IsA(fruit, source of vitamin); AtLocation(fruit, tree); AtLocation(tree, mountain); PartOf(tree, forest); —; HasA(fruit, seed); —

Table 6: Experiment 2: Correspondences between the base and target networks, after 42 generations.
Concepts (base): mother, father, woman, man, human, home, care, family, sleep, dream, female
Concepts (target): clarinet, drum, wind instrument, percussion instrument, instrument, music hall, perform glissando, —, make music, play instrument, member of orchestra
Relations (base): IsA(mother, woman); IsA(father, man); IsA(woman, human); AtLocation(human, home); IsA(man, human); PartOf(mother, family); PartOf(father, family); CapableOf(mother, care); CapableOf(human, sleep); HasSubevent(sleep, dream); IsA(woman, female)
Relations (target): IsA(clarinet, wind instrument); IsA(drum, percussion instrument); IsA(wind instrument, instrument); AtLocation(instrument, music hall); IsA(percussion instrument, instrument); —; —; CapableOf(clarinet, perform glissando); CapableOf(instrument, make music); HasSubevent(make music, play instrument); IsA(wind instrument, member of orchestra)
In general, our experiments demonstrate that, combined with the SME-based fitness measure, the algorithm we developed is capable of spontaneously creating
collections of semantic networks analogous to the one
given as reference. In most cases, our implementation
was able to reach extensive analogies within 50 generations and with reasonable computational resources; a typical experimental run took around 45 minutes on a medium-range laptop computer with an AMD Athlon II 2.2 GHz processor and 8 GB of RAM.
5 Conclusions
We presented a novel graph-based EA employing semantic networks as evolving individuals. The use of
semantic networks provides a simple yet powerful means
of representing pieces of evolving knowledge, giving us
a possibility to interpret this algorithm as an implementation of the idea of memetics. Because this work
constitutes a novel semantic network-based EA, we had
to establish the necessary crossover and mutation operators working on this representation.
We make extensive use of commonsense reasoning and commonsense knowledge bases, necessitated by the semantic network-based representation and the requirement that all operations should ensure meaningful conceptual relations. Put another way, we use a combination of random processes constrained by the non-random structural bounds of commonsense knowledge, under selection pressure of the defined fitness function.

Fig. 9: Experiment 1: Given semantic network, 10 concepts, 11 relations (base domain).
Fig. 11: Experiment 2: Given semantic network, 11 concepts, 11 relations (base domain).
Fig. 10: Experiment 1: Evolved individual, 9 concepts, 9 relations (target domain). The evolved individual is encountered after 35 generations, with fitness value 2.8. Concepts and relations of the individual not involved in the analogy are not shown here for clarity.
For evaluating the approach, we make use of SME
as the basis of a fitness function that measures analogical similarity. With the analogical similarity-based
fitness calculated between the reference network and the
evolving networks in the population, we create a system
capable of spontaneously generating networks analogous
to any given network. This system represents a first in
the analogical reasoning field, because current models
have been limited to only finding analogical mappings between two already existing networks.

Fig. 12: Experiment 2: Evolved individual, 10 concepts, 9 relations (target domain). The evolved individual is encountered after 42 generations, with fitness value 2.7. Concepts and relations of the individual not involved in the analogy are not shown here for clarity.
5.1 Limitations and future work
The most considerable limitation of this work comes
from our choice of using semantic networks instead of
a more powerful representation scheme. For example,
since we are using SME for experimenting with our approach, it would be highly desirable and logical to use
predicate calculus to represent evolving individuals. Instead, we limit the representation to semantic networks, and provide our own implementation of SME that we adapt to work on the simple directed graph structure of semantic networks.

Fig. 13: Progress of fitness during a typical run with parameters given in Table 4. Filled circles represent the best individual in a generation, while the empty circles represent population average.

Fig. 14: Progress of semantic network size during a typical run with parameters given in Table 4. Filled circles represent the best individual in a generation, while the empty circles represent population average. Network size is taken to be the number of relations (edges) in the semantic network.
This choice of limiting representation was mainly
directed by our reliance on ConceptNet version 4 as the
main commonsense knowledge base used in this study,
which is based on simple binary relations using a limited
set of relation types. This impedes the representation
of more complex information such as temporal relations
or causal connections between subgraphs. It should be
noted, however, that in its next version, the ConceptNet project has decided to move to a “hypergraph” representation, where one can have relations about other instances of relations between concepts. This can, in
effect, greatly increase the expressivity of the system.
Another issue in the current study is the selection
of parameter values for our EA implementation. Due
to the fact that our algorithm is a first attempt at
having a graph-based implementation of memetics, we
are faced with selecting mutation and crossover rates
without any antecedents. Even in theoretical studies
of cultural evolution, discussions of the frequency of
variation events are virtually nonexistent. This makes
our parameter values rather arbitrary, roughly guided
by the general conventions in the graph-based EA field.
For future work, it would be interesting to experiment with extensions of the simple SME-based fitness
measure that we have used. As semantic networks are
graphs, a straightforward possibility is to take graphtheoretical properties of candidate networks into account, such as the clustering coefficient or shortest path
length. With these kinds of constraints, selection pressure on the network structure can be adjusted in a more
controlled way.
Another highly interesting prospect with the EA
system would be to consider different types of mutation
and crossover operators, and to carry out the necessary study to ground the design of such operators in existing
theories of cultural transmission and variation. Combined with realistically formed fitness functions, one can
use such a system for modeling selectionist theories of
knowledge. Performing experiments with such a setup
could be considered a “memetic simulation” and comparable to computational simulations of genetic processes
performed in computational biology.
Besides the “memetic” interpretation, a more hands-on application that we foresee we can achieve in the
short-term is practical computational creativity. Already
with the SME-based fitness measure that we demonstrated in this article, it would be possible to create
systems for tasks such as story generation based on
analogies [45]. This would involve giving the system an
existing story as the input, and getting an analogous
story in another domain as the output. For doing this we
would need to define a structural representation scheme
of story elements, and, preferably an automated way of
translating between structural and textual representations.
Acknowledgements This work was supported by a JAE-Predoc fellowship from CSIC, and the research grants: 2009-SGR-1434 from the Generalitat de Catalunya, CSD2007-0022
from MICINN, and Next-CBR TIN2009-13692-C03-01 from
MICINN. We thank the three anonymous reviewers whose
input has considerably improved the article.
A semantic network-based evolutionary algorithm for computational creativity
References
1. Balkin, J.M.: Cultural Software: A Theory of Ideology. Yale University Press (1998)
2. Bedau, M.A., Snyder, E., Brown, C.T.: A comparison of evolutionary activity in artificial evolving
systems and in the biosphere. In: Husbands, P.,
Harvey, I. (eds.) Proceedings of the Fourth European Conference on Artificial Life, pp. 125–134. MIT
Press, Cambridge (1997)
3. Bickhard, M.H., Campbell, D.T.: Variations in variation and selection: the ubiquity of the variation-and-selective-retention ratchet in emergent organizational complexity. Foundations of Science 8, 215–282 (2003)
4. Boden, M.A.: Computer models of creativity. AI
Magazine 30(3), 23–34 (2009)
5. Chafe, E.: Tonal Allegory in the Vocal Music of J.
S. Bach. University of California Press (1991)
6. Chen, D., Aoki, T., Homma, N., Terasaki, T.,
Higuchi, T.: Graph-based evolutionary design of
arithmetic circuits. IEEE Transactions on Evolutionary Computation 6(1), 86–100 (2002)
7. Clement, J.: Observed methods for generating analogies in scientific problem solving. Cognitive Science
12, 563–586 (1988)
8. Colton, S., Wiggins, G.: Computational creativity:
The final frontier? In: de Raedt, C., Bessiere, C.,
Dubois, D., Doherty, P. (eds.) Proceedings of the
20th European Conference on Artificial Intelligence,
pp. 21–26. IOS Press, Amsterdam (2012)
9. Dawkins, R.: The Selfish Gene. Oxford University
Press, New York City (1976)
10. Dennett, D.C.: Darwin’s Dangerous Idea: Evolution
and the Meanings of Life. Simon & Schuster (1995)
11. Diochnos, D.I.: Commonsense reasoning and large
network analysis: A computational study of ConceptNet 4 (2013). URL http://arxiv.org/abs/1304.5863
12. Falkenhainer, B., Forbus, K.D., Gentner, D.: The
Structure-Mapping Engine: Algorithm and examples. Artificial Intelligence 41, 1–63 (1989)
13. Fauconnier, G., Turner, M.: The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. Basic Books, New York (2002)
14. Fellbaum, C.: WordNet: An Electronic Lexical
Database. MIT Press (1998)
15. French, R.M.: The computational modeling of
analogy-making. Trends in Cognitive Sciences 6(5),
200–205 (2002)
16. Gabora, L., Kaufman, S.B.: Evolutionary approaches to creativity, pp. 279–300. Cambridge
University Press (2010)
17. Gentner, D.: Structure-mapping: A theoretical
framework for analogy. Cognitive Science 7, 155–170
(1983)
18. Gil-White, F.: Let the meme be (a meme): insisting too much on the genetic analogy will turn it
into a straightjacket. In: Botz-Bornstein, T. (ed.)
Culture, Nature, Memes. Cambridge Scholars, Newcastle upon Tyne (2008)
19. Havasi, C., Speer, R., Alonso, J.: ConceptNet 3:
a flexible, multilingual semantic network for common sense knowledge. In: Proceedings of Recent
Advances in Natural Language Processing (2007)
20. Hofstadter, D.R.: Fluid concepts and creative analogies: Computer models of the fundamental mechanisms of thought. Basic Books, New York (1995)
21. Hofstadter, D.R.: Analogy as the core of cognition. In: Gentner, D., Holyoak, K.J., Kokinov, B.
(eds.) Analogical Mind: Perspectives from Cognitive
Science, pp. 499–538. MIT Press, Cambridge, MA
(2001)
22. Hofstadter, D.R., Dennett, D.C.: The Mind’s I: Fantasies and Reflections on Self and Soul. Basic, New
York (1981)
23. Koza, J.R., Keane, M.A., Streeter, M.J., Mydlowec,
W., Yu, J., Lanza, G.: Genetic Programming IV:
Routine Human-Competitive Machine Intelligence.
Kluwer Academic Publishers (2003)
24. Kuo, Y.L., Hsu, J.: Bridging common sense knowledge bases with analogy by graph similarity. In:
Workshops at the Twenty-Fourth AAAI Conference
on Artificial Intelligence (2010)
25. Larkey, L.B., Love, B.C.: CAB: Connectionist analogy builder. Cognitive Science 27(2003), 781–794
(2003)
26. Mabu, S., Hirasawa, K., Hu, J.: A graph-based
evolutionary algorithm: Genetic network programming (GNP) and its extension using reinforcement
learning. Evolutionary Computation 15(3), 369–398
(2007)
27. Manurung, H.M.: An evolutionary algorithm approach to poetry generation. Ph.D. thesis, University of Edinburgh, Edinburgh, UK (2003)
28. Martindale, C., Locher, P., Petrov, V.M.: Evolutionary and Neurocognitive Approaches to Aesthetics,
Creativity and the Arts. Foundations and Frontiers
of Aesthetics. Baywood Publishing Company (2007)
29. McCarthy, J.: Programs with common sense. In:
Symposium on Mechanization of Thought Processes.
National Physical Laboratory, Teddington, England
(1958)
30. Minsky, M.: Emotion Machine: Commonsense
Thinking, Artificial Intelligence, and the Future of
the Human Mind. Simon & Schuster, New York,
NY (2006)
31. Montes, H.A., Wyatt, J.L.: Graph representation
for program evolution: An overview. Tech. rep., University of Birmingham School of Computer Science
(2004)
32. Moscato, P.A., Cotta, C., Mendes, A.: Studies in
Fuzziness and Soft Computing – New Optimization
Techniques in Engineering, chap. Memetic Algorithms. Springer, New York (2004)
33. Mueller, E.T.: Commonsense Reasoning. Morgan
Kaufmann (2006)
34. Pedersen, T., Patwardhan, S., Michelizzi, J.: WordNet::Similarity: measuring the relatedness of concepts. In: Demonstration Papers at HLT-NAACL
2004, pp. 38–41. Association for Computational Linguistics (2004)
35. Pereira, F.B., Machado, P., Costa, E., Cardoso, A.:
Graph based crossover — A case study with the
busy beaver problem. In: Proceedings of the Genetic
and Evolutionary Computation Conference, vol. 2,
pp. 1149–1155. Morgan Kaufmann (1999)
36. Pereira, F.C.: Creativity and Artificial Intelligence:
A Conceptual Blending Approach. Mouton de
Gruyter, New York (2007)
37. Pezzotta, E.: The metaphor of dance in Stanley
Kubrick’s 2001: A Space Odyssey, A Clockwork Orange and Full Metal Jacket. Journal of Adaptation
in Film & Performance 5(1) (2012)
38. Pohlheim, H.: Genetic and evolutionary algorithm
toolbox documentation (2006). URL http://www.
geatbx.com/docu/algindex-02.html
39. Poli, R.: New Ideas in Optimisation, chap. Parallel
Distributed Genetic Programming. McGraw-Hill
(1999)
40. Skusa, A., Bedau, M.A.: Towards a comparison of
evolutionary creativity in biological and cultural
evolution. In: Standish, R., Abbass, H.A., Bedau,
M.A. (eds.) Artificial Life VIII, pp. 233–242. MIT
Press (2002)
41. Sowa, J.F.: Knowledge Representation: Logical, Philosophical, and Computational Foundations.
Brooks/Cole, Pacific Grove, CA (2000)
42. Spears, W.M.: Foundations of Genetic Algorithms 2,
chap. Crossover or mutation?, pp. 221–237. Morgan
Kaufmann (1992)
43. Teller, A.: Algorithm evolution with internal reinforcement for signal understanding. Ph.D. thesis,
Carnegie Mellon University, Pittsburgh, PA (1998)
44. Veale, T., Keane, M.: The competence of suboptimal structure mapping on hard analogies. In:
Proceedings of the 15th International Joint Conference on AI. Morgan Kaufmann, San Mateo, CA
(1997)
45. Zhu, J., Ontañón, S.: Evaluating analogy-based
story generation: An empirical study. In: Proceedings of the 9th Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE 2013). Boston, MA (2013)
| 9 |
A SUPERVISED STDP-BASED TRAINING ALGORITHM FOR LIVING NEURAL
NETWORKS
Yuan Zeng∗ Kevin Devincentis† Yao Xiao‡ Zubayer Ibne Ferdous∗
Xiaochen Guo∗ Zhiyuan Yan∗ Yevgeny Berdichevsky∗§
arXiv:1710.10944v3 [] 21 Mar 2018
Lehigh University, ∗ Electrical and Computer Engineering Department, § Bioengineering Department
† Carnegie Mellon University, Electrical and Computer Engineering Department
‡ University of Science and Technology of China, School of Gifted Young
ABSTRACT
Neural networks have shown great potential in many
applications like speech recognition, drug discovery, image
classification, and object detection. Neural network models are inspired by biological neural networks, but they are
optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using
living neural networks in vitro as the basic computational elements for machine learning applications. A new supervised
STDP-based learning algorithm is proposed in this work,
which considers neuron engineering constraints. A 74.7% accuracy is achieved on the MNIST benchmark for handwritten
digit recognition.
Index Terms— Spiking neural network, Spike timing dependent plasticity, Supervised learning, Biological neural network
1. INTRODUCTION AND MOTIVATION
Artificial Neural Network (ANN) and Spiking Neural
Network (SNN) are two brain inspired computational models, which have shown promising capabilities for solving
problems such as face detection [1] and image classification
[2]. Computer programs based on neural networks can defeat
professional players in the board game Go [3] [4]. ANN relies on numerical abstractions to represent both the states of
the neurons and the connections among them, whereas SNN
uses spike trains to represent inputs and outputs and mimics
computations performed by neurons and synapses [5]. Both
ANN and SNN are models extracted from biological neuron
behaviors and optimized to perform machine learning tasks
on digital computers.
In contrast, the proposed work explores whether biological living neurons in vitro can be directly used as basic computational elements to perform machine learning tasks. Living neurons can perform “computation” naturally by transferring spike information through synapses, and in terms of energy they are about 100,000× more efficient than hardware evaluation [6]. Living neurons have small sizes (4 to 100 micrometers in diameter) [7] and adapt to changes.
While precise control of living neural networks is challenging, recent advances in optogenetics, genetically encoded
neural activity indicators, and cell-level micropatterning open
up possibilities in this area [8] [9] [10]. Optogenetics can
label individual neurons with different types of optically
controlled channels, which equip in vitro neural networks with optical interfaces. Patterned optical stimulation and high-speed optical detection allow simultaneous access to thousands of in vitro neurons. In addition, the invention of micropatterning [11] enables modularized system design.
This work is supported in part by a CORE grant from Lehigh University and in part by the National Science Foundation under Grant ECCS-1509674.
To the best of our knowledge, this work is the first to
explore the possibility of using living neurons for machine
learning applications. By considering neuron engineering design constraints, a new algorithm is proposed for easy training
in future biological experiments. A fully connected spiking
neural network is evaluated on MNIST dataset using NEURON simulator [12]. A 74.7% accuracy is obtained based on
a biologically-plausible SNN model, which is a promising result that demonstrates the feasibility of using living neuron
networks to compute.
2. METHODS
A spiking neural network is a model that closely represents biological neuron behavior. In a biological neural network, neurons are connected through plastic synapses. A spike of a pre-synaptic neuron changes the membrane potential of a post-synaptic neuron, and the impact of spikes arriving at different times from all of the pre-synaptic neurons is accumulated at the post-synaptic neuron. Post-synaptic spikes are generated when the membrane voltage of a neuron exceeds a certain threshold. In an SNN, information is represented as a series of spikes, and the accumulative effect of the pre-synaptic spikes is also modeled.
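The accumulate-and-fire behaviour described above can be illustrated with a deliberately simplified discrete-time sketch (this is only an illustration of the general principle and is not the Hodgkin-Huxley model used in this work; all constants are arbitrary illustrative values):

```python
# Deliberately simplified accumulate-and-threshold illustration of the behaviour
# described above. This is NOT the Hodgkin-Huxley model used in the paper; all
# constants here are arbitrary illustrative values.
V_REST, V_THRESHOLD, LEAK = 0.0, 1.0, 0.9

def run(input_spike_trains, weights, steps):
    """input_spike_trains[i][t] is 1 if pre-synaptic neuron i spikes at step t."""
    v = V_REST
    output_spikes = []
    for t in range(steps):
        v = V_REST + LEAK * (v - V_REST)                      # passive decay
        v += sum(w * train[t] for w, train in zip(weights, input_spike_trains))
        if v >= V_THRESHOLD:                                  # threshold crossing
            output_spikes.append(t)
            v = V_REST                                        # reset after the spike
    return output_spikes

trains = [[1, 0, 0, 1, 0], [1, 0, 1, 1, 0], [0, 1, 0, 1, 0]]
print(run(trains, weights=[0.4, 0.4, 0.3], steps=5))
```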
This work models biological neural networks using the
Hodgkin-Huxley (HH) neuron model [13] and spike-based
data representation. One major difference between the proposed work and prior SNN based models [14] [15] [16] [17]
[18] [19] is that this work aims to explore the potential of using biological living neurons as the functional devices, while
prior works focused on the computational capability of the
neuron model. Neuron engineering design constraints lead to
different design choices for input encoding, network topology,
neuron model, learning rule, and model parameters.
2.1. Network topology
Neural connectivity in the human brain is complex and has
different types of topologies in different parts of the nervous
system. To understand the network functionality, a simple
network topology is built, where all input neurons are connected to all output neurons through synapses (Fig. 1).
Due to bioengineering constraints, images from the
MNIST dataset, which have 28 × 28 pixels, are compressed
to 14 × 14 pixels. As a result of this simplification, the network has 196 input neurons, each corresponding to one pixel.
Only black pixels generate spikes, and all input spikes occur
simultaneously.
The output of the network is a vector of the spiking states of the output neurons, where “1” represents a spike and “0” means no spike. Each
output neuron is associated with a group index from 0 to 9,
which can be defined artificially. The index of the group with
the largest number of spiking neurons will be considered as
the network output. In our experiment, 300 output neurons
are used. Every 30 consecutive neurons belong to one group,
for example, the first 30 belong to group 0.
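The sketch below illustrates this input encoding and output decoding (our own illustrative code: the 2×2 block averaging and the binarization threshold are assumptions, since the text only states that images are compressed to 14×14 and that black pixels spike; ties in the group vote are resolved toward the smaller group index, as described later for the prediction step):

```python
import numpy as np

# Illustrative sketch of the input encoding and output decoding described above.
# The 2x2 block averaging and the binarization threshold are our assumptions.

def encode_image(image_28x28, threshold=0.5):
    """Downsample a 28x28 grayscale image (values in [0, 1]) to 14x14 and
    return a length-196 binary vector: 1 = the input neuron spikes."""
    blocks = image_28x28.reshape(14, 2, 14, 2).mean(axis=(1, 3))
    return (blocks > threshold).astype(int).ravel()

def decode_output(output_spikes):
    """output_spikes: length-300 binary vector; every 30 consecutive neurons
    form one digit group. Ties go to the smaller group index."""
    counts = output_spikes.reshape(10, 30).sum(axis=1)
    return int(np.argmax(counts))  # argmax returns the first (smallest) index on ties

image = np.random.rand(28, 28)
spikes_in = encode_image(image)
spikes_out = np.random.randint(0, 2, size=300)
print(spikes_in.sum(), decode_output(spikes_out))
```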
Fig. 1: Network topology: 196 input neurons fully connected through synapses to 300 output neurons arranged in 10 groups (group 0 to group 9) of 30 neurons each.
2.2. Neuron model
To capture realistic neuron dynamics, the Hodgkin-Huxley (HH) model [13] is used in the simulation, which models the electro-chemical information transmission of biological neurons with an electrical circuit. The HH model has been successfully verified against numerous biological experimental data and is more biologically accurate than simplified models such as the integrate-and-fire model [20].
A spike will be generated if the membrane potential for
a neuron exceeds a certain threshold. However, after a strong
current pulse excites a spike, there will be a period during
which a current pulse of the same amplitude cannot generate another spike, which is referred to as the refractory period [21].
2.3. Learning rule
The plasticity of synapses between neurons is important for learning. Connection strength changes based on
precise timing between pre- and post-synaptic spikes. This
phenomenon is called Spike Timing Dependent Plasticity
(STDP) [22].
Eqs. (1)-(4) describe the STDP rule [23] used in this work. In this rule, the weight changes are proportional to spike traces [24]. $t_{pre}$ and $t_{post}$ denote the arrival times of the pre- and post-synaptic spikes, respectively; $t_{pre}^0$ and $t_{post}^0$ represent the arrival times of the previous pre- and post-synaptic spikes, respectively. $A_{LTP}$ and $A_{LTD}$ are the amplitudes of the trace updates for potentiation and depression, respectively, and $a_{LTP}$ and $a_{LTD}$ are the potentiation and depression learning rates, respectively. Each pre-synaptic spike arrival updates the pre-synaptic trace $P$ according to Eq. (1), and each post-synaptic spike changes the post-synaptic trace $Q$ according to Eq. (2). If a pre-synaptic spike happens after a post-synaptic spike, the weight decreases by $\delta W_q$, given by (3). If a post-synaptic spike occurs after a pre-synaptic spike, the weight increases by $\delta W_p$, given by (4).
$$P = P \exp\!\left(\frac{t_{pre}^0 - t_{pre}}{\tau_{LTP}}\right) + A_{LTP} \qquad (1)$$

$$Q = Q \exp\!\left(\frac{t_{post}^0 - t_{post}}{\tau_{LTD}}\right) + A_{LTD} \qquad (2)$$

$$\delta W_q = a_{LTD} \, Q \exp\!\left(\frac{t_{post} - t_{pre}}{\tau_{LTD}}\right), \quad t_{post} < t_{pre} \qquad (3)$$

$$\delta W_p = a_{LTP} \, P \exp\!\left(\frac{t_{pre} - t_{post}}{\tau_{LTP}}\right), \quad t_{pre} < t_{post} \qquad (4)$$
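A minimal sketch of how Eqs. (1)–(4) translate into an event-driven update (illustrative only; the single-synapse bookkeeping is our simplification, clamping to the maximum weight of Table 1 is omitted, and the parameter values follow Table 1):

```python
import math

# Minimal sketch of the trace-based STDP update in Eqs. (1)-(4).
# Parameters follow Table 1; the single-synapse bookkeeping is a simplification.
TAU_LTP, TAU_LTD = 20.0, 20.0   # ms
A_LTP, A_LTD = 1.0, -1.0
a_LTP, a_LTD = 6e-5, 6.3e-5

class Synapse:
    def __init__(self, weight):
        self.w = weight
        self.P = 0.0   # pre-synaptic trace
        self.Q = 0.0   # post-synaptic trace
        self.t_pre = None
        self.t_post = None

    def on_pre_spike(self, t):
        if self.t_pre is not None:                       # Eq. (1): decay, then bump
            self.P = self.P * math.exp((self.t_pre - t) / TAU_LTP)
        self.P += A_LTP
        if self.t_post is not None:                      # Eq. (3): post before pre -> depress
            self.w += a_LTD * self.Q * math.exp((self.t_post - t) / TAU_LTD)
        self.t_pre = t

    def on_post_spike(self, t):
        if self.t_post is not None:                      # Eq. (2): decay, then bump
            self.Q = self.Q * math.exp((self.t_post - t) / TAU_LTD)
        self.Q += A_LTD
        if self.t_pre is not None:                       # Eq. (4): pre before post -> potentiate
            self.w += a_LTP * self.P * math.exp((self.t_pre - t) / TAU_LTP)
        self.t_post = t

syn = Synapse(weight=0.01)
syn.on_pre_spike(10.0)   # pre-synaptic spike at t = 10 ms
syn.on_post_spike(15.0)  # post-synaptic spike at t = 15 ms -> weight increases
print(round(syn.w, 8))
```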
The STDP rule itself is not enough for learning. Both unsupervised [14] [15] [16] and supervised [17] [18] training algorithms based on the STDP rule have been proposed, which focus on the computational aspects of the neuron model. These algorithms use Poisson-based spike trains as input and include other bio-inspired mechanisms like winner-take-all [25] and homeostasis [26].
Since artificial stimuli can be precisely applied to optically stimulated neurons, synchronous inputs are used in the proposed work. In a feedforward network, however, synchronous inputs lead to a problem: under the basic STDP rule there is no weight decrease, and all of the neurons eventually fire. This is because, when all of the input neurons fire at the same time, no post-synaptic neuron can fire before any pre-synaptic spike.
In order to solve this problem and make the living neural
network easy to train, we propose a new supervised STDP
training algorithm. Four basic operations in this algorithm
are shown in Fig. 2 and discussed below.
In this algorithm, stimuli can be applied to both input and
output neurons artificially to generate a spike. In Fig. 2, external stimuli that generate a spike for input and output neurons
are shown in pink and blue respectively.
Without artificial output stimuli, an output neuron can
reach its action potential and fire in response to the network
inputs, which is shown in yellow. Because the output neuron
spikes after the input stimuli, weights between those in-out
pairs will be naturally potentiated. This is referred to as the
network’s “natural increase”, with tpre − tpost = T2 − T4.
Besides this natural increase, stimuli can be directly applied to the input and output neuron pairs to artificially change
the weights. If an output stimulus is given at T3 and an input
stimulus is given at T2, weight between this pair will be increased, which is referred to as “artificial increase”. If an output stimulus is given at T1, which is before an input stimulus,
“artificial decrease” will happen and weight will decrease.
Fig. 2: Four mechanisms in the supervised STDP training (natural increase, artificial increase, artificial decrease, and artificial hold), shown over training phases 1 and 2 along a time axis running from 0 through T0–T4 and Tn+T0–Tn+T4 to 2Tn.

In this work, these timing-based rules of applying external stimuli are the essential mechanisms to change synaptic weights. For the digit recognition task, groups of weights are
increased or decreased to make the network converge. Some
weights that are already reaching convergence need to be kept the same. However, if the weight of a synapse is large enough to
make the output neuron fire, the weight will increase naturally during the training process, which will move the network away from convergence. Therefore, each training step
is separated into two phases, and the input stimuli corresponding to the input image are given once for each phase. During
the first phase, if a weight that should be kept the same actually increased, a stimulus will be added to the corresponding
output neurons at Tn+T0, which is before the input stimulus at Tn+T2 for the second phase to decrease the increased
weight. Because of the refractory period, there will not be a
natural increase in the second phase. The time interval between the hold stimulus and the input stimulus (T2−T0) is matched to the time interval between the input stimulus and the natural output spike in the first phase (T4−T2), so that the amounts of decrease and increase match. Through this approach, the weight can be
kept roughly the same. This process is referred to as “artificial hold”.
The new STDP training algorithm is described in Algorithm 1. For each new image observed by the network, a
prediction is made by applying the input stimuli to the network and checking the natural response of the outputs (Algorithm 1 lines 3-5). The index of the group that contains the
largest number of spiking neurons is the predicted result (if
two groups have the same number of spikes, the smaller group
index is chosen). To train the network, the correct label (ID)
for the input image and the actual spike pattern for output
neurons are compared to generate the control signals to select
neurons into different lists that require external stimuli (lines
7-14). Based on the selected neurons, stimuli are applied to
the network to update weights (lines 16-22).
Three tunable parameters can be set for the training process. trainStep is the number of training steps the network
goes through for one image. A larger trainStep means higher
effective learning rate. In order for the correct group to have
more firing neurons than the other groups, inTarget and deTarget are targets of the number of firing neurons in each group.
inTarget represents the desired number of firing neurons to
be observed in the output group that matches the correct label. deTarget represents the desired number of firing neurons
to be observed in the incorrect groups. numSpike[id] is the
number of spiking neurons in group id. When id matches
ID, all spiking neurons in this group will be added to the
holdList to keep their weights since they respond correctly
(line 8). If the number of firing neurons is less than inTarget, inTarget-numSpike[id] neurons will be randomly chosen
among the non-spiking ones and added to the inList (lines
9-11). For other output groups, if the number of firing neurons is more than deTarget, numSpike[id]-deTarget neurons
will be randomly chosen among the spiking ones and added
to the deList (lines 12-14). After selecting inList, deList, and
holdList, corresponding stimuli are applied in time sequence
shown in Fig. 2 for each training step (lines 16-22).
Algorithm 1 Supervised STDP Training
1: // Tunable parameters: trainStep, inTarget, deTarget
2: for each image do
3:    Clear inList, deList and holdList
4:    Apply stimuli to input neurons based on pixel values
5:    Record spike pattern for output neurons
6:    for each output group (id = 0 to 9) do
7:       if id == ID then
8:          Add all spiking neurons to holdList
9:          if numSpike[id] < inTarget then
10:            x = inTarget − numSpike[id]
11:            Add x non-spiking neurons to inList
12:      if id ≠ ID and numSpike[id] > deTarget then
13:         y = numSpike[id] − deTarget
14:         Add y spiking neurons to deList
15:   for each trainStep do
16:      Apply stimuli to the deList at T1
17:      Apply stimuli to input neurons at T2
18:      Apply stimuli to the inList at T3
19:      Apply stimuli to the holdList at Tn+T0
20:      Apply stimuli to the deList at Tn+T1
21:      Apply stimuli to input neurons at Tn+T2
22:      Apply stimuli to the inList at Tn+T3
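The list-selection logic of lines 6–14 can be sketched as below (an illustrative sketch: the spike pattern is a random placeholder, and the stimulus applications of lines 16–22, which correspond to optical stimulation in the envisioned experiments, are indicated only as a comment):

```python
import random

# Illustrative sketch of lines 6-14 of Algorithm 1: building holdList, inList,
# and deList from the recorded output spike pattern. The spike pattern here is
# a random placeholder; applying stimuli (lines 16-22) is left as a comment.
IN_TARGET, DE_TARGET = 20, 0

def select_lists(output_spikes, correct_label):
    """output_spikes: length-300 binary list; group id owns neurons 30*id .. 30*id+29."""
    hold_list, in_list, de_list = [], [], []
    for group_id in range(10):
        neurons = list(range(30 * group_id, 30 * (group_id + 1)))
        spiking = [n for n in neurons if output_spikes[n] == 1]
        silent = [n for n in neurons if output_spikes[n] == 0]
        if group_id == correct_label:
            hold_list.extend(spiking)                                   # line 8
            if len(spiking) < IN_TARGET:                                # lines 9-11
                in_list.extend(random.sample(silent, IN_TARGET - len(spiking)))
        elif len(spiking) > DE_TARGET:                                  # lines 12-14
            de_list.extend(random.sample(spiking, len(spiking) - DE_TARGET))
    return hold_list, in_list, de_list

output_spikes = [random.randint(0, 1) for _ in range(300)]
hold_list, in_list, de_list = select_lists(output_spikes, correct_label=3)
# For each training step (lines 15-22) one would then apply stimuli to de_list
# at T1, the inputs at T2, in_list at T3, hold_list at Tn+T0, and so on.
print(len(hold_list), len(in_list), len(de_list))
```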
3. SIMULATION
3.1. Parameters
Parameters used for the network are listed in Table 1.
Timing parameters in Fig. 2 impact the learning rate for each
trainStep. Small time interval between pre- and post-synaptic
neuron spikes (e.g., T3-T2) leads to greater weight changes.
However, the optical stimuli cannot be spaced arbitrarily close
to each other. In this work, a fixed 5 ms interval is used between pre- and post-synaptic spikes for artificial increase and
decrease. The pre- and post-synaptic spike interval for the
hold mechanism matches the natural increase interval, which is 10 ms according to empirical data.

Fig. 3: Sensitivity of the image recognition accuracy to different parameters: accuracy as a function of trainStep (with inTarget=20, deTarget=0), of inTarget (with trainStep=2, deTarget=0), and of deTarget (with trainStep=2, inTarget=20).
Table 1. Simulation parameters [27][28][29][14].
τLTP / τLTD: 20 ms / 20 ms
ALTP / ALTD: 1 / −1
aLTP / aLTD: 6 × 10−5 / 6.3 × 10−5
max weight: 0.02
refractory period: 25 ms
T3−T2 / T2−T1: 5 ms / 5 ms
T4−T2 / T2−T0: 10 ms / 10 ms
Tn: 30 ms
3.2. Results and analysis
Three tunable parameters for the network are: trainStep,
inTarget and deTarget. trainStep is kept at 2, inTarget is set
at 20, and deTarget is configured as 0 as the baseline. A
sensitivity study is done by tuning one parameter at a time.
For this sensitivity study, 1000 images from the MNIST
dataset are used for training, another 1000 images are used for
testing. Training time for each trainStep is 60 ms. Simulation
results are shown in Fig. 3.
When trainStep increases from 1 to 2, the prediction accuracy increases. However, when the learning rate is too large
(beyond 3 steps), the accuracy drops. This is because a larger
effective learning rate may lead to fast convergence, but if it is
too large, overshooting will happen when moving towards the
global optimum point, which leads to oscillations and hurts
the performance.
For inTarget, better results are achieved in the middle
range, which shows that training nearly half of the neurons to fire can provide enough information while avoiding divergence.
For example, if two images have a large set of overlapping
firing neurons in the inputs but have different labels, strengthening the connections between all of the firing inputs and
the corresponding outputs for one image will likely lead to
mis-prediction for the other image. The best performance is
achieved at inTarget=20.
Decreasing weights associated with all firing neurons in
all incorrect groups (deTarget=0) achieves the best performance. When deTarget is larger than 10, performance drops
dramatically. This is because inTarget is 20 for this set of results. The number of firing neurons in the incorrect group
needs to be below 20 to make sure that the correct group
has the greatest number of firing neurons. However, training 2 steps cannot guarantee both inTarget and deTarget are
reached. The best accuracy for the sensitivity study is 72.7%.
A larger dataset (10000 images from MNIST) is evaluated
based on the best parameters: trainStep=2, inTarget=20 and
deTarget=0. The accuracy for this larger dataset is 74.7%.
Compared to a single-layer fully connected ANN, which
achieves 88% [30] accuracy on MNIST dataset, the proposed
supervised STDP-based SNN still has an accuracy gap. Unlike the ANN, where weights and inputs can be directly used
in the prediction, most SNN models rely only on the output spikes rather than on the exact membrane potential to make a prediction. This loss of information leads to the accuracy drop. The only single-layer SNN work we are aware of [31] derives an extra mathematical function to extract more information from the timing relationships of the output spikes, which does not consider the biological properties of neurons and synapses and hence is an unrealistic scheme for living neuron experiments.
For SNN works based on neuroscience simulations, a
three-layer design with supervised STDP achieved an accuracy of 75.93% for 10 digit recognition task on a MNIST
dataset [18]. That is similar to the proposed single-layer network in this paper. There are other SNN works that have better results on MNIST [14] [15] [16] [17] [19]. However, those
networks have at least two layers and a larger number of neurons (e.g. 71,026 in [19]). Some works also preprocess the
input images to achieve better accuracy [17]. The major difference between the proposed work and prior works is that
prior works are optimized for solid state computers. For the
proposed supervised scheme, bioengineering constraints are
considered. Input data are compressed and applied as synchronous spike trains, and the HH model is used instead of the integrate-and-fire model. The biological limitations on maintaining the synaptic weights lead to the design of two training
phases and multiple training steps.
4. CONCLUSION
To explore the possibility of using living neuron for machine learning tasks, a new supervised STDP training algorithm has been proposed and simulated on a fully-connected
neural network based on the HH model. A 74.7% accuracy
is achieved on digit recognition task for the MNIST dataset.
This result demonstrates the feasibility of using living neurons as computation elements for machine learning tasks.
5. REFERENCES
[1] Haoxiang Li, Zhe Lin, Xiaohui Shen, Jonathan Brandt, and
Gang Hua, “A convolutional neural network cascade for face
detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5325–5334.
[2] Taras Iakymchuk, Alfredo Rosado-Muñoz, Juan F GuerreroMartı́nez, Manuel Bataller-Mompeán, and Jose V FrancésVı́llora,
“Simplified spiking neural network architecture
and stdp learning algorithm applied to image classification,”
EURASIP Journal on Image and Video Processing, vol. 2015,
no. 1, pp. 4, 2015.
[3] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez,
Laurent Sifre, George van den Driessche, Julian Schrittwieser,
Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot,
Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach,
Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis,
“Mastering the game of Go with deep neural networks and tree
search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016.
[4] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis, “Mastering the game of Go without human knowledge,” Nature, 2017.
[5] Zidong Du, Daniel D. Ben-Dayan Rubin, Yunji Chen, Liqiang
He, Tianshi Chen, Lei Zhang, Chengyong Wu, and Olivier
Temam, “Neuromorphic accelerators: A comparison between
neuroscience and machine-learning approaches,” pp. 494–507,
2015.
[6] P. King, “How is the human brain so energy-efficient?,”
https://www.quora.com/How-is-the-human-brain-so-energyefficient, 2012.
[7] Melissa Davies, “The neuron: Size comparison,” Neuroscience: A journey through the brain, 2002.
[8] Yevgeny Berdichevsky, Helen Sabolek, John B Levine, Kevin J
Staley, and Martin L Yarmush, “Microfluidics and multielectrode array-compatible organotypic slice culture method,”
Journal of neuroscience methods, vol. 178, no. 1, pp. 59–64,
2009.
[9] Yevgeny Berdichevsky, Kevin J Staley, and Martin L Yarmush,
“Building and manipulating neural pathways with microfluidics,” Lab on a Chip, vol. 10, no. 8, pp. 999–1004, 2010.
[10] Erkin Seker, Yevgeny Berdichevsky, Matthew R Begley,
Michael L Reed, Kevin J Staley, and Martin L Yarmush,
“The fabrication of low-impedance nanoporous gold multiple-electrode arrays for neural electrophysiology studies,” Nanotechnology, vol. 21, no. 12, pp. 125504, 2010.
[11] Md. Fayad Hasan and Yevgeny Berdichevsky, “Neural circuits
on a chip,” Micromachines, vol. 7, no. 9, 2016.
[12] Gordon M Shepherd, Jason S Mirsky, Matthew D Healy,
Michael S Singer, Emmanouil Skoufos, Michael S Hines,
Prakash M Nadkarni, and Perry L Miller, “The human brain
project: neuroinformatics tools for integrating, searching and
modeling multidisciplinary neuroscience data,” Trends in neurosciences, vol. 21, no. 11, pp. 460–468, 1998.
[13] Alan L Hodgkin and Andrew F Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of physiology, vol.
117, no. 4, pp. 500–544, 1952.
[14] Peter U Diehl and Matthew Cook, “Unsupervised learning
of digit recognition using spike-timing-dependent plasticity,”
Frontiers in computational neuroscience, vol. 9, 2015.
[15] Jason M Allred and Kaushik Roy, “Unsupervised incremental
stdp learning using forced firing of dormant or idle neurons,”
in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 2492–2499.
[16] Damien Querlioz, Olivier Bichler, Philippe Dollfus, and Christian Gamrat, “Immunity to device variations in a spiking neural
network with memristive nanodevices,” IEEE Transactions on
Nanotechnology, vol. 12, no. 3, pp. 288–295, 2013.
[17] Joseph M Brader, Walter Senn, and Stefano Fusi, “Learning
real-world stimuli in a neural network with spike-driven synaptic dynamics,” Neural computation, vol. 19, no. 11, pp. 2881–
2912, 2007.
[18] Amirhossein Tavanaei and Anthony S Maida, “A minimal spiking neural network to rapidly train and classify handwritten
digits in binary and 10-digit tasks,” International Journal of
Advanced Research in Artificial Intelligence, vol. 4, no. 7, pp.
1–8, 2015.
[19] Michael Beyeler, Nikil D Dutt, and Jeffrey L Krichmar, “Categorization and decision-making in a neurobiologically plausible spiking network using a stdp-like learning rule,” Neural
Networks, vol. 48, pp. 109–124, 2013.
[20] James P Keener, FC Hoppensteadt, and J Rinzel, “Integrate-and-fire models of nerve membrane response to oscillatory input,” SIAM Journal on Applied Mathematics, vol. 41, no. 3,
pp. 503–517, 1981.
[21] David E Meyer and David E Kieras, “A computational theory of executive cognitive processes and multiple-task performance: Part 2. accounts of psychological refractory-period
phenomena.,” Psychological review, vol. 104, no. 4, pp. 749,
1997.
[22] Natalia Caporale and Yang Dan, “Spike timing–dependent
plasticity: a hebbian learning rule,” Annu. Rev. Neurosci., vol.
31, pp. 25–46, 2008.
[23] Andrew P. Davison, “Modelling stdp in the neuron simulator,” http://andrewdavison.info/notes/modelling-stdp-neuronsimulator/, 2007.
[24] Jesper Sjöström and Wulfram Gerstner, “Spike-timing dependent plasticity,” Spike-timing dependent plasticity, vol. 35,
2010.
[25] Bernhard Nessler, Michael Pfeiffer, and Wolfgang Maass,
“Stdp enables spiking neurons to detect hidden causes of their
inputs,” in Advances in neural information processing systems,
2009, pp. 1357–1365.
[26] Mark CW Van Rossum, Guo Qiang Bi, and Gina G Turrigiano,
“Stable hebbian learning from spike timing-dependent plasticity,” Journal of neuroscience, vol. 20, no. 23, pp. 8812–8821,
2000.
[27] Sen Song, Kenneth D Miller, and Larry F Abbott, “Competitive hebbian learning through spike-timing-dependent synaptic
plasticity,” Nature neuroscience, vol. 3, no. 9, pp. 919–926,
2000.
[28] Rebecca Lewis, Katie E Asplin, Gareth Bruce, Caroline Dart,
Ali Mobasheri, and Richard Barrett-Jolley, “The role of the
membrane potential in chondrocyte volume regulation,” Journal of cellular physiology, vol. 226, no. 11, pp. 2979–2986,
2011.
[29] Hélene Paugam-Moisy and Sander Bohte, “Computing with
spiking neuron networks,” in Handbook of natural computing,
pp. 335–376. Springer, 2012.
[30] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick
Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–
2324, 1998.
[31] Wenyu Yang, Jie Yang, and Wei Wu, “A modified one-layer
spiking neural network involves derivative of the state function
at firing time,” Advances in Neural Networks–ISNN 2012, pp.
149–158, 2012.
| 9 |
Dynamic Distributed Storage for Scaling
Blockchains
Ravi Kiran Raman and Lav R. Varshney
arXiv:1711.07617v2 [] 7 Jan 2018
Abstract
Blockchain uses the idea of storing transaction data in the form of a distributed ledger wherein each node in
the network stores a current copy of the sequence of transactions in the form of a hash chain. This requirement of
storing the entire ledger incurs a high storage cost that grows undesirably large for high transaction rates and large
networks. In this work we use the ideas of secret key sharing, private key encryption, and distributed storage to
design a coding scheme such that each node stores only a part of the entire transaction thereby reducing the storage
cost to a fraction of its original cost. When further using dynamic zone allocation, we show the coding scheme
can also improve the integrity of the transaction data in the network over current schemes. Further, block validation
(bitcoin mining) consumes a significant amount of energy as it is necessary to determine a hash value satisfying a
specific set of constraints; we show that using dynamic distributed storage reduces these energy costs.
Index Terms
Blockchains, scaling, distributed storage, secret sharing
I. I NTRODUCTION
Blockchains are distributed, shared ledgers of transactions that reduce the friction in financial networks due to
different intermediaries using different technology infrastructures, and even reduce the need for intermediaries to
validate financial transactions. This has in turn led to the emergence of a new environment of business transactions
and self-regulated cryptocurrencies such as bitcoin [1], [2]. Owing to such favorable properties, blockchains are
being adopted extensively outside cryptocurrencies in a variety of novel application domains [3], [4]. In particular,
blockchains have inspired novel innovations in medicine [5], supply chain management and global trade [6], and
government services [7]. Blockchains are expected to revolutionize the way financial/business transactions are done
[8], for instance in the form of smart contracts [9]. In fact, streamlining infrastructure and removing redundant
intermediaries creates the opportunity for significant efficiency gains.
The blockchain architecture however comes with an inherent weakness. The blockchain works on the premise
that the entire ledger of transactions is stored in the form of a hash chain at every node, even though the transactions
themselves are meaningless to the peers that are not party to the underlying transaction. Consequently, the individual
nodes incur a significant, ever-increasing storage cost [10]. Note that secure storage may be much more costly than
just raw hard drives, e.g. due to infrastructure and staffing costs.
In current practice, the most common technique to reduce the storage overload is to prune old transactions in
the chain. However, this mechanism is not sustainable for blockchains that have to support a high arrival rate of
transaction data. Hence the blockchain architecture is not scalable owing to the heavy storage requirements.
For instance, consider the bitcoin network. Bitcoin currently serves an average of just under 3.5 transactions per
second [11]. This low number is owing to a variety of reasons including the economics involved in maintaining a
high value for the bitcoin. However, even at this rate, this accounts for an average of 160MB of storage per day
[10], i.e., about 60GB per year. The growth in storage and the number of transactions in the bitcoin network over
time [10] are shown in Figs. 1a and 1b. The exponential growth in the size of the blockchain and the number of
transactions to be served highlights the need for better storage techniques that can meet the surging demand.
SETL, an institutional payment and settlement infrastructure based on blockchain, claims to support 1 billion
transactions per day. This is dwarfed by the Federal Reserve, which processes 14 trillion financial transactions
per day. If cryptocurrencies are to become financial mainstays, they would need to be scaled by several orders of magnitude. Specifically, they will eventually need to support on the order of about 2000 transactions per second on average
The authors are with the University of Illinois at Urbana-Champaign.
Fig. 1: Increase in transactions, storage, and hash-rate in bitcoin: (a) blockchain size, (b) average number of transactions per day, (c) hash rate. Data obtained from [10].
[11]. Note that this translates to an average storage cost of over 90GB per day. This is to say nothing about uses for
blockchain in global trade and commerce, healthcare, food and agriculture, and a wide variety of other industries.
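As a back-of-the-envelope check of these figures (assuming an average transaction size of roughly 500 bytes, which is an assumption on our part but consistent with the bitcoin numbers above): 2000 tx/s × 86,400 s/day × 500 B/tx ≈ 86 GB per day, in line with the 90GB estimate, while 3.5 tx/s × 86,400 s/day × 500 B/tx ≈ 151 MB per day, in line with the observed 160MB per day.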
With the decline in storage costs expected to saturate soon due to the ending of Moore's Law, storage is emerging as a pressing
concern with the large-scale adoption of blockchain.
This impending end to Moore’s law not only implies saturated storage costs, but also a saturation of computational
speeds. Block validation (mining) in bitcoin-like networks requires computing an appropriate hash value that caters
to specified constraints and adding the data block to the blockchain. Notwithstanding new efforts such as [12],
this process has been found to be computationally expensive and demands the use of high-end hardware and large
volumes of energy. The hash rate, which is the number of hashes computed per second on the bitcoin network,
is indicative of the total energy consumption by the network. The hash rate has been found to be increasing
exponentially over time, as indicated in Fig. 1c, as a result of both an increase in the rate of transactions, and an
increase in the number of miners with powerful computing machines.
This staggering growth in hash computation results in a corresponding growth in energy consumption by the
blockchain network. Recent independent studies have estimated that the global energy consumption of the bitcoin
network is of the order of 700MW [13], [14]. This amount of power is sufficient to power over 325,000 homes, and the power consumed per bitcoin transaction is well over 5000 times that consumed per credit card transaction. Increasing demand for bitcoin, in the absence of any updates to the mechanism of mining, is thus projected to increase the consumption to the order of 14GW by the year 2020. The diminishing energy-efficiency improvements of mining hardware, at a time when the bitcoin value is rising rapidly and the transaction rate is projected to increase sharply, further compound the energy problem.
In addition to the unsustainable growth in energy consumption, the dependence of mining on the availability of efficient computing resources and cheap electricity also creates an imbalance in the decentralized mining that bitcoin-like cryptocurrencies require. In particular, miners tend to concentrate in geographic regions of the
world where both the mentioned resources are available for cheap. This is again undesirable for cryptocurrencies
such as bitcoin that rely fundamentally on the distributed nature of the mining operation to establish reliability.
Thus, it is of interest to keep the mining process relatively cheap without compromising on data integrity.
The current mechanism of bitcoin mining is referred to at large as a Proof of Work (PoW) method where the
creation of a hash that adheres to the constraint set is identified as a proof of validation. This is typically done
in a competitive environment with the first peer to mine earning a reward in terms of new bitcoins. The reason
for using such a mechanism is two-fold. Firstly, it automates and controls the generation of new bitcoins in the
system while also incentivizing miners to validate data blocks. Secondly, the creation of the hash constraints also
enhances data integrity as a corruption to the block in turn requires the computationally expensive recomputation
of the constrained hash values.
In this paper we do not consider the economics of cryptocurrencies tied to the mining process. Blockchain systems, and cryptocurrencies in particular, have however explored the feasibility of reducing the energy demands by moving away from a competition-based mining process. One such effort is the move to a Proof of Stake (PoS) model, wherein peers are assigned the mining task in a deterministic fashion depending on the fraction of cryptocurrency (stake) they own. While the method is not yet foolproof, the removal of the competitive framework is expected to reduce the energy demands. In this work, we focus on energy reduction through an alternate scheme that guarantees enhanced data integrity through information-theoretic immutability rather than computational immutability, as discussed in Sec. VII.
A. Our Contributions
In this paper we show that the storage costs in large-scale blockchain networks can be reduced through secure
distributed storage codes. Specifically, when each peer stores a piece of the entire transaction instead of the entire
block, the storage cost can be diminished to a fraction of the original. In Sec. IV, we use a novel combination of
Shamir’s secret sharing scheme [15], private key encryption, and distributed storage codes [16], inspired by [17],
to construct a coding scheme that distributes transaction data among subsets of peers. The scheme is shown to be
optimal in storage up to small additive terms. We also show, using a combination of statistical and cryptographic
security of the code, that the distributed storage loses an arbitrarily small amount of data integrity as compared to
the conventional method.
Further, in Sec. V, using a dynamic zone allocation strategy among peers, we show that the integrity of the data
can further be enhanced in the blockchain. We formulate the zone allocation problem as an interesting combinatorial
problem of decomposing complete hypergraphs into 1-factors. We design an allocation strategy that is order optimal
in the time taken to ensure highest data integrity. We show that given enough transaction blocks to follow, such a
system is secure from active adversaries.
Distributed storage schemes have been considered in the past in the form of information dispersal algorithms
(IDA) [18], [19] and in the form of distributed storage codes [16], [20]. In particular [19] considers an information
dispersal scheme that is secure from adaptive adversaries. We note that the coding scheme we define here is stronger
than such methods as it handles active adversaries. Secure distributed storage codes with repair capabilities to protect
against colluding eavesdroppers [21] and active adversaries [22] have also been considered. The difference in the
nature of attacks by adversaries calls for a new coding scheme in this paper.
In addition to the reduction in storage costs without compromising the integrity, in Sec. VII, we study the energy
demands of our scheme in comparison to conventional bitcoin-like blockchains by comparing the computational
complexity involved in establishing the blockchain. We show that our scheme creates a framework to reduce energy
consumption through the use of simpler, easy to compute hash functions.
The enhanced integrity, reduced storage and energy costs do come at the expense of increased recovery and
repair costs. In particular, as the data is stored as a secure distributed code, the recovery process is highly sensitive
to denial of service attacks and the codes are not locally repairable, as elaborated in Sec. VI.
Before getting into the construction and analysis of the codes, we first introduce a mathematical abstraction of
the blockchain in Sec. II, and give a brief introduction to the coding schemes that are used in this work in Sec. III.
II. SYSTEM MODEL
We now introduce a mathematical model of conventional blockchain systems. We first describe the method in
which the distributed ledger is maintained in the peer network, emphasizing the roles of various nodes in the process
of incorporating a new transaction in the ledger. Then, we introduce data corruption in blockchain systems and the
nature of the active adversary considered in this work.
A. Ledger Construction
The blockchain comprises a connected peer-to-peer network of nodes, where nodes are placed into three primary
categories based on functionality:
1) Clients: nodes that invoke or are involved in a transaction, have the blocks validated by endorsers, and
communicate them to the orderers.
2) Peers: nodes that commit transactions and maintain a current version of the ledger. Peers may also adopt
endorser roles.
3) Orderers: nodes that communicate the transactions to the peers in chronological order to ensure consistency
of the hash chain.
Fig. 2: Architecture of the Blockchain Network. Here the network is categorized by functional role into clients Ci ,
peers Pi , and orderers O. As mentioned earlier, the clients initialize transactions. Upon validation, the transactions
are communicated to peers by orderers. The peers maintain an ordered copy of the ledger of transactions.
Note that the classification highlighted here is only based on function, and individual nodes in the network can
serve multiple roles.
The distributed ledger of the blockchain maintains a current copy of the sequence of transactions. A transaction is
initiated by the participating clients and is verified by endorsers (select peers). Subsequently, the verified transaction
is communicated to the orderer. The orderer then broadcasts the transaction blocks to the peers to store in the ledger.
The nodes in the blockchain are as depicted in Fig. 2. Here nodes Ci are clients, Pi are peers, and O is the set of
orderers in the system, categorized by function.
The ledger at each node is stored in the form of a (cryptographic) hash chain.
Definition 1: Let M be a message space consisting of messages of arbitrary length. A cryptographic hash function
is a deterministic function h : M → H, where H is a set of fixed-length sequences called hash values.
Cryptographic hash functions additionally incorporate the following properties:
1) Computational ease: To establish efficiency, the hash function should be easy to compute.
2) Pre-image resistance: Given H ∈ H, it is computationally infeasible to find M ∈ M such that h(M) = H.
3) Collision resistance: It is computationally infeasible to find distinct M1, M2 ∈ M such that h(M1) = h(M2).
4) Sensitivity: Minor changes in the input change the corresponding hash values significantly.
Other application-specific properties are incorporated depending on need.
Let Bt be the tth transaction block and let Ht be the hash value stored with the (t + 1)th transaction, computed
as Ht = h(Wt ), where h(·) is the hash function, and Wt = (Ht−1 , Bt ) is the concatenation of Ht−1 and Bt . Thus,
the hash chain is stored as
(H0 , B1 ) − (H1 , B2 ) − · · · − (Ht−1 , Bt ).
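As an illustration of the chaining rule Ht = h(Wt), the following Python sketch builds such a chain; the use of SHA-256 and the byte encoding of blocks are assumptions made only for this example and are not prescribed by the model.

```python
import hashlib

def hash_link(prev_hash: bytes, block: bytes) -> bytes:
    # H_t = h(W_t), where W_t = (H_{t-1}, B_t) is the concatenation
    return hashlib.sha256(prev_hash + block).digest()

def build_chain(blocks, genesis_hash=b"\x00" * 32):
    """Return the list [(H_0, B_1), (H_1, B_2), ...] as in the text."""
    chain, prev = [], genesis_hash
    for block in blocks:
        chain.append((prev, block))
        prev = hash_link(prev, block)
    return chain

chain = build_chain([b"transaction 1", b"transaction 2", b"transaction 3"])
for prev_hash, block in chain:
    print(prev_hash.hex()[:16], block)
```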
Let us assume that for all t, Bt ∼ Unif(Fq) and Ht ∈ Fp, where q, p ∈ N and Fq, Fp are finite fields of order q and p respectively. Thus, in the conventional implementation of the blockchain, the cost of storage per peer per transaction is
R̃s = log2 q + log2 p bits. (1)
Transactions stored in the ledger may at a later point be recovered in order to validate claims or verify details
of the past transaction by nodes that have read access to the data. Different implementations of the blockchain
invoke different recovery mechanisms depending on the application. One such method is to use an authentication
mechanism wherein select peers return the data stored in the ledger and the other peers validate (sign) the content.
Depending on the application, one can envision varying the number of authorization checks necessary to validate
the content.
For convenience, we restrict this work to one form of retrieval which broadly encompasses a wide class of
recovery schemes. Specifically, we assume that in order to recover the tth transaction, each peer returns its copy
of the transaction and the majority rule is applied to recover the block.
B. Blockchain Security
There are two main forms of data security that we focus on.
1) Integrity: transaction data stored by clients cannot be corrupted unless a majority of the peers are corrupted.
2) Confidentiality: local information from individual peers does not reveal sensitive transaction information.
Corruption of data in the blockchain requires corrupting a majority of the peers in the network to alter the
data stored in the distributed, duplicated copy of the transaction ledger. Thus, the blockchain system automatically
ensures a level of integrity in the transaction data.
Conventional blockchain systems such as bitcoin enforce additional constraints on the hash values to enhance data
integrity. For instance, in the bitcoin network, each transaction block is appended with a nonce which is typically a
string of zeros, such that the corresponding hash value satisfies a difficulty target i.e., is in a specified constraint set.
The establishment of such difficulty targets in turn implies that computing a nonce to satisfy the hash constraints
is computationally expensive. Thus data integrity can be tested by ensuring the hash values are consistent as it is
computationally infeasible to alter the data.
The hash chain, even without the difficulty targets, offers a mechanism to ensure data integrity in the blockchain.
Specifically, note that the sensitivity and pre-image resistance of the cryptographic hash function ensures that any
change to Ht−1 or Bt would require recomputing Ht . Further, it is not computationally feasible to determine Wt
such that h(Wt ) = Ht .
Thus storing the ledger in the form of a hashchain ensures that corrupting a past transaction not only requires
the client to corrupt at least half the set of peers to change the majority value, but also maintain a consistent hash
chain following the corrupted transaction. That is, say a participating client wishes to alter transaction B1 to B1′. Let there be T transactions in the ledger. Then, corrupting B1 implies that the client would also have to replace H1 with H1′ = h(W1′) at the corrupted nodes. This creates a domino effect, in that all subsequent hashes must in
turn be altered as well, to maintain integrity of the chain. This strengthens the integrity of the transaction data in
blockchain systems.
Confidentiality of information is typically guaranteed in these systems through the use of private key encryption methods, where the key is shared with a select set of peers who are authorized to view the contents of the transactions. Note however that in such implementations, a leak at a single node could lead to a complete disclosure of the information.
C. Active Adversary Model
In this work, we explore the construction of a distributed storage coding scheme that ensures a heightened sense
of confidentiality and integrity of the data, even when the hash functions are computationally inexpensive.
Let us assume that each transaction Bt also has a corresponding access list, which is the set of nodes that have
permission to read and edit the content in Bt . Note that this is equivalent to holding a private key to decrypt the
encrypted transaction data stored in the ledger.
In this work, we primarily focus on active adversaries who alter a transaction content Bt to a desired value Bt0 .
Let us explicitly define the semantic rules of a valid corruption for such an adversary. If a client corrupts a peer,
then the client can
1) learn the contents stored in the peer;
2) alter block content only if it is in the access list of the corresponding block; and
3) alter hash values as long as chain integrity is preserved, i.e., an attacker cannot invalidate the transaction of
another node in the process.
The active adversary in our work is assumed to be aware of the contents of the hash chain and the block that it
wishes to corrupt. We elaborate on the integrity of our coding scheme against such active adversaries. We also
briefly elaborate on the data confidentiality guaranteed by our system against local information leaks.
Another typical attack of interest in such blockchain systems is the denial of service attack where an adversary
corrupts a peer in the network to deny the requested service, which in this case is the data stored in the ledger. We
also briefly describe the vulnerability of the system to denial of service attacks owing to the distributed storage.
Before we describe the code construction, we first give a preliminary introduction to coding and encryption
schemes that we use as the basis to build our coding scheme.
III. PRELIMINARIES
This work uses a private key encryption scheme with a novel combination of secret key sharing and distributed
storage codes to store the transaction data and hash values. We now provide a brief introduction to these elements.
A. Shamir’s Secret Sharing
Consider a secret S ∈ Fq that is to be shared with n < q nodes such that any subset of fewer than k nodes gains no information regarding the secret upon collusion, while any subset of at least k nodes gains complete information.
Shamir’s (k, n) secret sharing scheme [15] describes a method to explicitly construct such a code. All the arithmetic
performed here is finite field arithmetic on Fq .
Draw ai ∼ Unif(Fq) i.i.d. for i ∈ [k − 1] and set a0 = S. Then, compute
yi = a0 + a1 xi + a2 xi^2 + · · · + ak−1 xi^(k−1), for all i ∈ [n],
where xi = i. Node i ∈ [n] receives the share yi.
Since the values are computed according to a polynomial of order k −1, the coefficients of the polynomial can be
uniquely determined only when we have access to at least k points. Recovering the secret key involves polynomial
interpolation of the k shares to obtain the secret key (intercept). Thus, the secret can be recovered if and only if at
least k nodes collude.
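A minimal Python sketch of the (k, n) construction and its recovery by Lagrange interpolation is given below; the specific prime modulus and parameter values are illustrative assumptions only.

```python
import random

P = 2**61 - 1  # an assumed prime modulus defining the field F_q

def make_shares(secret: int, k: int, n: int):
    """Shamir (k, n): evaluate a random degree-(k-1) polynomial with intercept = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):          # abscissae x_i = i, as in the construction above
        y = 0
        for a in reversed(coeffs):     # Horner evaluation mod P
            y = (y * x + a) % P
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the intercept (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(recover_secret(shares[:3]))      # any 3 shares recover the secret
```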
In this work, we presume that each secret share is given by (xi, yi), and that the unique abscissa values are chosen uniformly at random from Fq\{0}. That is, {xi : i ∈ [n]} are drawn uniformly at random without replacement from Fq\{0}. Then, given any k − 1 shares and the secret, the final share is uniformly likely in a set of size q − k.
It is worth noting that Shamir’s scheme is minimal in storage as the size of each share is the same as the size of
the secret key. Shamir’s scheme however is not secure to active adversaries. In particular, by corrupting n − k + 1
nodes, the secret can be completely altered.
Secret key sharing codes have been widely studied in the past [23]–[25]. In particular, it is known that Reed-Solomon codes can be adopted to define secret shares. Linear codes for minimal secret sharing have also been
considered [26]. In this work however, we restrict to Shamir’s secret sharing scheme for simplicity.
B. Data Encryption
Shannon considered the question of perfect secrecy in cryptosystems from the standpoint of statistical security of
encrypted data [27]. There, he concluded that perfect secrecy required the use of keys drawn from a space as large
as the message space. This is practically unusable as it is difficult to use and securely store such large key values.
Thus practical cryptographic systems leverage computational limitations of an adversary to guarantee security over
perfect statistical secrecy.
We define a notion of encryption that is slightly different from that used typically in cryptography. Consider a
message M = (M1 , . . . , Mm ) ∈ M, drawn uniformly at random. Let K ∈ K be a private key drawn uniformly at
random.
Definition 2: Given message, key, and code spaces M, K, C respectively, a private key encryption scheme is a pair of functions Φ : M × K → C, Ψ : C × K → M, such that for any M ∈ M,
Φ(M; K) = C, such that, Ψ(C, K) = M,
and it is ε-secure if it is statistically impossible to decrypt the codeword in the absence of the private key K beyond a confidence of ε in the posterior probability. That is, if
max_{M∈M, C∈C} P[Ψ(C, K) = M] ≤ ε. (2)
The definition indicates that the encryption scheme is an invertible process and that it is statistically infeasible to decrypt the plaintext message beyond a degree of certainty. We know that given the codeword C, decrypting the code is equivalent to identifying the chosen private key. In addition, from (2), we observe that the uncertainty in the message estimation is at least log2(1/ε). Thus,
log2(1/ε) bits ≤ H(M|C) ≤ log2 |K| bits.
Algorithm 1 Coding scheme for data block
for z = 1 to n/m do
    Generate private key Kt^(z) ∼ Unif(K)
    Encrypt block with key Kt^(z) as Ct^(z) = Φ(Bt; Kt^(z))
    Use a distributed storage code to store Ct^(z) among peers in {i : pt^(i) = z}
    Use Shamir's (m, m) secret sharing scheme on Kt^(z) and distribute the shares (K1^(z), . . . , Km^(z)) among peers in the zone
end for
For convenience, we assume without loss of generality that the encrypted codewords are vectors of the same
length as the message from an appropriate alphabet, i.e., C = (C1 , . . . , Cm ).
Since we want to secure the system from corruption by adversaries who are aware of the plaintext message,
we define a stronger notion of secure encryption. In particular, we assume that an attacker who is aware of the
message M, and partially aware of the codeword, C−j = (C1 , . . . , Cj−1 , Cj+1 , . . . , Cm ), is statistically incapable
of guessing Cj in the absence of knowledge of the key K . That is, for any M, C−j ,
P[Φ(M; K) = C | M, C−j] ≤ 1/2, for any Cj. (3)
Note that this criterion indicates that the adversary is unaware of at least 1 bit of information in the unknown code
fragment, despite being aware of the message, i.e.,
H(C|M, C−j ) ≥ 1 bit.
C. Distributed Storage Codes
This work aims to reduce the storage cost for blockchains by using distributed storage codes. Distributed storage
codes have been widely studied [16], [28] in different contexts. In particular, aspects of repair and security, including
explicit code constructions have been explored widely [20]–[22], [29], [30]. In addition, information dispersal
algorithms [18], [19] have also considered the question of distributed storage of data. This is a non-exhaustive
listing of the existing body of work on distributed storage and most algorithms naturally adapt to the coding
scheme defined here. However, we consider the simple form of distributed storage that just divides the data evenly
among nodes.
IV. CODING SCHEME
For this section, assume that at any point of time t, there exists a partition Pt of the set of peers [n] into sets of size m each. In this work we presume that n is divisible by m. Let each set of the partition be referred to as a zone. Without loss of generality, the zones are referred to by indices 1, . . . , n/m. At each time t, for each peer i ∈ [n], let pt^(i) ∈ [n/m] be the index that represents the zone that includes peer i. We describe the zone allocation scheme in detail in Sec. V.
A. Coding Data Block
In our coding scheme, a single copy of each data block is stored in a distributed fashion across each zone.
Consider the data block Bt corresponding to time t. We use a technique inspired by [17]. First a private key K is
generated at each zone and the data block is encrypted using the key. The private key is then stored by the peers in
the zone using Shamir’s secret key sharing scheme. Finally, the encrypted data block is distributed amongst peers
in the zone using a distributed storage scheme. The process involved in storage and recovery of a block, given a
zone division is shown in Fig. 3.
The coding scheme is given by Alg. 1. In this discussion we will assume that the distributed storage scheme
just distributes the components of the code vector Ct among the peers in the zone. The theory extends naturally
to other distributed storage schemes.
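The per-zone storage step can be sketched as follows; the stream-cipher-style encryption (SHA-256 in counter mode) and the m-out-of-m XOR sharing of the key are simplified stand-ins, assumed only for illustration, for the scheme Φ and the Shamir sharing used in Alg. 1.

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in for the encryption Phi: XOR the data with a hash-derived keystream.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def store_block_in_zone(block: bytes, m: int):
    """Encrypt the block, split the ciphertext among the m peers of a zone, and share
    the key with an m-out-of-m XOR sharing (a simplified stand-in for Shamir (m, m))."""
    key = secrets.token_bytes(16)
    cipher = keystream_xor(key, block)
    piece = -(-len(cipher) // m)                              # ceiling division
    code_pieces = [cipher[i * piece:(i + 1) * piece] for i in range(m)]
    key_shares = [secrets.token_bytes(16) for _ in range(m - 1)]
    last = key
    for s in key_shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    key_shares.append(last)                                   # XOR of all shares equals the key
    return code_pieces, key_shares

def recover_block_in_zone(code_pieces, key_shares) -> bytes:
    key = bytes(16)
    for s in key_shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return keystream_xor(key, b"".join(code_pieces))

block = b"example transaction block"
pieces, shares = store_block_in_zone(block, m=4)
assert recover_block_in_zone(pieces, shares) == block
```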
Fig. 3: Encryption and decryption process for a given zone allocation. The shaded regions represent individual
zones in the peer network. The data is distributed among peers in each zone and the data from all peers in a zone
are required to recover the transaction data.
In order to preserve the integrity of the data, we use secure storage for the hash values as well. In particular,
at time t, each zone Z ∈ Pt stores a secret share of the hash value Ht−1 generated using Shamir’s (m, m) secret
sharing scheme.
The storage per transaction per peer is thus given by
Rs = (1/m) log2 |C| + 2 log2 |K| + 2 log2 p bits, (4)
where |C| ≥ q depending on the encryption scheme. In particular, when the code space of encryption matches the message space, i.e., |C| = q, the gain in storage cost per transaction per peer is given by
Gain in storage cost = R̃s − Rs = ((m − 1)/m) log2 q − 2 log2 |K| − log2 p bits. (5)
Thus, in the typical setting where the size of the private key space is much smaller than the size of the blocks, we
have a reduction in the storage cost.
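As a rough numerical illustration of (4) and (5), the following snippet evaluates the storage costs for assumed parameters (a 1 MB block, 256-bit keys and hashes, and zones of size m = 16); these values are not taken from the paper and serve only to show that the block term dominates.

```python
log2_q = 8 * 10**6   # assumed block size in bits (1 MB)
log2_K = 256         # assumed private key size in bits
log2_p = 256         # assumed hash size in bits
m = 16               # assumed zone size

R_conv = log2_q + log2_p                                  # Eq. (1)
R_dist = log2_q / m + 2 * log2_K + 2 * log2_p             # Eq. (4) with |C| = q
gain = (m - 1) / m * log2_q - 2 * log2_K - log2_p         # Eq. (5)

print(R_conv, R_dist, gain, R_conv - R_dist)
```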
B. Recovery Scheme
We now describe the algorithm to retrieve a data block Bt in a blockchain system comprising a total of T
transactions. The algorithm to recover block Bt is described in Alg. 2.
The recovery algorithm exploits information-theoretic security in the form of the coding scheme, and also invokes
the hash-based computational integrity check established in the chain. First, the data blocks are recovered from
the distributed, encrypted storage from each zone. In case of a data mismatch, the system inspects the chain for
consistency in the hash chain. The system scans the chain for hash values and eliminates peers that have inconsistent
hash values. A hash value is said to be inconsistent if the hash value corresponding to the data stored by a node
in the previous instance does not match the current hash value.
Through the inconsistency check, the system eliminates some, if not all corrupted peers. Finally, the majority
consistent data is returned. In practice, the consistency check along the hash chain can be limited to a finite number
of blocks to reduce computational complexity of recovery.
In the implementation, we presume that all computation necessary for the recovery algorithm is done privately
by a black box. In particular, we presume that the peers and clients are not made aware of the code stored at other
peers or values stored in other blocks. Specifics of practical implementation of such a black box scheme is beyond
the scope of this paper.
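The consistency-based filtering that drives the recovery procedure can be sketched as follows; the single-step check below compresses the multi-slot scan of Alg. 2 into one comparison and uses assumed, simplified data structures.

```python
import hashlib
from collections import Counter

def h(prev_hash: bytes, block: bytes) -> bytes:
    return hashlib.sha256(prev_hash + block).digest()

def recover_block(candidates):
    """candidates: one (block, prev_hash, next_hash) view per zone.
    Views whose recomputed hash disagrees with the stored next hash are discarded,
    and the majority among the remaining views is returned."""
    consistent = [blk for blk, prev, nxt in candidates if h(prev, blk) == nxt]
    if not consistent:
        consistent = [blk for blk, _, _ in candidates]   # fall back to a plain majority
    return Counter(consistent).most_common(1)[0][0]

genesis = b"\x00" * 32
good = (b"tx data", genesis, h(genesis, b"tx data"))
forged = (b"forged", genesis, h(genesis, b"tx data"))    # inconsistent hash, filtered out
print(recover_block([good, good, forged]))               # -> b'tx data'
```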
C. Feasible Encryption Scheme
The security of the coding scheme from corruption by active adversaries depends on the encryption scheme used.
We first describe the necessary condition on the size of the key space.
Lemma 1: A valid encryption scheme satisfying (2) and (3) has
|K| ≥ 2^m.
Algorithm 2 Recovery scheme for data block
N ← [n]
Using polynomial interpolation on the shares, compute Kt^(z), for all z ∈ [n/m]
Decode blocks Bt^(z) ← Ψ(Ct^(z); Kt^(z)), for all z ∈ [n/m]
if |{Bt^(z) : z ∈ [n/m]}| > 1 then
    for τ = t to T do
        Using polynomial interpolation from the hash shares, compute Hτ^(z), for all z ∈ [n/m]
        Determine Wτ^(z) = (Bτ^(z), Hτ−1^(z)), for all z ∈ [n/m]
        Determine hash inconsistencies I ← {i ∈ [n] : h(Wτ^(z)) ≠ Hτ^(z′), z = pτ^(i), z′ = pτ+1^(i)}
        N ← N \ I
        if |{Bt^(pt^(i)) : i ∈ N}| = 1 then
            break
        end if
    end for
end if
return Majority in {Bt^(pt^(i)) : i ∈ N}
Proof: First, by the chain rule of entropy,
H(K, C|M) = H(K) + H(C|M, K) = H(K), (6)
where (6) follows from the fact that the codeword is known given the private key and the message.
Again using the chain rule and (6), we have
H(K) = H(C|M) + H(K|C, M) ≥ Σ_{j=1}^{m} H(Cj | C−j, M) (7)
≥ m, (8)
where (7) follows from non-negativity of entropy and the fact that conditioning only reduces entropy. Finally, (8) follows from the condition (3). Since keys are chosen uniformly at random, the result follows.
We now describe an encryption scheme that is order optimal in the size of the private key space up to log factors. Let T be the set of all rooted, connected trees defined on m nodes. Then, by Cayley's formula [31],
|T| = m^(m−1).
Let us define the key space by the entropy-coded form of uniform draws of a tree from T. Hence in the description of the encryption scheme, we presume that given the private key K, we are aware of all edges in the tree. Let V = [m] be the nodes of the tree and v0 be the root. Let the parent of a node i in the tree be µi.
Consider the encryption function given in Alg. 3. The encryption algorithm proceeds by first selecting a rooted,
connected tree uniformly at random on m nodes. Then, each peer is assigned to a particular node of the tree. For
each node other than the root, the codeword is created as the modulo 2 sum of the corresponding data block and
that corresponding to the parent. Finally, the root is encrypted as the modulo 2 sum of all codewords at other nodes
and the corresponding data block. The bits stored at the root node are flipped with probability half. The encryption
scheme for a sample data block is shown in Fig. 4. We refer to Alg. 3 as Φ from here on.
The decryption of the stored code is as given in Alg. 4. That is, we first determine the private key, i.e., the rooted
tree structure, the bit, and peer assignments. Then we decrypt the root node by using the codewords at other peers.
Then we sequentially recover the other blocks by using the plain text message at the parent node.
Lemma 2: The encryption scheme Φ satisfies (2) and (3).
Proof: Validity of (2) follows directly from the definition of the encryption scheme as the message is not
recoverable from just the encryption.
Algorithm 3 Encryption scheme
T ← Unif(T), K ← Key(T); b ← Binom(n, 1/2)
Assign peers to vertices, i.e., peer i is assigned to node θi
For all i ≠ v0, C̃i ← Bi ⊕ Bµi; flip bits if bi = 1
C̃v0 ← (⊕_{j≠v0} C̃j) ⊕ Bv0
if bv0 = 1 then
    Flip the bits of C̃v0
end if
Store Ci ← C̃θi at each node i in the zone
Store (K, θ) using Shamir's secret sharing at the peers
Store the peer assignment θi locally at each peer i
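A Python sketch of the tree-based XOR encryption and its decryption follows; for brevity, the random bit flips and the peer assignment θ of Alg. 3 are omitted, and the tree is drawn as a random recursive tree rather than uniformly from T, so the sketch illustrates only the encode/decode structure.

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_block(block: bytes, m: int):
    """Split a block into m equal-length sub-blocks (zero-padded)."""
    piece = -(-len(block) // m)
    padded = block.ljust(piece * m, b"\x00")
    return [padded[i * piece:(i + 1) * piece] for i in range(m)]

def random_rooted_tree(m: int):
    """A random rooted tree on nodes 0..m-1 (random recursive tree; note that Alg. 3
    requires a uniform draw over all rooted trees, which this sketch does not provide)."""
    parent = {0: None}
    for i in range(1, m):
        parent[i] = random.randrange(i)
    return 0, parent

def encrypt(sub_blocks, root, parent):
    """C_i = B_i xor B_parent(i) for i != root; C_root = (xor of all other C_i) xor B_root."""
    m = len(sub_blocks)
    code = [None] * m
    for i in range(m):
        if i != root:
            code[i] = xor(sub_blocks[i], sub_blocks[parent[i]])
    acc = sub_blocks[root]
    for i in range(m):
        if i != root:
            acc = xor(acc, code[i])
    code[root] = acc
    return code

def decrypt(code, root, parent):
    m = len(code)
    plain = [None] * m
    acc = code[root]
    for i in range(m):
        if i != root:
            acc = xor(acc, code[i])
    plain[root] = acc                       # B_root is recovered first

    def depth(i):
        d = 0
        while parent[i] is not None:
            i, d = parent[i], d + 1
        return d

    for i in sorted(range(m), key=depth):   # parents are processed before children
        if i != root:
            plain[i] = xor(code[i], plain[parent[i]])
    return plain

subs = split_block(b"an example transaction block", m=6)
root, parent = random_rooted_tree(6)
assert decrypt(encrypt(subs, root, parent), root, parent) == subs
```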
Fig. 4: Encryption examples for a zone with six peers. The data block, parameters, tree structure, and corresponding
codes are shown. The two cases consider the same rooted tree with varying peer assignments. The corresponding
change in the code is shown.
To check the validity of (3), note that given the data at all peers other than one node, the adversary is unaware
of the parent of the missing node. Since this is uniformly likely, the probability that the adversary can guess the
encrypted data is at most 1/2, with the maximum being if the root is not recovered.
Lemma 3: The storage cost per peer per transaction under (Φ, Ψ) is
Rs(Φ, Ψ) = (1/m) log2 q + 2m log2 m + 2 log2 p + 1 bits. (9)
Proof: First note that C = Fq. Next, the number of rooted, connected trees on m nodes is given by Cayley's formula as m^(m−1). The peer assignments can be stored locally and so cost only log2 m bits per node per transaction. Thus, the result follows.
From Lemma 3, we can see that the encryption scheme guarantees order-optimal storage cost per peer per transaction up to a log factor in the size of the key space. The security of the encrypted data can be enhanced by increasing the inter-data dependency by using directed acyclic graphs (DAGs) with bounded in-degree in place of the rooted tree. Then, the size of storage for the private key increases by a constant multiple.
D. Individual Block Corruption
We now establish the security guarantees of individual blocks in each zone from active adversaries. First, consider
an adversary who is aware of the hash value Ht and wishes to alter it to Ht′.
Algorithm 4 Decryption scheme
Use polynomial interpolation to recover (K, b, θ)
Define θ̃i ← j if θj = i
Flip the bits of Cθ̃v0, if bv0 = 1
Bv0 ← Cθ̃v0 ⊕ (⊕_{j≠θ̃v0} Cj)
For all i ∈ [n]\{v0}, flip bits of Ci if bi = 1
Iteratively compute Bi ← Cθ̃i ⊕ Bµi for all i ≠ v0
return B
Lemma 4: Say an adversary, aware of the hash value Ht and the peers in a zone z, wishes to alter the value stored in the zone to Ht′. Then, the probability of successful corruption of such a system when at least one peer is honest is O(1/q).
Proof: Assume the adversary knows the secret shares of k − 1 peers in the zone. Since the adversary is also
aware of Ht = a0 , the adversary is aware of the coding scheme through polynomial interpolation. However, since
the final peer is honest, the adversary is unaware of the secret share stored here. Hence the result follows.
This indicates that in order to corrupt a hash value, the adversary practically needs to corrupt all nodes in the zone.
To understand corruption of data blocks, we first consider the probability of successful corruption of a zone
without corrupting all peers of the zone.
Lemma 5: Consider an adversary, aware of the plain text B and the peers in a zone. If the adversary corrupts c < m peers of the zone, then the probability that the adversary can alter the data to B′ is at most c^2/m^2, i.e.,
P[B → B′ in zone z] ≤ c^2/m^2, for all B ≠ B′, z ∈ [n/m]. (10)
Proof: For ease, let us assume that θi = i for all i ∈ [m]. From the construction of the encryption scheme, we note that if Bi → Bi′ is to be performed, then all blocks in the subtree rooted at i are to be altered as well. Further, for any change in the block contents, the root is also to be altered.
Thus, a successful corruption is possible only if all nodes in the subtree and the root have been corrupted. However, the adversary can only corrupt the peers at random and has no information regarding the structure of the tree. Thus, the simplest corruption is one that alters only a leaf and the root. Thus, the probability of corruption can be bounded by the probability of corrupting those two particular nodes as follows:
P[B → B′ in zone z] ≤ P[Selecting a leaf and root in c draws without replacement from [m]]
= (m−2 choose c−2) / (m choose c)
= c(c − 1) / (m(m − 1)) ≤ c^2/m^2. (11)
A consistent corruption of a transaction by an active adversary however requires corruption of at least n/2m zones.
Theorem 1: Consider an active adversary who corrupts c1, . . . , c_{n/2m} peers respectively in n/2m zones. Then, the probability of successful corruption across the set of all peers is
P[Successful corruption B → B′] ≤ exp( (n/m) log( (2 Σ_{i=1}^{n/2m} ci) / n ) ), for all B ≠ B′. (12)
Proof: From Lemma 5 and the independence of the encryption across zones, we have
P[Successful consistent corruption B → B′] ≤ Π_{i=1}^{n/2m} ci^2 / m^2
= exp( 2 Σ_{i=1}^{n/2m} log ci − 2 (n/2m) log m )
≤ exp( (n/m) log( (2 Σ_{i=1}^{n/2m} ci) / n ) ), (13)
where (13) follows from the arithmetic-geometric mean inequality.
Note that
Σ_{i=1}^{n/2m} ci ≤ n/2,
and thus the upper bound on successful corruption decays with the size of the peer network if less than half the
network is corrupted. From Thm. 1, we immediately get the following corollary.
Corollary 1: If an adversary wishes to corrupt a data block with probability at least 1 − ε, for some ε > 0, then the total number of nodes to be corrupted satisfies
Σ_{i=1}^{n/2m} ci ≥ (1 − ε)^(m/n) (n/2). (14)
Cor. 1 indicates that when the network size is large, the adversary practically needs to corrupt at least half the
network to have the necessary probability of successful corruption. Thus we observe that for a fixed zone division,
the distributed storage system loses an arbitrarily small amount of data integrity as compared to the conventional
scheme.
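To get a feel for these bounds, the snippet below evaluates (12) and (14) for assumed parameters (n = 10000 peers, zones of size m = 20, and 15 corrupted peers in each of the n/2m zones); the numbers are illustrative only.

```python
import math

n, m = 10000, 20            # assumed network size and zone size
zones = n // (2 * m)        # the n/2m zones touched by the adversary
c = [15] * zones            # assumed corrupted peers per zone (c_i < m)

# Upper bound of Eq. (12) on the probability of a successful consistent corruption
bound = math.exp((n / m) * math.log(2 * sum(c) / n))
print(f"Eq. (12) bound: {bound:.3e}")

# Threshold of Eq. (14): peers that must be corrupted for success probability >= 1 - eps
eps = 0.1
print(f"Eq. (14) threshold: {(1 - eps) ** (m / n) * n / 2:.1f} of {n} peers")
```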
In Sec. V, we introduce a dynamic zone allocation scheme to divide the peer network into zones for different time
slots. We show that varying the zone allocation patterns over time appropriately yields even better data integrity.
With regard to denial of service attacks, since it is infeasible to recover the data block without the share of any peer in the zone, the system is capable of handling up to one denial of service attack per zone. That is, the system can tolerate a total of n/m denial of service attacks.
E. Data Confidentiality
We earlier stated that the two aspects of security needed in blockchain systems are data integrity and confidentiality. We addressed the question of data integrity in the previous subsection. We now consider confidentiality of
transaction data.
Consider the situation where a peer i in a zone is compromised. That is, an external adversary receives the data
stored by the peer for one particular slot. This includes the secret share of the private key Ki , the encrypted block
data Ci , and secret share corresponding to the hash of the previous block.
From Shamir’s secret sharing scheme, we know that knowledge of Ki gives no information regarding the actual
private key K . Thus, the adversary has no information on the rooted tree used for encryption.
We know that the transaction data are chosen uniformly at random. Since the adversary is unaware of the relation
of the nodes to one another, from the encryption scheme defined, we know that given the entire encrypted data C,
the probability of recovering the block B is uniformly distributed on the set of all possible combinations obtained
for all possible tree configurations. That is, each possibly rooted tree yields a potential candidate for the transaction
data.
This observation implies that
H(B|C) ≤ H(K, B|C) = H(K) + H(B|K, C) = m log2 m.
(15)
We know that the entropy of the transaction block is actually H(B) = log2 q > m log2 m. That is, the adversary
does learn the transaction data partially and has a smaller set of candidates in comparison to the set of all possible
values, given the entire codeword.
However, in the presence of just Ci , the adversary has no way to determine any of the other stored data, nor
does it have any information on the position of this part of the code in the underlying transaction block. Thus,
local leaks reveal very little information regarding the transaction block.
Thus, we observe that the coding scheme also ensures a high degree of confidentiality in case of data leaks from
up to m − 1 peers in a zone.
Fig. 5: Dynamic Zone Allocation over Time: Iterate the zone allocation patterns among the peers so that increasing
number of peers need to be corrupted to maintain a consistent chain structure.
V. DYNAMIC ZONE ALLOCATION
In the definition of the coding and recovery schemes, we presumed the existence of a zone allocation strategy
over time. Here we make it explicit.
Cor. 1 and Lemma 4 highlighted the fact that the distributed secure encoding process ensures that corrupting a
transaction block or a hash requires an adversary to corrupt all peers in the zone. This fact can be exploited to
ensure that with each transaction following the transaction to be corrupted, the client would need to corrupt an
increasing set of peers to maintain a consistent version of the corrupted chain.
In particular, let us assume a blockchain in the following state
(H0 , B1 ) − (H1 , B2 ) − · · · − (Ht−1 , Bt ).
Let us assume without loss of generality that an adversary wishes to corrupt the transaction entry B1 to B1′. The validated, consistent version of such a corrupted chain would look like
(H0, B1′) − (H1′, B2) − · · · − (H′t−1, Bt).
If the zone segmentation used for the encoding process is static, then the adversary can easily maintain such a
corrupted chain at half the peers to validate its claim. If each peer is paired with varying sets of peers across blocks,
then, for sufficiently large t, each corrupted peer eventually pairs with an uncorrupted peer.
Let us assume that this occurs for a set of corrupted peers at slot τ . Then, in order to successfully corrupt the
hash Hτ −1 to Hτ0 −1 , the adversary would need to corrupt the rest of the uncorrupted peers in the new zone. On
the other hand, if the client does not corrupt these nodes, then the hash value remains unaltered indicating the
inconsistencies of the corrupted peers.
Thus, it is evident that if the zones are sufficiently well distributed, corrupting a single transaction would eventually
require corruption of the entire network, and not just a majority. A sample allocation scheme is shown in Fig. 5.
However, the total number of feasible zone allocations is given by
No. of zone allocations = n! / (m!)^(n/m) ≈ √(2πn) n^n / (√(2πm) m^m)^(n/m), (16)
which increases exponentially with the number of peers and is monotonically decreasing in the zone size m. This
indicates that naive deterministic cycling through this set of all possible zone allocations is practically infeasible.
To ensure that every uncorrupted peer is eventually grouped with a corrupted peer, we essentially need to ensure
that every peer is eventually grouped with every other peer. Further, the blockchain system needs to ensure uniform
security for every transaction and to this end, the allocation process should also be fair.
In order to better understand the zone allocation strategy, we first study a combinatorial problem.
Algorithm 5 Dynamic Zone Allocation Strategy
Let ν2, . . . , ν2n′ be the vertices of a regular polygon with 2n′ − 1 vertices, and ν1 its center
for i = 2 to 2n′ do
    Let L be the line passing through ν1 and νi
    M ← {(νj, νk) : line through νj, νk is perpendicular to L}
    M ← M ∪ {(ν1, νi)}
    Construct zones as {νj ∪ νk : (νj, νk) ∈ M}
end for
restart for loop
Fig. 6: Dynamic Zone Allocation strategy when n = 4m. The zone allocation scheme cycles through matchings of
the complete graph by viewing them in the form of the regular polygon.
A. K-way Handshake Problem
Consider a group of n people. At each slot of time, the people are to be grouped into sets of size m. A peer gets acquainted with all other peers in the group whom they have not met before. The problem can thus be viewed as an m-way handshake between people.
Lemma 6: The minimum number of slots required for every peer to shake hands with every other peer is (n − 1)/(m − 1).
Proof: At any slot, a peer meets at most m − 1 new peers. Thus the lower bound follows.
Remark 1: Note that each such grouping of nodes constitutes a matching (1-factor) of an m-uniform complete hypergraph on n nodes, K_m^n. Baranyai's theorem [32] states that if n is divisible by m, then there exists a decomposition of K_m^n into (n−1 choose m−1) 1-factors. However, we do not require every hyperedge to be covered by the allocation scheme, but only for every node to be grouped with every other node eventually.
Note that for m = 2, it shows that we can decompose a graph into n − 1 different matchings. In this case, the
handshake problem is the same as the decomposition of the graph into matchings. Thus Baranyai’s theorem in this
case gives us the exact number of slots to solve the problem.
We use the tightness observed for the 2-way handshake problem to design a strategy to assign the peers to zones. Let n′ = n/m. Partition the peers into 2n′ sets, each containing m/2 peers. Let these sets be given by ν1, . . . , ν2n′. Then, we can use matchings of K2n′ to perform the zone allocation.
Consider Alg. 5. The algorithm provides a constructive method to create zones such that all peers are grouped with each other over time. The functioning of the algorithm is as in Fig. 6.
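A sketch of the allocation in its equivalent round-robin form (fix one ν-set and rotate the remaining 2n′ − 1, which realizes the same cyclic matchings as Alg. 5) is given below; the grouping of raw peers into ν-sets of size m/2 is assumed to have been done already.

```python
def round_robin_rounds(num_sets: int):
    """Yield, for each of the (num_sets - 1) slots, a perfect matching of the
    ν-sets {0, ..., num_sets - 1}; num_sets = 2n' is even."""
    fixed, others = 0, list(range(1, num_sets))
    for _ in range(num_sets - 1):
        row = [fixed] + others
        half = num_sets // 2
        yield list(zip(row[:half], reversed(row[half:])))
        others = others[-1:] + others[:-1]            # rotate the non-fixed sets

def zones_for_slot(matching, nu_sets):
    """Merge each matched pair of ν-sets (m/2 peers each) into one zone of m peers."""
    return [nu_sets[a] + nu_sets[b] for a, b in matching]

# Example: n = 12 peers, m = 4, so n' = 3 and there are 2n' = 6 ν-sets of 2 peers each
nu_sets = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]]
for slot, matching in enumerate(round_robin_rounds(len(nu_sets))):
    print(slot, zones_for_slot(matching, nu_sets))
```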
Lemma 7: The number of slots required for every peer to be grouped with every other peer is 2n′ − 1.
Proof: The result follows directly from the cyclic decomposition and Baranyai's theorem for m = 2.
We see that the scheme matches the lower bound on the number of slots for coverage in the order sense. Thus we
consider this allocation strategy in the following discussion. In addition to the order optimality, the method is also
fair in its implementation to all transactions over time.
B. Security Enhancement
From Alg. 2, we know that inconsistent peers are removed from consideration for data recovery. While Lemma 7 guarantees coverage in 2n/m slots, we are in fact interested in the number of slots for all uncorrupted peers to be paired with corrupt peers. We now give insight into the rate at which this happens.
We know that an adversary who wishes to corrupt a block corrupts at least n/2 nodes originally.
Lemma 8: Under the cyclic zone allocation strategy, the adversary needs to corrupt at least m new nodes with
each new transaction to establish a consistent corruption.
Proof: From the cyclic zone allocation strategy and the pigeonhole principle, we note that with each slot, at
least two honest nodes in the graph are paired with corrupt nodes. Thus the result follows.
Naturally this implies that in the worst case, with n/2m transactions, the data becomes completely secure in the
network. That is, only a corruption of all peers (not just a majority) leads to a consistent corruption of the
transaction.
VI. DATA RECOVERY AND REPAIR
A. Recovery Cost
As highlighted in the scheme, the recovery process of a transaction block or hash value requires the participation
of all peers in the zones. However, practical systems often have peers that are temporarily inactive or undergo data
failure. In such contexts the recovery of the data from the corresponding zone becomes infeasible.
Thus it is of interest to know the probability that it may not be feasible to recover an old transaction at any time
slot. Consider a simple model wherein the probability that a peer is inactive in a slot is ρ, and peer activity across
slots and peers is independent and identically distributed.
Theorem 2: For any δ > 0, the probability of successful recovery of a data block at any time slot is at least 1 − δ if and only if m = Θ(log n).
Proof: First, the probability that the data stored can be recovered at any time slot is bounded according to the union bound as follows:
P[Recovery] = P[there exists a zone with all active peers] ≤ (n/m) (1 − ρ)^m.
Thus, to guarantee a recovery probability of at least 1 − δ, we need
(n/m) (1 − ρ)^m ≥ 1 − δ (17)
=⇒ m ≤ (1 / log(1/(1 − ρ))) (log n − log(1 − δ)). (18)
Next, to obtain sufficient conditions on the size of the zones, note that
P[Failure] = P[at least one peer in each zone is inactive] = (1 − (1 − ρ)^m)^(n/m) ≤ exp(−(1 − ρ)^m (n/m)),
where the last inequality follows from the fact that 1 − x ≤ exp(−x). Hence a sufficient condition for guaranteeing an error probability of less than δ is
exp(−(1 − ρ)^m (n/m)) ≤ δ
=⇒ m ≤ (1 / log(1/(1 − ρ))) (log n − log log(1/δ)). (19)
Thus the result follows.
Thm. 2 indicates that the zone sizes have to be of the order of log n to guarantee a required probability of
recovery in one slot when node failures are possible.
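The scaling of Thm. 2 can be checked numerically; the values of n, ρ, and δ below are assumptions chosen only to illustrate the log n order of the admissible zone size.

```python
import math

n, rho, delta = 10000, 0.05, 0.01   # assumed peers, inactivity probability, target failure probability

# Sufficient zone size from the proof of Thm. 2 (Eq. (19))
m_max = (math.log(n) - math.log(math.log(1 / delta))) / math.log(1 / (1 - rho))
print(f"zone size should be at most ~{m_max:.1f} (log n = {math.log(n):.1f})")

# Direct evaluation of the failure probability (1 - (1-rho)^m)^(n/m) for a few zone sizes
for m in (5, 10, 20, 40):
    p_fail = (1 - (1 - rho) ** m) ** (n / m)
    print(m, f"{p_fail:.2e}")
```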
Algorithm 6 Bitcoin Mining Algorithm
input: previous block data Wt , acceptable hash value set Ht
output: nonce-appended data W̃t , hash value Ht
for all Nt ∈ N do
W̃t ← (Nt , Wt )
Ht ← h(W̃t )
if Ht ∈ Ht then
break
end if
end for
return W̃t , Ht
B. Data Repair
The distributed secure storage ensures that individual entries stored at each peer can not be recovered from the
knowledge of other entries in the zone. Thus it is not feasible to repair nodes locally within a zone. However, it
suffices to substitute a set of bits such that the code structure is retained.
A node failure indicates that the private key is lost. Thus repairing a node involves recoding the entire zone using
data from a neighboring zone. Thus owing to the encryption, it is difficult to repair nodes upon failure.
Thus, the transaction data is completely lost if one peer from every zone undergoes failure. That is, the system
can handle up to n/m node failures. This again emphasizes the need to ensure that m is small in comparison to n.
VII. COMPUTATION AND ENERGY
As stated in Sec. II-B, conventional blockchain systems incorporate constraints on the hash values of transactions over time to enhance the security of data blocks. Enforcing such constraints however translates to higher
computational load on the block validation (mining) process.
Let us elaborate with the example of the bitcoin. The bitcoin network employs the SHA-256 algorithm for
generating the hash values [33]. At each time slot, the network chooses a difficulty level at random from the set
of hash values, Fp . The role of the miner in the network is to append an appropriate nonce to the block such that
the corresponding hash value satisfies the difficulty level for the slot. The nonce-appended block and the generated
hash value are then added to the blockchain. To be precise, the hash is applied twice in bitcoin. First, the block is
hashed and then the nonce is appended to this version and rehashed. However, for simplicity, we just presume that
the nonce is appended directly to the block and a single hash is computed.
At any slot t, let the transaction block be Bt and the previous hash Ht−1. Let the set of all possible nonce values be N = {0, 1}^b, where b is the number of bits appended to the transaction block, and is chosen sufficiently large such that the hash criterion will be satisfied for at least one value. Let the nonce added by the miner be Nt. Then,
the miner searches for Nt ∈ N such that
h(Ht−1 , Nt , Bt ) ∈ Ht ,
where Ht ⊆ Fp is the set of permissible hash values at time t as determined by the difficulty level.
However, due to the pre-image resistance of the cryptographic hash function, the miner is not capable of directly
estimating the nonce, given the data block and difficulty level. This in conjunction with collision resistance of
the hash function implies that the best way for the miner to find a nonce that corresponds to a hash value in the
desired range is to perform brute-force search. Thus, the typical miner in the bitcoin network adopts Alg. 6 to add
a transaction to the blockchain. The brute-force search for nonce value results in a large computational cost which
in turn results in an increase in the consumption of electricity as reflected in the hash rate as was shown in Fig. 1c.
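A toy version of the brute-force search of Alg. 6 is shown below, using SHA-256 with a leading-zero-bits difficulty target; the double hashing used by bitcoin and its exact difficulty encoding are deliberately simplified, and the parameter names are assumptions for this example.

```python
import hashlib

def mine(prev_hash: bytes, block: bytes, difficulty_bits: int, max_nonce: int = 2**32):
    """Brute-force search for a nonce such that h(prev_hash, nonce, block) meets the target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big") + block).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    raise RuntimeError("nonce space exhausted")

nonce, digest = mine(b"\x00" * 32, b"example block", difficulty_bits=16)
print(nonce, digest.hex())
```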
The imposition of the constraints on the hash values for each block enhances the integrity of data blocks. Consider
an adversary who corrupts a data block. Establishing consistency in the chain following such a corruption requires
not only recomputing a hash value, but doing so such that the hash values adhere to the constraints. Each such
computation of a hash value is computationally expensive. Thus, the security established through the use of such
constraints essentially amounts to preventing the addition of incorrect transactions to the chain, since any such addition would itself require expensive mining.
We now use the notion of an ideal cryptographic hash function to quantify the expected computational cost. Let
us again presume that the message space is bounded and given by M. If M ∼ Unif(M) and h(·) is an ideal
cryptographic hash function, then H = h(M) is also distributed uniformly in the space of hash values H. Thus,
extending from this notion, for any Ht ⊂ H, the set of values Mt ⊂ M such that h(M) ∈ Ht for any M ∈ Mt
has a volume proportional to the size of the hash subset, i.e., |Mt | ∝ |Ht |.
We know that the mining algorithm exhaustively searches for a nonce value that matches the required hash
constraints. Now, if the hash value is chosen at random, and the transaction block is given to be uniformly random,
then searching for the nonce by brute force is equivalent, on average, to the experiment of drawing balls from an urn without replacement.
Theorem 3: Let q = |N|, |Ht| = p′ be given, and let Ht be chosen uniformly at random from the set of all subsets of H with size p′. Let M(Wt, Ht) be the number of steps taken by Alg. 6 to find a valid hash. If 1 ≪ p′ ≪ p, and b ≥ p′ is sufficiently large, then
E[M(Wt, Ht) | Wt, |Ht| = p′] = E[M′((p′/p) q, (1 − p′/p) q)], (20)
where M′(a1, a2) is the number of turns it takes, drawing without replacement from an urn containing a1 blue and a2 red balls, to get the first blue ball.
Proof: First, we know that the volume of permissible nonce values is proportional to the size of the hash subset, i.e., is cp′, for some constant c. Any message from this set is uniformly likely as the corresponding constraint set is drawn uniformly. Further, we know that for p′ = p, any nonce is permissible and so cp = q.
Thus, we essentially are drawing without replacement, as the brute-force search does not resample candidates, and we are drawing till we get an element from a set of size (p′/p) q within a set of size q. This is essentially the same as would be the case for sampling without replacement from the urn. Since the uniformity in trials holds only on average, the result follows.
Corollary 2: The expected computational cost of the mining algorithm is given by
E[M(Wt, Ht) | Wt, |Ht| = p′] = O(p/p′),
for any 1 ≪ p′ ≪ p, and b ≥ p′ sufficiently large.
Proof: The result follows directly from Thm. 3 and the expected number of draws for the sampling without
replacement experiment.
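The urn expectation behind Cor. 2 has a simple closed form: drawing without replacement from an urn with a1 blue and a2 red balls, the first blue ball appears after (a1 + a2 + 1)/(a1 + 1) draws on average, which is roughly p/p′ when a1 = (p′/p)q. The short check below uses assumed values of q and p′/p.

```python
import random

def expected_draws_mc(blue: int, red: int, trials: int = 5000) -> float:
    """Monte-Carlo estimate of the draws (without replacement) to the first blue ball."""
    total = 0
    for _ in range(trials):
        urn = [1] * blue + [0] * red
        random.shuffle(urn)
        total += urn.index(1) + 1
    return total / trials

q, ratio = 4096, 1 / 64                      # assumed nonce-space size and p'/p
blue = int(ratio * q)                        # permissible nonces
print(expected_draws_mc(blue, q - blue))     # simulated expectation
print((q + 1) / (blue + 1))                  # closed form (a1 + a2 + 1) / (a1 + 1)
print(1 / ratio)                             # the O(p/p') scaling of Cor. 2
```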
In practical systems, a moderate range of values of p′ is used: a very small subset might not be satisfiable with a bounded nonce set, while a large subset leads to a loss of data integrity, as almost any value works. Thus practical systems choose values in a range wherein it is sufficiently difficult to generate viable candidates satisfying the hash constraint.
Thus we note that the average computational cost can be very high in bitcoin-like systems. In particular, the bitcoin network uses 256-bit hash values; that is, log2 p = 256, while log2 q = 32. This in turn implies that the potential candidate space to search is very large, thereby leading to increased energy consumption.
On the other hand, in our proposed dynamic distributed storage scheme, the hash values computed are not
restricted by these constraints. Instead, we enhance data integrity in the blockchain using information-theoretic secrecy rather than computational hardness. This way, computing hash values and the corresponding secret shares
is computationally inexpensive. Further, we have shown that the integrity is guaranteed through the consistency
checks. Thus, the scheme defined here guarantees a reduction in the computational cost without compromising on
security.
VIII. CONCLUSION
In this work, we considered fundamental questions that hinder the scaling of conventional blockchain systems.
We addressed the issue of increasing storage costs associated with large-scale blockchain systems. Using a novel
combination of secret key sharing, encryption, and distributed storage, we developed coding schemes that were
information-theoretically secure and reduced the storage to a fraction of the original load. We also showed that the
resulting storage cost was optimal up to log factors in the size of the private key.
We showed that the distributed storage code ensures both data integrity and confidentiality, with very little loss in
comparison to the conventional schemes. We then used a dynamic segmentation scheme to enhance data integrity.
We described a zone allocation strategy, based on the decomposition of a complete graph into matchings, that is order optimal in the time for complete coverage.
We also investigated the energy requirements of conventional bitcoin-like blockchain systems by studying the
computational complexity associated with establishing the hashchain. We highlighted the need for simple hashing
schemes and showed that our coding scheme guarantees a significant reduction in energy consumption.
Notwithstanding these benefits, the coding scheme hinges centrally on distributing the transaction data securely
among a set of peers. Thus, the recovery and repair costs associated with the scheme are higher than conventional
systems. In particular, through probabilistic recovery arguments, we established order optimal necessary and sufficient conditions on the size of the zones in our construction. Further, we also highlighted the fact that local repair
is infeasible in this scheme and requires explicit recomputation of the codes corresponding to the zone.
Additionally, the encoding and recovery schemes presumed the feasibility of performing them using black box
methods, such that the peers do not learn the codes pertaining to other peers. We did not attempt to define explicit
methods of constructing such black box schemes in this work, and this would be the focus of future research.
Another aspect to be addressed is the protocols and related costs used to distribute the codewords in the peer-to-peer network. Since the data stored by individual nodes varies, owing to the dynamic
distributed storage scheme, it is essential to communicate the individual blocks in a point-to-point manner, rather
than as a broadcast, as is done in conventional blockchain systems. This process naturally implies an increased
communication cost in the network. Since this is closely related to the code construction methodology, this aspect
is also to be considered in detail in future research.
This work establishes the feasibility of enhancing blockchain performance and reducing associated costs through
novel use of coding-theoretic techniques. We believe such methods enhance the ability of blockchain systems to
scale to address practical applications in a variety of industries.
ACKNOWLEDGMENT
The authors would like to thank Prof. Andrew Miller for references and useful discussions on blockchain systems.
REFERENCES
[1] J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, and E. W. Felten, “SoK: Research perspectives and challenges for bitcoin
and cryptocurrencies,” in Proc. 2015 IEEE Symp. Security Privacy, May 2015, pp. 104–121.
[2] A. Narayanan, J. Bonneau, E. Felten, A. Miller, and S. Goldfeder, Bitcoin and Cryptocurrency Technologies: A Comprehensive
Introduction. Princeton: Princeton University Press, 2016.
[3] D. Tapscott and A. Tapscott, Blockchain Revolution: How the Technology behind Bitcoin is Changing Money, Business, and the World.
New York: Penguin, 2016.
[4] S. Underwood, “Blockchain beyond bitcoin,” Commun. ACM, vol. 59, no. 11, pp. 15–17, Oct. 2016.
[5] A. Azaria, A. Ekblaw, T. Vieira, and A. Lippman, “Medrec: Using blockchain for medical data access and permission management,”
in 2nd Int. Conf. Open Big Data (OBD 2016), Aug. 2016, pp. 25–30.
[6] M. J. Casey and P. Wong, “Global supply chains are about to get better, thanks to blockchain,” Harvard Bus. Rev., Mar. 2017.
[Online]. Available: https://hbr.org/2017/03/global-supply-chains-are-about-to-get-better-thanks-to-blockchain
[7] M. Swan, Blockchain: Blueprint for a New Economy. Sebastopol, CA: O’Reilly Media, Inc., 2015.
[8] M. Iansiti and K. R. Lakhani, “The truth about blockchain,” Harvard Bus. Rev., vol. 95, no. 1, pp. 118–127, Jan. 2017.
[9] A. Kosba, A. Miller, E. Shi, Z. Wen, and C. Papamanthou, “Hawk: The blockchain model of cryptography and privacy-preserving
smart contracts,” in Proc. 2016 IEEE Symp. Security Privacy, May 2016, pp. 839–858.
[10] Blockchain info. [Online]. Available: https://blockchain.info/home
[11] K. Croman, C. Decker, I. Eyal, A. E. Gencer, A. Juels, A. Kosba, A. Miller, P. Saxena, E. Shi, E. G. Sirer, D. Song, and R. Wattenhofer,
“On scaling decentralized blockchains,” in Financial Cryptography and Data Security, ser. Lecture Notes in Computer Science, J. Clark,
S. Meiklejohn, P. Y. A. Ryan, D. Wallach, M. Brenner, and K. Rohloff, Eds. Berlin: Springer, 2016, vol. 9604, pp. 106–125.
[12] M. Vilim, H. Duwe, and R. Kumar, “Approximate bitcoin mining,” in Proc. 53rd Des. Autom. Conf. (DAC ’16), Jun. 2016, pp. 97:1–97:6.
[13] C. Malmo, “Bitcoin is unsustainable,” Jun. 2015. [Online]. Available: https://motherboard.vice.com/en us/article/ae3p7e/
bitcoin-is-unsustainable
[14] P. Fairley, “Blockchain world - feeding the blockchain beast if bitcoin ever does go mainstream, the electricity needed to sustain it will
be enormous,” IEEE Spectr., vol. 54, no. 10, pp. 36–59, Oct. 2017.
[15] A. Shamir, “How to share a secret,” Commun. ACM, vol. 22, no. 11, pp. 612–613, Nov. 1979.
[16] A. G. Dimakis and K. Ramchandran, “Network coding for distributed storage in wireless networks,” in Networked Sensing Information
and Control, V. Saligrama, Ed. New York: Springer, 2008, pp. 115–136.
19
[17] H. Krawczyk, “Secret sharing made short,” in Advances in Cryptology — CRYPTO ’93, ser. Lecture Notes in Computer Science, D. R.
Stinson, Ed. Berlin: Springer, 1994, vol. 773, pp. 136–146.
[18] M. O. Rabin, “The information dispersal algorithm and its applications,” in Sequences, R. M. Capocelli, Ed. New York: Springer-Verlag,
1990, pp. 406–419.
[19] C. Cachin and S. Tessaro, “Asynchronous verifiable information dispersal,” in 24th IEEE Symp. Rel. Distrib. Sys. (SRDS’05), Oct.
2005, pp. 191–201.
[20] K. V. Rashmi, N. B. Shah, P. V. Kumar, and K. Ramchandran, “Explicit construction of optimal exact regenerating codes for distributed
storage,” in Proc. 47th Annu. Allerton Conf. Commun. Control Comput., Sep. 2009, pp. 1243–1249.
[21] A. S. Rawat, O. O. Koyluoglu, N. Silberstein, and S. Vishwanath, “Optimal locally repairable and secure codes for distributed storage
systems,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 212–236, Jan. 2014.
[22] S. Pawar, S. El Rouayheb, and K. Ramchandran, “Securing dynamic distributed storage systems against eavesdropping and adversarial
attacks,” IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6734–6753, Oct. 2011.
[23] R. J. McEliece and D. V. Sarwate, “On sharing secrets and Reed-Solomon codes,” Commun. ACM, vol. 24, no. 9, pp. 583–584, Sep.
1981.
[24] E. Karnin, J. Greene, and M. Hellman, “On secret sharing systems,” IEEE Trans. Inf. Theory, vol. IT-29, no. 1, pp. 35–41, Jan. 1983.
[25] W. Huang, M. Langberg, J. Kliewer, and J. Bruck, “Communication efficient secret sharing,” IEEE Trans. Inf. Theory, vol. 62, no. 12,
pp. 7195–7206, Dec. 2016.
[26] J. L. Massey, “Minimal codewords and secret sharing,” in Proc. 6th Joint Swedish-Russian Int. Workshop Inf. Theory, Aug. 1993, pp.
276–279.
[27] C. E. Shannon, “Communication theory of secrecy systems,” Bell Syst. Tech. J., vol. 28, pp. 656–715, Oct. 1949.
[28] A. G. Dimakis, K. Ramchandran, Y. Wu, and C. Suh, “A survey on network codes for distributed storage,” Proc. IEEE, vol. 99, no. 3,
pp. 476–489, Mar. 2011.
[29] Y. Wu, A. G. Dimakis, and K. Ramchandran, “Deterministic regenerating codes for distributed storage,” in Proc. 45th Annu. Allerton
Conf. Commun. Control Comput., Sep. 2007, pp. 242–249.
[30] N. B. Shah, K. V. Rashmi, P. V. Kumar, and K. Ramchandran, “Distributed storage codes with repair-by-transfer and nonachievability
of interior points on the storage-bandwidth tradeoff,” IEEE Trans. Inf. Theory, vol. 58, no. 3, pp. 1837–1852, Mar. 2012.
[31] R. Durrett, Random Graph Dynamics. Cambridge: Cambridge University Press, 2007.
[32] Z. Baranyai, “The edge-coloring of complete hypergraphs I,” J. Comb. Theory, Ser. B, vol. 26, no. 3, pp. 276–294, 1979.
[33] H. Gilbert and H. Handschuh, “Security analysis of SHA-256 and sisters,” in Selected Areas in Cryptography, ser. Lecture Notes in
Computer Science, M. Matsui and R. J. Zuccherato, Eds. Berlin: Springer, 2004, vol. 3006, pp. 175–193.
Explaining Constraint Programming
Krzysztof R. Apt1,2,3
arXiv:cs/0602027v1 [] 7 Feb 2006
1 School of Computing, National University of Singapore
2 CWI, Amsterdam
3 University of Amsterdam, the Netherlands
Abstract. We discuss here constraint programming (CP) by using a proof-theoretic perspective. To this end we identify three levels of abstraction. Each level
sheds light on the essence of CP.
In particular, the highest level allows us to bring CP closer to the computation
as deduction paradigm. At the middle level we can explain various constraint
propagation algorithms. Finally, at the lowest level we can address the issue of
automatic generation and optimization of the constraint propagation algorithms.
1 Introduction
Constraint programming is an alternative approach to programming which consists
of modelling the problem as a set of requirements (constraints) that are subsequently
solved by means of general and domain specific methods.
Historically, constraint programming is the outcome of a long process that started in the seventies, when the seminal
works of Waltz and others on computer vision (see, e.g., [30]) led to the identification of constraint satisfaction
problems as an area of Artificial Intelligence. In this area several fundamental techniques, including constraint propagation and enhanced forms of search, have been developed.
In the eighties, starting with the seminal works of Colmerauer (see, e.g., [16]) and
Jaffar and Lassez (see [21]), the area of constraint logic programming was founded. In the
nineties a number of alternative approaches to constraint programming were realized,
in particular in the ILOG Solver (see, e.g., [20]), which is based on modeling constraint
satisfaction problems in C++ using classes. Another, more recent, example is the Koalog
Constraint Solver (see [23]), realized as a Java library.
This way constraint programming eventually emerged as a distinctive approach to
programming. In this paper we try to clarify this programming style and to assess it using a proof-theoretic perspective considered at various levels of abstraction. We believe
that this presentation of constraint programming allows us to more easily compare it
with other programming styles and to isolate its salient features.
2 Preliminaries
Let us start by introducing the already mentioned concept of a constraint satisfaction
problem. Consider a sequence X = x1, ..., xm of variables with respective domains
D1, ..., Dm. By a constraint on X we mean a subset of D1 × ... × Dm. A constraint
satisfaction problem (CSP) consists of a finite sequence of variables x1, ..., xn with
respective domains D1, ..., Dn and a finite set C of constraints, each on a subsequence
of x1, ..., xn. We write such a CSP as

⟨C ; x1 ∈ D1, ..., xn ∈ Dn⟩.
A solution to a CSP is an assignment of values to its variables from their domains
that satisfies all constraints. We say that a CSP is consistent if it has a solution, solved
if each assignment is a solution, and failed if either a variable domain is empty or a
constraint is empty. Intuitively, a failed CSP is one that obviously does not have any
solution. In contrast, it is not obvious at all to verify whether a CSP is solved. So we
introduce an imprecise concept of a ‘manifestly solved’ CSP which means that it is
computationally straightforward to verify that the CSP is solved. So this notion depends
on what we assume as ‘computationally straightforward’.
In practice the constraints are written in a first-order language. They are then atomic
formulas or simple combinations of atomic formulas. One identifies then a constraint
with its syntactic description. In what follows we study CSPs with finite domains.
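For concreteness, the definitions above can be mirrored directly in code. The following Python sketch is ours (the paper gives no code): it represents a finite-domain CSP with each constraint stored extensionally as a set of allowed tuples over a scheme of variables, and it checks the 'failed' and 'solved' conditions literally. The brute-force solved() test enumerates all assignments, which also illustrates why the weaker notion of a 'manifestly solved' CSP is needed in practice.

from itertools import product

class CSP:
    """A finite-domain CSP: variable domains plus extensional constraints."""

    def __init__(self, domains, constraints):
        # domains: dict mapping a variable name to a set of values.
        # constraints: list of (scheme, relation) pairs, where scheme is a
        # tuple of variables and relation is a set of allowed value tuples.
        self.domains = domains
        self.constraints = constraints

    def failed(self):
        # Failed: some variable domain or some constraint is empty.
        return (any(not d for d in self.domains.values())
                or any(not rel for _, rel in self.constraints))

    def solved(self):
        # Solved: every assignment of values from the domains is a solution.
        variables = list(self.domains)
        for values in product(*(self.domains[v] for v in variables)):
            assignment = dict(zip(variables, values))
            if not all(tuple(assignment[v] for v in scheme) in rel
                       for scheme, rel in self.constraints):
                return False
        return True

# Example: x < y with x, y ranging over {1, 2}: consistent, but not solved.
csp = CSP({'x': {1, 2}, 'y': {1, 2}},
          [(('x', 'y'), {(1, 2)})])
print(csp.failed(), csp.solved())   # False False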
3 High Level
At the highest level of abstraction constraint programming can be seen as a task of
formulating specifications as a CSP and of solving it. The most common approach to
solving a CSP is based on a top-down search combined with constraint propagation.
The top-down search is determined by a splitting strategy that controls the splitting
of a given CSP into two or more CSPs, the ‘union’ of which (defined in the natural
sense) is equivalent to (i.e., has the same solutions as) the initial CSP. In the most common form of splitting a variable is selected and its domain is partitioned into two or
more parts. The splitting strategy then determines which variable is to be selected and
how its domain is to be split.
In turn, constraint propagation transforms a given CSP into one that is equivalent
but simpler, i.e., easier to solve. Each form of constraint propagation determines a notion
of local consistency that in a loose sense approximates the notion of consistency and
is computationally efficient to achieve. This process leads to a search tree in which
constraint propagation is alternated with splitting, see Figure 1.
So the nodes in the tree are CSPs with the root (level 0) being the original CSP.
At the even levels the constraint propagation is applied to the current CSP. This yields
exactly one direct descendant. At the odd levels splitting is applied to the current CSP.
This yields more than one descendant. The leaves of the tree are CSPs that are either
failed or manifestly solved. So from the leaves of the trees it is straightforward to collect
all the solutions to the original CSP.
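The control flow just described can be summarized by a short recursive procedure. The Python sketch below is only an illustration of the alternation between propagation and splitting; the functions propagate, split, is_failed and is_manifestly_solved are assumed as parameters and are not defined in the paper.

def solve(csp, propagate, split, is_failed, is_manifestly_solved):
    """Collect the leaves of the search tree that are manifestly solved CSPs.

    propagate(csp)            -> an equivalent, simpler CSP (even levels)
    split(csp)                -> a list of CSPs whose 'union' is equivalent
                                 to csp (odd levels)
    is_failed(csp)            -> True for failed leaves
    is_manifestly_solved(csp) -> True for manifestly solved leaves
    """
    csp = propagate(csp)                 # even level: exactly one descendant
    if is_failed(csp):
        return []                        # failed leaf: contributes no solutions
    if is_manifestly_solved(csp):
        return [csp]                     # solved leaf: report it
    solutions = []
    for branch in split(csp):            # odd level: more than one descendant
        solutions.extend(
            solve(branch, propagate, split, is_failed, is_manifestly_solved))
    return solutions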
The process of tree generation can be expressed by means of proof rules that are
used to express transformations of CSPs. In general we have two types of rules. The
deterministic rules transform a given CSP into another one. We write such a rule as:
φ
ψ

where φ and ψ are CSPs.

[Fig. 1. A search tree for a CSP: constraint propagation steps alternate with splitting steps.]
In turn, the splitting rules transform a given CSP into a sequence of CSPs. We write
such a rule as:
φ
ψ1 | . . . | ψn
where φ and ψ1 , . . ., ψn are CSPs.
It is now easy to define the notion of an application of a proof rule to a CSP. In
the case of a deterministic rule we just replace (after an appropriate renaming) the part
that matches the premise of the rule by the conclusion. In the case of a splitting rule we
replace (again after an appropriate renaming) the part that matches the premise of the
rule by one of the CSPs ψi from the rule conclusion.
We now say that a deterministic rule
φ
ψ
is equivalence preserving if φ and ψ are equivalent and that a splitting rule
φ
ψ1 | . . . | ψn
is equivalence preserving if the union of ψi ’s is equivalent to φ.
In what follows all considered rules will be equivalence preserving. In general, the
deterministic rules are more ‘fine grained’ than the constraint propagation step that is
modeled as a single ‘step’ in the search tree. In fact, our intention is to model constraint
propagation as a repeated application of deterministic rules. In the next section we shall
discuss how to schedule these rule applications efficiently.
The search for solutions can now be described by means of derivations, just like
in logic programming. In logic programming we have in general two types of finite
derivations: successful and failed. In the case of proof rules as defined above a new type
of derivations naturally arises.
Definition 1. Assume a finite set of proof rules.
– By a derivation we mean a sequence of CSPs such that each of them is obtained
from the previous one by an application of a proof rule.
– A finite derivation is called
• successful if its last element is a first manifestly solved CSP in this derivation,
• failed if its last element is a first failed CSP in this derivation,
• stabilizing if its last element is a first CSP in this derivation that is closed under
the applications of the considered proof rules.
✷
The search for a solution to a CSP can now be described as a search for a successful
derivation, much like in the case of logic programming. A new element is the presence
of stabilizing derivations.
One of the main problems constraint programming needs to deal with is how to limit
the size of a search tree. At the high level of abstraction this matter can be addressed by
focusing on the derivations in which the applications of splitting rules are postponed as
long as possible. This brings us to a consideration of stabilizing derivations that involve
only deterministic rules. In practice such derivations are used to model the process of
constraint propagation. They do not lead to a manifestly solved CSP but only to a CSP
that is closed under the considered deterministic rules. So solving the resulting CSP
requires first an application of a splitting rule. (The resulting CSP may in fact be solved, but determining this may be computationally expensive.)
This discussion shows that at a high level of abstraction constraint programming
can be viewed as a realization of the computation as deduction paradigm according
to which the computation process is identified with a constructive proof of a formula
from a set of axioms. In the case of constraint programming such a constructive proof
is a successful derivation. Each such derivation yields at least one solution to the initial
CSP.
Because so far no specific rules are considered not much more can be said at this
level. However, this high level of abstraction allows us to set the stage for more specific
considerations that belong to the middle level.
4 Middle Level
The middle level is concerned with the form of derivations that involve only deterministic rules. It allows us to explain the constraint propagation algorithms which are
used to enforce constraint propagation. In our framework these algorithms are simply
efficient schedulers of appropriate deterministic rules. To clarify this point we now introduce examples of specific classes of deterministic rules. In each case we discuss a
scheduler that can be used to schedule the considered rules.
Example 1: Domain Reduction Rules
These are rules of the following form:
⟨C ; x1 ∈ D1, ..., xn ∈ Dn⟩
⟨C′ ; x1 ∈ D1′, ..., xn ∈ Dn′⟩
where Di′ ⊆ Di for all i ∈ [1..n] and C ′ is the result of restricting each constraint in C
to D1′ , . . ., Dn′ .
We say that such a rule is monotonic if, when viewed as a function f from the
original domains D1 , . . ., Dn to the reduced domains D1′ , . . ., Dn′ , i.e.,
f (D1 , . . ., Dn ) := (D1′ , . . ., Dn′ ),
it is monotonic:
Di ⊆ Ei for all i ∈ [1..n] implies f (D1 , . . ., Dn ) ⊆ f (E1 , . . ., En ).
That is, smaller variable domains yield smaller reduced domains.
Now, the following useful result shows that a large number of domain reduction
rules are monotonic.
Theorem 1. ([10]) Suppose each Di′ is obtained from Di using a combination of
– union and intersection operations,
– transposition and composition operations applied to binary relations,
– join operation ✶,
– projection functions, and
– removal of an element.
Then the domain reduction rule is monotonic.
This repertoire of operations is sufficient to describe typical domain reduction rules
considered in various constraint solvers used in constraint programming systems, including solvers for Boolean constraints, linear constraints over integers, and arithmetic
constraints over reals, see, e.g., [10].
Monotonic domain reduction rules are useful for two reasons. First, we have the
following observation.
Note 1. Assume a finite set of monotonic domain reduction rules and an initial CSP P.
Every stabilizing derivation starting in P yields the same outcome.
Second, monotonic domain reduction rules can be scheduled more efficiently than
by means of a naive round-robin strategy. This is achieved by using a generic iteration
algorithm which in its most general form computes the least common fixpoint of a set
of functions F in an appropriate partial ordering. This has been observed in varying
forms of generality in the works of [12], [28], [17] and [7]. This algorithm has the
following form. We assume here a finite set of functions F , each operating on a given
partial ordering with the least element ⊥.
GENERIC ITERATION algorithm

d := ⊥;
G := F;
WHILE G ≠ ∅ DO
   choose g ∈ G;
   IF d ≠ g(d) THEN
      G := G ∪ update(G, g, d);
      d := g(d)
   ELSE
      G := G − {g}
   END
END
where for all G, g, d the following assumption is satisfied:

A. {f ∈ F − G | f(d) = d ∧ f(g(d)) ≠ g(d)} ⊆ update(G, g, d).

The intuition behind assumption A is that update(G, g, d) contains at least all the functions from F − G for which d is a fixpoint but g(d) is not. So at each loop iteration, if d ≠ g(d), such functions are added to the set G. Otherwise the function g is removed from G.
An obvious way to satisfy assumption A is by using the following update function:
update(G, g, d) := {f ∈ F − G | f(d) = d ∧ f(g(d)) ≠ g(d)}.
The problem with this choice of update is that it is expensive to compute because for
each function f in F − G we would have to compute the values f (g(d)) and f (d). So in
practice, we are interested in some approximations from above of this update function
that are easy to compute. We shall return to this matter in a moment.
First let us clarify the status of the above algorithm. Recall that a function f on a
partial ordering (D, ⊑ ) is called monotonic if x ⊑ y implies f (x) ⊑ f (y) for all x, y
and inflationary if x ⊑ f (x) for all x.
Theorem 2. ([7]) Suppose that (D, ⊑ ) is a finite partial ordering with the least element ⊥. Let F be a finite set of monotonic and inflationary functions on D. Then every
execution of the GENERIC ITERATION algorithm terminates and computes in d the least
common fixpoint of the functions from F .
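For illustration, here is a small Python rendering of the GENERIC ITERATION algorithm (ours, not part of the original text). It uses the 'expensive' update function discussed above, so assumption A holds by construction; any cheaper over-approximation could be substituted.

def generic_iteration(functions, bottom):
    """Least common fixpoint of a finite set of monotonic and inflationary
    functions on a finite partial ordering with least element `bottom`
    (the setting of Theorem 2)."""
    d = bottom
    G = set(functions)
    while G:
        g = G.pop()                      # choose g in G (and take it out of G)
        new_d = g(d)
        if new_d != d:
            # The 'expensive' update satisfying assumption A: re-activate every
            # function for which d was a fixpoint but g(d) is not.
            G |= {f for f in functions
                  if f not in G and f(d) == d and f(new_d) != new_d}
            G.add(g)                     # g stays scheduled, as in the original loop
            d = new_d
    return d

# Tiny usage example on subsets of {1, 2} ordered by inclusion:
f1 = lambda s: s | {1}                            # inflationary and monotonic
f2 = lambda s: (s | {2}) if 1 in s else s         # inflationary and monotonic
print(generic_iteration([f1, f2], frozenset()))   # frozenset({1, 2})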
In the applications we study the iterations carried out on a partial ordering that is a
Cartesian product of the component partial orderings. More precisely, given n partial
orderings (Di , ⊑ i ), each with the least element ⊥i , we assume that each considered
function g is defined on a ‘partial’ Cartesian product Di1 × . . . × Dil . Here i1 , . . ., il
is a subsequence of 1, . . ., n that we call the scheme of g. Given d ∈ D1 × · · · × Dn ,
where d := d1 , . . ., dn , and a scheme s := i1 , . . ., il we denote by d[s] the sequence
di1 , . . ., dil .
The corresponding instance of the above GENERIC ITERATION algorithm then takes the following form.

GENERIC ITERATION FOR COMPOUND DOMAINS algorithm

d := (⊥1, ..., ⊥n);
d′ := d;
G := F;
WHILE G ≠ ∅ DO
   choose g ∈ G;
   d′[s] := g(d[s]), where s is the scheme of g;
   IF d′[s] ≠ d[s] THEN
      G := G ∪ {f ∈ F | scheme of f includes i such that d[i] ≠ d′[i]};
      d[s] := d′[s]
   ELSE
      G := G − {g}
   END
END
So this algorithm uses an update function that is straightforward to compute. It simply checks which components of d are modified and selects the functions that depend
on these components. It is a standard scheduling algorithm used in most constraint programming systems.
Example 2: Arc Consistency
Arc consistency, introduced in [24], is the most popular notion of local consistency
considered in constraint programming. Let us recall the definition.
Definition 2.
– Consider a binary constraint C on the variables x, y with the domains Dx and Dy ,
that is C ⊆ Dx × Dy . We call C arc consistent if
• ∀a ∈ Dx ∃b ∈ Dy (a, b) ∈ C,
• ∀b ∈ Dy ∃a ∈ Dx (a, b) ∈ C.
– We call a CSP arc consistent if all its binary constraints are arc consistent.
So a binary constraint is arc consistent if every value in each domain has a support
in the other domain, where we call b a support for a if the pair (a, b) (or, depending on
the ordering of the variables, (b, a)) belongs to the constraint.
In the literature several arc consistency algorithms have been proposed. Their purpose is to transform a given CSP into one that is arc consistent without losing any solution. We shall now illustrate how the most popular arc consistency algorithm, AC-3,
due to [24], can be explained as a specific scheduling of the appropriate domain reduction rules. First, let us define the notion of arc consistency in terms of such rules.
Assume a binary constraint C on the variables x, y. We introduce the following two
rules.
ARC CONSISTENCY 1

⟨C ; x ∈ Dx, y ∈ Dy⟩
⟨C ; x ∈ Dx′, y ∈ Dy⟩

where Dx′ := {a ∈ Dx | ∃b ∈ Dy (a, b) ∈ C}.

ARC CONSISTENCY 2

⟨C ; x ∈ Dx, y ∈ Dy⟩
⟨C ; x ∈ Dx, y ∈ Dy′⟩

where Dy′ := {b ∈ Dy | ∃a ∈ Dx (a, b) ∈ C}.
So in each rule a selected variable domain is reduced by retaining only the supported
values. The following observation characterizes the notion of arc consistency in terms
of the above two rules.
Note 2 (Arc Consistency). A CSP is arc consistent iff it is closed under the applications
of the ARC CONSISTENCY rules 1 and 2.
So to transform a given CSP into an equivalent one that is arc consistent it suffices to repeatedly apply the above two rules for all present binary constraints. Since
these rules are monotonic, we can schedule them using the GENERIC ITERATION FOR COMPOUND DOMAINS algorithm. However, in the case of the above rules an improved
generic iteration algorithm can be employed that takes into account commutativity and
idempotence of the considered functions, see [8].
Recall that given two functions f and g on a partial ordering we say that f is idempotent if f (f (x)) = f (x) for all x and say that f and g commute if f (g(x)) = g(f (x))
for all x. The relevant observation concerning these two properties is the following.
Note 3. Suppose that all functions in F are idempotent and that for each function g we
have a set of functions Comm(g) from F such that each element of Comm(g) commutes with g. If update(G, g, d) satisfies the assumption A, then so does the function
update(G, g, d) − Comm(g).
In practice this means that in each iteration of the generic iteration algorithm fewer
functions need to be added to the set G. This yields a more efficient algorithm.
In the case of arc consistency for each binary constraint C the functions corresponding to the ARC CONSISTENCY rules 1 and 2 referring to C commute. Also, given two
binary constraints that share the first (resp. second) variable, the corresponding ARC
CONSISTENCY rules 1 (resp. 2) for these two constraints commute, as well. Further,
all such functions are idempotent. So, thanks to the above Note, we can use an appropriately ‘tighter’ update function. The resulting algorithm is equivalent to the AC-3
algorithm.
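The following Python sketch (ours) makes this concrete for binary constraints given extensionally as sets of pairs: the two ARC CONSISTENCY rules become domain reduction functions, and a worklist in the spirit of the GENERIC ITERATION FOR COMPOUND DOMAINS algorithm schedules them. The rescheduling step over-approximates the update function, so it is coarser than the 'tighter' update based on commutativity and idempotence that yields AC-3.

def revise(domains, relation, xy, which):
    """Apply ARC CONSISTENCY rule 1 (which = 0) or 2 (which = 1) once.

    relation: the set of allowed (a, b) pairs for the variables xy = (x, y).
    Returns True if the revised domain shrank."""
    x, y = xy
    if which == 0:
        new = {a for a in domains[x] if any((a, b) in relation for b in domains[y])}
        changed, domains[x] = new != domains[x], new
    else:
        new = {b for b in domains[y] if any((a, b) in relation for a in domains[x])}
        changed, domains[y] = new != domains[y], new
    return changed

def arc_consistency(domains, constraints):
    """constraints: dict mapping (x, y) pairs of variables to relations.
    Applies the two rules until the CSP is closed under them."""
    work = [(xy, w) for xy in constraints for w in (0, 1)]
    while work:
        xy, w = work.pop()
        if revise(domains, constraints[xy], xy, w):
            touched = xy[w]              # the variable whose domain just changed
            # Reschedule every rule whose constraint mentions the touched variable.
            for other in constraints:
                for ow in (0, 1):
                    if touched in other and (other, ow) not in work:
                        work.append((other, ow))
    return domains

# Example: x < y with both domains {1, ..., 5}, given extensionally.
doms = {'x': set(range(1, 6)), 'y': set(range(1, 6))}
rel = {(a, b) for a in range(1, 6) for b in range(1, 6) if a < b}
print(arc_consistency(doms, {('x', 'y'): rel}))   # x -> {1,2,3,4}, y -> {2,3,4,5}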
Example 3: Constructive Disjunction
One of the main reasons for combinatorial explosion in the search for solutions to a CSP is
disjunctive constraints. A typical example is the following constraint used in scheduling problems:
Start[task1 ] + Duration[task1 ] ≤ Start[task2 ] ∨
Start[task2 ] + Duration[task2 ] ≤ Start[task1 ]
stating that either task1 is scheduled before task2 or vice versa. To deal with a
disjunctive constraint we can apply the following splitting rule (we omit here the information about the variable domains):
C1 ∨ C2
C1 | C2
which amounts to a case analysis.
However, as already explained in Section 3 it is in general preferable to postpone an
application of a splitting rule and try to reduce the domains first. Constructive disjunction, see [29], is a technique that occasionally allows us to do this. It can be expressed
in our rule-based framework as a domain reduction rule that uses some auxiliary derivations as side conditions:
CONSTRUCTIVE DISJUNCTION
⟨C1 ∨ C2 ; x1 ∈ D1, ..., xn ∈ Dn⟩
⟨C1′ ∨ C2′ ; x1 ∈ D1′ ∪ D1′′, ..., xn ∈ Dn′ ∪ Dn′′⟩

where

der1, der2

with

der1 := ⟨C1 ; x1 ∈ D1, ..., xn ∈ Dn⟩ ⊢ ⟨C1′ ; x1 ∈ D1′, ..., xn ∈ Dn′⟩,
der2 := ⟨C2 ; x1 ∈ D1, ..., xn ∈ Dn⟩ ⊢ ⟨C2′ ; x1 ∈ D1′′, ..., xn ∈ Dn′′⟩,
and where C1′ is the result of restricting the constraint in C1 to D1′ , . . ., Dn′ and similarly
for C2′ .
In words: assuming we reduced the domains of each disjunct separately, we can
reduce the domains of the disjunctive constraint to the respective unions of the reduced
domains. As an example consider the constraint
⟨|x − y| = 1 ; x ∈ [4..10], y ∈ [2..7]⟩.

We can view |x − y| = 1 as the disjunctive constraint (x − y = 1) ∨ (y − x = 1). In the presence of the ARC CONSISTENCY rules 1 and 2 we then have

⟨x − y = 1 ; x ∈ [4..10], y ∈ [2..7]⟩ ⊢ ⟨x − y = 1 ; x ∈ [4..8], y ∈ [3..7]⟩

and

⟨y − x = 1 ; x ∈ [4..10], y ∈ [2..7]⟩ ⊢ ⟨y − x = 1 ; x ∈ [4..6], y ∈ [5..7]⟩.

So using the CONSTRUCTIVE DISJUNCTION rule we obtain

⟨|x − y| = 1 ; x ∈ [4..8], y ∈ [3..7]⟩.
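This computation is easy to reproduce in code. The sketch below (ours) propagates each disjunct separately with a small arc consistency helper for a single binary constraint and then takes the variable-wise union of the reduced domains, mirroring the CONSTRUCTIVE DISJUNCTION rule on the example above.

def arc_consistent(domains, relation):
    """Make one binary constraint over ('x', 'y') arc consistent.  For a single
    constraint, revising x against y and then y against the reduced x suffices."""
    dx = {a for a in domains['x'] if any((a, b) in relation for b in domains['y'])}
    dy = {b for b in domains['y'] if any((a, b) in relation for a in dx)}
    return {'x': dx, 'y': dy}

def constructive_disjunction(domains, relations):
    """Propagate each disjunct separately, then return the variable-wise union
    of the reduced domains."""
    reduced = [arc_consistent(domains, rel) for rel in relations]
    return {v: set().union(*(r[v] for r in reduced)) for v in domains}

# |x - y| = 1 viewed as (x - y = 1) or (y - x = 1), with x in 4..10 and y in 2..7.
doms = {'x': set(range(4, 11)), 'y': set(range(2, 8))}
c1 = {(a, a - 1) for a in doms['x'] if a - 1 in doms['y']}   # x - y = 1
c2 = {(a, a + 1) for a in doms['x'] if a + 1 in doms['y']}   # y - x = 1
print(constructive_disjunction(doms, [c1, c2]))
# x -> {4, 5, 6, 7, 8}, y -> {3, 4, 5, 6, 7}, as obtained above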
If each disjunct of a disjunctive constraint is a conjunction of constraints, the auxiliary derivations in the side conditions can be longer than just one step. Once the rules
used in these derivations are of an appropriate format, their applications can be scheduled using one of the discussed generic iteration algorithms. Then the single application
of the CONSTRUCTIVE DISJUNCTION rule consists in fact of two applications of the
appropriate iteration algorithm.
It is straightforward to check that if the auxiliary derivations involve only monotonic domain reduction rules, then the CONSTRUCTIVE DISJUNCTION rule is itself
monotonic. So the GENERIC ITERATION FOR COMPOUND DOMAINS algorithm can
be applied both within the side conditions of this rule and for scheduling this rule together with other monotonic domain reduction rules that are used to deal with other,
non-disjunctive, constraints.
In this framework it is straightforward to formulate some strengthenings of the constructive disjunction that lead to other modifications of the constraints C1 and C2 than C1′ and C2′.
Example 4: Propagation Rules
These are rules that allow us to add new constraints. Assuming a given set A of ‘allowed’ constraints we write such rules as
B
C
where B, C ⊆ A.
This rule states that in the presence of all constraints in B the constraints in C can be added, and is a shorthand for a deterministic rule of the following form:

⟨B ; x1 ∈ D1, ..., xn ∈ Dn⟩
⟨B, C ; x1 ∈ D1, ..., xn ∈ Dn⟩
An example of such a rule is the transitivity rule:
x < y, y < z
x < z
that refers to a linear ordering < on the underlying domain (for example natural numbers).
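As a small illustration (ours, not from the paper), the transitivity rule can be applied exhaustively to an explicitly given set of x < y facts; the loop below simply adds conclusions until the constraint set is closed under the rule.

def transitive_closure(less_than):
    """Close a set of (x, y) pairs, read as x < y, under the transitivity rule:
    from x < y and y < z add x < z, until no new constraint can be added."""
    closed = set(less_than)
    changed = True
    while changed:
        changed = False
        for (x, y1) in list(closed):
            for (y2, z) in list(closed):
                if y1 == y2 and (x, z) not in closed:
                    closed.add((x, z))
                    changed = True
    return closed

print(transitive_closure({('a', 'b'), ('b', 'c'), ('c', 'd')}))
# adds ('a', 'c'), ('b', 'd') and ('a', 'd')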
In what follows we focus on another example of propagation rules, membership
rules. They have the following form:
y1 ∈ S1, ..., yk ∈ Sk
z1 ≠ a1, ..., zm ≠ am

where yi ∈ Si and zj ≠ aj are unary constraints with the obvious meaning.
Below we write such a rule as:

y1 ∈ S1, ..., yk ∈ Sk → z1 ≠ a1, ..., zm ≠ am.

The intuitive meaning of this rule is: if for all i ∈ [1..k] the domain of each yi is a subset of Si, then for all j ∈ [1..m] remove the element aj from the domain of zj.
The membership rules allow us to reason about constraints given explicitly in a form
of a table. As an example consider the three valued logic of Kleene. Let us focus on the
conjunction constraint and3(x, y, z) defined by the following table:
and3 | t f u
-----+------
  t  | t f u
  f  | f f f
  u  | u f u

That is, and3 consists of 9 triples. Then the membership rule y ∈ {u, f} → z ≠ t, or more precisely the rule

⟨and3(x, y, z), y ∈ {u, f} ; x ∈ Dx, y ∈ Dy, z ∈ Dz⟩
⟨and3(x, y, z), y ∈ {u, f}, z ≠ t ; x ∈ Dx, y ∈ Dy, z ∈ Dz⟩
is equivalence preserving. This rule states that if y is either u or f , then t can be removed
from the domain of z.
We call a membership rule minimal if it is equivalence preserving and its conclusions cannot be established by either removing a variable from its premise or by expanding a variable range. For example, the above rule y ∈ {u, f} → z ≠ t is minimal, while neither x ∈ {u}, y ∈ {u, f} → z ≠ t nor y ∈ {u} → z ≠ t is. In the case of the and3 constraint there are 18 minimal membership rules.
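To make the format concrete, the sketch below (ours) represents a membership rule as a pair of a premise and a conclusion and applies a set of such rules to variable domains with a naive fixpoint loop; the rule shown is the minimal rule y ∈ {u, f} → z ≠ t for and3. A real scheduler would additionally exploit the stability property discussed below, permanently discarding a rule once it has been applied.

def apply_membership_rules(domains, rules):
    """Apply membership rules of the form
    [(y1, S1), ...] -> [(z1, a1), ...]
    until no domain changes.  Premise: the current domain of each yi is a
    subset of Si.  Conclusion: remove each value aj from the domain of zj."""
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if all(domains[y] <= S for y, S in premise):
                for z, a in conclusion:
                    if a in domains[z]:
                        domains[z].discard(a)
                        changed = True
    return domains

# Minimal membership rule for and3:  y in {u, f}  ->  z != t
rules = [([('y', {'u', 'f'})], [('z', 't')])]
domains = {'x': {'t', 'f', 'u'}, 'y': {'f', 'u'}, 'z': {'t', 'f', 'u'}}
print(apply_membership_rules(domains, rules))   # 't' is removed from z's domain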
To clarify the nature of the membership rules let us mention that, as shown in [9],
in the case of two-valued logic the corresponding set of minimal membership rules
entails a form of constraint propagation that is equivalent to the unit propagation, a
well-known form of resolution for propositional logic. So the membership rules can be
seen as a generalization of the unit propagation to the explicitly given constraints, in
particular to the case of many valued logics.
Membership rules can alternatively be viewed as a special class of monotonic domain reduction rules in which the domain of each variable zj is modified by removing aj from it. So we can schedule these rules using the GENERIC ITERATION FOR COMPOUND DOMAINS algorithm.
However, the propagation rules, so in particular the membership rules, satisfy an
important property that allows us to schedule them using a more efficient, fine-tuned,
scheduler. We call this property stability. It states that in each derivation the rule needs
to be applied at most once: if it is applied, then it does not need to be applied again.
So during the computation the applied rules that are stable can be permanently removed
from the initial rule set. The resulting scheduler for the membership rules and its further
optimizations are discussed in [14].
5 Low Level
The low level allows us to focus on matters that go beyond the issue of rule scheduling.
At this level we can address matters concerned with further optimization of the constraint propagation algorithms. Various improvements of the AC-3 algorithm that are
concerned with specific choices of the data structures used belong here but cannot be
explained by focusing the discussion on the corresponding ARC CONSISTENCY 1 and
2 rules.
On the other hand some other optimization issues can be explained in proof-theoretic
terms. In what follows we focus on the membership rules for which we worked out the
details. These rules allow us to implement constraint propagation for explicitly given
constraints. We explained above that they can be scheduled using a fine-tuned scheduler. However, even when an explicitly given constraint is small, the number of minimal
membership rules can be large and it is not easy to find them all.
So a need arises to generate such rules automatically. This is what we did in [11].
We also proved there that the resulting form of constraint propagation is equivalent to
hyper-arc consistency, a natural generalization of arc consistency to n-ary constraints
introduced in [25].
A further improvement can be achieved by removing some rules before scheduling
them. This idea was pursued in [14]. Given a set of monotonic domain reduction rules
R we say that a rule r is redundant if for each initial CSP P the unique outcome of
a stabilizing derivation (guaranteed by Note 1) is the same with r removed from R. In
general, the iterated removal of redundant rules does not yield a unique outcome but in
the case of the membership rules some useful heuristics can be used to appropriately
schedule the candidate rules for removal.
We can summarize the improvements concerned with the membership rules as follows:
– For explicitly given constraints all minimal membership rules can be automatically
generated.
– Subsequently redundant rules can be removed.
– A fine-tuned scheduler can be used to schedule the remaining rules.
– This scheduler allows us to remove permanently some rules which is useful during
the top-down search.
To illustrate these matters consider the 11-valued and11 constraint used in the
automatic test pattern generation (ATPG) systems. There are in total 4656 minimal
membership rules. After removing the redundant rules only 393 remain. This leads to
substantial gains in computing. To give an idea of the scale of the improvement here
are the computation times in seconds for three schedulers used to find all solutions to a
CSP consisting of the and11 constraint and solved using a random variable selection,
domain ordering and domain splitting:
                        Fine-tuned   Generic    CHR
  all rules                   1874      3321   7615
  non-redundant rules          157       316    543
CHR stands for the standard CHR scheduler normally used to schedule such rules. (CHR
is a high-level language extension of logic programming used to write user-defined
constraints; for an overview see [18].) So using this approach a 50-fold improvement in
computation time was achieved. In general, we noted that the larger the constraint the
larger the gain in computing achieved by the above approach.
6 Conclusions
In this paper we assessed the crucial features of constraint programming (CP) by means
of a proof-theoretic perspective. To this end we identified three levels of abstraction.
At each level proof rules and derivations played a crucial role. At the highest level
they allowed us to clarify the relation between CP and the computation as deduction
paradigm. At the middle level we discussed efficient schedulers for specific classes of
rules. Finally, at the lowest level we explained how specific rules can be automatically
generated, optimized and scheduled in a customized way.
This presentation of CP suggests that it has close links with rule-based programming. And indeed, several realizations of constraint programming through some form of
rule-based programming exist. For example, constraint logic programs are sets of rules,
so constraint logic programming can be naturally seen as an instance of rule-based
programming. Further, the already mentioned CHR language is a rule-based language,
though it does not have the full capabilities of constraint programming. In practice, CHR
is available as a library of a constraint programming system, for example ECLi PSe (see
[1]) or SICStus Prolog (see [3]). In turn, ELAN, see [2], is a rule-based programming
language that can be naturally used to explain various aspects of constraint programming, see for example [22] and [15].
In our presentation we abstracted from specific constraint programming languages
and their realizations and analyzed instead the principles of the corresponding programming style. This allowed us to isolate the essential features of constraint programming
by focusing on proof rules, derivations and schedulers. This account of constraint programming draws on our work on the subject carried out in the past seven years. In
particular, the high level view was introduced in [6]. In turn, the middle level summa-
rizes our work reported in [7,8]. Both levels are discussed in more detail in [10]. Finally,
the account of propagation rules and of low level draws on [11,14].
This work was pursued by others. Here are some representative references. Concerning the middle level, [26] showed that the framework of Section 4 allows us to
parallelize constraint propagation algorithms in a simple and uniform way, while [13]
showed how to use it to derive constraint propagation algorithms for soft constraints. In
turn, [19] explained other arc consistency algorithms by slightly extending this framework.
Concerning the lowest level, [27] considered rules in which parameters (i.e., unspecified constants) are allowed. This led to a decrease in the number of generated rules. In
turn, [4] presented an algorithm that generates more general and more expressive rules,
for example with variable equalities in the conclusion. Finally, [5] considered the problem of generating the rules for constraints defined intensionally over infinite domains.
Acknowledgments
The work discussed here draws partly on a joint research carried out with Sebastian
Brand and with Eric Monfroy. In particular, they realized the implementations discussed
in the section on the low level. We also acknowledge useful comments of the referees.
References
1. The ECLiPSe Constraint Logic Programming System. http://www-icparc.doc.ic.ac.uk/eclipse/.
2. ELAN, Version 3.3. http://www.iist.unu.edu/˜alumni/software/other/inria/www/elan/elan-prese
3. SICStus Prolog. http://www.sics.se/isl/sicstuswww/site/index.html.
4. S. Abdennadher and Ch. Rigotti. Automatic generation of rule-based constraint solvers over
finite domains. ACM Transactions on Computational Logic, 5(2):177–205, 2004.
5. S. Abdennadher and Ch. Rigotti. Automatic generation of CHR constraint solvers. Theory
and Practice of Logic Programming, 2005. To appear.
6. K. R. Apt. A proof theoretic view of constraint programming. Fundamenta Informaticae,
33(3):263–293, 1998. Available via http://arXiv.org/archive/cs/.
7. K. R. Apt. The essence of constraint propagation. Theoretical Computer Science, 221(1–
2):179–210, 1999. Available via http://arXiv.org/archive/cs/.
8. K. R. Apt. The role of commutativity in constraint propagation algorithms. ACM Transactions on Programming Languages and Systems, 22(6):1002–1036, 2000. Available via
http://arXiv.org/archive/cs/.
9. K. R. Apt. Some remarks on Boolean constraint propagation. In K. R. Apt, A. C.
Kakas, E. Monfroy, and F. Rossi, editors, New Trends in Constraints, volume 1865 of Lecture Notes in Artificial Intelligence, pages 91 – 107. Springer-Verlag, 2000. Available via
http://arXiv.org/archive/cs/.
10. K. R. Apt. Principles of Constraint Programming. Cambridge University Press, 2003.
11. K. R. Apt and E. Monfroy. Constraint programming viewed as rule-based programming. Theory and Practice of Logic Programming, 1(6):713–750, 2001. Available via
http://arXiv.org/archive/cs/.
12. F. Benhamou. Heterogeneous constraint solving. In M. Hanus and M. Rodriguez-Artalejo,
editors, Proceeding of the Fifth International Conference on Algebraic and Logic Programming (ALP 96), Lecture Notes in Computer Science 1139, pages 62–76, Berlin, 1996.
Springer-Verlag.
13. S. Bistarelli, R. Gennari, and F. Rossi. Constraint propagation for soft constraint satisfaction
problems: Generalization and termination conditions. In Rina Dechter, editor, Proceedings
of Constraint Programming 2000 (CP2000), Lecture Notes in Computer Science 1894, pages
83–97, Berlin, 2000. Springer-Verlag.
14. S. Brand and K. R. Apt. Schedulers and redundancy for a class of constraint propagation
rules. Theory and Practice of Logic Programming, 2005. To appear.
15. C. Castro. Building constraint satisfaction problem solvers using rewrite rules and strategies.
Fundamenta Informaticae, 33(3):263–293, 1998.
16. Alain Colmerauer. Opening the PROLOG-III universe. BYTE Magazine, 12(9), August
1987.
17. F. Fages, J. Fowler, and T. Sola. Experiments in reactive constraint logic programming.
Journal of Logic Programming, 37(1–3):185–212, 1998.
18. T. Frühwirth. Theory and practice of Constraint Handling Rules. Journal of Logic Programming, 37(1–3):95–138, October 1998. Special Issue on Constraint Logic Programming (P. J.
Stuckey and K. Marriot, Eds.).
19. R. Gennari. Arc consistency via subsumed functions. In John Lloyd, editor, Proceedings
of Computational Logic 2000 (CL2000), Lecture Notes in Artificial Intelligence 1861, pages
358–372, Berlin, 2000. Springer-Verlag.
20. ILOG. ILOG white papers, 2003. Available via http://www.ilog.com/products/optimization/papers.cfm.
21. J. Jaffar and J.-L. Lassez. Constraint logic programming. In POPL’87: Proceedings 14th
ACM Symposium on Principles of Programming Languages, pages 111–119. ACM, 1987.
22. C. Kirchner and C. Ringeissen. Rule-based constraint programming. Fundamenta Informaticae, 34(3):225–262, September 1998.
23. Koalog. http://www.koalog.com, 2005.
24. A. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8(1):99–118,
1977.
25. R. Mohr and G. Masini. Good old discrete relaxation. In Y. Kodratoff, editor, Proceedings
of the 8th European Conference on Artificial Intelligence (ECAI), pages 651–656. Pitman
Publishers, 1988.
26. E. Monfroy and J.-H. Réty. Chaotic iteration for distributed constraint propagation. In
J. Carroll, H. Haddad, D. Oppenheim, B. Bryant, and G. Lamont, editors, Proceedings of the
14th ACM Symposium on Applied Computing, ACM SAC’99, Scientific Computing Track,
pages 19–24, San Antonio, Texas, USA, March 1999. ACM Press.
27. C. Ringeissen and E. Monfroy. Generating propagation rules for finite domains: a mixed
approach. In K. R. Apt, A. C. Kakas, E. Monfroy, and F. Rossi, editors, New Trends in Constraints, volume 1865 of Lecture Notes in Artificial Intelligence, pages 150–172. Springer-Verlag, 2000.
28. V. Telerman and D. Ushakov. Data types in subdefinite models. In J. A. Campbell J. Calmet
and J. Pfalzgraf, editors, Artificial Intelligence and Symbolic Mathematical Computations,
Lecture Notes in Computer Science 1138, pages 305–319, Berlin, 1996. Springer-Verlag.
29. P. Van Hentenryck, V. Saraswat, and Y. Deville. Design, implementation, and evaluation
of the constraint language cc(fd). Journal of Logic Programming, 37(1–3):139–164, 1998.
Special Issue on Constraint Logic Programming (P. J. Stuckey and K. Marriot, Eds.).
30. D. L. Waltz. Generating semantic descriptions from drawings of scenes with shadows. In
P. H. Winston, editor, The Psychology of Computer Vision, pages 19–91. McGraw Hill, 1975.
Cooperative Multi-Agent Planning: A Survey
arXiv:1711.09057v1 [] 24 Nov 2017
ALEJANDRO TORREÑO, Universitat Politècnica de València
EVA ONAINDIA, Universitat Politècnica de València
ANTONÍN KOMENDA, Czech Technical University in Prague
MICHAL ŠTOLBA, Czech Technical University in Prague
Cooperative multi-agent planning (MAP) is a relatively recent research field that combines technologies,
algorithms and techniques developed by the Artificial Intelligence Planning and Multi-Agent Systems communities. While planning has been generally treated as a single-agent task, MAP generalizes this concept
by considering multiple intelligent agents that work cooperatively to develop a course of action that satisfies
the goals of the group.
This paper reviews the most relevant approaches to MAP, putting the focus on the solvers that took part
in the 2015 Competition of Distributed and Multi-Agent Planning, and classifies them according to their key
features and relative performance.
CCS Concepts: •Computing methodologies → Multi-agent planning; Cooperation and coordination; Planning for deterministic actions; Heuristic function construction; Multi-agent systems; •Security
and privacy → Privacy-preserving protocols;
Additional Key Words and Phrases: Distribution, planning and coordination strategies, multi-agent heuristic functions, privacy preservation
ACM Reference Format:
Alejandro Torreño, Eva Onaindia, Antonı́n Komenda, Michal Štolba, 2016. Distributed and multi-agent
planning: a survey. ACM Comput. Surv. 50, 6, Article 84 (November 2017), 34 pages.
DOI: https://doi.org/10.1145/3128584
1. INTRODUCTION
Automated Planning is the field devoted to studying the reasoning side of acting. From
the restricted conceptual model assumed in classical planning to the extended models
that address temporal planning, on-line planning or planning in partially-observable
and non-deterministic domains, the field of Automated Planning has experienced huge
advances [Ghallab et al. 2004].
Multi-Agent Planning (MAP) introduces a new perspective in the resolution of a
planning task with the adoption of a distributed problem-solving scheme instead of
the classical single-agent planning paradigm. Distributed planning is required ”when
planning knowledge or responsibility is distributed among agents or when the execution capabilities that must be employed to successfully achieve objectives are inherently distributed” [desJardins et al. 1999].
This work is supported by the Spanish MINECO under project TIN2014-55637-C2-2-R, the Prometeo project
II/2013/019 funded by the Valencian Government, and the 4-year FPI-UPV research scholarship granted to
the first author by the Universitat Politècnica de València. Additionally, this research is partially supported
by the Czech Science Foundation under grant 15-20433Y.
Author’s addresses: A. Torreño ([email protected]) and E. Onaindia ([email protected]), Universitat Politècnica de València, Camino de Vera, s/n, Valencia, 46022, Spain; A. Komenda ([email protected]) and M. Štolba ([email protected]), Czech Technical University in
Prague, Zikova 1903/4, 166 36, Prague, Czech Republic.
The authors of [desJardins et al. 1999] analyze distributed planning from a twofold
perspective; one approach, named Cooperative Distributed Planning, regards a MAP
task as the process of formulating or executing a plan among a number of participants;
the second approach, named Negotiated Distributed Planning, puts the focus on coordinating and scheduling the actions of multiple agents in a shared environment. The
first approach has evolved to what is nowadays commonly known as cooperative and
distributed MAP, with a focus on extending planning into a distributed environment
and allocating the planning task among multiple agents. The second approach is primarily concerned with controlling and coordinating the actions of multiple agents in a
shared environment so as to ensure that their local objectives are met. We will refer
to this second approach, which stresses the coordination and execution of large-scale
multi-agent planning problems, as decentralized planning for multiple agents. Moreover, while the first planning-oriented view of MAP relies on deterministic approaches,
the study of decentralized MAP has yielded an intensive research work on coordination
of activities in contexts under uncertainty and/or partial observability with the development of formal methods inspired by the use of Markov Decision Processes [Seuken
and Zilberstein 2008].
This paper surveys deterministic cooperative and distributed MAP methods. Our
intention is to provide the reader with a broad picture of the current state of the art
in this field, which has recently gained much attention within the planning community thanks to venues such as the Distributed and Multi-Agent Planning workshop1
and the 2015 Competition of Distributed and Multi-Agent Planning2 (CoDMAP). Interestingly, although there was a significant amount of work on planning in multi-agent
systems in the 90’s, most of this research was basically aimed at developing coordination methods for agents that adopt planning representations and algorithms to carry
out their tasks. Back then, little attention was given to the problem of formulating
collective plans to solve a planning task. However, the recent CoDMAP initiative of
fostering MA-STRIPS –a classical planning model for multi-agent systems [Brafman
and Domshlak 2008]– has brought back a renewed interest.
Generally speaking, cooperative MAP is about the collective effort of multiple planning agents to develop solutions to problems that each could not have solved as well (if
at all) alone [Durfee 1999]. A cooperative MAP task is thus defined as the collective effort of multiple agents towards achieving a common goal, irrespective of how the goals,
the knowledge and the agents’ abilities are distributed in the application domain. In
[de Weerdt and Clement 2009], the authors identify several phases to address a MAP task
that can be interleaved depending on the characteristics of the problem, the agents and
the planning model. Hence, MAP solving may require allocation of goals, formulating
plans for solving goals, communicating planning choices and coordinating plans, and
execution of plans. The work in [de Weerdt and Clement 2009] is an overview of MAP
devoted to agents that plan and interact, presenting a rough outline of techniques
for cooperative MAP and decentralized planning. A more recent study examines how
to integrate planning algorithms and Belief-Desire-Intention (BDI) agent reasoning
[Meneguzzi and de Silva 2015]. This survey puts the focus on the integration of agent
behaviour aimed at carrying out predefined plans that accomplish a goal and agent
behaviour aimed at formulating a plan that achieves a goal.
This paper presents a thorough analysis of the advances in cooperative and distributed MAP that have lately emerged in the field of Automated Planning. Our aim
is to cover the wide and fragmented space of MAP approaches, identifying the main
characteristics that define tasks and solvers and establishing a taxonomy of the main
1 http://icaps16.icaps-conference.org/dmap.html
2 http://agents.fel.cvut.cz/codmap
approaches of the literature. We explore the great variety of MAP techniques on the
basis of different criteria, like agent distribution, communication or privacy models,
among others. The survey thus offers a deep analysis of techniques and domainindependent MAP solvers from a broad perspective, without adopting any particular
planning paradigm or programming language. Additionally, the contents of this paper
are geared towards reviewing the broad range of MAP solvers that participated in the
2015 CoDMAP competition.
This survey is structured in five sections. Section 2 offers a historical background
on distributed planning with a special emphasis on work that has appeared over the
last two decades. Section 3 discusses the main modelling features of a MAP task. Section 4 analyzes the main aspects of MAP solvers, including distribution, coordination,
heuristic search and privacy. Section 5 discusses and classifies the most relevant MAP
solvers in the literature. Finally, section 6 concludes and summarizes the ongoing and
future trends and research directions in MAP.
2. RELATED WORK: HISTORICAL BACKGROUND ON MAP
The large body of work on distributed MAP started jointly with an intensive research
activity on multi-agent systems (MAS) at the beginning of the 90’s. Motivated by
the distributed nature of the problems and reasoning of MAS, decentralized MAP
focused on aspects related to distributed control including activities like the decomposition and allocation of tasks to agents and utilization of resources [Durfee and
Lesser 1991; Wilkins and Myers 1998]; reducing communication costs and constraints
among agents [Decker and Lesser 1992; Wolverton and desJardins 1998]; or incorporating group decision making for distributed plan management in collaborative settings (group decisions for selecting a high-level task decomposition or an agent assignation to a task, group processes for plan evaluation and monitoring, etc.) [Grosz et al.
1999]. From this Distributed Artificial Intelligence (DAI) standpoint, MAP is fundamentally regarded as multi-agent coordination of actions in decentralized systems.
The inherently distributed nature of tasks and systems also fostered the appearance of techniques for cooperative formation of global plans. In DAI, this form of MAP
puts greater emphasis on reasoning, stressing the deliberative planning activities of
the agents as well as how and when to coordinate such planning activities to come up
with a global plan. Given the cooperative nature of the planning task, where all agents
are aimed at solving a common goal, this MAP approach features a more centralized
view of the planning process. Investigations in this line have yielded a great variety
of planning and coordination methods such as techniques to merge the local plans
of the agents [Ephrati and Rosenschein 1994; desJardins and Wolverton 1999; Cox
and Durfee 2004], heuristic techniques for agents to solve their individual sub-plans
[Ephrati and Rosenschein 1997], mechanisms to coordinate concurrent interacting actions [Boutilier and Brafman 2001] or distributed constraint optimization techniques
to coordinate conflicts among agents [Cox and Durfee 2009]. In this latter work, the
authors propose a general framework to coordinate the activities of several agents in
a common environment such as partners in a military coordination, subcontractors
working on a building, or airlines operating in an alliance.
Many of the aforementioned techniques and approaches were actually used by some
of the early MAP tools. Distributed NOAH [Corkill 1979] is one of the first Partial-Order
Planning (POP) systems that generates gradual refinements in the space of (abstract)
plans using a representation similar to the Hierarchical Task Networks (HTNs). The
scheme proposed in [Corkill 1979] relies on a distributed conflict-solving process across
various agents that are able to plan without complete or consistent planning data;
the limitation of Distributed NOAH is the amount of information that must be exchanged between planners and the lack of robustness to communication loss or erACM Computing Surveys, Vol. 50, No. 6, Article 84, Publication date: November 2017.
84:4
A. Torreño et al.
ror. In the domain-specific Partial Global Planning (PGP) method [Durfee and Lesser
1991], agents build their partial global view of the planning problem, and the search
algorithm finds local plans that can be then coordinated to meet the goals of all the
agents. Generalized PGP (GPGP) is a domain-independent extension of PGP [Decker
and Lesser 1992; Lesser et al. 2004] that separates the process of coordination from the
local scheduling of activities and task selection, which enables agents to communicate
more abstract and hierarchically organized information and has smaller coordination
overhead. DSIPE [desJardins and Wolverton 1999] is a distributed version of SIPE-2 [Wilkins 1988] closely related to the Distributed NOAH planner. DSIPE proposes an
efficient communication scheme among agents by creating partial views of sub-plans.
The plan merging process is centralized in one agent and uses the conflict-resolution
principle originally proposed in NOAH. The authors of [de Weerdt et al. 2003] propose
a plan merging technique that results in distributed plans in which agents become
dependent on each other, but are able to attain their goals more efficiently.
HTN planning has also been exploited for coordinating the plans of multiple
agents [Clement and Durfee 1999]. The attractiveness of approaches that integrate hierarchical planning in agent teams such as STEAM [Tambe 1997] is that they leverage
the abstraction levels of the plan hierarchies for coordinating agents, thus enhancing
the efficiency and quality of coordinating the agents’ plans. A-SHOP [Dix et al. 2003] is
a multi-agent version of the SHOP HTN planner [Nau et al. 2003] that implements
capabilities for interacting with external agents, performs mixed symbolic/numeric
computations, and makes queries to distributed, heterogeneous information sources
without requiring knowledge about how and where these resources are located. Moreover, authors in [Kabanza et al. 2004] propose a distributed version of SHOP that runs
on a network of clusters through the implementation of a simple distributed backtrack
search scheme.
As a whole, cooperative MAP approaches devoted to the construction of a plan
that solves a common goal are determined by two factors, the underlying planning
paradigm and the mechanism to coordinate the formation of the plan. The vast literature on multi-agent coordination methods is mostly concerned with the task of combining and adapting local planning representations into a global consistent solution. The
adaptability of these methods to cooperative MAP is highly dependent on the particular agent distribution and the plan synthesis strategy of the MAP solver. Analyzing
these aspects is precisely the aim of the following sections.
3. COOPERATIVE MULTI-AGENT PLANNING TASKS
We define a (cooperative) MAP task as a process in which several agents that are not
self-interested work together to synthesize a joint plan that solves a common goal. All
agents wish thereby the goal to be reached at the end of the task execution.
First, this section presents the formalization of the components of a cooperative MAP
task. Next, we discuss the main aspects that characterize a MAP task by means of
two illustrative examples. Finally, we present how to model a MAP task with MA-PDDL, a multi-agent version of the well-known Planning Domain Description Language (PDDL) [Ghallab et al. 1998].
3.1. Formalization of a MAP Task
Most of the cooperative MAP solvers that will be presented in this survey use a formalism that stems from MA-STRIPS [Brafman and Domshlak 2008] to a lesser or greater
extent. For this reason, we will use MA-STRIPS as the baseline model for the formalization of a MAP task. MA-STRIPS is a minimalistic multi-agent extension of the wellknown STRIPS planning model [Fikes and Nilsson 1971], which has become the most
widely-adopted formalism for describing cooperative MAP tasks.
In MA-STRIPS, a MAP task is represented through a finite number of situations or
states. States are described by a set of atoms or propositions. States change via the
execution of planning actions. An action in MA-STRIPS is defined as follows:
Definition 3.1. A planning action is a tuple α = ⟨pre(α), add(α), del(α)⟩, where
pre(α), add(α) and del(α) are sets of atoms that denote the preconditions, add effects,
and delete effects of the action, respectively.
An action α is executable in a state S if and only if all its preconditions hold in S;
that is, ∀p ∈ pre(α), p ∈ S. The execution of α in S generates a state S′ such that
S′ = (S \ del(α)) ∪ add(α).
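For concreteness, the following Python sketch (our own illustration; the names Action, is_executable and apply_action are not taken from any MAP system) encodes this state-transition semantics over states represented as sets of atoms.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset   # pre(alpha): preconditions
    add: frozenset   # add(alpha): add effects
    dele: frozenset  # del(alpha): delete effects ("del" is a reserved word in Python)

def is_executable(action, state):
    # alpha is executable in S iff all its preconditions hold in S
    return action.pre <= state

def apply_action(action, state):
    # S' = (S \ del(alpha)) U add(alpha)
    if not is_executable(action, state):
        raise ValueError(f"{action.name} is not executable in the given state")
    return (state - action.dele) | action.add

# Example: agent ta1 drives truck t1 from l1 to l2
drive = Action("drive ta1 ga1 t1 l1 l2",
               pre=frozenset({"pos t1 l1", "link l1 l2"}),
               add=frozenset({"pos t1 l2"}),
               dele=frozenset({"pos t1 l1"}))
state = frozenset({"pos t1 l1", "link l1 l2", "at p l2"})
print(apply_action(drive, state))   # contains "pos t1 l2" instead of "pos t1 l1"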
Definition 3.2. A MAP task is defined as a 5-tuple T = ⟨AG, P, {A1, . . . , An}, I, G⟩ with
the following components:
— AG is a finite set of n planning entities or agents.
— P is a finite set of atoms or propositions.
— Ai is the finite set of planning actions of the agent i ∈ AG. We will denote the set of
actions of T as A = ∪i∈AG Ai.
— I ⊆ P defines the initial state of T .
— G ⊆ P denotes the common goal of T .
A solution plan is an ordered set of actions whose application over the initial state
I leads to a state Sg that satisfies the task goals; i.e., G ⊆ Sg . In MA-STRIPS a solution plan is defined as a sequence of actions Πg = {∆, ≺} that attains the task goals,
where ∆ ⊆ A is a non-empty set of actions and ≺ is a total-order relationship among
the actions of ∆. However, other MAP models assume a more general definition of a
plan; for example, as a set of sequences of actions (one sequence per agent), as in
[Kvarnström 2011]; or as a partial-order plan [Torreño et al. 2012]. In the following,
we will consider a solution plan as a set of partially-ordered actions.
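As a simple illustration (our own sketch, not code from any surveyed solver), one linearization of such a plan can be validated against a MAP task by replaying its actions from the initial state and checking that the goal holds at the end:

def apply_action(state, action):
    # returns None if the action is not executable in the given state
    if not action["pre"] <= state:
        return None
    return (state - action["del"]) | action["add"]    # S' = (S \ del(a)) U add(a)

def is_solution(initial_state, goal, plan):
    state = frozenset(initial_state)
    for action in plan:                # actions may be contributed by different agents
        state = apply_action(state, action)
        if state is None:
            return False
    return set(goal) <= state          # G must hold in the reached state S_g

# Toy two-agent linearization: ta1 leaves p at sf, ta2 picks it up and delivers it
plan = [
    {"agent": "ta1", "pre": {"at p t1", "pos t1 sf"}, "add": {"at p sf"}, "del": {"at p t1"}},
    {"agent": "ta2", "pre": {"at p sf", "pos t2 sf"}, "add": {"at p t2"}, "del": {"at p sf"}},
    {"agent": "ta2", "pre": {"pos t2 sf", "link sf ft"}, "add": {"pos t2 ft"}, "del": {"pos t2 sf"}},
    {"agent": "ta2", "pre": {"at p t2", "pos t2 ft"}, "add": {"at p ft"}, "del": {"at p t2"}},
]
I = {"at p t1", "pos t1 sf", "pos t2 sf", "link sf ft"}
print(is_solution(I, {"at p ft"}, plan))   # True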
The action distribution model of MA-STRIPS, introduced in Definition 3.2, classifies
each atom p ∈ P as either internal (private) to an agent i ∈ AG, if it is only used
and affected by the actions in Ai, or public to all the agents in AG. Pint i denotes the
atoms that are internal to agent i, while Ppub refers to the public atoms of the task.
The distribution of the information of a MAP task T configures the local view that an
agent i has over T , T i , which is formally defined as follows:
Definition 3.3. The local view of a task T = ⟨AG, P, A, I, G⟩ by an agent i ∈ AG is
defined as T i = ⟨P i, Ai, I i, G⟩, which includes the following elements:
— P i = Pint i ∪ Ppub denotes the atoms accessible by agent i.
— Ai ⊆ A is the set of planning actions of i.
— I i ⊆ P i is the set of atoms of the initial state accessible by agent i.
— G denotes the common goal of the task T . An agent i knows all the atoms of G and
it will contribute to their achievement either directly (achieving a goal g ∈ G) or
indirectly (reaching effects that help other agents achieve g).
Note that Definition 3.3 does not specify G i , a set of individual goals of an agent i,
because in a cooperative MAP context the common goal G is shared among all agents
and it is never assigned to some particular agent (see next section for more details).
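The induced split between internal and public atoms, and the resulting local views, can be computed mechanically from the agents' action sets. The sketch below is our own illustration of Definitions 3.2 and 3.3 (the helper names are ours, not part of any solver):

def atoms_of(action):
    return action["pre"] | action["add"] | action["del"]

def classify_atoms(agent_actions):
    """agent_actions: dict mapping agent name -> list of its actions."""
    users = {}                                  # atom -> set of agents mentioning it
    for agent, actions in agent_actions.items():
        for a in actions:
            for atom in atoms_of(a):
                users.setdefault(atom, set()).add(agent)
    public = {p for p, ags in users.items() if len(ags) > 1}
    internal = {agent: {p for p, ags in users.items() if ags == {agent}}
                for agent in agent_actions}
    return internal, public

def local_view(agent, internal, public, agent_actions, I, G):
    P_i = internal[agent] | public              # atoms accessible by the agent
    return {"P": P_i,
            "A": agent_actions[agent],
            "I": {p for p in I if p in P_i},    # initial atoms the agent can see
            "G": set(G)}                        # the common goal is known to every agent

# Toy usage with two hypothetical agents
A = {"ta1": [{"pre": {"at p l1"}, "add": {"at p sf"}, "del": {"at p l1"}}],
     "ta2": [{"pre": {"at p sf"}, "add": {"at p ft"}, "del": {"at p sf"}}]}
internal, public = classify_atoms(A)
print(public)                                   # {'at p sf'}: used by both agents
print(local_view("ta1", internal, public, A, I={"at p l1"}, G={"at p ft"}))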
3.2. Characterization of a MAP Task
This section introduces a brief example based on a logistics domain [Torreño et al.
2014b] in order to illustrate the characteristics of a MAP task.
Fig. 1. MAP task examples: task T1 (left) and task T2 (right)
Consider the transportation task T1 in Figure 1 (left), which includes three different
agents. There are two transport agencies, ta1 and ta2, each of them having a truck,
t1 and t2, respectively. The two transport agencies work in two different geographical
areas, ga1 and ga2, respectively. The third agent is a factory, f t, located in the area
ga2. In order to manufacture products, factory f t requires a package of raw materials,
p, which must be collected from area ga1. In this task, agents ta1 and ta2 have the
same planning capabilities, but they operate in different geographical areas; i.e., they
are spatially distributed agents. Moreover, the factory agent f t is functionally different
from ta1 and ta2.
The goal of task T1 is for f t to manufacture a final product f p. For solving this task,
ta1 will use its truck t1 to load the package of raw materials p, initially located in l2,
and then it will transport p to a storage facility, sf , that is located at the intersection
of both geographical areas. Then, ta2 will complete the delivery by using its truck t2 to
transport p from sf to the factory f t, which will in turn manufacture the final product
f p. Therefore, this task involves three specialized agents, which are spatially or functionally distributed, and must cooperate to accomplish the common goal of having a
final manufactured product f p.
Task T1 defines a group goal; i.e., a goal that requires the participation of all the
agents in order to solve it. Given a task Tg = ⟨AG, P, A, I, {g}⟩ which includes a single
goal g, we say that g is a group goal if for every solution plan Πg = {∆, ≺}, ∃α, β ∈
∆ : α ∈ Ai, β ∈ Aj and i ≠ j. We can thus distinguish between group goals, which
require the participation of more than one agent, and non-group goals, which can be
independently achieved by a single agent.
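The distinction can be stated operationally: a goal is a group goal when no solution plan is contributed by a single agent alone. The toy check below is our own illustration; it merely inspects a hand-made sample of plans, since real solvers cannot enumerate all solution plans and reason about this property differently.

def uses_multiple_agents(plan):
    # plan: list of (agent, action_name) pairs
    return len({agent for agent, _ in plan}) > 1

def is_group_goal(sample_solution_plans):
    # g is a group goal if *every* solution plan mixes actions of several agents
    return all(uses_multiple_agents(plan) for plan in sample_solution_plans)

plans_T1 = [[("ta1", "transport p to sf"), ("ta2", "transport p to ft"),
             ("ft", "manufacture fp")]]
plans_T2 = [[("ta1", "transport p to ft")],                       # single-agent plan
            [("ta1", "transport p to sf"), ("ta2", "deliver p")]]
print(is_group_goal(plans_T1))   # True:  all known plans need several agents
print(is_group_goal(plans_T2))   # False: ta1 can attain the goal alone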
The presence or absence of group goals often determines the complexity of a cooperative MAP task. Figure 1 (right) depicts task T2 , where the goal is to deliver the package
p into the factory f t. This is a non-group goal because agent ta1 is capable of attaining it by itself, gathering p in l1 and transporting it to f t through the locations of its
area ga1. However, the optimal solution for this task is that agent ta1 takes p to sf so
that agent ta2 then loads p at sf and completes the delivery to the factory f t through
location l7. These two examples show that cooperative MAP involves multiple agents
working together to solve tasks that they are unable to attain by themselves (task T1 ),
or tasks that are accomplished better by cooperating (task T2 ) [de Weerdt and Clement
2009].
Tasks T1 and T2 emphasize most of the key elements of a MAP context. The spatial
and/or functional distribution of the participants gives rise to specialized agents that
have different capabilities and knowledge of the task. Information of the MAP tasks
is distributed among the specialized agents as summarized in Table I. Atoms of the
form (pos t1 ∗) (note that ∗ acts as a wildcard) are accessible to agent ta1, since they
model the position of truck t1. Atoms of the form (pos t2 ∗), which describe the location
Table I. Task view T i for each agent i in example tasks T1 and T2

Task T1:
| AG  | P i                                        | Ai                  | I i                    | G                 |
| ta1 | (pos t1 ∗), (at p ∗), (manufactured fp)    | drive, load, unload | (pos t1 l1), (at p l2) | (manufactured fp) |
| ta2 | (pos t2 ∗), (at p ∗), (manufactured fp)    | drive, load, unload | (pos t2 l4)            | (manufactured fp) |
| ft  | (pending fp), (at p ft), (manufactured fp) | manufacture         | (pending fp)           | (manufactured fp) |

Task T2:
| AG  | P i                                        | Ai                  | I i                    | G         |
| ta1 | (pos t1 ∗), (at p ∗), (manufactured fp)    | drive, load, unload | (pos t1 l1), (at p l1) | (at p ft) |
| ta2 | (pos t2 ∗), (at p ∗), (manufactured fp)    | drive, load, unload | (pos t2 sf)            | (at p ft) |
| ft  | (pending fp), (at p ft), (manufactured fp) | manufacture         |                        | (at p ft) |
of truck t2, are accessible to agent ta2. Finally, (pending fp) belongs to agent f t and
denotes that the manufacturing of f p (the goal of task T1 ) is still pending.
The atoms related to the location of the product p, (at p ∗), as well as
(manufactured fp), which indicates that the final product f p is already manufactured,
are accessible to the three agents, ta1, ta2 and f t. Since agents are unaware of the configuration of the working areas of the other agents, the knowledge of agent ta1 regarding the
location of p is restricted to the atoms (at p l1 ), (at p l2 ), (at p sf ) and (at p t1 ), while
agent ta2 knows (at p sf ), (at p l3 ), (at p l4 ), (at p t2 ) and (at p ft). The awareness of
agent f t with respect to the location of p is limited to (at p ft).
The information distribution of a MAP task stresses the issue of privacy, which is
one of the basic aspects that must be considered in multi-agent applications [Serrano
et al. 2013]. Agents manage information that is not relevant for their counterparts
or sensitive data of their internal operational mechanisms that they are not willing
to disclose. For instance, ta1 and ta2 cooperate in solving tasks T1 and T2 but they
could also be potential competitors since they work in the same business sector. For
these reasons, providing privacy mechanisms to guarantee that agents do not reveal
the internal configuration of their working areas to each other is a key issue.
In general, agents in MAP seek to minimize the information they share with each
other, thus exchanging only the information that is relevant for other participating
agents to solve the MAP task.
3.3. Modelling of a MAP Task with MA-PDDL
The adoption of a common language for modelling planning domains allows for a direct comparison of different approaches and increases the availability of shared planning resources, thus facilitating the scientific development of the field [Fox and Long
2003]. Modelling a cooperative MAP task involves defining several elements that are
not present in single-agent planning tasks. Widely-adopted single-agent planning task
specification languages, such as PDDL [Ghallab et al. 1998], lack the required machinery to specify a MAP task. Recently, MA-PDDL3 , the multi-agent version of PDDL, was
developed in the context of the 2015 CoDMAP competition [Komenda et al. 2016] as the
first attempt to create a de facto standard specification language for MAP tasks. We
will use MA-PDDL as the language for modelling MAP tasks.
MAP solvers that accept an unfactored specification of a MAP task use a single input that describes the complete task T . In contrast, other MAP approaches require
a factored specification; i.e., the local view of each agent, T i . Additionally, modelling
3 Please
refer to http://agents.fel.cvut.cz/codmap/MA-PDDL-BNF-20150221.pdf for a complete BNF definition of the syntax of MA-PDDL.
a MAP task may require the specification of the private information that an agent
cannot share with other agents.
MA-PDDL allows for the definition of both factored (:factored-privacy requirement) and unfactored (:unfactored-privacy requirement) task representations. In order to model the transportation task T1 (see Figure 1 (left) in Section 3.2), we will
use the factored specification. Task T1i of agent i is encoded by means of two independent files: the domain file describes general aspects of the task (P i and Ai , which can
be reused for solving other tasks of the same typology); the problem file contains a
description of the particular aspects of the task to solve (I i and G). For the sake of
simplicity, we only display fragments of the task T1ta1 .
The domain description of agents of type transport-agency, like ta1 and ta2, is defined in Listing 1.
(define (domain transport-agency)
  (:requirements :factored-privacy :typing :equality :fluents)
  (:types transport-agency area location package product - object
          truck place - location
          factory - place)
  (:predicates
    (manufactured ?p - product) (at ?p - package ?l - location)
    (:private
      (area ?ag - transport-agency ?a - area) (in-area ?p - place ?a - area)
      (owner ?ag - transport-agency ?t - truck) (pos ?t - truck ?l - location)
      (link ?p1 - place ?p2 - place)
    )
  )
  (:action drive
    :parameters
      (?ag - transport-agency ?a - area ?t - truck ?p1 - place ?p2 - place)
    :precondition (and (area ?ag ?a) (in-area ?p1 ?a) (in-area ?p2 ?a)
                       (owner ?ag ?t) (pos ?t ?p1) (link ?p1 ?p2))
    :effect
      (and (not (pos ?t ?p1)) (pos ?t ?p2))
  )
  [...]
)
Listing 1. Excerpt of the domain file for transport-agency agents
The domain of transport-agencies starts with the type hierarchy, which includes
the types transport-agency and factory to define the agents of the task. Note that
the type factory is defined as a subtype of place because a factory is also interpreted
as a place reachable by a truck. The remaining elements of the task are identified by
means of the types location, package, truck, etc.
The :predicates section includes the set of first-order predicates, which are patterns
to generate the agent’s propositions, P i , through the instantiation of their parameters.
The domain for transport-agencies includes the public predicates at, which models
the position of the packages, and manufactured, which indicates that the task goal of
manufacturing a product is fulfilled. Despite the fact that only the factory agent f t
has the ability to manufacture products, all the agents have access to the predicate
manufactured so that the transport-agencies will be informed when the task goal is
achieved.
The remaining predicates in Listing 1 are in the section :private of each
transport-agency, meaning that agents will not disclose information concerning the
topology of their working areas or the status of their trucks.
Finally, the :action block of Listing 1 shows the action schema drive. An action
schema represents a number of different actions that can be derived by instantiating
its variables. Agents ta1 and ta2 have three action schemas: load, unload and drive.
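As an illustration of how such a schema gives rise to concrete actions, the following sketch performs a naive grounding of the drive schema over ta1's objects. This is our own toy grounder written in Python, not the instantiation machinery of any MA-PDDL planner; real grounders also prune instantiations using typing and reachability information.

from itertools import product

objects = {"transport-agency": ["ta1"], "area": ["ga1"],
           "truck": ["t1"], "place": ["l1", "l2", "sf"]}

def ground_drive():
    actions = []
    for ag, a, t, p1, p2 in product(objects["transport-agency"], objects["area"],
                                    objects["truck"], objects["place"], objects["place"]):
        if p1 == p2:
            continue
        actions.append({
            "name": f"drive {ag} {a} {t} {p1} {p2}",
            "pre": {f"area {ag} {a}", f"in-area {p1} {a}", f"in-area {p2} {a}",
                    f"owner {ag} {t}", f"pos {t} {p1}", f"link {p1} {p2}"},
            "add": {f"pos {t} {p2}"},
            "del": {f"pos {t} {p1}"}})
    return actions

grounded = ground_drive()
print(len(grounded))            # 6 drive actions over the places l1, l2 and sf
print(grounded[0]["name"])      # e.g., "drive ta1 ga1 t1 l1 l2"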
Regarding the problem description, the factored specification includes three problem
files, one per agent, that contain I i , the initial state of each agent, and the task goals
G.
(define (problem ta1)
  (:domain transport-agency)
  (:objects
    ta1 - transport-agency
    ga1 - area
    l1 l2 sf - place
    p - package
    fp - product
    (:private t1 - truck)
  )
  (:init
    (area ta1 ga1) (pos t1 l1) (owner ta1 t1) (at p l2)
    (link l1 l2) (link l2 l1) (link l1 sf)
    (link sf l1) (link l2 sf) (link sf l2)
    (in-area l1 ga1) (in-area l2 ga1) (in-area sf ga1)
  )
  (:goal (manufactured fp))
)
Listing 2. Problem file of agent ta1
Listing 2 depicts the problem description of task T1ta1 . The information managed
by ta1 is related to its truck t1 (which is defined as :private to prevent ta1 from
disclosing the location or cargo of t1), along with the places within its working area,
ga1, and the package p. The :init section of Listing 2 specifies I ta1 ; i.e., the location of
truck t1 and the position of package p, which is initially located in ga1. Additionally,
ta1 is aware of the links and places within its area ga1. The :goal section is common
to the three agents in T1 and includes a single goal indicating that the product fp
must be manufactured.
This modelling example shows the flexibility of MA-PDDL for encoding the specific
requirements of a MAP task, such as the agents’ distributed information via factored
input and the private aspects of the task. These functionalities make MA-PDDL a
fairly expressive language to specify MAP tasks.
4. MAIN ASPECTS OF A MAP SOLVER
Solving a cooperative MAP task involves various features such as information distribution, specialized agents, coordination, and privacy. The different MAP solving techniques in the literature can be classified according to the mechanisms they use to
address these functionalities. We identify six main features to categorize cooperative
MAP solvers:
— Agent distribution: From a conceptual point of view, MAP is regarded as a task in
which multiple agents are involved, either as entities participating in the plan synthesis (planning agents) or as the target entities of the planning process (actuators
or execution agents).
— Computational process: From a computational perspective, MAP solvers use a
centralized or monolithic design that solves the MAP task through a central process,
or a distributed approach that splits the planning activity among several processing
units.
— Plan synthesis schemes: There exist a great variety of strategies to tackle the process of synthesizing a plan for the MAP task, mostly characterized by how and when
the coordination activity is applied. Coordination comprises the distributed information exchange processes by which the participating agents organize and harmonize
their activities in order to work together properly.
— Communication mechanisms: Communication among agents is an essential aspect that distinguishes MAP from single-agent planning. The type of communication
enabled in MAP solvers is highly dependent on the type of computational process
(centralized or distributed) of the solver. Thus, we will classify solvers according to
the use of internal or external communication infrastructures.
— Heuristic search: As in single agent planning, MAP solvers commonly apply heuristic search to guide the planning process. In MAP, we can distinguish between local
heuristics (each agent i calculates an estimate to reach the task goals G using only
its accessible information in T i ) or global heuristics (the estimate to reach G is calculated among all the agents in AG).
— Privacy preservation: Privacy is one of the main motivations to adopt a MAP approach. Privacy means coordinating agents without making sensitive information
publicly available. Whereas this aspect was initially neglected in former MAP solvers
[van der Krogt 2009], the most recent approaches tackle this issue through the development of robust privacy-preserving algorithms.
The following subsections provide an in-depth analysis of these aspects, which characterize and determine the performance of the existing MAP solvers.
4.1. Agent Distribution
Agents in MAP can adopt different roles: planning agents are reasoning entities that
synthesize the course of action or plan that will be later executed by a set of actuators
or execution agents. An execution agent can be, among others, a robot in a multi-robot
system, or a software entity in an execution simulator.
From a conceptual point of view, MAP solvers are characterized by the agent distribution they apply; i.e., the number of planning and execution agents involved in the
task. Typically, it is assumed that one planning agent from the set AG of a MAP task
T is associated with one actuator in charge of executing the actions of this planning
agent in the solution plan. However, some MAP solvers alter this balance between
planning and acting agents.
Table II summarizes the different schemes according to the relation between the
number of planning and execution agents. Single-agent planning is the simplest mapping: the task is solved by a single planning agent, i.e., |AG| = 1, and executed by a
single actuator. We can mention Fast Downward (FD) [Helmert 2006] as one of the
most utilized single-agent planners within the planning community.
Table II. Conceptual schemes according to the number of planning and execution agents (along
with example MAP solvers that apply them)

| Execution agents \ Planning agents | 1 planning agent                                       | n planning agents                                        |
| 1 execution agent                  | Single-agent planning: FD [Helmert 2006]               | Factored planning: ADP [Crosby et al. 2013]              |
| n execution agents                 | Planning for multiple agents: TFPOP [Kvarnström 2011]  | Planning by multiple agents: FMAP [Torreño et al. 2015]  |
MAP solvers like Distoplan [Fabre et al. 2010], A# [Jezequel and Fabre 2012] and
ADP [Crosby et al. 2013] follow a factored agent distribution inspired by the factored
planning scheme [Amir and Engelhardt 2003]. Under this paradigm, a single-agent
planning task is decomposed into a set of independent factors (agents), thus giving
rise to a MAP task with |AG| > 1. Then, factored methods to solve the agents’ local
tasks T i are applied, and finally, the computed local plans are pieced together into a
Fig. 2. Centralized (monolithic) vs. distributed (agent-based) implementation
valid solution plan [Brafman and Domshlak 2006]. Factored planning exploits locality
of the solutions and a limited information propagation between components.
The second row of Table II outlines the classification of MAP approaches that build
a plan that is conceived to be then executed by several actuators. Some solvers in the
literature regard MAP as a single planning agent working for a set of actuators (|AG| =
1), while other approaches regard MAP as planning by multiple planners (|AG| > 1).
4.1.1. Planning for multiple agents. Under this scheme, the actions of the solution plan
are distributed among actuators typically via the introduction of constraints. TFPOP
[Kvarnström 2011] applies single-agent planning for multi-agent domains where each
execution agent is associated with a sequential thread of actions within a partial-order
plan. The combination of forward-chaining and least-commitment of TFPOP provides
flexible schedules for the acting agents, which execute their actions in parallel. The
work in [Crosby et al. 2014] transforms a MAP task that involves multiple agents
acting concurrently and cooperatively into a single-agent planning task. The transformation compels agents to select joint actions associated with a single subset of objects
at a time, and ensures that the concurrency constraints on this subset are satisfied. The
result is a single-agent planning problem in which agents perform actions individually,
one at a time.
The main limitation of this planning-for-multiple-agents scheme is its lack of privacy, since the planning entity has complete access to the MAP task T . This is rather
unrealistic if the agents involved in the task have sensitive private information they
are not willing to disclose [Sapena et al. 2008]. Therefore, this scheme is not a suitable
solution for privacy-preserving MAP tasks like task T1 described in section 3.2.
4.1.2. Planning by multiple agents. This scheme distributes the MAP task among several
planning agents, where each is associated with a local task T i . Thus, planning-by-multiple-agents puts the focus on the coordination of the planning activities of the
agents. Unlike single-planner approaches, the planning decentralization inherent to
this scheme makes it possible to effectively preserve the agents’ privacy.
In general, solvers that follow this scheme, such as FMAP [Torreño et al. 2012],
maintain a one-to-one correspondence between planning and execution agents; that is,
planning agents are assumed to solve their tasks, which will be later executed by their
corresponding actuators. There exist, however, some exceptions in the literature that
break this one-to-one correspondence such as MARC [Sreedharan et al. 2015], which
rearranges the n planning agents in AG into m transformer agents (m < n), where a
transformer agent comprises the planning tasks of several agents in AG. All in all,
MARC considers m reasoning entities that plan for n actuators, where m < n.
4.2. Computational process
From a computational standpoint, MAP solvers are classified as centralized or distributed. Centralized solvers draw upon a monolithic implementation in which a central process synthesizes a global solution plan for the MAP task. In contrast, distributed MAP methods are implemented as multi-agent systems in which the problem-solving activity is fully decentralized.
Centralized MAP. In this approach, the MAP task T is solved on a single machine
regardless of the number of planning agents conceptually considered by the solver. The
main characteristic of a centralized MAP approach is that tasks are solved in a monolithic fashion, so that all the processes of the MAP solver, {P1 , . . . , Pn }, are run on the
same machine (see Figure 2 (left)).
The motivation for choosing a centralized MAP scheme is twofold: 1) external communication mechanisms to coordinate the planning agents are not needed; and 2) robust and efficient single-agent planning technology can be easily reused.
Regarding agent distribution, MAP solvers that use a single planning agent generally apply a centralized computational scheme, as for example TFPOP [Kvarnström
2011] (see Table II). On the other hand, some algorithms that conceptually rely on
the distribution of the MAP task among several planning agents do not actually implement them as software agents, but as a centralized procedure. For example, MAPR
[Borrajo 2013] establishes a sequential order among the planning agents and applies a
centralized planning process that incrementally synthesizes a solution plan by solving
the agents’ local tasks in the predefined order.
Distributed MAP. Many approaches that conceive MAP as planning by multiple
agents (see Table II) are developed as multi-agent systems (MAS) defined by several independent software agents. By software agent, we refer to a computer system
that 1) makes decisions without any external intervention (autonomy), 2) responds to
changes in the environment (reactivity), 3) exhibits goal-directed behaviour by taking
the initiative (pro-activeness), and 4) interacts with other agents via some communication language in order to achieve its objectives (social ability) [Wooldridge 1997].
In this context, a software agent of a MAS plays the role of a planning agent in AG.
This way, in approaches that follow the planning by multiple agents scheme introduced
in the previous section, a software agent encapsulates the local task T i of a planning
agent i ∈ AG. Given a task, where |AG| = n, distributed MAP solvers can be run on up
to n different hosts or machines (see Figure 2 (right)).
The emphasis of the distributed or agent-based computation lies in the coordination
of the concurrent activities of the software planning agents. Since agents may be run
on different hosts (see Figure 2 (right)), having a proper communication infrastructure
and message-passing protocols is vital for the synchronization of the agents.
Distributed solvers like FMAP [Torreño et al. 2014b] launch |AG| software agents
that seamlessly operate on different machines. FMAP builds upon the MAS platform
Magentix2 [Such et al. 2012], which provides the messaging infrastructure for agents
to communicate over the network.
4.3. Plan synthesis schemes
In most MAP tasks, there are dependencies between agents’ actions and none of the
participants has sufficient competence, resources or information to solve the entire
problem. For this reason, agents must coordinate with each other in order to cooperate
and solve the MAP task properly.
Coordination is a multi-agent process that harmonizes the agents’ activities, allowing them to work together in an organized way. In general, coordination involves a
Fig. 3. Plan synthesis schemes in unthreaded planning and coordination: (a) pre-planning coordination, (b) post-planning coordination, (c) iterative response planning
large variety of activities, such as distributing the task goals among the agents, making joint decisions concerning the search for a solution plan, or combining the agents’
individual plans into a solution for a MAP task. Since coordination is an inherently distributed mechanism, it is only required in MAP solvers that conceptually draw upon
multiple planning agents (see the right column of Table II).
The characteristics of a MAP task often determine the coordination requirements for
solving the task. For instance, tasks that feature group goals, like the task T1 depicted
in section 3.2, usually demand a stronger coordination effort. Therefore, the capability
and efficiency of a MAP solver is determined by the coordination strategy that governs
its behaviour.
The following subsections analyse the two principal coordination strategies in MAP;
namely, unthreaded and interleaved planning and coordination.
4.3.1. Unthreaded Planning and Coordination. This strategy defines planning and coordination as sequential activities, such that they are viewed as two separate black boxes.
Under this strategy, an agent i ∈ AG synthesizes a plan for its local view of the task,
T i , and coordination takes place before and/or after planning.
Pre-planning coordination. Under this plan synthesis scheme, the MAP solver defines the necessary constraints to guarantee that the plans that solve the local tasks of
the agents are properly combined into a consistent solution plan for the whole task T
(see Figure 3 (a)).
ADP [Crosby et al. 2013] follows this scheme by applying an agentification procedure
that distributes a STRIPS planning task among several planning agents (see Table
4.1). More precisely, ADP is a fully automated process that inspects the multi-agent
nature of the planning task and calculates an agent decomposition that results in a
set of n decoupled local tasks. By leveraging this agent decomposition, ADP applies a
centralized, sequential and total-order planning algorithm that yields a solution for
the original STRIPS task. Since the task T is broken down into several local tasks
independent from each other, the local solution plans are consistently combined into a
solution for T .
Ultimately, the purpose of pre-planning coordination is to guarantee that the agents’
local plans are seamlessly combined into a solution plan that attains the goals of the
MAP task, thus avoiding the use of plan merging techniques at post-planning time.
Post-planning coordination. Other unthreaded MAP solvers put the coordination
emphasis after planning. In this case, the objective is to merge the plans that solve
the agents’ local tasks, {T 1 , . . . , T n }, into a solution plan that attains the goals G of the
task T by removing inconsistencies among the local solutions (see Figure 3 (b)).
In PMR (Plan Merger by Reuse) [Luis and Borrajo 2014], the local plans of the agents
are concatenated into a solution plan for the MAP task.
Fig. 4. Multi-agent search in interleaved planning and coordination
Other post-planning coordination approaches apply an information exchange between agents to come up with the
global solution. For instance, PSM [Tožička et al. 2016] draws upon a set of finite automata, called Planning State Machines (PSM), where each automaton represents the
set of local plans of a given agent. In one iteration of the PSM procedure, each agent i
generates a plan that solves its local task T i , incorporating this plan into its associated
PSM. Then, agents exchange the public projection of their PSMs, until a solution plan
for the task T is found.
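In its simplest form, post-planning coordination can be pictured as concatenating the agents' local plans and validating the result on the global task, as in the hedged sketch below (our own toy Python code, loosely inspired by the concatenation step of PMR, not its actual algorithm):

def apply_action(state, action):
    if not action["pre"] <= state:
        return None
    return (state - action["del"]) | action["add"]

def merge_by_concatenation(local_plans, I, G):
    """local_plans: dict agent -> ordered list of actions."""
    merged, state = [], frozenset(I)
    for agent in sorted(local_plans):            # fixed agent ordering
        for action in local_plans[agent]:
            nxt = apply_action(state, action)
            if nxt is None:                      # inconsistency between local plans
                return None
            merged.append((agent, action["name"]))
            state = nxt
    return merged if set(G) <= state else None

local_plans = {
    "ta1": [{"name": "move p to sf", "pre": {"at p l2"}, "add": {"at p sf"}, "del": {"at p l2"}}],
    "ta2": [{"name": "move p to ft", "pre": {"at p sf"}, "add": {"at p ft"}, "del": {"at p sf"}}],
}
print(merge_by_concatenation(local_plans, I={"at p l2"}, G={"at p ft"}))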
Iterative response planning. This plan synthesis scheme, first introduced by DPGM
[Pellier 2010], successively applies a sequence of planning-coordination steps, each corresponding to a planning agent. An agent i receives the local plan of the preceding agent along
with a set of constraints for coordination purposes, and responds by building up a
solution for its local task T i on top of the received plan. Hence, the solution plan is
incrementally synthesized (see Figure 3 (c)).
Multi-Agent Planning by Reuse (MAPR) [Borrajo 2013] is an iterative response
solver based on goal allocation. The task goals G are distributed among the agents
before planning, such that an agent i is assigned a subset G i ⊂ G, where ∩i∈AG G i = ∅.
Additionally, agents are automatically arranged in a sequence that defines the order
in which the iterative response scheme must be carried out.
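A minimal rendering of the iterative response loop is sketched below; it is our own illustration, in which plan_locally is a greedy stand-in for the full planner an actual solver such as MAPR would invoke.

def apply_action(state, action):
    return (state - action["del"]) | action["add"] if action["pre"] <= state else None

def plan_locally(agent, actions, state, goals):
    """Greedy stand-in planner: apply any executable action that helps a goal."""
    plan = []
    while not goals <= state:
        step = next((a for a in actions if a["pre"] <= state and a["add"] & goals), None)
        if step is None:
            return None, state
        plan.append((agent, step["name"]))
        state = apply_action(state, step)
    return plan, state

def iterative_response(agent_order, actions, goal_allocation, I):
    plan, state = [], frozenset(I)
    for agent in agent_order:                       # predefined sequence of agents
        extension, state = plan_locally(agent, actions[agent], state,
                                        goal_allocation[agent])
        if extension is None:
            return None                             # this agent could not respond
        plan += extension                           # response built on top of the plan
    return plan

actions = {"ta1": [{"name": "take p to sf", "pre": {"at p l2"}, "add": {"at p sf"}, "del": {"at p l2"}}],
           "ta2": [{"name": "take p to ft", "pre": {"at p sf"}, "add": {"at p ft"}, "del": {"at p sf"}}]}
alloc = {"ta1": {"at p sf"}, "ta2": {"at p ft"}}
print(iterative_response(["ta1", "ta2"], actions, alloc, I={"at p l2"}))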
In unthreaded planning and coordination schemes, agents do not need communication skills because they do not interact with each other during planning. This is the
reason why the unthreaded strategy is particularly efficient for solving tasks that do
not require a high coordination effort. In contrast, it presents several limitations when
solving tasks with group goals, due to the fact that agents are unable to discover and
address the cooperation demands of other agents. The needs of cooperation that arise
when solving group goals are hard to discover at pre-planning time, and plan merging techniques are designed only to fix inconsistencies among local plans, rather than
repairing the plans to satisfy the inter-agent coordination needs. Consequently, unthreaded approaches are more suitable for solving MAP tasks that do not contain group
goals; that is, every task goal can be solved by at least one single agent.
4.3.2. Interleaved Planning and Coordination. A broad range of MAP techniques interleave
the planning and coordination activities. This coordination strategy is particularly
appropriate for tasks that feature group goals since agents explore the search space
jointly to find a solution plan, rather than obtaining local solutions individually. In this
context, agents continuously coordinate with each other to communicate their findings,
thus effectively intertwining planning and coordination.
Most interleaved solvers, such as MAFS [Nissim and Brafman 2012] and FMAP
[Torreño et al. 2014b], commonly rely on a coordinated multi-agent search scheme,
wherein nodes of the search space are contributed by several agents (see Figure 4).
This scheme involves selecting a node for expansion (planning) and exchanging the
successor nodes among the agents (coordination). Agents thus jointly explore the
search space until a solution is found, alternating between phases of planning and
coordination.
Different forms of coordination are applicable in the interleaved resolution strategy.
In FMAP [Torreño et al. 2014b], agents share an open list of plans and jointly select
the most promising plan according to global heuristic estimates. Each agent i expands
the selected node using its actions Ai , and then, agents evaluate and exchange all the
successor plans. In MAFS [Nissim and Brafman 2012], each agent i keeps an independent open list of states. Agents carry out the search simultaneously: an agent i selects
a state S to expand from its open list according to a local heuristic estimate, and synthesizes all the nodes that can be generated through the application of the actions Ai
over S. Out of all the successor nodes, agents only share the states that are relevant to
other agents.
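The following single-process simulation conveys the flavour of such a coordinated multi-agent search. It is our own, heavily simplified sketch: in particular, "relevance" of a successor to other agents is crudely approximated by the presence of public atoms, and no heuristic ordering or privacy handling is modelled.

from collections import deque

def expand(state, actions):
    for a in actions:
        if a["pre"] <= state:
            yield frozenset((state - a["del"]) | a["add"])

def multi_agent_search(agent_actions, public_atoms, I, G):
    open_lists = {ag: deque([frozenset(I)]) for ag in agent_actions}
    seen = {ag: set() for ag in agent_actions}
    while any(open_lists.values()):
        for ag, actions in agent_actions.items():          # round-robin "planning" phase
            if not open_lists[ag]:
                continue
            state = open_lists[ag].popleft()
            for succ in expand(state, actions):
                if set(G) <= succ:
                    return succ                             # goal state reached
                if succ in seen[ag]:
                    continue
                seen[ag].add(succ)
                open_lists[ag].append(succ)
                if succ & public_atoms:                     # "coordination" phase:
                    for other in agent_actions:             # send relevant states
                        if other != ag and succ not in seen[other]:
                            seen[other].add(succ)
                            open_lists[other].append(succ)
    return None

A = {"ta1": [{"pre": {"at p l2"}, "add": {"at p sf"}, "del": {"at p l2"}}],
     "ta2": [{"pre": {"at p sf"}, "add": {"at p ft"}, "del": {"at p sf"}}]}
print(multi_agent_search(A, public_atoms={"at p sf", "at p ft"},
                         I={"at p l2"}, G={"at p ft"}))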
Interleaving planning and coordination is very suitable for solving complex tasks
that involve group goals and a high coordination effort. By using this strategy, agents
learn the cooperation requirements of other participants during the construction of the
plan and can immediately address them. Hence, the interleaved scheme allows agents
to efficiently address group goals.
The main drawback of this coordination strategy is the high communication cost
in a distributed MAP setting because alternating planning and coordination usually
entails exchanging a high number of messages in order to continuously coordinate
agents.
4.4. Communication mechanisms
Communication among agents plays a central role in MAP solvers that conceptually
define multiple planners (see Table II), since planning agents must coordinate their
activities in order to accomplish the task goals. As shown in Figure 2, different agent
communication mechanisms can be applied, depending on the computational process
followed by the MAP solver.
Internal communication. Solvers that draw upon a centralized implementation resort to internal or simulated multi-agent communication. For example, the centralized
solver MAPR [Borrajo 2013] distributes the task goals, G, among the planning agents,
and agents solve their local tasks sequentially. Once a local plan Πi for an agent i ∈ AG
is computed, the information of Πi is used as an input for the next agent in the sequence, j, thus simulating a message passing between agents i and j. This type of simple and simulated communication system is all that is required in centralized solvers
that run all planning agents on a single machine.
External communication. As displayed in Figure 2 (right), distributed MAP solvers
draw upon external communication mechanisms by which different processes (agents),
potentially allocated on different machines, exchange messages in order to interact
with each other. External communication can be easily enabled by linking the different agents via network sockets or a messaging broker. For example, agents in MAPlan
[Fišer et al. 2015] exchange data over the TCP/IP protocol when the solver is executed
in a distributed manner. A common alternative to implement external communication in MAS implies using a message passing protocol compliant with the IEEE FIPA
standards [for Intelligent Physical Agents 2002], which are intended to promote the interoperation of heterogeneous agents. The Magentix2 MAS platform [Such et al. 2012]
used by MH-FMAP [Torreño et al. 2015] facilitates the implementation of agents with
FIPA-compliant messaging capabilities.
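To make the contrast with internal communication concrete, the sketch below emulates an external message-passing layer with thread-safe queues standing in for network sockets or a FIPA-style broker. This is purely illustrative Python and does not reproduce the messaging API of Magentix2, MAPlan or any other system.

import json, queue, threading

class MessageBus:
    def __init__(self, agents):
        self.inboxes = {ag: queue.Queue() for ag in agents}

    def send(self, sender, receiver, payload):
        # messages are serialized, as they would be on a real network channel
        self.inboxes[receiver].put(json.dumps({"from": sender, "data": payload}))

    def receive(self, agent, timeout=1.0):
        return json.loads(self.inboxes[agent].get(timeout=timeout))

bus = MessageBus(["ta1", "ta2"])

def ta1():
    bus.send("ta1", "ta2", {"public-state": ["at p sf"]})

def ta2():
    msg = bus.receive("ta2")
    print("ta2 received from", msg["from"], ":", msg["data"])

threads = [threading.Thread(target=f) for f in (ta1, ta2)]
for t in threads: t.start()
for t in threads: t.join()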
The use of external communication mechanisms allows distributed solvers to run
the planning agents in decentralized machines and to coordinate their activities by
exchanging messages through the network. The flexibility provided by external communication mechanisms comes at the cost of performance degradation. The results of
the 2015 CoDMAP competition [Komenda et al. 2016] show that centralized solvers
like ADP [Crosby et al. 2013] outperform solvers executed in a distributed setting,
such as PSM [Tožička et al. 2015]. Likewise, an analysis performed in [Torreño et al.
2014b] reveals that communication among agents is the most time-consuming activity
of the distributed approach FMAP, thus compromising the overall scalability of this
solver. Nevertheless, the participants in the distributed track of the CoDMAP exhibited a competitive performance, which proves that the development of fully-distributed
MAP solvers is worth the overhead caused by external communication infrastructures.
4.5. Heuristic Search
Ever since the introduction of HSP [Long et al. 2000], the use of heuristic functions that
guide the search by estimating the quality of the nodes of the search tree has proven to
be one of the most robust and reliable problem-solving strategies in single-agent planning. Over the years, many solvers based on heuristic search, such as FF [Hoffmann
and Nebel 2001] or LAMA [Richter and Westphal 2010], have consistently dominated
the International Planning Competitions4 .
Since most MAP solvers stem from single-agent planning techniques, heuristic
search is one of the most common approaches in the MAP literature. In general, in
solvers that synthesize a plan for multiple executors, the single planning agent (see
Table II) has complete access to the MAP task T , and so it can compute heuristics
that leverage the global information of T , in a way that is very similar to that of
single-agent planners. However, in solvers that feature multiple planning agents, i.e.,
|AG| > 1, each agent i is only aware of the information defined in T i and no agent has
access to the complete task T . Under a scheme of planning by multiple agents, one can
distinguish between local and global heuristics.
In local heuristics, an agent i estimates the cost of the task goals, G, using only the
information in T i . The simplicity of local heuristics, which do not require any interactions among agents, contrasts with the low accuracy of the estimates they yield due
to the limited task view of the agents. Consider, for instance, the example task T1 presented in section 3.2: agent ta1 does not have sufficient information to compute an
accurate estimate of the cost to reach the goals of T1 since T1ta1 does not include the
configuration of the area ga2. In general, if T i is a limited view of T , local heuristics
will not yield informative estimates of the cost of reaching G.
In contrast, a global heuristic in MAP is the application of a heuristic function “carried out by several agents which have a different knowledge of the task and, possibly,
privacy requirements” [Torreño et al. 2015]. The development of global heuristics for
multi-agent scenarios must account for additional features that make heuristic evaluation an arduous task [Nissim and Brafman 2012]:
— Solvers based on distributed computation require robust communication protocols for
agents to calculate estimates for the overall task.
— For MAP approaches that preserve agents’ privacy, the communication protocol must
ensure that estimates are computed without disclosing sensitive private data.
The application of local or global heuristics is also determined by the characteristics of the plan synthesis scheme of the MAP solver. Particularly, in an unthreaded
planning and coordination scheme, agents synthesize their local plans through the
4 http://www.icaps-conference.org/index.php/Main/Competitions
application of local heuristic functions. For instance, in the sequential plan synthesis
scheme of MAPR [Borrajo 2013], agents locally apply hF F [Hoffmann and Nebel 2001]
and hLand [Richter and Westphal 2010] when solving their allocated goals.
Local heuristic search has also been applied by some interleaved MAP solvers.
Agents in MAFS and MAD-A* [Nissim and Brafman 2014] generate and evaluate search
states locally. An agent i shares a state S and local estimate hi (S) only if S is relevant
to other planning agents. Upon reception of S, agent j performs its local evaluation
of S, hj (S). Then, depending on the characteristics of the heuristic, the final estimate
of S by agent j will be either hi (S), hj (S), or a combination of both. In [Nissim and
Brafman 2012], authors test MAD-A* with two different optimal heuristics, LM-Cut
[Helmert and Domshlak 2009] and Merge&Shrink [Helmert et al. 2007], both locally
applied by each agent. Despite heuristics being applied only locally, MAD-A* is proven
to be cost-optimal.
Unlike unthreaded solvers, the interleaved planning and coordination strategy
makes it possible to accommodate global heuristic functions. In this case, agents apply
some heuristic function on their tasks T i and then they exchange their local estimates
to come up with an estimate for the global task T .
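A hedged sketch of this idea is shown below: each agent computes a trivial local estimate (counting the goal atoms it can see that are not yet achieved), and a global value is obtained by adding up the exchanged estimates. Both the local estimator and the additive combination are simplifications of our own; such a sum can count the same goal several times and is therefore not admissible in general, whereas real solvers rely on far more informed heuristics such as hFF, landmarks or DTGs.

def local_estimate(state, goals, visible_atoms):
    # number of visible goal atoms not yet achieved (a very crude local heuristic)
    return len({g for g in goals if g in visible_atoms and g not in state})

def global_estimate(state, goals, agent_views):
    # agents exchange their local estimates and sum them up (additive combination)
    return sum(local_estimate(state, goals, view) for view in agent_views.values())

views = {"ta1": {"at p l2", "at p sf"},
         "ta2": {"at p sf", "at p ft"},
         "ft":  {"at p ft", "manufactured fp"}}
goals = {"at p ft", "manufactured fp"}
state = frozenset({"at p l2"})
print(global_estimate(state, goals, views))
# 3: "at p ft" is counted by both ta2 and ft, "manufactured fp" only by ft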
GPPP [Maliah et al. 2014] introduces a distributed version of a privacy-preserving
landmark extraction algorithm for MAP. A landmark is a proposition that must be
satisfied in every solution plan of a MAP task [Hoffmann et al. 2004]. The quality of a
plan in GPPP is computed as the sum of the local estimates of the agents in AG. GPPP
outperforms MAFS thanks to the accurate estimates provided by this landmark-based
heuristic. MH-FMAP [Torreño et al. 2015], the latest version of FMAP, introduces a
multi-heuristic alternation mechanism based on Fast Downward (FD) [Helmert 2006].
Agents alternate two global heuristics when expanding a node in their tasks: hDT G ,
which draws upon the information of the Domain Transition Graphs [Helmert 2004]
associated with the state variables of the task, and the landmark-based heuristic
hLand , which only evaluates the preferred successors [Torreño et al. 2015]. Agents
jointly build the DTGs and the landmarks graph of the task and each of them stores
its own version of the graphs according to its knowledge of the MAP task.
Some recent work in the literature focuses on the adaptation of well-known single-agent heuristic functions to compute global MAP estimators. The authors of [Štolba
and Komenda 2014] adapt the single-agent heuristic hF F [Hoffmann and Nebel 2001]
by means of a compact structure, the exploration queue, that optimizes the number
of messages exchanged among agents. This multi-agent version of hF F , however, is
not as accurate as the single-agent one. The work in [Štolba et al. 2015] introduces
a global MAP version of the admissible heuristic function LM-Cut that is proven to
obtain estimates of the same quality as the single-agent LM-Cut. This multi-agent LM-Cut yields better estimates at the price of a higher computational cost.
In conclusion, heuristic search in MAP, and most notably, the development of global
heuristic functions in a distributed context, constitutes one of the main challenges of
the MAP research community. The aforementioned approaches prove the potential of
the development and combination of global heuristics towards scaling up the performance of MAP solvers.
4.6. Privacy
The preservation of agents’ sensitive information, or privacy, is one of the basic aspects that must be enforced in MAP. The importance of privacy is illustrated in the
task T1 of section 3.2, which includes two different agents, ta1 and ta2, both representing a transport agency. Although both agents are meant to cooperate for solving this
task, it is unlikely that they are willing to reveal sensitive information to a potential
competitor.
Privacy in MAP has been mostly neglected and under-represented in the literature.
Some paradigms like Hierarchical Task Network (HTN) planning apply a form of implicit privacy when an agent delegates subgoals to another agent, which solves them
by concealing the resolution details from the requester agent. This makes HTN a very
well-suited approach for practical applications like composition of web services [Sirin
et al. 2004]. However, formal treatment of privacy is even scarcer. One of the
first attempts to come up with a formal privacy model in MAP is found in [van der
Krogt 2007], where the authors quantify privacy in terms of Shannon’s information
theory [Shannon 1948]. More precisely, the authors establish a notion of uncertainty with
respect to plans and provide a measure of privacy loss in terms of the data uncovered
by the agents along the planning process. Unfortunately, this measure is not general
enough to capture details such as heuristic computation. Nevertheless, quantification
of privacy is an important issue in MAP, as it is in distributed constraint satisfaction
problems [Faltings et al. 2008]. A more recent work, also based on Shannon’s information theory [Štolba et al. 2016], quantifies privacy leakage for MA-STRIPS according
to the reduction of the number of possible transition systems caused by the revealed
information. In this work, the main sources of privacy leakage are identified, but not
experimentally evaluated.
Table III. Categorization of privacy properties in MAP

| Privacy criterion                | Categories                                                                                                                                      |
| Modelling of private information | Induced privacy [Brafman and Domshlak 2008]; imposed privacy [Torreño et al. 2014b]                                                             |
| Information sharing              | MA-STRIPS [Brafman and Domshlak 2008]; subset privacy [Bonisoli et al. 2014]                                                                    |
| Practical guarantees             | No privacy [Decker and Lesser 1992]; weak privacy [Borrajo 2013]; object cardinality privacy [Shani et al. 2016]; strong privacy [Brafman 2015] |
The next subsections analyze the privacy models adopted by the MAP solvers in
the literature according to three different criteria (see summary in Table III): the
modelling of private information, the information sharing schemes, and the practical
privacy guarantees offered by the MAP solver.
4.6.1. Modelling of Private Information. This feature is closely related to whether the language used to specify the MAP task enables explicit modelling of privacy or not.
Early approaches to MAP, such as MA-STRIPS [Brafman and Domshlak 2008], manage a notion of induced privacy. Since the MA-STRIPS language does not explicitly
model private information, the agents’ private data are inferred from the task structure. Given an agent i ∈ AG and a piece of information pi ∈ T i , pi is defined as private
if ∀j ∈ AG | j ≠ i, pi ∉ T j ; that is, if pi is known to i and unknown to the rest of the agents in T .
FMAP [Torreño et al. 2014b] introduces a more general imposed privacy scheme,
explicitly describing the private and shareable information in the task description.
MA-PDDL [Kovacs 2012], the language used in the CoDMAP competition [Komenda
et al. 2016], follows this imposed privacy scheme and allows the designer to model the
private elements of the agents’ tasks.
In general, both induced and imposed privacy schemes are commonly applied by
current MAP solvers. The induced privacy scheme enables the solver to automatically
identify the naturally private elements of a MAP task. The imposed privacy scheme,
by contrast, offers a higher control and flexibility to model privacy, which is a helpful
tool in contexts where agents wish to occlude sensitive data that would be shared
otherwise.
4.6.2. Information Sharing. Privacy-preserving algorithms vary according to the number of agents that share a particular piece of information. In general, we can identify
two information sharing models, namely MA-STRIPS and subset privacy.
MA-STRIPS. The MA-STRIPS information sharing model [Brafman and Domshlak
2008] defines as public the data that are shared among all the agents in AG, so that a
piece of information is either known to all the participants, or only to a single agent.
More precisely, a proposition p ∈ P is defined as internal or private to an agent i ∈ AG
if p is only used and affected by the actions of Ai . However, if the proposition p is also
in the preconditions and/or effects of some action α ∈ Aj , where j ∈ AG and j ≠ i, then
p is publicly accessible to all the agents in AG.
An action α ∈ Ai that contains only public preconditions and effects is said to be
public and it is known to all the participants in the task. In case α includes both public
and private preconditions and effects, agents share instead αp , the public projection of
α, an abstraction that contains only the public elements of α.
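The public projection is straightforward to compute once the set of public atoms is known, as in this small sketch (ours; the atom and action names are made up for illustration):

def public_projection(action, public_atoms):
    return {"name": action["name"],
            "pre": action["pre"] & public_atoms,
            "add": action["add"] & public_atoms,
            "del": action["del"] & public_atoms}

public_atoms = {"at p sf", "at p ft"}
unload = {"name": "unload t2 p ft",
          "pre": {"at p t2", "pos t2 ft"},      # private: truck position and cargo
          "add": {"at p ft"},
          "del": {"at p t2"}}
print(public_projection(unload, public_atoms))
# only the public effect (at p ft) is exposed; private details are stripped out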
This simple dichotomic privacy model of MA-STRIPS does not allow for specifying
MAP tasks that require some information to be shared only by a subset of the planning
agents in AG.
Subset privacy. Subset privacy is introduced in [Bonisoli et al. 2014] and generalizes
the MA-STRIPS scheme by establishing pairwise privacy. This model defines a piece of
information as private to a single agent, publicly accessible to all the agents in AG or
known to a subset of agents. This approach is useful in applications where agents wish
to conceal some information from certain agents.
For instance, agent ta2 in task T1 of section 3.2 notifies the factory agent f t whenever
the proposition (pos t2 f t) is reached. This proposition indicates that the truck t2
is placed at the factory f t, a location that is known to both ta2 and f t. Under the
MA-STRIPS model, agent ta1 would be notified that truck t2 is at the factory f t, an
information that ta2 may want to conceal. However, the subset privacy model allows
ta2 to hide (pos t2 f t) from ta1 by defining it as private between ta2 and f t.
Hence, subset privacy is a more flexible information sharing model than the more
conservative and limited approach of MA-STRIPS, as it enables representing
more complex and realistic situations concerning information sharing.
4.6.3. Privacy Practical Guarantees. Recent studies devoted to a formal treatment of
practical privacy guidelines in MAP [Nissim and Brafman 2014; Shani et al. 2016]
conclude that some privacy schemes allow agents to infer private information from
other agents through the transmitted data. According to these studies, it is possible to
establish a four-level taxonomy to classify the practical privacy level of MAP solvers.
The four levels of the taxonomy, from the least to the most secure one, are: no privacy,
weak privacy, object cardinality privacy and strong privacy.
No privacy. Privacy has been mostly neglected in MAP but has been extensively
treated within the MAS community [Such et al. 2012]. The 2015 CoDMAP competition
introduced a more expressive definition of privacy than MA-STRIPS and this was a
boost for many planners to model private data in the task descriptions. Nevertheless,
we can cite a large number of planners that completely disregard the issue of privacy
among agents such as early approaches like GPGP [Decker and Lesser 1992] or more
recent approaches like µ-SATPLAN [Dimopoulos et al. 2012], A# [Jezequel and Fabre
2012] or DPGM [Pellier 2010].
Weak privacy. A MAP system is weakly privacy-preserving if agents do not explicitly communicate their private information to other agents at execution time [Brafman
2015]. This is accomplished by either obfuscating (encrypting) or occluding the private
information they communicate to other agents in order to only reveal the public projection of their actions. In a weak privacy setting, agents may infer private data of other
agents through the information exchanged during the plan synthesis.
Obfuscating the private elements of a MAP task is an appropriate mechanism when
agents wish to conceal the meaning of propositions and actions. In obfuscation, the
proposition names are encrypted but the number and unique identity of preconditions
and effects of actions are retained, so agents are able to reconstruct the complete isomorphic image of their tasks. In MAPR [Borrajo 2013], PMR [Luis and Borrajo 2014]
and CMAP [Borrajo and Fernández 2015], when an agent communicates a plan, it encrypts the private information in order to preserve its sensitive information. Agents
in MAFS [Nissim and Brafman 2014], MADLA [Štolba and Komenda 2015], MAPlan
[Fišer et al. 2015] and GPPP [Maliah et al. 2014] encrypt the private data of the relevant states that they exchange during the plan synthesis.
Other weak privacy-preserving solvers in the literature occlude the agents’ private
information rather than sharing obfuscated data. Agents in MH-FMAP [Torreño et al.
2015] only exchange the public projection of the actions of their partial-order plans,
thus occluding private information like preconditions, effects, links or orderings.
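The obfuscation idea can be conveyed with a few lines of code: before a state is sent, private atoms are replaced by opaque identifiers that only their owner can map back, while public atoms travel in the clear. The sketch below is our own illustration and does not reproduce the encryption scheme of any of the solvers mentioned above.

import hashlib

def obfuscate(atom, secret):
    # opaque identifier; only the holder of the secret can recompute the mapping
    return "priv-" + hashlib.sha256((secret + atom).encode()).hexdigest()[:12]

def share_state(state, public_atoms, secret):
    return frozenset(a if a in public_atoms else obfuscate(a, secret) for a in state)

secret = "ta2-local-key"                       # hypothetical key known only to agent ta2
state = {"at p sf", "pos t2 sf", "owner ta2 t2"}
public_atoms = {"at p sf"}
sent = share_state(state, public_atoms, secret)
print(sent)   # the receiver sees (at p sf) plus two opaque private identifiers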
Object cardinality privacy. Recently, the DPP planner [Shani et al. 2016] introduced
a new level of privacy named object cardinality privacy. A MAP algorithm preserves
object cardinality privacy if, given an agent i and a type t, the cardinality of i’s private
objects of type t cannot be inferred by other agents from the information they receive
[Shani et al. 2016]. In other words, this level of privacy strongly preserves the number
of objects of a given type t of an agent i, thus representing a middle ground between
the weak and strong privacy settings.
Hiding the cardinality of private objects is motivated by real-world scenarios. Consider, for example, the logistics task T1 of section 3.2. One can assume that the transport agencies that take part in the MAP task, ta1 and ta2, know that packages are
delivered using trucks. However, it is likely that each agent would like to hide the
number of trucks it possesses or the number of transport routes it uses.
Strong privacy. A MAP algorithm is said to strongly preserve privacy if none of the
agents in AG is able to infer a private element of an agent’s task from the public
information it obtains during planning. In order to guarantee strong privacy, it is necessary to consider several factors, such as the nature of the communication channel
(synchronous, asynchronous, lossy) or the computational power of the agents.
Secure-MAFS [Brafman 2015] is a theoretical proposal to strong privacy that builds
upon the MAFS [Nissim and Brafman 2014] model. In Secure-MAFS, two states that
only differ in their private elements are not communicated to other agents in order
to prevent them from deducing information through the non-private or public part of
the states. Secure-MAFS is proved to guarantee strong privacy for a sub-class of tasks
based on the well-known logistics domain.
In summary, weak privacy is easily achievable through obfuscation of private data,
but provides little security. On the other hand, the proposal of Secure-MAFS lays the
theoretical foundations for strong privacy in MAP, but the complexity analysis and the
practical implementation issues of this approach have not been studied yet. Additionally, object cardinality privacy accounts for a middle ground between weak and strong
privacy. In general, the vast majority of MAP methods fall under the no privacy or weak privacy levels: approaches in the former category do not consider privacy at all, while most of the recent proposals, which claim to be privacy-preserving, resort in most cases to obfuscation to conceal private information.
5. DISTRIBUTED AND MULTI-AGENT PLANNING SYSTEMS TAXONOMY
As discussed in section 1, MAP is a long-running research field that has been covered in
several articles [desJardins et al. 1999; de Weerdt and Clement 2009; Meneguzzi and
de Silva 2015]. This section reviews the large number of domain-independent cooperative MAP solvers that have been proposed since the introduction of the MA-STRIPS
model [Brafman and Domshlak 2008]. This large body of research was recently crystallized in the 2015 CoDMAP competition [Komenda et al. 2016], the first attempt to
directly compare MAP solvers through a benchmark encoded using the standardized
MA-PDDL language.
The cooperative solvers analyzed in this section cover a wide range of different plan
synthesis schemes. As discussed in section 4, one can identify several aspects that
determine the features of MAP solvers; namely, agent distribution, computational process, plan synthesis scheme, communication mechanism, heuristic search and privacy
preservation. This section presents an in-depth taxonomy that classifies solvers according to their main features and analyzes their similarities and differences (see Table
IV).
This section also aims to critically analyze and compare the strengths and weaknesses of the planners regarding their applicability and experimental performance.
Given that a comprehensive comparison of MAP solvers was issued as a result of the
2015 CoDMAP competition, Table IV arranges solvers according to their positions in
the coverage ranking (number of problems solved) of this competition. The approaches
included in this taxonomy are organized according to their plan synthesis scheme, an
aspect that ultimately determines the types of MAP tasks they can solve. Section 5.1
discusses the planners that follow an unthreaded planning and coordination scheme,
while section 5.2 reviews interleaved approaches to MAP.
5.1. Unthreaded Planning and Coordination MAP Solvers
The main characteristic of unthreaded planners is that planning and coordination are
not intertwined but handled as two separate and independent activities. Unthreaded
solvers are labelled as UT in the column Coordination strategy of Table IV. They
typically apply local single-agent planning and a combinatorial optimization or satisfiability algorithm to coordinate the local plans.
Planning First, 2008 (implemented in 2010). Planning First [Nissim et al. 2010] is the first
MAP solver that builds upon the MA-STRIPS model. It is an early representative of the
unthreaded strategy that inspired the development of many subsequent MAP solvers,
which are presented in the next paragraphs. Planning First generates a local plan for
each agent in a centralized fashion by means of the FF planner [Hoffmann and Nebel
2001], and coordinates the local plans through a distributed Constraint Satisfaction
Problem (DisCSP) solver to come up with a global solution plan. More precisely, Planning First distributes the MAP task among the agents and identifies the coordination
points of the task as the actions whose application affects other agents. The DisCSP is
then used to find consistent coordination points between the local plans.
Table IV. Summary of the state-of-the-art MAP solvers and their features. For unthreaded solvers, the plan synthesis schemes are listed in the form of pairs “agent coordination” & “local planning technique”. For each solver, the table reports its coordination strategy, computational process, plan synthesis scheme, heuristic, privacy level and CoDMAP coverage ranking (centralized and distributed tracks).
Solvers covered: ADP [Crosby et al. 2013], MAP-LAPKT [Muise et al. 2015], CMAP [Borrajo and Fernández 2015], MARC [Sreedharan et al. 2015], MAPlan [Fišer et al. 2015], GPPP [Maliah et al. 2016], PSM [Tožička et al. 2016], MADLA [Štolba and Komenda 2014], PMR [Luis and Borrajo 2014], MAPR [Borrajo 2013], FMAP [Torreño et al. 2014b], MH-FMAP [Torreño et al. 2015], MAFS and MAD-A* [Nissim and Brafman 2012], DPP [Shani et al. 2016], MAP-POP [Torreño et al. 2014a], Planning First [Brafman and Domshlak 2008], Distoplan [Fabre et al. 2010], µ-SATPLAN [Dimopoulos et al. 2012], DPGM [Pellier 2010], TFPOP [Kvarnström 2011], A# [Jezequel and Fabre 2012] and Secure-MAFS [Brafman 2015].
Computational process: C - centralized solver, D - distributed solver. Coordination strategy: UT - unthreaded, IL - interleaved. Privacy: N - no privacy, W - weak privacy, OC - object cardinality privacy, S - strong privacy. Heuristic: L - local, G - global. CoDMAP: coverage classification, Cent. track - centralized track, Dist. track - distributed track; a ’-’ indicates that the solver did not participate in a track.
If the DisCSP solver finds a solution, the plan for the MAP task is directly built from the local plans
since the DisCSP solution guarantees compatibility among the underlying local plans.
The authors of [Nissim et al. 2010] empirically evaluate Planning First over a set of
tasks based on the well-known rovers, satellite and logistics domains. The results show
that a large number of coordination points among agents, derived from the number of public actions, limits the scalability and effectiveness of Planning First. Later, the MAP-POP solver outperformed Planning First in both execution time and coverage [Torreño
et al. 2012].
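The overall unthreaded scheme can be summarised with the following Python sketch, in which local_planner stands in for a call to a classical planner (FF in Planning First) and a brute-force consistency check replaces the DisCSP solver. All names and the encoding of plans as lists of step names are illustrative assumptions, not the interface of the original system.

from itertools import product

def compatible(joint_plans):
    # Stand-in for the DisCSP: the joint choice is accepted when every local
    # plan agrees on the sequence of public coordination points (here, steps
    # whose names start with "pub-").
    public_points = [tuple(s for s in plan if s.startswith("pub-"))
                     for plan in joint_plans]
    return all(p == public_points[0] for p in public_points)

def unthreaded_synthesis(agents, local_planner, alternatives=3):
    # Phase 1: every agent plans locally and independently.
    candidates = {a: [local_planner(a, i) for i in range(alternatives)]
                  for a in agents}
    # Phase 2: coordination, here a brute-force search over joint choices.
    for joint in product(*(candidates[a] for a in agents)):
        if compatible(joint):
            return dict(zip(agents, joint))
    return None   # no consistent combination of local plans was found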
DPGM, 2010 (implemented in 2013). DPGM [Pellier 2010] also makes use of CSP techniques to coordinate the agents’ local plans. Unlike Planning First, the CSP solver in
DPGM is explicitly distributed across agents and it is used to extract the local plans
from a set of distributed planning graphs. Under the iterative response planning strategy introduced by DPGM, the solving process is started by one agent, which proposes
a local plan along with a set of coordination constraints. The subsequent agent uses
its CSP to extract a local plan compatible with the prior agent’s plan and constraints.
If an agent is not able to generate a compliant plan, DPGM backtracks to the previous
agent, which puts forward an alternative plan with different coordination constraints.
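The backtracking behaviour of this iterative response strategy can be illustrated with the sketch below, where propose is a hypothetical stand-in for an agent's CSP-based extraction of a local plan compatible with the plans and constraints proposed so far; the bound on alternatives is an assumption made only to keep the example finite.

def iterative_response(agents, propose, max_alternatives=5):
    # propose(agent, prior_plans, attempt) returns a local plan compatible
    # with prior_plans, or None when no compliant plan exists for this attempt.
    chosen = []
    attempt = [0] * len(agents)
    k = 0
    while 0 <= k < len(agents):
        plan = None
        if attempt[k] < max_alternatives:
            plan = propose(agents[k], list(chosen), attempt[k])
        if plan is None:
            attempt[k] = 0        # exhausted: backtrack to the previous agent,
            k -= 1                # which will put forward an alternative plan
            if k >= 0:
                chosen.pop()
                attempt[k] += 1
        else:
            chosen.append(plan)
            k += 1
    return dict(zip(agents, chosen)) if k == len(agents) else None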
µ-SATPLAN, 2010. µ-SATPLAN [Dimopoulos et al. 2012] is a MAP solver that extends
the satisfiability-based planner SATPLAN [Kautz 2006] to a multi-agent context. µ-SATPLAN performs an a priori distribution of the MAP task goals, G, among the agents
in AG. Similarly to DPGM, agents follow an iterative response planning strategy, where
each participant takes the previous agent’s solution as an input and extends it to solve
its assigned goals via SATPLAN. This way, agents progressively generate a solution.
µ-SATPLAN is unable to solve tasks that include group goals because it assumes
that each agent can solve its assigned goals by itself. µ-SATPLAN is experimentally
validated on several multi-agent tasks of the logistics, storage and TPP domains.
Although these tasks feature only two planning agents, the authors claim that µ-SATPLAN is capable of solving tasks with a higher number of agents.
MAPR, PMR, CMAP, 2013-2015. Multi-Agent Planning by Reuse (MAPR) [Borrajo 2013]
allocates the goals G of the task among the agents before planning through a relaxed
reachability analysis. The private information of the local plans is encrypted, thus
preserving weak privacy by obfuscating the agents’ local tasks. MAPR also follows an
iterative response plan synthesis scheme, wherein an agent takes as input the result
of the prior agent’s solution plan and runs the LAMA planner [Richter and Westphal
2010] to obtain an extended solution plan that also achieves its allocated goals. The
plan of the last agent is a solution plan for the MAP task, which is parallelized to
ensure that execution agents perform as many actions in parallel as possible. MAPR is
limited to tasks that do not feature specialized agents or group goals. This limitation
is a consequence of the assumption that each agent is able to solve its allocated goals
by itself, which renders MAPR incomplete.
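The MAPR-style pipeline of goal allocation, sequential plan extension and final parallelization can be summarised as follows; allocate_goals, extend_plan (standing in for a LAMA call on the agent's obfuscated task) and parallelize are illustrative placeholders rather than the actual components of Borrajo [2013].

def mapr_style_synthesis(agents, allocate_goals, extend_plan, parallelize):
    allocation = allocate_goals(agents)       # e.g. via relaxed reachability analysis
    plan = []                                 # empty seed plan for the first agent
    for agent in agents:
        # Each agent reuses the previous plan and extends it so that its own
        # allocated goals are also achieved.
        plan = extend_plan(agent, plan, allocation[agent])
        if plan is None:                      # the agent cannot attain its goals alone,
            return None                       # which is the source of MAPR's incompleteness
    return parallelize(plan)                  # maximise the parallelism of the final plan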
Plan Merging by Reuse (PMR) [Luis and Borrajo 2014; Luis and Borrajo 2015] draws
upon the goal allocation and obfuscation privacy mechanisms of MAPR. Unlike MAPR,
agents carry out the planning stage simultaneously instead of sequentially and each
agent generates local plans for its assigned goals. In the post-planning plan merging strategy of PMR, the resulting local plans are concatenated, yielding a sequential
global solution. If the result of the merging process is not a valid solution plan, local
plans are merged through a repair procedure. If a merged solution is not found, the
task is solved via a single-agent planner.
Although CMAP [Borrajo and Fernández 2015] follows the same goal allocation and
obfuscation strategy of MAPR and PMR, the plan synthesis scheme of CMAP transforms the encrypted local tasks into a single-agent task (|AG| = 1), which is then solved
through the planner LAMA. CMAP was the best-performing approach of this family of
MAP planners in the 2015 CoDMAP competition, as shown in Table IV. CMAP ranked
7th in the centralized track and exhibited a solid performance over the 12 domains of
the CoDMAP benchmark (approximately 90% coverage). PMR and MAPR ranked 14th
and 15th, with roughly 60% coverage. The plan synthesis scheme of MAPR affects
its performance in domains that feature group goals, such as depots or woodworking,
while PMR offers a more stable performance over the benchmark.
MAP-LAPKT, 2015. MAP-LAPKT [Muise et al. 2015] conceives a MAP task as a problem
that can be transformed and solved by a single-agent planner using the appropriate
encoding. More precisely, MAP-LAPKT compiles the MAP task into a task that features
one planning agent (|AG| = 1) and solves it with the tools provided in the repository
LAPKT [Ramirez et al. 2015]. The authors of [Muise et al. 2015] try three different variations of best-first and depth-first search that result in algorithms with different theoretical properties and performance. The task translation performed by MAP-LAPKT
offers weak privacy preservation guarantees.
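The compilation idea shared by CMAP and MAP-LAPKT can be sketched as follows: the actions of all agents are pooled into a single planning agent, tagged with their owner so that the classical solution can later be split back into a multi-agent plan. The dictionary-based task encoding below is an assumption made for illustration and is unrelated to the input formats of either planner.

def compile_to_single_agent(map_task):
    # map_task is assumed to be {"init": ..., "goal": ..., "actions": {agent: [action, ...]}}
    # with each action a dictionary that has at least a "name" entry.
    pooled = []
    for agent, actions in map_task["actions"].items():
        for action in actions:
            pooled.append(dict(action, name=f"{agent}::{action['name']}"))
    return {"init": map_task["init"], "goal": map_task["goal"], "actions": pooled}

def split_solution(plan):
    # Recover a multi-agent plan by reading the owner tag off each action name.
    return [tuple(name.split("::", 1)) for name in plan]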
As shown in Table IV, two of the three versions of MAP-LAPKT that participated in the CoDMAP ranked 2nd and 3rd in the centralized track. The coverage of MAP-LAPKT and of CMAP is roughly 90% of the benchmark problems, an indication of the efficiency of the scheme that compiles a MAP task into a single-agent task.
MARC, 2015. The Multi-Agent Planner for Required Cooperation (MARC) [Sreedharan et al. 2015] is a centralized MAP solver based on the theory of required cooperation [Zhang and Kambhampati 2014]. MARC analyzes the agent distribution of the
MAP task and comes up with a different arrangement of planning agents. Particularly,
MARC compiles the original task into a task with a set of transformer agents, each one
being an ensemble of various agents; i.e., |AG_MARC| < |AG|. A transformer agent comprises the representation of various agents of the original MAP task, including all their
actions. The current implementation of MARC compiles all the agents in AG into a
single transformer agent (|AG_MARC| = 1). Then, a solution plan is computed via FD
[Helmert 2006] or the portfolio planner IBACOP [Cenamor et al. 2014], and the resulting plan is subsequently translated into a solution for the original MAP task. MARC
preserves weak privacy since private elements of the MAP task are occluded in the
transformer agent task.
Regarding experimental performance, MARC ranks 4th in the centralized CoDMAP with 90% coverage, thus being one of the best-performing MAP approaches. The experimental results also reveal the efficiency of this multi-to-one agent
transformation.
ADP, 2013. The Agent Decomposition-based Planner (ADP) [Crosby et al. 2013] is a
factored planning solver that exploits the inherently multi-agent structure (agentization) of some STRIPS-style planning tasks and comes up with a MAP task where
|AG| > 1. ADP applies a state-based centralized planning procedure to solve the MAP
task. In each iteration, ADP determines a set of subgoals that are achievable from the
current state by one of the agents. A search process, guided through the well-known
hFF heuristic, is then applied to find a plan that achieves these subgoals, thus resulting in a new state. This mechanism iterates until a solution is found.
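The following sketch captures this iterative subgoal-achievement loop; achievable_subgoals and plan_for are hypothetical stand-ins for ADP's decomposition analysis and its internal heuristic search, respectively.

def adp_style_search(initial_state, goals, agents, achievable_subgoals, plan_for):
    state, solution = set(initial_state), []
    while not set(goals) <= state:
        progressed = False
        for agent in agents:
            subgoals = achievable_subgoals(agent, state, goals)
            if not subgoals:
                continue
            # plan_for performs the heuristic search (guided by hFF in ADP) and
            # returns the plan fragment together with the resulting state.
            fragment, state = plan_for(agent, state, subgoals)
            solution.extend(fragment)
            progressed = True
            break
        if not progressed:      # no agent can make progress from this state
            return None
    return solution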
Experimentally, ADP outperforms several state-of-the-art classical planners (e.g.,
LAMA) and is the top-ranked solver at the centralized track of the CoDMAP, outperforming other approaches that compile the MAP task into a single-agent planning
task, such as MAP-LAPKT or CMAP.
Distoplan, 2010. Distoplan [Fabre et al. 2010] is a factored planning approach that exploits independence within a planning task. Unlike other factored methods [Kelareva
et al. 2007], Distoplan does not set any bound on the number of actions or coordination points of local plans. In Distoplan, a component or abstraction of the global task
is represented as a finite automaton, which recognizes the regular language formed
by the valid local plans of the component. This way, all local plans are manipulated at once, and a generic distributed optimization technique makes it possible to limit the number of compatible local plans. With this unbounded representation, all valid plans can be
computed in one run but stronger conditions are required to guarantee polynomial
runtime. Distoplan is the first optimal MAP solver in the literature (note that Planning
First is optimal with respect to the number of coordination points, but local planning is
carried out through a suboptimal planner).
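The automata-based view can be made concrete with a synchronized-product sketch: each component's valid local plans form a regular language, and compatible joint behaviours correspond to the product automaton, in which components synchronize on shared actions and move independently otherwise. The encoding of automata as (states, initial state, final states, transition dictionary) is our own simplification and is not Distoplan's internal representation.

from itertools import product as cartesian

def synchronized_product(fa1, fa2, shared):
    # An automaton is (states, initial, finals, delta) with delta a dictionary
    # {(state, action): next_state}; its language is the set of valid local plans.
    states1, init1, finals1, delta1 = fa1
    states2, init2, finals2, delta2 = fa2
    delta = {}
    for s1, s2 in cartesian(states1, states2):
        for (q, a), nxt in delta1.items():
            if q != s1:
                continue
            if a in shared:
                if (s2, a) in delta2:                    # both components take the shared action
                    delta[((s1, s2), a)] = (nxt, delta2[(s2, a)])
            else:
                delta[((s1, s2), a)] = (nxt, s2)         # local move of component 1
        for (q, a), nxt in delta2.items():
            if q == s2 and a not in shared:
                delta[((s1, s2), a)] = (s1, nxt)         # local move of component 2
    finals = {(f1, f2) for f1 in finals1 for f2 in finals2}
    return (set(cartesian(states1, states2)), (init1, init2), finals, delta)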
Distoplan was experimentally tested in a factored version of the pipesworld domain.
However, the solver is unable to solve even the smallest instances of this domain in a
reasonable time. The authors claim that the reason is that Distoplan scales roughly as
n^3, where n is the number of components of the global task. For obvious reasons, Distoplan has not been empirically compared against other MAP solvers in the literature.
A#, 2012 (not implemented). In the line of factored planning, A# [Jezequel and Fabre
2012] is a multi-agent A* search that finds a path for the goal in each local component
of a task and ensures that the component actions that must be jointly performed are
compatible. A# runs in parallel a modified version of the A* algorithm in each component, and the local search processes are guided towards finding local plans that are
compatible with each other. Each local A* finds a plan as a path search in a graph and
informs its neighbors of the common actions that may lead to a solution. Particularly,
each agent searches its local graph or component while considering the constraints and
costs of the rest of the agents, received through an asynchronous communication mechanism. The authors of [Jezequel and Fabre 2012] do not validate A# experimentally;
however, the soundness, completeness and optimality properties of A# are formally
proven.
PSM, 2014. PSM [Tožička et al. 2016; Tožička et al. 2015] is a recent distributed MAP
solver that follows Distoplan’s compact representation of the agents’ local plans as Finite
Automata, called Planning State Machines (PSMs). This planner defines two basic
operations: obtaining a public projection of a PSM and merging two different PSMs.
These operations are applied to build a public PSM consisting of merged public parts
of individual PSMs. The plan synthesis scheme gradually expands the agents’ local
PSMs by means of new local plans. A solution for the MAP task is found once the
public PSM is not empty. PSM weakly preserves privacy as it obfuscates states of the
PSMs in some situations.
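A much-simplified view of this loop is sketched below, where local plan sets are enumerated explicitly and the merge is a plain set intersection of public projections; in PSM the sets are represented as finite automata and the merge is an automata operation, so the fragment only conveys the overall control flow. new_local_plans and the bound on rounds are assumptions for illustration.

def public_projection(plans, public_actions):
    # Keep only the public actions of every local plan.
    return {tuple(a for a in plan if a in public_actions) for plan in plans}

def psm_style_loop(agents, new_local_plans, public_actions, rounds=100):
    projections = {a: set() for a in agents}
    for _ in range(rounds):
        for a in agents:
            # Each agent gradually enlarges its set of local plans.
            projections[a] |= public_projection(new_local_plans(a), public_actions)
        common = set.intersection(*projections.values())
        if common:                 # the merged public part is non-empty: solution found
            return common.pop()    # a public plan that every agent can extend locally
    return None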
PSM applies an efficient handling of communication among agents, which grants
this solver a remarkable experimental performance in both the centralized and distributed setting. In the centralized CoDMAP track, PSM ranks 12th (solving 70% of
the tasks), and it is the top performer at the distributed track of the competition.
DPP, 2016. The DP-Projection Planner (DPP) [Shani et al. 2016] is a centralized MA-STRIPS solver that uses the Dependency-Preserving (DP) projection, a novel and accurate public projection of the MAP task information with object cardinality privacy
guarantees. The single planning agent of DPP uses the FD planner to synthesize a
high-level plan which is then extended with the agents’ private actions via the FF
planner, thus resulting in a multi-agent solution plan.
The authors of [Shani et al. 2016] provide a comprehensive experimental evaluation
of DPP through the complete benchmark of the 2015 CoDMAP competition. The results
show that DPP outperforms most of the top contenders of the competition (namely,
GPPP, MAPR, PMR, MAPlan and PSM). All in all, DPP can be considered the current
top MA-STRIPS-based solver, as well as one of the best-performing MAP approaches to
date.
TFPOP, 2011. TFPOP [Kvarnström 2011] is a hybrid approach that combines the flexibility of partial-order planning and the performance of forward-chaining search. Unlike most MA-STRIPS-based solvers, TFPOP supports temporal reasoning with durative actions. TFPOP is a centralized approach that synthesizes a solution for multiple
executors. It computes threaded partial-order plans; i.e., non-linear plans that keep a
thread of sequentially-ordered actions per agent, since the authors assume that an execution agent performs its actions sequentially.
TFPOP is tested in a reduced set of domains, which include the well-known satellite
and zenotravel domains, as well as a UAV delivery domain. The objective of this experimentation is to compare TFPOP against several partial-order planners. TFPOP is not
compared to any MAP solver in this taxonomy.
5.2. Interleaved Planning and Coordination MAP Solvers
Under the interleaved scheme, labelled as IL in the column Coordination strategy of
Table IV, agents jointly explore the search space, intertwining their planning and
coordination activities. The development of interleaved MAP solvers heavily relies on
the design of robust communication protocols to coordinate agents during planning.
MAP-POP, FMAP, MH-FMAP, 2010-2015. In this family of MAP solvers, agents apply a
distributed exploration of the plan space. Agents locally compute plans through an
embedded partial-order planning (POP) component and they build a joint search tree
by following an A* search scheme guided by global heuristic functions.
MAP-POP [Torreño et al. 2012; Torreño et al. 2014a] performs an incomplete search
based on a classical backward POP algorithm and POP heuristics. FMAP [Torreño
et al. 2014b] introduces a sound and complete plan synthesis scheme that uses a
forward-chaining POP [Benton et al. 2012] guided through the hDT G heuristic. MHFMAP [Torreño et al. 2015] applies a multi-heuristic search approach that alternates
hDT G and hLand , building a Landmark Graph (LG) to estimate the number of pending landmarks of the partial-order plans. The three planners guarantee weak privacy
since private information is occluded throughout the planning process and heuristic
evaluation. The hLand estimator uses some form of obfuscation during the construction
of the LG.
Regarding experimental results, FMAP is proven to outperform MAP-POP and MAPR
in terms of coverage over 10 MAP domains, most of which are included in the CoDMAP
benchmark. Results in [Torreño et al. 2015] indicate that MH-FMAP obtains better coverage than both FMAP and GPPP. Interestingly, this planner exhibits a much worse
performance in the CoDMAP (see Table IV), ranking 17th with only 42% coverage. This is due to the loss of accuracy of the hDTG heuristic when the internal state-variable representation of the tasks in MH-FMAP is transformed into a propositional representation to be tested on the CoDMAP benchmark, thus compromising the performance of the solver [Torreño et al. 2015].
MADLA, 2013. The Multiagent Distributed and Local Asynchronous Planner (MADLA)
[Štolba et al. 2015] is a centralized solver that runs one thread per agent on a single
machine and combines two versions of the hFF heuristic, a projected (local) variant (hL) and a distributed (global) variant (hD), in a multi-heuristic state-space search. The main novelty of MADLA is that the agent computing hD, which requires contributions from the other agents for calculating the global heuristic estimator, is run
asynchronously and so it can continue the search using hL while waiting for responses
from other agents that are computing parts of hD. MADLA evaluates as many states as possible using the global heuristic hD, which is more informative than hL. This way,
MADLA can use a computationally hard global heuristic without blocking the local
planning process of the agents, thus improving the performance of the system.
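The interplay between the two heuristics can be sketched as follows; h_local, request_h_global and poll_h_global are hypothetical placeholders for MADLA's projected heuristic and its asynchronous messaging layer, and the search keeps expanding on the local estimate while answers to global requests are pending.

import heapq

def madla_style_search(initial, successors, is_goal, h_local, request_h_global, poll_h_global):
    counter = 0
    open_list = [(h_local(initial), counter, initial)]
    closed = set()
    while open_list:
        # Re-insert states for which the (more informed) global estimate arrived.
        for state, h_global in poll_h_global():
            counter += 1
            heapq.heappush(open_list, (h_global, counter, state))
        _, _, state = heapq.heappop(open_list)
        if is_goal(state):
            return state
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            request_h_global(nxt)          # non-blocking: the search does not wait
            counter += 1
            heapq.heappush(open_list, (h_local(nxt), counter, nxt))
    return None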
Experimentally, MADLA ranks 13th in the centralized CoDMAP, reporting 66% coverage. It outperforms most of the distributed MAP solvers of the competition, but it is
not able to solve the most complex tasks of the CoDMAP domains, thus not reaching
the figures of the top performers such as ADP, MAP-LAPKT or MARC.
MAFS, MAD-A*, 2012-2014. MAFS [Nissim and Brafman 2014] is an updated version of
Planning First that implements a distributed algorithm wherein agents apply a heuristic state-based search (see section 4.5). In [Nissim and Brafman 2012], authors present
MAD-A*, a cost-optimal variation of MAFS. In this case, each agent expands the state
that minimizes f = g + h, where h is estimated through an admissible heuristic. In particular, the authors tested the landmark heuristic LM-Cut [Helmert and Domshlak 2009]
and the abstraction heuristic Merge&Shrink [Helmert et al. 2007]. MAD-A* is the first
distributed and interleaved solver based on MA-STRIPS.
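The per-agent loop of this family of algorithms can be sketched as follows: each agent keeps its own open list, expands the state minimizing f = g + h, and forwards states created by public actions to the agents for which they are relevant. The class below is an illustrative abstraction; heuristic, expand and send are assumptions standing in for the heuristic estimator, the successor generator and the messaging layer of the actual planners.

import heapq

class MAFSAgent:
    def __init__(self, heuristic, expand, send, is_goal):
        # heuristic(state), expand(state) and send(agent, state, g) are placeholders.
        self.h, self.expand, self.send, self.is_goal = heuristic, expand, send, is_goal
        self.open, self.counter = [], 0

    def push(self, state, g):
        heapq.heappush(self.open, (g + self.h(state), self.counter, g, state))
        self.counter += 1

    def receive(self, state, g):           # invoked when another agent sends a state
        self.push(state, g)

    def step(self):
        if not self.open:
            return None
        _, _, g, state = heapq.heappop(self.open)   # expand the state minimising f = g + h
        if self.is_goal(state):
            return state
        for nxt, cost, action_is_public, relevant_agents in self.expand(state):
            self.push(nxt, g + cost)
            if action_is_public:                    # states created by public actions are
                for other in relevant_agents:       # forwarded to the agents they concern
                    self.send(other, nxt, g + cost)
        return None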
MAFS is compared against MAP-POP and Planning First over the logistics, rovers
and satellite domains, notably outperforming both solvers in terms of coverage and
execution time [Nissim and Brafman 2014]. On the other hand, the authors of [Nissim
and Brafman 2014] only compare MAD-A* against single-agent optimal solvers.
Secure-MAFS, 2015 (not implemented). Secure-MAFS [Brafman 2015] is an extension of
MAFS towards secure MAP, and it is currently the only solver that offers strong privacy
guarantees (see section 4.6.3). Currently, Secure-MAFS is a theoretical work that has
not been yet implemented nor experimentally evaluated.
GPPP, 2014. The Greedy Privacy-Preserving Planner (GPPP) [Maliah et al. 2014;
Maliah et al. 2016] builds upon MAFS and improves its performance via a global
landmark-based heuristic function. GPPP applies a global planning stage and then
a local planning stage. In the former, agents agree on a joint coordination scheme
by solving a relaxed MAP task that only contains public actions (thereby preserving
privacy) and obtaining a skeleton plan. In the local planning stage, agents compute
private plans to achieve the preconditions of the actions in the skeleton plan. Since coordination is done over a relaxed MAP task, the individual plans of the agents may not
succeed at solving the actions’ preconditions. In this case, the global planning stage is
executed again to generate a different coordination scheme, until a solution is found.
In GPPP, agents weakly preserve privacy by obfuscating the private information of the
shared states through private state identifiers.
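This two-stage interplay can be sketched as follows, with public_relaxed_plan and local_completion acting as hypothetical stand-ins for GPPP's coordination and local planning stages; the bound on rounds is only there to keep the illustration finite.

def gppp_style_plan(public_relaxed_plan, local_completion, max_rounds=10):
    forbidden = []                           # skeleton plans that failed so far
    for _ in range(max_rounds):
        skeleton = public_relaxed_plan(forbidden)        # global stage: public actions only
        if skeleton is None:
            return None
        private_parts = local_completion(skeleton)       # local stage: support the preconditions
        if private_parts is not None:
            return skeleton, private_parts
        forbidden.append(skeleton)           # trigger a different coordination scheme
    return None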
GPPP provides a notable experimental performance, ranking 10th in the centralized
CoDMAP track. GPPP reaches 83% coverage and is only surpassed by the different
versions of ADP, MARC, MAP-LAPKT, CMAP and MAPlan, which proves the accuracy
of its landmark-based heuristic and the overall efficiency of its plan synthesis scheme.
MAPlan, 2015. MAPlan [Fišer et al. 2015] is a heuristic MAFS-based solver that adapts
several concepts from MAD-A* and MADLA. MAPlan is a distributed and flexible approach that implements a collection of state-space search methods, such as best first
or A*, as well as several local and global heuristic functions (hF F , LM-Cut, potential
heuristics and others), which allows the solver to be run under different configurations. MAPlan can be executed in a single-machine, using local communication, or in
a distributed fashion, where each agent is in a different machine and communication
among agents is implemented through network message passing. Regarding privacy,
MAPlan applies a form of obfuscation, replacing private facts in search states by unique
local identifiers, which grants weak privacy.
MAPlan exhibits a very solid performance in the centralized and distributed tracks
of the 2015 CoDMAP competition, ranking 9th and 2nd, respectively. In the centralized
track, MAPlan obtains 83% coverage, outperforming GPPP and reaching figures similar to those of the top-performing centralized solvers.
6. CONCLUSIONS
The purpose of this article is to comprehensively survey the state of the art in cooperative MAP, offering a detailed overview of this rapidly evolving research field, which
has experienced multiple key advances over the last decade. These contributions crystallized in the 2015 CoDMAP competition, where MAP solvers were compared through
an exhaustive benchmark encoded with MA-PDDL, the first standard modelling language for MAP tasks.
In this paper, the topic of MAP was studied from a twofold perspective: from the
representational structure of a MAP task and from the problem-solving standpoint.
We formally defined a MAP task following the well-known MA-STRIPS model and
provided several examples which illustrate the features that distinguish MAP tasks
from the more compact single-agent planning tasks. We also presented the modelling
of these illustrative tasks with MA-PDDL.
MAP is a broad field that allows for a wide variety of problem-solving approaches.
For this reason, we identified and thoroughly analyzed the main aspects that characterize a solver, from the architectural design to the practical features of a MAP tool.
Among others, these aspects include the computational process of the solvers and the
plan synthesis schemes that stem from the particular combination of planning and
coordination applied by MAP tools, as well as other key features, such as the communication mechanisms used by the agents to interact with each other and the privacy
guarantees offered by the existing solvers.
Finally, we compiled and classified the existing MAP techniques according to the
aforementioned criteria. The taxonomy of MAP techniques presented in this survey
prioritizes recent domain-independent techniques in the literature. Particularly, we
focused on the approaches that took part in the 2015 CoDMAP competition, comparing their performance, strengths and weaknesses. The classification aims to provide
the reader with a clear and comprehensive overview of the existing cooperative MAP
solvers.
The body of work presented in this survey constitutes a solid foundation for the
ongoing and future scientific development of the MAP field. Following, we summarize
several research trends that have recently captured the attention of the community.
Theoretical properties. The aim of the earlier cooperative MAP solvers was to contribute a satisficing approach capable of solving a relatively small number of problems in a reasonable time, but without providing any formal properties [Nissim et al.
2010; Borrajo 2013]. The current maturity of the cooperative MAP field has witnessed
the introduction of some models that focus on granting specific theoretical properties,
such as completeness [Torreño et al. 2014b], optimality [Nissim and Brafman 2014] or
stronger privacy preservation guarantees [Brafman 2015; Shani et al. 2016].
Privacy. The state of the art in MAP shows a growing effort in analyzing and formalizing privacy in MAP solvers. Nowadays, various approaches to model private information and to define information sharing can be found in the literature, which reveals
that privacy is progressively becoming a key topic in MAP. However, the particular
implementation of a MAP solver may jeopardize privacy, if it is possible for an agent
to infer private information from the received public data. Aside from the four-level
classification presented in section 4.6.3, other recent approaches attempt to theoretically quantify the privacy guarantees of a MAP solver [Štolba et al. 2016]. In the same
line, the authors of [Tožička et al. 2017] analyze the implications and limits of strong
privacy and present a novel PSM-based planner that offers strong privacy guarantees.
On the other hand, one can also find work that proposes a smart use of privacy to increase the performance of MAP solvers, like DPP [Shani et al. 2016], which calculates
an accurate public projection of the MAP task in order to obtain a robust high-level
plan that is then completed with private actions. This scheme minimizes the communication requirements, resulting in a more efficient search. In [Maliah et al. 2017], the authors introduce a novel weak privacy-preserving variant of MAFS which ensures that
two agents that do not share any private variable never communicate with each other,
significantly reducing the number of exchanged messages. In general, the study of privacy in MAP is gaining much attention and more and more sophisticated approaches
have been recently proposed.
MAP with self-interested agents. The main focus of MAP with self-interested planning agents is handling situations that involve interactive decision making with
possibly conflicting interests. Game theory, the study of mathematical models of conflict and cooperation between rational self-interested agents, arises naturally as a
paradigm to address human conflict and cooperation within a competitive situation.
Game-theoretic MAP is an active and interesting research field that reflects many
real-world situations, and thus, it has a broad variety of applications, among which we
can highlight congestion games [Jonsson and Rovatsos 2011], cost-optimal planning
[Nissim and Brafman 2013], conflict resolution in the search of a joint plan [Jordán
and Onaindia 2015] or auction systems [Robu et al. 2011].
From a practical perspective, game-theoretic MAP has been successfully applied to
ridesharing problems on timetabled transport services [Hrncı́r et al. 2015]. In general,
strategic approaches to MAP are very appropriate to model smart city applications
like traffic congestion prevention: vehicles can be accurately modelled as rational self-interested agents that want to reach their destinations as soon as possible, but they
are also willing to deviate from their optimal routes in order to avoid traffic congestion
issues that would affect all the involved agents.
Practical applications. MAP is being used in a great variety of applications, such as
product assembly problems in industry (e.g., car assembly). Agents plan the manufacturing path of the product through the assembly line, which is composed of a number
of interconnected resources that can perform different operations. ExPlanTech, for instance, is a consolidated framework for agent-based production planning, manufacturing, simulation and supply chain management [Pechoucek et al. 2007].
MAP has also been used to control the flow of electricity in the Smart Grid [Reddy
and Veloso 2011]. The agents’ actions are individually rational and contribute to desirable global goals such as promoting the use of renewable energy, encouraging energy
efficiency and enabling distributed fault tolerance. Another interesting application of
MAP is the automated creation of workflows in biological pathways, like the Multi-Agent System for Genomic Annotation (BioMAS) [Decker et al. 2002]. This system
uses DECAF, a toolkit that provides standard services to integrate agent capabilities,
and incorporates the GPGP framework [Lesser et al. 2004] to coordinate multi-agent
tasks.
In decentralized control problems, MAP is applied in coordination of space rovers
and helicopter flights, multi-access broadcast channels, and sensor network management, among others [Seuken and Zilberstein 2008]. MAP combined with argumentation techniques to handle belief changes about the context has been used in applications of ambient intelligence in the field of healthcare [Pajares and Onaindia 2013].
Aside from the aforementioned trends, there is still a broad variety of unexplored
research topics in MAP. The solvers presented in this survey do not support tasks with
advanced requirements. Particularly, handling temporal MAP tasks is an unresolved
matter that should be addressed in the years to come. This problem will involve both
the design of MAP solvers that explicitly support temporal reasoning and the extension of MA-PDDL to incorporate the appropriate syntax to model tasks with temporal
constraints.
Cooperative MAP, as presented in this paper, focuses on offline tasks, without paying much attention to the problem of plan execution. Online planning carried out by several agents poses a series of challenges derived from the integration of
planning and execution and the need to respond in complex, real-time environments.
Real-time cooperative MAP is about planning and simultaneous execution by several
cooperative agents in a changing environment. This interesting and exciting research
line is very relevant in applications that involve, for example, soccer robots.
Additionally, the body of work presented in this survey does not consider agents
with individual preferences. Preference-based MAP is an unstudied field that can be
interpreted as a middle ground between cooperative and self-interested MAP, since
it involves a set of rational agents that work together towards a common goal while
having their own preferences concerning the properties of the solution plan.
All in all, we believe that the steps taken over the last few years towards the standardization of MAP tasks and tools, such as the 2015 CoDMAP competition or the introduction of MA-PDDL, will decisively contribute to fostering a rapid expansion of this field in
a wide variety of research directions.
REFERENCES
Eyal Amir and Barbara Engelhardt. 2003. Factored planning. In Proceedings of the 18th International Joint
Conference on Artificial Intelligence (IJCAI), Vol. 3. 929–935.
J. Benton, Amanda J. Coles, and Andrew I. Coles. 2012. Temporal Planning with Preferences and TimeDependent Continuous Costs. In Proceedings of the 22nd International Conference on Automated Planning and Scheduling (ICAPS). 2–10.
Andrea Bonisoli, Alfonso E. Gerevini, Alessandro Saetti, and Ivan Serina. 2014. A Privacy-preserving Model
for the Multi-agent Propositional Planning Problem. In Proceedings of the 21st European Conference on
Artificial Intelligence (ECAI). 973–974.
Daniel Borrajo. 2013. Multi-Agent Planning by Plan Reuse. In Proceedings of the 12th International Conference on Autonomous Agents and Multi-agent Systems (AAMAS). 1141–1142.
Daniel Borrajo and Susana Fernández. 2015. MAPR and CMAP. In Proceedings of the Competition of Distributed and Multi-Agent Planners (CoDMAP-15). 1–3.
Craig Boutilier and Ronen I. Brafman. 2001. Partial-Order Planning with Concurrent Interacting Actions.
Journal of Artificial Intelligence Research 14 (2001), 105–136.
Ronen I. Brafman. 2015. A Privacy Preserving Algorithm for Multi-Agent Planning and Search. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). 1530–1536.
Ronen I. Brafman and Carmel Domshlak. 2006. Factored Planning: How, When, and When Not. In Proceedings of the 21st National Conference on Artificial Intelligence and the 18th Innovative Applications of
Artificial Intelligence Conference. 809–814.
Ronen I. Brafman and Carmel Domshlak. 2008. From One to Many: Planning for Loosely Coupled MultiAgent Systems. In Proceedings of the 18th International Conference on Automated Planning and
Scheduling (ICAPS). 28–35.
Isabel Cenamor, Tomás de la Rosa, and Fernando Fernández. 2014. IBACOP and IBACOP2 planner. In
Proceedings of the International Planning Competition (IPC).
Bradley J. Clement and Edmund H. Durfee. 1999. Top-down Search for Coordinating the Hierarchical Plans
of Multiple Agents. In Proceedings of the 3rd Annual Conference on Autonomous Agents (AGENTS ’99).
ACM, New York, NY, USA, 252–259.
Daniel D. Corkill. 1979. Hierarchical Planning in a Distributed Environment. In Proceedings of the Sixth
International Joint Conference on Artificial Intelligence, IJCAI 79, Tokyo, Japan, August 20-23, 1979, 2
Volumes. 168–175.
Jeffrey S. Cox and Edmund H. Durfee. 2004. Efficient Mechanisms for Multiagent Plan Merging. In Proceedings of the 3rd Conference on Autonomous Agents and Multiagent Systems (AAMAS). 1342–1343.
Jeffrey S. Cox and Edmund H. Durfee. 2009. Efficient and distributable methods for solving the multiagent
plan coordination problem. Multiagent and Grid Systems 5, 4 (2009), 373–408.
Matthew Crosby, Anders Jonsson, and Michael Rovatsos. 2014. A Single-Agent Approach to Multiagent
Planning. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI). 237–242.
Matthew Crosby, Michael Rovatsos, and Ronald P. A. Petrick. 2013. Automated Agent Decomposition for
Classical Planning. In Proceedings of the 23rd International Conference on Automated Planning and
Scheduling (ICAPS). 46–54.
Mathijs de Weerdt, André Bos, Hans Tonino, and Cees Witteveen. 2003. A Resource Logic for Multi-Agent
Plan Merging. Annals of Mathematics and Artificial Intelligence 37, 1-2 (2003), 93–130.
Mathijs de Weerdt and Bradley J. Clement. 2009. Introduction to planning in multiagent systems. Multiagent and Grid Systems 5, 4 (2009), 345–355.
Keith Decker, Salim Khan, Carl Schmidt, Gang Situ, Ravi Makkena, and Dennis Michaud. 2002. BioMAS: a
multi-agent system for genomic annotation. International Journal of Cooperative Information Systems
11, 3 (2002), 265–292.
Keith Decker and Victor R. Lesser. 1992. Generalizing the Partial Global Planning Algorithm. International
Journal of Cooperative Information Systems 2, 2 (1992), 319–346.
Marie desJardins and Michael Wolverton. 1999. Coordinating a Distributed Planning System. AI Magazine
20, 4 (1999), 45–53.
Marie E. desJardins, Edmund H. Durfee, Charles L. Ortiz, and Michael J. Wolverton. 1999. A survey of
research in distributed continual planning. AI Magazine 20, 4 (1999), 13–22.
Yannis Dimopoulos, Muhammad A. Hashmi, and Pavlos Moraitis. 2012. µ-SATPLAN: Multi-agent planning
as satisfiability. Knowledge-Based Systems 29 (2012), 54–62.
Jürgen Dix, Héctor Muñoz-Avila, Dana S. Nau, and Lingling Zhang. 2003. IMPACTing SHOP: Putting an
AI Planner Into a Multi-Agent Environment. Annals of Mathematics and Artificial Intelligence 37, 4
(2003), 381–407.
Edmund H. Durfee. 1999. Distributed problem solving and planning. In Gerhard Weiss (Ed.), Multiagent Systems. The MIT Press, 118–149.
Edmund H. Durfee and Victor Lesser. 1991. Partial Global Planning: A Coordination Framework for Distributed Hypothesis Formation. IEEE Transactions on Systems, Man, and Cybernetics, Special Issue on
Distributed Sensor Networks 21, 5 (1991), 1167–1183.
Eithan Ephrati and Jeffrey S. Rosenschein. 1994. Divide and Conquer in Multi-Agent Planning. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI). 375–380.
Eithan Ephrati and Jeffrey S. Rosenschein. 1997. A Heuristic Technique for Multi-Agent Planning. Annals
of Mathematics and Artificial Intelligence 20, 1-4 (1997), 13–67.
Eric Fabre, Loı̈g Jezequel, Patrik Haslum, and Sylvie Thiébaux. 2010. Cost-Optimal Factored Planning:
Promises and Pitfalls. In Proceedings of the 20th International Conference on Automated Planning and
Scheduling (ICAPS). 65–72.
Boi Faltings, Thomas Léauté, and Adrian Petcu. 2008. Privacy guarantees through distributed constraint
satisfaction. In Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence
and Intelligent Agent Technology (WI-IAT), Vol. 2. IEEE, 350–358.
Richard Fikes and Nils J. Nilsson. 1971. STRIPS: A new approach to the application of theorem proving to
problem solving. Artificial Intelligence 2, 3 (1971), 189–208.
Daniel Fišer, Michal Štolba, and Antonı́n Komenda. 2015. MAPlan. In Proceedings of the Competition of
Distributed and Multi-Agent Planners (CoDMAP-15). 8–10.
Foundation for Intelligent Physical Agents. 2002. FIPA Interaction Protocol Specification.
http://www.fipa.org/repository/ips.php3. (2002).
Maria Fox and Derek Long. 2003. PDDL2.1: an Extension to PDDL for Expressing Temporal Planning
Domains. Journal of Artificial Intelligence Research 20 (2003), 61–124.
Malik Ghallab, Adele Howe, Craig Knoblock, Drew McDermott, Ashwin Ram, Manuela M. Veloso, Daniel
Weld, and David Wilkins. 1998. PDDL - The Planning Domain Definition Language. AIPS-98 Planning
Committee (1998).
Malik Ghallab, Dana Nau, and Paolo Traverso. 2004. Automated Planning. Theory and Practice. Morgan
Kaufmann.
Barbara J. Grosz, Luke Hunsberger, and Sarit Kraus. 1999. Planning and Acting Together. AI Magazine 20,
4 (1999), 23–34.
Malte Helmert. 2004. A Planning Heuristic Based on Causal Graph Analysis. Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS) (2004), 161–170.
Malte Helmert. 2006. The Fast Downward planning system. Journal of Artificial Intelligence Research 26, 1
(2006), 191–246.
Malte Helmert and Carmel Domshlak. 2009. Landmarks, Critical Paths and Abstractions: What’s the Difference Anyway?. In Proceedings of the 19th International Conference on Automated Planning and Scheduling (ICAPS). 162–169.
Malte Helmert, Patrik Haslum, and Jörg Hoffmann. 2007. Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the 17th International Conference on Automated Planning and
Scheduling (ICAPS). 176–183.
Jörg Hoffmann and Bernhard Nebel. 2001. The FF Planning System: Fast Planning Generation Through
Heuristic Search. Journal of Artificial Intelligence Research 14 (2001), 253–302.
Jörg Hoffmann, Julie Porteous, and Laura Sebastiá. 2004. Ordered landmarks in planning. Journal of Artificial Intelligence Research 22 (2004), 215–278.
Jan Hrncı́r, Michael Rovatsos, and Michal Jakob. 2015. Ridesharing on Timetabled Transport Services: A
Multiagent Planning Approach. Journal of Intelligent Transportation Systems 19, 1 (2015), 89–105.
Loı̈g Jezequel and Eric Fabre. 2012. A#: A distributed version of A* for factored planning. In Proceedings of
the 51th IEEE Conference on Decision and Control, (CDC). 7377–7382.
Anders Jonsson and Michael Rovatsos. 2011. Scaling Up Multiagent Planning: A Best-Response Approach.
In Proceedings of the 21st International Conference on Automated Planning and Scheduling (ICAPS).
AAAI, 114–121.
Jaume Jordán and Eva Onaindia. 2015. Game-Theoretic Approach for Non-Cooperative Planning. In Proceedings of the 29th Conference on Artificial Intelligence (AAAI). 1357–1363.
Froduald Kabanza, Lu Shuyun, and Scott Goodwin. 2004. Distributed Hierarchical Task Planning on a
Network of Clusters. In Proceedings of the 16th International Conference on Parallel and Distributed
Computing and Systems (PDCS). 139–140.
Henry A. Kautz. 2006. Deconstructing planning as satisfiability. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI). AAAI Press, 1524.
Elena Kelareva, Olivier Buffet, Jinbo Huang, and Sylvie Thiébaux. 2007. Factored Planning Using Decomposition Trees. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI).
1942–1947.
Antonı́n Komenda, Michal Stolba, and Daniel L Kovacs. 2016. The International Competition of Distributed
and Multiagent Planners (CoDMAP). AI Magazine 37, 3 (2016), 109–115.
Daniel L. Kovacs. 2012. A Multi-Agent Extension of PDDL3.1. In Proceedings of the 3rd Workshop on the
International Planning Competition (IPC). 19–27.
Jonas Kvarnström. 2011. Planning for Loosely Coupled Agents Using Partial Order Forward-Chaining.
In Proceedings of the 21st International Conference on Automated Planning and Scheduling (ICAPS).
AAAI, 138–145.
Victor Lesser, Keith Decker, Thomas Wagner, Norman Carver, Alan Garvey, Bryan Horling, Daniel Neiman,
Rodion Podorozhny, M. Nagendra Prasad, Anita Raja, Regis Vincent, Ping Xuan, and X. Q. Zhang. 2004.
Evolution of the GPGP/TAEMS domain-independent coordination framework. Autonomous Agents and
Multi-Agent Systems 9, 1-2 (2004), 87–143.
Derek Long, Henry Kautz, Bart Selman, Blai Bonet, Hector Geffner, Jana Koehler, Michael Brenner, Joerg
Hoffmann, Frank Rittinger, Corin R. Anderson, Daniel S. Weld, David E. Smith, Maria Fox, and Derek
Long. 2000. The AIPS-98 planning competition. AI Magazine 21, 2 (2000), 13–33.
Nerea Luis and Daniel Borrajo. 2014. Plan Merging by Reuse for Multi-Agent Planning. In Proceedings of
the 2nd ICAPS Workshop on Distributed and Multi-Agent Planning (DMAP). 38–44.
Nerea Luis and Daniel Borrajo. 2015. PMR: Plan Merging by Reuse. In Proceedings of the Competition of
Distributed and Multi-Agent Planners (CoDMAP-15). 11–13.
Shlomi Maliah, Ronen I. Brafman, and Guy Shani. 2017. Increased Privacy with Reduced Communication
in Multi-Agent Planning. In Proceedings of the 27th International Conference on Automated Planning
and Scheduling (ICAPS). 209–217.
Shlomi Maliah, Guy Shani, and Roni Stern. 2014. Privacy Preserving Landmark Detection. In Proceedings
of the 21st European Conference on Artificial Intelligence (ECAI). 597–602.
Shlomi Maliah, Guy Shani, and Roni Stern. 2016. Collaborative privacy preserving multi-agent planning.
Autonomous Agents and Multi-Agent Systems (2016), 1–38.
Felipe Meneguzzi and Lavindra de Silva. 2015. Planning in BDI agents: a survey of the integration of planning algorithms and agent reasoning. The Knowledge Engineering Review 30, 1 (2015), 1–44.
Christian Muise, Nir Lipovetzky, and Miquel Ramirez. 2015. MAP-LAPKT: Omnipotent Multi-Agent Planning via Compilation to Classical Planning. In Proceedings of the Competition of Distributed and MultiAgent Planners (CoDMAP-15). 14–16.
Dana S. Nau, Tsz-Chiu Au, Okhtay Ilghami, Ugur Kuter, J. William Murdock, Dan Wu, and Fusun Yaman.
2003. SHOP2: An HTN Planning System. Journal of Artificial Intelligence Research 20 (2003), 379–404.
Raz Nissim and Ronen I. Brafman. 2012. Multi-agent A* for parallel and distributed systems. In Proceedings
of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). 1265–
1266.
Raz Nissim and Ronen I. Brafman. 2013. Cost-Optimal Planning by Self-Interested Agents. In Proceedings
of the 27th Conference on Artificial Intelligence (AAAI).
Raz Nissim and Ronen I. Brafman. 2014. Distributed Heuristic Forward Search for Multi-agent Planning.
Journal of Artificial Intelligence Research 51 (2014), 293–332.
Raz Nissim, Ronen I. Brafman, and Carmel Domshlak. 2010. A general, fully distributed multi-agent planning algorithm. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). 1323–1330.
Sergio Pajares and Eva Onaindia. 2013. Context-aware multi-agent planning in intelligent environments.
Information Sciences 227 (2013), 22–42.
Michal Pechoucek, Martin Rehák, Petr Charvát, Tomáš Vlcek, and Michal Kolar. 2007. Agent-based approach to mass-oriented production planning: case study. IEEE Transactions on Systems, Man, and
Cybernetics, Part C 37, 3 (2007), 386–395.
Damien Pellier. 2010. Distributed Planning through Graph Merging. In Proceedings of the 2nd International
Conference on Agents and Artificial Intelligence (ICAART 2010). 128–134.
Miquel Ramirez, Nir Lipovetzky, and Christian Muise. 2015. Lightweight Automated Planning ToolKiT.
http://lapkt.org/. (2015).
Prashant P. Reddy and Manuela M. Veloso. 2011. Strategy learning for autonomous agents in smart grid
markets. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI).
1446–1451.
Silvia Richter and Matthias Westphal. 2010. The LAMA planner: Guiding cost-based anytime planning with
landmarks. Journal of Artificial Intelligence Research 39, 1 (2010), 127–177.
Valentin Robu, Han Noot, Han La Poutré, and Willem-Jan van Schijndel. 2011. A multi-agent platform
for auction-based allocation of loads in transportation logistics. Expert Systems with Applications 38, 4
(2011), 3483–3491.
Óscar Sapena, Eva Onaindia, Antonio Garrido, and Marlene Arangú. 2008. A distributed CSP approach for
collaborative planning systems. Engineering Applications of Artificial Intelligence 21, 5 (2008), 698–709.
Emilio Serrano, Jose M. Such, Juan A. Botı́a, and Ana Garcı́a-Fornes. 2013. Strategies for avoiding preference profiling in agent-based e-commerce environments. Applied Intelligence (2013), 1–16.
Sven Seuken and Shlomo Zilberstein. 2008. Formal models and algorithms for decentralized decision making
under uncertainty. Autonomous Agents and Multi-Agent Systems 17, 2 (2008), 190–250.
Guy Shani, Shlomi Maliah, and Roni Stern. 2016. Stronger Privacy Preserving Projections for Multi-Agent
Planning. In Proceedings of the 26th International Conference on Automated Planning and Scheduling
(ICAPS). 221–229.
Claude E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal 27,
3 (1948), 379–423.
Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, and Dana Nau. 2004. HTN Planning for Web Service
Composition using SHOP2. Journal of Web Semantics 1, 4 (2004), 377–396.
Sarath Sreedharan, Yu Zhang, and Subbarao Kambhampati. 2015. A First Multi-agent Planner for Required Cooperation (MARC). In Proceedings of the Competition of Distributed and Multi-Agent Planners
(CoDMAP-15). 17–20.
Michal Štolba, Daniel Fišer, and Antonı́n Komenda. 2015. Admissible Landmark Heuristic for Multi-Agent
Planning. In Proceedings of the 25th International Conference on Automated Planning and Scheduling
(ICAPS). 211–219.
Michal Štolba and Antonı́n Komenda. 2014. Relaxation Heuristics for Multiagent Planning. In Proceedings
of the 24th International Conference on Automated Planning and Scheduling (ICAPS). 298–306.
Michal Štolba and Antonı́n Komenda. 2015. MADLA: Planning with Distributed and Local Search. In Proceedings of the Competition of Distributed and Multi-Agent Planners (CoDMAP-15). 21–24.
Michal Štolba, Jan Tožička, and Antonı́n Komenda. 2016. Quantifying Privacy Leakage in Multi-Agent
Planning. Proceedings of the 4rd ICAPS Workshop on Distributed and Multi-Agent Planning (DMAP)
(2016), 80–88.
Jose M. Such, Ana Garcı́a-Fornes, Agustı́n Espinosa, and Joan Bellver. 2012. Magentix2: A privacyenhancing agent platform. Engineering Applications of Artificial Intelligence (2012), 96–109.
Milind Tambe. 1997. Towards Flexible Teamwork. Journal of Artificial Intelligence Research 7 (1997), 83–
124.
Alejandro Torreño, Eva Onaindia, and Óscar Sapena. 2012. An approach to multi-agent planning with incomplete information. In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI),
Vol. 242. IOS Press, 762–767.
Alejandro Torreño, Eva Onaindia, and Óscar Sapena. 2014a. A Flexible Coupling Approach to Multi-Agent
Planning under Incomplete Information. Knowledge and Information Systems 38, 1 (2014), 141–178.
Alejandro Torreño, Eva Onaindia, and Óscar Sapena. 2014b. FMAP: Distributed cooperative multi-agent
planning. Applied Intelligence 41, 2 (2014), 606–626.
Alejandro Torreño, Eva Onaindia, and Óscar Sapena. 2015. Global Heuristics for Distributed Cooperative
Multi-Agent Planning. In Proceedings of the 25th International Conference on Automated Planning and
Scheduling (ICAPS). 225–233.
Alejandro Torreño, Óscar Sapena, and Eva Onaindia. 2015. MH-FMAP: Alternating Global Heuristics in
Multi-Agent Planning. In Proceedings of the Competition of Distributed and Multi-Agent Planners
(CoDMAP-15). 25–28.
Jan Tožička, Jan Jakubuv, and Antonı́n Komenda. 2015. PSM-based Planners Description for CoDMAP 2015
Competition. In Proceedings of the Competition of Distributed and Multi-Agent Planners (CoDMAP-15).
29–32.
Jan Tožička, Jan Jakubuv, Antonı́n Komenda, and Michal Pěchouček. 2016. Privacy-concerned multiagent
planning. Knowledge and Information Systems 48, 3 (2016), 581–618.
Jan Tožička, Michal Štolba, and Antonı́n Komenda. 2017. The Limits of Strong Privacy Preserving MultiAgent Planning. In Proceedings of the 27th International Conference on Automated Planning and
Scheduling (ICAPS). 221–229.
Roman van der Krogt. 2007. Privacy Loss in Classical Multiagent Planning. In Proceedings of the
IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT). 168–174.
Roman van der Krogt. 2009. Quantifying privacy in multiagent planning. Multiagent and Grid Systems 5, 4
(2009), 451–469.
David E. Wilkins. 1988. Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann.
David E. Wilkins and Karen L. Myers. 1998. A multiagent planning architecture. In Proceedings of the 4th
International Conference on Artificial Intelligence Planning Systems (AIPS). 154–162.
Michael Wolverton and Marie desJardins. 1998. Controlling Communication in Distributed Planning Using
Irrelevance Reasoning. In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI).
868–874.
Michael Wooldridge. 1997. Agent-Based Software Engineering. IEE Proceedings - Software Engineering 144,
1 (1997), 26–37.
Yu Zhang and Subbarao Kambhampati. 2014. A Formal Analysis of Required Cooperation in Multi-agent
Planning. CoRR abs/1404.5643 (2014). http://arxiv.org/abs/1404.5643
Received August 2016; revised April 2017; accepted July 2017
ACM Computing Surveys, Vol. 50, No. 6, Article 84, Publication date: November 2017.
| 2 |
arXiv:1407.4546v7 [] 28 Sep 2016
The Annals of Statistics
2016, Vol. 44, No. 5, 1931–1956
DOI: 10.1214/15-AOS1375
© Institute of Mathematical Statistics, 2016
CRAMÉR-TYPE MODERATE DEVIATIONS FOR STUDENTIZED
TWO-SAMPLE U -STATISTICS WITH APPLICATIONS
By Jinyuan Chang1,∗,† , Qi-Man Shao2,‡ and Wen-Xin Zhou3,§,†
Southwestern University of Finance and Economics,∗ University of
Melbourne,† Chinese University of Hong Kong‡ and Princeton University§
Two-sample U-statistics are widely used in a broad range of applications, including those in the fields of biostatistics and econometrics. In this paper, we establish sharp Cramér-type moderate deviation theorems for Studentized two-sample U-statistics in a general framework, including the two-sample t-statistic and Studentized Mann–Whitney test statistic as prototypical examples. In particular, a refined moderate deviation theorem with second-order accuracy is established for the two-sample t-statistic. These results extend the applicability of the existing statistical methodologies from the one-sample t-statistic to more general nonlinear statistics. Applications to two-sample large-scale multiple testing problems with false discovery rate control and the regularized bootstrap method are also discussed.
Tribute: Peter was a brilliant and prolific researcher, who has made enormously influential contributions to mathematical statistics and probability theory. Peter had extraordinary knowledge of analytic techniques that he often applied with ingenious simplicity
to tackle complex statistical problems. His work and service have had a profound impact
on statistics and the statistical community. Peter was a generous mentor and friend with
a warm heart and keen to help the young generation. Jinyuan Chang and Wen-Xin Zhou
are extremely grateful for the opportunity to learn from and work with Peter in the last
two years at the University of Melbourne. Even in his final year, he still afforded time to guide us. We will always treasure the time we spent with him. Qi-Man Shao is deeply grateful for all the help and support that Peter provided during the various stages of his
career. Peter will be dearly missed and forever remembered as our mentor and friend.
Received June 2015.
1 Supported in part by the Fundamental Research Funds for the Central Universities (Grant No. JBK160159, JBK150501), NSFC (Grant No. 11501462), the Center of Statistical Research at SWUFE and the Australian Research Council.
2 Supported by Hong Kong Research Grants Council GRF 603710 and 403513.
3 Supported by NIH Grant R01-GM100474-4 and a grant from the Australian Research Council.
AMS 2000 subject classifications. Primary 60F10, 62E17; secondary 62E20, 62F40, 62H15.
Key words and phrases. Bootstrap, false discovery rate, Mann–Whitney U test, multiple hypothesis testing, self-normalized moderate deviation, Studentized statistics, two-sample t-statistic, two-sample U-statistics.
This is an electronic reprint of the original article published by the
Institute of Mathematical Statistics in The Annals of Statistics,
2016, Vol. 44, No. 5, 1931–1956. This reprint differs from the original in
pagination and typographic detail.
1. Introduction. The U -statistic is one of the most commonly used nonlinear and nonparametric statistics, and its asymptotic theory has been well
studied since the seminal paper of Hoeffding (1948). U -statistics extend the
scope of parametric estimation to more complex nonparametric problems
and provide a general theoretical framework for statistical inference. We refer to Koroljuk and Borovskich (1994) for a systematic presentation of the
theory of U -statistics, and to Kowalski and Tu (2007) for more recently
discovered methods and contemporary applications of U -statistics.
Applications of U -statistics can also be found in high dimensional statistical inference and estimation, including the simultaneous testing of many
different hypotheses, feature selection and ranking, the estimation of high
dimensional graphical models and sparse, high dimensional signal detection. In the context of high dimensional hypothesis testing, for example,
several new methods based on U -statistics have been proposed and studied
in Chen and Qin (2010), Chen, Zhang and Zhong (2010) and Zhong and
Chen (2011). Moreover, Li et al. (2012) and Li, Zhong and Zhu (2012) employed U -statistics to construct independence feature screening procedures
for analyzing ultrahigh dimensional data.
Due to heteroscedasticity, the measurements across disparate subjects
may differ significantly in scale for each feature. To standardize for scale,
unknown nuisance parameters are always involved and a natural approach
is to use Studentized, or self-normalized statistics. The noteworthy advantage of Studentization is that compared to standardized statistics, Studentized ratios take heteroscedasticity into account and are more robust against
heavy-tailed data. The theoretical and numerical studies in Delaigle, Hall
and Jin (2011) and Chang, Tang and Wu (2013, 2016) evidence the importance of using Studentized statistics in high dimensional data analysis. As
noted in Delaigle, Hall and Jin (2011), a careful study of the moderate
deviations in the Studentized ratios is indispensable to understanding the
common statistical procedures used in analyzing high dimensional data.
Further, it is now known that the theory of Cramér-type moderate deviations for Studentized statistics quantifies the accuracy of the estimated
p-values, which is crucial in the study of large-scale multiple tests for controlling the false discovery rate [Fan, Hall and Yao (2007), Liu and Shao
(2010)]. In particular, Cramér-type moderate deviation results can be used
to investigate the robustness and accuracy properties of p-values and critical
values in multiple testing procedures. However, thus far, most applications
have been confined to t-statistics [Fan, Hall and Yao (2007), Wang and Hall
(2009), Delaigle, Hall and Jin (2011), Cao and Kosorok (2011)]. It is conjectured in Fan, Hall and Yao (2007) that analogues of the theoretical properties
of these statistical methodologies remain valid for other resampling methods based on Studentized statistics. Motivated by the above applications,
we are attempting to develop a unified theory on moderate deviations for
more general Studentized nonlinear statistics, in particular, for two-sample
U -statistics.
The asymptotic properties of the standardized U -statistics are extensively
studied in the literature, whereas significant developments are achieved in
the past decade for one-sample Studentized U -statistics. We refer to Wang,
Jing and Zhao (2000) and the references therein for Berry–Esseen-type
bounds and Edgeworth expansions. The results for moderate deviations can
be found in Vandemaele and Veraverbeke (1985), Lai, Shao and Wang (2011)
and Shao and Zhou (2016). The results in Shao and Zhou (2016) paved the
way for further applications of statistical methodologies using Studentized
U -statistics in high dimensional data analysis.
Two-sample U -statistics are also commonly used to compare the different (treatment) effects of two groups, such as an experimental group and
a control group, in scientifically controlled experiments. However, due to
the structural complexities, the theoretical properties of two-sample U-statistics have not been well studied. In this paper, we establish a Cramér-type moderate deviation theorem in a general framework for Studentized two-sample U-statistics, especially the two-sample t-statistic and the Studentized Mann–Whitney test. In particular, a refined moderate deviation theorem with second-order accuracy is established for the two-sample t-statistic.
The paper is organized as follows. In Section 2, we present the main results on Cramér-type moderate deviations for Studentized two-sample U-statistics, as well as a refined result for the two-sample t-statistic. In Section 3, we investigate statistical applications of our theoretical results to the problem of simultaneously testing many different hypotheses, based particularly on two-sample t-statistics and Studentized Mann–Whitney tests. Section 4 presents numerical studies. A discussion is given in Section 5. All the
proofs are relegated to the supplementary material [Chang, Shao and Zhou
(2016)].
2. Moderate deviations for Studentized U -statistics. We use the following notation throughout this paper. For two sequences of real numbers an
and bn , we write an ≍ bn if there exist two positive constants c1 , c2 such that
c1 ≤ an /bn ≤ c2 for all n ≥ 1, we write an = O(bn ) if there is a constant C
such that |an | ≤ C|bn | holds for all sufficiently large n, and we write an ∼ bn
and an = o(bn ), respectively, if limn→∞ an /bn = 1 and limn→∞ an /bn = 0.
Moreover, for two real numbers a and b, we write for ease of presentation
that a ∨ b = max(a, b) and a ∧ b = min(a, b).
2.1. A review of Studentized one-sample U -statistics. We start with a
brief review of Cramér-type moderate deviation for Studentized one-sample
U -statistics. For an integer s ≥ 2 and for n > 2s, let X1 , . . . , Xn be independent and identically distributed (i.i.d.) random variables taking values in a
metric space (X, G), and let h : Xd 7→ R be a symmetric Borel measurable
function. Hoeffding’s U -statistic with a kernel h of degree s is defined as
Un =
X
1
n
s
h(Xi1 , . . . , Xis ),
1≤i1 <···<is ≤n
which is an unbiased estimate of θ = E{h(X1 , . . . , Xs )}. In particular, we
focus on the case where X is the Euclidean space Rr for some integer r ≥ 1.
When r ≥ 2, write Xi = (Xi1 , . . . , Xir )T for i = 1, . . . , n.
Let
$h_1(x) = E\{h(X_1, \ldots, X_s) \mid X_1 = x\}$ for any $x = (x_1, \ldots, x_r)^T \in R^r$,
and
$\sigma^2 = \mathrm{var}\{h_1(X_1)\}, \qquad v_h^2 = \mathrm{var}\{h(X_1, X_2, \ldots, X_s)\}.$
Assume that $0 < \sigma^2 < \infty$; then the standardized nondegenerate U-statistic is given by
$Z_n = \frac{n^{1/2}}{s\sigma}(U_n - \theta).$
Because σ is usually unknown, we are interested in the following Studentized U-statistic:
(2.1) $\hat{U}_n = \frac{n^{1/2}}{s\hat{\sigma}}(U_n - \theta),$
where $\hat{\sigma}^2$ denotes the leave-one-out jackknife estimator of $\sigma^2$ given by
$\hat{\sigma}^2 = \frac{n-1}{(n-s)^2}\sum_{i=1}^{n}(q_i - U_n)^2$ with $q_i = \binom{n-1}{s-1}^{-1} \sum_{\substack{1 \le \ell_1 < \cdots < \ell_{s-1} \le n \\ \ell_j \ne i \text{ for each } j = 1, \ldots, s-1}} h(X_i, X_{\ell_1}, \ldots, X_{\ell_{s-1}}).$
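To make the construction in (2.1) concrete, here is a minimal sketch in Python (ours, not from the paper) that computes the Studentized U-statistic with its leave-one-out jackknife variance for a degree-2 kernel; the Gini mean-difference kernel and all function names below are illustrative choices.

import numpy as np
from itertools import combinations

def gini_kernel(x1, x2):
    # Gini's mean difference kernel, degree s = 2 (see the table below)
    return abs(x1 - x2)

def studentized_u(x, theta, kernel=gini_kernel, s=2):
    """Studentized one-sample U-statistic (2.1) with the leave-one-out
    jackknife variance estimator; written for a degree-2 kernel."""
    n = len(x)
    # U_n: average of the kernel over all index pairs i1 < i2
    u_n = np.mean([kernel(x[i], x[j]) for i, j in combinations(range(n), 2)])
    # q_i: average of the kernel over the n-1 pairs containing observation i
    q = np.array([np.mean([kernel(x[i], x[l]) for l in range(n) if l != i])
                  for i in range(n)])
    sigma2_hat = (n - 1) / (n - s) ** 2 * np.sum((q - u_n) ** 2)
    return np.sqrt(n) * (u_n - theta) / (s * np.sqrt(sigma2_hat))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=50)
# theta = E|X1 - X2| = 2 when X ~ Exp(mean 2), so the output should be
# approximately standard normal
print(studentized_u(x, theta=2.0))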
Shao and Zhou (2016) established a general Cramér-type moderate deviation theorem for Studentized nonlinear statistics, in particular for Studentized U -statistics.
Theorem 2.1. Assume that $v_p := [E\{|h_1(X_1) - \theta|^p\}]^{1/p} < \infty$ for some $2 < p \le 3$. Suppose that there are constants $c_0 \ge 1$ and $\kappa \ge 0$ such that, for all $x_1, \ldots, x_s \in R$,
(2.2) $\{h(x_1, \ldots, x_s) - \theta\}^2 \le c_0\Big[\kappa\sigma^2 + \sum_{i=1}^{s}\{h_1(x_i) - \theta\}^2\Big].$
Then there exist constants $C, c > 0$ depending only on $s$ such that
$\frac{P(\hat{U}_n \ge x)}{1 - \Phi(x)} = 1 + O(1)\{(v_p/\sigma)^p(1+x)^p n^{1-p/2} + (a_s^{1/2} + v_h/\sigma)(1+x)^3 n^{-1/2}\}$
holds uniformly for $0 \le x \le c\min\{(\sigma/v_p)n^{1/2-1/p}, (n/a_s)^{1/6}\}$, where $|O(1)| \le C$ and $a_s = \max(c_0\kappa, c_0 + s)$. In particular, we have
$\frac{P(\hat{U}_n \ge x)}{1 - \Phi(x)} \to 1$
holds uniformly in $x \in [0, o(n^{1/2-1/p}))$.
Condition (2.2) is satisfied for a large class of U -statistics. Below are some
examples.
Statistic                          Kernel function                                     c0    κ
t-statistic                        h(x1, x2) = 0.5(x1 + x2)                            2     0
Sample variance                    h(x1, x2) = 0.5(x1 − x2)^2                          10    (θ/σ)^2
Gini's mean difference             h(x1, x2) = |x1 − x2|                               8     (θ/σ)^2
One-sample Wilcoxon's statistic    h(x1, x2) = I{x1 + x2 ≤ 0}                          1     σ^{−2}
Kendall's τ                        h(x1, x2) = 2 I{(x22 − x21)(x12 − x11) > 0}         1     σ^{−2}
2.2. Studentized two-sample U -statistics. Let X = {X1 , . . . , Xn1 } and Y =
{Y1 , . . . , Yn2 } be two independent random samples, where X is drawn from
a probability distribution P and Y is drawn from another probability distribution Q. With s1 and s2 being two positive integers, let
h(x1 , . . . , xs1 ; y1 , . . . , ys2 )
be a kernel function of order (s1 , s2 ) which is real and symmetric both in its
first s1 variates and in its last s2 variates. It is known that a nonsymmetric kernel can always be replaced with a symmetrized version by averaging
across all possible rearrangements of the indices.
Set $\theta := E\{h(X_1, \ldots, X_{s_1}; Y_1, \ldots, Y_{s_2})\}$, and let
$U_{\bar{n}} = \binom{n_1}{s_1}^{-1}\binom{n_2}{s_2}^{-1} \sum_{1 \le i_1 < \cdots < i_{s_1} \le n_1}\; \sum_{1 \le j_1 < \cdots < j_{s_2} \le n_2} h(X_{i_1}, \ldots, X_{i_{s_1}}; Y_{j_1}, \ldots, Y_{j_{s_2}})$
be the two-sample U -statistic, where n̄ = (n1 , n2 ). To lighten the notation,
we write Xi1 ,...,iℓ = (Xi1 , . . . , Xiℓ ), Yj1 ,...,jk = (Yj1 , . . . , Yjk ) such that
h(Xi1 ,...,iℓ ; Yj1 ,...,jk ) = h(Xi1 , . . . , Xiℓ ; Yj1 , . . . , Yjk ),
and define
(2.3) $h_1(x) = E\{h(X_{1,\ldots,s_1}; Y_{1,\ldots,s_2}) \mid X_1 = x\}, \qquad h_2(y) = E\{h(X_{1,\ldots,s_1}; Y_{1,\ldots,s_2}) \mid Y_1 = y\}.$
Also let $v_h^2 = \mathrm{var}\{h(X_{1,\ldots,s_1}; Y_{1,\ldots,s_2})\}$, $\sigma_1^2 = \mathrm{var}\{h_1(X_i)\}$, $\sigma_2^2 = \mathrm{var}\{h_2(Y_j)\}$ and
(2.4) $\sigma^2 = \sigma_1^2 + \sigma_2^2, \qquad \sigma_{\bar{n}}^2 = s_1^2\sigma_1^2 n_1^{-1} + s_2^2\sigma_2^2 n_2^{-1}.$
For the standardized two-sample U -statistic of the form σn̄−1 (Un̄ − θ), a
uniform Berry–Esseen bound of order O{(n1 ∧ n2 )−1/2 } was obtained by
Helmers and Janssen (1982) and Borovskich (1983). Using a concentration
inequality approach, Chen and Shao (2007) proved a refined uniform bound
and also established an optimal nonuniform Berry–Esseen bound. For large
deviation asymptotics of two-sample U -statistics, we refer to Nikitin and
Ponikarov (2006) and the references therein.
Here, we are interested in the following Studentized two-sample U -statistic:
(2.5) $\hat{U}_{\bar{n}} = \hat{\sigma}_{\bar{n}}^{-1}(U_{\bar{n}} - \theta)$ with $\hat{\sigma}_{\bar{n}}^2 = s_1^2\hat{\sigma}_1^2 n_1^{-1} + s_2^2\hat{\sigma}_2^2 n_2^{-1}$,
where
$\hat{\sigma}_1^2 = \frac{1}{n_1-1}\sum_{i=1}^{n_1}\Big(q_i - \frac{1}{n_1}\sum_{i=1}^{n_1} q_i\Big)^2, \qquad \hat{\sigma}_2^2 = \frac{1}{n_2-1}\sum_{j=1}^{n_2}\Big(p_j - \frac{1}{n_2}\sum_{j=1}^{n_2} p_j\Big)^2$
and
$q_i = \binom{n_1-1}{s_1-1}^{-1}\binom{n_2}{s_2}^{-1} \sum_{\substack{1 \le i_2 < \cdots < i_{s_1} \le n_1 \\ i_\ell \ne i,\ \ell = 2, \ldots, s_1}}\; \sum_{1 \le j_1 < \cdots < j_{s_2} \le n_2} h(X_{i, i_2, \ldots, i_{s_1}}; Y_{j_1, \ldots, j_{s_2}}),$
$p_j = \binom{n_1}{s_1}^{-1}\binom{n_2-1}{s_2-1}^{-1} \sum_{1 \le i_1 < \cdots < i_{s_1} \le n_1}\; \sum_{\substack{1 \le j_2 < \cdots < j_{s_2} \le n_2 \\ j_k \ne j,\ k = 2, \ldots, s_2}} h(X_{i_1, \ldots, i_{s_1}}; Y_{j, j_2, \ldots, j_{s_2}}).$
Note that $\hat{\sigma}_1^2$ and $\hat{\sigma}_2^2$ are leave-one-out jackknife estimators of $\sigma_1^2$ and $\sigma_2^2$, respectively.
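As an illustration of (2.5), the following sketch (Python; our own, and restricted to kernels of order (s1, s2) = (1, 1), where the combinatorics collapse to row and column averages) computes the Studentized two-sample U-statistic; the Mann–Whitney kernel in the usage line is one admissible choice.

import numpy as np

def studentized_two_sample_u(x, y, kernel, theta=0.0):
    """Studentized two-sample U-statistic (2.5), specialized to a kernel
    of order (s1, s2) = (1, 1)."""
    n1, n2 = len(x), len(y)
    h = kernel(x[:, None], y[None, :])   # n1 x n2 matrix of h(X_i; Y_j)
    u = h.mean()                         # U_{bar n}
    q = h.mean(axis=1)                   # q_i: average over the Y-sample
    p = h.mean(axis=0)                   # p_j: average over the X-sample
    sigma1_hat2 = q.var(ddof=1)          # jackknife estimate of sigma_1^2
    sigma2_hat2 = p.var(ddof=1)          # jackknife estimate of sigma_2^2
    sigma_n = np.sqrt(sigma1_hat2 / n1 + sigma2_hat2 / n2)
    return (u - theta) / sigma_n

# Example: the Mann-Whitney kernel h(x; y) = I{x <= y} - 1/2 with theta = 0
mw_kernel = lambda x, y: (x <= y).astype(float) - 0.5
rng = np.random.default_rng(1)
print(studentized_two_sample_u(rng.normal(size=50), rng.normal(size=30), mw_kernel))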
2.2.1. Moderate deviations for $\hat{U}_{\bar{n}}$. For $p > 2$, let
(2.6) $v_{1,p} = [E\{|h_1(X_1) - \theta|^p\}]^{1/p}$ and $v_{2,p} = [E\{|h_2(Y_1) - \theta|^p\}]^{1/p}.$
Moreover, put $s = s_1 \vee s_2$, $\bar{n} = (n_1, n_2)$, $n = n_1 \wedge n_2$ and
$\lambda_{\bar{n}} = v_h\Big(\frac{n_1 + n_2}{\sigma_1^2 n_2 + \sigma_2^2 n_1}\Big)^{1/2}$ with $v_h^2 = \mathrm{var}\{h(X_{1,\ldots,s_1}; Y_{1,\ldots,s_2})\}.$
The following result gives a Cramér-type moderate deviation for $\hat{U}_{\bar{n}}$ given in (2.5) under mild assumptions. A self-contained proof can be found in the supplementary material [Chang, Shao and Zhou (2016)].
Theorem 2.2. Assume that there are constants $c_0 \ge 1$ and $\kappa \ge 0$ such that
(2.7) $\{h(x; y) - \theta\}^2 \le c_0\Big[\kappa\sigma^2 + \sum_{i=1}^{s_1}\{h_1(x_i) - \theta\}^2 + \sum_{j=1}^{s_2}\{h_2(y_j) - \theta\}^2\Big]$
for all $x = (x_1, \ldots, x_{s_1})$ and $y = (y_1, \ldots, y_{s_2})$, where $\sigma^2$ is given in (2.4). Assume that $v_{1,p}$ and $v_{2,p}$ are finite for some $2 < p \le 3$. Then there exist constants $C, c > 0$ independent of $n_1$ and $n_2$ such that
(2.8) $\frac{P(\hat{U}_{\bar{n}} \ge x)}{1 - \Phi(x)} = 1 + O(1)\Big\{\sum_{\ell=1}^{2}\frac{v_{\ell,p}^p(1+x)^p}{\sigma_\ell^p n_\ell^{p/2-1}} + (a_s^{1/2} + \lambda_{\bar{n}})(1+x)^3\Big(\frac{n_1+n_2}{n_1 n_2}\Big)^{1/2}\Big\}$
holds uniformly for
$0 \le x \le c\min\big[(\sigma_1/v_{1,p})n_1^{1/2-1/p},\ (\sigma_2/v_{2,p})n_2^{1/2-1/p},\ a_s^{-1/6}\{n_1 n_2/(n_1+n_2)\}^{1/6}\big],$
where $|O(1)| \le C$ and $a_s = \max(c_0\kappa, c_0 + s)$. In particular, as $n \to \infty$,
(2.9) $\frac{P(\hat{U}_{\bar{n}} \ge x)}{1 - \Phi(x)} \to 1$
holds uniformly in $x \in [0, o(n^{1/2-1/p}))$.
Theorem 2.2 exhibits the dependence between the range of uniform convergence of the relative error in the central limit theorem and the optimal moment conditions. In particular, if $p = 3$, the region becomes $0 \le x \le O(n^{1/6})$. See Theorem 2.3 in Jing, Shao and Wang (2003) for similar results on self-normalized sums. Under higher order moment conditions, it is not clear if our technique can be adapted to provide a better approximation for the tail probability $P(\hat{U}_{\bar{n}} \ge x)$ for $x$ lying between $n^{1/6}$ and $n^{1/2}$ in order.
It is also worth noticing that many commonly used kernels in nonparametric statistics turn out to be linear combinations of the indicator functions
and, therefore, satisfy condition (2.7) immediately.
2.2.2. Two-sample t-statistic. As a prototypical example of two-sample
U -statistics, the two-sample t-statistic is of significant interest due to its
wide applicability. The advantage of using t-tests, either one-sample or two-sample, is their high degree of robustness against heavy-tailed data in which
the sampling distribution has only a finite third or fourth moment. The robustness of the t-statistic is useful in high dimensional data analysis under
the sparsity assumption on the signal of interest. When dealing with two
experimental groups, which are typically independent, in scientifically controlled experiments, the two-sample t-statistic is one of the most commonly
used statistics for hypothesis testing and constructing confidence intervals
for the difference between the means of the two groups.
Let X = {X1 , . . . , Xn1 } be a random sample from a one-dimensional population with mean µ1 and variance σ12 , and let Y = {Y1 , . . . , Yn2 } be a random
sample from another one-dimensional population with mean µ2 and variance
σ22 independent of X . The two-sample t-statistic is defined as
$\hat{T}_{\bar{n}} = \frac{\bar{X} - \bar{Y}}{\sqrt{\hat{\sigma}_1^2 n_1^{-1} + \hat{\sigma}_2^2 n_2^{-1}}},$
where $\bar{n} = (n_1, n_2)$, $\bar{X} = n_1^{-1}\sum_{i=1}^{n_1} X_i$, $\bar{Y} = n_2^{-1}\sum_{j=1}^{n_2} Y_j$ and
$\hat{\sigma}_1^2 = \frac{1}{n_1-1}\sum_{i=1}^{n_1}(X_i - \bar{X})^2, \qquad \hat{\sigma}_2^2 = \frac{1}{n_2-1}\sum_{j=1}^{n_2}(Y_j - \bar{Y})^2.$
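A minimal sketch of this statistic (Python; our own illustration, with hypothetical function and variable names) is:

import numpy as np

def two_sample_t(x, y):
    """Two-sample t-statistic with unpooled variances, as defined above."""
    n1, n2 = len(x), len(y)
    s1_hat2 = x.var(ddof=1)   # unbiased sample variance of the X-sample
    s2_hat2 = y.var(ddof=1)   # unbiased sample variance of the Y-sample
    return (x.mean() - y.mean()) / np.sqrt(s1_hat2 / n1 + s2_hat2 / n2)

rng = np.random.default_rng(2)
print(two_sample_t(rng.normal(size=50), rng.standard_t(df=4, size=30)))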
The following result is a direct consequence of Theorem 2.2.
Theorem 2.3. Assume that $\mu_1 = \mu_2$, and $E(|X_1|^p) < \infty$, $E(|Y_1|^p) < \infty$ for some $2 < p \le 3$. Then there exist absolute constants $C, c > 0$ such that
$\frac{P(\hat{T}_{\bar{n}} \ge x)}{1 - \Phi(x)} = 1 + O(1)(1+x)^p\sum_{\ell=1}^{2}(v_{\ell,p}/\sigma_\ell)^p n_\ell^{1-p/2}$
holds uniformly for $0 \le x \le c\min_{\ell=1,2}\{(\sigma_\ell/v_{\ell,p})n_\ell^{1/2-1/p}\}$, where $|O(1)| \le C$ and $v_{1,p} = \{E(|X_1 - \mu_1|^p)\}^{1/p}$, $v_{2,p} = \{E(|Y_1 - \mu_2|^p)\}^{1/p}$.
Motivated by a series of recent studies on the effectiveness and accuracy
of multiple-hypothesis testing using t-tests, we investigate whether a higher
order expansion of the relative error, as in Theorem 1.2 of Wang (2005) for
self-normalized sums, holds for the two-sample t-statistic, so that one can
use bootstrap calibration to correct skewness [Fan, Hall and Yao (2007),
Delaigle, Hall and Jin (2011)] or study power properties against sparse alternatives [Wang and Hall (2009)]. The following theorem gives a refined
Cramér-type moderate deviation result for Tbn̄ , whose proof is placed in the
supplementary material [Chang, Shao and Zhou (2016)].
Theorem 2.4. Assume that $\mu_1 = \mu_2$. Let $\gamma_1 = E\{(X_1 - \mu_1)^3\}$ and $\gamma_2 = E\{(Y_1 - \mu_2)^3\}$ be the third central moments of $X_1$ and $Y_1$, respectively. Moreover, assume that $E(|X_1|^p) < \infty$, $E(|Y_1|^p) < \infty$ for some $3 < p \le 4$. Then
(2.10) $\frac{P(\hat{T}_{\bar{n}} \ge x)}{1 - \Phi(x)} = \exp\Big\{-\frac{(\gamma_1 n_1^{-2} - \gamma_2 n_2^{-2})\,x^3}{3(\sigma_1^2 n_1^{-1} + \sigma_2^2 n_2^{-1})^{3/2}}\Big\} \times \Big[1 + O(1)\sum_{\ell=1}^{2}\Big\{\frac{v_{\ell,3}^3(1+x)}{\sigma_\ell^3 n_\ell^{1/2}} + \frac{v_{\ell,p}^p(1+x)^p}{\sigma_\ell^p n_\ell^{p/2-1}}\Big\}\Big]$
holds uniformly for
(2.11) $0 \le x \le c\min_{\ell=1,2}\min\{(\sigma_\ell/v_{\ell,3})^3 n_\ell^{1/2},\ (\sigma_\ell/v_{\ell,p})n_\ell^{1/2-1/p}\},$
where $|O(1)| \le C$ and, for every $q \ge 1$, $v_{1,q} = \{E(|X_1 - \mu_1|^q)\}^{1/q}$, $v_{2,q} = \{E(|Y_1 - \mu_2|^q)\}^{1/q}$.
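The leading exponential factor in (2.10) suggests an explicit skewness correction to the normal tail approximation for $\hat{T}_{\bar{n}}$. The sketch below (Python; our own illustration) evaluates such a corrected tail probability with the population moments replaced by sample moments, a plug-in choice made here for illustration rather than a recipe taken from the paper.

import numpy as np
from scipy.stats import norm

def corrected_tail(x, sample_x, sample_y):
    """Skewness-corrected approximation to P(T_hat >= x) based on the
    leading exponential factor in (2.10), with moments replaced by their
    sample counterparts (an illustrative plug-in choice)."""
    n1, n2 = len(sample_x), len(sample_y)
    s1, s2 = sample_x.var(ddof=1), sample_y.var(ddof=1)
    g1 = np.mean((sample_x - sample_x.mean()) ** 3)   # third central moment, X-sample
    g2 = np.mean((sample_y - sample_y.mean()) ** 3)   # third central moment, Y-sample
    correction = np.exp(-(g1 / n1**2 - g2 / n2**2) * x**3
                        / (3.0 * (s1 / n1 + s2 / n2) ** 1.5))
    return correction * norm.sf(x)   # norm.sf(x) = 1 - Phi(x)

rng = np.random.default_rng(3)
print(corrected_tail(2.0, rng.exponential(2.0, size=50), rng.exponential(2.0, size=30)))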
A refined Cramér-type moderate deviation theorem for the one-sample t-statistic was established in Wang (2011), which, to our knowledge, is the best result known to date for the t-statistic or, equivalently, for self-normalized sums.
2.2.3. More examples of two-sample U -statistics. Beyond the two-sample
t-statistic, we enumerate three more well-known two-sample U -statistics
and refer to Nikitin and Ponikarov (2006) for more examples. Let X =
{X1 , . . . , Xn1 } and Y = {Y1 , . . . , Yn2 } be two independent random samples
from population distributions P and Q, respectively.
Example 2.1 (The Mann–Whitney test statistic). The kernel h is of order $(s_1, s_2) = (1, 1)$, defined as
$h(x; y) = I\{x \le y\} - 1/2$ with $\theta = P(X_1 \le Y_1) - 1/2,$
and in view of (2.3),
$h_1(x) = 1/2 - G(x), \qquad h_2(y) = F(y) - 1/2.$
In particular, if $F \equiv G$, we have $\sigma_1^2 = \sigma_2^2 = 1/12$.
Example 2.2 (The Lehmann statistic). The kernel h is of order $(s_1, s_2) = (2, 2)$, defined as
$h(x_1, x_2; y_1, y_2) = I\{|x_1 - x_2| \le |y_1 - y_2|\} - 1/2$
with $\theta = P(|X_1 - X_2| \le |Y_1 - Y_2|) - 1/2$. Then under $H_0: \theta = 0$, $E\{h(X_1, X_2; Y_1, Y_2)\} = 0$, and
$h_1(x) = G(x)\{1 - G(x)\} - 1/6, \qquad h_2(y) = F(y)\{F(y) - 1\} + 1/6.$
In particular, if $F \equiv G$, then $\sigma_1^2 = \sigma_2^2 = 1/180$.
Example 2.3 (The Kochar statistic). The Kochar statistic was constructed by Kochar (1979) to test if the two hazard failure rates are different.
Denote by F the class of all absolutely continuous cumulative distribution
functions (CDF) $F(\cdot)$ satisfying $F(0) = 0$. For two arbitrary CDFs $F, G \in \mathcal{F}$, let $f = F'$ and $g = G'$ denote their densities. Thus, the hazard failure rates are defined by
$r_F(t) = \frac{f(t)}{1 - F(t)}, \qquad r_G(t) = \frac{g(t)}{1 - G(t)},$
as long as both 1 − F (t) and 1 − G(t) are positive. Kochar (1979) considered the problem of testing the null hypothesis H0 : rF (t) = rG (t) against
the alternative H1 : rF (t) ≤ rG (t), t ≥ 0 with strict inequality over a set of
nonzero measure. Observe that $H_1$ holds if and only if $\delta(s, t) = \bar{F}(s)\bar{G}(t) - \bar{F}(t)\bar{G}(s) \ge 0$ for $s \ge t \ge 0$, with strict inequality over a set of nonzero measure, where $\bar{F}(\cdot) := 1 - F(\cdot)$ for any $F \in \mathcal{F}$.
Recall that X1 , . . . , Xn1 and Y1 , . . . , Yn2 are two independent samples
drawn respectively from F and G. Following Nikitin and Ponikarov (2006),
we see that
η(F ; G) = E{δ(X ∨ Y, X ∧ Y )}
= P(Y1 ≤ Y2 ≤ X1 ≤ X2 ) + P(X1 ≤ Y1 ≤ Y2 ≤ X2 )
− P(X1 ≤ X2 ≤ Y1 ≤ Y2 ) − P(Y1 ≤ X1 ≤ X2 ≤ Y2 ).
Under H0 , η(F ; G) = 0 while under H1 , η(F ; G) > 0. The U -statistic with
the kernel of order (s1 , s2 ) = (2, 2) is given by
h(x1 , x2 ; y1 , y2 ) = I{yyxx or xyyx} − I{xxyy or yxxy}.
Here, the term “yyxx” refers to y1 ≤ y2 ≤ x1 ≤ x2 and similar treatments
apply to xyyx, xxyy and yxxy. Under H0 : rF (t) = rG (t), we have
h1 (x) = −4G3 (x)/3 + 4G2 (x) − 2G(x),
h2 (y) = 4F 3 (y)/3 − 4F 2 (y) + 2F (y).
In particular, if F ≡ G, then σ12 = σ22 = 8/105.
3. Multiple testing via Studentized two-sample tests. Multiple-hypothesis
testing occurs in a wide range of applications including DNA microarray
experiments, functional magnetic resonance imaging analysis (fMRI) and
astronomical surveys. We refer to Dudoit and van der Laan (2008) for a systematic study of the existing multiple testing procedures. In this section, we
consider multiple-hypothesis testing based on Studentized two-sample tests
and show how the theoretical results in the previous section can be applied
to these problems.
3.1. Two-sample t-test. A typical application of multiple-hypothesis testing in high dimensions is the analysis of gene expression microarray data.
To see whether each gene in isolation behaves differently in a control group
versus an experimental group, we can apply the two-sample t-test. Assume
that the statistical model is given by
(3.1) $X_{i,k} = \mu_{1k} + \varepsilon_{i,k}, \quad i = 1, \ldots, n_1; \qquad Y_{j,k} = \mu_{2k} + \omega_{j,k}, \quad j = 1, \ldots, n_2,$
for $k = 1, \ldots, m$, where index k denotes the kth gene, i and j indicate the ith and jth array, and the constants $\mu_{1k}$ and $\mu_{2k}$, respectively, represent the mean effects for the kth gene from the first and the second groups. For each k, $\varepsilon_{1,k}, \ldots, \varepsilon_{n_1,k}$ (resp., $\omega_{1,k}, \ldots, \omega_{n_2,k}$) are independent random variables with mean zero and variance $\sigma_{1k}^2 > 0$ (resp., $\sigma_{2k}^2 > 0$). For the kth marginal test, when the population variances $\sigma_{1k}^2$ and $\sigma_{2k}^2$ are unequal, the two-sample t-statistic is most commonly used to carry out hypothesis testing for the null $H_{0k}: \mu_{1k} = \mu_{2k}$ against the alternative $H_{1k}: \mu_{1k} \ne \mu_{2k}$.
Since the seminal work of Benjamini and Hochberg (1995), the Benjamini–Hochberg (B–H) procedure has become a popular technique in microarray data analysis for gene selection; like many other procedures, it depends on p-values that often need to be estimated. To control certain simultaneous errors, it has been shown that using approximated p-values is asymptotically equivalent to using the true p-values for controlling the k-familywise error rate (k-FWER) and the false discovery rate (FDR). See, for example, Kosorok and Ma (2007), Fan, Hall and Yao (2007) and Liu and Shao (2010) for one-sample tests. Cao and Kosorok (2011) proposed an alternative method to control k-FWER and FDR in both large-scale one- and two-sample t-tests. A common thread among the aforementioned literature is that, theoretically, for the methods to work in controlling FDR at a
given level, the number of features m and the sample size n should satisfy
log m = o(n1/3 ).
Recently, Liu and Shao (2014) proposed a regularized bootstrap correction
method for multiple one-sample t-tests so that the constraint on m may
be relaxed to log m = o(n^{1/2}) under less stringent moment conditions than those
assumed in Fan, Hall and Yao (2007) and Delaigle, Hall and Jin (2011). Using
Theorem 2.4, we show that the constraint on m in large scale two-sample
t-tests can be relaxed to log m = o(n1/2 ) as well. This provides theoretical
justification of the effectiveness of the bootstrap method which is frequently
used for skewness correction.
To illustrate the main idea, here we restrict our attention to the special
case in which the observations are independent. Indeed, when test statistics are correlated, false discovery control becomes very challenging under
arbitrary dependence. Various dependence structures have been considered
in the literature. See, for example, Benjamini and Yekutieli (2001), Storey,
Taylor and Siegmund (2004), Ferreira and Zwinderman (2006), Leek and
Storey (2008), Friguet, Kloareg and Causeur (2009) and Fan, Han and Gu
(2012), among others. For completeness, we generalize the results to the
dependent case in Section 3.1.3.
3.1.1. Normal calibration and phase transition. Consider the large-scale
significance testing problem:
H0k : µ1k = µ2k
versus
H1k : µ1k 6= µ2k ,
1 ≤ k ≤ m.
Let V and R denote, respectively, the number of false rejections and the
number of total rejections. The well-known false discovery proportion (FDP)
is defined as the ratio FDP = V / max(1, R), and FDR is the expected FDP,
that is, E{V / max(1, R)}. Benjamini and Hochberg (1995) proposed a
distribution-free method for choosing a p-value threshold that controls the
FDR at a prespecified level α, where 0 < α < 1. For k = 1, . . . , m, let pk be
the marginal p-value of the kth test, and let p(1) ≤ · · · ≤ p(m) be the order
statistics of p1 , . . . , pm . For a predetermined control level α ∈ (0, 1), the B–H
procedure rejects hypotheses for which pk ≤ p(k̂) , where
(3.2) $\hat{k} = \max\Big\{0 \le k \le m : p_{(k)} \le \frac{\alpha k}{m}\Big\}$ with $p_{(0)} = 0$.
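The step-up rule (3.2) is straightforward to implement; the sketch below (Python; our own illustration) returns the indices of the rejected hypotheses given a vector of (estimated) p-values and a level α.

import numpy as np

def benjamini_hochberg(pvals, alpha):
    """B-H procedure: reject all hypotheses with p_k <= p_(k_hat), where
    k_hat is defined in (3.2)."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    below = np.nonzero(sorted_p <= alpha * np.arange(1, m + 1) / m)[0]
    if below.size == 0:          # k_hat = 0: no rejections
        return np.array([], dtype=int)
    k_hat = below[-1] + 1        # largest k with p_(k) <= alpha*k/m
    return order[:k_hat]         # indices of the k_hat smallest p-values

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.27, 0.6])
print(benjamini_hochberg(pvals, alpha=0.05))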
In microarray analysis, two-sample t-tests are often used to identify differentially expressed genes between two groups. Let
$T_k = \frac{\bar{X}_k - \bar{Y}_k}{\sqrt{\hat{\sigma}_{1k}^2 n_1^{-1} + \hat{\sigma}_{2k}^2 n_2^{-1}}}, \qquad k = 1, \ldots, m,$
where $\bar{X}_k = n_1^{-1}\sum_{i=1}^{n_1} X_{i,k}$, $\bar{Y}_k = n_2^{-1}\sum_{j=1}^{n_2} Y_{j,k}$ and
$\hat{\sigma}_{1k}^2 = \frac{1}{n_1-1}\sum_{i=1}^{n_1}(X_{i,k} - \bar{X}_k)^2, \qquad \hat{\sigma}_{2k}^2 = \frac{1}{n_2-1}\sum_{j=1}^{n_2}(Y_{j,k} - \bar{Y}_k)^2.$
Here and below, $\{X_{i,1}, \ldots, X_{i,m}\}_{i=1}^{n_1}$ and $\{Y_{j,1}, \ldots, Y_{j,m}\}_{j=1}^{n_2}$ are independent random samples from $\{X_1, \ldots, X_m\}$ and $\{Y_1, \ldots, Y_m\}$, respectively, generated according to model (3.1), which are usually non-Gaussian in practice.
Moreover, assume that the sample sizes of the two samples are of the same
order, that is, n1 ≍ n2 .
Before stating the main results, we first introduce some notation. Set $H_0 = \{1 \le k \le m : \mu_{1k} = \mu_{2k}\}$, let $m_0 = \#H_0$ denote the number of true null hypotheses and $m_1 = m - m_0$. Both $m = m(n_1, n_2)$ and $m_0 = m_0(n_1, n_2)$ are allowed to grow as $n = n_1 \wedge n_2$ increases. We assume that
$\lim_{n\to\infty}\frac{m_0}{m} = \pi_0 \in (0, 1].$
In line with the notation used in Section 2, set
$\sigma_{1k}^2 = \mathrm{var}(X_k), \quad \sigma_{2k}^2 = \mathrm{var}(Y_k), \quad \gamma_{1k} = E\{(X_k - \mu_{1k})^3\}, \quad \gamma_{2k} = E\{(Y_k - \mu_{2k})^3\}$
and $\sigma_{\bar{n},k}^2 = \sigma_{1k}^2 n_1^{-1} + \sigma_{2k}^2 n_2^{-1}$. Throughout this subsection, we focus on the normal calibration and let $\hat{p}_k = 2 - 2\Phi(|T_k|)$, where $\Phi(\cdot)$ is the standard normal distribution function. Indeed, the exact null distribution of $T_k$, and thus the true p-values, are unknown without the normality assumption.
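In practice the m statistics T_k and their normal-calibration p-values are computed jointly and then passed to the B–H rule (3.2); a vectorized sketch (Python; our own illustration) is:

import numpy as np
from scipy.stats import norm

def t_stats_and_pvalues(X, Y):
    """Per-feature two-sample t-statistics T_k and normal-calibration
    p-values p_hat_k = 2 - 2*Phi(|T_k|).
    X has shape (n1, m); Y has shape (n2, m)."""
    n1, n2 = X.shape[0], Y.shape[0]
    num = X.mean(axis=0) - Y.mean(axis=0)
    den = np.sqrt(X.var(axis=0, ddof=1) / n1 + Y.var(axis=0, ddof=1) / n2)
    T = num / den
    p_hat = 2.0 * norm.sf(np.abs(T))     # equals 2 - 2*Phi(|T_k|)
    return T, p_hat

rng = np.random.default_rng(4)
X = rng.standard_t(df=4, size=(50, 1000))
Y = rng.standard_t(df=4, size=(30, 1000))
T, p_hat = t_stats_and_pvalues(X, Y)
print(p_hat[:5])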
Theorem 3.1. Assume that $\{X_1, \ldots, X_m, Y_1, \ldots, Y_m\}$ are independent nondegenerate random variables; $n_1 \asymp n_2$, $m = m(n_1, n_2) \to \infty$ and $\log m = o(n^{1/2})$ as $n = n_1 \wedge n_2 \to \infty$. For independent random samples $\{X_{i,1}, \ldots, X_{i,m}\}_{i=1}^{n_1}$ and $\{Y_{j,1}, \ldots, Y_{j,m}\}_{j=1}^{n_2}$, suppose that
(3.3) $\min_{1\le k\le m}\min(\sigma_{1k}, \sigma_{2k}) \ge c > 0, \qquad \max_{1\le k\le m}\max\{E(\xi_k^4), E(\eta_k^4)\} \le C < \infty$
for some constants C and c, where $\xi_k = \sigma_{1k}^{-1}(X_k - \mu_{1k})$ and $\eta_k = \sigma_{2k}^{-1}(Y_k - \mu_{2k})$. Moreover, assume that
(3.4) $\#\{1 \le k \le m : |\mu_{1k} - \mu_{2k}| \ge 4(\log m)^{1/2}\sigma_{\bar{n},k}\} \to \infty$
as $n \to \infty$, and let
(3.5) $c_0 = \liminf_{n,m\to\infty}\Big(\frac{n^{1/2}}{m_0}\sum_{k\in H_0}\sigma_{\bar{n},k}^{-3}|\gamma_{1k}n_1^{-2} - \gamma_{2k}n_2^{-2}|\Big).$
(i) Suppose that log m = o(n1/3 ). Then as n → ∞, FDPΦ →P απ0 and
FDRΦ → απ0 .
(ii) Suppose that $c_0 > 0$, $\log m \ge c_1 n^{1/3}$ for some $c_1 > 0$ and that $\log m_1 = o(n^{1/3})$. Then there exists some constant $\beta \in (\alpha, 1]$ such that
$\lim_{n\to\infty} P(\mathrm{FDP}_\Phi \ge \beta) = 1$ and $\liminf_{n\to\infty}\mathrm{FDR}_\Phi \ge \beta.$
(iii) Suppose that $c_0 > 0$, $(\log m)/n^{1/3} \to \infty$ and $\log m_1 = o(n^{1/3})$. Then
as n → ∞, FDPΦ →P 1 and FDRΦ → 1.
Here, FDRΦ and FDPΦ denote, respectively, the FDR and the FDP of the
B–H procedure with pk replaced by pbk in (3.2).
Together, conclusions (i) and (ii) of Theorem 3.1 indicate that the number
of simultaneous tests can be as large as exp{o(n1/3 )} before the normal calibration becomes inaccurate. In particular, when n1 = n2 = n, the skewness
parameter $c_0$ given in (3.5) reduces to
$c_0 = \liminf_{m\to\infty}\frac{1}{m_0}\sum_{k\in H_0}\frac{|\gamma_{1k} - \gamma_{2k}|}{(\sigma_{1k}^2 + \sigma_{2k}^2)^{3/2}}.$
As noted in Liu and Shao (2014), the limiting behavior of the FDRΦ varies
in different regimes and exhibits interesting phase transition phenomena as
the dimension m grows as a function of (n1 , n2 ). The average of skewness c0
plays a crucial role. It is also worth noting that conclusions (ii) and (iii) hold
under the scenario π0 = 1, that is, m1 = o(m). This corresponds to the sparse
settings in applications such as gene detections. Under finite 4th moments of
Xk and Yk , the robustness of two-sample t-tests and the accuracy of normal
calibration in the FDR/FDP control have been investigated in Cao and
Kosorok (2011) when m1 /m → π1 ∈ (0, 1). This corresponds to the relatively
dense setting, and the sparse case that we considered above is not covered.
3.1.2. Bootstrap calibration and regularized bootstrap correction. In this
subsection, we first use the conventional bootstrap calibration to improve
the accuracy of FDR control based on the fact that the bootstrap approximation removes the skewness term that determines first-order inaccuracies
of the standard normal approximation. However, the validity of bootstrap
approximation requires the underlying distribution to be very light tailed,
which does not seem realistic in real-data applications. As pointed out in the genomics literature, gene expression data commonly have heavy tails, which violates the distributional assumptions needed for the conventional bootstrap approximation to work. Recently, Liu and Shao
(2014) proposed a regularized bootstrap method that is shown to be more
robust against the heavy tailedness of the underlying distribution and the
dimension m is allowed to be as large as exp{o(n1/2 )}.
Let $\mathcal{X}_{k,b}^\dagger = \{X_{1,k,b}^\dagger, \ldots, X_{n_1,k,b}^\dagger\}$, $\mathcal{Y}_{k,b}^\dagger = \{Y_{1,k,b}^\dagger, \ldots, Y_{n_2,k,b}^\dagger\}$, $b = 1, \ldots, B$, denote bootstrap samples drawn independently and uniformly, with replacement, from $\mathcal{X}_k = \{X_{1,k}, \ldots, X_{n_1,k}\}$ and $\mathcal{Y}_k = \{Y_{1,k}, \ldots, Y_{n_2,k}\}$, respectively. Let $T_{k,b}^\dagger$ be the two-sample t-statistic constructed from $\{X_{1,k,b}^\dagger - \bar{X}_k, \ldots, X_{n_1,k,b}^\dagger - \bar{X}_k\}$ and $\{Y_{1,k,b}^\dagger - \bar{Y}_k, \ldots, Y_{n_2,k,b}^\dagger - \bar{Y}_k\}$. Following Liu and Shao (2014), we use the empirical distribution
$F_{m,B}^\dagger(t) = \frac{1}{mB}\sum_{k=1}^{m}\sum_{b=1}^{B} I\{|T_{k,b}^\dagger| \ge t\}$
to approximate the null distribution, and thus the estimated p-values are given by $\hat{p}_{k,B} = F_{m,B}^\dagger(|T_k|)$. FDP$_B$ and FDR$_B$ denote, respectively, the FDP and the FDR of the B–H procedure with $p_k$ replaced by $\hat{p}_{k,B}$ in (3.2).
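A compact sketch of this bootstrap calibration (Python; our own illustration, with the number of replicates B and the resampling details chosen for simplicity rather than taken from the paper) pools all mB bootstrap statistics into a single empirical null:

import numpy as np

def bootstrap_pvalues(X, Y, B=200, seed=0):
    """Bootstrap calibration: p_hat_{k,B} = F_dagger_{m,B}(|T_k|), where the
    bootstrap t-statistics are built from samples recentered at the observed
    group means and pooled over all m features and B replicates."""
    rng = np.random.default_rng(seed)
    n1, m = X.shape
    n2 = Y.shape[0]

    def tstat(A, C):
        num = A.mean(axis=0) - C.mean(axis=0)
        den = np.sqrt(A.var(axis=0, ddof=1) / A.shape[0]
                      + C.var(axis=0, ddof=1) / C.shape[0])
        return num / den

    T_obs = tstat(X, Y)
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)     # recentered samples
    T_boot = np.empty((B, m))
    for b in range(B):
        # resample each feature (column) independently, with replacement
        xb = np.take_along_axis(Xc, rng.integers(0, n1, size=(n1, m)), axis=0)
        yb = np.take_along_axis(Yc, rng.integers(0, n2, size=(n2, m)), axis=0)
        T_boot[b] = tstat(xb, yb)
    pooled = np.sort(np.abs(T_boot).ravel())
    # F_dagger_{m,B}(|T_k|): pooled proportion of bootstrap |T| >= |T_k|
    return 1.0 - np.searchsorted(pooled, np.abs(T_obs), side="left") / pooled.size

rng = np.random.default_rng(1)
X = rng.exponential(2.0, size=(50, 200))
Y = rng.exponential(2.0, size=(30, 200))
print(bootstrap_pvalues(X, Y, B=100)[:5])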
The following result shows that the bootstrap calibration is accurate provided log m increases at a strictly slower rate than (n1 ∧ n2 )1/2 , and the
underlying distribution has sub-Gaussian tails.
Theorem 3.2. Assume the conditions in Theorem 3.1 hold and that
$\max_{1\le k\le m}\max\{E(e^{t_0\xi_k^2}), E(e^{t_0\eta_k^2})\} \le C < \infty$
for some constants $t_0, C > 0$.
(i) Suppose that log m = o(n1/3 ). Then as n → ∞, FDPB →P απ0 and
FDRB → απ0 .
(ii) Suppose that log m = o(n1/2 ) and m1 ≤ mρ for some ρ ∈ (0, 1). Then
as n → ∞, FDPB →P α and FDRB → α.
The sub-Gaussian condition in Theorem 3.2 is quite stringent in practice,
whereas it can hardly be weakened in general when the bootstrap method
is applied. In the context of family-wise error rate control, Fan, Hall and
Yao (2007) proved that the bootstrap calibration is accurate if the observed
data are bounded and log m = o(n1/2 ). The regularized bootstrap method,
however, adopts a similar idea based on trimmed estimators and is a two-step procedure that combines the truncation technique and the bootstrap
method.
First, define the trimmed samples
$\hat{X}_{i,k} = X_{i,k} I\{|X_{i,k}| \le \lambda_{1k}\}, \qquad \hat{Y}_{j,k} = Y_{j,k} I\{|Y_{j,k}| \le \lambda_{2k}\}$
for $i = 1, \ldots, n_1$, $j = 1, \ldots, n_2$, where $\lambda_{1k}$ and $\lambda_{2k}$ are regularized parameters to be specified. Let $\hat{\mathcal{X}}_{k,b}^\dagger = \{\hat{X}_{1,k,b}^\dagger, \ldots, \hat{X}_{n_1,k,b}^\dagger\}$ and $\hat{\mathcal{Y}}_{k,b}^\dagger = \{\hat{Y}_{1,k,b}^\dagger, \ldots, \hat{Y}_{n_2,k,b}^\dagger\}$, $b = 1, \ldots, B$, be the corresponding bootstrap samples drawn by sampling randomly, with replacement, from $\hat{\mathcal{X}}_k = \{\hat{X}_{1,k}, \ldots, \hat{X}_{n_1,k}\}$ and $\hat{\mathcal{Y}}_k = \{\hat{Y}_{1,k}, \ldots, \hat{Y}_{n_2,k}\}$, respectively. Next, let $\hat{T}_{k,b}^\dagger$ be the two-sample t-statistic constructed from $\{\hat{X}_{1,k,b}^\dagger - n_1^{-1}\sum_{i=1}^{n_1}\hat{X}_{i,k}, \ldots, \hat{X}_{n_1,k,b}^\dagger - n_1^{-1}\sum_{i=1}^{n_1}\hat{X}_{i,k}\}$ and $\{\hat{Y}_{1,k,b}^\dagger - n_2^{-1}\sum_{j=1}^{n_2}\hat{Y}_{j,k}, \ldots, \hat{Y}_{n_2,k,b}^\dagger - n_2^{-1}\sum_{j=1}^{n_2}\hat{Y}_{j,k}\}$. As in the previous procedure, define the estimated p-values by
$\hat{p}_{k,RB} = \hat{F}_{m,RB}^\dagger(|T_k|)$ with $\hat{F}_{m,RB}^\dagger(t) = \frac{1}{mB}\sum_{k=1}^{m}\sum_{b=1}^{B} I\{|\hat{T}_{k,b}^\dagger| \ge t\}.$
Let FDP$_{RB}$ and FDR$_{RB}$ denote the FDP and the FDR, respectively, of the B–H procedure with $p_k$ replaced by $\hat{p}_{k,RB}$ in (3.2).
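A sketch of the trimming step (Python; our own illustration) follows; the multiplicative constant in front of the rate from (3.7) below is a hypothetical choice, whereas the numerical study of Section 4 selects the regularized parameters by cross-validation as in Liu and Shao (2014). The trimmed samples are then fed to the same bootstrap routine as before, with recentering at the trimmed sample means.

import numpy as np

def trim_samples(X, Y, const=2.0):
    """Trim each observation at lambda_1 ~ (n1/log m)^{1/6} and
    lambda_2 ~ (n2/log m)^{1/6}, cf. (3.7). A single common threshold is
    used for all features here, although feature-specific choices are
    allowed; `const` is an illustrative constant."""
    n1, m = X.shape
    n2 = Y.shape[0]
    lam1 = const * (n1 / np.log(m)) ** (1.0 / 6.0)
    lam2 = const * (n2 / np.log(m)) ** (1.0 / 6.0)
    X_trim = X * (np.abs(X) <= lam1)   # X_hat_{i,k} = X_{i,k} I{|X_{i,k}| <= lam1}
    Y_trim = Y * (np.abs(Y) <= lam2)
    return X_trim, Y_trim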
Theorem 3.3. Assume the conditions in Theorem 3.1 hold and that
(3.6) $\max_{1\le k\le m}\max\{E(|X_k|^6), E(|Y_k|^6)\} \le C < \infty.$
The regularized parameters $(\lambda_{1k}, \lambda_{2k})$ are such that
(3.7) $\lambda_{1k} \asymp \Big(\frac{n_1}{\log m}\Big)^{1/6}$ and $\lambda_{2k} \asymp \Big(\frac{n_2}{\log m}\Big)^{1/6}.$
(i) Suppose that log m = o(n1/3 ). Then as n → ∞, FDPRB →P απ0 and
FDRRB → απ0 .
(ii) Suppose that log m = o(n1/2 ) and m1 ≤ mρ for some ρ ∈ (0, 1). Then
as n → ∞, FDPRB →P α and FDRRB → α.
In view of Theorem 3.3, the regularized bootstrap approximation is valid
under mild moment conditions that are significantly weaker than those required for the bootstrap method to work theoretically. The numerical performance will be investigated in Section 4. To highlight the main idea, a
self-contained proof of Theorem 3.1 is given in the supplementary material
[Chang, Shao and Zhou (2016)]. The proofs of Theorems 3.2 and 3.3 are
based on straightforward extensions of Theorems 2.2 and 3.1 in Liu and
Shao (2014), and thus are omitted.
3.1.3. FDR control under dependence. In this section, we generalize the results in the previous sections to the dependent case. Write $\varrho = n_1/n_2$. For every $k, \ell = 1, \ldots, m$, let $\sigma_k^2 = \sigma_{1k}^2 + \varrho\sigma_{2k}^2$ and define
(3.8) $r_{k\ell} = (\sigma_k\sigma_\ell)^{-1}\{\mathrm{cov}(X_k, X_\ell) + \varrho\,\mathrm{cov}(Y_k, Y_\ell)\},$
which characterizes the dependence between $(X_k, Y_k)$ and $(X_\ell, Y_\ell)$. In particular, when $n_1 = n_2$ and $\sigma_{1k}^2 = \sigma_{2k}^2$, we see that $r_{k\ell} = \frac{1}{2}\{\mathrm{corr}(X_k, X_\ell) + \mathrm{corr}(Y_k, Y_\ell)\}$. In this subsection, we impose the following conditions on the dependence structure of $X = (X_1, \ldots, X_m)^T$ and $Y = (Y_1, \ldots, Y_m)^T$.
(D1) There exist constants $0 < r < 1$, $0 < \rho < (1-r)/(1+r)$ and $b_1 > 0$ such that
$\max_{1\le k\ne\ell\le m}|r_{k\ell}| \le r$ and $\max_{1\le k\le m} s_k(m) \le b_1 m^\rho,$
where, for $k = 1, \ldots, m$,
$s_k(m) = \#\{1 \le \ell \le m : \mathrm{corr}(X_k, X_\ell) \ge (\log m)^{-2-\gamma} \text{ or } \mathrm{corr}(Y_k, Y_\ell) \ge (\log m)^{-2-\gamma}\}$
for some $\gamma > 0$.
(D2) There exist constants $0 < r < 1$, $0 < \rho < (1-r)/(1+r)$ and $b_1 > 0$ such that $\max_{1\le k\ne\ell\le m}|r_{k\ell}| \le r$ and, for each $X_k$, the number of variables $X_\ell$ that are dependent on $X_k$ is less than $b_1 m^\rho$.
The assumption $\max_{1\le k\ne\ell\le m}|r_{k\ell}| \le r$ for some $0 < r < 1$ imposes a constraint on the magnitudes of the correlations, which is natural in the sense that the correlation matrix $R = (r_{k\ell})_{1\le k,\ell\le m}$ is singular if $\max_{1\le k\ne\ell\le m}|r_{k\ell}| = 1$. Under condition (D1), each $(X_k, Y_k)$ is allowed to be "moderately" correlated with at most as many as $O(m^\rho)$ other vectors. Condition (D2) enforces
a local dependence structure on the data, saying that each vector is dependent with at most as many as O(mρ ) other random vectors and independent
of the remaining ones. The following theorem extends the results in previous sections to the dependence case. Its proof is placed in the supplementary
material [Chang, Shao and Zhou (2016)].
Theorem 3.4. Assume that either condition (D1) holds with log m =
O(n1/8 ) or condition (D2) holds with log m = o(n1/3 ).
(i) Suppose that (3.3) and (3.4) are satisfied. Then as n → ∞, FDPΦ →P
απ0 and FDRΦ → απ0 .
(ii) Suppose that (3.3), (3.6) and (3.7) are satisfied. Then as n → ∞,
FDPRB →P απ0 and FDRRB → απ0 .
In particular, assume that condition (D2) holds with log m = o(n1/2 ) and
m1 ≤ mc for some 0 < c < 1. Then as n → ∞, FDPRB →P απ0 and FDRRB →
απ0 .
3.2. Studentized Mann–Whitney test. Let X = {X1 , . . . , Xn1 } and Y =
{Y1 , . . . , Yn2 } be two independent random samples from distributions F and
G, respectively. Let θ = P(X ≤ Y ) − 1/2. Consider the null hypothesis H0 :
θ = 0 against the one-sided alternative H1 : θ > 0. This problem arises in
many applications including testing whether the physiological performance
of an active drug is better than that under the control treatment, and testing
the effects of a policy, such as unemployment insurance or a vocational
training program, on the level of unemployment.
The Mann–Whitney (M–W) test [Mann and Whitney (1947)], also known
as the two-sample Wilcoxon test [Wilcoxon (1945)], is prevalently used for
testing equality of means or medians, and serves as a nonparametric alternative to the two-sample t-test. The corresponding test statistic is given
by
(3.9) $U_{\bar{n}} = \frac{1}{n_1 n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} I\{X_i \le Y_j\}, \qquad \bar{n} = (n_1, n_2).$
The M–W test is used in a wide range of fields, including statistics, economics and biomedicine, due to its good efficiency and robustness
against parametric assumptions. Over one-third of the articles published
in Experimental Economics use the Mann–Whitney test and Okeh (2009)
reported that thirty percent of the articles in five biomedical journals published in 2004 used the Mann–Whitney test. For example, using the M–W U
test, Charness and Gneezy (2009) developed an experiment to test the conjecture that financial incentives help to foster good habits. They recorded
seven biometric measures (weight, body fat percentage, waist size, etc.) of
each participant before and after the experiment to assess the improvements
across treatments. Although the M–W test was originally introduced as a
rank statistic to test if the distributions of two related samples are identical, it has been prevalently used for testing equality of medians or means,
sometimes as an alternative to the two-sample t-test.
It was argued and formally examined recently in Chung and Romano
(2016) that the M–W test has generally been misused across disciplines. In
fact, the M–W test is only valid if the underlying distributions of the two
groups are identical. Nevertheless, when the purpose is to test the equality of
distributions, it is recommended to use a statistic, such as the Kolmogorov–
Smirnov or the Cramér–von Mises statistic, that captures the discrepancies
of the entire distributions rather than an individual parameter. More specifically, because the M–W test only recognizes deviation from θ = 0, it does
not have much power in detecting overall distributional discrepancies. Alternatively, the M–W test is frequently used to test the equality of medians.
However, Chung and Romano (2013) presented evidence that this is another
improper application of the M–W test and suggested to use the Studentized
median test.
Even when the M–W test is appropriately applied for testing H0 : θ = 0,
the asymptotic variance depends on the underlying distributions, unless
the two population distributions are identical. As Hall and Wilson (1991)
pointed out, the application of resampling to pivotal statistics has better
asymptotic properties in the sense that the rate of convergence of the actual
significance level to the nominal significance level is more rapid when the
pivotal statistics are resampled. Therefore, it is natural to use the Studentized Mann–Whitney test, which is asymptotically pivotal.
Let
(3.10) $\hat{U}_{\bar{n}} = \hat{\sigma}_{\bar{n}}^{-1}(U_{\bar{n}} - 1/2)$
denote the Studentized test statistic for $U_{\bar{n}}$ as in (3.9), where $\hat{\sigma}_{\bar{n}}^2 = \hat{\sigma}_1^2 n_1^{-1} + \hat{\sigma}_2^2 n_2^{-1}$,
$\hat{\sigma}_1^2 = \frac{1}{n_1-1}\sum_{i=1}^{n_1}\Big(q_i - \frac{1}{n_1}\sum_{i=1}^{n_1} q_i\Big)^2, \qquad \hat{\sigma}_2^2 = \frac{1}{n_2-1}\sum_{j=1}^{n_2}\Big(p_j - \frac{1}{n_2}\sum_{j=1}^{n_2} p_j\Big)^2$
with $q_i = n_2^{-1}\sum_{j=1}^{n_2} I\{Y_j < X_i\}$ and $p_j = n_1^{-1}\sum_{i=1}^{n_1} I\{X_i \le Y_j\}$.
When dealing with samples from a large number of geographical regions (suburbs, states, health service areas, etc.), one may need to make
many statistical inferences simultaneously. Suppose we observe a family
of paired groups, that is, for k = 1, . . . , m, Xk = {X1,k , . . . , Xn1 ,k }, Yk =
{Y1,k , . . . , Yn2 ,k }, where the index k denotes the kth site. Assume that Xk is
drawn from Fk , and independently, Yk is drawn from Gk .
For each k = 1, . . . , m, we test the null hypothesis H0k : θk = P(X1,k ≤
Y1,k )−1/2 = 0 against the one-sided alternative H1k : θk > 0. If H0k is rejected,
we conclude that the treatment effect (of a drug or a policy) is acting within
the kth area. Define the test statistic
$\hat{U}_{\bar{n},k} = \hat{\sigma}_{\bar{n},k}^{-1}(U_{\bar{n},k} - 1/2),$
where $\hat{U}_{\bar{n},k}$ is constructed from the kth paired samples according to (3.10).
Let
$F_{\bar{n},k}(t) = P(\hat{U}_{\bar{n},k} \le t \mid H_0^k)$ and $\Phi(t) = P(Z \le t),$
where Z is a standard normal random variable. Then the true p-values are $p_k = 1 - F_{\bar{n},k}(\hat{U}_{\bar{n},k})$, and $\hat{p}_k = 1 - \Phi(\hat{U}_{\bar{n},k})$ denote the estimated p-values based on normal calibration.
To identify areas where the treatment effect is acting, we can use the
B–H method to control the FDR at α level by rejecting the null hypotheses
indexed by $S = \{1 \le k \le m : \hat{p}_k \le \hat{p}_{(\hat{k})}\}$, where $\hat{k} = \max\{1 \le k \le m : \hat{p}_{(k)} \le \alpha k/m\}$, and $\{\hat{p}_{(k)}\}$ denote the ordered values of $\{\hat{p}_k\}$. As before, let FDR$_\Phi$
be the FDR of the B–H method based on normal calibration.
Alternative to normal calibration, we can also consider bootstrap calibration. Recall that $\mathcal{X}_{k,b}^\dagger = \{X_{1,k,b}^\dagger, \ldots, X_{n_1,k,b}^\dagger\}$ and $\mathcal{Y}_{k,b}^\dagger = \{Y_{1,k,b}^\dagger, \ldots, Y_{n_2,k,b}^\dagger\}$, $b = 1, \ldots, B$, are two bootstrap samples drawn independently and uniformly, with replacement, from $\mathcal{X}_k = \{X_{1,k}, \ldots, X_{n_1,k}\}$ and $\mathcal{Y}_k = \{Y_{1,k}, \ldots, Y_{n_2,k}\}$, respectively. For each $k = 1, \ldots, m$, let $\hat{U}_{\bar{n},k,b}^\dagger$ be the bootstrapped test statistic constructed from $\mathcal{X}_{k,b}^\dagger$ and $\mathcal{Y}_{k,b}^\dagger$, that is,
$\hat{U}_{\bar{n},k,b}^\dagger = \hat{\sigma}_{\bar{n},k,b}^{-1}\Big[U_{\bar{n},k,b} - \frac{1}{n_1 n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} I\{X_{i,k} \le Y_{j,k}\}\Big],$
where $U_{\bar{n},k,b}$ and $\hat{\sigma}_{\bar{n},k,b}$ are the analogues of $U_{\bar{n}}$ given in (3.9) and $\hat{\sigma}_{\bar{n}}$ specified below (3.10), with $X_i$ and $Y_j$ replaced by $X_{i,k,b}^\dagger$ and $Y_{j,k,b}^\dagger$, respectively. Using the empirical distribution function
$\hat{G}_{m,B}^\dagger(t) = \frac{1}{mB}\sum_{k=1}^{m}\sum_{b=1}^{B} I\{|\hat{U}_{\bar{n},k,b}^\dagger| \le t\},$
we estimate the unknown p-values by $\hat{p}_{k,B} = 1 - \hat{G}_{m,B}^\dagger(\hat{U}_{\bar{n},k})$. For a predetermined $\alpha \in (0, 1)$, the null hypotheses indexed by $S_B = \{1 \le k \le m : \hat{p}_{k,B} \le \hat{p}_{(\hat{k}_B),B}\}$ are rejected, where $\hat{k}_B = \max\{0 \le k \le m : \hat{p}_{(k),B} \le \alpha k/m\}$. Denote by FDR$_B$ the FDR of the B–H method based on bootstrap calibration.
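The sketch below (Python; our own illustration) shows the two distinctive steps of this calibration, namely centering each bootstrap statistic at the observed U_{n,k} rather than at 1/2, and pooling the mB bootstrap statistics into one empirical null; function names and the value of B are ours.

import numpy as np

def studentized_mw(x, y):
    """Studentized Mann-Whitney statistic (3.10)."""
    n1, n2 = len(x), len(y)
    u = (x[:, None] <= y[None, :]).mean()
    q = (y[None, :] < x[:, None]).mean(axis=1)   # q_i = n2^{-1} sum_j I{Y_j < X_i}
    p = (x[:, None] <= y[None, :]).mean(axis=0)  # p_j = n1^{-1} sum_i I{X_i <= Y_j}
    sigma_n = np.sqrt(q.var(ddof=1) / n1 + p.var(ddof=1) / n2)
    return u, sigma_n, (u - 0.5) / sigma_n

def bootstrap_mw_pvalues(X, Y, B=200, seed=0):
    """Bootstrap calibration for the m Studentized M-W statistics: each
    bootstrap statistic is centered at the observed U_{n,k}."""
    rng = np.random.default_rng(seed)
    n1, m = X.shape
    n2 = Y.shape[0]
    u_obs = np.empty(m)
    stat_obs = np.empty(m)
    boot = np.empty((B, m))
    for k in range(m):
        u_obs[k], _, stat_obs[k] = studentized_mw(X[:, k], Y[:, k])
        for b in range(B):
            xb = X[rng.integers(0, n1, size=n1), k]
            yb = Y[rng.integers(0, n2, size=n2), k]
            ub, sb, _ = studentized_mw(xb, yb)
            boot[b, k] = (ub - u_obs[k]) / sb      # centered at the observed U
    pooled = np.sort(np.abs(boot).ravel())
    # p_hat_{k,B} = 1 - G_dagger(U_hat_k): fraction of pooled |boot| > U_hat_k
    return 1.0 - np.searchsorted(pooled, stat_obs, side="right") / pooled.size

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 20))
Y = rng.normal(size=(30, 20))
print(bootstrap_mw_pvalues(X, Y, B=50)[:5])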
Applying the general moderate deviation result (2.9) to the Studentized Mann–Whitney statistics $\hat{U}_{\bar{n},k}$ leads to the following result. The proof is based on
a straightforward adaptation of the arguments we used in the proof of Theorem 3.1, and hence is omitted.
Theorem 3.5. Assume that $\{X_1, \ldots, X_m, Y_1, \ldots, Y_m\}$ are independent random variables with continuous distribution functions $X_k \sim F_k$ and $Y_k \sim G_k$. The triplet $(n_1, n_2, m)$ is such that $n_1 \asymp n_2$, $m = m(n_1, n_2) \to \infty$, $\log m = o(n^{1/3})$ and $m^{-1}\#\{k = 1, \ldots, m : \theta_k = 1/2\} \to \pi_0 \in (0, 1]$ as $n = n_1 \wedge n_2 \to \infty$. For independent samples $\{X_{i,1}, \ldots, X_{i,m}\}_{i=1}^{n_1}$ and $\{Y_{j,1}, \ldots, Y_{j,m}\}_{j=1}^{n_2}$, suppose that $\min_{1\le k\le m}\min(\sigma_{1k}, \sigma_{2k}) \ge c > 0$ for some constant $c > 0$ and, as $n \to \infty$,
$\#\{1 \le k \le m : |\theta_k - 1/2| \ge 4(\log m)^{1/2}\sigma_{\bar{n},k}\} \to \infty,$
where $\sigma_{1k}^2 = \mathrm{var}\{G_k(X_k)\}$, $\sigma_{2k}^2 = \mathrm{var}\{F_k(Y_k)\}$ and $\sigma_{\bar{n},k}^2 = \sigma_{1k}^2 n_1^{-1} + \sigma_{2k}^2 n_2^{-1}$. Then as $n \to \infty$, FDP$_\Phi$, FDP$_B \to_P \alpha\pi_0$ and FDR$_\Phi$, FDR$_B \to \alpha\pi_0$.
Attractive properties of the bootstrap for multiple-hypothesis testing were
first noted by Hall (1990) in the case of the mean rather than its Studentized
counterpart. Now it has been rigorously proved that bootstrap methods
are particularly effective in relieving skewness in the extreme tails which
leads to second-order accuracy [Fan, Hall and Yao (2007), Delaigle, Hall
and Jin (2011)]. It is interesting and challenging to investigate whether these
advantages of the bootstrap can be inherited by multiple U -testing in either
the standardized or the Studentized case.
4. Numerical study. In this section, we present numerical investigations
for various calibration methods described in Section 3 when they are applied to two-sample large-scale multiple testing problems. We refer to the
simulation for two-sample t-test and Studentized Mann–Whitney test as
Sim1 and Sim2, respectively. Assume that we observe two groups of m-dimensional gene expression data $\{X_i\}_{i=1}^{n_1}$ and $\{Y_j\}_{j=1}^{n_2}$, where $X_1, \ldots, X_{n_1}$ and $Y_1, \ldots, Y_{n_2}$ are independent random samples drawn from the distributions of X and Y, respectively.
For Sim1 , let X and Y be such that
(4.1) $X = \mu_1 + \{\varepsilon_1 - E(\varepsilon_1)\}$ and $Y = \mu_2 + \{\varepsilon_2 - E(\varepsilon_2)\},$
where ε1 = (ε1,1 , . . . , ε1,m )T and ε2 = (ε2,1 , . . . , ε2,m )T are two sets of i.i.d.
random variables. The i.i.d. components of noise vectors ε1 and ε2 follow two
types of distributions: (i) the exponential distribution Exp(λ) with density
function λ−1 e−x/λ ; (ii) Student t-distribution t(k) with k degrees of freedom.
The exponential distribution has nonzero skewness, while the t-distribution
is symmetric and heavy-tailed. For each type of error distribution, both cases
of homogeneity and heteroscedasticity were considered. Detailed settings for
the error distributions are specified in Table 1.
For Sim2 , we assume that X and Y satisfy
(4.2) $X = \mu_1 + \varepsilon_1$ and $Y = \mu_2 + \varepsilon_2,$
where ε1 = (ε1,1 , . . . , ε1,m )T and ε2 = (ε2,1 , . . . , ε2,m )T are two sets of i.i.d.
random variables. We consider several distributions for the error terms ε1,k
and ε2,k : standard normal distribution N (0, 1), t-distribution t(k), uniform
distribution U (a, b) and Beta distribution Beta(a, b). Table 2 reports four
settings of (ε1,k , ε2,k ) used in our simulation. In either setting, we know
$P(\varepsilon_{1,k} \le \varepsilon_{2,k}) = 1/2$ holds. Hence, power against the null hypothesis $H_{0k}: P(X_k \le Y_k) = 1/2$ is generated by the magnitude of the difference between the kth components of $\mu_1$ and $\mu_2$.
In both Sim1 and Sim2, we set $\mu_1 = 0$, and assume that the first $m_1 = \lfloor 1.6\,m^{1/2}\rfloor$ components of $\mu_2$ are equal to $c\{(\sigma_1^2 n_1^{-1} + \sigma_2^2 n_2^{-1})\log m\}^{1/2}$ and the rest are zero. Here, $\sigma_1^2$ and $\sigma_2^2$ denote the variances of $\varepsilon_{1,k}$ and $\varepsilon_{2,k}$, and c is a parameter employed to characterize the location discrepancy between the distributions of X and Y.
Table 1
Distribution settings in Sim1

                             Homogeneous case                  Heteroscedastic case
Exponential distributions    ε1,k ∼ Exp(2), ε2,k ∼ Exp(2)      ε1,k ∼ Exp(2), ε2,k ∼ Exp(1)
Student t-distributions      ε1,k ∼ t(4), ε2,k ∼ t(4)          ε1,k ∼ t(4), ε2,k ∼ t(3)
Table 2
Distribution settings in Sim2

            Identical distributions              Nonidentical distributions
Case 1      ε1,k ∼ N(0, 1), ε2,k ∼ N(0, 1)       ε1,k ∼ N(0, 1), ε2,k ∼ t(3)
Case 2      ε1,k ∼ U(0, 1), ε2,k ∼ U(0, 1)       ε1,k ∼ U(0, 1), ε2,k ∼ Beta(10, 10)
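As an illustration of the simulation design, the sketch below (Python; our own code, not the authors' script) generates one Sim1 data set under the heteroscedastic exponential setting of Table 1, with the signal placed in the first ⌊1.6 m^{1/2}⌋ components of µ2 as described above.

import numpy as np

def generate_sim1(n1=50, n2=30, m=1000, c=1.0, seed=0):
    """One Sim1 data set: heteroscedastic exponential errors (Table 1),
    centered as in (4.1), with the first floor(1.6*sqrt(m)) components of
    mu_2 carrying the signal."""
    rng = np.random.default_rng(seed)
    eps1 = rng.exponential(scale=2.0, size=(n1, m)) - 2.0   # Exp(2), centered
    eps2 = rng.exponential(scale=1.0, size=(n2, m)) - 1.0   # Exp(1), centered
    sigma1_sq, sigma2_sq = 4.0, 1.0                          # variances of Exp(2), Exp(1)
    m1 = int(np.floor(1.6 * np.sqrt(m)))
    mu2 = np.zeros(m)
    mu2[:m1] = c * np.sqrt((sigma1_sq / n1 + sigma2_sq / n2) * np.log(m))
    X = eps1                 # mu_1 = 0
    Y = mu2 + eps2
    return X, Y, m1

X, Y, m1 = generate_sim1()
print(X.shape, Y.shape, m1)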
The sample size $(n_1, n_2)$ was set to be (50, 30)
and (100, 60), and the discrepancy parameter c took values in {1, 1.5}. The
significance level α in the B–H procedure was specified as 0.05, 0.1, 0.2 and
0.3, and the dimension m was set to be 1000 and 2000. In Sim1 , we compared
three different methods to calculate the p-values in the B–H procedure: normal calibration given in Section 3.1.1, bootstrap calibration and regularized
bootstrap calibration proposed in Section 3.1.2. For regularized bootstrap
calibration, we used a cross-validation approach as in Section 3 of Liu and
Shao (2014) to choose regularized parameters λ1k and λ2k . In Sim2 , we
compared the performance of normal calibration and bootstrap calibration
proposed in Section 3.2. For each compared method, we evaluated its performance via two indices: the empirical FDR and the proportion among the
true alternative hypotheses was rejected. We call the latter correct rejection
proportion. If the empirical FDR is low, the proposed procedure has good
FDR control; if the correct rejection proportion is high, the proposed procedure has fairly good performance in identifying the true signals. For ease
of exposition, we only report the simulation results for (n1 , n2 ) = (50, 30)
and m = 1000 in Figures 1 and 2. The results for (n1 , n2 ) = (100, 60) and
m = 2000 are similar, which can be found in the supplementary material
[Chang, Shao and Zhou (2016)]. Each curve corresponds to the performance
of a certain method and the line types are specified in the caption below.
The horizontal ordinates of the four points on each curve depict the empirical FDR of the specified method when the pre-specified level α in the B–H
procedure was taken to be 0.05, 0.1, 0.2 and 0.3, respectively, and the vertical
ordinates indicate the corresponding empirical correct rejection proportion.
We say that a method has good FDR control if the horizontal ordinates
of the four points on its performance curve are less than the prescribed α
levels.
In general, as shown in Figures 1 and 2, the B–H procedure based on
(regularized) bootstrap calibration has better FDR control than that based
on normal calibration. In Sim1 where the errors are symmetric (e.g., ε1,k and
ε2,k follow the Student t-distributions), the panels in the first row of Figure 1 show that the B–H procedures using all the three calibration methods
Fig. 1. Performance comparison of B–H procedures based on three calibration methods in Sim1 with (n1 , n2 ) = (50, 30) and m = 1000. The first and second rows show the
results when the components of noise vectors ε1 and ε2 follow t-distributions and exponential distributions, respectively; left and right panels show the results for homogeneous
and heteroscedastic cases, respectively; horizontal and vertical axes depict empirical false
discovery rate and empirical correct rejection proportion, respectively; and the prescribed
levels α = 0.05, 0.1, 0.2 and 0.3 are indicated by unbroken horizontal black lines. In each
panel, dashed lines and unbroken lines represent the results for the discrepancy parameter
c = 1 and 1.5, respectively, and different colors express different methods employed to calculate p-values in the B–H procedure, where blue line, green line and red line correspond
to the procedures based on normal, conventional and regularized bootstrap calibrations,
respectively.
are able to control or approximately control the FDR at given levels, while
the procedures based on bootstrap and regularized bootstrap calibrations
outperform that based on normal calibration in controlling the FDR. When
the errors are asymmetric in Sim1 , the performances of the three B–H procedures are different from those in the symmetric cases. From the second row
of Figure 1, we see that the B–H procedure based on normal calibration is
distorted in controlling the FDR while the procedure based on (regularized)
bootstrap calibration is still able to control the FDR at given levels. This
Fig. 2. Performance comparison of B–H procedures based on two different calibration
methods in Sim2 with (n1 , n2 ) = (50, 30) and m = 1000. The first and second rows show
the results when the components of noise vectors ε1 and ε2 follow the distributions specified in cases 1 and 2 of Table 2, respectively; left and right panels show the results for
the cases of identical distributions and nonidentical distributions, respectively; horizontal
and vertical axes depict empirical false discovery rate and empirical correct rejection proportion, respectively; and the prescribed levels α = 0.05, 0.1, 0.2 and 0.3 are indicated by
unbroken horizontal black lines. In each panel, dashed lines and unbroken lines represent
the results for the discrepancy parameter c = 1 and 1.5, respectively, and different colors
express different methods employed to calculate p-values in the B–H procedure, where blue
line and red line correspond to the procedures based on normal and bootstrap calibrations,
respectively.
phenomenon is further evidenced by Figure 2 for Sim2 . Comparing the B–H
procedures based on conventional and regularized bootstrap calibrations, we
find that the former approach is uniformly more conservative than the latter
in controlling the FDR. In other words, the B–H procedure based on regularized bootstrap can identify more true alternative hypotheses than that
using conventional bootstrap calibration. This phenomenon is also revealed
in the heteroscedastic case. As the discrepancy parameter c gets larger so
that the signal is stronger, the correct rejection proportions of the B–H procedures based on all three calibrations increase and the empirical FDR moves closer to the prescribed level.
5. Discussion. In this paper, we established Cramér-type moderate deviations for two-sample Studentized U -statistics of arbitrary order in a general framework where the kernel is not necessarily bounded. Two-sample
U -statistics, typified by the two-sample Mann–Whitney test statistic, have
been widely used in a broad range of scientific research. Many of these applications rely on a misunderstanding of what is being tested and the implicit
underlying assumptions, which were not explicitly considered until relatively
recently by Chung and Romano (2016). More importantly, they provided
evidence for the advantage of using the Studentized statistics both theoretically and empirically.
Unlike the conventional (one- and two-sample) U -statistics, the asymptotic behavior of their Studentized counterparts has barely been studied
in the literature, particularly in the two-sample case. Recently, Shao and
Zhou (2016) proved a Cramér-type moderate deviation theorem for general
Studentized nonlinear statistics, which leads to a sharp moderate deviation
result for Studentized one-sample U-statistics. However, the extension from one-sample to two-sample statistics in the Studentized case is far from straightforward,
and requires a more delicate analysis on the Studentizing quantities. Further,
for the two-sample t-statistic, we proved a moderate deviation result with second-order accuracy under a finite fourth moment condition (see Theorem 2.4),
which is of independent interest. In contrast to the one-sample case, the
two-sample t-statistic cannot be reduced to a self-normalized sum of independent random variables, and thus the existing results on self-normalized
ratios [Jing, Shao and Wang (2003), Wang (2005, 2011)] cannot be directly
applied. Instead, we modify Theorem 2.1 in Shao and Zhou (2016) to obtain
a more precise expansion that can be used to derive a refined result for the
two-sample t-statistic.
Finally, we show that the obtained moderate deviation theorems provide
theoretical guarantees for the validity, including robustness and accuracy, of
normal, conventional bootstrap and regularized bootstrap calibration methods in multiple testing with FDR/FDP control. The dependence case is also
covered. These results represent a useful complement to those obtained by
Fan, Hall and Yao (2007), Delaigle, Hall and Jin (2011) and Liu and Shao
(2014) in the one-sample case.
Acknowledgements. The authors would like to thank Peter Hall and Aurore Delaigle for helpful discussions and encouragement. The authors sincerely thank the Editor, Associate Editor and three referees for their very
constructive suggestions and comments that led to substantial improvement
of the paper.
SUPPLEMENTARY MATERIAL
Supplement to “Cramér-type moderate deviations for Studentized two-sample U-statistics with applications” (DOI: 10.1214/15-AOS1375SUPP; .pdf). This supplemental material contains the proofs of all theoretical results in the main text, including Theorems 2.2, 2.4, 3.1 and 3.4, as well as additional numerical results.
REFERENCES
Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: A practical
and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B. Stat. Methodol. 57
289–300. MR1325392
Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in
multiple testing under dependency. Ann. Statist. 29 1165–1188. MR1869245
Borovskich, Y. V. (1983). Asymptotics of U -statistics and Von Mises’ functionals. Soviet
Math. Dokl. 27 303–308.
Cao, H. and Kosorok, M. R. (2011). Simultaneous critical values for t-tests in very high
dimensions. Bernoulli 17 347–394. MR2797995
Chang, J., Shao, Q. and Zhou, W.-X. (2016). Supplement to “Cramér-type
moderate deviations for Studentized two-sample U -statistics with applications.”
DOI:10.1214/15-AOS1375SUPP.
Chang, J., Tang, C. Y. and Wu, Y. (2013). Marginal empirical likelihood and sure
independence feature screening. Ann. Statist. 41 2123–2148. MR3127860
Chang, J., Tang, C. Y. and Wu, Y. (2016). Local independence feature screening
for nonparametric and semiparametric models by marginal empirical likelihood. Ann.
Statist. 44 515–539. MR3476608
Charness, G. and Gneezy, U. (2009). Incentives to exercise. Econometrica 77 909–931.
Chen, S. X. and Qin, Y.-L. (2010). A two-sample test for high-dimensional data with
applications to gene-set testing. Ann. Statist. 38 808–835. MR2604697
Chen, L. H. Y. and Shao, Q.-M. (2007). Normal approximation for nonlinear statistics
using a concentration inequality approach. Bernoulli 13 581–599. MR2331265
Chen, S. X., Zhang, L.-X. and Zhong, P.-S. (2010). Tests for high-dimensional covariance matrices. J. Amer. Statist. Assoc. 105 810–819. MR2724863
Chung, E. and Romano, J. P. (2013). Exact and asymptotically robust permutation
tests. Ann. Statist. 41 484–507. MR3099111
Chung, E. and Romano, J. (2016). Asymptotically valid and exact permutation tests
based on two-sample U -statistics. J. Statist. Plann. Inference. 168 97–105. MR3412224
Delaigle, A., Hall, P. and Jin, J. (2011). Robustness and accuracy of methods for high
dimensional data analysis based on Student’s t-statistic. J. R. Stat. Soc. Ser. B. Stat.
Methodol. 73 283–301. MR2815777
Dudoit, S. and van der Laan, M. J. (2008). Multiple Testing Procedures with Applications to Genomics. Springer, New York. MR2373771
Fan, J., Hall, P. and Yao, Q. (2007). To how many simultaneous hypothesis tests can
normal, Student’s t or bootstrap calibration be applied? J. Amer. Statist. Assoc. 102
1282–1288. MR2372536
Fan, J., Han, X. and Gu, W. (2012). Estimating false discovery proportion under arbitrary covariance dependence. J. Amer. Statist. Assoc. 107 1019–1035. MR3010887
Ferreira, J. A. and Zwinderman, A. H. (2006). On the Benjamini–Hochberg method.
Ann. Statist. 34 1827–1849. MR2283719
Friguet, C., Kloareg, M. and Causeur, D. (2009). A factor model approach to multiple testing under dependence. J. Amer. Statist. Assoc. 104 1406–1415. MR2750571
Hall, P. (1990). On the relative performance of bootstrap and Edgeworth approximations
of a distribution function. J. Multivariate Anal. 35 108–129. MR1084945
Hall, P. and Wilson, S. R. (1991). Two guidelines for bootstrap hypothesis testing.
Biometrics 47 757–762. MR1132543
Helmers, R. and Janssen, P. (1982). On the Berry–Esseen theorem for multivariate
U -statistics. In Math. Cent. Rep. SW 90 1–22. Mathematisch Centrum, Amsterdam.
Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. Ann.
Math. Statistics 19 293–325. MR0026294
Jing, B.-Y., Shao, Q.-M. and Wang, Q. (2003). Self-normalized Cramér-type large
deviations for independent random variables. Ann. Probab. 31 2167–2215. MR2016616
Kochar, S. C. (1979). Distribution-free comparison of two probability distributions with
reference to their hazard rates. Biometrika 66 437–441. MR0556731
Koroljuk, V. S. and Borovskich, Y. V. (1994). Theory of U -Statistics. Mathematics
and Its Applications 273. Kluwer Academic, Dordrecht. MR1472486
Kosorok, M. R. and Ma, S. (2007). Marginal asymptotics for the “large p, small
n” paradigm: With applications to microarray data. Ann. Statist. 35 1456–1486.
MR2351093
Kowalski, J. and Tu, X. M. (2007). Modern Applied U -Statistics. Wiley, Hoboken, NJ.
MR2368050
Lai, T. L., Shao, Q.-M. and Wang, Q. (2011). Cramér type moderate deviations for
Studentized U -statistics. ESAIM Probab. Stat. 15 168–179. MR2870510
Leek, J. T. and Storey, J. D. (2008). A general framework for multiple testing dependence. Proc. Natl. Acad. Sci. USA 105 18718–18723.
Li, R., Zhong, W. and Zhu, L. (2012). Feature screening via distance correlation learning.
J. Amer. Statist. Assoc. 107 1129–1139. MR3010900
Li, G., Peng, H., Zhang, J. and Zhu, L. (2012). Robust rank correlation based screening.
Ann. Statist. 40 1846–1877. MR3015046
Liu, W. and Shao, Q.-M. (2010). Cramér-type moderate deviation for the maximum of
the periodogram with application to simultaneous tests in gene expression time series.
Ann. Statist. 38 1913–1935. MR2662363
Liu, W. and Shao, Q.-M. (2014). Phase transition and regularized bootstrap in large-scale t-tests with false discovery rate control. Ann. Statist. 42 2003–2025. MR3262475
Mann, H. B. and Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Statistics 18 50–60. MR0022058
Nikitin, Y. and Ponikarov, E. (2006). On large deviations of nondegenerate two-sample
U - and V -statistics with applications to Bahadur efficiency. Math. Methods Statist. 15
103–122. MR2225432
Okeh, U. M. (2009). Statistical analysis of the application of Wilcoxon and Mann–
Whitney U test in medical research studies. Biotechnol. Molec. Biol. Rev. 4 128–131.
Shao, Q.-M. and Zhou, W.-X. (2016). Cramér type moderate deviation theorems for
self-normalized processes. Bernoulli 22 2029–2079.
Storey, J. D., Taylor, J. E. and Siegmund, D. (2004). Strong control, conservative
point estimation and simultaneous conservative consistency of false discovery rates: A
unified approach. J. R. Stat. Soc. Ser. B. Stat. Methodol. 66 187–205. MR2035766
Vandemaele, M. and Veraverbeke, N. (1985). Cramér type large deviations for Studentized U -statistics. Metrika 32 165–179. MR0824452
Wang, Q. (2005). Limit theorems for self-normalized large deviation. Electron. J. Probab.
10 1260–1285 (electronic). MR2176384
Wang, Q. (2011). Refined self-normalized large deviations for independent random variables. J. Theoret. Probab. 24 307–329. MR2795041
Wang, Q. and Hall, P. (2009). Relative errors in central limit theorems for Student’s t
statistic, with applications. Statist. Sinica 19 343–354. MR2487894
Wang, Q., Jing, B.-Y. and Zhao, L. (2000). The Berry–Esseen bound for Studentized
statistics. Ann. Probab. 28 511–535. MR1756015
Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics 1 80–83.
Zhong, P.-S. and Chen, S. X. (2011). Tests for high-dimensional regression coefficients
with factorial designs. J. Amer. Statist. Assoc. 106 260–274. MR2816719
J. Chang
School of Statistics
Southwestern University of Finance
and Economics
Chengdu, Sichuan 611130
China
and
School of Mathematics and Statistics
University of Melbourne
Parkville, Victoria 3010
Australia
E-mail: [email protected]
Q.-M. Shao
Department of Statistics
Chinese University of Hong Kong
Shatin, NT
Hong Kong
E-mail: [email protected]
W.-X. Zhou
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, New Jersey 08544
USA
and
School of Mathematics and Statistics
University of Melbourne
Parkville, Victoria 3010
Australia
E-mail: [email protected]
October 13, 2017
arXiv:1710.04207v1 [eess.IV] 11 Oct 2017
Algebraic Image Processing
Enrico Celeghini
Dipartimento di Fisica, Università di Firenze and INFN-Sezione di Firenze
I50019 Sesto Fiorentino, Firenze, Italy.
Departamento de Fı́sica Teórica, Atómica y Óptica and IMUVA.
Universidad de Valladolid, 47011 Valladolid, Spain.
e-mail: [email protected].
Abstract
We propose an approach to image processing related to algebraic operators
acting in the space of images. In view of the interest in the applications in optics
and computer science, mathematical aspects of the paper have been simplified as
much as possible. Underlying theory, related to rigged Hilbert spaces and Lie
algebras, is discussed elsewhere.
OCIS codes 100.2000, 100.2980, 110.1085, 110.1758
1 Introduction
The fundamental problem in image analysis is that the information contained in an image is immense, far more than the human mind can manage. In science we therefore have to disregard a large part of this information and isolate the much more restricted part that is relevant to our specific interests.
We suggest that this task of cleaning the signal of irrelevant elements and of highlighting the relevant information can be improved by the mathematical theory of operators acting on the space of images.
Some applications of these ideas, somewhat similar to adaptive optics, are described in the following. Yet, while in adaptive optics the perturbations to be removed are related to the phases of a complex function f(r, θ), our approach, which we call Algebraic Image Processing (AIP), acts on the modulus of the image |f(r, θ)|. We thus suggest that the image |f(r, θ)|² registered by an experimental apparatus need not be taken as the final result of the optical measurement, but possibly as an intermediate step to be further elaborated by a computer program based on operators acting in the space of images. AIP is indeed a soft procedure: it operates on a set of pixels to obtain another set of pixels, independently of the causes of the effects we wish to eliminate.
Moreover, AIP is a general theory of the connections between images, and thus can be applied beyond the cleaning of images to any manipulation of images.
We restrict ourselves in this paper to the practical aspects relevant to optics and computer science. The underlying mathematics, related to rigged Hilbert spaces and Lie groups, is considered in detail in [1] and, at a higher theoretical level, in [2–4]. We refer the reader interested in the theory to those works and references therein.
Features of the proposed approach are:
1. It can be used on-line, and also off-line if, at a later time, more accuracy is required.
2. It is a soft procedure, cheap and without mechanical moving parts.
3. It allows one to consider together images of different origins and frequencies, such as optical and radio images.
The relevant operative points are:
• The vector space of images on the disk has as a basis the Zernike functions [5].
• The space of images can be endowed with the operators acting on it, i.e. the operators that transform Zernike functions into Zernike functions.
• This space of images and its operators define the unitary irreducible representation $D^+_{1/2}\otimes D^+_{1/2}$ of the Lie algebra su(1,1) ⊕ su(1,1).
• Every operator of AIP can be written as a polynomial in the generators of the algebra su(1,1) ⊕ su(1,1),
• and can be computed by applying this polynomial to the Zernike functions.
In sect.2 we summarize the vector space of images. In sect.3 we introduce the operators
acting on this space, their algebraic structure and their realization. In sect.4 we give a
description of a possible modus operandi that could be realized in automatic or semiautomatic way by a computer program. Color images can also be introduced by means
of an additional factorization of a color code.
2 Vector Space of Images
The radial Zernike polynomials $R_n^m(r)$ can be found in [5]; they are real polynomials defined for 0 ≤ r ≤ 1 such that $R_n^m(1) = 1$, where n is a natural number and m an integer with 0 ≤ m ≤ n and n − m even.
As the interest in optics is focused on real functions defined on the disk, starting from $R_n^m(r)$ it is usual to introduce, with 0 ≤ θ < 2π, the functions
$$Z_n^{-m}(r,\theta) := R_n^m(r)\,\sin(m\theta), \qquad Z_n^{m}(r,\theta) := R_n^m(r)\,\cos(m\theta),$$
and, because only smooth functions are normally considered, to take into account only low values of n and m, summarized in a unique sequential index [6].
However, in a general theory, functions are not necessarily smooth and formal properties are relevant. We thus go back to the classical form of Born and Wolf in the complex space [5]:
$$Z_n^{m}(r,\theta) := R_n^{|m|}(r)\, e^{i m \theta},$$
with n a natural number and m an integer such that n − m is even and −n ≤ m ≤ n. The symmetry can be improved by writing n and m in terms of two arbitrary independent natural numbers k and l [7],
$$n = k + l, \qquad m = k - l \qquad (k = 0, 1, 2, \ldots;\; l = 0, 1, 2, \ldots),$$
and introducing a multiplicative factor. The Zernike functions we consider here are thus
$$V_{k,l}(r,\theta) := \sqrt{k+l+1}\; R^{|k-l|}_{k+l}(r)\, e^{i(k-l)\theta} \tag{1}$$
and depend on two natural numbers k and l and two continuous variables r and θ.
The functions $V_{k,l}(r,\theta)$ are an orthonormal basis of the space $L^2(D)$ of square-integrable complex functions defined on the unit disk D:
$$\frac{1}{2\pi}\int_0^{2\pi}\! d\theta \int_0^1\! dr^2\; V_{k,l}(r,\theta)^* \, V_{k',l'}(r,\theta) = \delta_{k,k'}\,\delta_{l,l'}, \qquad
\frac{1}{2\pi}\sum_{k=0}^{\infty}\sum_{l=0}^{\infty} V_{k,l}(r,\theta)^*\, V_{k,l}(s,\phi) = \delta(r^2 - s^2)\,\delta(\theta - \phi), \tag{2}$$
and they have the symmetries
$$V_{l,k}(r,\theta) = V_{k,l}(r,\theta)^* = V_{k,l}(r,-\theta).$$
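The following minimal Python sketch constructs the functions V_{k,l} from the radial polynomials and checks the orthonormality relation (2) by numerical quadrature; the grid sizes and the helper names are illustrative choices, not part of the original formulation.

import numpy as np
from math import factorial

def radial_zernike(n, m, r):
    # radial polynomial R_n^m(r) via the standard finite sum
    m = abs(m)
    out = np.zeros_like(r, dtype=float)
    for s in range((n - m) // 2 + 1):
        c = (-1) ** s * factorial(n - s) / (
            factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        out += c * r ** (n - 2 * s)
    return out

def V(k, l, r, theta):
    # V_{k,l}(r, theta) of eq. (1)
    return np.sqrt(k + l + 1) * radial_zernike(k + l, k - l, r) * np.exp(1j * (k - l) * theta)

# quadrature grid for the measure (1/2pi) dtheta d(r^2) on the unit disk
r = np.linspace(1e-6, 1.0, 800)
theta = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")
w = 2 * R * (r[1] - r[0]) * (theta[1] - theta[0]) / (2 * np.pi)

def inner(k1, l1, k2, l2):
    return np.sum(np.conj(V(k1, l1, R, T)) * V(k2, l2, R, T) * w)

print(abs(inner(2, 1, 2, 1)))   # close to 1 (normalization)
print(abs(inner(2, 1, 1, 2)))   # close to 0 (orthogonality)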
Every function $f(r,\theta) \in L^2(D)$ can thus be written as
$$f(r,\theta) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} f_{k,l}\, V_{k,l}(r,\theta), \tag{3}$$
where
$$f_{k,l} = \frac{1}{2\pi}\int_0^{2\pi}\! d\theta \int_0^1\! dr^2\; V_{k,l}(r,\theta)^*\, f(r,\theta) \tag{4}$$
is the component of the function $f(r,\theta)$ along $V_{k,l}(r,\theta)$. In this paper we consider only normalized states, so that the Parseval identity gives, for each state,
$$\frac{1}{2\pi}\int_0^{2\pi}\! d\theta \int_0^1\! dr^2\; |f(r,\theta)|^2 = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} |f_{k,l}|^2 = 1. \tag{5}$$
Restrictions on the values of k and l can be used as filters: k + l > h (resp. < h) is a high-pass (resp. low-pass) filter in r, while |k − l| > h (resp. < h) is a high-pass (resp. low-pass) filter in θ, and combinations of such restrictions can also be considered. Of course, when filters are used, the result must be multiplied by an appropriate factor to obtain a normalized state.
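A possible numerical illustration of eqs. (3)–(5) and of the filters just described is sketched below in Python: the coefficients f_{k,l} of a toy image are computed by quadrature, the Parseval sum is checked, and a low-pass filter in r is applied. The toy image and the truncation orders are assumptions made only for the example.

import numpy as np
from math import factorial

def radial_zernike(n, m, r):
    # R_n^m(r) via the standard finite sum (n - m even)
    m = abs(m)
    out = np.zeros_like(r, dtype=float)
    for s in range((n - m) // 2 + 1):
        c = (-1) ** s * factorial(n - s) / (
            factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        out += c * r ** (n - 2 * s)
    return out

def V(k, l, r, theta):
    # V_{k,l} of eq. (1)
    return np.sqrt(k + l + 1) * radial_zernike(k + l, k - l, r) * np.exp(1j * (k - l) * theta)

# polar grid and quadrature weights for (1/2pi) dtheta d(r^2)
r = np.linspace(1e-6, 1.0, 600)
theta = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")
w = 2 * R * (r[1] - r[0]) * (theta[1] - theta[0]) / (2 * np.pi)

# a toy real, non-negative image that is an exact finite combination of the V_{k,l}
f = np.real(V(0, 0, R, T) + 0.2 * (V(2, 0, R, T) + V(0, 2, R, T)))

kmax = lmax = 4
coef = np.zeros((kmax + 1, lmax + 1), dtype=complex)
for k in range(kmax + 1):
    for l in range(lmax + 1):
        coef[k, l] = np.sum(np.conj(V(k, l, R, T)) * f * w)       # eq. (4)

print(abs(coef[0, 0]), abs(coef[2, 0]))                 # approx 1 and 0.2
print(np.sum(np.abs(coef) ** 2) / np.sum(f ** 2 * w))   # Parseval ratio, approx 1

# low-pass filter in r: keep only k + l <= h, then renormalize
h = 2
lowpass = np.where(np.add.outer(np.arange(kmax + 1), np.arange(lmax + 1)) <= h, coef, 0.0)
lowpass = lowpass / np.sqrt(np.sum(np.abs(lowpass) ** 2))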
3 Operators and Algebra in the disk space
Now let us consider the differential applications
$$A_+ := \frac{e^{+i\theta}}{2}\left[-(1-r^2)\frac{d}{dr} + r(k+l+2) + \frac{k-l}{r}\right]\sqrt{\frac{k+l+2}{k+l+1}},$$
$$A_- := \frac{e^{-i\theta}}{2}\left[+(1-r^2)\frac{d}{dr} + r(k+l) + \frac{k-l}{r}\right]\sqrt{\frac{k+l}{k+l+1}},$$
$$B_+ := \frac{e^{-i\theta}}{2}\left[-(1-r^2)\frac{d}{dr} + r(k+l+2) - \frac{k-l}{r}\right]\sqrt{\frac{k+l+2}{k+l+1}},$$
$$B_- := \frac{e^{+i\theta}}{2}\left[+(1-r^2)\frac{d}{dr} + r(k+l) - \frac{k-l}{r}\right]\sqrt{\frac{k+l}{k+l+1}}, \tag{6}$$
which, by inspection, are the raising and lowering recurrence applications on the Zernike functions $V_{k,l}(r,\theta)$:
$$A_+ V_{k,l}(r,\theta) = (k+1)\, V_{k+1,l}(r,\theta), \qquad A_- V_{k,l}(r,\theta) = k\, V_{k-1,l}(r,\theta),$$
$$B_+ V_{k,l}(r,\theta) = (l+1)\, V_{k,l+1}(r,\theta), \qquad B_- V_{k,l}(r,\theta) = l\, V_{k,l-1}(r,\theta). \tag{7}$$
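The recurrence relations (7) can be verified numerically from the differential expressions (6); the short Python sketch below does this for A_+ on a few sample points (the grid and the chosen indices k, l are arbitrary).

import numpy as np
from math import factorial

def radial_terms(n, m):
    # coefficients and powers of the radial polynomial R_n^m
    m = abs(m)
    return [((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)),
             n - 2 * s) for s in range((n - m) // 2 + 1)]

def V(k, l, r, theta):
    rad = sum(c * r ** p for c, p in radial_terms(k + l, k - l))
    return np.sqrt(k + l + 1) * rad * np.exp(1j * (k - l) * theta)

def dV_dr(k, l, r, theta):
    rad = sum(c * p * r ** (p - 1) for c, p in radial_terms(k + l, k - l) if p > 0)
    return np.sqrt(k + l + 1) * rad * np.exp(1j * (k - l) * theta)

def A_plus_on_V(k, l, r, theta):
    # the raising application of eq. (6) acting on V_{k,l}
    pref = np.exp(1j * theta) / 2 * np.sqrt((k + l + 2) / (k + l + 1))
    return pref * (-(1 - r ** 2) * dV_dr(k, l, r, theta)
                   + r * (k + l + 2) * V(k, l, r, theta)
                   + (k - l) / r * V(k, l, r, theta))

r = np.linspace(0.1, 0.9, 5)
theta = np.linspace(0.0, 2 * np.pi, 7, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")
k, l = 2, 1
print(np.max(np.abs(A_plus_on_V(k, l, R, T) - (k + 1) * V(k + 1, l, R, T))))  # ~ machine precision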
A± and B± establish the recurrence relations but are not operators, because each application of A± or B± modifies the parameters k or l to be read by the following applications. To obtain true raising and lowering operators we need to introduce the operators R, D_R, Θ, D_Θ, K, L:
$$R\,V_{k,l}(r,\theta) := r\,V_{k,l}(r,\theta), \qquad \Theta\,V_{k,l}(r,\theta) := \theta\,V_{k,l}(r,\theta),$$
$$D_R\,V_{k,l}(r,\theta) := \frac{dV_{k,l}(r,\theta)}{dr}, \qquad D_\Theta\,V_{k,l}(r,\theta) := \frac{dV_{k,l}(r,\theta)}{d\theta},$$
$$K\,V_{k,l}(r,\theta) := k\,V_{k,l}(r,\theta), \qquad L\,V_{k,l}(r,\theta) := l\,V_{k,l}(r,\theta);$$
and rewrite eqs. (6) as operators:
$$A_+ := \frac{e^{+i\Theta}}{2}\left[-(1-R^2)D_R + R(K+L+2) + (K-L)\frac{1}{R}\right]\sqrt{\frac{K+L+2}{K+L+1}},$$
$$A_- := \frac{e^{-i\Theta}}{2}\left[+(1-R^2)D_R + R(K+L) + (K-L)\frac{1}{R}\right]\sqrt{\frac{K+L}{K+L+1}},$$
$$B_+ := \frac{e^{-i\Theta}}{2}\left[-(1-R^2)D_R + R(K+L+2) - (K-L)\frac{1}{R}\right]\sqrt{\frac{K+L+2}{K+L+1}},$$
$$B_- := \frac{e^{+i\Theta}}{2}\left[+(1-R^2)D_R + R(K+L) - (K-L)\frac{1}{R}\right]\sqrt{\frac{K+L}{K+L+1}}. \tag{8}$$
Now, as A± and K are operators, we can apply them iteratively on Vk,l (r, θ) and in
particular we can calculate the action of the commutators:
[A+ , A− ] Vk,l (r, θ) = −2(k + 1/2) Vk,l (r, θ) ,
[K, A± ] Vk,l(r, θ) = ± Vk±1,l (r, θ) .
So, defining A3 := K + 1/2 , we find that {A+ , A3 , A− } are on the Vk,l (r, θ) the
generators of one algebra su(1, 1):
[A+ , A− ] = −2A3 ,
[A3 , A± ] = ±A± .
(9)
Analogously
[B+ , B− ] Vk,l (r, θ) = −2(l + 1/2) Vk,l (r, θ) ,
[L, B± ]Vk,l (r, θ) = ± Vk,l±1(r, θ)
exhibit that {B+ , B3 := L + 1/2 , B− } are on the Vk,l (r, θ) the generators of another
Lie algebra su(1, 1):
[B+ , B− ] = −2B3
[B3 , B± ] = ±B± .
(10)
Finally, as Ai and Bj commute on the Vk,l (r, θ), the algebra is completed by
[Ai , Bj ] = 0 .
(11)
Thus, in the vector space of images on the disk, eqs. (9)–(11) define a differential realization of the six-dimensional Lie algebra su(1,1) ⊕ su(1,1).
We can now calculate on $V_{k,l}(r,\theta)$ the Casimir invariants of the two su(1,1) algebras:
$$C_A\, V_{k,l}(r,\theta) = \left[\tfrac{1}{2}\{A_+,A_-\} - A_3^2\right] V_{k,l}(r,\theta) = \tfrac{1}{4}\, V_{k,l}(r,\theta),$$
$$C_B\, V_{k,l}(r,\theta) = \left[\tfrac{1}{2}\{B_+,B_-\} - B_3^2\right] V_{k,l}(r,\theta) = \tfrac{1}{4}\, V_{k,l}(r,\theta).$$
As the Casimir of the discrete series $D_j^+$ of su(1,1) is j(1−j) [8] with j = 1/2, 1, …, the space $\{V_{k,l}(r,\theta)\}$ is isomorphic to $\{|k,l\rangle \mid k,l = 0,1,2,\ldots\}$, the unitary irreducible representation $D^+_{1/2}\otimes D^+_{1/2}$ of the group SU(1,1) ⊗ SU(1,1), where the action of the generators of the algebra is
$$A_+|k,l\rangle = (k+1)\,|k+1,l\rangle, \qquad A_3|k,l\rangle = k\,|k,l\rangle, \qquad A_-|k,l\rangle = k\,|k-1,l\rangle,$$
$$B_+|k,l\rangle = (l+1)\,|k,l+1\rangle, \qquad B_3|k,l\rangle = l\,|k,l\rangle, \qquad B_-|k,l\rangle = l\,|k,l-1\rangle. \tag{12}$$
Now we move from the generators of the algebra su(1,1) ⊕ su(1,1) to their products, which belong to the associated universal enveloping algebra (UEA). It is the algebra UEA[su(1,1) ⊕ su(1,1)] constructed on the ordered monomials $A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}$ (where the $\alpha_i$ and $\beta_j$ are natural numbers), subject to the relations (9)–(11). So every operator O ∈ UEA[su(1,1) ⊕ su(1,1)] can be written as
$$O = \sum_{\bar\alpha,\bar\beta} O_{\bar\alpha,\bar\beta} = \sum_{\bar\alpha,\bar\beta} c_{\bar\alpha,\bar\beta}\; A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}, \tag{13}$$
where the $c_{\bar\alpha,\bar\beta}$ are arbitrary complex functions of $\bar\alpha = (\alpha_1,\alpha_2,\alpha_3)$ and $\bar\beta = (\beta_1,\beta_2,\beta_3)$. Since $\{V_{k,l}(r,\theta)\}$ is a differential representation of the algebra su(1,1) ⊕ su(1,1), it is also a differential representation of UEA[su(1,1) ⊕ su(1,1)].
Because the representation $D^+_{1/2}\otimes D^+_{1/2}$ is unitary and irreducible, the set of unitary operators acting on the space $L^2(D)$, $O[L^2(D)]$, is isomorphic to the set of operators acting on $D^+_{1/2}\otimes D^+_{1/2}$. Therefore all invertible operators that transform disk images into disk images can be written in the form (13) and belong to UEA[su(1,1) ⊕ su(1,1)].
Concretely: every transformation of an arbitrary image into any other image can be realized by means of iterated applications of the operators (8).
4 Applications to image processing
In physics a fundamental point of image processing is that every image is the result of a measurement, and each measurement has a measurement error. This implies that all the numbers of the preceding sections (which in mathematics are unbounded) can in physics always be considered finite, because of the limited level of accuracy. All problems related to the rigged Hilbert space are thus irrelevant since, in finite dimensions, rigged Hilbert spaces and Hilbert spaces are equivalent (see [1], [3] and [4]).
Let us start by describing the procedure that, starting from one image |f(r,θ)|², gives us another image |g(r,θ)|² by means of an operator O of the UEA.
As the image |f(r,θ)|² does not depend on the phases, it is completely determined by |f(r,θ)|. We thus look for the components of |f(r,θ)| along the $V_{k,l}(r,\theta)$:
$$f_{k,l} = \frac{1}{2\pi}\int_0^{2\pi}\! d\theta \int_0^1\! dr^2\; V_{k,l}(r,\theta)^*\, |f(r,\theta)|, \tag{14}$$
where, as |f(r,θ)| is real, $f_{l,k} = (f_{k,l})^*$.
Because of the measurement errors, the relevant values of k and l are limited to some $k_M$ and $l_M$ such that the Parseval identity (5) is satisfied within the approximation appropriate to the accuracy of the experimental result |f(r,θ)|². We thus have
$$|f(r,\theta)| \approx \sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\, V_{k,l}(r,\theta). \tag{15}$$
Let us now describe how a transformation O allows us to obtain from the image |f(r,θ)|² a new image |g(r,θ)|².
The operator O will be
$$O = \sum_{\bar\alpha,\bar\beta} O_{\bar\alpha,\bar\beta} = \sum_{\bar\alpha,\bar\beta} c_{\bar\alpha,\bar\beta}\; A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}, \tag{16}$$
where in physics the sums over $\bar\alpha$ and $\bar\beta$ can be assumed to be finite.
We thus write
$$O_{\bar\alpha,\bar\beta}\,|f(r,\theta)| = \sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\; c_{\bar\alpha,\bar\beta}\; A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}\, V_{k,l}(r,\theta);$$
then, by means of iterated applications of the operators (8), we calculate
$$A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}\, V_{k,l}(r,\theta),$$
finding the coefficients $g_{k,l}$ that satisfy
$$A_+^{\alpha_1} A_3^{\alpha_2} A_-^{\alpha_3} B_+^{\beta_1} B_3^{\beta_2} B_-^{\beta_3}\, V_{k,l}(r,\theta) = g_{k+\alpha_1-\alpha_3,\, l+\beta_1-\beta_3}\; V_{k+\alpha_1-\alpha_3,\, l+\beta_1-\beta_3}(r,\theta).$$
Thus we have
$$O_{\bar\alpha,\bar\beta}\,|f(r,\theta)| = \sum_{k,l=0}^{k_M,\,l_M} f_{k,l}\; c_{\bar\alpha,\bar\beta}\; g_{k+\alpha_1-\alpha_3,\, l+\beta_1-\beta_3}\; V_{k+\alpha_1-\alpha_3,\, l+\beta_1-\beta_3}(r,\theta),$$
which, combined with eq. (16), allows us to obtain
$$g(r,\theta) = O\,|f(r,\theta)|,$$
from which |g(r,θ)|², the image transformed under O of |f(r,θ)|², is obtained.
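The procedure just described can be carried out entirely at the coefficient level, since a monomial O_{ᾱ,β̄} simply shifts the labels (k, l) by (α₁ − α₃, β₁ − β₃) and rescales the coefficients. A minimal Python sketch, with the diagonal factors A₃, B₃ omitted for brevity and with illustrative coefficients, is the following.

import numpy as np

def shift_raise(f, times):
    # apply A_+^times at the coefficient level: A+|k,l> = (k+1)|k+1,l>
    g = f.copy()
    for _ in range(times):
        h = np.zeros_like(g)
        k = np.arange(1, g.shape[0]).reshape(-1, 1)
        h[1:, :] = k * g[:-1, :]
        g = h
    return g

def shift_lower(f, times):
    # apply A_-^times: A-|k,l> = k|k-1,l>
    g = f.copy()
    for _ in range(times):
        h = np.zeros_like(g)
        k = np.arange(1, g.shape[0]).reshape(-1, 1)
        h[:-1, :] = k * g[1:, :]
        g = h
    return g

def apply_monomial(f, c, a1, a3, b1, b3):
    # O = c * A_+^{a1} A_-^{a3} B_+^{b1} B_-^{b3} acting on the coefficient array of eq. (15);
    # rightmost factors act first, and the A and B families commute.
    g = shift_lower(f, a3)
    g = shift_raise(g, a1)
    g = shift_lower(g.T, b3).T          # B operators act on the second index
    g = shift_raise(g.T, b1).T
    return c * g

# a toy image with only a few nonzero Zernike coefficients
f = np.zeros((8, 8))
f[0, 0], f[2, 0], f[0, 2] = 1.0, 0.2, 0.2
g = apply_monomial(f, 0.5, a1=1, a3=0, b1=1, b3=0)   # shifts (k, l) -> (k+1, l+1)
print(np.argwhere(np.abs(g) > 0))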
An analogous procedure can be applied to obtain the operator O from |g(r,θ)| and |f(r,θ)|.
To conclude, let us now consider a possible application of AIP to the improvement of an image obtained by a flawed instrument. We start by observing a null signal that, with a perfect tool, would give |f(r,θ)| = |V₀,₀(r,θ)|. A defective instrument, on the contrary, will give a perturbed image |f(r,θ)|² such that
$$|f(r,\theta)| = \sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\, V_{k,l}(r,\theta) = \sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\, \frac{A_+^{k} B_+^{l}}{k!\, l!}\, V_{0,0}(r,\theta), \tag{17}$$
where the $f_{k,l}$ (with $f_{l,k} = f_{k,l}^*$) are the parameters, obtained from eq. (14), that characterize the distortion of the null image made by the instrument.
The operator that eliminates the defects of the instrument is therefore
$$\left(\sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\, \frac{A_+^{k} B_+^{l}}{k!\, l!}\right)^{-1}.$$
If the observation of the object we are interested in gives, with $g_{k,l} = (g_{l,k})^*$,
$$|g(r,\theta)| = \sum_{k=0}^{k_{M'}}\sum_{l=0}^{l_{M'}} g_{k,l}\, V_{k,l}(r,\theta) = \sum_{k=0}^{k_{M'}}\sum_{l=0}^{l_{M'}} g_{k,l}\, \frac{A_+^{k} B_+^{l}}{k!\, l!}\, V_{0,0}(r,\theta), \tag{18}$$
the final cleaned image will be
$$\left|\;\sum_{k=0}^{k_{M'}}\sum_{l=0}^{l_{M'}} g_{k,l}\, \frac{A_+^{k} B_+^{l}}{k!\, l!}\;\left(\sum_{k=0}^{k_M}\sum_{l=0}^{l_M} f_{k,l}\, \frac{A_+^{k} B_+^{l}}{k!\, l!}\right)^{-1} V_{0,0}(r,\theta)\;\right|^2, \tag{19}$$
a formula that can easily be evaluated as a series, since all the operators involved commute.
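On a truncated coefficient grid the distortion operator of eq. (17) is a finite matrix, so its inverse can be applied numerically. The Python sketch below builds that matrix and, using the fact that all the operators involved commute, applies its inverse directly to the coefficients of the observed image; the truncation order and the distortion/observation coefficients are illustrative assumptions.

import numpy as np
from math import factorial

N = 6   # truncation of the coefficient grid

def a_plus(f):
    # coefficient-level action of A+: (A+ f)[k, l] = k * f[k-1, l]
    g = np.zeros_like(f)
    k = np.arange(1, f.shape[0]).reshape(-1, 1)
    g[1:, :] = k * f[:-1, :]
    return g

def b_plus(f):
    return a_plus(f.T).T       # B+ acts in the same way on the second index

def distortion_matrix(fcoef):
    # matrix of sum_{k,l} f_{k,l} A_+^k B_+^l / (k! l!) on the truncated grid
    dim = N * N
    D = np.zeros((dim, dim))
    for k in range(N):
        for l in range(N):
            e = np.zeros((N, N))
            e[k, l] = 1.0
            out = np.zeros((N, N))
            for ka in range(N):
                for lb in range(N):
                    if fcoef[ka, lb] == 0.0:
                        continue
                    t = e.copy()
                    for _ in range(ka):
                        t = a_plus(t)
                    for _ in range(lb):
                        t = b_plus(t)
                    out += fcoef[ka, lb] / (factorial(ka) * factorial(lb)) * t
            D[:, k * N + l] = out.ravel()
    return D

# distortion of the null image: mostly V_{0,0} plus small spurious components
fcoef = np.zeros((N, N))
fcoef[0, 0], fcoef[1, 0], fcoef[0, 1] = 1.0, 0.05, 0.05
D = distortion_matrix(fcoef)

gcoef = np.zeros((N, N))
gcoef[0, 0], gcoef[2, 1] = 1.0, 0.3        # coefficients of the observed image
cleaned = np.linalg.solve(D, gcoef.ravel()).reshape(N, N)
print(np.round(cleaned, 3))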
References
[1] Celeghini E, 2017 J of Phys: Conf Series 880 012055
[2] Reed M, Simon B, 1980 Methods of Modern Math Phys, (Acad Press, London)
[3] Celeghini E, Gadella M, del Olmo M A, 2016 J Math Phys 57 072105
[4] Celeghini E, Gadella M, del Olmo M A, 2017 to be published
[5] Born M, Wolf E, 2005 Principles of Optics (Cambridge UP 7th ed, Cambridge Ma)
[6] Noll R J, 1976 J Opt Soc Am 66 207
[7] Dunkl C F, 2001 Encyclopaedia of Mathematics, Suppl 3 pg 454 (Kluwer, Dordrecht)
[8] Bargmann V, 1947 Ann. Math. 48 568
Risk Minimization and Optimal Derivative Design in a Principal Agent Game∗
arXiv:0710.5512v1 [] 29 Oct 2007
Ulrich Horst
Department of Mathematics
Humboldt University Berlin
Unter den Linden 6
10099 Berlin
[email protected]

Santiago Moreno-Bromberg
Department of Mathematics
University of British Columbia
1984 Mathematics Road
Vancouver, BC, V6T 1Z2
[email protected]
March 15, 2018
Abstract
We consider the problem of Adverse Selection and optimal derivative design within a
Principal-Agent framework. The principal’s income is exposed to non-hedgeable risk factors
arising, for instance, from weather or climate phenomena. She evaluates her risk using a coherent and law-invariant risk measure and tries to minimize her exposure by selling derivative securities on her income to individual agents. The agents have mean-variance preferences
with heterogeneous risk aversion coefficients. An agent’s degree of risk aversion is private
information and hidden to the principal who only knows the overall distribution. We show
that the principal’s risk minimization problem has a solution and illustrate the effects of risk
transfer on her income by means of two specific examples. Our model extends earlier work of
Barrieu and El Karoui (2005) and Carlier, Ekeland and Touzi (2007).
Preliminary - Comments Welcome
AMS classification: 60G35, 60H20, 91B16, 91B70.
Keywords: Optimal derivative design, structured securities, adverse selection, risk transfer.
∗
We thank Guillaume Carlier, Pierre-Andre Chiappori, Ivar Ekeland, Andreas Putz and seminar participants
at various institutions for valuable comments and suggestions. Financial support through an NSERC individual
discovery grant is gratefully acknowledged.
1 Introduction
In recent years there has been an increasing interest in derivative securities at the interface of
finance and insurance. Structured products such as risk bonds, asset-backed securities and weather
derivatives are end-products of a process known as securitization that transforms non-tradable
risk factors into tradable financial assets. Developed in the U.S. mortgage markets, the idea of
pooling and underwriting risk that cannot be hedged through investments in the capital markets
alone has long become a key factor driving the convergence of insurance and financial markets.
Structured products are often written on non-tradable underlyings, tailored to the issuer's specific needs and traded “over the counter”.
weather derivatives or risk bonds to customers that cannot go to the capital markets directly
and/or seek financial securities with low correlation with stock indices as additions to diversified
portfolios. The market for such claims is generally incomplete and illiquid. As a result, many of
the standard paradigms of traditional derivative pricing theory, including replication arguments, do not apply to structured products. In an illiquid market framework, preference-based valuation
principles that take into account characteristics and endowment of trading partners may be more
appropriate for designing, pricing and hedging contingent claims. Such valuation principles have
become a major topic of current research in economics and financial mathematics. They include
rules of Pareto optimal risk allocation ([11], [16]), market completion and dynamic equilibrium
pricing ([14], [15]) and, in particular, utility indifference arguments ([2], [3], [5], [6], [9], ...). The
latter assumes a high degree of market asymmetry. For indifference valuation to be a pricing rather
than valuation principle, the demand for a financial security must come from identical agents
with known preferences and negligible market impact while the supply must come from a single
principal. When the demand comes from heterogeneous individuals with hidden characteristics,
indifference arguments do not always yield an appropriate pricing scheme.
In this paper we move away from the assumption of investor homogeneity and allow for
heterogeneous agents. We consider a single principal with a random endowment whose goal is to
lay off some of her risk with heterogeneous agents by designing and selling derivative securities
on her income. The agents have mean variance preferences. An agent’s degree of risk aversion
is private information and hidden to the principal. The principal only knows the distribution of
risk aversion coefficients which puts her at an informational disadvantage. If all the agents were
homogeneous, the principal, when offering a structured product to a single agent, could (perhaps)
extract the indifference (maximum) price from each trading partner. In the presence of agent
heterogeneity this is no longer possible, either because the agents would hide their characteristics
from the principal or prefer another asset offered by the principal but designed and priced for
another customer.
The problem of optimal derivative design in a Principal-Agent framework with informed agents and an uninformed principal was first addressed in a recent paper by Carlier, Ekeland and Touzi [7]. With the agents being the informed party, theirs is a screening model. The
literature on screening within the Adverse Selection framework can be traced back to Mussa and
Rosen [18], where both the principal’s allocation rule and the agents’ types are one-dimensional.
Armstrong [1] relaxes the hypothesis of agents being characterized by a single parameter. He
shows that, unlike the one-dimensional case, “bunching” of the first type is robust when the
types of the agents are multi-dimensional. In their seminal paper, Rochet and Choné [19] further
extend this analysis. They provide a characterization of the contracts, determined by the (nonlinear) pricing schedule, that maximize the principal’s utility under the constraints imposed by
the asymmetry of information in the models. Building on their work, Carlier, Ekeland and Touzi
[7] study a Principal-Agent model of optimal derivative design where the agents’ preferences are of
mean-variance type and their multi-dimensional types characterize their risk aversions and initial
endowments. They assume that there is a direct cost to the principal when she designs a contract
for an agent, and that the principal’s aim is to maximize profits.
We start from a similar set-up, but substitute the idea that providing products carries a cost
for the idea that traded contracts expose the principal to additional risk - as measured by a convex
risk measure - in exchange for a known revenue. This may be viewed as a partial extension of the
work by Barrieu and El Karoui ([2],[3]) to an incomplete information framework.
The principal’s aim is to minimize her risk exposure by trading with the agents subject to the
standard incentive compatibility and individual rationality conditions on the agents’ choices. In
order to prove that the principal’s risk minimization problem has a solution we first follow the
seminal idea of Rochet and Choné [19] and characterize incentive compatible catalogues in terms
of U -convex functions. When the impact of a single trade on the principal’s revenues is linear
as in Carlier, Ekeland and Touzi [7], the link between incentive compatibility and U -convexity is
key to establish the existence of an optimal solution. In our model the impact is non-linear as a
single trade has a non-linear impact on the principal’s risk assessment. Due to this non-linearity
we face a non-standard variational problem where the objective cannot be written as the integral
of a given Lagrangian. Instead, our problem can be decomposed into a standard variational
part representing the aggregate income of the principal, plus the minimization of the principal’s
risk evaluation, which depends on the aggregate of the derivatives traded. We state sufficient
conditions that guarantee that the principal’s optimization problem has a solution and illustrate
the effect of risk transfer on her exposure by means of two specific examples.
The remainder of this paper is organized as follows. In Section 2 we formulate our Principal-Agent model and state the main result. The proof is given in Section 3. In Section 4 we illustrate
the effects of risk transfer on the principal’s position by two examples. In the first we consider
a situation where the principal restricts itself to type-dependent multiples of some benchmark
claim. This case can be solved in closed form by means of a standard variational problem. The
second example considers put options with type-dependent strikes. In both cases we assume that
3
the principal’s risk measure is Average Value at Risk. As a consequence the risk minimization
problem can be stated in terms of a min-max problem; we provide an efficient numerical scheme
for approximating the optimal solution. The code is given in an appendix.
2 The Microeconomic Setup
We consider an economy with a single principal whose income W is exposed to non-hedgeable risk factors arising from, e.g., climate or weather phenomena. The random variable W is defined on a standard, non-atomic probability space (Ω, F, P) and is square integrable:
W ∈ L2(Ω, F, P).
The principal’s goal is to lay off parts of her risk with individual agents. The agents have
heterogenous mean-variance preferences1 and are indexed by their coefficients of risk aversion
θ ∈ Θ. Given a contingent claim Y ∈ L2 (Ω, F, P) an agent of type θ enjoys the utility
U (θ, Y ) = E[Y ] − θ Var[Y ].
(1)
Types are private information. The principal knows the distribution µ of types but not the
realizations of the random variables θ. We assume that the agents are risk averse and that the
risk aversion coefficients are bounded away from zero. More precisely,
Θ = [a, 1]
for some a > 0.
The principal offers a derivative security X(θ) written on her random income for any type θ.
The set of all such securities is denoted by
$$\mathcal{X} := \left\{\, X = \{X(\theta)\}_{\theta\in\Theta} \;\middle|\; X \in L^2(\Omega\times\Theta,\, \mathbb{P}\otimes\mu),\ X \text{ is } \sigma(W)\times\mathcal{B}(\Theta)\text{-measurable} \,\right\}. \tag{2}$$
We refer to a list of securities {X(θ)} as a contract. A catalogue is a contract along with prices π(θ) for every available derivative X(θ). For a given catalogue (X, π) the optimal net utility of the agent of type θ is given by
$$v(\theta) = \sup_{\theta'\in\Theta}\left\{\, U(\theta, X(\theta')) - \pi(\theta') \,\right\}. \tag{3}$$
Remark 2.1 No assumption is made on the sign of π(θ); our model covers both the case where the principal takes on additional risk in exchange for financial compensation and the one where she pays the agents to take part of her risk.
1
Our analysis carries over to preferences of mean-variance type with random initial endowment as in [7]; the
assumption of simple mean-variance preferences is made for notational convenience.
A catalogue (X, π) will be called incentive compatible (IC) if the agent’s interests are best
served by revealing her type. This means that her optimal utility is achieved by the security X(θ):
U (θ, X(θ)) − π(θ) ≥ U (θ, X(θ ′ )) − π(θ ′ ) for all
θ, θ′ ∈ Θ.
(4)
We assume that each agent has some outside option (“no trade”) that yields a utility of zero.
A catalogue is thus called individually rational (IR) if it yields at least the reservation utility for
all agents, i.e., if
U (θ, X(θ)) − π(θ) ≥ 0 for all θ ∈ Θ.
(5)
Remark 2.2 By offering only incentive compatible contracts, the principal induces the agents to reveal their types. Offering contracts where the IR constraint is binding allows the principal to exclude undesirable agents from participating in the market. It can be shown that, under certain conditions, the interests of the principal are better served by keeping agents of “lower types” at their reservation utility; Rochet and Choné [19] have shown that in higher dimensions this is always the case.
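For a finite set of types and scenarios, conditions (4) and (5) can be checked directly. The following Python sketch does so for a toy catalogue under the mean-variance utility (1); the payoffs and prices are illustrative and not derived from the model.

import numpy as np

def utility(theta, payoff, probs):
    # mean-variance utility (1): U(theta, Y) = E[Y] - theta Var[Y]
    mean = np.dot(probs, payoff)
    var = np.dot(probs, (payoff - mean) ** 2)
    return mean - theta * var

def is_incentive_compatible(thetas, X, prices, probs):
    # condition (4): each type's own security maximizes its net utility
    net = np.array([[utility(t, X[j], probs) - prices[j] for j in range(len(thetas))]
                    for t in thetas])
    return all(net[i, i] >= net[i].max() - 1e-12 for i in range(len(thetas)))

def is_individually_rational(thetas, X, prices, probs):
    # condition (5): non-negative net utility from the designated security
    return all(utility(t, X[i], probs) - prices[i] >= -1e-12 for i, t in enumerate(thetas))

# toy catalogue: two equally likely scenarios, two types; all numbers illustrative
probs = np.array([0.5, 0.5])
thetas = [0.5, 1.0]
X = np.array([[1.0, -1.0],       # security intended for the less risk-averse type
              [0.5, -0.5]])      # security intended for the more risk-averse type
prices = np.array([-0.7, -0.25]) # negative price: the principal pays the agent
print(is_incentive_compatible(thetas, X, prices, probs))
print(is_individually_rational(thetas, X, prices, probs))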
If the principal issues the catalogue (X, π), she receives the cash amount $\int_\Theta \pi(\theta)\,d\mu(\theta)$ and is subject to the additional liability $\int_\Theta X(\theta)\,d\mu(\theta)$. She evaluates the risk associated with her overall position
$$W + \int_\Theta \big(\pi(\theta) - X(\theta)\big)\,d\mu(\theta)$$
via a coherent and law-invariant risk measure $\varrho : L^2(\Omega,\mathcal{F},\mathbb{P}) \to \mathbb{R}\cup\{\infty\}$ that has the Fatou property. It turns out that such risk measures can be represented as robust mixtures of Average Value at Risk.² The principal's risk associated with the catalogue (X, π) is given by
$$\varrho\left(W + \int_\Theta \big(\pi(\theta) - X(\theta)\big)\,d\mu(\theta)\right). \tag{6}$$
Her goal is to devise contracts (X, π) that minimize (6) subject to the incentive compatibility and individual rationality conditions:
$$\inf\left\{\, \varrho\left(W + \int_\Theta \big(\pi(\theta) - X(\theta)\big)\,d\mu(\theta)\right) \;\middle|\; X\in\mathcal{X},\ X \text{ is IC and IR} \,\right\}. \tag{7}$$
We are now ready to state the main result of this paper. The proof requires some preparation
and will be carried out in the following section.
Theorem 2.3 If ̺ is a coherent and law invariant risk measure on L2 (P) and if ̺ has the Fatou
property, then the principal’s optimization problem has a solution.
For notational convenience we establish our main result for the special case dµ(θ) = dθ. The general case follows from straightforward modifications.
² We review properties of coherent risk measures on Lp spaces in the appendix and refer to the textbook by Föllmer and Schied [12] and the paper by Jouini, Schachermayer and Touzi [17] for a detailed discussion of law-invariant risk measures.
3 Proof of the Main Theorem
Let (X, π) be a catalogue. In order to prove our main result it will be convenient to assume that
the principal offers any square integrable contingent claim and to view the agents’ optimization
problem as optimization problems over the set L2 (P). This can be achieved by identifying the
price list {π(θ)} with the pricing scheme
π : L2 (P) → R
that assigns the value π(θ) to an available claim X(θ) and the value E[Y ] to any other claim
Y ∈ L2. In terms of this pricing scheme the value function v defined in (3) satisfies
$$v(\theta) = \sup_{Y\in L^2(\mathbb{P})}\left\{\, U(\theta, Y) - \pi(Y) \,\right\} \tag{8}$$
for any individually rational catalogue. For the remainder of this section we shall work with the value function (8). It is U-convex in the sense of the following definition; it actually turns out to be convex and non-increasing, as we shall prove in Proposition 3.2 below.
Definition 3.1 Let two spaces A and B and a function U : A × B → R be given.
(i) The function f : A → R is called U-convex if there exists a function p : B → R such that
$$f(a) = \sup_{b\in B}\left\{\, U(a,b) - p(b) \,\right\}.$$
(ii) For a given function p : B → R the U-conjugate $p^U(a)$ of p is defined by
$$p^U(a) = \sup_{b\in B}\left\{\, U(a,b) - p(b) \,\right\}.$$
(iii) The U-subdifferential of p at b is given by the set
$$\partial_U p(b) := \left\{\, a\in A \;\middle|\; p^U(a) = U(a,b) - p(b) \,\right\}.$$
(iv) If $a \in \partial_U p(b)$, then a is called a U-subgradient of p(b).
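On finite grids the U-conjugate of Definition 3.1(ii) can be computed by enumeration. The Python sketch below does so for the utility of the proof of Lemma 3.3 below, written through the variable y = −Var[Y] ≤ 0, and checks that the resulting function of θ is non-increasing and convex; the grids and the price function p are illustrative.

import numpy as np

def u_conjugate(U, p, A, B):
    # p^U(a) = max_b { U(a, b) - p(b) } on finite grids A and B
    return np.array([max(U(a, b) - p[j] for j, b in enumerate(B)) for a in A])

# utility written through y = -Var[Y] <= 0: U(theta, y) = theta * y
A = np.linspace(0.2, 1.0, 9)               # grid of risk-aversion types
B = np.linspace(-2.0, 0.0, 41)             # grid of admissible values of y = -Var[Y]
p = 0.5 * (B + 1.0) ** 2                   # an illustrative price as a function of y
v = u_conjugate(lambda a, y: a * y, p, A, B)
print(np.all(np.diff(v) <= 1e-12))         # the U-conjugate is non-increasing in theta
print(np.all(np.diff(v, 2) >= -1e-12))     # and convex (non-negative second differences)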
Our goal is to identify the class of IC and IR catalogues with a class of convex and nonincreasing functions on the type space. To this end, we first recall the link between incentive
compatible contracts and U -convex functions from Rochet and Choné [19] and Carlier, Ekeland
and Touzi [7].
Proposition 3.2 ([19], [7]) If a catalogue (X, π) is incentive compatible, then the function v
defined by (3) is proper and U-convex and X(θ) ∈ ∂U v(θ). Conversely, any proper, U-convex
function induces an incentive compatible catalogue.
Proof. Incentive compatibility of a catalogue (X, π) means that
U (θ, X(θ)) − π(θ) ≥ U (θ, X(θ ′ )) − π(θ ′ ) for all
θ, θ′ ∈ Θ,
so v(θ) = U (θ, X(θ)) − π(θ) is U-convex and X(θ) ∈ ∂U v(θ). Conversely, for a proper, U-convex
function v and X(θ) ∈ ∂U v(θ) let
π(θ) := U (θ, X(θ)) − v(θ).
By the definition of the U-subdifferential, the catalogue (X, π) is incentive compatible.
✷
The following lemma is key. It shows that the U -convex function v is convex and non-increasing
and that any convex and non-increasing function is U -convex, i.e., it allows a representation of
the form (8). This allows us to rephrase the principal’s problem as an optimization problem over
a compact set of convex functions.
Lemma 3.3 (i) Suppose that the value function v as defined by (8) is proper. Then v is convex
and non-increasing. Any optimal claim X ∗ (θ) is a U -subgradient of v(θ) and almost surely
−Var[X ∗ (θ)] = v ′ (θ).
(ii) If v̄ : Θ → R⁺ is proper, convex and non-increasing, then v̄ is U-convex, i.e., there exists a map π̄ : L²(P) → R such that
$$\bar v(\theta) = \sup_{Y\in L^2(\mathbb{P})}\left\{\, U(\theta, Y) - \bar\pi(Y) \,\right\}.$$
Furthermore, any optimal claim X̄(θ) belongs to the U -subdifferential of v̄(θ) and satisfies
−Var[X̄(θ)] = v̄ ′ (θ).
Proof.
(i) Let v be a proper, U -convex function. Its U -conjugate is:
$$v^U(Y) = \sup_{\theta\in\Theta}\left\{\, \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - v(\theta) \,\right\}
= \mathbb{E}[Y] + \sup_{\theta\in\Theta}\left\{\, \theta\,(-\mathrm{Var}[Y]) - v(\theta) \,\right\}
= \mathbb{E}[Y] + v^*(-\mathrm{Var}[Y]),$$
where v ∗ denotes the convex conjugate of v. As a U -convex function, the map v is characterized by the fact that v = (v U )U . Thus
$$v(\theta) = (v^U)^U(\theta)
= \sup_{Y\in L^2(\mathbb{P})}\left\{\, U(\theta,Y) - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y]) \,\right\}
= \sup_{Y\in L^2(\mathbb{P})}\left\{\, \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y]) \,\right\}
= \sup_{y\le 0}\left\{\, \theta\cdot y - v^*(y) \,\right\},$$
where the last equality uses the fact that the agents’ consumption set contains claims of
any variance. We deduce from the preceding representation that v is non-increasing. Furthermore v = (v ∗ )∗ so v is convex. To characterize ∂U v(θ) we proceed as follows:
$$\partial_U v(\theta) = \left\{\, Y\in L^2 \;\middle|\; v(\theta) = U(\theta,Y) - v^U(Y) \,\right\}
= \left\{\, Y\in L^2 \;\middle|\; v(\theta) = \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - v^U(Y) \,\right\}
= \left\{\, Y\in L^2 \;\middle|\; v(\theta) = \mathbb{E}[Y] - \theta\,\mathrm{Var}[Y] - \mathbb{E}[Y] - v^*(-\mathrm{Var}[Y]) \,\right\}
= \left\{\, Y\in L^2 \;\middle|\; v(\theta) = \theta\,(-\mathrm{Var}[Y]) - v^*(-\mathrm{Var}[Y]) \,\right\}
= \left\{\, Y\in L^2 \;\middle|\; -\mathrm{Var}[Y] \in \partial v(\theta) \,\right\}.$$
The convexity of v implies that it is a.e. differentiable, so we may write
$$\partial_U v(\theta) = \left\{\, Y\in L^2 \;\middle|\; v'(\theta) = -\mathrm{Var}[Y] \,\right\}.$$
(ii) Let us now consider a proper, non-negative, convex and non-increasing function v̄ : Θ → R.
There exists a map f : R → R such that
$$\bar v(\theta) = \sup_{y\le 0}\left\{\, \theta\cdot y - f(y) \,\right\}.$$
Since $\bar v$ is non-increasing there exists a random variable $Y(\theta)\in L^2(\mathbb{P})$ such that $-\mathrm{Var}[Y(\theta)] \in \partial\bar v(\theta)$, and the definition of the subgradient yields
$$\bar v(\theta) = \sup_{Y\in L^2}\left\{\, \theta\,(-\mathrm{Var}[Y]) - f(-\mathrm{Var}[Y]) \,\right\}.$$
With the pricing scheme on $L^2(\mathbb{P})$ defined by
$$\bar\pi(Y) := -\mathbb{E}[Y] - f(-\mathrm{Var}[Y])$$
this yields
$$\bar v(\theta) = \sup_{Y\in L^2}\left\{\, U(\theta,Y) - \bar\pi(Y) \,\right\}.$$
The characterization of the subdifferential follows by analogy to part (i).
✷
The preceding lemma along with Proposition 3.2 shows that any convex, non-negative and
non-increasing function v on Θ induces an incentive compatible catalogue (X, π) via
X(θ) ∈ ∂U v(θ) and π(θ) = U (θ, X(θ)) − v(θ).
Here we may with no loss of generality assume that E[X(θ)] = 0. In terms of the principal’s
choice of v her income is given by
$$I(v) = \int_\Theta \big(\theta\, v'(\theta) - v(\theta)\big)\, d\theta.$$
Since v is non-negative and non-increasing, the principal will only consider functions that satisfy the normalization constraint
v(1) = 0.
We denote the class of all convex, non-increasing and non-negative real-valued functions on Θ
that satisfy the preceding condition by C:
C = {v : Θ → R | v is convex, non-increasing, non-negative and v(1) = 0.}
Conversely, we can associate with any IC and IR catalogue (X, π) a non-negative U -convex
function of the form (8) where the contract satisfies the variance constraint −Var[X(θ)] = v ′ (θ). In
view of the preceding lemma this function is convex and non-increasing so after normalization we
may assume that v belongs to the class C. We therefore have the following alternative formulation
of the principal’s problem.
Theorem 3.4 The principal’s optimization problem allows the following alternative formulation:
$$\inf\left\{\, \varrho\left(W - \int_\Theta X(\theta)\,d\theta\right) - I(v) \;\middle|\; v\in\mathcal{C},\ \mathbb{E}[X(\theta)]=0,\ -\mathrm{Var}[X(\theta)] = v'(\theta) \,\right\}.$$
In terms of our alternative formulation we can now prove a preliminary result. It states that
a principal with no initial endowment will not issue any contracts.
Lemma 3.5 If the principal has no initial endowment, i.e., if W = 0, then (v, X) = (0, 0) solves
her optimization problem.
Proof. Since ̺ is a coherent, law invariant risk measure on L2 (P) that has the Fatou property
it satisfies
̺(Y ) ≥ −E[Y ] for all Y ∈ L2 (P).
(9)
For a given function v ∈ C the normalization constraint E[X(θ)] = 0 therefore yields
$$\varrho\left(-\int_\Theta X(\theta)\,d\theta\right) - I(v) \;\ge\; \mathbb{E}\left[\int_\Theta X(\theta)\,d\theta\right] - I(v) \;=\; -I(v).$$
Since v is non-negative and non-increasing −I(v) ≥ 0. Taking the infimum in the preceding
inequality shows that v ≡ 0 and hence X(θ) ≡ 0 is an optimal solution.
✷
3.1 Minimizing the risk for a given function v
In the general case we approach the principal's problem in two steps. We start by fixing a function v from the class C and minimize the associated risk
$$\varrho\left(W - \int_\Theta X(\theta)\,d\theta\right)$$
subject to the moment conditions E[X(θ)] = 0 and −Var[X(θ)] = v ′ (θ). To this end, we shall
first prove the existence of optimal contracts Xv for a relaxed optimization where the variance
constraint is replaced by the weaker condition
Var[X(θ)] ≤ −v ′ (θ).
In a subsequent step we show that based on Xv the principal can transfer risk exposures among
the agents in such a way that (i) the aggregate risk remains unaltered; (ii) the variance constraint
becomes binding. We assume with no loss of generality that v does not have a jump at θ = a.
3.1.1 The relaxed optimization problem
For a given v ∈ C let us consider the convex set of derivative securities
$$\mathcal{X}^v := \left\{\, X\in\mathcal{X} \;\middle|\; \mathbb{E}[X(\theta)] = 0,\ \mathrm{Var}[X(\theta)] \le -v'(\theta)\ \mu\text{-a.e.} \,\right\}. \tag{10}$$
Lemma 3.6
(i) All functions v ∈ C that are acceptable for the principal are uniformly bounded.
(ii) Under the conditions of (i) the set $\mathcal{X}^v$ is closed and bounded in $L^2(\mathbb{P}\otimes\mu)$. More precisely,
$$\|X\|_2^2 \le v(a) \quad \text{for all } X\in\mathcal{X}^v.$$
Proof.
(i) If v is acceptable for the principal, then any $X\in\mathcal{X}^v$ satisfies
$$\varrho\left(W - \int_\Theta X(\theta)\,d\theta\right) - I(v) \le \varrho(W).$$
From (9) and the fact that $\mathbb{E}[X(\theta)] = 0$ we deduce that
$$-\mathbb{E}[W] - I(v) \le \varrho\left(W - \int_\Theta X(\theta)\,d\theta\right) - I(v) \le \varrho(W),$$
so
$$-I(v) \le \mathbb{E}[W] + \varrho(W) =: K.$$
Integrating by parts twice and using that v is non-increasing and v(1) = 0, we see that
$$K \ge -I(v) = a\,v(a) + 2\int_a^1 v(\theta)\,d\theta \ge a\,v(a).$$
This proves the assertion because a > 0.
(ii) For $X\in\mathcal{X}^v$ we deduce from the normalization constraint v(1) = 0 that
$$\|X\|_2^2 = \int\!\!\int X^2(\theta,\omega)\,d\mathbb{P}\,d\theta \le -\int v'(\theta)\,d\theta \le v(a),$$
so the assertion follows from part (i). ✷
Since ̺ is a convex risk measure on L2 and because the set Xv of contingent claims is convex,
closed and bounded in L2 a general result from the theory of convex optimization yields the
following proposition.
Proposition 3.7 If the function v is acceptable for the principal, then there exists a contract
{Xv (θ)} such that
$$\inf_{X\in\mathcal{X}^v} \varrho\left(W - \int_\Theta X(\theta)\,d\theta\right) = \varrho\left(W - \int_\Theta X_v(\theta)\,d\theta\right).$$
The contract Xv along with the pricing scheme associated with v does not yield an incentive
compatible catalogue unless the variance constraints happen to be binding. However, as we are
now going to show, based on Xv the principal can find a redistribution of risk among the agents
such that the resulting contract satisfies our IC condition.
3.1.2 Redistributing risk exposures among agents
Let
$$\partial\mathcal{X}^v = \left\{\, X\in\mathcal{X}^v \;\middle|\; \mathbb{E}[X(\theta)] = 0,\ \mathrm{Var}[X(\theta)] = -v'(\theta)\ \mu\text{-a.e.} \,\right\}$$
be the set of all contracts from the class $\mathcal{X}^v$ where the variance constraint is binding. Clearly,
$$\varrho\left(W - \int_\Theta X_v(\theta)\,d\theta\right) \le \inf_{X\in\partial\mathcal{X}^v} \varrho\left(W - \int_\Theta X(\theta)\,d\theta\right).$$
Let us then introduce the set of types
$$\Theta_v := \left\{\, \theta\in\Theta \;\middle|\; \mathrm{Var}[X_v(\theta)] < -v'(\theta) \,\right\},$$
for whom the variance constraint is not binding. If $\mu(\Theta_v) = 0$, then $X_v$ yields an incentive compatible contract. Otherwise, we consider a random variable $\tilde Y\in\mathcal{X}^v$, fix some type θ ∈ Θ and define
$$Y := \frac{\tilde Y(\theta)}{\sqrt{\mathrm{Var}[\tilde Y(\theta)]}}. \tag{11}$$
We may with no loss of generality assume that Y is well defined for otherwise the status
quo is optimal for the principal and her risk minimization problem is void. The purpose of
introducing Y is to offer a set of structured products Zv based on Xv , such that Zv together with
the pricing scheme associated with v yields an incentive compatible catalogue. To this end, we
choose constants α̃(θ) for θ ∈ Θv such that
Var[Xv (θ) + α̃(θ)Y ] = −v ′ (θ).
This equation holds for
$$\tilde\alpha_\pm(\theta) = -\mathrm{Cov}[X_v(\theta), Y] \pm \sqrt{\mathrm{Cov}^2[X_v(\theta), Y] - v'(\theta) - \mathrm{Var}[X_v(\theta)]}.$$
For a type θ ∈ Θv the variance constraint is not binding. Hence −v ′ (θ) − Var[Xv (θ)] > 0 so
that α⁺(θ) > 0 and α⁻(θ) < 0. An application of Jensen's inequality, together with the fact that $\|X_v\|_2$ is bounded, shows that α± are µ-integrable functions. Thus there exists a threshold type θ* ∈ Θ such that
$$\int_{\Theta_v\cap(a,\theta^*]} \alpha^+(\theta)\,d\theta + \int_{\Theta_v\cap(\theta^*,1]} \alpha^-(\theta)\,d\theta = 0.$$
In terms of θ* let us now define a function
$$\alpha(\theta) := \begin{cases} \tilde\alpha_+(\theta), & \text{if } \theta \le \theta^*, \\ \tilde\alpha_-(\theta), & \text{if } \theta > \theta^*, \end{cases}$$
and a contract
$$Z_v := X_v + \alpha Y \in \partial\mathcal{X}^v. \tag{12}$$
Since $\int \alpha\, d\theta = 0$, the aggregate risks associated with $X_v$ and $Z_v$ are equal. As a result, the contract
Zv solves the risk minimization problem
$$\inf_{X\in\partial\mathcal{X}^v} \varrho\left(W + \int_\Theta X(\theta)\,\mu(d\theta)\right). \tag{13}$$
Remark 3.8 In Section 4 we shall consider a situation where the principal restricts itself to a
class of contracts for which the random variable Xv can be expressed in terms of the function
v. In general such a representation will not be possible since v only imposes a restriction on the
contracts’ second moments.
3.2 Minimizing the overall risk
In order to finish the proof of our main result it remains to show that the minimization problem
$$\inf_{v\in\mathcal{C}}\; \varrho\left(W - \int_\Theta Z_v(\theta)\,\mu(d\theta)\right) - I(v)$$
has a solution and the infimum is obtained. To this end, we consider a minimizing sequence
{vn } ⊂ C. The functions in C are locally Lipschitz continuous because they are convex. In fact
they are uniformly locally Lipschitz: by Lemma 3.6 (i) the functions v ∈ C are uniformly bounded
and non-increasing so all the elements of ∂v(θ) are uniformly bounded on compact sets of types.
As a result, {vn } is a sequence of uniformly bounded and uniformly equicontinuous functions
when restricted to compact subsets of Θ. Thus there exists a function v̄ ∈ C such that, passing
to a subsequence if necessary,
$$\lim_{n\to\infty} v_n = \bar v \quad \text{uniformly on compact sets.}$$
A standard 3ǫ-argument shows that the convergence properties of the sequence {vn } carry over
to the derivatives so that
$$\lim_{n\to\infty} v_n' = \bar v' \quad \text{almost surely, uniformly on compact sets.}$$
Since $-\theta v_n'(\theta) + v_n(\theta) \ge 0$, it follows from Fatou's lemma that $-I(\bar v) \le \liminf_{n\to\infty} -I(v_n)$, so
$$\liminf_{n\to\infty}\left[\varrho\left(W - \int_\Theta Z_{v_n}(\theta)\,\mu(d\theta)\right) - I(v_n)\right]
\;\ge\; \liminf_{n\to\infty} \varrho\left(W - \int_\Theta Z_{v_n}(\theta)\,\mu(d\theta)\right) + \liminf_{n\to\infty}\big(-I(v_n)\big)
\;\ge\; \liminf_{n\to\infty} \varrho\left(W - \int_\Theta Z_{v_n}(\theta)\,\mu(d\theta)\right) - I(\bar v),$$
and it remains to analyze the associated risk process. For this, we first observe that for Zvn ∈ ∂Xvn
Fubini’s theorem yields
$$\|Z_{v_n}\|_2^2 = \int\!\!\int Z_{v_n}^2\, d\mathbb{P}\,d\theta = -\int v_n'(\theta)\,d\theta = v_n(a). \tag{14}$$
Since all the functions in C are uniformly bounded, we see that the contracts Zvn are contained in
an L2 bounded, convex set. Hence there exists a square integrable random variable Z such that,
after passing to a subsequence if necessary,
$$\mathrm{w\text{-}}\lim_{n\to\infty} Z_n = Z. \tag{15}$$
Let $Z_{\bar v}\in\mathcal{X}^{\bar v}$. Convergence of the functions $v_n$ implies $\|Z_{v_n}\|_2 \to \|Z_{\bar v}\|_2$. Thus (15) yields
$$\|Z\|_2 = \|Z_{\bar v}\|_2 \quad \text{and} \quad \int_\Theta Z_n(\theta,\omega)\,d\theta \to \int_\Theta Z(\theta,\omega)\,d\theta \ \text{ weakly in } L^2(\mathbb{P}).$$
By Corollary I.2.2 in Ekeland and Témam (1976) [10], a lower semi-continuous convex function f : X → R remains so with respect to the weak topology σ(X, X*). Hence the Fatou property of the risk measure ̺ guarantees that
$$\varrho\left(W - \int_\Theta Z_{\bar v}(\theta)\,\mu(d\theta)\right) \;\le\; \varrho\left(W - \int_\Theta Z(\theta)\,\mu(d\theta)\right) \;\le\; \liminf_{n\to\infty} \varrho\left(W - \int_\Theta Z_n(\theta)\,\mu(d\theta)\right).$$
We conclude that $(Z_{\bar v}, \bar v)$ solves the principal's problem.
4 Examples
Our main theorem states that the principal’s risk minimization problem has a solution. The
solution can be characterized in terms of a convex function that specifies the agents’ net utility.
Our existence result is based on a min-max optimization scheme whose complexity makes a direct numerical analysis rather involved. In this section we consider some examples where the principal's choice of contracts is restricted to a class of numerically more amenable securities. The first example studies a situation where the principal offers type-dependent multiples of some benchmark claim. In this case the principal's problem can be reduced to a constrained variational problem that can be solved in closed form. A second example comprises put options with type-dependent strikes.
Here we provide a numerical algorithm for approximating the optimal solution.
4.1 A single benchmark claim
In this section we study a model where the principal sells a type-dependent multiple of a benchmark claim f (W ) ≥ 0 to the agents. More precisely, the principal offers contracts of the form
X(θ) = α(θ)f (W ).
(16)
In order to simplify the notation we shall assume that the T-bond’s variance is normalized:
Var[f (W )] = 1.
4.1.1 The optimization problems
Let (X, π) be a catalogue where the contract X is of the form (16). By analogy with the general case it will be convenient to view the agents' optimization problem as an optimization problem over the set of claims {γ f(W) | γ ∈ R}, so the function α : Θ → R solves
$$\sup_{\gamma\in\mathbb{R}}\left\{\, U(\theta, \gamma f(W)) - \pi(\theta) \,\right\}.$$
In view of the variance constraint on the agents' claims, the principal's problem can be written as
$$\inf\left\{\, \varrho\big(W - C(v) f(W)\big) - I(v) \;\middle|\; v\in\mathcal{C} \,\right\} \quad \text{where} \quad C(v) = \int_\Theta \sqrt{-v'(\theta)}\, d\theta.$$
Note that $\mathbb{E}[f(W)] > 0$, so the term $\mathbb{E}[f(W)]\sqrt{-v'(\theta)}$ must be included in the income. Before
proceeding with the general case let us first consider a situation where in addition to being coherent
and law invariant, the risk measure ̺ is also comonotone. In this case each security the principal
sells to some agent increases her risk by the amount
$$\varrho\Big(-(f(W) - \mathbb{E}[f(W)])\sqrt{-v'(\theta)}\Big) + v(\theta) - \theta v'(\theta) \;\ge\; 0.$$
This suggests that it is optimal for the principal not to sell a bond whose payoff moves in the same direction as her initial risk exposure.
Proposition 4.1 Suppose that ̺ is comonotone additive. If f (W ) and W are comonotone, then
v = 0 is a solution to the principal’s problem.
Proof. If W and f(W) are comonotone, then the risk measure in the display above is additive and the principal needs to solve
$$\varrho(W) + \inf_{v\in\mathcal{C}} \int_\Theta \Big[ v(\theta) + \varrho\big(f(W) - \mathbb{E}[f(W)]\big)\sqrt{-v'(\theta)} - \theta v'(\theta) \Big]\, d\theta.$$
Since $\varrho(f(W) - \mathbb{E}[f(W)]) \ge 0$ and $-\theta v'(\theta) \ge 0$, we see that
$$\int_\Theta \Big[ v(\theta) + \varrho\big(f(W) - \mathbb{E}[f(W)]\big)\sqrt{-v'(\theta)} - \theta v'(\theta) \Big]\, d\theta \;\ge\; 0,$$
and hence v ≡ 0 is a minimizer. ✷
In view of the preceding proposition the principal needs to design the payoff function f in
such a way that W and f (W ) are not comonotone. We construct an optimal payoff function in
the following subsection.
4.1.2 A solution to the principal's problem
Considering the fact that ̺(·) is a decreasing function, the principal's goal must be to make the quantity C(v) as small as possible while keeping the income as large as possible. In a first step we therefore solve, for any constant A ∈ R, the optimization problem
$$\sup_{v\in\mathcal{C}} C(v) \quad \text{subject to} \quad \int_\Theta \Big( \mathbb{E}[f(W)]\sqrt{-v'(\theta)} - v(\theta) + \theta v'(\theta) \Big)\, d\theta = A. \tag{17}$$
The constrained variational problem (17) captures the problem of minimizing risk subject to an income constraint. It can be solved in closed form. The associated Euler–Lagrange equation is given by
$$\lambda = \frac{d}{d\theta}\left( -\lambda\theta + \frac{\lambda\, \mathbb{E}[f] - 1}{2\sqrt{-v'(\theta)}} \right), \tag{18}$$
where λ is the Lagrange multiplier. The income constraint and boundary conditions are
$$v'(a) = -\frac{(\lambda')^2}{4\lambda^2 a^2} \quad \text{and} \quad v(1) = 0, \quad \text{where } \lambda' = \lambda\, \mathbb{E}[f] - 1.$$
Integrating both sides of equation (18) and taking into account the normalization condition
v(1) = 0, we obtain
$$v(\theta) = \frac{1}{8}\left(\frac{\lambda'}{\lambda}\right)^2\left( \frac{1}{2\theta - a} - \frac{1}{2 - a} \right).$$
Inserting this equation into the constraint yields
s
′ 2 Z 1
Z
λ
θ
1
1
1
1
λ′ 2 1 dθ
−
−
dθ.
A = E[f ]
+
λ
λ
8 2θ − a 2 − a
4 (2θ − a)2
a 2θ − a
a
15
In terms of
M :=
Z
a
1
1
θ
1
1
1
−
dθ
+
8 2θ − a 2 − a
4 (2θ − a)2
and N :=
Z
a
1
dθ
2θ − a
we have the quadratic equation
$$-M\left(\frac{\lambda'}{\lambda}\right)^2 + N\,\mathbb{E}[f]\,\frac{\lambda'}{\lambda} - A = 0,$$
which has the solution
$$\frac{\lambda'}{\lambda} = \frac{N\,\mathbb{E}[f] - \sqrt{(N\,\mathbb{E}[f])^2 - 4AM}}{2M}.$$
We have chosen this root, as we require the problem to reduce to ̺(W) for A = 0.
Remark 4.2 We notice that the constraint variational problem (17) is independent of the risk
measure employed by the principal. This is because we minimized the risk pointwise subject to a
constraint on aggregate revenues.
In view of the preceding considerations, the principal's problem reduces to a one-dimensional minimization problem over the reals:
$$\inf_A\; \varrho\left(W - f(W)\,\frac{N}{2M}\sqrt{(N\,\mathbb{E}[f])^2 - 4AM} + f(W)\,\frac{N^2\,\mathbb{E}[f]}{2M}\right) - A.$$
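The constants M and N and the resulting value of λ'/λ can be evaluated numerically once a, E[f] and an admissible income level A are fixed. The Python sketch below does this and recovers the function v(θ) of the closed-form solution; all numerical values are illustrative and the quadrature is a plain Riemann sum.

import numpy as np

a = 0.5                                   # illustrative lower bound of the type space
theta = np.linspace(a, 1.0, 4001)
dth = theta[1] - theta[0]
N_const = np.sum(1.0 / (2 * theta - a)) * dth
M_const = np.sum((1.0 / (2 * theta - a) - 1.0 / (2 - a)) / 8
                 + theta / (4 * (2 * theta - a) ** 2)) * dth

Ef = 0.4                                  # illustrative value of E[f(W)]
A = 0.02                                  # illustrative income level; keeps the square root real
disc = (N_const * Ef) ** 2 - 4 * A * M_const
ratio = (N_const * Ef - np.sqrt(disc)) / (2 * M_const)         # lambda'/lambda

v = ratio ** 2 / 8 * (1.0 / (2 * theta - a) - 1.0 / (2 - a))   # the resulting v(theta), v(1) = 0
print(round(N_const, 4), round(M_const, 4), round(ratio, 4))
print(round(v[0], 6), round(v[-1], 6))    # v is non-negative and decreasing on [a, 1]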
Once the optimal value A∗ has been determined, the principal offers the securities
$$\frac{\lambda\,\mathbb{E}[f] - 1}{4\theta\lambda - 2\lambda a}\; f(W)$$
at a price
$$\frac{\big(\lambda\,\mathbb{E}[f] - 1\big)\big(3\,\mathbb{E}[f]\,\lambda(2\theta - a) - a(\lambda\,\mathbb{E}[f] - 1)\big)}{(4\theta\lambda - 2\lambda a)^2} + \left(\frac{\lambda\,\mathbb{E}[f] - 1}{\lambda}\right)^2 \frac{1}{2 - a}.$$
Example 4.3 Assume that the principal measures her risk exposure using Average Value at Risk
at level 0.05. Let W̃ be a normally distributed random variable with mean 1/2 and variance
1/20. One can think that W̃ represents temperature. Suppose that the principal’s initial income
is exposed to temperature risk and it is given by W = 0.1(W̃ − 1.1) with associated risk
̺(W ) = 0.0612.
Suppose furthermore that the principal sells units of a put option on W̃ with strike 0.5, i.e.,
f (W ) = (W − 0.5)+
By proceeding as above we approximated the principal’s risk as −0.6731 and she offers the security
$$X(\theta) = \frac{0.5459}{2\theta - a}\, f(W)$$
to the agent of type θ for a price
$$\pi(\theta) = \frac{1.1921}{8(2 - a)} - \frac{(1.1921)\,\theta - (0.22)(2\theta - a)}{2(2\theta - a)^2}.$$
4.2 Put options with type dependent strikes
In this section we consider the case where the principal underwrites put options on her income
with type-dependent strikes. We assume that W ≤ 0 is a bounded random variable and consider
contracts of the form
$$X(\theta) = (K(\theta) - |W|)^+ \quad \text{with} \quad 0 \le K(\theta) \le \|W\|_\infty.$$
The boundedness assumption on the strikes is made with no loss of generality, as each equilibrium pricing scheme is necessarily non-negative. Note that in this case the risk measure can be defined on $L^\infty(\mathbb{P})$, so we only require convergence in probability to use the Fatou property. We deduce that both the agents' net utilities and the variances of their positions are bounded from above by constants K₁ and K₂, respectively. Thus, the principal chooses a function v and a contract X from the set
$$\left\{\, (X, v) \;\middle|\; v\in\mathcal{C},\ v \le K_1,\ -\mathrm{Var}\big[(K(\theta) - |W|)^+\big] = v'(\theta),\ |v'| \le K_2,\ 0 \le K(\theta) \le \|W\|_\infty \,\right\}.$$
The variance constraint v ′ (θ) = −Var[(K(θ) − W )+ ] allows us to express the strikes in terms
of a continuous function of v ′ , i.e.,
K(θ) = F (v ′ (θ)).
The principal's problem can therefore be written as
$$\inf\; \varrho\left(W - \int_\Theta \Big[ (F(v'(\theta)) - |W|)^+ - \mathbb{E}\big[(F(v'(\theta)) - |W|)^+\big] \Big]\, d\theta\right) - I(v),$$
where the infimum is taken over the set of all functions v ∈ C that satisfy v ≤ K₁ and |v'| ≤ K₂.
Remark 4.4 Within our current framework the contracts are expressed in terms of the derivative
of the principal's choice of v. This reflects the fact that the principal restricts herself to type-dependent put options, and it is not always true in the general case.
4.2.1 An existence result
Let {vn} be a minimizing sequence for the principal's optimization problem. The functions vn
are uniformly bounded and uniformly equicontinuous, so we may assume with no loss of generality
that vn → v uniformly. Recall that this also implies a.s. convergence of the derivatives. Dominated
convergence and the continuity of F, together with the fact that W is bounded, yield
∫_Θ (F(v′n(θ)) − |W|)+ dθ −→ ∫_Θ (F(v′(θ)) − |W|)+ dθ   P-a.s.
and
lim_{n→∞} ∫_Θ E[(F(v′n(θ)) − |W|)+] dθ = ∫_Θ E[(F(v′(θ)) − |W|)+] dθ.
This shows that the principal's positions converge almost surely and hence in probability. Since
̺ is lower semi-continuous with respect to convergence in probability, we deduce that v solves the
principal's problem.
4.2.2 An algorithm for approximating the optimal solution
We close this paper with a numerical approximation scheme for the principal's optimal solution
within the put option framework. We assume the set of states of the world is finite with cardinality
m. Each possible state ωj occurs with probability pj. The realizations of the principal's wealth
are denoted by W = (W1, ..., Wm). Note that p and W are treated as known data. We implement
a numerical algorithm to approximate a solution to the principal's problem when she evaluates
risk via the risk measure
̺(X) = − sup_{q∈Qλ} Σ_{j=1}^{m} X(ωj) pj qj,
where
Qλ := { q ∈ R^m_+ | p · q = 1, qj ≤ λ^{−1} }.
We also assume the set of agent types is finite with cardinality n, i.e., θ = (θ1, ..., θn). The
density of the types is given by M := (M1, ..., Mn). In order to avoid singular points in the
principal's objective function, we approximate the option's payoff function f(x) = (K − x)+ by
the differentiable function
T(x, K) = { 0,        if x ≤ K − ǫ;
            S(x, K),  if K − ǫ < x < K + ǫ;
            x − K,    if x ≥ K + ǫ, }
where
S(x, K) = x²/(4ǫ) + (ǫ − K)x/(2ǫ) + (K² − 2Kǫ + ǫ²)/(4ǫ).
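A minimal sketch of this smoothed payoff in Python (assuming only the definitions of T and S above; the smoothing width is a placeholder):

```python
# Smoothed positive-part payoff T(x, K) and its quadratic bridge S(x, K).
def S(x, K, eps):
    return x**2 / (4 * eps) + (eps - K) * x / (2 * eps) + (K**2 - 2 * K * eps + eps**2) / (4 * eps)

def T(x, K, eps=0.1):
    if x <= K - eps:
        return 0.0
    if x >= K + eps:
        return x - K
    return S(x, K, eps)

# T agrees with the kinked payoff outside the band [K - eps, K + eps] and is C^1 at the break points.
print(T(0.3, 0.5), T(0.5, 0.5), T(0.7, 0.5))   # 0.0, 0.025, 0.2
```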
The algorithm uses a penalized Quasi-Newton method, based on Zakovic and Pantelides [20],
to approximate a minimax point of
F(v, K, q) = − Σ_{i=1}^{m} Wi pi qi + (1/n) Σ_{j=1}^{n} Σ_{i=1}^{m} T(Kj − |Wi|) pi qi − (1/n) Σ_{j=1}^{n} Σ_{i=1}^{m} T(Kj − |Wi|) pi
             + (1/n) Σ_{i=1}^{n−1} ( vi − θi (v_{i+1} − vi)/(θ_{i+1} − θi) ) + (1/n) ( vn − (vn − v_{n−1})/(1 − θ_{n−1}) ),
where v = (v1, ..., vn) stands for the values of a convex, non-increasing function, K = (K1, ..., Kn)
denotes the vector of type-dependent strikes, and the derivatives v′(θi) are approximated by
v′(θi) = (v_{i+1} − vi)/(θ_{i+1} − θi).
The need for a penalty method arises from the fact that we face the equality constraints
v′(θ) = −Var[(K(θ) − |W|)+] and p · q = 1. In order to implement a descent method, these
constraints are relaxed and a penalty term is added. We denote by ng the total number of
constraints. The principal's problem is to find
min_{(v,K)} max_{q∈Qλ} F(v, K, q)   subject to   G(v, K, q) ≤ 0,
where G : R^{2n+m} → R^{ng} determines the constraints that keep (v, K) within the set of feasible
contracts and q ∈ Qλ. The Maple code for our procedure is given in the appendix for completeness.
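Independently of the Maple implementation in the appendix, the inner maximization over q ∈ Qλ for a fixed position is a small linear program whose optimizer simply loads the largest admissible weight λ^{−1} on the worst outcomes. The sketch below evaluates ̺(X) = −sup_{q∈Qλ} Σ_j X(ωj) pj qj this way; the position, probabilities and λ are illustrative placeholders (and the set Qλ requires λ ≤ 1 to be non-empty).

```python
import numpy as np

def discrete_risk(X, p, lam):
    """rho(X) = -sup { sum_j X_j p_j q_j : q >= 0, p.q = 1, q_j <= 1/lam }."""
    X, p = np.asarray(X, float), np.asarray(p, float)
    if lam > 1:
        raise ValueError("Q_lambda is empty for lambda > 1")
    order = np.argsort(X)          # worst outcomes first
    w = np.zeros_like(p)           # w_j = p_j * q_j
    remaining = 1.0
    for j in order:                # greedily fill mass p_j/lam on the worst states
        w[j] = min(p[j] / lam, remaining)
        remaining -= w[j]
        if remaining <= 0:
            break
    return -float(np.dot(X, w))

# Illustrative placeholders: two states, lambda = 0.5.
print(discrete_risk(X=[-1.0, -2.0], p=[0.5, 0.5], lam=0.5))   # 2.0
```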
Example 4.5 Let us illustrate the effects of risk transfer on the principal's position in two
models with five agent types and two states of the world. In both cases W = (−1, −2), θ =
(1/2, 5/8, 3/4, 7/8, 1) and λ = 1.1. The starting values v0, q0 and K0 are set to (4, 3, 2, 1, 0), (1, 1)
and (1, 1, 1, 1, 1), respectively.
i) Let p = (0.5, 0.5) and the types be uniformly distributed. The principal's initial evaluation
of her risk is 1.52. The optimal function v and strikes are:
V1      V2      V3      V4      V5
0.1055  0.0761  0.0501  0.0342  0.0195
K1      K2      K3      K4      K5
1.44    1.37    1.07    1.05    1.05
The Principal's valuation of her risk after the exchanges with the agents decreases to 0.2279.
ii) In this instance p = (0.25, 0.75) and M = (1/15, 2/15, 3/15, 4/15, 5/15). The principal's
initial evaluation of her risk is 1.825. The values for the discretized v and the type-dependent
strikes are:
Figure 1: Optimal solution for underwriting put options, Case 1. (a) The type-dependent strikes. (b) The optimal function v.
V1      V2      V3      V4      V5
0.0073  0.0045  0.0029  0.0026  0.0025
K1      K2      K3      K4      K5
1.27    1.16    1.34    0.11    0.12
The Principal’s valuation of her risk after the exchanges with the agents is 0.0922.
Figure 2: Optimal solution for underwriting put options, Case 2. (a) The type-dependent strikes. (b) The optimal function v.
5 Conclusions
In this paper we analyzed a screening problem where the principal's choice space is infinite dimensional. Our motivation was to present a nonlinear pricing scheme for over-the-counter financial
products, which the principal trades with a set of heterogeneous agents with the aim of minimizing the
exposure of her income to some non-hedgeable risk. In order to characterize incentive compatible
and individually rational catalogues, we have made use of U-convex analysis. To keep the problem tractable we have assumed the agents have mean-variance utilities, but this is not necessary
for the characterization of the problem. Considering more general utility functions is an obvious
extension of this work. Our main result is a proof of existence of a solution to the principal's
risk minimization problem in a general setting. The examples we have studied suggest that the
methodologies for approaching particular cases are highly dependent on the choice of risk measure, as well as on the kinds of contracts the principal is willing (or able) to offer. In most cases
obtaining closed form solutions is not possible and implementations must be done using numerical
methods. As work in progress we are considering agents with heterogeneous initial endowments
(or risk exposures), as well as a model that contemplates an economy with multiple principals.
A Coherent risk measures on L2
In this appendix we recall some properties and representation results for risk measures on L2
spaces; we refer to the textbook of Föllmer and Schied [12] for a detailed discussion of convex risk
measures on L∞ and to Cheridito and Tianhui [8] for risk measures on rather general state spaces.
Bäuerle and Müller [4] establish representation properties of law-invariant risk measures on Lp
spaces for p ≥ 1. We assume that all random variables are defined on some standard non-atomic
probability space (Ω, F, P).
Definition A.1 (i) A monetary measure of risk on L2 is a function ̺ : L2 → R ∪ {∞} such
that for all X, Y ∈ L2 the following conditions are satisfied:
• Monotonicity: if X ≤ Y then ̺(X) ≥ ̺(Y ).
• Cash Invariance: if m ∈ R then ̺(X + m) = ̺(X) − m.
(ii) A risk measure is called coherent if it is convex and homogeneous of degree 1, i.e., if the
following two conditions hold:
• Convexity: for all λ ∈ [0, 1] and all positions X, Y ∈ L2 :
̺(λX + (1 − λ)Y ) ≤ λ̺(X) + (1 − λ)̺(Y )
• Positive Homogeneity: for all λ ≥ 0,
̺(λX) = λ̺(X).
(iii) The risk measure is called coherent and law invariant, if, in addition,
ρ(X) = ρ(Y )
for any two random variables X and Y which have the same law.
(iv) The risk measure ̺ on L2 has the Fatou property if for any sequence of random variables
X1, X2, ... that converges in L2 to a random variable X we have
̺(X) ≤ lim inf_{n→∞} ̺(Xn).
Given λ ∈ (0, 1], the Average Value at Risk of level λ of a position Y is defined as
AV@Rλ(Y) := −(1/λ) ∫_0^λ qY(t) dt,
where qY(t) is the upper quantile function of Y. If Y ∈ L∞, then we have the following characterization:
AV@Rλ(Y) = sup_{Q∈Qλ} −E_Q[Y],
where
Qλ = { Q ≪ P | dQ/dP ≤ 1/λ }.
Proposition A.2 For a given financial position Y ∈ L2 the mapping λ 7→ AV@Rλ(Y) is decreasing in λ.
It turns out that the Average Value at Risk can be viewed as a basis for the space of all law-invariant, coherent risk measures with the Fatou property. More precisely, we have the following
result.
Theorem A.3 The risk measure ̺ : L2 → R is law-invariant, coherent and has the Fatou property if and only if ̺ admits a representation of the following form:
̺(Y) = sup_{µ∈M} ∫_0^1 AV@Rλ(Y) µ(dλ),
where M is a set of probability measures on the unit interval.
As a consequence of Proposition A.2 and Theorem A.3 we have the following corollary:
Corollary A.4 If ̺ : L2 → R is a law-invariant, coherent risk measure with the Fatou Property
then
̺(Y ) ≥ −E[Y ].
An important class of risk measures is the class of comonotone risk measures. Comonotone risk
measures are characterized by the fact that the risk associated with two positions whose payoffs
“move in the same direction” is additive.
Definition A.5 A risk measure ̺ is said to be comonotone if
̺(X + Y) = ̺(X) + ̺(Y)
whenever X and Y are comonotone, i.e., whenever
(X(ω) − X(ω′))(Y(ω) − Y(ω′)) ≥ 0   P-a.s.
Comonotone, law invariant and coherent risk measures with the Fatou property admit a
representation of the form
̺(Y) = ∫_0^1 AV@Rλ(Y) µ(dλ).
B Maple code for the example of Section 4.2.2
with(LinearAlgebra)
n := 5:
m := 2:
ng := 2*m+4*n+1:
This section constructs the objective function f and its gradient.
x := Vector(2*n, symbol = xs):
q := Vector(m, symbol = qs):
f := add(-G[j]*p[j]*q[j], j = 1..m)
     + add(add(T(x[i+n], W[j])*M[i]*p[j]*q[j], i = 1..n), j = 1..m)
     - add(add(T(x[i+n], W[j])*M[i]*p[j], i = 1..n), j = 1..m)
     + add((x[i] - t[i]*(x[i+1]-x[i])/(t[i+1]-t[i]))*M[i], i = 1..n-1)
     + x[n] - t[n]*(x[n]-x[n-1])*M[n]/(t[n]-t[n-1]):
TT := (x, K) -> 1/4*x^2/eps + 1/2*(eps-K)*x/eps + 1/4*(K^2 - 2*K*eps + eps^2)/eps:
T := (x, K) -> piecewise(x <= K - eps, 0, x < K + eps, TT(x, K), K + eps <= x, x - K):
gradfx := 0:
gradfq := 0:
for i from 1 to 2*n
do gradfx[i] := diff(f, x[i])
end do:
for i from 1 to m
do gradfq[i] := diff(f, q[i])
end do:
This section constructs the constraint function g and its gradient.
g := Vector(ng, symbol = tt):
for j from 1 to n
do g[j] := -x[j] end do:
for j from 1 to n-1
do g[j+n] := x[j+1]-x[j]
end do:
for i from 1 to n-1
do g[i+2*n-1] := add(T(x[i+n], W[j])^2*p[j], j = 1..m) - add(T(x[i+n], W[j])*p[j], j = 1..m)^2 + (x[i+1]-x[i])/(t[i+1]-t[i]) - eps2
end do:
g[3*n-1] := add(T(x[2*n], W[j])^2*p[j], j = 1..m) - add(T(x[2*n], W[j])*p[j], j = 1..m)^2 + (x[n]-x[n-1])/(t[n]-t[n-1]) - eps2:
for i from 1 to n-1
do g[i+3*n-1] := -add(T(x[i+n], W[j])^2*p[j], j = 1..m) + add(T(x[i+n], W[j])*p[j], j = 1..m)^2 - (x[i+1]-x[i])/(t[i+1]-t[i]) - eps2
end do:
g[4*n-1] := -add(T(x[2*n], W[j])^2*p[j], j = 1..m) + add(T(x[2*n], W[j])*p[j], j = 1..m)^2 - (x[n]-x[n-1])/(t[n]-t[n-1]) - eps2:
g[4*n] := add(p[i]*q[i], i = 1 .. m)-1+eps3:
g[4*n+1] := -add(p[i]*q[i], i = 1 .. m)-1-eps3:
for i from 1 to m
do g[i+4*n+1] := -q[i]
end do:
for i to m
do g[i+m+4*n+1] := q[i]-lambda
end do: gradgx := 0:
gradgq := 0:
for i from 1 to ng
do for j from 1 to 2*n
do gradgx[i, j] := diff(g[i], x[j])
end do:
end do:
for i from 1 to ng
do for j from 1 to m
do gradgq[i, j] := diff(g[i], q[j])
end do:
end do:
This section constructs the slackness structures.
e := Vector(ng, 1):
s := Vector(ng, symbol = si):
z := Vector(ng, symbol = zi):
S :=DiagonalMatrix(s):
Z := DiagonalMatrix(z):
This section initializes the variable and parameter vectors.
x := Vector(2*n, symbol = xs):
q := Vector(m, symbol = qs):
p :=Vector(m, symbol = ps):
W := Vector(m, symbol = ws):
G := Vector(m,symbol = gs):
t := Vector(n, symbol = ts):
M := Vector(n, symbol = ms):
chi := convert([x, q, s, z], Vector):
This section constructs the Lagrangian and its Hessian matrix.
F :=convert([gradfx+(VectorCalculus[DotProduct])(Transpose(gradgx), z),
gradfq-(VectorCalculus[DotProduct])(Transpose(gradgq), z),
(VectorCalculus[DotProduct])((VectorCalculus[DotProduct])(Z, S), e)-mu*e, g+s], Vector):
DF := 0:
for i from 1 to 2*n+m+2*ng
do for j from 1 to 2*n+m+2*ng
do DF[i, j] :=diff(F[i], chi[j])
end do:
end do:
This section inputs the initial values of the variables and the values of the parameters.
xinit := (4,3, 2, 1, 0, 1, 1, 1, 1, 1):
qinit := (1, 1):
sinit := (.1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1, .1):
zinit := (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1):
tinit := (1/2, 5/8, 3/4, 7/8,1):
pinit := (.5, .5):
ginit := (-1, -2):
minit := (1,1, 1, 1, 1):
winit := (1, 2):
tau := .1; mu := .1; rho := .5: lambda := 1.1: eps := .1: eps2 := .1: eps3 := .2:
xo := xinit; qo :=qinit; so := sinit; zo := zinit; po := pinit; go := ginit; mo := minit; wo := winit;
xs := xinit; qs := qinit; si := sinit; zi := zinit; ps := pinit; gs := ginit; ms := minit; ws := winit; ts := tinit:
The following section contains the executable code.
i := 0; j := 0
normF:=Norm(F,2):
while (normF > mu and j < 40)
do
printf(" Inner loop: #Iteration = %g \n", j);
printf(” - Solve Linear System (11) ...”);
d := LinearSolve(DF, -Transpose(F)):
printf("done\n");
printf(” - Update Params ...”);
alphaS:=min(seq(select(type,-s[k]/d[2*n+m+k],positive), k = 1 .. Dimension(s)));
alphaZ:=min(seq(select(type,-z[k]/d[2*n+m+ng+k],positive), k = 1 .. Dimension(z)));
alphamax:=min(alphaS,alphaZ):
alphamax:=min(tau*alphamax,1);
printf("done\n");
chiold := chi;
chinew := chiold+alphamax*d;
xs := chinew[1 .. 2*n]: qs := chinew[2*n+1 .. 2*n+m]:
si := chinew[2*n+m+1 .. 2*n+m+ng]: zi := chinew[2*n+m+ng+1 .. 2*n+m+2*ng]:
normF:=Norm(F,2);
printf("done\n");
j:=j+1;
end do:
References
[1] Armstrong, M.: Multiproduct Nonlinear Pricing, Econometrica, 64, 51-75, 1996.
[2] Barrieu, P. & N. El Karoui: Optimal Design of Derivatives in Illiquid Framework,
Quantitative Finance, 2, 1-8, 2005.
[3] Barrieu, P. & N. El Karoui: Inf-convolution of risk measures and optimal risk transfer, Finance and Stochastics, 9, 269-298, 2005.
[4] Bäuerle, N. & A. Müller: Stochastic Orders and Risk Measures: Consistency and
Bounds, Insurance: Mathematics & Economics,38, 132-148, 2006.
[5] Becherer, D.: Utility indifference Hedging and Valuation via Reaction Diffusion Systems, Proceedings of the Royal Society, Series A, 460, 27-51, 2004.
[6] Becherer, D.: Bounded Solutions to Backward SDE’s with Jumps for Utility Optimization and Indifference Hedging, Annals of Applied Probability, to appear.
[7] Carlier, G., Ekeland, I & N. Touzi: Optimal Derivatives Design for Mean-Variance
Agents under Adverse Selection, Preprint, 2006.
[8] Cheridito, P. & L. Tianhui: Monetary Risk Measures on Maximal Subspaces Of Orlicz
Classes, Preprint, 2006.
[9] Davis, M.: Pricing Weather Derivatives by Marginal Value, Quantitative Finance, 1,
305–308, 2001.
[10] Ekeland, I., Témam, R., Convex Analysis and Variational Problems, Classics in Applied
Mathematics, 28, SIAM, 1976.
[11] Filipović, D. & M. Kupper: Equilibrium and Optimality for Monetary Utility Functions under Constraints, Preprint, 2006.
[12] Föllmer, H. & A. Schied: Stochastic Finance. An Introduction in Discrete Time, de
Gruyter Studies in Mathematics, 27, 2004.
[13] Guesnerie, R.: A contribution to the Pure Theory of Taxation, Econometrica, 49, 33-64,
1995.
[14] Horst U. & M. Müller: On the Spanning Property of Risk Bonds Priced by Equilibrium, Mathematics of Operations Research, to appear.
[15] Horst U., Pirvu, T. & G. Nunes dos Reis: On Securitization, Market Completion
and Equilibrium Risk Transfer, Working Paper, 2007.
[16] Hu, Y., Imkeller, P. & M. Müller: Market Completion and Partial Equilibrium,
International Journal of Theoretical & Applied Finance, to appear.
[17] Jouini, E., Schachermeyer, W. & N. Touzi: Law Invariant Risk Measures have the
Fatou Property, Advances in Mathematical Economics 9, 49-71, 2006.
[18] Mussa M. & S. Rosen: Monopoly and Product Quality, Journal of Economic Theory,
18, 301-317, 1978.
[19] Rochet, J.-C. & P. Choné: Ironing, Sweeping and Multidimensional Screening, Econometrica, 66, 783-826, 1998.
[20] Zakovic, S. & C. Pantelides: An Interior Point Algorithm for Computing Saddle
Points of Constrained Continuous Minimax, Annals of Operations Research, 99, 59-77,
2000.
| 5 |
Fast Greedy Approaches for Compressive Sensing of Large-Scale Signals
Sung-Hsien Hsieh∗,∗∗, Chun-Shien Lu∗∗, and Soo-Chang Pei∗
∗ Graduate Inst. Comm. Eng., National Taiwan University, Taipei, Taiwan
∗∗ Institute of Information Science, Academia Sinica, Taipei, Taiwan
arXiv:1509.03979v2 [] 17 Mar 2016
Abstract—Cost-efficient compressive sensing is challenging
when facing large-scale data, i.e., data with large sizes. Conventional compressive sensing methods for large-scale data will
suffer from low computational efficiency and massive memory
storage. In this paper, we revisit well-known solvers called greedy
algorithms, including Orthogonal Matching Pursuit (OMP), Subspace Pursuit (SP), Orthogonal Matching Pursuit with Replacement (OMPR). Generally, these approaches are conducted by
iteratively executing two main steps: 1) support detection and 2)
solving least square problem.
To reduce the cost of Step 1, it is not hard to employ the sensing
matrix that can be implemented by operator-based strategy
instead of matrix-based one and can be speeded by fast Fourier
Transform (FFT). Step 2, however, requires maintaining and
calculating a pseudo-inverse of a sub-matrix, which is random
and not structural; thus, the operator-based strategy does not
work. To overcome this difficulty, instead of solving Step 2
by a closed-form solution, we propose a fast and cost-effective
least square solver, which combines a Conjugate Gradient (CG)
method with our proposed weighted least square problem to
iteratively approximate the ground truth yielded by a greedy
algorithm. Extensive simulations and theoretical analysis validate that the proposed method is cost-efficient and is readily
incorporated with the existing greedy algorithms to remarkably
improve the performance for large-scale problems.
Index Terms—Compressed/Compressive sensing, Greedy algorithm, Large-scale Data, Least square, Sparsity
Corresponding author: Chun-Shien Lu; e-mail: [email protected]
I. INTRODUCTION
In this section, we first briefly introduce the background of compressive sensing (CS) in Sec. I-A. Then, the existing CS recovery algorithms (for large-scale signals) are discussed in Sec. I-B. Finally, the overview and contributions of our proposed method are described in Sec. I-C, followed by the organization of the remainder of this paper.
A. Background
Compressive sensing (CS) [1][2][3] for sparse signals in achieving simultaneous data acquisition and compression has been extensively studied in the literature. CS is recognized to be composed of a fast encoder and a slow decoder.
Let x ∈ R^N denote a K-sparse 1-D signal to be sensed, let Φ ∈ R^{M×N} (M < N) represent a sampling matrix, and let y ∈ R^M be the measurement vector. At the encoder, a signal x is simultaneously sensed and compressed via Φ to obtain a so-called measurement vector y as:
y = Φx,    (1)
which is usually called a procedure of random projection. The measurement rate, defined as 0 < M/N < 1, indicates the compression ratio (without quantization) and is a major concern in many applications. [1][3] show that it is a good choice to design Φ as a Gaussian random matrix to satisfy either the mutual incoherence property (MIP) or the restricted isometry property (RIP). Moreover, sparsity is an inherent assumption made in compressed sensing to solve the underdetermined system in Eq. (1) due to M < N. Nevertheless, for real applications, natural signals are often not sparse in either the time or space domain but can be sparsely represented in a transform (e.g., discrete cosine transform (DCT) or wavelet) domain. Namely, x = Ψs, where Ψ is a transform basis (or dictionary) and s is a sparse representation with respect to Ψ. So, Eq. (1) is also rewritten as:
y = ΦΨs = As,    (2)
where A = ΦΨ ∈ R^{M×N}. We say that x is K-sparse if s contains only K non-zero entries (exactly K-sparse) or K significant components (approximately K-sparse).
At the decoder, the original signal x can be perfectly recovered by an intuitive solution to CS recovery, called ℓ0-minimization, which is defined as:
min_s ‖s‖_0   s.t.   ‖y − As‖_2 ≤ ǫ,    (3)
where ǫ is a tolerable error term. Due to M < N, this system is underdetermined and there exist infinitely many solutions. Thus, solving the ℓ0-minimization problem requires a combinatorial search and is NP-hard.
Alternative solutions to Eq. (3) are usually based on two strategies: convex programming and greedy algorithms. For convex programming, researchers [1][3] have shown that when M ≥ O(K log(N/K)) holds, solving ℓ0-minimization is equivalent to solving ℓ1-minimization, defined as:
min_s ‖s‖_1   s.t.   ‖y − As‖_2 ≤ ǫ.    (4)
Typical ℓ1-minimization models include Basis Pursuit (BP) [4] and Basis Pursuit De-Noising (BPDN) [5], with the computational complexity of recovery being polynomial. Greedy approaches, including Orthogonal Matching Pursuit (OMP) [6], CoSaMP [7], and Subspace Pursuit (SP) [8], utilize a greedy strategy for support detection first and then solve a least square problem to recover the original signal. The main difference among these greedy algorithms is how the support detection step is conducted.
Nevertheless, for large-scale signals (e.g., N > 2^20), both ℓ1-minimization and greedy algorithms suffer from high computational complexity and massive memory usage. Ideally, the memory costs at the encoder and decoder are expected to approximate O(M) and O(N), respectively, which are the minimum costs to store the original measurement vector y and signal x. If A is required to be stored completely, however, it will cost O(MN) bytes (e.g., when N = 2^20 and M = 2^18, the sensing matrix needs several terabytes). It often overwhelms the capability of existing hardware devices. In view of the incoming big data era, such a troublesome problem needs immediate attention. In this paper, we say that X ∈ R^{N1×N2×···×ND} is a kind of big data if N1 × N2 × · · · × ND is large enough or, more specifically, its size approaches the storage limit of hardware like a PC, notebook, and so on.
B. Related Work
The existing methods that can deal with compressive sensing of large-scale signals are discussed in this section. As
mentioned above, we focus on computational efficiency and
memory usage. Basically, our survey is conducted from the
aspects of encoder and decoder in compressive sensing. We
mainly discuss block-based, tensor-based, and operator-based
compressive sensing algorithms here.
1) Strategies at Encoder: From Eq. (1), we can see that both the storage (for Φ) and computation (for Φx) costs require O(MN) bytes and O(MN) operations, respectively. When the signal length becomes large enough, storing Φ and computing Φx become an obstacle.
In the literature, Gan [11] and Mun and Fowler [12] propose block-based compressive sensing techniques, wherein a large-scale signal is separated into several small block signals, which are individually sensed via the same but smaller sensing matrix. The structure of block sensing reduces both storage and computation costs to O(MN/B), where B is the number of blocks. Although block-based compressive sensing can deal with small blocks quickly and easily, it actually cannot work for the scenario of medical imaging, in that an image generated from the fast Fourier Transform (FFT) coefficients of an entire sectional view [18] violates the structure of block-based sensing.
Shi et al. [19] and Caiafa and Cichocki [9] consider the problem of large-scale compressive sensing based on tensors. In other words, the signal is directly sensed and reconstructed in the original (high) dimensional space instead of reshaping to 1-D. For example, a 2-D image X ∈ R^{√N×√N} is sensed via
Y = Φ1 X Φ2^T,    (5)
where Φ1 and Φ2 ∈ R^{√M×√N}, and Y ∈ R^{√M×√M}. This strategy is often called separable sensing [20], [21]. In this case, both the storage and computation costs are reduced to O(√(MN)). [10] further presents a closed-form solution for reconstruction from compressive sensing based on assuming the low-rank structure. It should be noted that since tensor-based approaches, in fact, change the classical sensing structure (i.e., y = Φx) of CS, the decoder no longer follows the conventional solvers like Eq. (4). Specifically, the measurements in tensor-based approaches form a tensor but conventional solvers only accept a one-dimensional measurement vector.
In addition to block-based and tensor-based approaches,
operator-based approaches are to design Φ as a deterministic
matrix or structurally random matrix, implemented by certain
fast operators. For example, Candes et al. [22] propose the
use of a randomly-partial Fourier matrix as Φ. In this case,
we can implement Φx by D (F F T (x)), where F F T (·) is the
function of fast Fourier transform (FFT) and D (·) denotes
a downsampling operator that outputs an M × 1 vector.
Thus, Φ is not necessarily stored in advance. In addition, the
computation cost also becomes O(N log N ), which especially
outperforms O(M N ) for large-scale signals because M is
positively proportional to N . Do et al. [23] further propose
a kind of random Gaussian-like matrices, called Structurally
Random Matrix (SRM), which benefits from operator-based
strategy and achieves reconstruction performance as good
as random Gaussian matrix. In sum, since operator-based
approaches follow the original CS structure, the decoder is
not necessary to be modified.
2) Strategies at Decoder: For block-based approaches
[11][12], each block can be individually recovered with low
computation cost and memory usage but incurs blocky effects
between boundaries of blocks. In [12], Mun and Fowler propose a method, called BCS-SPL, which further removes blocky effects by Wiener filtering. In addition to the incapability of sensing medical images like MRI, BCS-SPL is also not adaptive, in that the measurement rates are fixed for different blocks, ignoring the potential differences between smooth blocks that need lower measurement rates and complex blocks that require higher measurement rates.
For tensor-based compressive sensing, [9] develops a new
solver called N-way block OMP (N-BOMP). Though NBOMP is indeed faster than conventional CS solvers, its
performance closely depends on the unique sparsity pattern,
i.e., block sparsity, of an image. Specifically, block sparsity
states that the important components of an image are clustered
together in blocks. This characteristic seems to only naturally
appear in hyperspectral imaging. In [24], a multiway compressive sensing (MWCS) method for sparse and low-rank tensors
is proposed. MWCS achieves more efficient reconstruction,
but its performance relies heavily on tensor rank estimation,
which is NP-hard. A generalized tensor compressive sensing
(GTCS) method [25], which combines ℓ1 -minimization with
high-order tensors, is beneficial for parallel computation.
For operator-based compressive sensing algorithms, since
the conventional solvers, mentioned in the previous subsection,
still can be used, here we mainly review state-of-the-art convex
optimization algorithms focusing on the large-scale problem,
where only simple operations such as A and AT conducted
by operator are required.
Cevher et al. [26] point out that an optimization algorithm
based on the first-order method such as gradient descent
features nearly dimension-independent convergence rate and
is theoretically robust to the approximations of their oracles.
Moreover, the first-order method such as NESTA [27] often
involves the transpose of sensing matrix, which is easily
3
TABLE I
COMPARISON BETWEEN EACH ALGORITHM.
algorithms              | sensing strategy  | assumptions          | algorithm type  | storage
N-BOMP [9]              | tensor-based (2D) | block sparsity       | greedy          | O(√(MN))
[10]                    | tensor-based (2D) | low multilinear-rank | closed-form     | O(√(MN))
BCS [11]                | block-based (1D)  | -                    | Landweber-based | O(MN/B)
BCS-SPL [12]            | block-based (1D)  | -                    | Landweber-based | O(MN/B)
[13]                    | conventional (1D) | -                    | Armijo-based    | O(MN)
GPSR [14], SpaRSA [15]  | conventional (1D) | -                    | IST-based       | O(MN)
[16][17]                | conventional (1D) | -                    | FPC AS-based    | O(MN)
implemented by operator. Both GPSR [14] and SpaRSA
[15] are closely related to iterative shrinkage/threshold (IST)
methods and support the operator-based strategy. In addition,
[16][17][28] have shown that algorithms based on solving
fixed-point equation have fast convergence rate, which can
be combined with operator-based strategy too. For example,
Milzareket al. [13] further propose a globalized semismooth
Newton method, where partial DCT matrix is adopted as
the sensing matrix for fast sensing. But, it requires that
signals are sparse in the time/spatial domain leading to limited
applications.
3) Brief Summary of Related Works: Table I depicts the
comparisons among the aforementioned algorithms, where
storage is estimated based on non-operator version. If operator
can be used, the storage of storing a sensing matrix is not
required and, thus, is bounded by O(N ) for each row vector,
which is only related to the minimum requirement for storing
the reconstructed signal. Since the characteristic of compressive sensing states that CS encoder spends lower memory
and computation cost than CS decoder, when taking hardware
implementation in real world into consideration, tensor-based
methods are more complicated than others. For example, the
single-pixel camera designed in [29] uses a DMD array as
a row of A to sense x. By changing the pattern of DMD
array M times, the measurements are collected. This structure,
however, cannot support separable sensing that is commonly
used in tensor-based methods. Block-based CS methods do
not intrinsically overcome large-scale problems and lack convincing theoretical proof about complexity, performance, and
convergence analysis. Operator-based CS methods maintain
the original structures of CS encoder and decoder. Thus, most
of the existing fast algorithms for ℓ1 -minimization can be used
only if all of matrix operations can be executed in an operator
manner. Furthermore, they have strong theoretical validation
since ℓ1 -minimization is a well-known model and has been
developed for years. In fact, the operator-based strategy can
also be employed in tensor-based and block-based compressive
sensing methods to partially reduce their computation cost and
storage usage.
C. Contributions and Overview of Our Method
Up to now, it is still unclear how greedy algorithms can deal
with large-scale problems by utilizing operator-based strategy.
Although SparseLab releases the OMP code combined with
operator, the program still cannot deal with large-scale signals.
This challenge is the objective of this paper and, to our
knowledge, we are the first to explore this issue. In fact, our
idea can help all greedy algorithms to deal with large-scale
signals. We will discuss the problem in detail in Sec. II.
Generally, greedy algorithms are conducted by iteratively
executing two main stages: (a) support detection and (b)
solving least square problem with the known support. To
reduce the cost of support detection, we follow the common
strategy of adopting operator-based, instead of matrix-based,
design of a sensing matrix (e.g., [23]). Therefore, we no longer
discuss this step in this paper as it is not the focus of our
method.
For solving the least square problem, we propose a fast
and cost-effective solver by combining a Conjugate Gradient
(CG) method with a weighted least square model to iteratively
approximate the ground truth. In our method, the memory cost
of solving least square problem is reduced to O(N ), and the
computation cost of CG method is approximately O(N log N )
for finite floating point precision and O(KN log N ) for exact
precision.
It should be noted that although using CG to solve the least
square problem is not new, our extended use of CG brings
additional advantages. For example, Blumensath et al. [30]
proposed “Gradient Pursuits (GP)” in which the memory cost
is dominated by O(M N ) to save Φ (see Table 1 in [30]),
which cannot be stored explicitly for large-scale problem. Our
method extends GP to reduce the memory cost by using SRM
to avoid saving Φ. In addition, we reformulate a least square
problem used in GP into a weighted one and show both models
are equivalent. More specifically, solving weighted least square
problem only requires Φ and can benefit from fast computation
of operator-based approaches (as in SRM). Traditional least
square problem, however, involves sub-matrices of Φ and
cannot directly be conducted by fast operator.
On the other hand, “iteratively reweighted least-square
(IRLS)” was proposed in [31]. Though both IRLS and our
method involve weighting, they are totally different. First,
IRLS uses weighting to approximate ℓ1 -norm solution instead
of ℓ2 -norm solution in original least square problem while our
greedy method still solves ℓ2 -norm solution in the weighted
least square problem. Second, IRLS is not a greedy method.
Moreover, we conduct extensive simulations to demonstrate
that our method can greatly improve OMP [6], Subspace
pursuit (SP) [8], and OMPR [32] in terms of memory usage
and computation cost for large-scale problems.
4
D. Outline of This Paper
The rest of this paper is organized as follows. In Sec. II,
we describe the bottleneck of current greedy algorithms that
is to solve the least square problem. The proposed idea of
fast and cost-effective least square solver for speeding greedy
approaches along with theoretical analysis is discussed in Sec.
III. In Sec. IV, extensive simulations are conducted to show
that our method indeed can be readily incorporated with state-of-the-art greedy algorithms, including OMP, SP, and OMPR,
to improve their performance in terms of the memory and
computation costs. Finally, conclusions are drawn in Sec. V.
II. P ROBLEM S TATEMENT
In this paper, without loss of generality, we focus on a
signal X ∈ R^{N1×N2×···×ND}, where N = N1 and N = N1 × N2
are large enough with respect to D = 1 and D = 2, respectively. When
D = 2, the signal usually is reshaped to 1-D form in the
context of compressive sensing. Ideally, the memory cost of a
compressive sensing algorithm should be O(N ), which is the
minimum requirement for saving the original signal, x. The
computational cost, however, depends on an algorithm itself.
Since greedy algorithms share the same framework composed
of support detection and solving least square problem, we shall
focus on reducing the costs of these two procedures.
In this section, we discuss the core of proposed fast and
cost-effective greedy approach and take OMP as an example
for subsequent explanations. We shall point out the dilemma
in terms of memory cost and computation cost when handling
large-scale signals. In fact, both costs suffer from solving the
least square problem, which cannot be conducted by operator
directly.
First, we follow the notations mentioned in the previous
section and briefly introduce OMP [6] in a step-by-step manner
as follows.
1) Initialize the residual measurement r_0 = y and initialize the set of selected supports S_0 = {}. Let the initial iteration counter be i = 1. Let A_S be the sub-matrix of A, where A_S consists of the columns of A with indices belonging to the support set S. A^T is the transpose of A.
2) Detect supports (or positions of significant components) by seeking maximum correlation from
t = argmax_t̂ |(A^T r_{i−1})_t̂|,    (6)
and update the support set S_i = S_{i−1} ∪ {t}.
3) Solve a least square problem s_i = (A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y, and update the residual measurement r_i = y − A_{S_i} s_i.
4) If i = K, stop; otherwise, i = i + 1 and return to Step 2.
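To make the two-stage structure concrete, the following is a minimal matrix-based OMP sketch in Python (illustrative only; it follows the four steps above but is not the authors' SparseLab/Matlab code, and the test sizes are arbitrary).

```python
import numpy as np

def omp(A, y, K):
    """Minimal matrix-based OMP: K rounds of support detection + least squares."""
    M, N = A.shape
    r = y.copy()                                 # Step 1: residual and empty support
    support = []
    s = np.zeros(N)
    for _ in range(K):
        t = int(np.argmax(np.abs(A.T @ r)))      # Step 2: index of maximum correlation
        if t not in support:
            support.append(t)
        s_sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # Step 3: least squares
        r = y - A[:, support] @ s_sub            # updated residual
    s[support] = s_sub
    return s

# Tiny sanity check with a random Gaussian A and a 3-sparse signal.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N); s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
s_hat = omp(A, A @ s_true, K)
print(np.allclose(s_hat, s_true, atol=1e-6))
```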
Tropp and Gilbert [6] derive that the computational complexity of OMP is bounded by Step 2 (support detection) with O(MN) and Step 3 (solving the least square problem) with O(MK), and the memory cost is O(MN) when A is executed in a matrix form. In this paper, we call it matrix-based OMP (M-OMP). As mentioned in Sec. I-B, A can be designed to be an SRM conducted by operator. Nevertheless, operator is only helpful for certain operations such as A and A^T. For example, if A is a partial random Fourier matrix, Ax = D(FFT(x)) and A^T y = IFFT(ŷ) can be quickly calculated, where ŷ = [y^T, 0, ..., 0]^T (padded with N − M zeros) and IFFT(·) denotes the inverse FFT function. In this paper, we call it operator-based OMP (O-OMP).
Unfortunately, the key is that (A_{S_i}^T A_{S_i})^{−1} still cannot be quickly computed in terms of operator. Hence, it requires O(KM) to store A_{S_i}. Instead of calculating (A_{S_i}^T A_{S_i})^{−1} directly, by preserving the Cholesky factorization of (A_{S_{i−1}}^T A_{S_{i−1}})^{−1} at the (i − 1)th iteration for subsequent use, Step 3 is accelerated and the memory cost is reduced to O(K²). Moreover, it is worth mentioning that the sparsity K of natural signals is often linear in the signal length N. For example, the number of significant DCT coefficients for an image usually ranges from 0.01N to 0.1N. Also, CS has shown that M must be linear in K log N for successful recovery with high probability. Under the circumstance, O(N) is equivalent to O(K) and O(M) in the sense of big-O notation. We can see that when N is large, O(K²) dominates the memory cost because K² ≫ N. Thus, Step 3 makes OMP infeasible for recovering large-scale signals.
In fact, greedy algorithms share the same operations, i.e., A^T r_{i−1} in Step 2 and (A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y in Step 3, where the main difference is that the support set S_i is found in different ways, and they face the same dilemma. A simple experiment is conducted and results are shown in Fig. 1 to illustrate the comparison of memory usage among M-OMP, O-OMP, and the ideal cost (which is defined as N × 8 bytes, as required for the Double data type in Matlab). The OMP code running in Matlab was downloaded from SparseLab (http://sparselab.stanford.edu). Obviously, though the memory cost of O-OMP is reduced without storing A, it is still far higher than the ideal cost. It is also observed that both M-OMP and O-OMP exhibit the same slope. Specifically, M-OMP and O-OMP cost O(MN) and O(K²), respectively. As mentioned before, since M and K are linear in N, it means O(K²) = O(N²) and O(MN) = O(N²), such that both orders of memory cost of M-OMP and O-OMP are the same and are larger than that of the ideal case. Consequently, solving the least square problem becomes a bottleneck in greedy approaches. This challenging issue will be solved in this paper.
Fig. 1. Memory cost comparison among the matrix-based OMP, operator-based OMP, and ideal cost under M = N/4 and K = M/4.
III. PROPOSED METHOD FOR SPEEDING RECOVERY OF GREEDY ALGORITHMS
In this section, we first introduce how to determine a
sensing matrix, which can be easily implemented by operator.
Then, we reformulate the least square problem as a weighted
one, which is solved by conjugate gradient (CG) method
to avoid involving the sub-matrices, ASi or A†Si . We first
prove that the solutions to the least square problem and its
weighted counterpart are the same, and then prove that the
solutions to the weighted least square problem and its CG-based counterpart are the same.
A. Sensing Matrix
A random Gaussian matrix is commonly used as the sensing
matrix, as it and any orthonormal basis can pair together to
satisfy RIP and MIP in the context of compressive sensing.
The use of random Gaussian matrix as the sensing matrix,
however, leads to the overhead of storage and computation
costs. Although storage consumption can be overcome by
using a seed to generate a random Gaussian matrix, it still
encounters high computational cost.
In [23], Do et al. propose a framework, called Structurally
Random Matrix (SRM), defined as:
Φ = DF R,
(7)
where D ∈ RM×N is a sampling matrix, F ∈ RN ×N is an
orthonormal matrix, and R ∈ RN ×N is a uniform random permutation matrix (randomizer). Since the distributions between
a random Gaussian matrix and SRM’s Φ are verified to be
similar, we choose Eq. (7) as the sensing matrix for our use.
It should be noted that D and R can inherently be replaced by
operators but F depends on what kind of orthonormal basis is
used. It is obvious that any fast transform can be adopted as F .
In our paper, we set F to the Discrete Cosine Transform (DCT)
due to its fast computation and cost-effectiveness. There are
literatures discussing the design of sensing matrix but it is not
the focus of our study here.
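As an illustration of the operator-based strategy (a sketch under the stated choice of F as a DCT; the seed, sizes, sampled rows, and permutation are arbitrary, not the paper's), Φx and Φ^T y can be applied without ever forming Φ:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N, M = 1 << 12, 1 << 10
perm = rng.permutation(N)                 # R: uniform random permutation (randomizer)
rows = rng.choice(N, M, replace=False)    # D: random row selection

def Phi(x):                               # Phi x = D F R x
    return dct(x[perm], norm='ortho')[rows]

def PhiT(y):                              # Phi^T y = R^T F^T D^T y
    z = np.zeros(N); z[rows] = y
    out = np.empty(N); out[perm] = idct(z, norm='ortho')
    return out

# Adjoint check: <Phi x, y> == <x, Phi^T y>
x, y = rng.standard_normal(N), rng.standard_normal(M)
print(np.allclose(Phi(x) @ y, x @ PhiT(y)))   # True
```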
B. Reformulating Least Square Problem: Weighted Least Square Problem
In Sec. II, we describe that the bottleneck of greedy algorithms is the least square problem. To solve the problem, it is reformulated as a weighted least square problem in our method. To do that, we first introduce a weighted matrix W ∈ R^{N×N} defined as:
W_{S_i}[j, j] = 1 if j ∈ S_i, and 0 if j ∉ S_i,    (8)
where, without loss of generality, S_i = {1, 2, ..., i} denotes a support set at the i-th iteration and W_{S_i}[j, j] is the (j, j)th entry of W_{S_i}. As can be seen in Eq. (8), the weight matrix W is designed to leave the entries on the support unchanged.
Then, we prove that both solutions of the least square
problem and its weighted counterpart are the same.
Theorem 1. Suppose the sub-matrix A_{S_i} ∈ R^{M×K} of A has full column rank with support set S_i. Let s_i ∈ R^K be the solution to the least square problem:
s_i = argmin_ŝ ‖y − A_{S_i} ŝ‖_2,    (9)
and let θ_i ∈ R^N be the solution to the weighted least square problem:
θ_i = argmin_θ̂ ‖y − A W_{S_i} θ̂‖_2.    (10)
We have
s_i[j] = θ_i[j],   for 1 ≤ j ≤ i.
Proof. Let θ* = [s_i^T 0]^T = [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T be a solution minimizing ‖y − A W_{S_i} θ*‖_2. Then {θ* + v | v ∈ Null(A W_{S_i})} is the solution set of Eq. (10). Since A_{S_i} has full column rank with rank(A_{S_i}) = K, we have Null(A W_{S_i}) = span(e_{i+1}, e_{i+2}, ..., e_N), where e_i is a standard basis vector. Thus, no matter what v is, the first i entries of θ* + v are invariant. We complete the proof.
C. Reformulating Weighted Least Square as CG-based
Weighted Least Square Problem
We can see from Eq. (10) that the introduction of the weighted matrix involves A W_{S_i} instead of the sub-matrix A_{S_i}, so that A can be calculated by fast operator. Nonetheless, the closed-form solution of Eq. (10) is [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T and still faces the difficulty that (A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T cannot be easily implemented by operator. Instead of seeking closed-form solutions, we aim to explore first-order methods (e.g., gradient descent), which have the following advantages: (1) the operations only involve A and A^T instead of the pseudo-inverse of A, (2) the convergence rate is nearly dimension-independent [26], and (3) if A is a sparse matrix, the computation cost can be further reduced. Conjugate gradient (CG) [33][34] is a well-known first-order method to numerically approximate the solution of a symmetric, positive-definite or positive-semidefinite system of linear equations. Thus, CG benefits from the advantages of the first-order method. However, the matrix A W_{S_i} in Eq. (10) is not symmetric. Thus, we reformulate Eq. (10) in terms of CG as follows. We will prove that both the solutions to the weighted least square problem and the CG-based weighted least square problem are the same.
Theorem 2. Suppose the sub-matrix A_{S_i} ∈ R^{M×K} of A has full column rank with support set S_i. Let θ_i ∈ R^N be the solution to the weighted least square problem defined in Eq. (10). Let θ̃_i ∈ R^N be the solution to the CG-based weighted least square problem reformulated from Eq. (10) as:
θ̃_i = argmin_θ̂ ‖W_{S_i}^T A^T (y − A W_{S_i} θ̂)‖_2.    (11)
Then
θ_i[j] = θ̃_i[j],   for 1 ≤ j ≤ i.
Proof. We first note that W_{S_i}^T A^T A W_{S_i} is symmetric, which meets the requirement of CG. Since [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T is an optimal solution for both Eq. (10) and Eq. (11), the solution set of Eq. (11) can be expressed as:
{ [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T + v | A W_{S_i} v ∈ Null(W_{S_i}^T A^T) }.
It should be noted that Null(W_{S_i}^T A^T) = Null(A_{S_i}^T). In addition, A W_{S_i} v ∈ C(A W_{S_i}), where C(A W_{S_i}) denotes the column space of A W_{S_i} and C(A W_{S_i}) = C(A_{S_i}). Since C(A_{S_i}) ∩ Null(A_{S_i}^T) = {0}, it implies A W_{S_i} v = 0. In other words, v ∈ Null(A W_{S_i}). Due to Null(A W_{S_i}) = span(e_{i+1}, e_{i+2}, ..., e_N), the first i entries of [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T + v are invariant. Similarly, the solution set of Eq. (10) is
{ [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T + v | v ∈ Null(A W_{S_i}) }.
The first i entries of [ ((A_{S_i}^T A_{S_i})^{−1} A_{S_i}^T y)^T  0 ]^T + v are also invariant. We complete the proof.
So far, we prove that, with correct support detection, the optimal solution to the CG-based weighted least square problem
in Eq. (11) is equivalent to that to the original least square
problem in Eq. (9). In addition, the matrix WSTi AT AWSi
in Eq. (11) is symmetric and can quickly be solved by CG
method. Nevertheless, [34] points out that the system of linear
equations with a positive-semidefinite matrix diverges unless
some conditions are satisfied. Thus, Theorem 3 further shows
the condition of convergence.
Theorem 3. [34] If WSTi AT y ∈ C(WSTi AT AWSi ), CG
method converges but the solution is not unique.
Now, we check whether the CG-based weighted least square problem converges. Again, let S_i = {1, 2, ..., i} denote a support set. Then, we have W_{S_i}^T A^T y = [(A_{S_i}^T y)^T  0]^T and C(W_{S_i}^T A^T A W_{S_i}) = C( [A_{S_i}^T A_{S_i} ; 0] ). Because A_{S_i}^T A_{S_i} ∈ R^{i×i} is full rank, the first i entries of W_{S_i}^T A^T y must be spanned by the basis of A_{S_i}^T A_{S_i}. The remaining N − i entries are 0 and trivial. Thus, it implies that the CG part of our CG-based weighted least square solver satisfies Theorem 3 and converges.
D. Speeding Orthogonal Matching Pursuit and Complexity
Analysis
In the previous section, we describe the proposed CG-based weighted least square solver. In this section, we first show that the CG-based solver can be implemented easily by operator. Then, we combine it with OMP as a new paradigm to achieve fast OMP. Moreover, we discuss the convergence rate of CG and derive the computation complexity of the proposed operator-based OMP via CG (dubbed CG-OMP). Finally, we conclude that the memory cost of CG-OMP achieves the ideal O(N) if Ψ is also conducted by operator.
Algorithm 1 describes the proposed CG-OMP method (Lines 01 - 10), which employs a CG technique (Lines 11 - 23).
Algorithm 1 Proposed Orthogonal Matching Pursuit
Input: y, A, K; Output: s_K;
Initialization: i = 1, r_0 = y, s_0 = A^T y, S_0 = {};
01. function Proposed Operator-based OMP()
02. for i = 1 to K
03.   t = argmax_t̂ (A^T r_{i−1})_t̂;
04.   S_i = S_{i−1} + t;
05.   Assign W_{S_i} according to Eq. (8);
06.   s_i = CG(A, W_{S_i}, y, i);
07.   r_i = y − A s_i;
07.   i = i + 1;
08. end for
09. Return: s_K;
10. end function
11. function [ŝ] = CG(A, W_{S_i}, y, i)
12. b = W_{S_i}^T A^T y, H = W_{S_i}^T A^T A W_{S_i};
13. d_0 = r_0 = b, ŝ = 0;
14. j = 0;
15. while (‖r_j‖_2 > ξ)
16.   α_j = (r_j^T r_j) / (d_j^T H d_j);
17.   ŝ = ŝ + α_j d_j;
18.   r_{j+1} = r_j − α_j H d_j;
19.   β_{j+1} = (r_{j+1}^T r_{j+1}) / (r_j^T r_j);
20.   d_{j+1} = r_{j+1} + β_{j+1} d_j;
21.   j = j + 1;
22. end while
23. Return: ŝ;
24. end function
It is worth mentioning that ξ in Line 15 controls the precision of the CG method. If ξ = 0, it means exact precision, such that the output of the CG method is equal to the least square solution. If ξ > 0, the CG method attains finite precision. However, the result with finite precision is not necessarily worse than that with exact precision, especially under noisy interference, as later discussed in the 4-th paragraph of Sec. IV-C.
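For concreteness, below is a minimal Python sketch of the CG routine (Lines 11 - 23) applied to the weighted normal equations H ŝ = b with H = W A^T A W; the operators A and A^T are passed as functions so no matrix is formed. It is illustrative only (the paper's experiments use Matlab), and the tolerance, seed, and dimensions are placeholders.

```python
import numpy as np

def cg_weighted_ls(A_op, At_op, y, support, N, xi=1e-10, max_iter=None):
    """Solve min ||W A^T (y - A W s)||_2 by CG, with W zeroing entries outside `support`."""
    mask = np.zeros(N); mask[list(support)] = 1.0        # diagonal of W_{S_i}
    H = lambda v: mask * At_op(A_op(mask * v))            # H v = W A^T A W v
    b = mask * At_op(y)                                   # b = W A^T y
    s = np.zeros(N)
    r = b.copy(); d = r.copy()
    for _ in range(max_iter or len(support)):             # at most K iterations (cf. Theorem 5)
        if np.linalg.norm(r) <= xi:
            break
        Hd = H(d)
        alpha = (r @ r) / (d @ Hd)
        s = s + alpha * d
        r_new = r - alpha * Hd
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return s

# Tiny check with an explicit Gaussian A (the operators are plain matrix products here).
rng = np.random.default_rng(1)
M, N = 40, 100
A = rng.standard_normal((M, N)) / np.sqrt(M)
support = [3, 17, 58]
s_true = np.zeros(N); s_true[support] = [1.0, -2.0, 0.5]
y = A @ s_true
s_hat = cg_weighted_ls(lambda v: A @ v, lambda u: A.T @ u, y, support, N)
print(np.allclose(s_hat[support], s_true[support], atol=1e-6))   # True
```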
Now we check whether all matrix operations can be implemented by operators in Algorithm 1 in the following.
• W_{S_i} ∈ R^{N×N}: W_{S_i} is a diagonal matrix. Thus, applying W_{S_i} x is equivalent to assigning x[j] = 0 for j ∉ S_i. The memory cost is O(K) to store the indices of the support set and the computation cost is O(N).
• Φ = DFR ∈ R^{M×N}: Dx is equal to randomly choosing M indices from the N entries in x. Fx = DCT(x), where the DCT can be speeded by FFT. Rx is equal to randomly permuting the indices of the vector x. The memory cost is bounded by O(N) in order to store the sequence of the random permutation, and the computation cost is O(N log N).
• Ψ ∈ R^{N×N}: If Ψ is a deterministic matrix, it does not need to be stored. Thus, the memory cost of Ψx is O(N). The computational cost depends on whether Ψx can be speeded up. In the worst case, it costs O(N²).
Other operations such as r_j^T r_j (Line 16) only involve multiplications between vectors. Thus, both memory cost and computation cost are bounded by O(N).
From the above analysis, one can see that the computation
cost of Algorithm 1 is mainly bounded by Ψ. We discuss some
applications below, where the computation cost involving Ψ
is low. For compressive sensing of images, wavelet transform
is often chosen as Ψ such that Ψx costs O(N ). For magnetic
resonance imaging (MRI), partial Fourier transform is selected
as the component F of the sensing matrix expressed as DF R.
In this case, Ψ is I in order to satisfy MIP or RIP, and costs
nothing. Spectrum sensing is another application, where Ψ is
a discrete Fourier transform matrix done with O(N log N ).
Now, the total cost of Algorithm 1 is discussed. Both the
computation and memory costs of Lines 1-10 except Line 6
(CG method) will be O(N log N ) and O(N ), respectively. As
for the memory cost of CG method, it needs O(N ). Therefore,
the total memory cost of Algorithm 1 is bounded by O(N ).
In addition, the computation cost is related to two factors,
i.e., the number of iterations to converge in CG and Ψ. They
are further discussed as follows.
Theorem 4. (Theorem 2.2.3 in [35]) Let H be symmetric and
positive-definite. Assume that there are exactly k < N distinct
eigenvalues in H. Then, CG terminates in at most k iterations.
In our case, H = WSTi AT AWSi is positive-semidefinite
instead of positive-definite. Thus, we derive the following
theorem.
Theorem 5. Given b = WSTi AT y and H = WSTi AT AWSi ,
solving Eq. (11) requires the number of iterations at most K
in CG, where K is the sparsity of an original signal.
Proof. Without loss of generality, let support set Si =
{1, 2, ..., i}. We start from another optimization problem:
s̄_i = argmin_ŝ ‖A_{S_i}^T (y − A_{S_i} ŝ)‖_2.    (12)
Following the same skill in Theorem 2, s̄i is a unique and
optimal solution to both Eq. (9) and Eq. (12). Thus, s̄i is also
the solution of Eq. (11) for the first i entries. Furthermore,
ATSi ASi is non-singular such that ATSi ASi is a positive-definite
matrix and has at most i distinct eigenvalues. From Theorem
4, solving Eq. (12) requires at most i iterations. Then, we
have b = W_{S_i}^T A^T y = [(A_{S_i}^T y)^T  0]^T and H = W_{S_i}^T A^T A W_{S_i} = [ A_{S_i}^T A_{S_i}  0 ; 0  0 ] in Eq. (11). When we only take the first
i entries of b and the left-top i × i submatrix of H into
consideration, it is equivalent to solving Eq. (12). This fact can
be checked trivially by comparing each step of CG for both
optimization problems in Eq. (11) and Eq. (12). Thus, the first
i entries of ŝ in Eq. (11) is updated in the same manner with
that of s̄ in Eq. (12). The remaining N − i entries of ŝ are
unrelated to convergence because the N − i entries of H ŝ are
zero. In sum, the required number of iterations to converge
in Eq. (11) is identical to that in Eq. (12). Hence, solving
Eq. (11) also requires at most i iterations. Since i ≤ K, the
number of iterations is at most K. We complete the proof.
Moreover, for each iteration in CG, the operation, Hdj ,
dominates the whole computation cost. It is obvious that if Ψ
can be executed with O(N log N ) or even lower computation
complexity, Hdj costs O(N log N ) since H involves Ψ,
which spends O(N log N ). Note that the total computation
complexity of CG-OMP will be O(N K 2 log N ), where K 2
comes from the outer loop in OMP, which needs O(K), and
solving Eq. (11) that requires the number of iterations at most
K in CG, as proved in Theorem 5. On the other hand, if, under
the worst case, Ψ costs O(N 2 ) operations, the computation
complexity of CG-OMP will be O(N 2 K 2 ). For applications
that accept finite-precision accuracy instead of exact precision,
CG [33] requires fewer steps (≤ K) to achieve approximation.
Under the circumstance, the complexity of CG-OMP nearly
approximates O(N K log N ) and O(N 2 K) operations for Ψ
with complexity O(N log N ) and O(N 2 ), respectively.
Consequently, a reformulation for solving a least square
problem is proposed based on CG such that the new matching
pursuit methodology can deal with large-scale signals quickly.
It should be noted that, in the future, CG may be substituted
with other first order methods that outperform CG. The
proposed idea can also be readily applied to other greedy
algorithms to enhance their performance.
E. Strategies for Reducing the Cost of Ψ
We further consider that if Ψ is a learned dictionary, it will become a bottleneck for operator-based algorithms since it requires O(N²) for storage. To overcome this difficulty, Ψ should be learned in a tensor structure. Let x = vec(X), where X ∈ R^{√N×√N} and vec(·) is a vectorization operator. That is, a two-dimensional array is reshaped to a one-dimensional vector. Then, we can learn a 2D dictionary such that x = Ψs = vec(Ψ1 S Ψ2^T) with s = vec(S) and Ψ = Ψ2 ⊗ Ψ1 (⊗ is a Kronecker product). Under the circumstance, all operations in CG involving ΦΨs can be replaced by Φ vec(Ψ1 S Ψ2^T). Moreover, both Ψ1 and Ψ2 only require O(N) in terms of memory cost. In the literature, the existing algorithms for 2D separable dictionary learning include [19][36][37].
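A small sanity check of this separable structure (illustrative only; column-major vectorization is assumed so that vec(Ψ1 S Ψ2^T) = (Ψ2 ⊗ Ψ1) vec(S), and the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                    # X is n x n, so N = n * n
Psi1, Psi2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
S = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order='F')     # column-major vectorization

lhs = vec(Psi1 @ S @ Psi2.T)             # never forms the N x N dictionary
rhs = np.kron(Psi2, Psi1) @ vec(S)       # explicit Kronecker dictionary
print(np.allclose(lhs, rhs))             # True
```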
IV. E XPERIMENTAL R ESULTS
In this section, we conduct comparisons among O-OMP,
M-OMP, and CG-OMP in terms of the memory cost and
computation cost. The code of OMP was downloaded from
SparseLab (http://sparselab.stanford.edu). We have also applied the proposed fast and cost-effective least square solver
to SP and OMPR in order to verify if our idea can speed the
family of matching pursuit algorithms. For SP and OMPR, we
implemented the corresponding original matrix-based versions (M-OMPR and M-SP), original operator-based versions (O-OMPR and O-SP), and the proposed CG-based versions (CG-OMPR and CG-SP). It should be noted that although both SP and OMPR work well for incrementally adding an index to the support set like OMP, they are not accelerated by Cholesky factorization.
A. Simulation Setting
The simulations were conducted in a Matlab R2012b environment with an Intel CPU Q6600 and 4 GB RAM under Microsoft Win7 (64 bits).
The model for the measurement vector in CS is y = As + η, where η is additive Gaussian noise with standard deviation σ_η. The input signal s was produced via a Gaussian model as:
s ∼ p N(0, σ_on²),    (13)
which was also adopted in [38]. In Eq. (13), p is the probability of the activity of a signal and controls the number of non-zero entries of x. Sparsity K is defined to be K = pN. σ_on is the standard deviation of the input signal. Φ is designed from SRM and Ψ is chosen to be a discrete cosine transform. In the following experiments, M = N/4, K = M/4, p = 0.0625, σ_on = 1, and σ_η = 0.01.
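A minimal sketch of this test-signal model (illustrative; the seed and the use of a dense Gaussian A in place of the SRM/DCT pair are simplifications, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1 << 12
M, p, sigma_on, sigma_eta = N // 4, 0.0625, 1.0, 0.01

# Bernoulli-Gaussian signal: each entry is active with probability p, amplitude N(0, sigma_on^2).
active = rng.random(N) < p
s = np.where(active, rng.normal(0.0, sigma_on, N), 0.0)

# Measurements y = A s + eta (a dense Gaussian A stands in for the SRM-based A here).
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ s + rng.normal(0.0, sigma_eta, M)
print(int(active.sum()), y.shape)   # roughly K = p*N active entries, y in R^M
```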
B. Memory Cost Comparison
Fig. 2 shows the comparison in terms of memory cost vs.
signal length N . Since the matrix-based algorithms (M-OMP,
M-OMPR, and M-SP) run out of memory, their results are not
shown in Fig. 2 (note that the result regarding matrix-based
OMP can be found in Fig. 1). First, we can observe from
Fig. 2 that O-OMP, O-OMPR, and O-SP still require about
O(N 2 ) and fail to work when N > 218 . Second, in contrast
with O-OMP, O-OMPR, and O-SP, although CG-OMP, CG-OMPR, and CG-SP need more memory than the ideal
cost, which is O(N), their slopes are nearly identical, which
seems to imply that the CG-based versions incur larger Big-O
constants. Thus, the proposed idea of fast and cost-effective
least square solver is readily incorporated with the existing
greedy algorithms to improve their capability of handling
large-scale signals.
Fig. 2. Memory cost vs. signal length, where M = N/4 and K = M/4 (p = 0.0625).
C. Computation Cost Comparison
Before illustrating the computation cost comparison, we first discuss the convergence condition of CG in Algorithm 1 as follows: 1) For exact precision, ‖r_j‖_2 = 0 is set as the stopping criterion. As described in Theorem 5, it costs at most K iterations to converge. 2) With inexact or finite precision, we set ‖r_j‖_2 ≤ ξ with ξ > 0. Though the precision is finite, the signals, in practice, are interfered with noise, and solutions with finite precision are adequate. Under the circumstance, the number of iterations required to converge will be significantly decreased.
Figs. 3(a), (b), and (c) show the computation cost vs.
signal length for OMP, OMPR, and SP, respectively, under
the condition that the precision of CG was set to be exact.
In other words, we fix the same reconstruction quality for all
comparisons in Fig. 3 and discuss the computation costs for
these different versions of algorithms. It should be noted that
some curves are cut because of running out of memory.
From Fig. 3, it is observed that the operator-based strategy
(denoted with dash curves) or our proposed CG-based method
(denoted with solid curves) can effectively reduce the order of
computation cost in comparison with the matrix-based strategy
(denoted with solid-star curves). More specifically, in Fig.
3(a), it is noted that M-OMP is only faster than CG-OMP for
N ≤ 2^14 due to a smaller Big-O constant. However, CG-OMP
outperforms M-OMP in the end since the order of computation
complexity of CG-OMP is lower than that of M-OMP. In
particular, such improvements are significant for large-scale
signals (with large N ). Moreover, in Figs. 3(b) and (c), OOMPR and O-SP have the same orders with M-OMPR and
M-SP because no Cholesky factorization is used.
On the other hand, when finite precision is considered, we examine two settings, ξ = 10^-5 and ξ = 10^-10, to verify that finite precision is adequate in the presence of noise. Taking exact precision as the baseline, the difference in reconstruction SNR between exact precision and ξ = 10^-5 is about ±0.1 dB. Similarly, the difference between exact precision and ξ = 10^-10 is about ±0.01 dB. Tables II, III, and IV further illustrate the comparisons of computation costs under different precisions. The precision setting ξ = 10^-5 is about four times faster than exact precision while sacrificing only about ±0.1 dB of reconstruction quality, which is acceptable for many applications. In fact, the performance is occasionally better because exact precision may lead to over-fitting.
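The quoted SNR gaps can be reproduced with a comparison of the following form; the exact SNR definition used in the experiments is not stated, so the standard reconstruction SNR is assumed here.

import numpy as np

def reconstruction_snr_db(s, s_hat):
    # SNR = 10 log10(||s||^2 / ||s - s_hat||^2)
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s - s_hat) ** 2))

# delta_db = reconstruction_snr_db(s, s_hat_exact) - reconstruction_snr_db(s, s_hat_finite)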
Fig. 2. Memory cost vs. signal length, where M = N/4 and K = M/4 (p = 0.0625). (Memory cost in megabytes is plotted against log2 N from 16 to 24; curves shown: Ideal Cost, CG-OMP, CG-OMPR, CG-SP, O-OMP, O-OMPR, O-SP.)
V. CONCLUSIONS
The bottleneck of greedy algorithms in the context of compressive sensing is solving the least square problem, in particular when facing large-scale data. In this paper, we address this challenging issue and propose a fast but cost-effective least square solver. Our solution has been theoretically proved and can be readily incorporated with existing greedy algorithms to improve their performance by significantly reducing computation complexity and memory cost. Case studies combining our method with OMP, SP, and OMPR have been conducted and have shown promising results.
C. Computation Cost Comparison
Before illustrating the computation cost comparison, we
first discuss the convergence condition of CG in Algorithm
1 as follows: 1) For exact precision, ||r_j||_2 = 0 is set as the stopping criterion. As described in Theorem 5, it takes at most K iterations to converge. 2) With inexact or finite precision, we set ||r_j||_2 ≤ ξ with ξ > 0. Though the precision is finite, the signals in practice are corrupted by noise, so solutions with finite precision are adequate. Under this circumstance, the number of iterations required to converge is significantly decreased.
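A minimal sketch of such a CG least-squares step is given below, assuming that the residual r_j monitored by the stopping rule is that of the normal equations restricted to the current support; Algorithm 1 in the paper may differ in these details, and the function name is ours.

import numpy as np

def cg_least_squares(A_S, y, xi=0.0, max_iter=None):
    # Solve min_x ||y - A_S x||_2 by CG on the normal equations A_S^T A_S x = A_S^T y.
    # Stops when ||r_j||_2 <= xi, or after K = A_S.shape[1] iterations (exact precision).
    K = A_S.shape[1]
    max_iter = K if max_iter is None else max_iter
    x = np.zeros(K)
    b = A_S.T @ y            # right-hand side of the normal equations
    r = b.copy()             # residual r_j = b - A_S^T A_S x_j
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if xi >= np.sqrt(rs):
            break
        Ap = A_S.T @ (A_S @ p)   # never form A_S^T A_S explicitly
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# x_S = cg_least_squares(A[:, support], y, xi=1e-5)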
VI. ACKNOWLEDGMENT
This work was supported by Ministry of Science and
Technology, Taiwan, ROC, under grants MOST 104-2221-E001-019-MY3 and NSC 104-2221-E-001-030-MY3.
R EFERENCES
[1] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[2] R. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine,
vol. 24, no. 4, pp. 118–121, 2007.
TABLE II
THE COMPUTATIONAL COST (IN SECONDS) OF CG-OMP UNDER DIFFERENT PRECISIONS.
N       Exact precision   ||r_j||_2 <= 10^-10   ||r_j||_2 <= 10^-5
2^11    1.2               0.6                   0.4
2^12    4.7               2.3                   1.8
2^13    16.2              8.7                   7.0
2^14    59.8              35.5                  26.9
2^15    241.0             131.3                 103.6
2^16    980.3             564.7                 432.1
2^17    3901.7            2143.9                1662.8
2^18    15301.4           8341.3                6479.2
TABLE III
THE COMPUTATIONAL COST (IN SECONDS) OF CG-OMPR UNDER DIFFERENT PRECISIONS.
N       Exact precision   ||r_j||_2 <= 10^-10   ||r_j||_2 <= 10^-5
2^11    0.2               0.1                   0.06
2^12    0.3               0.2                   0.09
2^13    0.6               0.3                   0.2
2^14    1.0               0.5                   0.3
2^15    1.9               1.0                   0.6
2^16    4.1               2.2                   1.3
2^17    10.5              4.8                   2.9
2^18    23.6              10.1                  6.0
2^19    51.8              23.6                  13.1
2^20    111.3             49.8                  27.7
2^21    232.1             113.9                 60.3
2^22    495.3             249.3                 131.3
2^23    1109.1            532.3                 293.4
2^24    2392.3            1130.4                643.6
TABLE IV
THE COMPUTATIONAL COST (IN SECONDS) OF CG-SP UNDER DIFFERENT PRECISIONS.
N       Exact precision   ||r_j||_2 <= 10^-10   ||r_j||_2 <= 10^-5
2^11    0.3               0.2                   0.09
2^12    0.5               0.3                   0.2
2^13    0.9               0.5                   0.4
2^14    1.5               0.8                   0.6
2^15    3.0               1.6                   0.9
2^16    6.2               3.6                   2.1
2^17    16.6              7.8                   5.2
2^18    37.9              17.3                  10.0
2^19    79.4              36.1                  22.5
2^20    173.2             78.3                  49.1
2^21    359.2             177.6                 109.2
2^22    740.6             382.9                 241.6
2^23    1581.3            784.5                 513.2
2^24    3195.4            1706.3                1096.8
[3] E. J. Candes and M. B. Wakin, “An introduction to compressive
sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30,
2008.
[4] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition
by basis pursuit,” SIAM Journal of Scientific Computing, vol. 20, no. 1,
pp. 33–61, 1998.
[5] D. L. Donoho and M. Elad, “Maximal sparsity representation via
minimization,” Proceedings of the National Academy of Sciences, vol.
100, no. 5, pp. 2197–2202, 2003.
[6] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on
Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[7] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from
incomplete and inaccurate samples,” Applied and Computational
Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[8] W. Dai, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory, vol. 55, no. 5, pp.
2230–2249, 2009.
[9] C.F. Caiafa and A. Cichocki, “Computing sparse representations of
multidimensional signals using kronecker bases,” Neural Comput, vol.
25, no. 1, pp. 186 – 220, 2013.
[10] C.F. Caiafa and A. Cichocki, “Stable, robust and super fast reconstruction of tensors using multi-way projections,” IEEE Transactions on
Signal Processing, vol. 63, no. 3, pp. 780 – 793, 2015.
[11] L. Gan, “Block compressed sensing of natural images,” in Conf. on
Digital Signal Processing, 2007.
[12] S. Mun and J. E. Fowler, “Block compressed sensing of images using
directional transforms,” Proc. IEEE Int. Conf. Image Processing, pp.
3021–3024, 2009.
[13] A. Milzarek and M. Ulbrich, “A semismooth newton method with
multidimensional filter globalization for l1-optimization,” SIAM Journal
on Optimization, vol. 50, pp. 298–333, 2014.
[14] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient
projection for sparse reconstruction: Application to compressed sensing
and other inverse problems,” IEEE Journal of Selected Topics in Signal
Processing, vol. 1, no. 4, pp. 586–597, 2007.
[15] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” Proceedings of IEEE International
Conference on Acoustics, Speech and Signal Processing, pp. 3373–3376,
2008.
[16] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding
algorithm for linear inverse problems,” SIAM Journal on Imaging
Sciences, vol. 2, pp. 183–202, 2009.
[17] Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang, “A fast algorithm for
sparse reconstruction based on shrinkage, subspace optimization, and
continuation,” SIAM Journal on Scientific Computing, vol. 32, pp. 1832–
1857, 2010.
[18] S. Mun and J. E. Fowler, “Motion-compensated compressed-sensing reconstruction for dynamic mri,” Proc. IEEE Int. Conf. Image Processing,
pp. 1006–1010, 2013.
[19] Y. Shi, X. Sun, J. Wang, and B. Yin, “Two dimensional synthesis sparse
model,” in IEEE ICME, 2013.
[20] Y. Rivenson and A. Stern, “Compressed imaging with a separable
sensing operator,” IEEE Signal Processing Letters, vol. 16, no. 6, pp.
449–452, 2009.
[21] Y. Rivenson and A. Stern, “Practical compressive sensing of large
images,” in 16th Int’l Conf. on Digital Signal Processing, 2009.
[22] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles:
Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp.
489–507, 2006.
[23] T. T. Do, G. Lu, N. H. Nguyen, and T. D. Tran, “Fast and efficient compressive sensing using structurally random matrices,” IEEE Transactions
on Signal Processing, vol. 60, pp. 139–154, 2012.
[24] N. Sidiropoulos and A. Kyrillidis, “Multi-way compressed sensing for
sparse low-rank tensors,” IEEE Signal Processing Letters, vol. 19, no.
11, pp. 757–760, 2012.
[25] Q. Li, D. Schonfeld, and S. Friedland, “Generalized tensor compressive
sensing,” IEEE International Conference on Multimedia and Expo, pp.
1–6, 2013.
[26] V. Cevher, S. Becker, and M. Schmidt, “Convex optimization for
big data: Scalable, randomized, and parallel algorithms for big data
analytics,” IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 32–43,
2014.
[27] S. Becker, J. Bobin, and E. Candes, “Nesta: A fast and accurate firstorder method for sparse recovery,” SIAM Journal on Imaging Sciences,
vol. 4, pp. 1–39, 2011.
[28] Z. Wen, W. Yin, H. Zhang, and D. Goldfarb, “On the convergence of
an active-set method for ‘1 minimization,” Optimization Methods and
Software, vol. 27, pp. 1127–1146, 2012.
[29] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F.
Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive
sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91,
2008.
[30] T. Blumensath and M. E. Davies, “Gradient pursuits,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2370–2382, 2008.
[31] R. Chartrand and Yin Wotao, “Iteratively reweighted algorithms for
compressive sensing,” in Proceedings of IEEE International Conference
on Acoustics, Speech and Signal Processing, 2008, pp. 3869–3872.
[32] A. Tewari P. Jain and I. S. Dhillon, “Orthogonal matching pursuit with
replacement,” in NIPS, 2011.
[33] A. van der Sluis and H. A. van der Vorst, “The rate of convergence
of conjugate gradients,” Numerische Mathematik, vol. 48, pp. 543–560,
1986.
[34] E. F. Kaasschieter, “Preconditioned conjugate gradients for solving
singular system,” Journal of Computational and Applied Mathematics,
vol. 24, pp. 265–275, 1988.
[35] C. T. Kelley, “Iterative methods for linear and nonlinear equations,” in
Frontiers in Applied Mathematics, 1995.
Fig. 3. Computation cost vs. signal length for OMP (a), OMPR (b), and SP (c), respectively, under M = N/4 and K = M/4 (p = 0.0625). The precision of CG was set to be exact. (Computational cost in seconds is plotted against log2 N; each panel compares the matrix-based, CG-based, and operator-based versions, e.g., M-OMP, CG-OMP, and O-OMP in panel (a).)
[36] S. Hawe, M. Seibert, and M. Kleinsteuber, “Separable dictionary
learning,” IEEE CVPR, pp. 438–445, 2013.
[37] S.-H. Hsieh, C.-S. Lu, and S.-C. Pei, “2d sparse dictionary learning
via tensor decomposition,” IEEE Global Conference on Signal and
Information Processing, pp. 492–496, 2014.
[38] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for
overcomplete sparse decomposition based on smoothed l0 norm,” IEEE
Transactions on Signal Processing, vol. 57, pp. 289–301, 2009.
| 8 |
arXiv:1703.06460v1 [] 19 Mar 2017
Almost Buchsbaumness of some rings arising from
complexes with isolated singularities
Connor Sawaske
Department of Mathematics
University of Washington
Seattle, WA 98195-4350, USA
[email protected]
March 21, 2017
Abstract
We study properties of the Stanley–Reisner rings of simplicial complexes with
isolated singularities modulo two generic linear forms. Miller, Novik, and Swartz
proved that if a complex has homologically isolated singularities, then its Stanley–
Reisner ring modulo one generic linear form is Buchsbaum. Here we examine the
case of non-homologically isolated singularities, providing many examples in which
the Stanley–Reisner ring modulo two generic linear forms is a quasi-Buchsbaum but
not Buchsbaum ring.
1
Introduction
Many combinatorial, algebraic, and topological statements about polytopes and triangulations of spheres or manifolds have been proven through the study of their Stanley–Reisner
rings. These rings are well-understood, and the translation of their algebraic properties
into combinatorial and topological invariants has a storied and celebrated past. The usefulness of this approach is made apparent by its presence in decades of continued progressive
research (excellent surveys may be found in [Sta96] and [KN16]).
In contrast, the main objects considered in this paper are simplicial complexes with
isolated singularities. A simplicial complex ∆ has isolated singularities if it is pure and the
link in ∆ of every face of dimension at least 1 is Cohen-Macaulay (more precise definitions
will be provided later). Common examples are provided by triangulations of a pinched
torus, the suspension of a manifold, or more generally by pseudomanifolds that fail to
be manifolds at finitely many points (see [GM80] for an in-depth discussion). The gap
between pseudomanifolds and manifolds is well understood from a topological viewpoint,
but there are powerful tools available to the Stanley–Reisner rings of triangulations of
1
manifolds that presently lack any meaningful extension to the world of pseudomanifolds.
For instance, unlike results due to Stanley ([Sta75]) and Schenzel ([Sch81]), we do not
know the Hilbert series of a generic Artinian reduction of the Stanley–Reisner ring of such
a complex, even when considering a triangulation of the suspension of a manifold that is
not a homology sphere.
The central obstruction in extending the pre-existing knowledge of triangulations of
manifolds to the singular case is the Stanley–Reisner ring failing to be Buchsbaum. Miller,
Novik, and Swartz were able to circumvent this roadblock in the case that the singularities
in question are homologically isolated. By showing that a certain quotient of the associated
Stanley–Reisner ring is in fact Buchsbaum, they established some enumerative theorems
related to f - and h-vectors in [MNS11]. Novik and Swartz were then able to calculate the
Hilbert series of a generic Artinian reduction of the Stanley–Reisner ring as well as prove
singular analogs of the Dehn-Sommerville relations in [NS12].
These established results were the inspiration for this paper. However, the ultimate
purpose here is twofold; for one, we intend for the algebraic implications of isolated singularities to become as well understood as the topological ones. To this end, we will
investigate with some precision how the topological properties of singular vertices translate to the algebraic setting of Stanley–Reisner rings. This, in particular, leads to a notion
of generically isolated singularities, defined in Section 2. This notion plays a crucial role
in our main results described below. We will exhibit similarities and differences between
the singular and non-singular cases, examine some special examples, and provide alternate
interpretations of some classical results. In doing so, we will see that these rings have
interesting properties that are worth studying in their own right; these properties provide
the second purpose for this paper. In particular, we present new findings demonstrating
that some quotients of Stanley-Reisner rings of simplicial complexes with isolated singularities are very near to being Buchsbaum, and we characterize when and to what degree
this occurs.
Theorem 1.1. Let ∆ be a connected simplicial complex with isolated singularities on vertex set V, let k be an infinite field, and denote by A the polynomial ring k[x_v : v ∈ V] and by m the irrelevant ideal of A. If θ_1, θ_2 is a generic regular sequence for the Stanley-Reisner ring k[∆], then the local cohomology module H_m^i(k[∆]/(θ_1, θ_2)k[∆]) satisfies
m · H_m^i(k[∆]/(θ_1, θ_2)k[∆]) = 0
for all i if and only if the singularities of ∆ are generically isolated.
Theorem 1.2. In the setting of Theorem 1.1, the canonical maps of graded modules
ϕ^i : Ext_A^i(k, k[∆]/(θ_1, θ_2)k[∆]) → H_m^i(k[∆]/(θ_1, θ_2)k[∆])
are surjective in all degrees except (possibly) 0 if and only if the singularities of ∆ are generically isolated.
The condition m · H_m^i(k[∆]/(θ_1, θ_2)k[∆]) = 0 in Theorem 1.1 establishes a strong necessary condition for a ring to be Buchsbaum (see [GS84, Corollary 1.5]), known as quasi-Buchsbaumness. These rings were first introduced by Goto and Suzuki ([GS84]). As with
Buchsbaum rings, the properties and various characterizations of quasi-Buchsbaum rings
have long been of some interest (see, e.g., [Suz87] and [Yam09]). The usefulness of the
property is evidenced, for example, by its equivalence to Buchsbaumness in some special
cases ([SV86, Corollary 3.6]).
The maps ϕi being surjective in Theorem 1.2 is a complement to the quasi-Buchsbaum
property in that it is incredibly near to one characterization of Buchsbaumness (see Theorem 4.1). As we will see in Corollary 4.7, if Γ is a Buchsbaum (but not Cohen-Macaulay)
complex and ∆ is an arbitrary triangulation of the geometric realization of the suspension of Γ, then k[∆]/(θ1 , θ2 )k[∆] is never Buchsbaum. Though Vogel ([Vog81]) and Goto
([Got84]) provided initial examples of quasi-Buchsbaum but not Buchsbaum rings, here
we exhibit an infinite family of quasi-Buchsbaum rings of arbitrary dimensions and varying depths which fail in a geometrically tangible way to be fully Buchsbaum in only the
slightest sense.
The structure of the paper is as follows. In Section 2 we provide definitions and
foundational results, allowing for some initial computations in Section 3. We prove our
main results in Section 4, and in Section 5 we will use some properties of quasi-Buchsbaum
rings to calculate the Hilbert series of a certain Artinian reduction of the Stanley–Reisner
ring of a complex with isolated singularities. We will close with comments and open
problems in Section 6.
2
Preliminaries
This paper has been largely influenced by the works of Miller, Novik, and Swartz in
[MNS11] and Novik and Swartz in [NS12]. In order to retain consistency with these
references, much of their notation will be adopted for our uses as well.
2.1
Combinatorics and topology
A simplicial complex ∆ with vertex set V is a collection of subsets of V that is closed
under inclusion. The elements of ∆ are called faces, and the dimension of a face F is
dim F := |F | − 1. The 0-dimensional faces in ∆ are called vertices, and the maximal
faces under inclusion are called facets. We say that ∆ is pure if all facets have the same
dimension. The dimension of the complex ∆ is dim ∆ := max{dim F : F ∈ ∆}. For the
remainder of this paper, unless stated otherwise we will assume that a simplicial complex
∆ is pure of dimension d − 1 with vertex set V .
The link of a face F is the subcomplex of ∆ defined by
lk∆ F = {G ∈ ∆ : F ∪ G ∈ ∆, F ∩ G = ∅},
and the contrastar of a face F is defined by
cost∆ F = {G ∈ ∆ : F 6⊂ G}.
In the case that F = {v} is a vertex, we write lk∆ v and cost∆ v instead of lk∆ {v} and
cost∆ {v}, respectively. If W ⊂ V , then the induced subcomplex ∆W is the simplicial
complex {F ∈ ∆ : F ⊂ W }. For 0 ≤ i ≤ d − 1, the complex ∆(i) := {F ∈ ∆ : |F | ≤ i + 1}
is the i-skeleton of ∆.
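As a small computational companion to these definitions (not part of the original text), the following sketch computes links and contrastars from a facet list; the function names are ours and a complex is represented simply as its set of faces.

from itertools import combinations

def faces(facets):
    # All faces (as frozensets, including the empty face) generated by the facets.
    out = {frozenset()}
    for G in facets:
        for k in range(1, len(G) + 1):
            out.update(frozenset(c) for c in combinations(G, k))
    return out

def link(facets, F):
    # lk_Delta(F) = {G in Delta : F ∪ G in Delta and F ∩ G = ∅}
    F, fs = frozenset(F), faces(facets)
    return {G for G in fs if not (F & G) and (F | G) in fs}

def contrastar(facets, F):
    # cost_Delta(F) = {G in Delta : F is not contained in G}
    F = frozenset(F)
    return {G for G in faces(facets) if not F.issubset(G)}

bd_tetrahedron = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}]
print(sorted(map(sorted, link(bd_tetrahedron, {1}))))  # the boundary complex of the triangle {2,3,4}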
Given a field k, denote by H_i(∆) and H^i(∆) the ith (simplicial) homology and cohomology groups of ∆ computed over k, respectively (definitions and further resources may be found in [Hat02]). If F is a face of ∆, we denote by H_F^i(∆) the relative (simplicial) cohomology group H^i(∆, cost_∆ F) (in line with other conventions, we write H_v^i(∆) for H_{{v}}^i(∆)). It will often be helpful to identify H_F^i(∆) with H̃^{i−|F|}(lk_∆ F) (see, e.g., [Grä84, Section 1.3]); in particular, note that H_∅^i(∆) = H̃^i(∆), the reduced cohomology group of ∆. Finally, let
ι_F^i : H_F^i(∆) → H_∅^i(∆)
be the map induced by the inclusion (∆, ∅) → (∆, cost_∆ F).
If H∅0 (∆) = 0, then we call ∆ connected. We say that a face F of ∆ is singular if
HFi (∆) 6= 0 for some i < d − 1. Conversely, if HFi (∆) = 0 for all i < d − 1, we call F a
nonsingular face. We call ∆ Cohen-Macaulay over k if every face of ∆ (including ∅)
is nonsingular, and we call ∆ Buchsbaum (over k) if it is pure and every face aside from
∅ is nonsingular.
If ∆ contains a singular face, then we define the singularity dimension of ∆ to be
max{dim F : F ∈ ∆ and F is singular}. If the singularity dimension of ∆ is 0, then we say
that ∆ has isolated singularities. Such complexes will be our main objects of study. As
a special case, if the images of the maps ιiv : Hvi (∆) → H∅i (∆) are linearly independent as
vector subspaces of H∅i (∆), then we call the singularities of ∆ homologically isolated.
Equivalently, the singularities of ∆ are homologically isolated if the kernel of the sum of maps
Σ_{v∈V} ι_v^i : ⊕_{v∈V} H_v^i(∆) → H_∅^i(∆)
decomposes as the direct sum
⊕_{v∈V} Ker(ι_v^i : H_v^i(∆) → H_∅^i(∆)).
Lastly, we call the singularities of ∆ generically isolated if for sufficiently generic choices of coefficients {α_v : v ∈ V} and {γ_v : v ∈ V}, the two maps θ_α and θ_γ defined by
Σ_{v∈V} α_v ι_v^i : ⊕_{v∈V} H_v^i(∆) → H_∅^i(∆)    and    Σ_{v∈V} γ_v ι_v^i : ⊕_{v∈V} H_v^i(∆) → H_∅^i(∆),
respectively, satisfy
(Ker θ_α) ∩ (Ker θ_γ) = ⊕_{v∈V} Ker(ι_v^i : H_v^i(∆) → H_∅^i(∆))
for all i.
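To make these notions concrete, the sketch below flags the singular faces of a small complex by computing reduced homology of links via ranks of boundary matrices. It is only an illustration: homology is taken over Q (the paper works over an arbitrary infinite field k), the helper names and the bowtie example are ours, and the empty face is not examined.

from itertools import combinations
from sympy import Matrix

def faces_by_dim(facets):
    fd = {}
    for G in facets:
        for k in range(len(G) + 1):
            for c in combinations(sorted(G), k):
                fd.setdefault(k - 1, set()).add(c)
    return {d: sorted(fs) for d, fs in fd.items()}

def reduced_betti(facets):
    # Reduced Betti numbers over Q: b_d = #faces_d - rank(bd_d) - rank(bd_{d+1}),
    # where bd_0 is the augmentation map onto the empty face.
    fd = faces_by_dim(facets)
    top = max(fd)
    rank = {top + 1: 0}
    for d in range(top + 1):
        rows, cols = fd[d - 1], fd[d]
        ridx = {f: i for i, f in enumerate(rows)}
        M = [[0] * len(cols) for _ in rows]
        for j, F in enumerate(cols):
            for i in range(len(F)):
                M[ridx[F[:i] + F[i + 1:]]][j] = (-1) ** i
        rank[d] = Matrix(M).rank() if rows and cols else 0
    return [len(fd[d]) - rank[d] - rank[d + 1] for d in range(top + 1)]

def singular_faces(facets, min_dim=0):
    # Faces F (dim F >= min_dim) whose link has nonzero reduced homology below its
    # top dimension, i.e. H_F^i(Delta) != 0 for some i below d - 1 (Delta assumed pure).
    fd = faces_by_dim(facets)
    bad = []
    for dim_F in range(min_dim, max(fd) + 1):
        for F in fd[dim_F]:
            lk = [tuple(sorted(set(G) - set(F))) for G in facets if set(F).issubset(set(G))]
            lk = [L for L in lk if L]
            if not lk:
                continue
            b = reduced_betti(lk)
            if any(b[j] != 0 for j in range(len(b) - 1)):
                bad.append(F)
    return bad

bowtie = [(1, 2, 3), (1, 4, 5)]            # two triangles glued at the vertex 1
print(singular_faces(bowtie))              # [(1,)]: only the apex is singular
print(singular_faces(bowtie, min_dim=1))   # []: hence the singularities are isolated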
2.2
The connection to algebra
For the remainder of the paper, let k be a fixed infinite field. Define A := k[x_v : v ∈ V] and let m = (x_v : v ∈ V) be the irrelevant ideal of A. If F ⊂ V, let
x_F = ∏_{v∈F} x_v
and define the Stanley-Reisner ideal I_∆ by
I_∆ = (x_F : F ∉ ∆).
The ring k[∆] := A/I∆ is the Stanley-Reisner ring of ∆. We will usually consider k[∆]
as an A-module that is graded with respect to Z by degree.
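A small sketch (with our own function name and conventions) of how the generators x_F of I_∆ can be listed as the minimal non-faces of ∆:

from itertools import combinations

def minimal_nonfaces(facets, vertices=None):
    facets = [frozenset(G) for G in facets]
    V = sorted(set().union(*facets)) if vertices is None else sorted(vertices)
    is_face = lambda F: any(F.issubset(G) for G in facets)
    nonfaces = []
    for k in range(1, len(V) + 1):
        for c in combinations(V, k):
            F = frozenset(c)
            # minimal non-face: not a face, but every proper subset is a face
            if not is_face(F) and all(is_face(F - {v}) for v in F):
                nonfaces.append(tuple(c))
    return nonfaces

print(minimal_nonfaces([(1, 2, 3), (1, 4, 5)]))
# [(2, 4), (2, 5), (3, 4), (3, 5)], so I_Delta = (x2*x4, x2*x5, x3*x4, x3*x5)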
Given any Z-graded A-module M of Krull dimension d, we denote by Mj the collection
of homogeneous elements of M of degree j. A sequence Θ = (θ_1, θ_2, . . . , θ_d) of homogeneous forms in A is called a homogeneous system of parameters (or h.s.o.p.) for M if M/ΘM is a finite-dimensional vector space over k. In the case that each θ_i is linear, we call Θ a linear system of parameters (or l.s.o.p.)
for M. If
(θ1 , . . . , θi−1 )M :M θi = (θ1 , . . . , θi−1 )M :M m
for 1 ≤ i ≤ d for any choice of Θ, then we call M Buchsbaum. The reasoning behind the
geometric definition of a Buchsbaum simplicial complex is made apparent in the following
theorem due to Schenzel ([Sch81]):
Theorem 2.1. A pure simplicial complex ∆ is Buchsbaum over k if and only if k[∆] is a Buchsbaum A-module.
Two collections of objects that are most vital to our results are the A-modules
Ext iA (k, k[∆]) and Hmi (k[∆]). An excellent resource for their construction and basic
properties is [ILL+ 07]. In the case of Stanley-Reisner rings, the Z-graded structure of
Ext iA (k, k[∆]) is provided by [Mi89, Theorem 1]:
Theorem 2.2 (Miyazaki). Let ∆ be a simplicial complex. Then as vector spaces over k,
Ext_A^i(k, k[∆])_j ≅ 0 if j < −1 or j > 0;  H_∅^{i−1}(∆) if j = 0;  ⊕_{v∈V} H_∅^{i−2}(cost_∆ v) if j = −1.
The local cohomology modules Hmi (k[∆]) may be computed as the direct limit of the
Ext iA (A/ml , k[∆]) modules (see [Mi89, Corollary 1]). However, we will also need a thorough
understanding of the A-module structure of the local cohomology modules of k[∆]. The
necessary details are provided by [Grä84, Theorem 2]:
Theorem 2.3 (Gräbe). Let ∆ be a (d − 1)-dimensional simplicial complex with isolated singularities and let 0 ≤ i ≤ d. Then
H_m^i(k[∆])_j ≅ 0 if j > 0;  H_∅^{i−1}(∆) if j = 0;  ⊕_{v∈V} H_v^{i−1}(∆) if j < 0
as k-vector spaces. If α_v ∈ H_v^{i−1}(∆), then the A-module structure on H_m^i(k[∆]) is given by
·x_u : H_m^i(k[∆])_{j−1} → H_m^i(k[∆])_j,    α_v ↦ α_v (if j < 0 and u = v),  α_v ↦ ι_v^{i−1}(α_v) (if j = 0),  α_v ↦ 0 (otherwise).
It is immediate from Theorem 2.3 that a pure simplicial complex ∆ is Buchsbaum if
and only if it is pure and Hmi (k[∆]) is concentrated in degree 0 for all i 6= d. We note
at this point that an A-module M is called quasi-Buchsbaum if m · Hmi (M) = 0 for
all i. Evidently, the above theorem implies that k[∆] is Buchsbaum if and only if it is
quasi-Buchsbaum; this is certainly not the case for general modules, as we will see!
Elements and homomorphisms related to Hmi (k[∆]) will usually be represented and
referenced according to the topological identifications above; these identifications will also
be expanded as we progress. As a motivating example, consider the k-vector space
K^i := ⊕_{v∈V} Ker(ι_v^{i−1} : H_v^{i−1}(∆) → H_∅^{i−1}(∆)).
This space is central to the definition of homological isolation of singularities. Sometimes,
it will be easier to identify K i with its counterpart in local cohomology. If we denote by
Hmi (k[∆])v the direct summand of Hmi (k[∆])−1 corresponding to Hvi−1 (∆) in Theorem 2.3,
then K^i is identified with the submodule
⊕_{v∈V} Ker(·x_v : H_m^i(k[∆])_v → H_m^i(k[∆]))
of H_m^i(k[∆]). In light of how intertwined these two objects are, we will use the notation K^i interchangeably between the two settings. In local cohomology, the equality
K^i = Ker( Σ_{v∈V} ·x_v : ⊕_{v∈V} H_m^i(k[∆])_v → H_m^i(k[∆])_0 )
is equivalent to the singularities of ∆ being homologically isolated. In the same way, if θ_1 and θ_2 are generic linear forms, then the generic isolation of the singularities of ∆ is equivalent to the equality
Ker(·θ_1 : ⊕_{v∈V} H_m^i(k[∆])_v → H_m^i(k[∆])_0) ∩ Ker(·θ_2 : ⊕_{v∈V} H_m^i(k[∆])_v → H_m^i(k[∆])_0) = K^i
holding for all i.
Remark 2.4. In their original statements, the above structure theorems are written with
respect to a Z|V | -grading and are proved by examining the chain complex Hom A (K·l , k[∆]),
where K·l is the Koszul complex of A with respect to ml . When coarsening to a Z-grading,
this chain complex becomes much larger. However, an argument similar to Reisner’s
original proof that Hmi (k[∆])0 ∼
= H∅i−1 (∆) (see [Rei76], pp. 41-42) shows that the only
potentially non-acyclic components of Hom A (K·l , M) under a Z-grading are those also
appearing in the Z|V | -graded complex.
3
Auxiliary calculations
Unless stated otherwise, we will always assume that ∆ is a connected (d − 1)-dimensional
simplicial complex with isolated singularities. Let V be the vertex set of ∆ and set A :=
k[xv : v ∈ V ]. We will always consider R := k[∆] as an A-module. If θ1 , . . . , θd is a
homogeneous system of parameters for ∆, we denote Ri := k[∆]/(θ1 , . . . , θi )k[∆].
Since ∆ is connected, the 1-skeleton ∆(1) of ∆ is Cohen-Macaulay and the depth of
R is at least 2 (see [Hib91, Corollary 2.6]). Hence, there exists a homogeneous system of
parameters θ1 , . . . , θd for ∆ in which θ1 , θ2 are linear and form the beginning of a regular
sequence for R, i.e., θ1 is a non-zero-divisor on R and θ2 is a non-zero-divisor on R1 .
Generically, we may assume that θ1 and θ2 both have non-zero coefficients on all xv ’s.
Unless stated otherwise, we will always work with such a system of parameters for ∆.
Our results primarily depend upon an understanding of the A-modules Hmi (Rj ). We begin by computing their dimensions over k when j = 1 or 2 and discussing some connections
to the topology of ∆.
3.1
Local cohomology
Consider the exact sequence of A-modules
0 → R --·θ_1--> R --π--> R^1 → 0.
By looking at graded pieces of this sequence, there are exact sequences of vector spaces over k of the form
0 → R_{j−1} --·θ_1--> R_j --π--> R^1_j → 0.    (3.1)
These sequences induce the following long exact sequence in local cohomology, where θ_1^{i,j} : H_m^i(R)_{j−1} → H_m^i(R)_j is the map induced by multiplication, π is the map induced by the projection R → R^1, and δ is the connecting homomorphism:
H_m^i(R)_{j−1} --θ_1^{i,j}--> H_m^i(R)_j --π--> H_m^i(R^1)_j --δ--> H_m^{i+1}(R)_{j−1} --θ_1^{i+1,j}--> H_m^{i+1}(R)_j.    (3.2)
In light of Theorem 2.3, we make the following conclusions. When j > 1, all terms are
zero. When j = 1, δ is an isomorphism. When j ≤ −1, each θ1i,j is an isomorphism (all
coefficients of θ1 are non-zero). When j = 0, we obtain the short exact sequence
0 → Coker θ_1^{i,0} → H_m^i(R^1)_0 → Ker θ_1^{i+1,0} → 0.    (3.3)
Hence, as k-vector spaces,
H_m^i(R^1)_j ≅ H_∅^i(∆) if j = 1;  Coker θ_1^{i,0} ⊕ Ker θ_1^{i+1,0} if j = 0;  0 otherwise.    (3.4)
It will be useful to keep in mind that Coker θ1i,0 is identified with a quotient of H∅i−1 (∆)
and that Kerθ1i+1,0 is identified with a submodule of ⊕v∈V Hvi (∆). Although the short exact
sequence (3.3) above is not necessarily split, we will leverage the “geometric” A-module
structures of Coker θ1i,0 and Kerθ1i+1,0 along with (3.3) to further analyze Hmi (R1 ) in Section
4.
Now we repeat this argument; consider the short exact sequence
0 → R^1_{j−1} --·θ_2--> R^1_j --π--> R^2_j → 0    (3.5)
of vector spaces over k, giving rise to the long exact sequence
H_m^i(R^1)_{j−1} --θ_2^{i,j}--> H_m^i(R^1)_j --π--> H_m^i(R^2)_j --δ--> H_m^{i+1}(R^1)_{j−1} --θ_2^{i+1,j}--> H_m^{i+1}(R^1)_j.    (3.6)
As in the previous computation, all terms are zero when j < 0 or j > 2, π is an
isomorphism when j = 0, and δ is an isomorphism when j = 2. When j = 1, we have the
exact sequence
0 → Coker θ_2^{i,1} → H_m^i(R^2)_1 → Ker θ_2^{i+1,1} → 0.    (3.7)
Hence, as vector spaces over k,
H_m^i(R^2)_j ≅ H_∅^{i+1}(∆) if j = 2;  Coker θ_2^{i,1} ⊕ Ker θ_2^{i+1,1} if j = 1;  Coker θ_1^{i,0} ⊕ Ker θ_1^{i+1,0} if j = 0;  0 otherwise.    (3.8)
3.2
Local cohomology: suspensions
We will now briefly consider the special case of suspensions. Suppose ∆ is an arbitrary triangulation of the suspension of a (d−2)-dimensional manifold that is not Cohen-Macaulay,
with suspension points a and b (so that a and b are isolated singularities of ∆). In this
context, the maps ιia and ιib from Section 2 are isomorphisms. If g i is a generator for H∅i (∆),
denote gai := (ιia )−1 (g) ∈ Hai (∆) and gbi := (ιib )−1 (g) ∈ Hbi (∆). As usual, we will consider
these generators interchangeable with their corresponding elements in Hmi+1 (R).
Examining the sequence in (3.3) for this special case, suppose θ_1 = Σ_{v∈V} x_v and θ_2 = Σ_{v∈V} c_v x_v with c_a ≠ c_b and c_a, c_b ≠ 0. Given g^{i−1} a generator of H_∅^{i−1}(∆), the map θ_1^{i,0} acts via θ_1^{i,0}(g_a^{i−1}) = θ_1^{i,0}(g_b^{i−1}) = g^{i−1}. Hence, θ_1^{i,0} is a surjection whose kernel is generated as a direct sum by elements of the form (g_a^{i−1} − g_b^{i−1}). In particular, H_m^i(R^1)_0 ≅ Ker(θ_1^{i+1,0}) ≅ H_∅^i(∆). In summary,
H_m^i(R^1)_j ≅ H_∅^i(∆) if j = 1;  H_∅^i(∆) if j = 0;  0 otherwise,
under the aforementioned isomorphisms.
Now repeat the process above using the sequence in (3.7). If g i is a generator of H∅i (∆),
then θ2i+1,0 acts on Hmi+1 (R)−1 via θ2i+1,0 (gai ) = ca g i and θ2i+1,0 (gbi ) = cb g i . In particular,
identifying Hmi+1 (R1 )0 with the subspace Ker(θ1i+1,0 ) of Hmi+1 (R)1 , the induced action of
i+1,1
is
θ2i+1,1 is given by θ2i+1,1 (gai − gbi ) = (ca − cb )g i ∈ H∅i (∆) ∼
= Hmi+1(R1 )1 . That is, θ2
an isomorphism as long as ca 6= cb (note also that the singularities of ∆ are generically
isolated); this means that Hmi (R2 )1 = 0. In summary:
H_m^i(R^2)_j ≅ H_∅^{i+1}(∆) if j = 2;  H_∅^i(∆) if j = 0;  0 otherwise.    (3.9)
Since H_m^i(R^2)_1 = 0, it is immediate that m · H_m^i(R^2) = 0 for all i; that is, R^2 is quasi-Buchsbaum. The specific
choice of θ1 was made for the ease of calculation. For sufficiently generic choices of θ1 and
θ2 , the same isomorphisms hold. In particular, it is evident that the singularities of ∆ are
generically isolated.
4
Results
4.1
Buchsbaumness
We now move on to showing whether or not certain modules are Buchsbaum. For this,
the following theorem ([SV86, Theorem I.3.7]) is vital.
Theorem 4.1. Let k be an infinite field, with M a Noetherian graded A-module and
d := dim M > 0. Then M is a Buchsbaum module if and only if the natural maps
ϕiM : Ext iA (k, M) → Hmi (M) are surjective for i < d.
9
Thus far we know some limited information about Ext iA (k, Rj ) and Hmi (Rj ) in terms
of the simplicial cohomology of subcomplexes of ∆. Thankfully, Miyazaki has furthered
this correspondence with an explicit description of ϕiR in [Mi91, Corollary 4.5].
Theorem 4.2. The canonical map ϕ_R^i : Ext_A^i(k, R) → H_m^i(R) corresponds to the identity map on H_∅^{i−1}(∆) in degree zero and to the direct sum of maps
⊕_{v∈V} ϕ_v^i : H_∅^{i−2}(cost_∆ v) → H_∅^{i−2}(lk_∆ v)
induced by the inclusions in degree −1.
For our purposes, an alternate expression for ϕiR turns out to be even more powerful
than the one above. For some fixed v, consider the long exact sequence in simplicial
cohomology for the triple (∆, cost∆ v, ∅). In our notation, it is written as
· · · → H_∅^{i−2}(∆) → H_∅^{i−2}(cost_∆ v) --δ--> H_v^{i−1}(∆) --ι_v^{i−1}--> H_∅^{i−1}(∆) → · · · .    (4.1)
Under the isomorphism Hvi−1 (∆) ∼
= H∅i−2 (lk∆ v), a quick check shows that the connecting
homomorphism δ in this sequence is equivalent to ϕiv in the theorem above (this is also
made apparent in examining its proof). On the other hand, if we consider the cohomology
modules above as components of Hmi (R) as in Theorem 2.3, then the ιvi−1 map in this
sequence is the same as the “multiplication by xv ” map on Hmi (R)v . These equivalences
along with the exactness of (4.1) allow us to deduce the following proposition.
Proposition 4.3. The image of the H_∅^{i−2}(cost_∆ v) component of Ext_A^i(k, R)_{−1} under the canonical map ϕ_R^i : Ext_A^i(k, R) → H_m^i(R) is precisely the kernel of ι_v^{i−1} : H_v^{i−1}(∆) → H_∅^{i−1}(∆). In particular,
(Im ϕ_R^i)_{−1} = K^i
through the identifications
(Im ϕ_R^i)_{−1} = ⊕_{v∈V} Im ϕ_v^i = ⊕_{v∈V} Ker ι_v^{i−1} = K^i.
Note that if θ is any linear form then the proposition immediately implies that
(Im ϕiR )−1 ⊆ Kerθi,0 . Examining this containment more closely provides a characterization of the Buchsbaumness of R1 . Before stating this characterization, we note that one
commutative diagram in particular will be used repeatedly in proving many of our results.
Here we explain its origin.
Construction 4.4. The short exact sequence (3.1) induces the following commutative
diagram of vector spaces with exact rows:
Ext iA (k, R)j−1
θ1i,j
ϕiR
Hmi (R)j−1
Ext iA (k, R)j
π
Hmi (R)j
δ
ϕi
ϕiR
θ1i,j
Ext iA (k, R1 )j
Hmi (R1 )j
10
θ1i+1,j
ϕi+1
R
R1
π
Ext i+1
A (k, R)j−1
δ
Hmi+1 (R)j−1
Ext i+1
A (k, R)j
ϕi+1
R
θ1i+1,j
Hmi (R)j .
Since θ1 is the beginning of a regular sequence for R, it acts trivially on Ext iA (A/ml , R)
for all i and j (see, e.g., [HH11, p. 272]). By the commutativity of the rightmost square,
i+1
the image of Ext i+1
must lie in Kerθ1i+1,j . On the other hand, the
A (k, R)j−1 under ϕR
exactness of the bottom row tells us that π : Hmi (R)j → Hmi (R1 )j factors through the
projection Hmi (R)j → Coker θ1i,j . As this does not alter the commutativity of the diagram,
we can now alter it so that the top and bottom rows are both short exact sequences as
follows
0
Ext iA (k, R)j
Ext iA (k, R1 )j
Ext i+1
A (k, R)j−1
ϕi+1
R
ϕi
R1
0
Coker θ1i,j
0
Kerθ1i+1,j
Hmi (R1 )j
0,
where the left vertical map is the composition of ϕiR with the projection Hmi (R)j →
Coker θ1i,j . We can repeat this construction starting with the short exact sequence (3.5),
yielding the same diagram as above with R, R1 , and θ1 replaced by R1 , R2 , and θ2 ,
respectively.
Our first use of this construction will be in proving the following proposition (an alternate proof of the “if” direction also appears in [NS12, Lemma 4.3(2)]).
Proposition 4.5. If ∆ has isolated singularities, then R1 is Buchsbaum if and only if the
singularities of ∆ are homologically isolated.
Proof. Construction 4.4 provides the following diagram:
0
Ext iA (k, R)0
Ext iA (k, R1 )0
Ext i+1
A (k, R)−1
ϕi+1
R
ϕi
R1
0
Coker θ1i,0
0
Kerθ1i+1,0
Hmi (R1 )0
0.
By definition, if ∆ contains singularities that are not homologically isolated then there
exists some i such that K i+1 ( Kerθ1i+1,0 . By Proposition 4.3, this implies that the ϕi+1
R
map appearing in the diagram above is not a surjection. Since ϕiR : Ext iA (k, R)0 → Hmi (R)0
is an isomorphism, the left vertical map is always a surjection. Then the snake lemma
applied to this diagram shows that ϕiR1 is not a surjection, so R1 is not Buchsbaum by
Theorem 4.1.
Conversely, if the singularities of ∆ are homologically isolated, then Kerθ1i+1,0 = K i+1
for all i, so that the ϕi+1
R map in the diagram is always a surjection. The snake lemma now
i
shows that ϕR1 is a surjection in degree 0. In degree 1, we only need to raise the degrees
in the previous diagram by one. The diagram simplifies to
0
Ext iA (k, R1 )1
Ext i+1
A (k, R)0
ϕi
ϕi+1
R
R1
0
0
Hmi (R1 )1
Hmi+1 (R)0
11
0,
because Ext_A^i(k, R)_1 = Coker θ_1^{i,1} = 0. Since ϕ_R^{i+1} is an isomorphism in degree 0, this completes the proof.
We have now seen that spaces with homologically isolated singularities are “close” to
being Buchsbaum in that R1 is Buchsbaum. It is natural to ask whether descending to R2
could always provide a Buchsbaum module, even in the non-homologically-isolated case.
This is not true, as exhibited by the following proposition.
Proposition 4.6. Suppose ∆ is a space with non-homologically-isolated singularities and
that there exists i such that H∅i−1 (∆) = 0, while Kerθ1i+1,0 6= 0 and ιiv is injective for all v.
Then R2 is not Buchsbaum.
Proof. The hypotheses combined with the exact sequence in (4.1) show that H∅i−1 (cost v) =
i,0
i−1
0, so that Ext i+1
A (k, R)−1 = 0. Also, Coker θ1 = 0 because H∅ (∆) = 0. Then the
diagram of Construction 4.4 can be filled in as follows
0
Ext iA (k, R)0
Ext iA (k, R1 )0
0
0
Kerθ1i+1,0
0.
ϕi
R1
0
Hmi (R1 )0
0
This demonstrates that ϕiR1 is the zero map in degree 0. Now repeat this argument using R1
i+1
1
and R2 instead of R and R1 . In this case, Ext i+1
A (k, R )−1 = 0 because Ext A (k, R)−1 = 0.
Since Hmi (R1 )−1 = 0, the diagram provided is
Ext iA (k, R1 )0
∼
ϕi
ϕi
R1
Hmi (R1 )0
Ext iA (k, R2 )0
R2
∼
Hmi (R2 )0 ,
showing that ϕiR2 is the zero map in degree 0 as well. Since Hmi (R2 )0 6= 0, Theorem 4.1
completes the proof.
The hypotheses of this proposition may seem fairly restrictive, but that is not necessarily the case. In fact, choosing i to be the least i such that H∅i (∆) 6= 0 when ∆ is a
triangulation of the suspension of a manifold that is not a homology sphere will always do
the trick, yielding the following Corollary:
Corollary 4.7. If ∆ is the suspension of a Buchsbaum complex that is not Cohen-Macaulay,
then R2 is not Buchsbaum.
12
4.2
Almost Buchsbaumness
Although these R2 modules are not guaranteed to be Buchsbaum when ∆ has nonhomologically-isolated singularities, they are “close” to being Buchsbaum in some interesting ways and share some of the same properties. The examples above in which R2 is not
Buchsbaum fail the criterion of Theorem 4.1 in the degree 0 piece of Hmi (R2 ). Theorem
1.2 asserts that (in the generically isolated case) this is the only possible obstruction to
satisfying Theorem 4.1, and we now present its proof.
Proof of Theorem 1.2. : By the calculations in Section 3, we only need to verify surjectivity
in degrees 1 and 2. The last diagram in the proof of Proposition 4.5 holds regardless
of whether or not the singularities of ∆ are homologically isolated. Hence, ϕiR1 is an
isomorphism in degree 1. Construction 4.4 then induces the diagram
0
Ext iA (k, R2 )2
1
Ext i+1
A (k, R )1
ϕi+1
1
ϕi
R2
R
Hmi (R2 )2
0
0
Hmi+1 (R1 )1
0,
so that ϕiR2 is always an isomorphism in degree 2.
It remains to show that ϕiR2 is a surjection in degree 1. As usual, we have the following
diagram.
1
Ext i−1
A ( k, R ) 1
0
2
Ext i−1
A (k, R )1
Ext iA (k, R1 )0
0
Kerθ2i,1
0.
ϕi−1
2
R
Coker θ2i−1,1
0
Hmi−1 (R2 )1
According to the previous paragraph, the left vertical map must be a surjection. If we can
show that the right vertical map is a surjection as well, then the proof will be complete.
The right map is obtained by restricting the range of ϕiR1 to the subspace Kerθ2i,1 of
Hmi (R1 )0 . Note that the failure of ϕiR1 to be a surjection in this degree is precisely what
made R1 fail to be Buchsbaum in the non-homologically-isolated case.
Now consider a larger commutative diagram, all of whose rows are exact. All vertical
maps are those induced by the action of θ2 , and all maps from the back “panel” to the
front are induced by the canonical maps ϕiRj .
Ext iA (k, R)0
0
Coker θ1i,0
0
0
Ext i+1
A (k, R)−1
Kerθ1i+1,0
Hmi (R1 )0
Ext iA (k, R)1
0
0
Ext iA (k, R1 )0
Ext iA (k, R1 )1
Hmi (R1 )1
0
Ext i+1
A (k, R)0
Hmi+1 (R)0
13
0
0
0.
Now consider applying the snake lemma to both the front panel and the back panel. Note
that the vertical maps on the back panel are all identically zero, since θ2 acts trivially on
all of the modules there. Denote by τ the restriction of θ2i+1,0 to Kerθ1i+1,0 , appearing as
the right vertical map in the front panel of the diagram. By the naturality of the sequence
induced by the snake lemma, we obtain maps from the “top” panel as follows:
0
Ext iA (k, R)0
Ext iA (k, R1 )0
Ext i+1
A (k, R)−1
0
0
Coker θ1i,0
Kerθ2i,0
Kerτ
0.
Since the left vertical map is a surjection, we will be done if we can show that the right
vertical map is a surjection. However, Kerτ is simply
(Kerθ1i+1,0 ) ∩ (Kerθ2i+1,2 ) := Li+1 .
Note that the singularities of ∆ are generically isolated if and only if Li+1 = K i+1 . On
the other hand, K i+1 is precisely the image of Ext iA (k, R)−1 under ϕiR , completing the
proof.
The intersection Li+1 above is also central to the proof of Theorem 1.1, which we now
present as well.
Proof of Theorem 1.1. Once more, since Hmi (R2 ) may only have non-zero components in
the graded degrees 0, 1, and 2, we only need to check that m acts trivially on the degree 0
and degree 1 parts. We begin in degree 1. Let αv , βv , and γv all denote the map induced
by multiplication by xv in the diagram below.
0
Coker θ2i,1
αv
0
0
Hmi (R2 )1
βv
Hmi (R2 )2
Kerθ2i+1,1
0
γv
Hmi+1(R1 )1
0.
The snake lemma provides the exact sequence
0 → Coker θ2i,1 → Kerβv → Kerγv → 0.
Comparing this to the top row of the previous diagram, if we can show that Kerγv is all
of Kerθ2i+1,1 , then we may conclude that Kerβv is all of Hmi (R2 )1 . Note that Kerθ2i+1,1 is
a submodule of Hmi+1(R1 )0 ; to study this submodule, consider the following diagram with
exact rows.
0
Coker θ1i+1,0
Hmi+1 (R1 )0
θ2i+1,1
θ2i+1,1
0
0
Hmi+1 (R1 )1
14
Kerθ1i+2,0
0
τ
Hmi+2 (R)0
0.
As in the previous proof, the rightmost vertical map τ is the restriction of θ2i+2,0 to
Kerθ1i+2,0 . Once more, note that
Kerτ = (Kerθ1i+2,0 ) ∩ (Kerθ2i+2,2 ) := Li+2 .
Through another application of the snake lemma, we get a short exact sequence fitting
into the top row of the following diagram
Coker θ1i+1,0
0
Kerθ2i+1,1
Hmi+1 (R1 )1
0
0
·xv
·xv
·xv
0
Li+2
Hmi+2 (R)0
0.
So, it now remains to show that the rightmost map is zero. But xv acts trivially on Li+2
for all i and for all v if and only if
Li+2 = K i+2 ,
i.e., if and only if the singularities of ∆ are generically isolated.
Now we show that m · Hmi (R2 )0 = 0, independent of the type of isolation of the singularities of ∆. Consider the diagram below:
Hmi (R1 )0
0
Hmi (R2 )0
·xv
Hmi (R1 )0
θ2i,1
0
·xv
Hmi (R1 )1
Hmi (R2 )1
Hmi+1 (R1 )0 .
From the exactness of the rows of this diagram, we can conclude that xv · Hmi (R2 )0 = 0 if
xv · Hmi (R1 )0 ⊆ θ2 · Hmi (R1 )0 .
Generically, all coefficients of θ2 are non-zero. Combining this with the structure of Hmi (R)
outlined in Theorem 2.3, it is immediate that
xv · Hmi (R)−1 ⊆ θ2 · Hmi (R)−1 .
The following diagram now completes the proof.
0
Hmi (R1 )1
δ
Hmi+1 (R)0
·xv
Hmi (R)0
·xv
Hmi (R1 )0
δ
Hmi+1 (R)−1
·θ2
0
0
Hmi (R1 )1
Hmi+1 (R)0
·θ2
δ
Hmi+1 (R)0
15
0
Example 4.8. Combining Theorem 1.1 with Corollary 4.7 provides an infinite family of
interesting examples of rings with some prescribed properties that are quasi-Buchsbaum
but not Buchsbaum. In particular, let M be a d-dimensional manifold that is not a
homology sphere and let Γ be an arbitrary triangulation of M. Suppose further that
max{i : Γ(i) is Cohen-Macaulay} = r + 1.
Equivalently, the depth of k[Γ] is r +1. If we set Γ′ to be the join of Γ with two points, then
Γ′ is a triangulation of the suspension of M and the depth of k[Γ′ ] is r + 2. Since the depth
of the Stanley–Reisner ring is a topological invariant ([Mun84, Theorem 3.1]), if ∆ is an
arbitrary triangulation of the suspension of M then R2 has depth r. Since the singularities
of ∆ are generically isolated, R2 is a quasi-Buchsbaum ring of Krull dimension d and depth
r that is not Buchsbaum; furthermore, the canonical maps ϕiR2 : Ext iA (k, R2 ) → Hmi (R2 )
are surjections in all degrees except 0.
4.3
Another surjection
By [SV86, Proposition I.3.4], the quasi-Buchsbaum property of some R2 is equivalent to
the fact that every homogeneous system of parameters of R2 contained in m2 is a weakly
regular sequence. In light of the typical definition of the Buchsbaum property in terms
of l.s.o.p.’s being weakly regular sequences (see [SV86]) along with the characterization
appearing in Theorem 4.1 by surjectivity of the maps ϕiM : Ext iA (k, M) → Hmi (M), what
happens when we consider the natural maps Ext iA (A/m2 , R2 ) instead? Our next result
establishes another measure of the gap between the structure of R2 and the Buchsbaum
property:
Proposition 4.9. Suppose ∆ has isolated singularities. Then the canonical maps ψRi 2 :
Ext iA (A/m2 , R2 ) → Hmi (R2 ) are surjective.
Proof. Once more, surjectivity needs only to be demonstrated in degrees 0, 1, and 2. We
will begin with the degree 2 piece. The exact sequence (3.1) with j = 1 and l = 2 gives
rise to the commutative diagram below, where the horizontal maps are isomorphisms.
Ext iA (A/m2 , R1 )1
δ
ψi
i+1
ψR
R1
Hmi (R1 )1
2
Ext i+1
A (A/m , R)0
δ
Hmi+1 (R)0
2
i+1
By [Mi91, Corollary 4.5], the map ψRi+1 : Ext i+1
A (A/m , R)0 → Hm (R)0 is equivalent to
i+1
i
the identity map on H∅ (∆). Hence, ψR1 is an isomorphism in degree 1. By the same
argument, the exact sequence (3.5) and the corresponding commutative diagram show
that ψRi 2 : Ext iA (A/m2 , R2 )2 → Hmi (R2 )2 is an isomorphism.
16
The arguments for further graded pieces are similar. First consider the following diagram with exact rows again induced by (3.1).
0
Ext iA (A/m2 , R)0
2
Ext i+1
A (A/m , R)−1
Ext iA (A/m2 , R1 )0
ψi
i
ψR
i+1
ψR
R1
Hmi (R1 )0
Hmi (R)0
0
Hmi+1 (R)−1
By [Mi89, Corollary 2], the left and right vertical maps are isomorphisms. Exactness then
implies that the middle vertical map is surjective. So, ψRi 1 is an isomorphism in degree 1
and a surjection in degree 0. For the surjectivity of ψRi 2 in degree 1, consider the following
commutative diagram induced by (3.5) with exact rows.
0
Ext iA (A/m2 , R1 )1
2
1
Ext i+1
A (A/m , R )0
Ext iA (A/m2 , R2 )1
ψi
ψi+1
1
ψi
R1
R2
Hmi (R1 )1
0
R
Hmi (R2 )1
Hmi+1 (R1 )0
We have just demonstrated that the left and right vertical maps are at least surjections.
Again by exactness, we now have that ψRi 2 is a surjection in degree 1. Now consider one
final commutative diagram.
Ext iA (A/m2 , R1 )0
Ext iA (A/m2 , R2 )0
ψi
ψi
R1
R2
Hmi (R1 )0
Hmi (R2 )0
From Section 3, we know that the bottom map is an isomorphism. Since ψRi 1 is a surjection
in degree zero, ψRi 2 must be as well.
5
An enumerative theorem
Although R2 is not Buchsbaum, the quasi-Buchsbaum property does allow for a computation of the Hilbert series of the generic Artinian reduction of R by a h.s.o.p. of a particular
type. Let ∆ have generically isolated singularities and say Θ = θ1 , . . . , θd is a homogeneous
system of parameters for ∆ such that θ1 , θ2 is a linear regular sequence, while θ3 , . . . , θd
are quadratic forms. For 2 ≤ i ≤ d − 1, there are exact sequences
·θi+1
i
−−−
→ Rji → Ri+1 → 0.
0 → (0 :Ri θi+1 )j−2 → Rj−2
Since R2 is quasi-Buchsbaum, the sequence θ3 , . . . , θd is a weakly regular sequence by
[SV86, Proposition I.2.1(ii)]. Furthermore, the proof of the proposition shows that (0 :R2
17
θ3 ) = Hm0 (R2 ). On the other hand, [Suz87, Theorem 3.6] states that Ri is quasi-Buchsbaum
for 2 ≤ i ≤ d − 1. Hence, the sequence above can be re-written as
i
0 → Hm0 (Ri )j−2 → Rj−2
→ Rji → Ri+1 → 0.
for 2 ≤ i ≤ d − 1. If Hilb(M; t) denotes the Hilbert series of a Z-graded A-module M,
then these exact sequences imply
Hilb(Ri+1 ; t) = (1 − t2 ) Hilb(Ri ; t) + t2 Hilb(Hm0 (Ri ); t).
A standard calculation then shows
d
Hilb(R ; t) = (1 + t)
d−2
d
(1 − t) Hilb(R; t) +
d−1
X
i=2
t2 (1 − t2 )d−1−i Hilb(Hm0 (Ri ); t) . (5.1)
P
The first term reduces to (1 + t)d−2 di=0 hi (∆)ti , following [Sta75]. To analyze the sum,
[Suz87, Lemma 3.5] provides the exact sequence
0 → Hmj (Ri )k → Hmj (Ri+1 )k → Hmj+1(Ri )k−2 → 0
for 2 ≤ i ≤ d − 2 and 0 ≤ j ≤ d − i − 2. So, as vectors spaces over k, for 2 ≤ i ≤ d − 1
there are isomorphisms
i−2
M
M j 2
Hm (R )−2j .
Hm0 (Ri )j ∼
=
j=0
(i−2
j )
That is,
Hilb(Hm0 (Ri ); t)
i−2
X
i − 2 2j
t Hilb(Hmj (R2 ); t).
=
j
j=0
(5.2)
Now define
µ^i = dim_k H_m^i(R^2)_0 = dim_k (Coker θ_1^{i,0} ⊕ Ker θ_1^{i+1,0}),
ν^i = dim_k H_m^i(R^2)_1 = dim_k (Coker θ_2^{i,1} ⊕ Ker θ_2^{i+1,1}),
and
β_∅^i(∆) = dim_k H_∅^i(∆),
so
Hilb(H_m^i(R^2); t) = µ^i + ν^i t + β_∅^{i+1}(∆) t^2.
Combining this equality with equations (5.1) and (5.2) implies the following theorem describing Hilb(Rd ; t).
18
Theorem 5.1. If q = 2p is even, then
dimk (Rqd )
X
p−1
q
X
pβ∅k (∆)
d−2
k
k
p−1 d − 2
.
(−1) µ +
hi (∆) + (−1)
=
d−1−p
p
q−i
i=0
k=0
If q = 2p + 1 is odd, then
dimk (Rqd )
X
p−1
q
X
d−2
p−1 d − 2
(−1)k ν k .
hi (∆) + (−1)
=
p
q
−
i
i=0
k=0
Remark 5.2. Note that µ^i is actually a topological invariant of ∆. If ‖∆‖ is the geometric realization of ∆ and Σ is the set of isolated singularities of ∆, then µ^i = dim_k H_∅^{i−1}(‖∆‖ \ Σ) (see [NS12, Theorem 4.7]). At present there is no similar description for ν^i, as it is not clear how to trace the geometry of ∆ all the way through to H_m^i(R^2)_1 in such a precise manner.
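For completeness, the h-numbers h_i(∆) appearing in these Hilbert series expressions can be computed from the f-vector by the standard transformation h_j = Σ_{i=0}^{j} (−1)^{j−i} C(d−i, j−i) f_{i−1}. The small sketch below (with our own function names) does this from a facet list; it only supplies the combinatorial ingredient, not µ^i or ν^i.

from math import comb
from itertools import combinations

def f_vector(facets):
    # Returns (f_{-1}, f_0, ..., f_{d-1}) of the complex generated by the facets.
    faces = {frozenset()}
    for G in facets:
        for k in range(1, len(G) + 1):
            faces.update(frozenset(c) for c in combinations(G, k))
    d = max(len(F) for F in faces)
    f = [0] * (d + 1)
    for F in faces:
        f[len(F)] += 1      # f[i] counts the (i-1)-dimensional faces
    return f

def h_vector(facets):
    f = f_vector(facets)
    d = len(f) - 1
    return [sum((-1) ** (j - i) * comb(d - i, j - i) * f[i] for i in range(j + 1))
            for j in range(d + 1)]

print(h_vector([(1, 2), (2, 3), (1, 3)]))   # boundary of a triangle: [1, 1, 1]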
6
Comments
There are many possible abstractions of the results that have been presented. Perhaps the
most immediate consideration is in introducing singularities of dimension greater than 0.
In this case, the structure of Hmi (k[∆]) outlined in Theorem 2.3 becomes more involved
and hinders the calculations of Section 3. For instance, if ∆ contains singular faces even
of dimension 1, then θ1i,j will not, in general, be an isomorphism in degrees j < 0. This
implies that there is some i such that Hmi (k[∆]/θ1 k[∆])j is non-zero for infinitely many
values of j, i.e., k[∆]/θ1 k[∆] does not have finite local cohomology. In fact, Miller, Novik,
and Swartz classified when this is the case for quotients of k[∆] by arbitrarily many generic
linear forms:
Theorem 6.1. ([MNS11, Theorem 2.4]) A simplicial complex ∆ is of singularity dimension at most m − 1 if and only if k[∆]/(θ1 , . . . , θm )k[∆] has finite local cohomology.
In the case that m = 1, we know that k[∆]/θ1 k[∆] not only has finite local cohomology,
but it is also Buchsbaum if and only if the singularities of ∆ are homologically isolated.
So, one may pose the following question.
Question 6.2. If ∆ is of singularity dimension m−1, is there an analog of the homological
isolation property for singularities of arbitrary dimension classifying when k[∆]/(θ1 , . . . , θm )k[∆]
is Buchsbaum?
A possible property could be that all pairs of images of maps of the form H i (∆, cost∆ (F ∪
{u})) → H i (∆, cost∆ F ) and H i (∆, cost∆ (F ∪ {v})) → H i (∆, cost∆ F ) occupy linearly independent subspaces of H i (∆, cost∆ F ) for all faces F and all vertices u and v in the
appropriate dimensions.
19
On the other hand, when m = 1 we know that k[∆]/θ1 k[∆] has finite local cohomology
and that k[∆]/(θ1 , θ2 )k[∆] is quasi-Buchsbaum if and only if the singularities of ∆ are
generically isolated. This leads to our next question.
Question 6.3. If ∆ is a simplicial complex of singularity dimension m − 2 and k[∆] is
of depth at least m with θ1 , . . . , θm a regular sequence on k[∆], is there an analog of the
generic isolation property classifying when k[∆]/(θ1 , . . . , θm )k[∆] is quasi-Buchsbaum?
Again, one candidate property would be that given m + 1 generic linear forms, the
pairwise intersections
Ker θ_i^{l,0} ∩ Ker θ_j^{l,0}
are all trivially equal to K l .
Lastly, we have been able to provide many examples of complexes ∆ with isolated
singularities in which k[∆]/(θ1 , θ2 )k[∆] is quasi-Buchsbaum but not Buchsbaum. In light
of these examples, we present the following conjecture.
Conjecture 6.4. In the setting of Theorem 1.1, if the singularities of ∆ are not homologically isolated then k[∆]/(θ1 , θ2 )k[∆] is never Buchsbaum.
Acknowledgements
The author would like to thank Isabella Novik for proposing a problem that led to the
results in this paper. The author is also grateful to Satoshi Murai for pointing out an error
in an earlier version, leading to the notion of generic isolation of singularities.
References
[GM80] Mark Goresky and Robert MacPherson. Intersection homology theory. Topology,
19(2):135–162, 1980.
[Got84] Shiro Goto. A note on quasi-Buchsbaum rings. Proc. Amer. Math. Soc., 90(4):511–
516, 1984.
[Grä84] Hans-Gert Gräbe. The canonical module of a Stanley-Reisner ring. J. Algebra,
86(1):272–281, 1984.
[GS84] Shiro Goto and Naoyoshi Suzuki. Index of reducibility of parameter ideals in a
local ring. J. Algebra, 87(1):53–88, 1984.
[Hat02] Allen Hatcher. Algebraic topology. Cambridge University Press, Cambridge, 2002.
[HH11] Jürgen Herzog and Takayuki Hibi. Monomial ideals, volume 260 of Graduate Texts
in Mathematics. Springer-Verlag London, Ltd., London, 2011.
20
[Hib91] Takayuki Hibi. Quotient algebras of Stanley-Reisner rings and local cohomology.
J. Algebra, 140(2):336–343, 1991.
[ILL+ 07] Srikanth B. Iyengar, Graham J. Leuschke, Anton Leykin, Claudia Miller, Ezra
Miller, Anurag K. Singh, and Uli Walther. Twenty-four hours of local cohomology,
volume 87 of Graduate Studies in Mathematics. American Mathematical Society,
Providence, RI, 2007.
[KN16] Steven Klee and Isabella Novik. Face enumeration on simplicial complexes. In
Recent trends in combinatorics, volume 159 of IMA Vol. Math. Appl., pages 653–
686. Springer, [Cham], 2016.
[MNS11] Ezra Miller, Isabella Novik, and Ed Swartz. Face rings of simplicial complexes
with singularities. Math. Ann., 351(4):857–875, 2011.
[Mi89] Mitsuhiro Miyazaki. Characterizations of Buchsbaum complexes. Manuscripta
Math., 63(2):245–254, 1989.
[Mi91] Mitsuhiro Miyazaki. On the canonical map to the local cohomology of a StanleyReisner ring. Bull. Kyoto Univ. Ed. Ser. B, (79):1–8, 1991.
[Mun84] James R. Munkres. Topological results in combinatorics. Michigan Math. J.,
31(1):113–128, 1984.
[NS12] Isabella Novik and Ed Swartz. Face numbers of pseudomanifolds with isolated
singularities. Math. Scand., 110(2):198–222, 2012.
[Rei76] Gerald Allen Reisner. Cohen-Macaulay quotients of polynomial rings. Advances
in Math., 21(1):30–49, 1976.
[Sch81] Peter Schenzel. On the number of faces of simplicial complexes and the purity of
Frobenius. Math. Z., 178(1):125–142, 1981.
[Sta75] Richard P. Stanley. The upper bound conjecture and Cohen-Macaulay rings. Studies in Appl. Math., 54(2):135–142, 1975.
[Sta96] Richard P. Stanley. Combinatorics and commutative algebra, volume 41 of Progress
in Mathematics. Birkhäuser Boston, Inc., Boston, MA, second edition, 1996.
[Suz87] Naoyoshi Suzuki. On quasi-Buchsbaum modules. An application of theory of FLCmodules. In Commutative algebra and combinatorics (Kyoto, 1985), volume 11 of
Adv. Stud. Pure Math., pages 215–243. North-Holland, Amsterdam, 1987.
[SV86] J. Stückrad and W. Vogel. Buchsbaum rings and applications, volume 21 of Mathematische Monographien [Mathematical Monographs]. VEB Deutscher Verlag der
Wissenschaften, Berlin, 1986. An interaction between algebra, geometry, and topology.
21
[Vog81] Wolfgang Vogel. A non-zero-divisor characterization of Buchsbaum modules.
Michigan Math. J., 28(2):147–152, 1981.
[Yam09] Kikumichi Yamagishi. Buchsbaumness of certain generalization of the associated
graded modules in the equi-I-invariant case. J. Algebra, 322(8):2861–2885, 2009.
22
| 0 |
Performance Analysis of Dynamic Channel
Bonding in Spatially Distributed
High Density WLANs
arXiv:1801.00594v1 [cs.NI] 2 Jan 2018
Sergio Barrachina-Muñoz, Francesc Wilhelmi, Boris Bellalta
Abstract—In this paper we discuss the effects on throughput and fairness of dynamic channel bonding (DCB) in spatially distributed
high density (HD) wireless local area networks (WLANs). First, we present an analytical framework based on continuous time Markov
networks (CTMNs) for depicting the phenomena given when applying different DCB policies in spatially distributed scenarios, where
nodes are not required to be within the carrier sense of each other. Then, we assess the performance of DCB in HD IEEE 802.11ax
WLANs by means of simulations. Regarding spatial distribution, we show that there may be critical interrelations among nodes – even
if they are located outside the carrier sense range of each other – in a chain reaction manner. Results also show that, while always
selecting the widest available channel normally maximizes the individual long-term throughput, it often generates unfair scenarios
where other WLANs starve. Moreover, we show that there are scenarios where DCB with stochastic channel width selection improves
the latter approach both in terms of individual throughput and fairness. It follows that there is not a unique DCB policy that is optimal for
every case. Instead, smarter bandwidth adaptation is required in the challenging scenarios of next-generation WLANs.
Index Terms—Dynamic channel bonding, spatial distribution, policy, CTMN, WLAN, throughput, IEEE 802.11ax
1 INTRODUCTION
WIRELESS local area networks (WLANs), with IEEE 802.11 as the most widely used standard, are a cost-efficient solution for wireless Internet access that can satisfy
most of the current communication requirements in domestic, public, and business scenarios. However, the scarcity
of the frequency spectrum, the increasing throughput demands of new bandwidth-hungry applications, and
the heterogeneity of current wireless network deployments
give rise to substantial complexity. Such issues gain importance in dense WLAN deployments, leading to multiple
partially overlapping scenarios and coexistence problems.
In this regard, two main approaches for optimizing the
scarce resources of the frequency spectrum are being deeply
studied in the context of WLANs: channel allocation (CA)
and channel bonding (CB). While CA refers to the action of
allocating the potential transmission channels for a WLAN
or a group of WLANs (i.e., allocating both the primary
and secondary basic channels), CB is a technique whereby
nodes are allowed to use a contiguous set of idle channels
for transmitting in larger bandwidths, and thus potentially
achieving higher throughputs.
This paper focuses on CB, which was firstly introduced
in the IEEE 802.11n amendment [1], where two 20 MHz
basic channels can be aggregated to transmit in a 40 MHz
channel. Newer amendments like IEEE 802.11ac (11ac) [2]
extend the number of basic channels that can be aggregated
up to 160 MHz channel widths. It is expected that IEEE
802.11ax (11ax) will boost the use of wider channels [3].
Nonetheless, due to the fact that using wider channels
increases the contention and interference among nodes,
undesirable lower performances may be experienced when
• All the authors are with Universitat Pompeu Fabra (UPF). E-mails: {sergio.barrachina, boris.bellalta, francesc.wilhelmi}@upf.edu
applying static channel bonding (SCB), especially in high
density (HD) WLAN scenarios. To mitigate such a negative
effect, dynamic channel bonding (DCB) policies are used to
select the bandwidth in a more flexible way based on the
instantaneous spectrum occupancy. A well-known example
of DCB policy is always-max (AM)1 [4], [5], where transmitters select the widest channel found idle when the backoff
counter terminates.
To the best of our knowledge, the works in the literature
assessing the performance of DCB study only the SCB
and AM policies, while they also assume fully overlapping
scenarios where all the WLANs are within the carrier sense
range of the others [4], [6], [7], [8]. Therefore, there is an
important lack of insights on the performance of CB in more
realistic WLAN scenarios, where such a condition usually
does not hold.
With this work we aim to extend the state of the art by providing new insights into the performance of DCB in WLAN scenarios that are not required to be fully overlapping, where the effect of hidden and exposed nodes plays a crucial role due to the spatial distribution interdependencies. Namely, a node's operation has a direct impact on the nodes inside its carrier sense range, which in turn may affect nodes located outside such range in a complex and hard to prevent way. Besides, we assess different DCB policies, including a stochastic approach that selects the channel width randomly.
1. Some papers in the literature use the terms DCB and AM interchangeably. In this paper we treat AM as a special case of DCB.

To do so, we first introduce the Spatial-Flexible Continuous Time Markov Network (SFCTMN),2 an analytical
framework based on Continuous Time Markov Networks
(CTMNs). This framework is useful for describing the different phenomena that occur in WLANs when considering DCB in spatially distributed scenarios, i.e., where nodes are not required to be within the carrier sense range of each other. In this regard, we illustrate such complex phenomena through several toy scenarios.
We then evaluate the performance of the proposed policies in HD 11ax WLAN scenarios by means of simulations
using the 11axHDWLANsSim [9] wireless network simulator.3 We find that, while AM is normally the best DCB policy
for maximizing the individual long-term throughput of a
WLAN, it may generate unfair scenarios where some other
WLANs starve. In fact, there are cases where less aggressive policies like stochastic channel width selection improve on AM both in terms of individual throughput and fairness.
The contributions of this paper are as follows:
1) Novel insights into the effects of DCB in HD WLANs. We depict the complex interactions that arise in spatially distributed scenarios – i.e., considering path loss, signal-to-interference-plus-noise ratio (SINR) and clear channel assessment (CCA) thresholds, co-channel and adjacent channel interference, etc. – and discuss the influence that the operation of a node may have on the others. In this regard, we show that AM is not always the optimal policy and that different networks may require different DCB policies. This leads to the need for local policy adaptation.
2) Generalization of DCB policies including only-primary
(i.e., selecting just the primary channel for transmitting), SCB, AM, and probabilistic uniform (PU) (i.e.,
selecting the channel width stochastically). We show
that WLAN scenarios implementing any combination
of DCB policies can be characterized by a CTMN with
specific transition probabilities. This allows us to analytically compare the behavior of these policies.
3) Algorithm for modeling WLAN scenarios with CTMNs
that extends the one presented in [8]. This extension
allows us to capture non-fully overlapping scenarios,
taking into consideration spatial distribution implications.
4) Performance evaluation of the presented DCB policies
in HD WLAN scenarios by means of simulations. The
physical (PHY) and medium access control (MAC)
layers are set in a representative way for single user
(SU) 11ax WLANs. We show that AM does not always
maximize the individual throughput, but more flexible
policies like PU are required. Besides, we show that
policy learning and/or adaptation is required on a per-WLAN basis in order to maximize both the individual
throughput and system fairness.
The remainder of this article is organized as follows. In Section 2, we provide related work on DCB. Then, in Section 3 we describe the system model considered throughout the article. The SFCTMN framework for modeling WLAN scenarios with CTMNs is detailed in Section 4. We depict the interactions in frequency and space in Section 5 by means of several toy scenarios. The performance of the different DCB policies in HD WLANs is assessed in Section 6. We conclude with some final remarks and future work in Section 7.

2. All of the source code of SFCTMN is open, encouraging sharing of algorithms between contributors and providing the ability for people to improve on the work of others under the GNU General Public License v3.0. The code used in this work can be found at https://github.com/sergiobarra/SFCTMN.

3. An overview of the simulator, as well as the parameters used in the simulations, is given in the Appendix.
2 RELATED WORK
Several works in the literature assess the performance of
CB by means of analytical models, simulations or real
testbeds. Authors in [10], [11] experimentally analyze SCB
in IEEE 802.11n WLANs and show that the reduction in power per Hertz when transmitting in larger channel widths causes lower SINR at the receivers. This consequently shrinks the coverage area and increases the probability of packet losses due to the accentuated vulnerability to interference.
Nonetheless, they also show that DCB can provide significant throughput gains when such issues are palliated by
properly adjusting the transmission power and data rates.
With the 11ac and 11ax amendments larger channel
widths are allowed (up to 160 MHz) and the pros and
cons of CB are also accentuated. Nevertheless, it is important to emphasize that in the dense and short-range
WLAN scenarios expected in the coming years [3], the issues
concerning low SINR values may be palliated due to the
shorter distances between transceiver and receiver, and to
techniques like spatial diversity multiple-input multipleoutput (MIMO) [12]. In this regard, authors in [5], [13]
assess the performance of DCB in 11ac WLANs by means
of simulations and show that it can provide significant
throughput gains. Still, they also corroborate the fact that
these gains are severely compromised by the activity of
overlapping wireless networks.
There are also works in the literature that use an analytical approach for assessing the performance of CB. For
instance, authors in [6] analytically model and evaluate the
performance of CB in short-range 11ac WLANs, proving
significant performance boost when the presence of external
interference is low to moderate. In [4], authors show that
the use of CB can provide significant performance gains
even in scenarios with a high density of WLANs, though
it may also cause unfair situations. Likewise, authors in
[8] use a CTMN-based model to explain key properties
of DCB such as the sensitivity to the backoff and transmission time distributions, or the high switching times
between different dominant states. Non-saturation regimes
are considered in [14], where authors propose an analytical
model for the throughput performance of channel bonding in 11ac WLANs in presence of collisions under both
saturated and non-saturated traffic loads. Authors in [15]
develop an analytical framework to study the performance
of opportunistic channel bonding in WLANs where 11ac
users coexist with legacy users. Similarly, authors in [16]
assess the performance of CB in the context of opportunistic
spectrum access via an analytical model and conclude that
CB is generally beneficial when primary user activity is low.
Some solutions involving CB are also available in the
literature. For instance, an intelligent scheme for jointly
Fig. 1: Simplified set of allowed channels for transmitting in the 11ac and 11ax amendments. This particular channelization scheme is defined by C =
{{1}, {2}, ..., {1, 2}, {3, 4}, ..., {1, 2, 3, 4}, ..., {1, 2, ..., 8}}.
adapting the rate and bandwidth in MIMO IEEE 802.11n
WLANs is presented in [12]. Real experiments show that
such scheme (ARAMIS) accurately adapts to a wide variety of channel conditions with negligible overhead and
achieving important performance gains. In [17], authors
propose a spectrum-distribution algorithm where access
points (APs) adjust both the primary channel and width
to match their traffic loads. They show that the spectrum
utilization can be substantially improved when allocating
more bandwidth to APs with high traffic load. A stochastic
spectrum distribution framework accounting for WLANs
demand uncertainty is presented in [18], showing better
performance compared to the naive allocation approach.
Authors in [19] show that the maximal throughput performance can be achieved with DCB under the CA scheme with
the least overlapped channels among WLANs.
We believe that this is the first work that provides
insights on the performance of DCB in spatially distributed
scenarios, where the effect of hidden and exposed nodes
plays a crucial role due to the spatial interdependencies
among nodes. In this regard, we also provide an algorithm
for generating the CTMNs to model such kind of scenarios.
Besides, we assess the performance of different DCB policies, including a novel stochastic approach. and show that
always selecting the widest available channel is sub-optimal
in some of the studied scenarios.
3 SYSTEM MODEL UNDER CONSIDERATION

In this section we first introduce the notation regarding channelization that is used throughout this article. We also define the DCB policies that are studied and provide a general description of the carrier sense multiple access with collision avoidance (CSMA/CA) operation in IEEE 802.11 WLANs. Finally, we expose the main assumptions considered in the presented scenarios.

3.1 Channelization

Let us define some terms regarding channel access for the sake of facilitating further explanation:
• Basic channel c: the frequency spectrum is split into basic channels of width |c| = 20 MHz.
• Primary channel p_w: basic channel with different roles depending on the node state. All the nodes belonging to the same WLAN w – i.e., the AP and stations (STAs) – must share the same primary channel p_w. Essentially, the primary channel is used to i) sense the medium for decrementing the backoff when the primary channel's frequency band is found free, and ii) listen for decodable packets in order to perform the request to send (RTS) and clear to send (CTS) handshake – or enter the network allocation vector (NAV) state – and receive data or ACK packets. Hence, the primary channel is always used when transmitting and receiving packets.
• Channel C: a channel C = {c_1, c_2, ..., c_N} consists of a contiguous set of N basic channels. The width (or bandwidth) of a channel is N|c|.
• Channelization scheme C: the set of channels that can be used for transmitting is determined by the channel access specification and the system channel (C_sys), whose bandwidth is given by N_sys|c|. Namely, all the nodes in the system must transmit in some channel included in C. A simplified version of the channelization considered in the 11ac and 11ax standards is shown in Figure 1 (see also the sketch at the end of this subsection).
• WLAN's allocated channel C_w: nodes in a WLAN w must transmit in a channel contained in C_w ∈ C. Different WLANs may be allocated different primary channels and different available channel widths. We use N_w to denote the number of basic channels allocated to WLAN w.
• Transmission channel C_n^tx: a node n belonging to a WLAN w has to transmit in a channel C_n^tx ⊆ C_w ∈ C, which is given by i) the set of basic channels in C_w found idle by node n at the end of the backoff (C_n^free),4 and ii) the implemented DCB policy.

In the example shown in Figure 2, the system channel C_sys counts with N_sys = 8 basic channels and WLAN X is allocated the channel C_X = {1, 2, 3, 4} and primary channel p_X = 3. Note that in this example the AP of WLAN X does not select the entire allocated channel, but a smaller one, i.e., C_X^tx = {3, 4} ⊆ C_X. Two reasons may be the cause: i) basic channels 1 and 2 are detected busy at the end of the backoff, or ii) the DCB policy determines not to pick them.

Fig. 2: Channel access notation. WLAN X is allocated 4 basic channels but only 2 are used for the transmission.

4. Note that, in order to include secondary basic channels in a transmission, a WLAN must find them free during at least a PIFS period before the backoff counter terminates, as shown in Figure 3. While such a PIFS condition is not considered in the SFCTMN framework for the sake of analysis simplicity, the 11axHDWLANsSim simulator does consider it.
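To make the channelization scheme concrete, the following minimal Python sketch (written for this text; it is not part of SFCTMN or 11axHDWLANsSim) enumerates the simplified channel set of Figure 1 for N_sys = 8 basic channels. It assumes the alignment rule recalled later in Section 4.2.1: a valid channel is made of a = 2^k contiguous basic channels whose rightmost basic channel index is a multiple of a.

def channelization(n_sys):
    """Enumerate valid channels: blocks of 2^k contiguous, aligned basic channels."""
    channels = []
    width = 1
    while width <= n_sys:
        # Blocks of 'width' basic channels starting at 1, width + 1, 2*width + 1, ...
        for start in range(1, n_sys + 1, width):
            channels.append(tuple(range(start, start + width)))
        width *= 2
    return channels

if __name__ == "__main__":
    for ch in channelization(8):
        print(ch)
    # Prints the 15 channels of Figure 1: eight 20 MHz, four 40 MHz,
    # two 80 MHz and one 160 MHz channel.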
3.2
CSMA/CA operation in IEEE 802.11 WLANs
According to the CSMA/CA operation, when a node n
belonging to a WLAN w has a packet ready for transmission, it measures the power sensed in the frequency band
of pw . Once the primary channel has been detected free,
i.e., the power sensed by n at pw is smaller than its CCA
threshold, the node starts the backoff procedure by selecting
a random initial value of BO ∈ [0, CW − 1] time slots. The
contention window is defined as CW = 2^b · CW_min, where
b ∈ {0, 1, 2, ..., m} is the backoff stage with maximum value
m, and CWmin is the minimum contention window. When
a packet transmission fails, b is increased by one unit, and
reset to 0 when the packet is acknowledged.
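As a toy illustration of the contention window rule above, the following sketch draws a random backoff, in time slots, for a given backoff stage. CW_min and the maximum stage m below are hypothetical example values, not the ones used in the simulations.

import random

CW_MIN = 16   # minimum contention window (illustrative value)
M_MAX = 5     # maximum backoff stage m (illustrative value)

def draw_backoff(backoff_stage):
    """Return a random backoff BO ~ U[0, CW - 1], with CW = 2^b * CW_min."""
    b = min(backoff_stage, M_MAX)
    cw = (2 ** b) * CW_MIN
    return random.randint(0, cw - 1)

# At stage b = 0 the expected backoff is (CW_min - 1)/2 slots, which is the
# value E[B] used later in Section 4 for the CTMN backoff rate.
print(draw_backoff(0), draw_backoff(3))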
After computing BO, the node starts decreasing the backoff counter while sensing the primary channel. Whenever
the power sensed by n at pw is higher than its CCA, the
backoff is paused and set to the nearest higher time slot until
pw is detected free again, at which point the countdown is
resumed. When the backoff timer reaches zero, the node
selects the transmission channel Cntx based on the set of idle
basic channels Cnfree and on the DCB policy.
The selected transmission channel is then used throughout the whole packet exchanges involved in a data packet
transmission between the transceiver and receiver. Namely,
RTS (used for notifying the selected transmission channel),
CTS, and ACK packets are also transmitted in Cntx . Likewise,
any other node that receives an RTS in its primary channel
with enough power to be decoded will enter in NAV state,
which is used for deferring channel access and avoiding
packet collisions (especially those caused by hidden node
situations).
In Figure 3, the temporal evolution of a node operating under AM is shown. In this example, the WLAN
to which the node belongs is allocated with the channel
Cw = {1, 2, ..., 8} and primary channel pw = 2 . While at
the end of the first backoff the node is able to select the full
allocated channel to transmit in, i.e., Cntx = Cw , at the end of
the second one, because of the interference sensed at channel
3 during the PIFS period, a narrower channel Cntx = {1, 2}
is selected.
3.3
DCB policies
The DCB policy determines which transmission channel
a node must pick from the set of available ones. When
the backoff terminates, any node belonging to a WLAN w
operates according to the DCB policy as follows:
• Only-primary (OP): picks just the primary channel for
transmitting if it is found idle. This policy is also known
as single-channel.
• Static channel bonding (SCB): exclusively picks the
full channel allocated in its WLAN when found free.
Namely, nodes operating under SCB cannot transmit in
channels different than Cw .
• Always-max (AM): picks the widest possible channel
found free in Cw for transmitting.
• Probabilistic uniform (PU): picks, with the same probability, any of the possible channels found free inside the
allocated channel Cw .
For the sake of illustration, let us consider the example
shown in Figure 3, where the evolution of a node imple-
menting AM is presented. Regarding the rest of DCB policies, OP would just pick the basic channel 2 after both backoff terminations. Instead, while SCB would transmit in the
entire channel after the first backoff, it would not transmit
at the end of the second one because part of C_w is busy. Finally,
PU would transmit on channels {2}, {1, 2}, {1, 2, 3, 4} or
{1, 2, ..., 8}, each with probability 1/4, at the first backoff termination, and on channels {2} or {1, 2}, each with probability 1/2, at the end of the second one. A schematic flowchart of
the DCB policy operation is shown in Figure 4.
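The selection logic sketched in Figure 4 can be summarized in a few lines of Python. The snippet below is a simplified illustration written for this text (it is not the simulator code): channels are tuples of basic-channel indices, allocated stands for C_w (assumed aligned and of power-of-two width), free is the set of basic channels found idle during the PIFS period, and every candidate channel is forced to contain the primary channel.

import random

def candidate_channels(allocated, primary, free):
    """Aligned 2^k-wide sub-channels of 'allocated' that contain the primary
    channel and whose basic channels are all idle."""
    n = len(allocated)
    cands = []
    width = 1
    while width <= n:
        for i in range(0, n, width):
            ch = allocated[i:i + width]
            if primary in ch and all(c in free for c in ch):
                cands.append(ch)
        width *= 2
    return cands

def select_tx_channel(policy, allocated, primary, free):
    cands = candidate_channels(allocated, primary, free)
    if not cands:
        return None                      # primary channel busy: no transmission
    if policy == "OP":
        return min(cands, key=len)       # only the primary channel
    if policy == "SCB":
        full = tuple(allocated)
        return full if max(cands, key=len) == full else None
    if policy == "AM":
        return max(cands, key=len)       # widest channel found idle
    if policy == "PU":
        return random.choice(cands)      # uniform over the feasible widths
    raise ValueError(policy)

# Second backoff of Figure 3: C_w = {1,...,8}, p_w = 2, basic channel 3 busy.
print(select_tx_channel("AM", tuple(range(1, 9)), 2, {1, 2, 4, 5, 6, 7, 8}))
# -> (1, 2): every wider candidate containing the primary also contains channel 3.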
3.4
Main assumptions
In this paper we present results gathered via the SFCTMN
framework based on CTMNs, and also via simulations
through the 11axHDWLANsSim wireless network simulator.
While in the latter case we are able to introduce more
realistic implementations of the 11ax amendment, in the
analytical model we use relaxed assumptions for facilitating posterior analysis. This subsection depicts the general
assumptions considered in both cases.
1) Channel model: signal propagation is isotropic regardless of the selected path loss model L. Also, the propagation delay between any pair of nodes is considered
negligible due to the small carrier sense areas given
in the 5 GHz band. Besides, the transmission power
is divided uniformly among the basic channels in the
selected transmission channel. Specifically, the power per basic channel is reduced by 3 dB each time the bandwidth is doubled. We also consider an adjacent channel interference model that replicates half of the power transmitted per Hertz into the two basic channels that are contiguous to the actual transmission channel C_n^tx (a numeric sketch of both rules is given after this list).
2) Packet errors: we consider that a packet is lost if i)
the power of interest received at the receiver is less
than its CCA, ii) the SINR (γ ) perceived at the receiver
does not accomplish the capture effect (CE), i.e., γ <
CE, or iii) the receiver was already receiving a packet.
In the latter case, the decoding of the first packet is
ruptured only if the CE is no longer accomplished
because of a the interfering transmission. We assume
an infinite maximum number of retransmissions per
packet, whose effect is almost negligible in most of the
cases because of the small probability of retransmitting
a data packet more than a few times [20].
3) Modulation coding scheme (MCS): the MCS index
used by each WLAN is the highest possible, and it is
kept constant throughout all the simulation. We assume
that the MCS selection is designed to keep the packet
error rate constant and equal to η , given that static
deployments are considered.
4) Traffic: downlink traffic is considered. In addition,
we assume a full-buffer model, i.e., APs always have
packets pending for transmission as they operate in
saturated regime.
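As announced in the channel-model assumption above, the following minimal sketch (with illustrative values) splits a transmission power budget uniformly over the selected basic channels, which amounts to a 3 dB reduction per basic channel each time the bandwidth is doubled, and adds the adjacent-channel leakage of half the per-channel power into the two contiguous basic channels.

import math

def per_channel_power_dbm(tx_power_dbm, tx_channel):
    """Uniform split of the transmit power over the selected basic channels."""
    return {c: tx_power_dbm - 10 * math.log10(len(tx_channel)) for c in tx_channel}

def add_adjacent_leakage(power_per_channel, n_sys):
    """Replicate half of the power per Hz (i.e., -3 dB) into the two basic
    channels contiguous to the transmission channel."""
    out = dict(power_per_channel)
    edge_power = next(iter(power_per_channel.values()))   # split is uniform
    for adj in (min(power_per_channel) - 1, max(power_per_channel) + 1):
        if 1 <= adj <= n_sys and adj not in power_per_channel:
            out[adj] = edge_power - 3.0
    return out

tx = per_channel_power_dbm(20.0, (3, 4))   # e.g., 20 dBm over channels {3, 4}
print(add_adjacent_leakage(tx, n_sys=8))   # leakage appears in channels 2 and 5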
4 THE CTMN MODEL FOR HD WLANS
The analysis of CSMA/CA networks through CTMN models was firstly introduced in [21]. Such models were later
applied to IEEE 802.11 networks in [4], [6], [8], [14], [19],
Fig. 3: CSMA/CA temporal evolution of a node operating under AM and the 11ax channelization scheme.
Fig. 4: Flowchart of the transmission channel selection. In
this example channel 5 is the primary channel and a DCB
policy D = AM is applied.
[22], [23], among others. Experimental results in [24], [25]
demonstrate that CTMN models, while idealized, provide
remarkably accurate throughput estimates for actual IEEE
802.11 systems. A comprehensible example-based tutorial
of CTMN models applied to different wireless networking
scenarios can be found in [26].
Nevertheless, to the best of our knowledge, works that
model CSMA/CA WLANs using CB study just the SCB and
AM policies, while assuming fully overlapping scenarios
where all the WLANs are inside the carrier sense range of
each other. Therefore, there is an important lack of insights
on more general WLAN scenarios, where such condition
usually does not hold and interdependencies among nodes
may have a critical impact on their performance.
In this section we depict our extended version of the
algorithm introduced in [8] for generating the CTMNs corresponding to spatially distributed WLAN scenarios, which
is implemented in the SFCTMN framework. With this extension, as the condition of having fully overlapping networks
is no longer required for constructing the corresponding
CTMNs, more factual observations can be made.
In this paper, since we consider that there is only one
transmitter-receiver pair per WLAN (i.e., the AP and one
STA), we will simply refer to the WLAN activity as a unit.
4.1
Implications
Modeling WLAN scenarios with CTMNs requires the backoff and transmission times to be exponentially distributed.
This has a main implication: because of the negligible propagation delay, the probability of packet collisions between
two or more nodes within the carrier sense range of the
other nodes is zero. The reason is that two WLANs will
never end their backoff at the same time, and therefore they
will never start a transmission at the same time either.
In overlapping single-channel CSMA/CA networks, it
is shown that the state probabilities are insensitive to the
backoff and transmission time distributions [25], [27]. However, even though authors in [8] prove that the insensitivity
property does not hold for DCB networks, the sensitivity
to the backoff and transmission time distributions is very
small. Therefore, the analytical results obtained using the
exponential assumption offer a good approximation for deterministic distributions of the backoff, data rate and packet
length as well.
4.2
Constructing the SFCTMN
In order to depict how CTMNs are generated, let us
consider the toy scenario (Scenario I) shown in Figure 5,
which is composed of two fully overlapping WLANs implementing AM.5 The channel allocation of such scenario
can be defined as CA = {1, 2, 3, 4} with pA = 2, and
CB = {3, 4} with pB = 3. That is, there are four basic
channels in the system, and the set of valid transmission
channels according to the 11ax channel access scheme is
CI = {{1}, {2}, {3}, {4}, {1, 2}, {3, 4}, {1, 2, 3, 4}} (see Figure 1).
Fig. 5: Scenario I. WLANs A and B are inside the carrier
sense range of each other with potentially overlapping basic
channels 3 and 4.
Due to the fact that both WLANs are inside the carrier
sense range of each other, their APs could be transmitting
simultaneously at any time t only if their transmission
channels do not overlap, i.e., CAtx (t) ∩ CBtx (t) = ∅. Notice
that slotted backoff collisions cannot occur because their
counters decrease continuously in time, and therefore two
transmissions can neither start nor finish at the very same time.
5. We have selected Scenario I for depicting the algorithm in a simple
way. CTMNs corresponding to non-fully overlapping scenarios (e.g.,
Scenario III in Section 5) can be also generated with the very same
algorithm.
4.2.1 States

A state in the CTMN is defined by the set of WLANs active and the basic channels on which they are transmitting. Essentially, we say that a WLAN is active if it is transmitting in some channel, and inactive otherwise. We define two types of state spaces: the global state space (Ψ) and the feasible state space (S).
• Global state space: a global state ψ ∈ Ψ is a state that accomplishes two conditions: i) the channels in which the active WLANs are transmitting comply with the channelization scheme C, and ii) all active WLANs transmit inside their allocated channels. That is, Ψ only depends on the particular channelization scheme C in use and on the channel allocation of the WLANs in the system. In this paper, we assume that every transmission should be made in channels inside C_sys that are composed of a = 2^k contiguous basic channels, for some integer k ≤ log_2(N_sys), and whose rightmost basic channel falls on a multiple of a, as stated in the 11ac and 11ax amendments.
• Feasible state space: a feasible state s ∈ S ⊆ Ψ exists only if each of the active WLANs in such a state started its transmission by accomplishing the CCA requirement derived from the assigned DCB policy. Namely, given a global state space, S depends only on the spatial distribution and on the DCB policies assigned to each WLAN.

The CTMN corresponding to the toy scenario being discussed is shown in Figure 6. Regarding the notation, we represent the states by the leftmost and rightmost basic channels used in the transmission channels of each of their active WLANs. For instance, state s_4 = A_1^2 B_3^4 refers to the state where A and B are transmitting in channels C_A^tx = {1, 2} and C_B^tx = {3, 4}, respectively.

Concerning the state spaces, states ψ_6, ψ_7, ψ_8, ψ_9, ψ_10, ψ_11, ψ_12 ∉ S are not reachable (i.e., they are global but not feasible) for two different reasons. First, states ψ_11 and ψ_12 are not feasible because of the overlapping channels involved. Secondly, the rest of the unfeasible states are so due to the fact that AM is applied: at any time t that WLAN A (B) finishes its backoff and B (A) is not active, A (B) picks the widest available channel, i.e., C_A^tx(t) = {1, 2, 3, 4} or C_B^tx(t) = {3, 4}, respectively. Likewise, any time A (B) finishes its backoff and B (A) is active, A (B) again picks the widest available channel, which in this case would be C_A^tx(t) = {1, 2} for A, and C_B^tx(t) = {3, 4} for B if A is not transmitting in its full allocated channel, respectively.

Some states such as s_5 = A_1^2 are reachable only via backward transitions. In this case, when A finishes its backoff and B is transmitting in C_B^tx(t) = {3, 4} (i.e., s_3), A picks just C_A^tx(t) = {1, 2} because the power sensed in channels 3 and 4 exceeds the CCA as a consequence of B's transmission. That is, s_5 is only reachable through a backward transition from s_4, given when B finishes its transmission in state s_4.

Fig. 6: CTMN of Scenario I when applying AM. Circles represent states. All the states are global. Specifically, feasible states are displayed in white, while non-feasible states are gray colored. Two-way transitions are noted with forward and backward rates λ, µ, respectively, to avoid cluttering the figure. The only backward transition is colored in red. The blue pair of numbers beside the transition edges represents the algorithm's discovery order of the forward and backward transitions, respectively.

4.2.2 CTMN algorithm: finding states and transitions

The first step for constructing the CTMN is to identify the global state space Ψ, which is simply composed of all the possible combinations given by the system channelization scheme and the channel allocations of the WLANs. The feasible states in S are later identified by exploring the states in Ψ. Algorithm 1 shows the pseudocode for identifying both S and the transitions among such states, which are represented by the transition rate matrix Q.

Essentially, while there are discovered states in S that have not been explored yet, for any state s_k ∈ S not explored, and for each WLAN X in the system, we determine whether X is active or not. If X is active, we then set possible backward transitions to already known and not yet known states. To do so, it is required to fully explore Ψ looking for states where: i) the other active WLANs in the state remain transmitting in the same transmission channel,6 and ii) WLAN X is not active. On the other hand, if WLAN X is inactive in state s_k, we try to find forward transitions to other states. To that aim, the algorithm fully explores Ψ looking for states where i) the other active WLANs in the state remain transmitting in the same transmission channel, and ii) X is active in the new state as a result of applying the implemented DCB policy (D), as shown in line 26 of Algorithm 1.

6. Notice that we use X_a^b ∈ s to say that a WLAN X transmits in a range of contiguous basic channels [a, b] when the CTMN is in state s. With a slight abuse of notation, s − X_a^b represents the state where all WLANs that were active in s remain active except for X, which becomes inactive after finishing its packet transmission. Similarly, s + X_a^b represents the state where all the active WLANs in s remain active and X is transmitting in the range of basic channels [a, b].
Algorithm 1: CTMN generation of spatially distributed DCB WLAN scenarios.
 1  i = 1                        # index of the last state found
 2  k = 1                        # index of the state currently being explored
 3  s_k = ∅                      # state currently being explored
 4  S = {s_k}                    # set of feasible states
 5  Q = [ ]                      # transition rate matrix
 6  Ψ = generate_psi_space()     # generate the global state space Ψ
 7  while s_k ∈ S do             # i.e., while there are unexplored states in S
 8      foreach WLAN X do
 9          if ∃ a, b s.t. X_a^b ∈ s_k then            # X is active in s_k
10              foreach ψ ∈ Ψ do
11                  if s_k − X_a^b == ψ then           # backward transition found
12                      if ψ ∉ S then
13                          i = i + 1
14                          k' = i
15                          S = S ∪ ψ
16                      else
17                          k' = get_index(ψ)          # index of ψ in S
18                      Q_{k,k'} = µ_X(s_k)            # new backward transition s_k → s_k'
19          else                                       # X is NOT active in s_k
20              Φ = ∅                                  # global states reachable from s_k
21              Φ* = ∅                                 # feasible states reachable from s_k
22              foreach ψ ∈ Ψ do                       # find possible forward states
23                  if ∃ a, b s.t. s_k + X_a^b == ψ then
24                      Φ = Φ ∪ ψ
25              C_X^free(s_k) = {P_X^rx(s_k, C_X) < CCA_X : 0, 1}
26              {Φ*, α} = f(D, C_X^free(s_k), Φ)       # f applies the DCB policy D
27              foreach φ* ∈ Φ* do
28                  if φ* ∉ S then
29                      i = i + 1
30                      k' = i
31                      S = S ∪ φ*
32                  else
33                      k' = get_index(φ*)             # index of φ* in S
34                  Q_{k,k'} = α(φ*) · λ_X             # new forward transition s_k → s_k'
35      k = k + 1
It is important to remark that, in order to apply such a policy, the set of idle basic channels in state s_k, i.e., C_X^free(s_k), must be identified according to the power sensed in each of the basic channels allocated to X, i.e., P_X^rx(s_k, C_X), and to its CCA level. Afterwards, the transmission channel is selected through the function f, which applies D. A simple flowchart of the transmission channel selection is shown in Figure 4.
Each transition between two states s_i and s_j has a corresponding transition rate Q_{i,j}. For forward transitions, the packet transmission attempt rate (or simply backoff rate) is λ = 1/(E[B] · T_slot), where E[B] = (CW_min − 1)/2 is the expected backoff duration in time slots, determined by the minimum contention window. Furthermore, for backward transitions, the departure rate (µ) depends on the duration of a successful transmission (i.e., µ = 1/t_suc), which in turn depends both on the data rate (r) given by the selected MCS and transmission channel width, and on the packet length E[L_data]. In the algorithm we simply say that the data rate of a WLAN X depends on the state of the system, which collects such information, i.e., µ_X(s).
Depending on the DCB policy, different feasible forward transitions may exist from the very same state, which are represented by the set Φ*. As shown in line 34 of Algorithm 1, every feasible forward transition rate is weighted by a transition probability vector (α) whose elements determine the probability of transiting to each of the possible global states in Φ. Namely, with a slight abuse of mathematical notation, the probability of transiting to any given feasible state φ* ∈ Φ is α(φ*). As a consequence, α must follow the normalization condition Σ_{φ*} α(φ*) = 1.
For the sake of illustration, in the CTMNs of Figures 6,
8 and 10, states are numbered according to the order in
which they are discovered. Transitions between states are
also shown below the edges. Note that with SFCTMN, as
non-fully overlapping networks are allowed, transitions to
states where one or more WLANs may suffer from packet losses due to interference are also reachable, as shown in Section 5.
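To make the construction of the state spaces tangible, the following Python sketch (an independent illustration, not SFCTMN code) enumerates the global state space Ψ of Scenario I by combining, for each WLAN, the option of being inactive with every valid sub-channel of its allocated channel that contains its primary channel. Obtaining the feasible space S from Ψ additionally requires the spatial distribution and the DCB policies, as explained above, and is not modeled here.

from itertools import product

def wlan_options(allocated, primary):
    """A WLAN is either inactive (None) or transmits in an aligned 2^k-wide
    sub-channel of its allocated channel that contains its primary channel."""
    opts = [None]
    n = len(allocated)
    width = 1
    while width <= n:
        for i in range(0, n, width):
            ch = allocated[i:i + width]
            if primary in ch:
                opts.append(ch)
        width *= 2
    return opts

def global_state_space(wlans):
    """All per-WLAN combinations form Psi. 'wlans' maps a WLAN name to a
    (allocated_channel, primary_channel) pair."""
    names = list(wlans)
    options = [wlan_options(*wlans[name]) for name in names]
    return [dict(zip(names, combo)) for combo in product(*options)]

# Scenario I: C_A = {1,2,3,4} with p_A = 2, and C_B = {3,4} with p_B = 3.
psi = global_state_space({"A": ((1, 2, 3, 4), 2), "B": ((3, 4), 3)})
print(len(psi))   # 12 global states, matching psi_1, ..., psi_12 of Figure 6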
4.3
Performance metrics
Since there are a limited number of possible channels to
transmit in, the constructed CTMN will always be finite.
Furthermore, it will be irreducible due to the fact that
backward transitions between neighboring states are always
feasible. Therefore, a steady-state solution to the CTMN
always exists. However, due to the possible existence of
one-way transitions between states, the CTMN is not always
time-reversible and the local balance may not hold [28], and
hence, simple product-form solutions for computing the equilibrium distribution of the CTMNs cannot generally be found.
The equilibrium distribution vector π represents the fraction of time the system spends in each feasible state. Hence, we define π_s as the probability of finding the system in state s. In order to obtain π we can use the transition rate matrix Q by solving the system of equations πQ = 0.
As an example, for Scenario I, considering that its elements are sorted by the discovery order of the states, π = (π_∅, π_{A_1^4}, π_{B_3^4}, π_{A_1^2 B_3^4}, π_{A_1^2}). Besides, the corresponding transition rate matrix is

        |    *       λ_A      λ_B       0         0      |
        | µ_A(s_2)    *        0        0         0      |
    Q = | µ_B(s_3)    0        *       λ_A        0      |
        |    0        0     µ_A(s_4)    *      µ_B(s_4)  |
        | µ_A(s_5)    0        0       λ_B        *      |

where λ_A, λ_B and µ_A(s), µ_B(s) are the packet generation and departure rates in state s of WLANs A and B, respectively. The diagonal elements represented by '*' in the matrix should be replaced by the negative sum of the rest of the elements of their row, e.g., Q_{4,4} = −(µ_A(s_4) + µ_B(s_4)); for the sake of formatting we do not include them in the matrix.
Once π is computed, estimating the average throughput experienced by each WLAN is straightforward. Specifically, the average throughput of a WLAN w is

    Γ_w := E[L] Σ_{s∈S} 1[γ_w(s) > CE] µ_w(s) π_s (1 − η),

where E[L] is the expected data packet length, γ_w(s) is the SINR perceived by the STA in WLAN w in state s, CE is the capture effect threshold, η is the constant packet error probability, and 1[·] is the indicator function. The system aggregate throughput is then the sum of the throughputs of all the WLANs, i.e., Γ := Σ_{w=1}^{M} Γ_w. Besides, in order to evaluate the fairness of a given scenario, we can use the proportional fairness metric,

    P := Σ_{w=1}^{M} log_10 Γ_w,

and Jain's fairness index (F) [29],

    F := (Σ_{w=1}^{M} Γ_w)^2 / (M · Σ_{w=1}^{M} Γ_w^2).
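As a worked numerical illustration of these metrics, the sketch below solves πQ = 0 together with the normalization Σ_s π_s = 1 for the Scenario I transition-rate matrix given above, and then evaluates Γ_w, P and F. All rates, the packet length, the packet error probability and the capture-effect assumption are placeholder values chosen for the example; they are not the parameters used in the paper.

import numpy as np

# Placeholder rates (1/s); states ordered s1..s5 as in the text above.
lam_A = lam_B = 100.0        # backoff (transmission attempt) rates
mu = 200.0                   # departure rate, assumed equal in every state

Q = np.array([
    [0.0, lam_A, lam_B, 0.0,   0.0],
    [mu,  0.0,   0.0,   0.0,   0.0],
    [mu,  0.0,   0.0,   lam_A, 0.0],
    [0.0, 0.0,   mu,    0.0,   mu ],
    [mu,  0.0,   0.0,   lam_B, 0.0],
])
np.fill_diagonal(Q, -Q.sum(axis=1))       # diagonal = minus the row sum

# Solve pi Q = 0 with sum(pi) = 1 (least squares with an appended constraint).
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

E_L, eta = 12000.0, 0.1                   # bits per data packet, error probability
# Capture effect assumed satisfied in every state of this example, so the
# indicator 1[gamma_w(s) > CE] is always 1.
active = {"A": [1, 3, 4], "B": [2, 3]}    # 0-based states where each WLAN transmits
gamma = {w: E_L * (1 - eta) * sum(mu * pi[s] for s in states)
         for w, states in active.items()}

P = sum(np.log10(g) for g in gamma.values())
F = sum(gamma.values()) ** 2 / (len(gamma) * sum(g * g for g in gamma.values()))
print(pi, gamma, P, F)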
5 INTERACTIONS IN FREQUENCY AND SPACE
In this section we draw some relevant conclusions about
applying different DCB policies in CSMA/CA WLANs by
analyzing four representative toy scenarios with different
channel allocations and spatial distributions. To that aim we
use the SFCTMN analytical framework. We also validate the
gathered results by means of the 11axHDWLANsSim wireless network simulator (refer to the Appendix for details on
the parameters considered).
5.1
Feasible states dependence on the DCB policy
In Table 1 we show the effect of applying different DCB
policies on the average throughput experienced by WLANs
A and B (ΓA and ΓB , respectively), and by the whole
network (Γ), in scenarios I and II (presented in Figures 5
and 7, respectively).
Let us first consider Scenario I. As explained in Section 4 and shown in Figure 6, the CTMN reaches 5 feasible states when WLANs implement AM. Instead, due to
the fact that both WLANs overlap in channels 3 and 4
when transmitting in their whole allocated channels – i.e.,
CAtx = CA = {1, 2, 3, 4} and CBtx = CB = {3, 4}, respectively
– the SCB policy reaches just three feasible states. Such states
correspond to those with a single WLAN transmitting, i.e.,
TABLE 1: DCB policy effect on the average throughput [Mbps] in Scenario I and Scenario II. For each policy, the upper row corresponds to the SFCTMN framework and the lower row to 11axHDWLANsSim (displayed in red in the original table).

                Scenario I                         Scenario II
Policy D   |S|    Γ_A      Γ_B      Γ        |S|    Γ_A      Γ_B      Γ
OP          4    113.23   113.23   226.47     4    113.23   113.23   226.47
                 113.23   113.23   226.46          113.23   113.23   226.46
SCB         3    143.46   143.46   286.92     3    109.19   109.19   218.38
                 131.98   148.85   280.83          108.72   108.84   217.56
AM          5    220.12   212.21   432.34     3    109.19   109.19   218.38
                 217.60   214.81   432.41          108.72   108.84   217.56
PU         10    149.14   148.38   297.52     6    113.19   113.19   226.38
                 149.20   148.42   297.63          113.20   113.18   226.38
S = {∅, A_1^4, B_3^4}. In the case of OP, both WLANs are forced to pick just their primary channel for transmitting and, therefore, S = {∅, A_2^2, B_3^3, A_2^2 B_3^3}. Notice that state A_2^2 B_3^3 is feasible because A and B have different primary channels and do not overlap when transmitting in them.
The last policy studied is PU, which is characterized by providing further exploration of the global state space Ψ. It usually expands the feasible state space S accordingly because more transitions are permitted. In Scenario I, whenever the CTMN is in state ∅ and the backoff of A or B expires, the WLANs pick each of the possible available channels with the same probability. Namely, the CTMN will transit to A_2^2, A_1^2 or A_1^4 with probability 1/3 each when A's backoff counter terminates, and to B_3^3 or B_3^4 with probability 1/2 each whenever B's backoff counter terminates.

Likewise, if the system is in state B_3^3 and A terminates its backoff counter, the CTMN will transit to the feasible states A_2^2 B_3^3 or A_1^2 B_3^3, each with probability 1/2. Similarly, whenever the system is in state A_2^2 or A_1^2 and B finishes its backoff, B will pick the transmission channel {3} or {3, 4}, each with probability 1/2, making the CTMN transit to the corresponding state where both WLANs transmit concurrently. These probabilities are called transition probabilities and are represented by the vector α_{X,s}(s'). For instance, in the latter case, the probability of transiting from s = A_2^2 to s' = A_2^2 B_3^3 when B terminates its backoff is α_{B,A_2^2}(A_2^2 B_3^3) = 1/2.
5.2
The short vs. long-term throughput dilemma
Intuitively, one could think that, as it occurs in Scenario I,
always picking the widest channel found free by means
of AM, i.e., maximizing the throughput of the immediate
packet transmission (or short-term throughput), may be
the best strategy for maximizing the long-term throughput
as well. However, Scenario II, depicted in Figure 7, is a counterexample showing that this intuition does not always apply. It consists of two overlapping WLANs, as in Scenario I, but with a different channel allocation. Specifically, C_A = C_B = {1, 2} with p_A = 1 and p_B = 2, respectively. The CTMNs that are generated according to the different DCB policies – generalized to any value that the corresponding transition probabilities α_{A,∅}, α_{B,∅} may take – are shown in Figure 8.
Regarding the transition probabilities, Table 2 shows the vectors α_{A,∅}, α_{B,∅} that are obtained for each of the studied DCB policies in Scenario II.
Fig. 7: Scenario II. WLANs A and B are inside the carrier sense range of each other with potentially overlapping basic channels 1 and 2.

Fig. 8: CTMN corresponding to Scenario II. Transition edges are dashed to indicate those that may or may not exist depending on the DCB policy. For instance, state s_6 is only reachable for the OP and PU policies. The discovery order of the states and transitions (displayed in blue) corresponds to the PU policy.
Firstly, with OP, due to the fact that WLANs are only allowed to transmit in their primary channel, the CTMN can only transit from state ∅ to states A_1^1 or B_2^2, i.e., α_{A,∅}(s_2) = α_{B,∅}(s_4) = 1. Similarly, with SCB, WLANs can only transmit in their complete allocated channel; thus, when being in state ∅, the CTMN transits only to A_1^2 or B_1^2, i.e., α_{A,∅}(s_3) = α_{B,∅}(s_5) = 1. Notice that AM generates the same transition probabilities (and respective average throughput) as SCB because, whenever the WLANs have the possibility to transmit – which only happens when the CTMN is in state ∅ – both A and B pick the widest channel available, i.e., C_A^tx = C_B^tx = {1, 2}. Finally, PU picks uniformly at random any of the possible transitions that A and B provoke when terminating their backoff in state ∅, i.e., α_{A,∅}(s_2) = α_{A,∅}(s_3) = 1/2 and α_{B,∅}(s_4) = α_{B,∅}(s_5) = 1/2, respectively.
TABLE 2: Transition probabilities from state ∅ of WLANs A and B in Scenario II for different DCB policies.

D     |S|   α_{A,∅}(s_2)   α_{A,∅}(s_3)   α_{B,∅}(s_4)   α_{B,∅}(s_5)
OP     4        1.0            0.0            1.0            0.0
SCB    3        0.0            1.0            0.0            1.0
AM     3        0.0            1.0            0.0            1.0
PU     6        0.5            0.5            0.5            0.5
Notice that state A_1^1 B_2^2, in which both WLANs are transmitting at the same time, is reachable from states A_1^1 and B_2^2 for both OP and PU. In such states, when either A or B terminates its backoff and the other is still transmitting in its primary channel, only a transition to state A_1^1 B_2^2 is possible, i.e., α_{A,s_2}(s_6) = α_{B,s_4}(s_6) = 1.
Interestingly, as shown in Table 1, applying OP in Scenario II, i.e., being conservative and unselfish, is the best
policy to increase both the individual average throughput of
A and B (ΓA , ΓB , respectively) and the system’s aggregated
one (Γ). Instead, being aggressive and selfish, i.e., applying
SCB or AM, provides the worst results both in terms of
individual and system’s aggregate throughput.
In addition, PU provides, on average, results similar to OP because most of the times that A and B terminate their backoff counter they can only transmit in their primary channel, as the secondary channel is most likely occupied by the other WLAN. In fact, state A_1^1 B_2^2 is the dominant state for both OP and PU. Specifically, the probability of finding the CTMN in state A_1^1 B_2^2, i.e., π_{s_6}, is 0.9802 for OP and 0.9702 for PU, respectively. Therefore, the slight differences in throughput experienced with OP and PU arise because of the possible transitions from ∅ to the states A_1^2 and B_1^2 in PU, where one WLAN occupies the entire allocated channel, preventing the other from decreasing its backoff.
Despite being a very simple scenario, we have shown
that it is not straightforward to determine the optimal DCB
policy that the AP in each WLAN must follow. Evidently,
in a non-overlapping scenario, AM would be the optimal
policy for both WLANs because of the non-existence of
inter-WLAN contention. However, this is not a typical case
in dense scenarios and, consequently, AM should not be
adopted as the de facto DCB policy, even though it provides
more flexibility than SCB. This simple toy scenario also
serves to prove that some intelligence should be implemented in the APs in order to harness the information
gathered from the environment.
Concerning the differences between the throughput values obtained by SFCTMN and 11axHDWLANsSim,7 we note that the main disparities correspond to the AM and SCB policies. It is important to remark that, while SFCTMN considers neither backoff collisions nor NAV periods, 11axHDWLANsSim does so in a more realistic way. Therefore, in 11axHDWLANsSim, whenever there is a slotted backoff collision, the RTS packets can still be decoded by the STAs of both WLANs if the CE condition is accomplished, which increases the average throughputs accordingly.

Regarding the NAV periods, an interesting phenomenon occurs in Scenario I when implementing SCB, AM or PU. While the RTS packets sent by B cannot be decoded by A, because A's primary channel is always outside the possible transmission channels of B (i.e., p_A = 2 ∉ C_B^tx = {3} or {3, 4}), the opposite occurs when A transmits. Due to the fact that the RTS is duplicated in each of the basic channels used for transmitting, whenever A transmits in its whole allocated channel, B is able to decode the RTS (i.e., p_B = 3 ∈ C_A^tx = {1, 2, 3, 4}) and consequently enters the NAV state.
7. In 11axHDWLANsSim the throughput is simply computed as the
number of useful bits (corresponding to data packets) that are successfully transmitted divided by the observation time of the simulation.
5.3
Cumulative interference and hidden nodes
When considering non-fully overlapping scenarios, i.e.,
where some of the WLANs are not inside the carrier sense
range of the others, complex and hard to prevent phenomena may occur. As an illustrative example, let us consider
the case shown in Figure 9a, where 3 WLANs sharing a
single channel (i.e., CA = CB = CC = {1}) are deployed
composing a line network.
As the carrier sense range is fixed and is the same for
each AP, by locating the APs at different distances we obtain
different topologies that are worth to be analyzed. We name
these topologies from T1 to T4 depending on the distance
between consecutive APs, which increases according to the
topology index. Notice that all the DCB policies discussed
in this work behave exactly the same way in single-channel
scenarios. Therefore, in this subsection we do not make
distinctions among them.
The average throughput experienced by each WLAN
in each of the regions is shown in Figure 9b. Regarding
topology T1, when APs are close enough to be inside the
carrier sense range of each other in a fully overlapping
manner, the medium access is shared fairly because of the
CSMA/CA mechanism. For that reason, the throughput is
decreased to approximately 1/3 with respect to topology T4.
Specifically, the system spends almost the same amount of
time in the states where just one WLAN is transmitting, i.e.,
π(A_1^1) = π(B_1^1) = π(C_1^1) ≈ 1/3.
The neighbor overlapping case in topology T2, where
A and C can transmit at the same time whenever B is not
active, but B can only do so when neither A nor C are
active, is a clear case of exposed-node starvation. Namely,
B has very few transmission opportunities as A and C are
transmitting almost permanently and B must continuously
pause its backoff consequently.
An interesting and hard-to-prevent phenomenon occurs in the potential central node overlapping case of topology T3. Figure 10 shows the corresponding CTMN. In this case, the cumulative interference that B perceives when A and C transmit at the same time prevents it from decreasing its backoff. However, B is able to decrement the backoff any time A or C is not transmitting.
This leads to two possible outcomes regarding packet
collisions. On the one hand, if the capture effect condition is
accomplished by B (i.e., γB > CE) no matter whether A and
C are transmitting, B will be able to successfully exchange
packets and the throughput will increase accordingly. On
the other hand, if the capture effect condition is not accomplished, B will suffer a huge packet error rate because
most of the initiated transmissions will be lost due to the
concurrent transmissions of A and C (i.e., γB < CE when
A and C transmit). This phenomenon may be recurrent and have a considerable impact in high-density networks where multiple WLANs interact with each other. Therefore, it should be foreseen in order to design efficient DCB policies.
Finally, as expected, in topology T4, due to the fact that
WLANs are isolated, i.e., outside the carrier sense range of
each other, they achieve the maximum throughput because
all their transmissions are successful and they are never
required to pause their backoff.
Concerning the differences in the average throughput
values estimated by SFCTMN and 11axHDWLANsSim, we
observe two phenomena regarding backoff collisions in
topologies T1 and T3. In T1, due to the fact that simultaneous transmissions (or backoff collisions) are permitted and
captured in 11axHDWLANsSim – and such transmissions do
not cause any packet losses as the capture effect is accomplished – the throughput is slightly higher accordingly.
The most notable difference is given in T3. In this topology SFCTMN estimates that B is transmitting just 50.1% of the time because, as A and C operate as if in isolation, most of the time they transmit concurrently, causing backoff freezing at B. However, 11axHDWLANsSim estimates that B transmits about 75% of the time, capturing a more realistic behavior. Such a difference is caused by the insensitivity property of the CTMN. For instance, whenever the system is in state s_6 = A_1^1 C_1^1 and A finishes its transmission (transiting to s_4 = C_1^1), B decreases its backoff while C is still active. In this case it is more probable to transit from s_4 to s_7 = B_1^1 C_1^1 than to s_6 = A_1^1 C_1^1 again because, on average, the remaining backoff counter of B will be smaller than the one generated by A when finishing its transmission. This is in fact not considered by the CTMN, which assumes the same probability of transiting from s_4 to s_6 as to s_7 because of the exponential distribution and the memoryless property.
5.4
Variability of optimal policies
Most often, the best DCB policy for increasing a WLAN's own throughput, no matter what policies the rest of the WLANs may
implement, is AM. Nonetheless, there are exceptions like the
one presented in Scenario II. Besides, if achieving throughput
fairness between all WLANs is the objective, other policies
may be required. Therefore, there is not always an optimal
common policy to be implemented by all the WLANs.
In fact, there are cases where different policies must be
assigned to different WLANs in order to increase both the
fairness and individual WLAN throughputs.
For instance, let us consider another toy scenario (Scenario IV) using the topology T2 of Scenario III, where three
WLANs are located in a line in such a way that they are in
the carrier sense range of the immediate neighbor. In this
case, however, let us assume a different channel allocation:
CA = CB = CC = {1, 2} and pA = pC = 1, pB = 2.
Table 3 shows the individual and aggregated throughputs, the proportional throughput fairness, and the Jain’s
fairness index for different combinations of DCB policies.
We note that, while implementing AM in all the WLANs yields the highest aggregate throughput (i.e., Γ = 428.73 Mbps), the throughput experienced by B is then the lowest (i.e., Γ_B = 4.04 Mbps), leading to a very unfair situation, as indicated by J = 0.67928.
We also find another case where implementing AM does
not maximize the individual throughput. Namely, when A
and C implement AM (i.e., DA = DC = AM), it is preferable
for B to implement PU and force states in which A and
C transmit only in their primary channels. This increases
considerably both the throughput of B (i.e., ΓB = 66.46
Mbps) and the fairness (i.e., P = 6.206 and J = 0.89988).
Looking at the most fair combinations, we notice that A,
C or both must implement PU in order to let B transmit with
a similar amount of opportunities. Specifically, when both A
Fig. 9: Scenario III: (a) topologies and (b) average throughput. Yellow and blue arrows indicate the carrier sense range of WLANs A and C, respectively. The carrier sense range of WLAN B is not displayed. T3-noCE refers to topology T3 when B does not accomplish the capture effect condition whenever A, B and C are active. MC and Sim refer to the values obtained through SFCTMN and 11axHDWLANsSim.
Fig. 10: CTMN corresponding to Scenario III-T3. For the sake of visualization, neither the transition rates nor the transmission channels are included in the figure. The discovery order of the transitions is represented by the pairs in blue.
TABLE 3: Effect of the policy combination on throughput and fairness in the WLANs of Scenario IV.

Policy              Throughput [Mbps]                    Fairness
D_A   D_B   D_C     Γ_A      Γ_B      Γ_C      Γ         P        J
AM    AM    AM     212.35    4.04    113.77   428.73    5.260    0.67928
AM    PU    AM     155.48   66.46    113.77   377.42    6.206    0.89988
PU    AM    PU     113.76  112.27    113.76   339.79    6.162    0.99996
PU    PU    PU     113.77  112.27    113.77   339.81    6.162    0.99996
AM    AM    PU     115.36  110.70    114.28   340.34    6.164    0.99969
AM    PU    PU     115.33  110.73    114.28   340.34    6.164    0.99970
and C implement PU, B experiences the highest throughput
and the system achieves the highest fairness (i.e., J ≈ 1)
accordingly. Nonetheless, the price to pay is a heavy decrease in the throughput of A and C. Therefore, since most of the time there is no global policy that satisfies all the WLANs in the system, different policies should be adopted depending on the parameter to be optimized.
6 EVALUATION OF DCB IN HD WLANS
In this Section we study the effects of the presented DCB
policies on dense WLAN scenarios. We first draw some general conclusions from analyzing the throughput and fairness
obtained through the different policies when increasing the
number of WLANs per area unit. Then, we discuss what is
the optimal policy that a particular WLAN should locally
pick in order to maximize its own throughput.
The results gathered in this section have been obtained
using 11axHDWLANSim, an event-based wireless network
simulator that aims to capture the effects of the newest
technologies included in the IEEE 802.11 amendments. The
values of the considered parameters are shown in the Appendix.
6.1
Network density vs. throughput
Figure 11 shows the general scenario considered for conducting the experiments presented in this Section. For the
sake of speeding up 11axHDWLANsSim simulations, we assume that each WLAN is composed of just 1 AP and 1 STA.
Due to the fact that APs and STAs are located randomly
in the map, the number of STAs should not have a significant impact on the results because only downlink traffic is
assumed.
Essentially, we consider a rectangular area A_map = 100 × 100 m², where M WLANs are placed uniformly at random with the single condition that any pair of APs must be separated by at least d^min_AP-AP = 10 m. The STA of each WLAN is also located uniformly at random, at a distance d_AP-STA ∈ [d^min_AP-STA, d^max_AP-STA] = [1, 5] m from the AP. The channelization C counts with N_sys = 8 basic channels and follows the 11ax proposal (see Figure 1). The channel allocation is also set uniformly at random, i.e., every WLAN w is assigned a primary channel p_w ∼ U[1, 8] and an allocated channel C_w containing N_w ∼ U{1, 2, 4, 8} basic channels.
For each number of WLANs studied (i.e., M =
2, 5, 10, 20, 30, 40, 50), we generate ND = 50 deployments
with different random node locations and channel allocations. Then, for each of the deployments, we assign to all
the WLANs the same DCB policy. Namely, we simulate N_M × N_D × N_P = 7 × 50 × 4 = 1400 scenarios, where N_M is the number of different M values studied and N_P is the number of DCB policies considered in this paper. Besides, all the simulations have a duration of T_obs = 20 seconds.

Fig. 11: Example scenario where M = 11 WLANs are spread uniformly at random in a map of size m_w × m_h.
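A minimal sketch of how such a random deployment could be generated is given below; it is illustrative only (the actual deployments are produced by 11axHDWLANsSim with the parameters listed in the Appendix), and deriving the allocated channel as the aligned block of the chosen width that contains the primary channel is one consistent interpretation of the random channel allocation described above.

import math
import random

def random_deployment(m, map_side=100.0, d_min_ap=10.0, d_sta=(1.0, 5.0), n_sys=8):
    """Place M APs uniformly at random with a minimum AP-AP distance, attach one
    STA per AP at 1-5 m, and draw a random primary channel and channel width."""
    aps = []
    while len(aps) < m:
        x, y = random.uniform(0, map_side), random.uniform(0, map_side)
        if all(math.hypot(x - ax, y - ay) >= d_min_ap for ax, ay in aps):
            aps.append((x, y))
    wlans = []
    for ax, ay in aps:
        r, theta = random.uniform(*d_sta), random.uniform(0, 2 * math.pi)
        sta = (ax + r * math.cos(theta), ay + r * math.sin(theta))
        width = random.choice([1, 2, 4, 8])            # N_w ~ U{1, 2, 4, 8}
        primary = random.randint(1, n_sys)             # p_w ~ U[1, 8]
        start = ((primary - 1) // width) * width + 1   # aligned block containing p_w
        wlans.append({"ap": (ax, ay), "sta": sta, "primary": primary,
                      "allocated": tuple(range(start, start + width))})
    return wlans

print(random_deployment(5)[0])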
Figure 12 shows, by means of boxplots,8 the average throughput per WLAN for each of the presented DCB policies. As expected, when there are few WLANs in the area, the most aggressive policies (i.e., SCB and AM) provide the highest throughputs. Instead, PU, and especially
OP, perform the worst, as they do not extensively exploit the
free bandwidth.
However, when M increases and the scenario gets
denser, the average throughput obtained by all the policies
except SCB tends to be similar. This occurs due to the fact
that WLANs implementing AM and PU will likely perform
single-channel transmissions (as OP does) because the PIFS
condition for multiple channels will most likely not be
accomplished. In the case of SCB, some of the basic channels
inside the WLAN’s allocated channel will most likely be
occupied by other WLANs, and therefore its backoff counter
will get repeatedly paused. Thus, its average throughput in
dense scenarios is considerably low with respect to the other
policies.
In order to assess the use of the spectrum, we first define the average bandwidth usage of a WLAN w as

BW_w = (1 / Tobs) · Σ_{c=1}^{Nsys} t_w^tx(c) · |c|,

where t_w^tx(c) is the time that WLAN w is transmitting in a channel containing at least the basic channel c, and |c| is the bandwidth of a basic channel (i.e., 20 MHz). We can then define the spectrum utilization of a given scenario as the ratio

ρ = (ACS · Σ_{w=1}^{M} BW_w) / (Amap · Nsys · |c|),

where ACS is the circular carrier sense area of each WLAN when transmitting in a single channel. The average of this ratio over the considered scenarios is shown in Figure 13.
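As a minimal sketch of how these two metrics can be computed from simulation traces (our own illustration, assuming per-channel transmission times are already available), consider:

```python
def bandwidth_usage(tx_time_per_channel, t_obs, channel_bw=20e6):
    """Average bandwidth usage BW_w of one WLAN.

    tx_time_per_channel: entry c is the time (s) the WLAN spent transmitting
    in a channel that contains basic channel c."""
    return sum(t * channel_bw for t in tx_time_per_channel) / t_obs

def spectrum_utilization(bw_usages, a_cs, a_map, n_sys, channel_bw=20e6):
    """Spectrum utilization rho of a scenario from the per-WLAN usages."""
    return a_cs * sum(bw_usages) / (a_map * n_sys * channel_bw)
```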
8. Matlab's boxplot definition: https://www.mathworks.com/help/stats/boxplot.html

Fig. 12: Node density effect on the average WLAN throughput. On each box, the central mark indicates the median, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually using the '+' symbol.

Fig. 13: Node density effect on spectrum utilization.

Similarly to the throughput, while OP and PU do not leverage the free spectrum in low-density scenarios, SCB and AM do so by exploiting the most bandwidth. Instead,
when the number of nodes per area increases, SCB suffers from heavy contention periods, which reiterates the need for flexibility to adapt to the channel state. In this regard, we note that AM is clearly the policy exploiting the most bandwidth on average for any number of WLANs.
Nonetheless, neither the average throughput per WLAN nor the spectrum utilization may be a proper metric when assessing the performance of the whole system. Namely, having some WLANs experience high throughput while others starve is a situation that should usually be avoided. In that sense, we focus on the fairness, which is indicated both by the boxes and outliers in Figure 12, and more clearly by the expected Jain's fairness index shown in Figure 14.
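Jain's fairness index [29] of a set of per-WLAN throughputs x1, ..., xn is (Σ xi)^2 / (n · Σ xi^2), ranging from 1/n (maximally unfair) to 1 (all equal). A short sketch of how it can be computed from the simulated throughputs:

```python
def jains_fairness_index(throughputs):
    """Jain's fairness index: 1/n for a fully unfair allocation,
    1 when all WLANs obtain the same throughput."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```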
Fig. 14: Node density effect on Jain’s Fairness Index.
As expected, the policy providing the highest fairness is OP. In fact, no matter the channel allocation, WLANs only pick their primary channel for transmitting when implementing OP; hence the fairness is always maximized, at the cost of probably wasting part of the frequency spectrum, especially when the node density is low. In this regard, PU also provides high fairness while exploiting the spectrum more, which increases the average throughput per WLAN accordingly.
Regarding the aggressive policies, as expected, SCB is clearly the most unfair policy due to its 'all or nothing' strategy. Therefore, it seems preferable to prevent WLANs from applying SCB in dense scenarios because of the number of WLANs that may starve or experience very low throughput. However, despite being aggressive, AM is able to adapt its transmission channel to the state of the medium, thus providing both higher throughput and fairness.
Still, as indicated by the boxes and outliers of Figure 12, AM is not per se the optimal policy. In fact, there are scenarios where PU performs better in terms of both fairness and throughput. Consequently, there is room to improve the presented policies with smarter adaptation or learning approaches (e.g., properly tuning the transition probabilities ~α when implementing stochastic DCB).
There are also some phenomena observed during the simulations that are worth mentioning. Regarding backoff decreasing slowness, in some scenarios a WLAN w may be forced to decrease its backoff counter very slowly because neighboring WLANs operate in a channel that includes the primary channel of w. That is why more fairness is achieved with PU in dense networks, as such neighboring WLANs do not always pick the whole allocated channel. Thus, they let w decrease its backoff more often, and proceed to transmit accordingly.
Finally, concerning the transmission power and channel width, we have observed that transmitting just in the primary channel can also be harmful to other WLANs because of the higher transmission power used per 20 MHz channel. While this may allow the use of higher MCSs and the corresponding data rates, it may also cause packet losses in neighboring WLANs operating with the same primary channel due to heavy interference, especially when OP is being used.

6.2 Local optimal policy
With the following experiment we aim to identify the optimal policy that a particular WLAN should adopt in order to increase its own throughput. In this case,
we consider an area Amap = 50 × 50 m2 with one WLAN
(A) located at the center, and M − 1 = 19 WLANs spread
uniformly at random in the area. Regarding the channel allocation, all the WLANs are again set with primary channel
and allocated channel selected uniformly at random, with
the exception of A, which is allocated the widest channel
(i.e., CA = {1, ..., 8}) in order to provide more flexibility
and capture complex effects. The primary channel of A is
also selected uniformly at random (i.e., pA ∼ U [1, 8]).
While the DCB policies of the M − 1 WLANs are also set uniformly at random (i.e., they implement OP, SCB, AM or PU, each with probability 1/4), A's policy is set in
a deterministic way. Specifically, we generate ND = 400
deployments following the aforementioned conditions for
each of the DCB policies that A can implement. That is, we
simulate ND × NP = 1600 scenarios. The simulation time of
each scenario is Tobs = 25 seconds.
The first noticeable result is that, in dense scenarios, SCB is non-viable for WLANs with wide allocated channels because they are most likely prevented from initiating transmissions. In fact, A is not able to successfully transmit data packets in any of the 400 scenarios simulated for SCB (i.e., E[Γ_A^SCB] = 0).
Regarding the rest of the policies, on average, A's throughput is 3.2 % higher when implementing AM (E[Γ_A^AM] = 41.07 Mbps) with respect to PU (E[Γ_A^PU] = 39.74 Mbps). Besides, for dense scenarios like this one, there is a clear trend to pick just one channel when implementing AM and PU. That is why OP provides an average throughput (E[Γ_A^OP] = 39.04 Mbps) close to the ones achieved by AM and PU.
Nonetheless, as the high standard deviation of the throughput indicates (e.g., σ_AM(Γ_A) = 38.25 Mbps when implementing AM), there are important differences in Γ_A depending on the scenario. Table 4 compares the share of scenarios where AM or PU provides the highest individual throughput for A. Three types of outcomes are categorized according to a defined margin δ_Γ:
(a) PU performs better than AM: E[Γ_A^PU] − E[Γ_A^AM] > δ_Γ
(b) PU performs similarly to AM: |E[Γ_A^PU] − E[Γ_A^AM]| < δ_Γ
(c) PU performs worse than AM: E[Γ_A^AM] − E[Γ_A^PU] > δ_Γ
We use the margin δΓ = 0.5 Mbps for capturing the cases
where AM and PU perform similarly.
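A trivial sketch of this categorization (our own helper, not from the simulator), given A's average throughputs under each policy:

```python
def classify_outcome(gamma_pu, gamma_am, margin=0.5):
    """Classify a scenario by comparing A's average throughput (Mbps)
    under PU and AM against the margin delta_Gamma."""
    if abs(gamma_pu - gamma_am) < margin:
        return "similar"
    return "PU better" if gamma_pu > gamma_am else "AM better"
```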
TABLE 4: Share of scenarios where AM or PU provide the
highest individual throughput for WLAN A.
E[Γ_A^PU] > E[Γ_A^AM]: 97/400 (24.25 %)
E[Γ_A^PU] ≈ E[Γ_A^AM]: 135/400 (33.75 %)
E[Γ_A^AM] > E[Γ_A^PU]: 168/400 (42.00 %)
We see that in most of the cases AM performs better than PU. However, in about 1/4 of the scenarios PU outperforms AM, and this share is even higher when considering smaller values of δ_Γ. Also, there are a few scenarios where the throughput gain of PU with respect to AM is considerable (more than 30 %). This mainly occurs when the neighboring nodes frequently occupy A's primary channel through complex interactions that keep its backoff frozen for long periods of time.
Therefore, as a rule of thumb for dense networks, we
can state that, while AM reaches higher throughputs on
average, stochastic DCB is less risky and performs similarly
well. Nonetheless, even though PU is more fair than AM
on average, it does not guarantee the absence of starving
WLANs either. It follows that WLANs must be provided
with some kind of adaptability to improve both the individual throughput and fairness with acceptable certainty.
7 CONCLUSIONS
In this work we show the effect of DCB in spatially distributed scenarios, where WLANs are not required to be within the carrier sense range of each other. We study different types of DCB policies, including a new approach that selects the transmission channel width probabilistically. We also present the SFCTMN analytical framework for modeling WLAN scenarios through CTMNs, and use it to depict the phenomena that occur in several toy scenarios. In this regard, we prove that i) the feasible states in the CTMNs highly depend on the DCB policy, ii) always selecting the widest available channel found free does not always maximize a WLAN's own throughput (short vs. long-term throughput dilemma), iii) cumulative interference from hidden nodes may cause important throughput losses in spatially distributed scenarios due to packet losses or inadmissibly large contention periods, and iv) often there is no optimal global policy to be applied to every WLAN, but different policies are required, especially in non-fully overlapping scenarios where chain reactions are hard to foresee in advance.
Besides, we analyze via simulations the impact of the node density on the individual throughput and the system's fairness in 11ax WLANs. Results corroborate that, while DCB is normally the best policy to maximize the individual short- and long-term throughput, there are cases, especially in high-density scenarios, where other policies like stochastic PU perform better both in terms of individual throughput and throughput fairness among WLANs.
Therefore, we conclude that the performance of DCB can be significantly improved by implementing adaptive policies capable of harnessing the knowledge gathered from the medium and/or via information distribution. In this regard, our next work will focus on studying machine-learning-based policies to enhance WLAN performance in high-density scenarios. We also aim to capture non-saturation regimes in the SFCTMN framework, as the reduction of overlapping situations may have an important impact on the performance of the policies.
APPENDIX A
11axHDWLANsSim & 11ax PARAMETERS
For the high-density results presented in Section 6, we have simulated multiple 11ax scenarios9 in 11axHDWLANsSim. This simulator is a particular release of Komondor,10 a wireless network simulator built on top of the COST library [30]. COST facilitates the development of discrete event simulations using CompC++, a component-oriented extension to C++. COST provides a collection of components that interact with each other by exchanging messages, and a simulation engine that is responsible for synchronizing the components.
The values of the parameters considered in the simulations are shown in Table 5. Regarding the path loss, we use the simple but accurate partition loss model for 5 GHz indoor environments with corridors proposed in [31]. Specifically, the path loss in dB experienced at a distance d is defined by PL(d) = PL_Free(d) + αd, where PL_Free(d) is the well-known free-space path loss at distance d and α = 0.44 dB/m is the constant attenuation per unit of path length that best fits the measurements collected in an office.
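A minimal sketch of this model (ours, assuming the standard free-space path loss expression in dB; the simulator's exact implementation may differ):

```python
import math

def path_loss_db(d, fc=5e9, alpha=0.44):
    """PL(d) = PL_free(d) + alpha * d, with the free-space term
    20*log10(4*pi*fc*d/c) in dB and alpha in dB/m."""
    c = 3e8  # speed of light (m/s)
    pl_free = 20 * math.log10(4 * math.pi * fc * d / c)
    return pl_free + alpha * d
```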
TABLE 5: Parameters considered in the presented scenarios.

Parameter | Description | Value
CWmin | Min. contention window | 16
m | Backoff stage | 5
CCA | CCA threshold | -82 dBm
Ptx | Transmission power | 15 dBm
Gtx | Transmitting gain | 0 dB
Grx | Reception gain | 0 dB
Ldata | Length of a data packet | 12000 bits
LBACK | Length of a block ACK | 240 bits
LRTS | Length of an RTS packet | 160 bits
LCTS | Length of a CTS packet | 112 bits
nagg | Num. data packets aggregated | 64
CE | Capture effect threshold | 20 dB
N | Background noise level | -95 dBm
Tslot | Slot duration | 9 µs
SIFS | SIFS duration | 16 µs
DIFS | DIFS duration | 34 µs
PIFS | PIFS duration | 25 µs
η | Packet error rate | 0.1
fc | Central frequency | 5 GHz
Tofdm | OFDM symbol duration | 16 µs
Tphy | Legacy PHY header duration | 20 µs
nss | SU spatial streams | 1
Tphy^HE | HE header duration | 32 µs
Lsf | Length of MAC's service field | 16 bits
Ldel | Length of MAC's MPDU delimiter | 32 bits
Lmac | Length of MAC header | 272 bits
Ltail | Length of MAC's tail | 6 bits
ACS | Single-channel carrier sense area | 5384.6 m2
9. It is important to note that the 11ax amendment is still in an early development stage and newer versions may include changes with respect to the parameters considered in this work.

10. All of the source code of Komondor is open, encouraging sharing of algorithms between contributors and providing the ability for people to improve on the work of others under the GNU General Public License v3.0. The release used in this work and all the output files of the performed simulations can be found at https://github.com/wn-upf/Komondor.

The data rate (r) used in the data packet transmissions among all the nodes has always been the highest allowed by the 11ax MCS given the distance between AP and STA. Specifically, r = Nsc Ym Yc nss, where Nsc is the number of subcarriers (dependent on the transmission channel width), Ym is the number of modulation bits, and Yc is the coding rate. The latter two parameters depend on the MCS. Nsc can be 234, 468, 980 or 1960 for 20, 40, 80, and 160 MHz channel widths, respectively. For instance, for the MCS providing the highest data rate, Ym = 10 bits and Yc = 5/6. Accordingly, the data rate for 20 MHz channel transmissions is defined by r20MHz = 52 Ym Yc. With these parameters we can define the durations of the different packet transmissions, and the duration of a successful transmission accordingly:
tRTS = Tphy + ((Lsf + LRTS + Ltail) / r20MHz) · Tofdm,
tCTS = Tphy + ((Lsf + LCTS + Ltail) / r20MHz) · Tofdm,
tdata = Tphy + Tphy^HE + ((Lsf + nagg(Ldel + Lmac + Ldata) + Ltail) / r) · Tofdm,
tBACK = Tphy + ((Lsf + LBACK + Ltail) / r20MHz) · Tofdm,
tsuc = tRTS + SIFS + tCTS + SIFS + tdata + SIFS + tBACK + DIFS + Tslot.
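As a rough sketch, these durations can be evaluated directly from the Table 5 values. The code below is our own illustration (not simulator code) and ignores rounding of the payload to an integer number of OFDM symbols, which the actual implementation may apply:

```python
# Parameters from Table 5 (times in microseconds, lengths in bits)
T_OFDM, T_PHY, T_PHY_HE = 16, 20, 32
SIFS, DIFS, T_SLOT = 16, 34, 9
L_SF, L_DEL, L_MAC, L_TAIL = 16, 32, 272, 6
L_DATA, L_BACK, L_RTS, L_CTS, N_AGG = 12000, 240, 160, 112, 64

def successful_tx_duration(r, r_20mhz):
    """Duration (us) of a successful RTS/CTS-protected A-MPDU exchange,
    with data rates r and r_20MHz given in bits per OFDM symbol."""
    t_rts = T_PHY + (L_SF + L_RTS + L_TAIL) / r_20mhz * T_OFDM
    t_cts = T_PHY + (L_SF + L_CTS + L_TAIL) / r_20mhz * T_OFDM
    t_data = T_PHY + T_PHY_HE + \
        (L_SF + N_AGG * (L_DEL + L_MAC + L_DATA) + L_TAIL) / r * T_OFDM
    t_back = T_PHY + (L_SF + L_BACK + L_TAIL) / r_20mhz * T_OFDM
    return t_rts + SIFS + t_cts + SIFS + t_data + SIFS + t_back + DIFS + T_SLOT

# Example: 80 MHz transmission with the highest MCS (Ym = 10, Yc = 5/6, nss = 1)
r = 980 * 10 * (5 / 6) * 1       # bits per OFDM symbol
r_20mhz = 52 * 10 * (5 / 6)      # legacy 20 MHz rate used for control frames
print(successful_tx_duration(r, r_20mhz))
```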
ACKNOWLEDGMENT
This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), and by a Gift from the Cisco University Research Program (CG#890107, Towards Deterministic Channel Access in High-Density WLANs) Fund, a corporate advised fund of Silicon Valley Community Foundation. The work done by S. Barrachina-Muñoz is supported by a FI grant from the Generalitat de Catalunya.
REFERENCES

[1] IEEE 802.11n. Standard for Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY): Enhancements for High Throughput. IEEE, 2009.
[2] IEEE P802.11ac. Standard for Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications: Enhancements for Very High Throughput for Operation in Bands below 6 GHz. IEEE, 2014.
[3] B. Bellalta. IEEE 802.11ax: High-efficiency WLANs. IEEE Wireless Communications, 23(1):38–46, 2016.
[4] B. Bellalta, A. Checco, A. Zocca, and J. Barcelo. On the interactions between multiple overlapping WLANs using channel bonding. IEEE Transactions on Vehicular Technology, 65(2):796–812, 2016.
[5] M. Park. IEEE 802.11ac: Dynamic bandwidth channel access. In Communications (ICC), 2011 IEEE International Conference on, pages 1–5. IEEE, 2011.
[6] B. Bellalta, A. Faridi, J. Barcelo, A. Checco, and P. Chatzimisios. Channel bonding in short-range WLANs. In European Wireless 2014; 20th European Wireless Conference; Proceedings of, pages 1–7. VDE, 2014.
[7] S. Vasthav, S. Srikanth, and V. Ramaiyan. Performance analysis of an IEEE 802.11ac WLAN with dynamic bandwidth channel access. In Communication (NCC), 2016 Twenty Second National Conference on, pages 1–6. IEEE, 2016.
[8] A. Faridi, B. Bellalta, and A. Checco. Analysis of Dynamic Channel Bonding in Dense Networks of WLANs. IEEE Transactions on Mobile Computing, 2016.
[9] S. Barrachina-Muñoz and F. Wilhelmi. Komondor v1.0 - 11axHDWLANsSim: first stable release implementing basic IEEE 802.11ax features. GitHub release, 2017. [Online]: https://github.com/wn-upf/Komondor/releases/tag/v1.0.
[10] L. Deek, E. Garcia-Villegas, E. Belding, S. Lee, and K. Almeroth. The impact of channel bonding on 802.11n network management. In Proceedings of the Seventh Conference on emerging Networking EXperiments and Technologies, page 11. ACM, 2011.
[11] M. Y. Arslan, K. Pelechrinis, I. Broustis, S. V. Krishnamurthy, S. Addepalli, and K. Papagiannaki. Auto-configuration of 802.11n WLANs. In Proceedings of the 6th International Conference, page 27. ACM, 2010.
[12] L. Deek, E. Garcia-Villegas, E. Belding, S. Lee, and K. Almeroth. Joint rate and channel width adaptation for 802.11 MIMO wireless networks. In Sensor, Mesh and Ad Hoc Communications and Networks (SECON), 2013 10th Annual IEEE Communications Society Conference on, pages 167–175. IEEE, 2013.
[13] M. X. Gong, B. Hart, L. Xia, and R. Want. Channel bounding and MAC protection mechanisms for 802.11ac. In Global Telecommunications Conference (GLOBECOM 2011), 2011 IEEE, pages 1–5. IEEE, 2011.
[14] M. Kim, T. Ropitault, S. Lee, and N. Golmie. A Throughput Study for Channel Bonding in IEEE 802.11ac Networks. IEEE Communications Letters, 2017.
[15] M. Han, Sami K., L. X. Cai, and Y. Cheng. Performance Analysis of Opportunistic Channel Bonding in Multi-Channel WLANs. In Global Communications Conference (GLOBECOM), 2016 IEEE, pages 1–6. IEEE, 2016.
[16] S. Joshi, P. Pawelczak, D. Cabric, and J. Villasenor. When channel bonding is beneficial for opportunistic spectrum access networks. IEEE Transactions on Wireless Communications, 11(11):3942–3956, 2012.
[17] T. Moscibroda, R. Chandra, Y. Wu, S. Sengupta, P. Bahl, and Y. Yuan. Load-aware spectrum distribution in wireless LANs. In Network Protocols, 2008. ICNP 2008. IEEE International Conference on, pages 137–146. IEEE, 2008.
[18] A. Nabil, M. J. Abdel-Rahman, and A. B. MacKenzie. Adaptive Channel Bonding in Wireless LANs Under Demand Uncertainty. In Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Montreal, QC, Canada, October 2017.
[19] C. Kai, Y. Liang, T. Huang, and X. Chen. A Channel Allocation Algorithm to Maximize Aggregate Throughputs in DCB WLANs. arXiv preprint arXiv:1703.03909, 2017.
[20] P. Chatzimisios, A. Boucouvalas, and V. Vitsas. Performance analysis of IEEE 802.11 DCF in presence of transmission errors. In Communications, 2004 IEEE International Conference on, volume 7, pages 3854–3858. IEEE, 2004.
[21] R. Boorstyn, A. Kershenbaum, B. Maglaris, and V. Sahin. Throughput analysis in multihop CSMA packet radio networks. IEEE Transactions on Communications, 35(3):267–274, 1987.
[22] G. Bianchi. Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications, 18(3):535–547, 2000.
[23] B. Bellalta. Throughput Analysis in High Density WLANs. IEEE Communications Letters, 21(3):592–595, 2017.
[24] B. Nardelli and E. Knightly. Closed-form throughput expressions for CSMA networks with collisions and hidden terminals. In INFOCOM, 2012 Proceedings IEEE, pages 2309–2317. IEEE, 2012.
[25] S. C. Liew, C. H. Kai, H. C. Leung, and P. Wong. Back-of-the-envelope computation of throughput distributions in CSMA wireless networks. IEEE Transactions on Mobile Computing, 9(9):1319–1331, 2010.
[26] B. Bellalta, A. Zocca, C. Cano, A. Checco, J. Barcelo, and A. Vinel. Throughput analysis in CSMA/CA networks using continuous time Markov networks: a tutorial. In Wireless Networking for Moving Objects, pages 115–133. Springer, 2014.
[27] H. Salameh, M. Krunz, and D. Manzi. Spectrum bonding and aggregation with guard-band awareness in cognitive radio networks. IEEE Transactions on Mobile Computing, 13(3):569–581, 2014.
[28] F. P. Kelly. Reversibility and stochastic networks. Cambridge University Press, 2011.
[29] R. Jain, D. Chiu, and W. Hawe. A quantitative measure of fairness and discrimination for resource allocation in shared computer system, volume 38. Eastern Research Laboratory, Digital Equipment Corporation, Hudson, MA, 1984.
[30] G. Chen and B. K. Szymanski. Reusing simulation components: COST: a component-oriented discrete event simulator. In Proceedings of the 34th conference on Winter simulation: exploring new frontiers, pages 776–782. Winter Simulation Conference, 2002.
[31] J. Medbo and J-E. Berg. Simple and accurate path loss modeling at 5 GHz in indoor environments with corridors. In Vehicular Technology Conference, 2000. IEEE-VTS Fall VTC 2000. 52nd, volume 1, pages 30–36. IEEE, 2000.
Sergio Barrachina-Muñoz obtained his B.Sc.
degree in Telematics Engineering and his M.Sc.
in Intelligent Interactive Systems in 2015 and
2016, respectively, both from Universitat Pompeu Fabra (UPF), Barcelona. Currently, he is a
PhD student and teacher assistant in the Department of Information and Communication Technologies (DTIC) at Universitat Pompeu Fabra
(UPF). His main research interests are focused
on developing autonomous learning methods
and techniques for improving the performance of
next-generation wireless networks.
Francesc Wilhelmi holds a BSc degree in
Telematics Engineering from the Universitat
Pompeu Fabra (2015) with a focus on Broadband and Wireless Communications. With the
aim of applying new techniques for solving
many well-known problems in communications,
Francesc obtained his MSc degree in Intelligent
and Interactive Systems also from the UPF in
2016. He is now a PhD Student in the Wireless Networking Group (WN) of the Department
of Information and Communication Technologies
(DTIC) at the UPF. The main topics of his PhD Thesis are related
to spatial reuse in high-density wireless networks through power and
sensitivity adjustment by taking advantage of Reinforcement Learning
(RL) techniques.
Boris Bellalta is an Associate Professor in
the Department of Information and Communication Technologies (DTIC) at Universitat Pompeu Fabra (UPF). He obtained his degree in
Telecommunications Engineering from Universitat Politècnica de Catalunya (UPC) in 2002
and the Ph.D. in Information and Communication
Technologies from UPF in 2007. His research
interests are in the area of wireless networks,
with emphasis on the design and performance
evaluation of new architectures and protocols.
The results from his research have been published in more than 100
international journal and conference papers. He is currently involved
in several international and national research projects, including the
coordination of the ENTOMATIC FP7 collaborative project. At UPF he
is giving several courses on networking, queuing theory and wireless
networks. He is co-designer and coordinator of the interuniversity (UPF and UPC) master's degree in Wireless Communications.
Comparison of channels: criteria for
domination by a symmetric channel
Anuran Makur and Yury Polyanskiy∗
arXiv:1609.06877v2 [] 22 Nov 2017
Abstract
This paper studies the basic question of whether a given channel V can be dominated (in the precise sense of being
more noisy) by a q-ary symmetric channel. The concept of “less noisy” relation between channels originated in network
information theory (broadcast channels) and is defined in terms of mutual information or Kullback-Leibler divergence.
We provide an equivalent characterization in terms of χ2 -divergence. Furthermore, we develop a simple criterion for
domination by a q-ary symmetric channel in terms of the minimum entry of the stochastic matrix defining the channel
V . The criterion is strengthened for the special case of additive noise channels over finite Abelian groups. Finally, it
is shown that domination by a symmetric channel implies (via comparison of Dirichlet forms) a logarithmic Sobolev
inequality for the original channel.
Index Terms
Less noisy, degradation, q-ary symmetric channel, additive noise channel, Dirichlet form, logarithmic Sobolev inequalities.
CONTENTS

I Introduction
   I-A Preliminaries
   I-B Channel preorders in information theory
   I-C Symmetric channels and their properties
   I-D Main question and motivation

II Main results
   II-A χ2-divergence characterization of the less noisy preorder
   II-B Less noisy domination by symmetric channels
   II-C Structure of additive noise channels
   II-D Comparison of Dirichlet forms
   II-E Outline

III Less noisy domination and degradation regions
   III-A Less noisy domination and degradation regions for additive noise channels
   III-B Less noisy domination and degradation regions for symmetric channels

IV Equivalent characterizations of less noisy preorder
   IV-A Characterization using χ2-divergence
   IV-B Characterizations via the Löwner partial order and spectral radius

V Conditions for less noisy domination over additive noise channels
   V-A Necessary conditions
   V-B Sufficient conditions

VI Sufficient conditions for degradation over general channels

VII Less noisy domination and logarithmic Sobolev inequalities

VIII Conclusion
∗ A. Makur and Y. Polyanskiy are with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA (e-mail: [email protected]; [email protected]).
This research was supported in part by the National Science Foundation CAREER award under grant agreement CCF-12-53205, and in part by
the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-09-39370.
This work was presented at the 2017 IEEE International Symposium on Information Theory (ISIT) [1].
Appendix A: Basics of majorization theory

Appendix B: Proofs of propositions 4 and 12

Appendix C: Auxiliary results

References
I. INTRODUCTION
For any Markov chain U → X → Y , it is well-known that the data processing inequality, I(U ; Y ) ≤ I(U ; X),
holds. This result can be strengthened to [2]:
I(U ; Y ) ≤ ηI(U ; X)
(1)
where the contraction coefficient η ∈ [0, 1] only depends on the channel PY |X . Frequently, one gets η < 1 and
the resulting inequality is called a strong data processing inequality (SDPI). Such inequalities have been recently
simultaneously rediscovered and applied in several disciplines; see [3, Section 2] for a short survey. In [3, Section 6],
it was noticed that the validity of (1) for all PU,X is equivalent to the statement that an erasure channel with erasure
probability 1 − η is less noisy than the given channel PY |X . In this way, the entire field of SDPIs is equivalent to
determining whether a given channel is dominated by an erasure channel.
This paper initiates the study of a natural extension of the concept of SDPI by replacing the distinguished role
played by erasure channels with q-ary symmetric channels. We give simple criteria for testing this type of domination
and explain how the latter can be used to prove logarithmic Sobolev inequalities. In the next three subsections, we
introduce some basic definitions and notation. We state and motivate our main question in subsection I-D, and present
our main results in section II.
A. Preliminaries
The following notation will be used in our ensuing discussion. Consider any q, r ∈ N , {1, 2, 3, . . . }. We let
Rq×r (respectively Cq×r ) denote the set of all real (respectively complex) q × r matrices. Furthermore, for any matrix
A ∈ Rq×r , we let AT ∈ Rr×q denote the transpose of A, A† ∈ Rr×q denote the Moore-Penrose pseudoinverse of A,
R(A) denote the range (or column space) of A, and ρ(A) denote the spectral radius of A (which is the maximum of the absolute values of all complex eigenvalues of A) when q = r. We let R^{q×q}_{⪰0} ⊆ R^{q×q}_{sym} denote the sets of positive semidefinite and symmetric matrices, respectively. In fact, R^{q×q}_{⪰0} is a closed convex cone (with respect to the Frobenius norm). We also let ⪰_PSD denote the Löwner partial order over R^{q×q}_{sym}: for any two matrices A, B ∈ R^{q×q}_{sym}, we write A ⪰_PSD B (or equivalently, A − B ⪰_PSD 0, where 0 is the zero matrix) if and only if A − B ∈ R^{q×q}_{⪰0}. To work with
probabilities, we let Pq , {p = (p1 , . . . , pq ) ∈ Rq : p1 , . . . , pq ≥ 0 and p1 + · · · + pq = 1} be the probability simplex
of row vectors in Rq , Pq◦ , {p = (p1 , . . . , pq ) ∈ Rq : p1 , . . . , pq > 0 and p1 + · · · + pq = 1} be the relative interior of
Pq , and Rq×r
sto be the convex set of row stochastic matrices (which have rows in Pr ). Finally, for any (row or column)
vector x = (x1 , . . . , xq ) ∈ Rq , we let diag(x) ∈ Rq×q denote the diagonal matrix with entries [diag(x)]i,i = xi for
each i ∈ {1, . . . , q}, and for any set of vectors S ⊆ Rq , we let conv (S) be the convex hull of the vectors in S.
B. Channel preorders in information theory
Since we will study preorders over discrete channels that capture various notions of relative “noisiness” between
channels, we provide an overview of some well-known channel preorders in the literature. Consider an input random
variable X ∈ X and an output random variable Y ∈ Y, where the alphabets are X = [q] , {0, 1, . . . , q − 1} and
Y = [r] for q, r ∈ N without loss of generality. We let Pq be the set of all probability mass functions (pmfs) of X,
where every pmf PX = (PX (0), . . . , PX (q − 1)) ∈ Pq and is perceived as a row vector. Likewise, we let Pr be the set
of all pmfs of Y . A channel is the set of conditional distributions WY |X that associates each x ∈ X with a conditional
q×r
pmf WY |X (·|x) ∈ Pr . So, we represent each channel with a stochastic matrix W ∈ Rsto
that is defined entry-wise as:
∀x ∈ X , ∀y ∈ Y, [W ]x+1,y+1 , WY |X (y|x)
(2)
where the (x + 1)th row of W corresponds to the conditional pmf WY |X (·|x) ∈ Pr , and each column of W has at least
one non-zero entry so that no output alphabet letters are redundant. Moreover, we think of such a channel as a (linear)
map W : Pq → Pr that takes any row probability vector PX ∈ Pq to the row probability vector PY = PX W ∈ Pr .
One of the earliest preorders over channels was the notion of channel inclusion proposed by Shannon in [4]. Given two channels W ∈ R^{q×r}_{sto} and V ∈ R^{s×t}_{sto} for some q, r, s, t ∈ N, he stated that W includes V, denoted W inc V, if there exist a pmf g ∈ Pm for some m ∈ N, and two sets of channels {Ak ∈ R^{r×t}_{sto} : k = 1, . . . , m} and {Bk ∈ R^{s×q}_{sto} : k = 1, . . . , m}, such that:

V = Σ_{k=1}^{m} gk Bk W Ak.    (3)
Channel inclusion is preserved under channel addition and multiplication (which are defined in [5]), and the existence
of a code for V implies the existence of as good a code for W in a probability of error sense [4]. The channel inclusion
preorder includes the input-output degradation preorder, which can be found in [6], as a special case. Indeed, V is an
s×q
r×t
input-output degraded version of W , denoted W iod V , if there exist channels A ∈ Rsto
and B ∈ Rsto
such that
V = BW A. We will study an even more specialized case of Shannon’s channel inclusion known as degradation [7],
[8].
q×s
Definition 1 (Degradation Preorder). A channel V ∈ Rsto
is said to be a degraded version of a channel W ∈ Rq×r
sto
with the same input alphabet, denoted W deg V , if V = W A for some channel A ∈ Rr×s
sto .
We note that when Definition 1 of degradation is applied to general matrices (rather than stochastic matrices), it is
equivalent to Definition C.8 of matrix majorization in [9, Chapter 15]. Many other generalizations of the majorization
preorder over vectors (briefly introduced in Appendix A) that apply to matrices are also presented in [9, Chapter 15].
Körner and Marton defined two other preorders over channels in [10] known as the more capable and less noisy
preorders. While the original definitions of these preorders explicitly reflect their significance in channel coding,
we will define them using equivalent mutual information characterizations proved in [10]. (See [11, Problems 6.166.18] for more on the relationship between channel coding and some of the aforementioned preorders.) We say a
q×s
channel W ∈ Rq×r
sto is more capable than a channel V ∈ Rsto with the same input alphabet, denoted W mc V , if
I(PX , WY |X ) ≥ I(PX , VY |X ) for every input pmf PX ∈ Pq , where I(PX , WY |X ) denotes the mutual information
of the joint pmf defined by PX and WY |X . The next definition presents the less noisy preorder, which will be a key
player in our study.
q×r
Definition 2 (Less Noisy Preorder). Given two channels W ∈ Rsto
and V ∈ Rq×s
sto with the same input alphabet, let
YW and YV denote the output random variables of W and V , respectively. Then, W is less noisy than V , denoted
W ln V , if I(U ; YW ) ≥ I(U ; YV ) for every joint distribution PU,X , where the random variable U ∈ U has some
arbitrary range U, and U → X → (YW , YV ) forms a Markov chain.
An analogous characterization of the less noisy preorder using Kullback-Leibler (KL) divergence or relative entropy
is given in the next proposition.
q×r
Proposition 1 (KL Divergence Characterization of Less Noisy [10]). Given two channels W ∈ Rsto
and V ∈ Rq×s
sto
with the same input alphabet, W ln V if and only if D(PX W ||QX W ) ≥ D(PX V ||QX V ) for every pair of input
pmfs PX , QX ∈ Pq , where D(·||·) denotes the KL divergence.1
We will primarily use this KL divergence characterization of ln in our discourse because of its simplicity. Another
well-known equivalent characterization of ln due to van Dijk is presented below, cf. [12, Theorem 2]. We will derive
some useful corollaries from it later in subsection IV-B.
Proposition 2 (van Dijk Characterization of Less Noisy [12]). Given two channels W ∈ R^{q×r}_{sto} and V ∈ R^{q×s}_{sto} with the same input alphabet, consider the functional F : Pq → R:

∀PX ∈ Pq, F(PX) ≜ I(PX, WY|X) − I(PX, VY|X).
Then, W ln V if and only if F is concave.
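Proposition 1 also suggests a simple numerical sanity check: sampling pairs of input pmfs and comparing the KL divergences at the two outputs can refute W ln V, although it can never certify it. A minimal sketch (ours, assuming NumPy and finite alphabets):

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p||q) in nats, with the convention 0*log(0/x) = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def refutes_less_noisy(W, V, trials=10000, seed=0):
    """Search for (P_X, Q_X) violating D(P_X W || Q_X W) >= D(P_X V || Q_X V).
    True disproves W ln V; False is only numerical evidence in its favor."""
    rng = np.random.default_rng(seed)
    q = W.shape[0]
    for _ in range(trials):
        p = rng.dirichlet(np.ones(q))
        r = rng.dirichlet(np.ones(q))
        if kl(p @ W, r @ W) < kl(p @ V, r @ V) - 1e-12:
            return True
    return False
```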
The more capable and less noisy preorders have both been used to study the capacity regions of broadcast channels.
We refer readers to [13]–[15], and the references therein for further details. We also remark that the more capable and
less noisy preorders tensorize, as shown in [11, Problem 6.18] and [3, Proposition 16], [16, Proposition 5], respectively.
On the other hand, these preorders exhibit rather counter-intuitive behavior in the context of Bayesian networks (or
directed graphical models). Consider a Bayesian network with “source” nodes (with no inbound edges) X and “sink”
nodes (with no outbound edges) Y . If we select a node Z in this network and replace the channel from the parents of
Z to Z with a less noisy channel, then we may reasonably conjecture that the channel from X to Y also becomes less
noisy (motivated by the results in [3]). However, this conjecture is false. To see this, consider the Bayesian network in
1 Throughout this paper, we will adhere to the convention that ∞ ≥ ∞ is true. So, D(P W ||Q W ) ≥ D(P V ||Q V ) is not violated when
X
X
X
X
both KL divergences are infinity.
3
Fig. 1. Illustration of a Bayesian network where X1 , X2 , Z, Y ∈ {0, 1} are binary random variables, PZ|X2 is a BSC(δ) with δ ∈ (0, 1), and
PY |X1 ,Z is defined by a deterministic NOR gate.
Figure 1 (inspired by the results in [17]), where the source nodes are X1 ∼ Ber 21 and X2 = 1 (almost surely), the
node Z is the output of a binary symmetric channel (BSC) with crossover probability δ ∈ (0, 1), denoted BSC(δ), and
the sink node Y is the output of a NOR gate. Let I(δ) = I(X1 , X2 ; Y ) be the end-to-end mutual information. Then,
although BSC(0) ln BSC(δ) for δ ∈ (0, 1), it is easy to verify that I(δ) > I(0) = 0. So, when we replace the BSC(δ)
with a less noisy BSC(0), the end-to-end channel does not become less noisy (or more capable).
The next proposition illustrates certain well-known relationships between the various preorders discussed in this
subsection.
q×r
Proposition 3 (Relations between Channel Preorders). Given two channels W ∈ Rsto
and V ∈ Rq×s
sto with the same
input alphabet, we have:
1) W deg V ⇒ W iod V ⇒ W inc V ,
2) W deg V ⇒ W ln V ⇒ W mc V .
These observations follow in a straightforward manner from the definitions of the various preorders. Perhaps the
only nontrivial implication is W deg V ⇒ W ln V , which can be proven using Proposition 1 and the data processing
inequality.
C. Symmetric channels and their properties
We next formally define q-ary symmetric channels and convey some of their properties. To this end, we first introduce
some properties of Abelian groups and define additive noise channels. Let us fix some q ∈ N with q ≥ 2 and consider an
Abelian group (X , ⊕) of order q equipped with a binary “addition” operation denoted by ⊕. Without loss of generality,
we let X = [q], and let 0 denote the identity element. This endows an ordering to the elements of X . Each element
x ∈ X permutes the entries of the row vector (0, . . . , q − 1) to (σx (0), . . . , σx (q − 1)) by (left) addition in the Cayley
table of the group, where σx : [q] → [q] denotes a permutation of [q], and σx (y) = x ⊕y for every y ∈ X . So,
corresponding to each x ∈ X , we can define a permutation matrix Px , eσx (0) · · · eσx (q−1) ∈ Rq×q such that:
(4)
[v0 · · · vq−1 ] Px = vσx (0) · · · vσx (q−1)
for any v0 , . . . , vq−1 ∈ R, where for each i ∈ [q], ei ∈ Rq is the ith standard basis column vector with unity in the
(i + 1)th position and zero elsewhere. The permutation matrices {Px ∈ Rq×q : x ∈ X } (with the matrix multiplication
operation) form a group that is isomorphic to (X , ⊕) (see Cayley’s theorem, and permutation and regular representations
of groups in [18, Sections 6.11, 7.1, 10.6]). In particular, these matrices commute as (X , ⊕) is Abelian, and are jointly
unitarily diagonalizable by a Fourier matrix of characters (using [19, Theorem 2.5.5]). We now recall that given a row
vector x = (x0 , . . . , xq−1 ) ∈ Rq , we may define a corresponding X -circulant matrix, circX (x) ∈ Rq×q , that is defined
entry-wise as [20, Chapter 3E, Section 4]:
∀a, b ∈ [q], [circX (x)]a+1,b+1 , x−a⊕b .
(5)
where −a ∈ X denotes the inverse of a ∈ X . Moreover, we can decompose this X -circulant matrix as:
circX (x) =
q−1
X
xi PiT
(6)
i=0
since
write:
Pq−1
i=0
Pq−1
xi PiT a+1,b+1 =
i=0 xi eσi (a) b+1 = x−a⊕b for every a, b ∈ [q]. Using similar reasoning, we can
circX (x) = [P0 y · · · Pq−1 y] = P0 xT · · · Pq−1 xT
T
(7)
T
where y = x0 x−1 · · · x−(q−1) ∈ Rq , and P0 = Iq ∈ Rq×q is the q × q identity matrix. Using (6), we see that X circulant matrices are normal, form a commutative algebra, and are jointly unitarily diagonalizable by a Fourier matrix.
4
Furthermore, given two row vectors x, y ∈ Rq , we can define x circX (y) = y circX (x) as the X -circular convolution of
x and y, where the commutativity of X -circular convolution follows from the commutativity of X -circulant matrices.
A salient specialization of this discussion is the case where ⊕ is addition modulo q, and (X = [q], ⊕) is the
cyclic Abelian group Z/qZ. In this scenario, X -circulant matrices correspond to the standard circulant matrices which
are jointly unitarily diagonalized by the discrete Fourier transform (DFT) matrix. Furthermore, for each x ∈ [q], the
permutation matrix PxT = Pqx , where Pq ∈ Rq×q is the generator cyclic permutation matrix as presented in [19, Section
0.9.6]:
∀a, b ∈ [q], [Pq ]a+1,b+1 , ∆1,(b−a (mod q))
(8)
where ∆i,j is the Kronecker delta function, which is unity if i = j and zero otherwise. The matrix Pq cyclically shifts
any input row vector to the right once, i.e. (x1 , x2 , . . . , xq ) Pq = (xq , x1 , . . . , xq−1 ).
Let us now consider a channel with common input and output alphabet X = Y = [q], where (X , ⊕) is an Abelian
group. Such a channel operating on an Abelian group is called an additive noise channel when it is defined as:
Y =X ⊕Z
(9)
where X ∈ X is the input random variable, Y ∈ X is the output random variable, and Z ∈ X is the additive noise
random variable that is independent of X with pmf PZ = (PZ (0), . . . , PZ (q − 1)) ∈ Pq . The channel transition
probability matrix corresponding to (9) is the X -circulant stochastic matrix circX (PZ ) ∈ Rq×q
sto , which is also doubly
T
stochastic (i.e. both circX (PZ ) , circX (PZ ) ∈ Rq×q
sto ). Indeed, for an additive noise channel, it is well-known that the
pmf of Y , PY ∈ Pq , can be obtained from the pmf of X, PX ∈ Pq , by X -circular convolution: PY = PX circX (PZ ).
We remark that in the context of various channel symmetries in the literature (see [21, Section VI.B] for a discussion),
additive noise channels correspond to “group-noise” channels, and are input symmetric, output symmetric, Dobrushin
symmetric, and Gallager symmetric.
The q-ary symmetric channel is an additive noise channel on the Abelian group (X , ⊕) with noise pmf PZ =
wδ , (1 − δ, δ/(q − 1), . . . , δ/(q − 1)) ∈ Pq , where δ ∈ [0, 1]. Its channel transition probability matrix is denoted
Wδ ∈ Rq×q
sto :
h
q−1 T iT
Wδ , circX (wδ ) = wδ T PqT wδ T · · · PqT
wδ
(10)
which has 1 − δ in the principal diagonal entries and δ/(q − 1) in all other entries regardless of the choice of group
(X , ⊕). We may interpret δ as the total crossover probability of the symmetric channel. Indeed, when q = 2, Wδ
represents a BSC with crossover probability
δ ∈ [0, 1]. Although Wδ is only stochastic when δ ∈ [0, 1], we will refer to
the parametrized convex set of matrices Wδ ∈ Rq×q
sym : δ ∈ R with parameter δ as the “symmetric channel matrices,”
where each Wδ has the form (10) such that every row and column sums to unity. We conclude this subsection with a
list of properties of symmetric channel matrices.
Proposition 4 (Properties of Symmetric Channel Matrices). The symmetric channel matrices, {Wδ ∈ R^{q×q}_{sym} : δ ∈ R}, satisfy the following properties:
1) For every δ ∈ R, Wδ is a symmetric circulant matrix.
2) The DFT matrix Fq ∈ C^{q×q}, which is defined entry-wise as [Fq]_{j,k} = q^{−1/2} ω^{(j−1)(k−1)} for 1 ≤ j, k ≤ q, where ω = exp(2πi/q) and i = √−1, jointly diagonalizes Wδ for every δ ∈ R. Moreover, the corresponding eigenvalues or Fourier coefficients, {λj(Wδ) = [Fq^H Wδ Fq]_{j,j} : j = 1, . . . , q}, are real:

λ1(Wδ) = 1, and λj(Wδ) = 1 − δ − δ/(q−1) for j = 2, . . . , q,

where Fq^H denotes the Hermitian transpose of Fq.
3) For all δ ∈ [0, 1], Wδ is a doubly stochastic matrix that has the uniform pmf u ≜ (1/q, . . . , 1/q) as its stationary distribution: uWδ = u.
4) For every δ ∈ R\{(q−1)/q}, Wδ^{−1} = Wτ with τ = −δ/(1 − δ − δ/(q−1)), and for δ = (q−1)/q, Wδ = (1/q) 11^T is unit rank and singular, where 1 = [1 · · · 1]^T.
5) The set {Wδ ∈ R^{q×q}_{sym} : δ ∈ R\{(q−1)/q}} with the operation of matrix multiplication is an Abelian group.

Proof. See Appendix B.
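Parts 2 and 4 of Proposition 4 are easy to verify numerically. The following sketch (our own illustration, not from the paper) builds Wδ, diagonalizes it with a DFT matrix, and checks the inverse formula for q = 4 and δ = 0.3:

```python
import numpy as np

def symmetric_channel(q, delta):
    """q-ary symmetric channel matrix W_delta: 1-delta on the diagonal,
    delta/(q-1) elsewhere."""
    return (1 - delta - delta / (q - 1)) * np.eye(q) \
        + delta / (q - 1) * np.ones((q, q))

q, delta = 4, 0.3
W = symmetric_channel(q, delta)
F = np.fft.fft(np.eye(q)) / np.sqrt(q)       # unitary DFT matrix
eig = np.diag(F.conj().T @ W @ F).real        # Fourier coefficients
print(np.round(eig, 6))                       # 1 and 1 - delta - delta/(q-1)

tau = -delta / (1 - delta - delta / (q - 1))
print(np.allclose(np.linalg.inv(W), symmetric_channel(q, tau)))  # True
```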
D. Main question and motivation
As we mentioned at the outset, our work is partly motivated by [3, Section 6], where the authors demonstrate an
intriguing relation between less noisy domination by an erasure channel and the contraction coefficient of the SDPI (1).
For a common input alphabet X = [q], consider a channel V ∈ R^{q×s}_{sto} and a q-ary erasure channel Eε ∈ R^{q×(q+1)}_{sto} with erasure probability ε ∈ [0, 1]. Recall that given an input x ∈ X, a q-ary erasure channel erases x and outputs e (erasure symbol) with probability ε, and outputs x itself with probability 1 − ε; the output alphabet of the erasure channel is {e} ∪ X. It is proved in [3, Proposition 15] that Eε ln V if and only if ηKL(V) ≤ 1 − ε, where ηKL(V) ∈ [0, 1] is the contraction coefficient for KL divergence:

ηKL(V) ≜ sup_{PX, QX ∈ Pq : 0 < D(PX||QX) < +∞}  D(PX V||QX V) / D(PX||QX)    (11)
which equals the best possible constant η in the SDPI (1) when V = PY |X (see [3, Theorem 4] and the references
therein). This result illustrates that the q-ary erasure channel E with the largest erasure probability ∈ [0, 1] (or the
smallest channel capacity) that is less noisy than V has = 1 − ηKL (V ).2 Furthermore, there are several simple upper
bounds on ηKL that provide sufficient conditions for such less noisy domination. For example, if the `1 -distances between
the rows of V are bounded by 2(1 − α) for some α ∈ [0, 1], then ηKL ≤ 1 − α, cf. [22]. Another criterion follows from
Doeblin minorization [23, Remark III.2]: if for some pmf p ∈ Ps and some α ∈ (0, 1), V ≥ α 1p entry-wise, then
Eα deg V and ηKL (V ) ≤ 1 − α.
To extend these
ideas,
channel Wδ with the largest
we consider the following question: What is the q-ary symmetric
3
(or
the
smallest
channel
capacity)
such
that
W
V
?
Much
like
the bounds on ηKL in the
value of δ ∈ 0, q−1
δ
ln
q
erasure channel context, the goal of this paper is to address this question by establishing simple criteria for testing ln
domination by a q-ary symmetric channel. We next provide several other reasons why determining whether a q-ary
symmetric channel dominates a given channel V is interesting.
Firstly, if W ln V , then W ⊗n ln V ⊗n (where W ⊗n is the n-fold tensor product of W ) since ln tensorizes, and
n
n
I(U ; YW
) ≥ I(U ; YVn ) for every Markov chain U → X n → (YW
, YVn ) (see Definition 2). Thus, many impossibility
n
results (in statistical decision theory for example) that are proven by exhibiting bounds on quantities such as I(U ; YW
)
n
transparently carry over to statistical experiments with observations on the basis of YV . Since it is common to study
the q-ary symmetric observation model (especially with q = 2), we can leverage its sample complexity lower bounds
for other V .
Secondly, we present a self-contained information theoretic motivation. W ln V if and only if CS = 0, where
CS is the secrecy capacity of the Wyner wiretap channel with V as the main (legal receiver) channel and W as the
eavesdropper channel [24, Corollary 3], [11, Corollary 17.11]. Therefore, finding the maximally noisy q-ary symmetric
channel that dominates V establishes the minimal noise required on the eavesdropper link so that secret communication
is feasible.
Thirdly, ln domination turns out to entail a comparison of Dirichlet forms (see subsection II-D), and consequently,
allows us to prove Poincaré and logarithmic Sobolev inequalities for V from well-known results on q-ary symmetric
channels. These inequalities are cornerstones of the modern approach to Markov chains and concentration of measure
[25], [26].
II. M AIN RESULTS
In this section, we first delineate some guiding sub-questions of our study, indicate the main results that address
them, and then present these results in the ensuing subsections. We will delve into the following four leading questions:
1) Can we test the less noisy preorder ln without using KL divergence?
Yes, we can use χ2 -divergence as shown in Theorem 1.
2) Given a channel V ∈ Rq×q
sto , is there a simple sufficient condition for less noisy domination by a q-ary symmetric
channel Wδ ln V ?
Yes, a condition using degradation (which implies less noisy domination) is presented in Theorem 2.
3) Can we say anything stronger about less noisy domination by a q-ary symmetric channel when V ∈ Rq×q
sto is an
additive noise channel?
Yes, Theorem 3 outlines the structure of additive noise channels in this context (and Figure 2 depicts it).
2A
q-ary erasure channel E with erasure probability ∈ [0, 1] has channel
capacity
C() = log(q)(1 − ), which is linear and decreasing.
has channel capacity C(δ) = log(q) − H(wδ ), which is convex
q-ary symmetric channel Wδ with total crossover probability δ ∈ 0, q−1
q
and decreasing. Here, H(wδ ) denotes the Shannon entropy of the pmf wδ .
3A
6
4) Why are we interested in less noisy domination by q-ary symmetric channels?
Because this permits us to compare Dirichlet forms as portrayed in Theorem 4.
We next elaborate on these aforementioned theorems.
A. χ2 -divergence characterization of the less noisy preorder
Our most general result illustrates that although less noisy domination is a preorder defined using KL divergence,
one can equivalently define it using χ2 -divergence. Since we will prove this result for general measurable spaces, we
introduce some notation pertinent only to this result. Let (X , F), (Y1 , H1 ), and (Y2 , H2 ) be three measurable spaces,
and let W : H1 × X → [0, 1] and V : H2 × X → [0, 1] be two Markov kernels (or channels) acting on the same
source space (X , F). Given any probability measure PX on (X , F), we denote by PX W the probability measure on
(Y1 , H1 ) induced by the push-forward of PX .4 Recall that for any two probability measures PX and QX on (X , F),
their KL divergence is given by:
Z
dPX
dPX , if PX QX
log
D(PX ||QX ) ,
(12)
dQX
X
+∞,
otherwise
and their χ2 -divergence is given by:
Z
2
dPX
dQX − 1, if PX QX
χ2 (PX ||QX ) ,
X dQX
+∞,
otherwise
(13)
dPX
where PX QX denotes that PX is absolutely continuous with respect to QX , dQ
denotes the Radon-Nikodym
X
derivative of PX with respect to QX , and log(·) is the natural logarithm with base e (throughout this paper). Furthermore,
the characterization of ln in Proposition 1 extends naturally to general Markov kernels; indeed, W ln V if and only if
D(PX W ||QX W ) ≥ D(PX V ||QX V ) for every pair of probability measures PX and QX on (X , F). The next theorem
presents the χ2 -divergence characterization of ln .
Theorem 1 (χ2 -Divergence Characterization of ln ). For any Markov kernels W : H1 ×X → [0, 1] and V : H2 ×X →
[0, 1] acting on the same source space, W ln V if and only if:
χ2 (PX W ||QX W ) ≥ χ2 (PX V ||QX V )
for every pair of probability measures PX and QX on (X , F).
Theorem 1 is proved in subsection IV-A.
B. Less noisy domination by symmetric channels
Our remaining results are all concerned with less noisy (and degraded) domination by q-ary symmetric channels.
q×q
Suppose we are given a q-ary symmetric channel Wδ ∈ Rsto
with δ ∈ [0, 1], and another channel V ∈ Rq×q
sto with
common input and output alphabets. Then, the next result provides a sufficient condition for when Wδ deg V .
Theorem 2 (Sufficient Condition for Degradation by Symmetric Channels). Given a channel V ∈ Rq×q
sto with q ≥ 2
and minimum probability ν = min {[V ]i,j : 1 ≤ i, j ≤ q}, we have:
ν
0≤δ≤
⇒ Wδ deg V.
ν
1 − (q − 1)ν + q−1
Theorem 2 is proved in section VI. We note that the sufficient condition in Theorem 2 is tight as there exist channels
ν
V that violate Wδ deg V when δ > ν/(1 − (q − 1)ν + q−1
). Furthermore, Theorem 2 also provides a sufficient
condition for Wδ ln V due to Proposition 3.
4 Here, we can think of X and Y as random variables with codomains X and Y, respectively. The Markov kernel W behaves like the conditional
distribution of Y given X (under regularity conditions). Moreover, when the distribution of X is PX , the corresponding distribution of Y is
PY = PX W .
7
C. Structure of additive noise channels
Our next major result is concerned with understanding when q-ary symmetric channels operating on an Abelian group
(X , ⊕) dominate other additive noise channels on (X , ⊕), which are defined in (9), in the less noisy and degraded
senses. Given a symmetric channel Wδ ∈ Rq×q
sto with δ ∈ [0, 1], we define the additive less noisy domination region of
Wδ as:
Ladd
(14)
Wδ , {v ∈ Pq : Wδ = circX (wδ ) ln circX (v)}
which is the set of all noise pmfs whose corresponding channel transition probability matrices are dominated by Wδ
in the less noisy sense. Likewise, we define the additive degradation region of Wδ as:
add
DW
, {v ∈ Pq : Wδ = circX (wδ ) deg circX (v)}
δ
(15)
which is the set of all noise pmfs whose corresponding channel transition probability matrices are degraded versions
add
of Wδ . The next theorem exactly characterizes DW
, and “bounds” Ladd
Wδ in a set theoretic sense.
δ
Theorem 3 (Additive Less Noisy Domination
Degradation Regions for Symmetric Channels). Given a symmetric
and
q−1
and q ≥ 2, we have:
channel Wδ = circX (wδ ) ∈ Rq×q
sto with δ ∈ 0, q
add
DW
= conv wδ Pqk : k ∈ [q]
δ
⊆ conv wδ Pqk : k ∈ [q] ∪ wγ Pqk : k ∈ [q]
⊆ Ladd
Wδ ⊆ {v ∈ Pq : kv − uk`2 ≤ kwδ − uk`2 }
where the first set inclusion is strict for δ ∈ 0, q−1
and q ≥ 3, Pq denotes the generator cyclic permutation matrix
q
as defined in (8), u denotes the uniform pmf, k·k`2 is the Euclidean `2 -norm, and:
γ=
1−δ
.
δ
1 − δ + (q−1)
2
q×q
Furthermore, Ladd
: x ∈ X } defined
Wδ is a closed and convex set that is invariant under the permutations {Px ∈ R
add
add
in (4) corresponding to the underlying Abelian group (X , ⊕) (i.e. v ∈ LWδ ⇒ vPx ∈ LWδ for every x ∈ X ).
Theorem 3 is a compilation of several results. As explained at the very end of subsection V-B, Proposition 6 (in
subsection III-A), Corollary 1 (in subsection III-B), part 1 of Proposition 9 (in subsection V-A), and Proposition 11
(in subsection V-B) make up Theorem 3. We remark that according to numerical evidence, the second and third set
inclusions in Theorem 3 appear to be strict, and Ladd
Wδ seems to be a strictly convex set. The content of Theorem 3 and
these observations are illustrated in Figure 2, which portrays the probability simplex of noise pmfs for q = 3 and the
pertinent regions which capture less noisy domination and degradation by a q-ary symmetric channel.
D. Comparison of Dirichlet forms
As mentioned in subsection I-D, one of the reasons we study q-ary symmetric channels and prove Theorems 2 and
3 is because less noisy domination implies useful bounds between Dirichlet forms. Recall that the q-ary symmetric
channel Wδ ∈ Rq×q
sto with δ ∈ [0, 1] has uniform stationary distribution u ∈ Pq (see part 3 of Proposition 4). For any
channel V ∈ Rq×q
sto that is doubly stochastic and has uniform stationary distribution, we may define a corresponding
Dirichlet form:
1
(16)
∀f ∈ Rq , EV (f, f ) = f T (Iq − V ) f
q
T
where f = [f1 · · · fq ] ∈ Rq are column vectors, and Iq ∈ Rq×q denotes the q × q identity matrix (as shown in [25]
or [26]). Our final theorem portrays that Wδ ln V implies that the Dirichlet form corresponding to V dominates the
Dirichlet form corresponding to Wδ pointwise. The Dirichlet form corresponding to Wδ is in fact a scaled version of
the so called standard Dirichlet form:
!2
q
q
X
X
1
1
fk2 −
fk
(17)
∀f ∈ Rq , Estd (f, f ) , VARu (f ) =
q
q
k=1
k=1
which is the Dirichlet form corresponding to the q-ary symmetric channel W(q−1)/q = 1u with all uniform conditional
qδ
pmfs. Indeed, using Iq − Wδ = q−1
(Iq − 1u), we have:
∀f ∈ Rq , EWδ (f, f ) =
8
qδ
Estd (f, f ) .
q−1
(18)
Fig. 2. Illustration of the additive less noisy domination region and additive degradation region for a q-ary symmetric channel when q = 3 and
δ ∈ (0, 2/3): The gray triangle denotes the probability simplex of noise pmfs P3 . The dotted line denotes the parametrized family of noise pmfs of
3-ary symmetric channels {wδ ∈ P3 : δ ∈ [0, 1]}; its noteworthy points are w0 (corner of simplex, W0 is less noisy than every channel), wδ for
some fixed δ ∈ (0, 2/3) (noise pmf of 3-ary symmetric channel Wδ under consideration), w2/3 = u (uniform pmf, W2/3 is more noisy than every
channel), wτ with τ = 1−(δ/2) (Wτ is the extremal symmetric channel that is degraded by Wδ ), wγ with γ = (1−δ)/(1−δ +(δ/4)) (Wγ is a 3ary symmetric
by Wδ but Wδ ln Wγ ), and w1 (edge of simplex). The magenta triangle denotes the additive degradation
channel that is not degraded
regionconv wδ , wδ P3 , wδ P32 of Wδ . The green convex region denotes the additive less noisydomination region of Wδ , and the yellow region
conv wδ , wδ P3 , wδ P32 , wγ , wγ P3 , wγ P32
is its lower bound while the circular cyan region v ∈ P3 : kv − uk`2 ≤ kwδ − uk`2 (which is
a hypersphere for general q ≥ 3) is its upper bound. Note that we do not need to specify the underlying group because there is only one group of
order 3.
The standard Dirichlet form is the usual choice for Dirichlet form comparison because its logarithmic Sobolev constant
has been precisely computed in [25, Appendix, Theorem A.1]. So, we present Theorem 4 using Estd rather than EWδ .
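As a small numerical illustration of the objects involved (our own sketch, not from the paper), the Dirichlet form EV(f, f) = (1/q) f^T (Iq − V) f and the standard form Estd(f, f) = VARu(f) can be computed directly; by (18), EWδ equals qδ/(q − 1) times Estd, which is exactly the constant appearing in Theorem 4:

```python
import numpy as np

def dirichlet_form(V, f):
    """E_V(f, f) = (1/q) f^T (I_q - V) f for a doubly stochastic V."""
    q = V.shape[0]
    return f @ (np.eye(q) - V) @ f / q

def standard_dirichlet_form(f):
    """E_std(f, f): variance of f under the uniform distribution."""
    return np.var(f)

q, delta = 4, 0.3
W = (1 - delta - delta / (q - 1)) * np.eye(q) + delta / (q - 1) * np.ones((q, q))
f = np.array([1.0, -2.0, 0.5, 3.0])
lhs = dirichlet_form(W, f)
rhs = q * delta / (q - 1) * standard_dirichlet_form(f)
print(np.isclose(lhs, rhs))  # True, matching (18)
```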
q×q
Theorem 4 (Domination of Dirichlet Forms). Given the doubly stochastic channels Wδ ∈ Rsto
with δ ∈ 0, q−1
and
q
q×q
V ∈ Rsto , if Wδ ln V , then:
qδ
∀f ∈ Rq , EV (f, f ) ≥
Estd (f, f ) .
q−1
An extension of Theorem 4 is proved in section VII. The domination of Dirichlet forms shown in Theorem 4
has several useful consequences. A major consequence is that we can immediately establish Poincaré (spectral gap)
inequalities and logarithmic Sobolev inequalities (LSIs) for the channel V using the corresponding inequalities for
q-ary symmetric channels. For example, the LSI for Wδ ∈ Rq×q_sto with q > 2 is:

D(f²u || u) ≤ ((q − 1) log(q − 1) / ((q − 2)δ)) EWδ(f, f)    (19)

for all f ∈ Rq such that Σ_{k=1}^{q} f_k² = q, where we use (54) and the logarithmic Sobolev constant computed in
part 1 of Proposition 12. As shown in Appendix B, (19) is easily established using the known logarithmic Sobolev
constant corresponding to the standard Dirichlet form. Using the LSI for V that follows from (19) and Theorem 4,
we immediately obtain guarantees on the convergence rate and hypercontractivity properties of the associated Markov
semigroup {exp(−t(Iq − V )) : t ≥ 0}. We refer readers to [25] and [26] for comprehensive accounts of such topics.
E. Outline
We briefly outline the content of the ensuing sections. In section III, we study the structure of less noisy domination
and degradation regions of channels. In section IV, we prove Theorem 1 and present some other equivalent characterizations of ln . We then derive several necessary and sufficient conditions for less noisy domination among additive
noise channels in section V, which together with the results of section III, culminates in a proof of Theorem 3. Section
VI provides a proof of Theorem 2, and section VII introduces LSIs and proves an extension of Theorem 4. Finally,
we conclude our discussion in section VIII.
III. LESS NOISY DOMINATION AND DEGRADATION REGIONS
In this section, we focus on understanding the “geometric” aspects of less noisy domination and degradation by
channels. We begin by deriving some simple characteristics of the sets of channels that are dominated by some fixed
channel in the less noisy and degraded senses. We then specialize our results for additive noise channels, and this
culminates in a complete characterization of Dadd_Wδ and derivations of certain properties of Ladd_Wδ, presented in Theorem 3.
Let W ∈ Rq×r_sto be a fixed channel with q, r ∈ N, and define its less noisy domination region:

LW ≜ {V ∈ Rq×r_sto : W ln V}    (20)

as the set of all channels on the same input and output alphabets that are dominated by W in the less noisy sense. Moreover, we define the degradation region of W:

DW ≜ {V ∈ Rq×r_sto : W deg V}    (21)

as the set of all channels on the same input and output alphabets that are degraded versions of W. Then, LW and DW satisfy the properties delineated below.
Proposition 5 (Less Noisy Domination and Degradation Regions). Given the channel W ∈ Rq×r_sto, its less noisy domination region LW and its degradation region DW are non-empty, closed, convex, and output alphabet permutation symmetric (i.e. V ∈ LW ⇒ V P ∈ LW and V ∈ DW ⇒ V P ∈ DW for every permutation matrix P ∈ Rr×r).
Proof.
Non-Emptiness of LW and DW: W ln W ⇒ W ∈ LW, and W deg W ⇒ W ∈ DW. So, LW and DW are non-empty.

Closure of LW: Fix any two pmfs PX, QX ∈ Pq, and consider a sequence of channels Vk ∈ LW such that Vk → V ∈ Rq×r_sto (with respect to the Frobenius norm). Then, we also have PX Vk → PX V and QX Vk → QX V (with respect to the ℓ2-norm). Hence, we get:

D(PX V ||QX V) ≤ lim inf_{k→∞} D(PX Vk ||QX Vk) ≤ D(PX W ||QX W)

where the first inequality follows from the lower semicontinuity of KL divergence [27, Theorem 1], [28, Theorem 3.6, Section 3.5], and the second inequality holds because Vk ∈ LW. This implies that for any two pmfs PX, QX ∈ Pq, the set S(PX, QX) = {V ∈ Rq×r_sto : D(PX W ||QX W) ≥ D(PX V ||QX V)} is actually closed. Using Proposition 1, we have that:

LW = ∩_{PX,QX ∈ Pq} S(PX, QX).

So, LW is closed since it is an intersection of closed sets [29].

Closure of DW: Consider a sequence of channels Vk ∈ DW such that Vk → V ∈ Rq×r_sto. Since each Vk = W Ak for some channel Ak ∈ Rr×r_sto belonging to the compact set Rr×r_sto, there exists a subsequence Akm that converges by (sequential) compactness [29]: Akm → A ∈ Rr×r_sto. Hence, V ∈ DW since Vkm = W Akm → W A = V, and DW is a closed set.

Convexity of LW: Suppose V1, V2 ∈ LW, and let λ ∈ [0, 1] and λ̄ = 1 − λ. Then, for every PX, QX ∈ Pq, we have:

D(PX W ||QX W) ≥ D(PX(λV1 + λ̄V2) ||QX(λV1 + λ̄V2))

by the convexity of KL divergence. Hence, LW is convex.

Convexity of DW: If V1, V2 ∈ DW so that V1 = W A1 and V2 = W A2 for some A1, A2 ∈ Rr×r_sto, then λV1 + λ̄V2 = W(λA1 + λ̄A2) ∈ DW for all λ ∈ [0, 1], and DW is convex.

Symmetry of LW: This is obvious from Proposition 1 because KL divergence is invariant to permutations of its input pmfs.

Symmetry of DW: Given V ∈ DW so that V = W A for some A ∈ Rr×r_sto, we have that V P = W AP ∈ DW for every permutation matrix P ∈ Rr×r. This completes the proof.
While the channels in LW and DW all have the same output alphabet as W , as defined in (20) and (21), we may
extend the output alphabet of W by adding zero probability letters. So, separate less noisy domination and degradation
regions can be defined for each output alphabet size that is at least as large as the original output alphabet size of W .
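Degradation is also easy to test computationally: V ∈ DW holds exactly when the linear system V = W A has a row-stochastic solution A, which is a linear feasibility problem. The sketch below is our own illustration (the function name is_degraded is ours) and assumes scipy is available; it simply transcribes this feasibility formulation of Definition 1.

import numpy as np
from scipy.optimize import linprog

def is_degraded(W, V):
    """Check whether V is a degraded version of W, i.e. V = W A for some
    row-stochastic A, by solving a linear feasibility problem."""
    q, r = W.shape
    _, s = V.shape
    n = r * s                               # unknowns: entries of A (r x s), row-major
    A_eq = np.zeros((q * s + r, n))
    b_eq = np.zeros(q * s + r)
    for i in range(q):                      # equality constraints W A = V
        for j in range(s):
            row = i * s + j
            A_eq[row, j::s] = W[i, :]       # sum_k W[i, k] * A[k, j]
            b_eq[row] = V[i, j]
    for k in range(r):                      # each row of A sums to one
        A_eq[q * s + k, k * s:(k + 1) * s] = 1.0
        b_eq[q * s + k] = 1.0
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * n, method="highs")
    return res.status == 0                  # feasible => V is a degraded version of W

For example, is_degraded(W, W @ A) returns True for any row-stochastic A on compatible alphabets.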
A. Less noisy domination and degradation regions for additive noise channels
Often in information theory, we are concerned with additive noise channels on an Abelian group (X, ⊕) with X = [q] and q ∈ N, as defined in (9). Such channels are completely defined by a noise pmf PZ ∈ Pq with corresponding channel transition probability matrix circX(PZ) ∈ Rq×q_sto. Suppose W = circX(w) ∈ Rq×q_sto is an additive noise channel with noise pmf w ∈ Pq. Then, we are often only interested in the set of additive noise channels that are dominated by W. We define the additive less noisy domination region of W:

Ladd_W ≜ {v ∈ Pq : W ln circX(v)}    (22)

as the set of all noise pmfs whose corresponding channel transition matrices are dominated by W in the less noisy sense. Likewise, we define the additive degradation region of W:

Dadd_W ≜ {v ∈ Pq : W deg circX(v)}    (23)

as the set of all noise pmfs whose corresponding channel transition matrices are degraded versions of W. (These definitions generalize (14) and (15), and can also hold for any non-additive noise channel W.) The next proposition illustrates certain properties of Ladd_W and explicitly characterizes Dadd_W.
Proposition 6 (Additive Less Noisy Domination and Degradation Regions). Given the additive noise channel W = circX(w) ∈ Rq×q_sto with noise pmf w ∈ Pq, we have:
1) Ladd_W and Dadd_W are non-empty, closed, convex, and invariant under the permutations {Px ∈ Rq×q : x ∈ X} defined in (4) (i.e. v ∈ Ladd_W ⇒ vPx ∈ Ladd_W and v ∈ Dadd_W ⇒ vPx ∈ Dadd_W for every x ∈ X).
2) Dadd_W = conv({wPx : x ∈ X}) = {v ∈ Pq : w X v}, where X denotes the group majorization preorder as defined in Appendix A.
To prove Proposition 6, we will need the following lemma.
Lemma 1 (Additive Noise Channel Degradation). Given two additive noise channels W = circX(w) ∈ Rq×q_sto and V = circX(v) ∈ Rq×q_sto with noise pmfs w, v ∈ Pq, W deg V if and only if V = W circX(z) = circX(z)W for some z ∈ Pq (i.e. for additive noise channels W deg V, the channel that degrades W to produce V is also an additive noise channel without loss of generality).
Proof. Since X-circulant matrices commute, we must have W circX(z) = circX(z)W for every z ∈ Pq. Furthermore, V = W circX(z) for some z ∈ Pq implies that W deg V by Definition 1. So, it suffices to prove that W deg V implies V = W circX(z) for some z ∈ Pq. By Definition 1, W deg V implies that V = W R for some doubly stochastic channel R ∈ Rq×q_sto (as V and W are doubly stochastic). Let r with r^T ∈ Pq be the first column of R, and s = W r with s^T ∈ Pq be the first column of V. Then, it is straightforward to verify using (7) that:

V = [s  P1 s  P2 s  · · ·  Pq−1 s] = [W r  P1 W r  P2 W r  · · ·  Pq−1 W r] = W [r  P1 r  P2 r  · · ·  Pq−1 r]

where the third equality holds because {Px : x ∈ X} are X-circulant matrices which commute with W. Hence, V is the product of W and an X-circulant stochastic matrix, i.e. V = W circX(z) for some z ∈ Pq. This concludes the proof.
We emphasize that in Lemma 1, the channel that degrades W to produce V is only an additive noise channel
without loss of generality. We can certainly have V = W R with a non-additive noise channel R. Consider for instance,
V = W = 11T /q, where every doubly stochastic matrix R satisfies V = W R. However, when we consider V = W R
with an additive noise channel R, V corresponds to the channel W with an additional independent additive noise term
associated with R. We now prove Proposition 6.
Proof of Proposition 6.
Part 1: Non-emptiness, closure, and convexity of Ladd_W and Dadd_W can be proved in exactly the same way as in Proposition 5, with the additional observation that the set of X-circulant matrices is closed and convex. Moreover, for every x ∈ X:

W ln W Px = circX(wPx) ln W
W deg W Px = circX(wPx) deg W

where the equalities follow from (7). These inequalities and the transitive properties of ln and deg yield the invariance of Ladd_W and Dadd_W with respect to {Px ∈ Rq×q : x ∈ X}.

Part 2: Lemma 1 is equivalent to the fact that v ∈ Dadd_W if and only if circX(v) = circX(w) circX(z) for some z ∈ Pq. This implies that v ∈ Dadd_W if and only if v = w circX(z) for some z ∈ Pq (due to (7) and the fact that X-circulant matrices commute). Applying Proposition 14 from Appendix A completes the proof.
We remark that part 1 of Proposition 6 does not require W to be an additive noise channel. The proofs of closure, convexity, and invariance with respect to {Px ∈ Rq×q : x ∈ X} hold for general W ∈ Rq×q_sto. Moreover, Ladd_W and Dadd_W are non-empty because u ∈ Ladd_W and u ∈ Dadd_W.
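Part 2 of Proposition 6 also yields a concrete membership test: for the cyclic group, v ∈ Dadd_W if and only if v = w circX(z) for some noise pmf z, which is again a linear feasibility problem in z. The following sketch (our own helper names, cyclic group Z_q only, scipy assumed) implements this test.

import numpy as np
from scipy.optimize import linprog

def circ(w):
    """X-circulant matrix of a noise pmf w for the cyclic group Z_q.
    (The paper allows any Abelian group; this sketch covers Z_q only.)"""
    q = len(w)
    return np.array([np.roll(w, x) for x in range(q)])

def additively_degraded(w, v):
    # circ(w) deg circ(v) iff v = w circ(z) for some pmf z (Proposition 6, part 2),
    # i.e. a linear feasibility problem in the convex weights z
    q = len(w)
    A_eq = np.vstack([circ(w).T, np.ones((1, q))])
    b_eq = np.concatenate([np.asarray(v, float), [1.0]])
    res = linprog(np.zeros(q), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * q, method="highs")
    return res.status == 0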
B. Less noisy domination and degradation regions for symmetric channels
Since q-ary symmetric channels for q ∈ N are additive noise channels, Proposition 6 holds for symmetric channels.
In this subsection, we deduce some simple results that are unique to symmetric channels. The first of these is a
specialization of part 2 of Proposition 6 which states that the additive degradation region of a symmetric channel can
be characterized by traditional majorization instead of group majorization.
Corollary 1 (Degradation Region of Symmetric Channel). The q-ary symmetric channel Wδ = circX(wδ) ∈ Rq×q_sto for δ ∈ [0, 1] has additive degradation region:

Dadd_Wδ = {v ∈ Pq : wδ maj v} = conv({wδ Pq^k : k ∈ [q]})

where maj denotes the majorization preorder defined in Appendix A, and Pq ∈ Rq×q is defined in (8).

Proof. From part 2 of Proposition 6, we have that:

Dadd_Wδ = conv({wδ Px : x ∈ X}) = conv({wδ Pq^k : k ∈ [q]}) = conv({wδ P : P ∈ Rq×q is a permutation matrix}) = {v ∈ Pq : wδ maj v}

where the second and third equalities hold regardless of the choice of group (X, ⊕), because the sets of all cyclic or regular permutations of wδ = (1 − δ, δ/(q − 1), . . . , δ/(q − 1)) equal {wδ Px : x ∈ X}. The final equality follows from the definition of majorization in Appendix A.
With this geometric characterization of the additive degradation region, it is straightforward to find the extremal symmetric channel Wτ that is a degraded version of Wδ for some fixed δ ∈ [0, 1]\{(q−1)/q}. Indeed, we compute τ by using the fact that the noise pmf wτ ∈ conv({wδ Pq^k : k = 1, . . . , q − 1}):

wτ = Σ_{i=1}^{q−1} λi wδ Pq^i    (24)

for some λ1, . . . , λq−1 ∈ [0, 1] such that λ1 + · · · + λq−1 = 1. Solving (24) for τ and λ1, . . . , λq−1 yields:

τ = 1 − δ/(q − 1)    (25)

and λ1 = · · · = λq−1 = 1/(q − 1), which means that:

wτ = (1/(q − 1)) Σ_{i=1}^{q−1} wδ Pq^i.    (26)

This is illustrated in Figure 2 for the case where δ ∈ (0, (q−1)/q) and τ > (q−1)/q > δ. For δ ∈ (0, (q−1)/q), the symmetric channels that are degraded versions of Wδ are {Wγ : γ ∈ [δ, τ]}. In particular, for such γ ∈ [δ, τ], Wγ = Wδ Wβ with β = (γ − δ)/(1 − δ − δ/(q−1)) using the proof of part 5 of Proposition 4 in Appendix B.
In the spirit of comparing symmetric and erasure channels as done in [15] for the binary input case, our next result
shows that a q-ary symmetric channel can never be less noisy than a q-ary erasure channel.
Proposition 7 (Symmetric Channel 6ln Erasure Channel). For q ∈ N\{1}, given a q-ary erasure channel Eε ∈ Rq×(q+1)_sto with erasure probability ε ∈ (0, 1), there does not exist δ ∈ (0, 1) such that the corresponding q-ary symmetric channel Wδ ∈ Rq×q_sto on the same input alphabet satisfies Wδ ln Eε.

Proof. For a q-ary erasure channel Eε with ε ∈ (0, 1), we always have D(uEε ||∆0 Eε) = +∞ for u, ∆0 = (1, 0, . . . , 0) ∈ Pq. On the other hand, for any q-ary symmetric channel Wδ with δ ∈ (0, 1), we have D(PX Wδ ||QX Wδ) < +∞ for every PX, QX ∈ Pq. Thus, Wδ 6ln Eε for any δ ∈ (0, 1).
In fact, the argument for Proposition 7 conveys that a symmetric channel Wδ ∈ Rq×q_sto with δ ∈ (0, 1) satisfies Wδ ln V for some channel V ∈ Rq×r_sto only if D(PX V ||QX V) < +∞ for every PX, QX ∈ Pq. Typically, we are only interested in studying q-ary symmetric channels with q ≥ 2 and δ ∈ [0, (q−1)/q]. For example, the BSC with crossover probability δ is usually studied for δ ∈ [0, 1/2]. Indeed, the less noisy domination characteristics of the extremal q-ary symmetric channels with δ = 0 or δ = (q−1)/q are quite elementary. Given q ≥ 2, W0 = Iq ∈ Rq×q_sto satisfies W0 ln V, and W(q−1)/q = 1u ∈ Rq×q_sto satisfies V ln W(q−1)/q, for every channel V ∈ Rq×r_sto on a common input alphabet. For the sake of completeness, we also note that for q ≥ 2, the extremal q-ary erasure channels E0 ∈ Rq×(q+1)_sto and E1 ∈ Rq×(q+1)_sto, with ε = 0 and ε = 1 respectively, satisfy E0 ln V and V ln E1 for every channel V ∈ Rq×r_sto on a common input alphabet.
The result that the q-ary symmetric channel with uniform noise pmf W(q−1)/q is more noisy than every channel on
the same input alphabet has an analogue concerning additive white Gaussian noise (AWGN) channels. Consider all
additive noise channels of the form:
Y = X + Z    (27)

where X, Y ∈ R, the input X is uncorrelated with the additive noise Z: E[XZ] = 0, and the noise Z has power constraint E[Z²] ≤ σZ² for some fixed σZ² > 0. Let X = Xg ∼ N(0, σX²) (Gaussian distribution with mean 0 and variance σX²) for some σX² > 0. Then, we have:

I(Xg ; Xg + Z) ≥ I(Xg ; Xg + Zg)    (28)

where Zg ∼ N(0, σZ²), Zg is independent of Xg, and equality occurs if and only if Z = Zg in distribution [28, Section 4.7]. This states that Gaussian noise is the “worst case additive noise” for a Gaussian source. Hence, the AWGN channel is not more capable than any other additive noise channel with the same constraints. As a result, the AWGN channel is not less noisy than any other additive noise channel with the same constraints (using Proposition 3).
IV. EQUIVALENT CHARACTERIZATIONS OF LESS NOISY PREORDER
Having studied the structure of less noisy domination and degradation regions of channels, we now consider the
problem of verifying whether a channel W is less noisy than another channel V . Since using Definition 2 or Proposition
1 directly is difficult, we often start by checking whether V is a degraded version of W . When this fails, we typically
resort to verifying van Dijk’s condition in Proposition 2, cf. [12, Theorem 2]. In this section, we prove the equivalent
characterization of the less noisy preorder in Theorem 1, and then present some useful corollaries of van Dijk’s
condition.
A. Characterization using χ2 -divergence
Recall the general measure theoretic setup and the definition of χ2-divergence from subsection II-A. It is well-known that KL divergence is locally approximated by χ2-divergence, e.g. [28, Section 4.2]. While this approximation sometimes fails globally, cf. [30], the following notable result was first shown by Ahlswede and Gács in the discrete case in [2], and then extended to general alphabets in [3, Theorem 3]:

ηKL(W) = ηχ2(W) ≜ sup_{PX,QX : 0 < χ2(PX||QX) < +∞} χ2(PX W ||QX W) / χ2(PX ||QX)    (29)
for any Markov kernel W : H1 × X → [0, 1], where ηKL (W ) is defined as in (11), ηχ2 (W ) is the contraction coefficient
for χ2 -divergence, and the suprema in ηKL (W ) and ηχ2 (W ) are taken over all probability measures PX and QX on
(X , F). Since ηKL characterizes less noisy domination with respect to an erasure channel as mentioned in subsection I-D,
(29) portrays that ηχ2 also characterizes this. We will now prove Theorem 1 from subsection II-A, which generalizes
(29) and illustrates that χ2 -divergence actually characterizes less noisy domination by an arbitrary channel.
Proof of Theorem 1. In order to prove the forward direction, we recall the local approximation of KL divergence using χ2-divergence from [28, Proposition 4.2], which states that for any two probability measures PX and QX on (X, F):

lim_{λ→0+} (2/λ²) D(λPX + λ̄QX ||QX) = χ2(PX ||QX)    (30)

where λ̄ = 1 − λ for λ ∈ (0, 1), and both sides of (30) are finite or infinite together. Then, we observe that for any two probability measures PX and QX, and any λ ∈ [0, 1], we have:

D(λPX W + λ̄QX W ||QX W) ≥ D(λPX V + λ̄QX V ||QX V)

since W ln V. Scaling this inequality by 2/λ² and letting λ → 0 produces:

χ2(PX W ||QX W) ≥ χ2(PX V ||QX V)

as shown in (30). This proves the forward direction.

To establish the converse direction, we recall an integral representation of KL divergence using χ2-divergence presented in [3, Appendix A.2] (which can be distilled from the argument in [31, Theorem 1]):5

D(PX ||QX) = ∫_0^∞ χ2(PX ||Q_X^t) / (t + 1) dt    (31)

for any two probability measures PX and QX on (X, F), where Q_X^t = (1/(1+t)) PX + (t/(1+t)) QX for t ∈ [0, ∞), and both sides of (31) are finite or infinite together (as a close inspection of the proof in [3, Appendix A.2] reveals). Hence, for every PX and QX, we have by assumption:

χ2(PX W ||Q_X^t W) ≥ χ2(PX V ||Q_X^t V)

which implies that:

∫_0^∞ χ2(PX W ||Q_X^t W) / (t + 1) dt ≥ ∫_0^∞ χ2(PX V ||Q_X^t V) / (t + 1) dt  ⇒  D(PX W ||QX W) ≥ D(PX V ||QX V).

Hence, W ln V, which completes the proof.
B. Characterizations via the Löwner partial order and spectral radius
We will use the finite alphabet setup of subsection I-B for the remaining discussion in this paper. In the finite alphabet setting, Theorem 1 states that W ∈ Rq×r_sto is less noisy than V ∈ Rq×s_sto if and only if for every PX, QX ∈ Pq:

χ2(PX W ||QX W) ≥ χ2(PX V ||QX V).    (32)

This characterization has the flavor of a Löwner partial order condition. Indeed, it is straightforward to verify that for any PX ∈ Pq and QX ∈ Pq◦, we can write their χ2-divergence as:

χ2(PX ||QX) = JX diag(QX)^{−1} JX^T    (33)

where JX = PX − QX. Hence, we can express (32) as:

JX W diag(QX W)^{−1} W^T JX^T ≥ JX V diag(QX V)^{−1} V^T JX^T    (34)

for every JX = PX − QX such that PX ∈ Pq and QX ∈ Pq◦. This suggests that (32) is equivalent to:

W diag(QX W)^{−1} W^T PSD V diag(QX V)^{−1} V^T    (35)

for every QX ∈ Pq◦. It turns out that (35) indeed characterizes ln, and this is straightforward to prove directly. The next proposition illustrates that (35) also follows as a corollary of van Dijk’s characterization in Proposition 2, and presents an equivalent spectral characterization of ln.

Proposition 8 (Löwner and Spectral Characterizations of ln). For any pair of channels W ∈ Rq×r_sto and V ∈ Rq×s_sto on the same input alphabet [q], the following are equivalent:
1) W ln V
2) For every PX ∈ Pq◦, we have:

W diag(PX W)^{−1} W^T PSD V diag(PX V)^{−1} V^T

3) For every PX ∈ Pq◦, we have R(V diag(PX V)^{−1} V^T) ⊆ R(W diag(PX W)^{−1} W^T) and:6

ρ( (W diag(PX W)^{−1} W^T)† V diag(PX V)^{−1} V^T ) = 1.

5 Note that [3, Equation (78)], and hence [1, Equation (7)], are missing factors of 1/(t+1) inside the integrals.
Proof. (1 ⇔ 2) Recall the functional F : Pq → R, F(PX) = I(PX, WY|X) − I(PX, VY|X) defined in Proposition 2, cf. [12, Theorem 2]. Since F : Pq → R is continuous on its domain Pq, and twice differentiable on Pq◦, F is concave if and only if its Hessian is negative semidefinite for every PX ∈ Pq◦ (i.e. −∇²F(PX) PSD 0 for every PX ∈ Pq◦) [32, Section 3.1.4]. The Hessian matrix of F, ∇²F : Pq◦ → Rq×q_sym, is defined entry-wise for every x, x′ ∈ [q] as:

[∇²F(PX)]_{x,x′} = ∂²F(PX) / (∂PX(x) ∂PX(x′))

where we index the matrix ∇²F(PX) starting at 0 rather than 1. Furthermore, a straightforward calculation shows that:

∇²F(PX) = V diag(PX V)^{−1} V^T − W diag(PX W)^{−1} W^T

for every PX ∈ Pq◦. (Note that the matrix inverses here are well-defined because PX ∈ Pq◦.) Therefore, F is concave if and only if for every PX ∈ Pq◦:

W diag(PX W)^{−1} W^T PSD V diag(PX V)^{−1} V^T.

This establishes the equivalence between parts 1 and 2 due to van Dijk’s characterization of ln in Proposition 2.

(2 ⇔ 3) We now derive the spectral characterization of ln using part 2. Recall the well-known fact (see [33, Theorem 1 parts (a),(f)] and [19, Theorem 7.7.3 (a)]): Given positive semidefinite matrices A, B ∈ Rq×q_⪰0, A PSD B if and only if R(B) ⊆ R(A) and ρ(A†B) ≤ 1. Since W diag(PX W)^{−1} W^T and V diag(PX V)^{−1} V^T are positive semidefinite for every PX ∈ Pq◦, applying this fact shows that part 2 holds if and only if for every PX ∈ Pq◦, we have R(V diag(PX V)^{−1} V^T) ⊆ R(W diag(PX W)^{−1} W^T) and:

ρ( (W diag(PX W)^{−1} W^T)† V diag(PX V)^{−1} V^T ) ≤ 1.

To prove that this inequality is actually an equality, for any PX ∈ Pq◦, let A = W diag(PX W)^{−1} W^T and B = V diag(PX V)^{−1} V^T. It suffices to prove that: R(B) ⊆ R(A) and ρ(A†B) ≤ 1 if and only if R(B) ⊆ R(A) and ρ(A†B) = 1. The converse direction is trivial, so we only establish the forward direction. Observe that PX A = 1^T and PX B = 1^T. This implies that 1^T A†B = PX(AA†)B = PX B = 1^T, where (AA†)B = B because R(B) ⊆ R(A) and AA† is the orthogonal projection matrix onto R(A). Since ρ(A†B) ≤ 1 and A†B has an eigenvalue of 1, we have ρ(A†B) = 1. Thus, we have proved that part 2 holds if and only if for every PX ∈ Pq◦, we have R(V diag(PX V)^{−1} V^T) ⊆ R(W diag(PX W)^{−1} W^T) and:

ρ( (W diag(PX W)^{−1} W^T)† V diag(PX V)^{−1} V^T ) = 1.

This completes the proof.
The Löwner characterization of ln in part 2 of Proposition 8 will be useful for proving some of our ensuing results. We remark that the equivalence between parts 1 and 2 can be derived by considering several other functionals. For instance, for any fixed pmf QX ∈ Pq◦, we may consider the functional F2 : Pq → R defined by:

F2(PX) = D(PX W ||QX W) − D(PX V ||QX V)    (36)

which has Hessian matrix, ∇²F2 : Pq◦ → Rq×q_sym, ∇²F2(PX) = W diag(PX W)^{−1} W^T − V diag(PX V)^{−1} V^T, that does not depend on QX. Much like van Dijk’s functional F, F2 is convex (for all QX ∈ Pq◦) if and only if W ln V. This is reminiscent of Ahlswede and Gács’ technique to prove (29), where the convexity of a similar functional is established [2].

As another example, for any fixed pmf QX ∈ Pq◦, consider the functional F3 : Pq → R defined by:

F3(PX) = χ2(PX W ||QX W) − χ2(PX V ||QX V)    (37)

which has Hessian matrix, ∇²F3 : Pq◦ → Rq×q_sym, ∇²F3(PX) = 2 W diag(QX W)^{−1} W^T − 2 V diag(QX V)^{−1} V^T, that does not depend on PX. Much like F and F2, F3 is convex for all QX ∈ Pq◦ if and only if W ln V.

6 Note that [1, Theorem 1 part 4] neglected to mention the inclusion relation R(V diag(PX V)^{−1} V^T) ⊆ R(W diag(PX W)^{−1} W^T).
Finally, we also mention some specializations of the spectral radius condition in part 3 of Proposition 8. If q ≥ r and W has full column rank, the expression for the spectral radius in the proposition statement can be simplified to:

ρ( (W†)^T diag(PX W) W† V diag(PX V)^{−1} V^T ) = 1    (38)

using basic properties of the Moore-Penrose pseudoinverse. Moreover, if q = r and W is non-singular, then the Moore-Penrose pseudoinverses in (38) can be written as inverses, and the inclusion relation between the ranges in part 3 of Proposition 8 is trivially satisfied (and can be omitted from the proposition statement). We have used the spectral radius condition in this latter setting to (numerically) compute the additive less noisy domination region in Figure 2.
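In the non-singular square case, such a numerical computation can be sketched as follows: sample input pmfs PX from the interior of the simplex and check the Löwner condition of part 2 of Proposition 8 via the minimum eigenvalue of the difference matrix. The code below is our own illustration (assuming numpy); it can refute W ln V or gather evidence for it on the sampled pmfs, but it does not certify domination.

import numpy as np

def less_noisy_evidence(W, V, num_samples=2000, tol=1e-9, seed=0):
    """Test W diag(P_X W)^{-1} W^T >= V diag(P_X V)^{-1} V^T (Proposition 8, part 2)
    on sampled interior input pmfs. Returns False on the first violation found."""
    q = W.shape[0]
    rng = np.random.default_rng(seed)
    for _ in range(num_samples):
        p = rng.dirichlet(np.ones(q))                  # interior of the simplex w.p. 1
        A = W @ np.diag(1.0 / (p @ W)) @ W.T
        B = V @ np.diag(1.0 / (p @ V)) @ V.T
        # A - B is symmetric, so its minimum eigenvalue decides positive semidefiniteness
        if np.linalg.eigvalsh(A - B)[0] < -tol:
            return False
    return True

Sweeping candidate noise pmfs v and testing less_noisy_evidence(Wδ, circX(v)) is one way to trace out an approximation of the additive less noisy domination region, in the spirit of the computation behind Figure 2.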
V. CONDITIONS FOR LESS NOISY DOMINATION OVER ADDITIVE NOISE CHANNELS
We now turn our attention to deriving several conditions for determining when q-ary symmetric channels are less
noisy than other channels. Our interest in q-ary symmetric channels arises from their analytical tractability; Proposition
4 from subsection I-C, Proposition 12 from section VII, and [34, Theorem 4.5.2] (which conveys that q-ary symmetric
channels have uniform capacity achieving input distributions) serve as illustrations of this tractability. We focus on
additive noise channels in this section, and on general channels in the next section.
A. Necessary conditions
We first present some straightforward necessary conditions for when an additive noise channel W ∈ Rq×q_sto with q ∈ N is less noisy than another additive noise channel V ∈ Rq×q_sto on an Abelian group (X, ⊕). These conditions can obviously be specialized for less noisy domination by symmetric channels.
Proposition 9 (Necessary Conditions for ln Domination over Additive Noise Channels). Suppose W = circX(w) and V = circX(v) are additive noise channels with noise pmfs w, v ∈ Pq such that W ln V. Then, the following are true:
1) (Circle Condition) ‖w − u‖ℓ2 ≥ ‖v − u‖ℓ2.
2) (Contraction Condition) ηKL(W) ≥ ηKL(V).
3) (Entropy Condition) H(v) ≥ H(w), where H : Pq → R+ is the Shannon entropy function.
Proof.
Part 1: Letting PX = (1, 0, . . . , 0) and QX = u in the χ2-divergence characterization of ln in Theorem 1 produces:

q ‖w − u‖ℓ2² = χ2(w||u) ≥ χ2(v||u) = q ‖v − u‖ℓ2²

since uW = uV = u, and PX W = w and PX V = v using (7). (This result can alternatively be proved using part 2 of Proposition 8 and Fourier analysis.)
Part 2: This easily follows from Proposition 1 and (11).
Part 3: Letting PX = (1, 0, . . . , 0) and QX = u in the KL divergence characterization of ln in Proposition 1 produces:

log(q) − H(w) = D(w||u) ≥ D(v||u) = log(q) − H(v)

via the same reasoning as part 1. This completes the proof.
We remark that the aforementioned necessary conditions have many generalizations. Firstly, if W, V ∈ Rq×q_sto are doubly stochastic matrices, then the generalized circle condition holds:

‖W − W(q−1)/q‖Fro ≥ ‖V − W(q−1)/q‖Fro    (39)

where W(q−1)/q = 1u is the q-ary symmetric channel whose conditional pmfs are all uniform, and ‖·‖Fro denotes the Frobenius norm. Indeed, letting PX = ∆x = (0, . . . , 1, . . . , 0) for x ∈ [q], which has unity in the (x + 1)th position, in the proof of part 1 and then adding the inequalities corresponding to every x ∈ [q] produces (39). Secondly, the contraction condition in Proposition 9 actually holds for any pair of general channels W ∈ Rq×r_sto and V ∈ Rq×s_sto on a common input alphabet (not necessarily additive noise channels). Moreover, we can start with Theorem 1 and take the suprema of the ratios in χ2(PX W ||QX W)/χ2(PX ||QX) ≥ χ2(PX V ||QX V)/χ2(PX ||QX) over all PX (≠ QX) to get:

ρmax(QX, W) ≥ ρmax(QX, V)    (40)

for any QX ∈ Pq, where ρmax(·) denotes maximal correlation, which is defined later in part 3 of Proposition 12, cf. [35], and we use [36, Theorem 3] (or the results of [37]). A similar result also holds for the contraction coefficient for KL divergence with fixed input pmf (see e.g. [36, Definition 1] for a definition).
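These necessary conditions are cheap to evaluate, so they can serve as quick filters before any more expensive test; a small sketch of ours (numpy assumed, contraction condition omitted):

import numpy as np

def necessary_conditions_hold(w, v):
    """Check the circle and entropy conditions of Proposition 9 for noise pmfs w, v.
    If either fails, circ(w) cannot be less noisy than circ(v); passing both
    proves nothing on its own."""
    w, v = np.asarray(w, float), np.asarray(v, float)
    u = np.full(w.shape, 1.0 / w.size)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    circle = np.linalg.norm(w - u) >= np.linalg.norm(v - u)
    entropy = H(v) >= H(w)
    return circle and entropy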
B. Sufficient conditions
We next portray a sufficient condition for when an additive noise channel V ∈ Rq×q_sto is a degraded version of a symmetric channel Wδ ∈ Rq×q_sto. By Proposition 3, this is also a sufficient condition for Wδ ln V.

Proposition 10 (Degradation by Symmetric Channels). Given an additive noise channel V = circX(v) with noise pmf v ∈ Pq and minimum probability τ = min{[V]i,j : 1 ≤ i, j ≤ q}, we have:

0 ≤ δ ≤ (q − 1)τ ⇒ Wδ deg V

where Wδ ∈ Rq×q_sto is a q-ary symmetric channel.

Proof. Using Corollary 1, it suffices to prove that the noise pmf w(q−1)τ maj v. Since 0 ≤ τ ≤ 1/q, we must have 0 ≤ (q − 1)τ ≤ (q−1)/q. So, all entries of w(q−1)τ, except (possibly) the first, are equal to its minimum entry of τ. As v ≥ τ (entry-wise), w(q−1)τ maj v because the conditions of part 3 in Proposition 13 in Appendix A are satisfied.
It is compelling to find a sufficient condition for Wδ ln V that does not simply ensure Wδ deg V (such as Proposition
10 and Theorem 2). The ensuing proposition elucidates such a sufficient condition for additive noise channels. The
general strategy for finding such a condition for additive noise channels is to identify a noise pmf that belongs to
Ladd_Wδ \ Dadd_Wδ. One can then use Proposition 6 to explicitly construct a set of noise pmfs that is a subset of Ladd_Wδ but strictly includes Dadd_Wδ. The proof of Proposition 11 finds such a noise pmf (that corresponds to a q-ary symmetric channel).
Proposition 11 (Less Noisy Domination by Symmetric Channels). Given an additive noise channel V = circX(v) with noise pmf v ∈ Pq and q ≥ 2, if for δ ∈ [0, (q−1)/q] we have:

v ∈ conv({wδ Pq^k : k ∈ [q]} ∪ {wγ Pq^k : k ∈ [q]})

then Wδ ln V, where Pq ∈ Rq×q is defined in (8), and:

γ = (1 − δ) / (1 − δ + δ/(q−1)²) ∈ [1 − δ/(q−1), 1].
Proof. Due to Proposition 6 and {wγ Px : x ∈ X } = {wγ Pqk : k ∈ [q]}, it suffices toprove that Wδ ln Wγ . Since
q−1
δ = 0 ⇒ γ = 1 and δ = q−1
. So, we assume that
⇒ γ = q−1
q
q , Wδ ln Wγ is certainly true for δ ∈ 0, q
δ ∈ 0, q−1
,
which
implies
that:
q
q−1
1−δ
∈
,1 .
γ=
δ
q
1 − δ + (q−1)
2
Since our goal is to show Wδ ln Wγ , we prove the equivalent condition in part 2 of Proposition 8 that for every
PX ∈ Pq◦ :
−1
Wδ diag(PX Wδ )
−1
WδT PSD Wγ diag(PX Wγ )
WγT
⇔ Wγ−1 diag(PX Wγ ) Wγ−1 PSD Wδ−1 diag(PX Wδ ) Wδ−1
⇔ diag(PX Wγ ) PSD Wγ Wδ−1 diag(PX Wδ ) Wδ−1 Wγ
−1
−21
⇔ Iq PSD diag(PX Wγ ) 2 Wτ diag(PX Wδ )Wτ diag(PX Wγ )
−1
−21
⇔ 1 ≥ diag(PX Wγ ) 2 Wτ diag(PX Wδ )Wτ diag(PX Wγ )
⇔ 1 ≥ diag(PX Wγ )
− 12
Wτ diag(PX Wδ )
op
1
2
op
where the second equivalence holds because Wδ and Wγ are symmetric and invertible (see part 4 of Proposition 4
and [19, Corollary 7.7.4]), the third and fourth equivalences are non-singular ∗-congruences with Wτ = Wδ−1 Wγ =
Wγ Wδ−1 and:
γ−δ
τ=
>0
δ
1 − δ − q−1
which can be computed as in the proof of Proposition 15 in Appendix C, and k·kop denotes the spectral or operator
norm.7
q×q
7 Note that we cannot use the strict Löwner partial order
PSD (for A, B ∈ Rsym , A PSD B if and only if A − B is positive definite) for these
equivalences as 1T Wγ−1 diag(PX Wγ ) Wγ−1 1 = 1T Wδ−1 diag(PX Wδ ) Wδ−1 1.
−1
1
It is instructive to note that if Wτ ∈ Rq×q
transition matrix diag(PX Wγ ) 2 Wτ diag(PX Wδ ) 2
sto , then the divergence
p
√
T
T
has right singular vector PX Wδ and left singular vector PX Wγ corresponding to its maximum singular value
of unity (where the square roots are applied entry-wise)–see [36] and the references therein. So, Wτ ∈ Rq×q
sto is a
δ
sufficient condition for Wδ ln Wγ . Since Wτ ∈ Rq×q
if
and
only
if
0
≤
τ
≤
1
if
and
only
if
δ
≤
γ
≤
1
−
sto
q−1 , the
latter condition also implies that Wδ ln Wγ . However, we recall from (25) in subsection III-B that Wδ deg Wγ for
δ
δ
, while we seek some 1 − q−1
< γ ≤ 1 for which Wδ ln Wγ . When q = 2, we only have:
δ ≤ γ ≤ 1 − q−1
γ=
1−δ
δ
=1−
=1−δ
δ
q
−
1
1 − δ + (q−1)
2
which implies that Wδ deg Wγ is true for q = 2. On the other hand, when q ≥ 3, it is straightforward to verify that:
1−δ
δ
γ=
∈ 1−
,1
δ
q−1
1 − δ + (q−1)
2
.
since δ ∈ 0, q−1
q
From the preceding discussion, it suffices to prove for q ≥ 3 that for every PX ∈ Pq◦ :
−21
diag(PX Wγ )
−21
Wτ diag(PX Wδ )Wτ diag(PX Wγ )
Since τ > 0, and 0 ≤ τ ≤ 1 does not produce γ > 1 −
strictly negative entries along the diagonal. Notice that:
δ
q−1 ,
≤ 1.
op
we require that τ > 1 (⇔ γ > 1 −
δ
q−1 )
so that Wτ has
∀x ∈ [q], diag(∆x Wγ ) PSD Wγ Wδ−1 diag(∆x Wδ ) Wδ−1 Wγ
where ∆x = (0, . . . , 1, . . . , 0) ∈ Pq denotes the Kronecker delta pmf with unity at the (x + 1)th position, implies that:
diag(PX Wγ ) PSD Wγ Wδ−1 diag(PX Wδ ) Wδ−1 Wγ
for every PX ∈ Pq◦ , because convex combinations preserve the Löwner relation. So, it suffices to prove that for every
x ∈ [q]:
− 1
− 1
≤1
diag wγ Pqx 2 Wτ diag wδ Pqx Wτ diag wγ Pqx 2
op
where Pq ∈ Rq×q is defined in (8), because ∆x M extracts the (x + 1)th row of a matrix M ∈ Rq×q . Let us define
− 1
− 1
Ax , diag wγ Pqx 2 Wτ diag wδ Pqx Wτ diag wγ Pqx 2 for each x ∈ [q]. Observe that for every x ∈ [q], Ax ∈ Rq×q
0
is
diagonalizable by the real spectral theorem [38, Theorem 7.13], and has a strictly positive eigenvector
porthogonally
wγ Pqx corresponding to the eigenvalue of unity:
q
q
∀x ∈ [q],
wγ Pqx Ax = wγ Pqx
p
so that all other eigenvectors of Ax have some strictly negative entries since they are orthogonal to wγ Pqx . Suppose Ax
is entry-wise non-negative for every x ∈ [q]. Then, the largest eigenvalue (known as the Perron-Frobenius eigenvalue)
and the spectral radius of each Ax is unity by the Perron-Frobenius theorem [19, Theorem 8.3.4], which proves that
kAx kop ≤ 1 for every x ∈ [q]. Therefore, it is sufficient to prove that Ax is entry-wise non-negative for every x ∈ [q].
− 1
Equivalently, we can prove that Wτ diag wδ Pqx Wτ is entry-wise non-negative for every x ∈ [q], since diag wγ Pqx 2
scales the rows or columns of the matrix it is pre- or post-multiplied with using strictly positive scalars.
We now show the equivalent condition below that the minimum possible entry of Wτ diag wδ Pqx Wτ is non-negative:
0 ≤ min
q
X
x∈[q]
r=1
1≤i,j≤q |
[Wτ ]i,r [Wδ ]x+1,r [Wτ ]r,j
=
=
{z
[Wτ diag(wδ Pqx )Wτ ]i,j
}
τ (1 − δ)(1 − τ ) δτ (1 − τ )
δτ 2
+
+ (q − 2)
.
2
q−1
(q − 1)
(q − 1)3
The above equality holds because for i 6= j:
q
q
δ X
δ X
[Wτ ]i,r [Wτ ]r,i ≥
[Wτ ]i,r [Wτ ]r,j
q − 1 r=1 |
{z
} q − 1 r=1
= [Wτ ]2i,r ≥ 0
(41)
2
δ
is clearly true (using, for example, the rearrangement inequality in [39, Section 10.2]), and adding 1−δ− q−1
[Wτ ]i,k ≥
δ
[Wτ ]i,p
0 (regardless of the value of 1 ≤ k ≤ q) to the left summation increases its value, while adding 1 − δ − q−1
[Wτ ]p,j < 0 (which exists for an appropriate value 1 ≤ p≤ q as τ > 1) to the right summation decreases its value.
As a result, the minimum possible entry of Wτ diag wδ Pqx Wτ can be achieved with x + 1 = i 6= j or i 6= j = x + 1.
δ
We next substitute τ = (γ − δ)/ 1 − δ − q−1
into (41) and simplify the resulting expression to get:
!
δ
(q − 2) δ (γ − δ)
δ
.
0 ≤ (γ − δ)
−γ
1−δ+
+
1−
2
q−1
q−1
(q − 1)
1−δ
The right hand side of this inequality is quadratic in γ with roots γ = δ and γ = 1−δ+(δ/(q−1)
2 ) . Since the coefficient
2
of γ in this quadratic is strictly negative:
δ
δ
(q − 2) δ
<0⇔1−δ+
2 − 1−δ+ q−1
2 >0
(q − 1)
(q − 1)
|
{z
}
coefficient of γ 2
the minimum possible entry of Wτ diag wδ Pqx Wτ is non-negative if and only if:
δ≤γ≤
where we use the fact that
completes the proof.
1−δ
1−δ+(δ/(q−1)2 )
1−δ
δ
1 − δ + (q−1)
2
δ
≥ 1 − q−1
≥ δ. Therefore, γ =
1−δ
1−δ+(δ/(q−1)2 )
produces Wδ ln Wγ , which
Heretofore we have derived results concerning less noisy domination and degradation regions in section III, and
proven several necessary and sufficient conditions for less noisy domination of additive noise channels by symmetric
channels in this section. We finally have all the pieces in place to establish Theorem 3 from section II. In closing this
section, we indicate the pertinent results that coalesce to justify it.
Proof of Theorem 3. The first equality follows from Corollary 1. The first set inclusion is obvious, and its strictness
follows from the proof of Proposition 11. The second set inclusion follows from Proposition 11. The third set inclusion
follows from the circle condition (part 1) in Proposition 9. Lastly, the properties of Ladd_Wδ are derived in Proposition 6.
VI. SUFFICIENT CONDITIONS FOR DEGRADATION OVER GENERAL CHANNELS
While Propositions 10 and 11 present sufficient conditions for a symmetric channel Wδ ∈ Rq×q_sto to be less noisy than an additive noise channel, our more comprehensive objective is to find the maximum δ ∈ [0, (q−1)/q] such that Wδ ln V for any given general channel V ∈ Rq×r_sto on a common input alphabet. We may formally define this maximum δ (that characterizes the extremal symmetric channel that is less noisy than V) as:

δ*(V) ≜ sup{δ ∈ [0, (q−1)/q] : Wδ ln V}    (42)

and for every 0 ≤ δ < δ*(V), Wδ ln V. Alternatively, we can define a non-negative (less noisy) domination factor function for any channel V ∈ Rq×r_sto:

µV(δ) ≜ sup_{PX,QX ∈ Pq : 0 < D(PX Wδ||QX Wδ) < +∞} D(PX V ||QX V) / D(PX Wδ ||QX Wδ)    (43)

with δ ∈ [0, (q−1)/q), which is analogous to the contraction coefficient for KL divergence since µV(0) ≜ ηKL(V). Indeed, we may perceive PX Wδ and QX Wδ in the denominator of (43) as pmfs inside the “shrunk” simplex conv({wδ Pq^k : k ∈ [q]}), and (43) represents a contraction coefficient of V where the supremum is taken over this “shrunk” simplex.8
For simplicity, consider a channel V ∈ Rq×r_sto that is strictly positive entry-wise, and has domination factor function µV : (0, (q−1)/q) → R+, where the domain excludes zero because µV is only interesting for δ ∈ (0, (q−1)/q), and this exclusion also affords us some analytical simplicity. It is shown in Proposition 15 of Appendix C that µV is always finite on (0, (q−1)/q), continuous, convex, strictly increasing, and has a vertical asymptote at δ = (q−1)/q. Since for every PX, QX ∈ Pq:

µV(δ) D(PX Wδ ||QX Wδ) ≥ D(PX V ||QX V)    (44)

we have µV(δ) ≤ 1 if and only if Wδ ln V. Hence, using the strictly increasing property of µV : (0, (q−1)/q) → R+, we can also characterize δ*(V) as:

δ*(V) = µV^{−1}(1)    (45)

where µV^{−1} denotes the inverse function of µV, and unity is in the range of µV by Theorem 2 since V is strictly positive entry-wise.

8 Pictorially, the “shrunk” simplex is the magenta triangle in Figure 2 while the simplex itself is the larger gray triangle.
We next briefly delineate how one might computationally approximate δ*(V) for a given general channel V ∈ Rq×r_sto. From part 2 of Proposition 8, it is straightforward to obtain the following minimax characterization of δ*(V):

δ*(V) = inf_{PX ∈ Pq◦} sup_{δ ∈ S(PX)} δ    (46)

where S(PX) = {δ ∈ [0, (q−1)/q] : Wδ diag(PX Wδ)^{−1} Wδ^T PSD V diag(PX V)^{−1} V^T}. The infimum in (46) can be naïvely approximated by sampling several PX ∈ Pq◦. The supremum in (46) can be estimated by verifying collections of rational (ratio of polynomials) inequalities in δ. This is because the positive semidefiniteness of a matrix is equivalent to the non-negativity of all its principal minors by Sylvester’s criterion [19, Theorem 7.2.5]. Unfortunately, this procedure appears to be rather cumbersome.
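A cruder but simpler alternative is to combine sampling with bisection on δ: since Wδ ln V holds for every 0 ≤ δ < δ*(V), one can test the Löwner condition of part 2 of Proposition 8 on many sampled PX ∈ Pq◦ for each candidate δ. The sketch below is our own construction (numpy assumed); because only finitely many PX are checked, it can only over-estimate δ*(V).

import numpy as np

def W_sym(q, delta):
    return (1 - delta) * np.eye(q) + delta / (q - 1) * (np.ones((q, q)) - np.eye(q))

def delta_star_estimate(V, num_samples=500, iters=40, seed=0):
    """Bisect on delta, testing W_delta ln V via the Löwner condition of
    Proposition 8 (part 2) on sampled interior input pmfs."""
    q = V.shape[0]
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(q), size=num_samples)

    def dominates(delta):
        W = W_sym(q, delta)
        for p in P:
            A = W @ np.diag(1.0 / (p @ W)) @ W.T
            B = V @ np.diag(1.0 / (p @ V)) @ V.T
            if np.linalg.eigvalsh(A - B)[0] < -1e-9:
                return False
        return True

    lo, hi = 0.0, (q - 1) / q
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if dominates(mid) else (lo, mid)
    return lo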
Since analytically computing δ*(V) also seems intractable, we now prove Theorem 2 from section II. Theorem 2 provides a sufficient condition for Wδ deg V (which implies Wδ ln V using Proposition 3) by restricting its attention to the case where V ∈ Rq×q_sto with q ≥ 2. Moreover, it can be construed as a lower bound on δ*(V):

δ*(V) ≥ ν / (1 − (q − 1)ν + ν/(q−1))    (47)

where ν = min{[V]i,j : 1 ≤ i, j ≤ q} is the minimum conditional probability in V.
q×q
Proof of Theorem 2. Let the channel V ∈ Rsto
have the conditional pmfs v1 , . . . , vq ∈ Pq as its rows:
T
V = v1T v2T · · · vqT .
From the proof of Proposition 10, we know that w(q−1)ν maj vi for every i ∈ {1, . . . , q}. Using part 1 of Proposition
13 in Appendix A (and the fact that the set of all permutations of w(q−1)ν is exactly the set of all cyclic permutations
of w(q−1)ν ), we can write this as:
∀i ∈ {1, . . . , q} , vi =
q
X
pi,j w(q−1)ν Pqj−1
j=1
Pq
where Pq ∈ R
is given in (8), and {pi,j ≥ 0 : 1 ≤ i, j ≤ q} are the convex weights such that j=1 pi,j = 1 for
every i ∈ {1, . . . , q}. Defining P ∈ Rq×q
sto entry-wise as [P ]i,j = pi,j for every 1 ≤ i, j ≤ q, we can also write this
9
equation as V = P W(q−1)ν . Observe that:
!
q
X
Y
P =
pi,ji Ej1 ,...,jq
q×q
1≤j1 ,...,jq ≤q
i=1
Qq
where { i=1 pi,ji : 1 ≤ j1 , . . . , jq ≤ q} form a product pmf of convex weights, and for every 1 ≤ j1 , . . . , jq ≤ q:
T
Ej1 ,...,jq , ej1 ej2 · · · ejq
where ei ∈ Rq is the ith standard basis (column) vector that has unity at the ith entry and zero elsewhere. Hence, we
get:
!
q
X
Y
V =
pi,ji Ej1 ,...,jq W(q−1)ν .
1≤j1 ,...,jq ≤q
i=1
q×q
9 Matrices of the form V = P W
are not necessarily degraded versions of W(q−1)ν : W(q−1)ν 6deg V (although
(q−1)ν with P ∈ Rsto
we certainly have input-output degradation: W(q−1)ν iod V ). As a counterexample, consider W1/2 for q = 3, and P = [1 0 0; 1 0 0; 0 1 0],
where the colons separate the rows of the matrix. If W1/2 deg P W1/2 , then there exists A ∈ R3×3
sto such that P W1/2 = W1/2 A. However,
−1
A = W1/2
P W1/2 = (1/4) [3 0 1; 3 0 1; −1 4 1] has a strictly negative entry, which leads to a contradiction.
Suppose there exists δ ∈ 0, q−1
such that for all j1 , . . . , jq ∈ {1, . . . , q}:
q
∃Mj1 ,...,jq ∈ Rq×q
sto , Ej1 ,...,jq W(q−1)ν = Wδ Mj1 ,...,jq
i.e. Wδ deg Ej1 ,...,jq W(q−1)ν . Then, we would have:
X
q
Y
1≤j1 ,...,jq ≤q
i=1
V = Wδ
!
pi,ji
Mj1 ,...,jq
{z
|
}
stochastic matrix
which implies that Wδ deg V .
q×q
We will demonstrate that for every j1 , . . . , jq ∈{1, . . . , q}, there exists Mj1 ,...,jq ∈ Rsto
such that Ej1 ,...,jq W(q−1)ν =
ν
1
Wδ Mj1 ,...,jq when 0 ≤ δ ≤ ν/ 1−(q−1)ν+ q−1 . Since 0 ≤ ν ≤ q , the preceding inequality implies that 0 ≤ δ ≤ q−1
q ,
1
1
where δ = q−1
is
possible
if
and
only
if
ν
=
.
When
ν
=
,
V
=
W
is
the
channel
with
all
uniform
conditional
(q−1)/q
q
q
q
pmfs, and W(q−1)/q deg V clearly holds. Hence, we assume that 0 ≤ ν < 1q so that 0 ≤ δ < q−1
q , and establish the
equivalent condition that for every j1 , . . . , jq ∈ {1, . . . , q}:
Mj1 ,...,jq = Wδ−1 Ej1 ,...,jq W(q−1)ν
−δ
using part 4 of Proposition 4. Clearly, all
is a valid stochastic matrix. Recall that Wδ−1 = Wτ with τ = 1−δ−(δ/(q−1))
the rows of each Mj1 ,...,jq sum to unity. So, it remains to verify that each Mj1 ,...,jq has non-negative entries. For any
j1 , . . . , jq ∈ {1, . . . , q} and any i, j ∈ {1, . . . , q}:
Mj1 ,...,jq i,j ≥ ν (1 − τ ) + τ (1 − (q − 1) ν)
where the right hand side is the minimum possible entry of any Mj1 ,...,jq (with equality when j1 > 1 and j2 = j3 =
· · · = jq = 1 for example) as τ < 0 and 1 − (q − 1) ν > ν. To ensure each Mj1 ,...,jq is entry-wise non-negative, the
minimum possible entry must satisfy:
ν (1 − τ ) + τ (1 − (q − 1) ν) ≥ 0
δν
δ (1 − (q − 1) ν)
⇔ ν+
−
≥0
δ
δ
1 − δ − q−1
1 − δ − q−1
and the latter inequality is equivalent to:
δ≤
ν
1 − (q − 1) ν +
ν
q−1
.
This completes the proof.
We remark that if V = E2,1,...,1 W(q−1)ν ∈ Rq×q_sto, then this proof illustrates that Wδ deg V if and only if 0 ≤ δ ≤ ν/(1 − (q − 1)ν + ν/(q−1)). Hence, the condition in Theorem 2 is tight when no further information about V is known.
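In practice the bound of Theorem 2 is a one-line computation; the following sketch of ours (numpy assumed) evaluates it for a given square channel:

import numpy as np

def theorem2_delta(V):
    """Largest delta guaranteed by Theorem 2 so that V is a degraded version of
    the q-ary symmetric channel W_delta (hence W_delta is less noisy than V)."""
    q = V.shape[0]
    nu = V.min()
    return nu / (1 - (q - 1) * nu + nu / (q - 1))

# example: a strictly positive 3x3 circulant (hence doubly stochastic) V
V = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])
delta = theorem2_delta(V)   # nu = 0.1, so delta = 0.1 / (1 - 0.2 + 0.05) ≈ 0.1176

For this circulant example Proposition 10 would allow the larger value (q − 1)ν = 0.2, consistent with (48) below.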
It is worth juxtaposing Theorem 2 and Proposition 10. The upper bounds on δ from these results satisfy:

ν / (1 − (q − 1)ν + ν/(q−1))  ≤  (q − 1)ν    (48)

where the left side is the upper bound in Theorem 2 and the right side is the upper bound in Proposition 10. We have equality if and only if ν = 1/q, and it is straightforward to verify that (48) is equivalent to ν ≤ 1/q. Moreover, assuming that q is large and ν = o(1/q), the upper bound in Theorem 2 is ν/(1 + o(1) + o(1/q²)) = Θ(ν), while the upper bound in Proposition 10 is Θ(qν).10 (Note that both bounds are Θ(1) if ν = 1/q.) Therefore, when V ∈ Rq×q_sto is an additive noise channel, δ = O(qν) is enough for Wδ deg V, but a general channel V ∈ Rq×q_sto requires δ = O(ν) for such degradation. So, in order to account for q different conditional pmfs in the general case (as opposed to a single conditional pmf which characterizes the channel in the additive noise case), we lose a factor of q in the upper bound on δ. Furthermore, we can check using simulations that Wδ ∈ Rq×q_sto is not in general less noisy than V ∈ Rq×q_sto for δ = (q − 1)ν. Indeed, counterexamples can be easily obtained by letting V = Ej1,...,jq Wδ for specific values of 1 ≤ j1, . . . , jq ≤ q, and computationally verifying that Wδ 6ln V + J ∈ Rq×q_sto for appropriate choices of perturbation matrices J ∈ Rq×q with sufficiently small Frobenius norm.
10 We use the Bachmann-Landau asymptotic notation here. Consider the (strictly) positive functions f : N → R and g : N → R. The little-o notation
is defined as: f (q) = o (g(q)) ⇔ limq→∞ f (q)/g(q) = 0. The big-O notation is defined as: f (q) = O (g(q)) ⇔ lim supq→∞ |f (q)/g(q)| <
+∞. Finally, the big-Θ notation is defined as: f (q) = Θ (g(q)) ⇔ 0 < lim inf q→∞ |f (q)/g(q)| ≤ lim supq→∞ |f (q)/g(q)| < +∞.
We have now proved Theorems 1, 2, and 3 from section II. The next section relates our results regarding less noisy
and degradation preorders to LSIs, and proves Theorem 4.
VII. LESS NOISY DOMINATION AND LOGARITHMIC SOBOLEV INEQUALITIES
Logarithmic Sobolev inequalities (LSIs) are a class of functional inequalities that shed light on several important
phenomena such as concentration of measure, and ergodicity and hypercontractivity of Markov semigroups. We refer
readers to [40] and [41] for a general treatment of such inequalities, and more pertinently to [25] and [26], which
present LSIs in the context of finite state-space Markov chains. In this section, we illustrate that proving a channel
W ∈ Rq×q_sto is less noisy than a channel V ∈ Rq×q_sto allows us to translate an LSI for W to an LSI for V. Thus, important
information about V can be deduced (from its LSI) by proving W ln V for an appropriate channel W (such as a
q-ary symmetric channel) that has a known LSI.
We commence by introducing some appropriate notation and terminology associated with LSIs. For fixed input and
output alphabet X = Y = [q] with q ∈ N, we think of a channel W ∈ Rq×q_sto as a Markov kernel on X. We assume that the “time homogeneous” discrete-time Markov chain defined by W is irreducible, and has unique stationary distribution (or invariant measure) π ∈ Pq such that πW = π. Furthermore, we define the Hilbert space L2(X, π) of all real functions with domain X endowed with the inner product:

∀f, g ∈ L2(X, π), ⟨f, g⟩π ≜ Σ_{x∈X} π(x)f(x)g(x)    (49)

and induced norm ‖·‖π. We construe W : L2(X, π) → L2(X, π) as a conditional expectation operator that takes a function f ∈ L2(X, π), which we can write as a column vector f = [f(0) · · · f(q − 1)]^T ∈ Rq, to another function W f ∈ L2(X, π), which we can also write as a column vector W f ∈ Rq. Corresponding to the discrete-time Markov chain W, we may also define a continuous-time Markov semigroup:

∀t ≥ 0, Ht ≜ exp(−t(Iq − W)) ∈ Rq×q_sto    (50)
where the “discrete-time derivative” W −Iq is the Laplacian operator that forms the generator of the Markov semigroup.
The unique stationary distribution of this Markov semigroup is also π, and we may interpret Ht : L2 (X , π) → L2 (X , π)
as a conditional expectation operator for each t ≥ 0 as well.
In order to present LSIs, we define the Dirichlet form EW : L2(X, π) × L2(X, π) → R:

∀f, g ∈ L2(X, π), EW(f, g) ≜ ⟨(Iq − W)f, g⟩π    (51)

which is used to study properties of the Markov chain W and its associated Markov semigroup {Ht ∈ Rq×q_sto : t ≥ 0}. (EW is technically only a Dirichlet form when W is a reversible Markov chain, i.e. W is a self-adjoint operator, or equivalently, W and π satisfy the detailed balance condition [25, Section 2.3, page 705].) Moreover, the quadratic form defined by EW represents the energy of its input function, and satisfies:

∀f ∈ L2(X, π), EW(f, f) = ⟨(Iq − (W + W*)/2) f, f⟩π    (52)

where W* : L2(X, π) → L2(X, π) is the adjoint operator of W. Finally, we introduce a particularly important Dirichlet form corresponding to the channel W(q−1)/q = 1u, which has all uniform conditional pmfs and uniform stationary distribution π = u, known as the standard Dirichlet form:

Estd(f, g) ≜ E1u(f, g) = COVu(f, g) = Σ_{x∈X} f(x)g(x)/q − (Σ_{x∈X} f(x)/q)(Σ_{x∈X} g(x)/q)    (53)

for any f, g ∈ L2(X, u). The quadratic form defined by the standard Dirichlet form is presented in (17) in subsection II-D.
We now present the LSIs associated with the Markov chain W and the Markov semigroup {Ht ∈ Rq×q_sto : t ≥ 0} it defines. The LSI for the Markov semigroup with constant α ∈ R states that for every f ∈ L2(X, π) such that ‖f‖π = 1, we have:

D(f²π || π) = Σ_{x∈X} π(x)f²(x) log f²(x) ≤ (1/α) EW(f, f)    (54)

where we construe µ = f²π ∈ Pq as a pmf such that µ(x) = f(x)²π(x) for every x ∈ X, and f² behaves like the Radon-Nikodym derivative (or density) of µ with respect to π. The largest constant α such that (54) holds:

α(W) ≜ inf_{f ∈ L2(X,π) : ‖f‖π = 1, D(f²π||π) ≠ 0} EW(f, f) / D(f²π || π)    (55)

is called the logarithmic Sobolev constant (LSI constant) of the Markov chain W (or the Markov chain (W + W*)/2). Likewise, the LSI for the discrete-time Markov chain with constant α ∈ R states that for every f ∈ L2(X, π) such that ‖f‖π = 1, we have:

D(f²π || π) ≤ (1/α) EWW*(f, f)    (56)

where EWW* : L2(X, π) × L2(X, π) → R is the “discrete” Dirichlet form. The largest constant α such that (56) holds is the LSI constant of the Markov chain WW*, α(WW*), and we refer to it as the discrete logarithmic Sobolev constant of the Markov chain W. As we mentioned earlier, there are many useful consequences of such LSIs. For example, if (54) holds with constant (55), then for every pmf µ ∈ Pq:

∀t ≥ 0, D(µHt ||π) ≤ e^{−2α(W)t} D(µ||π)    (57)

where the exponent 2α(W) can be improved to 4α(W) if W is reversible [25, Theorem 3.6]. This is a measure of ergodicity of the semigroup {Ht ∈ Rq×q_sto : t ≥ 0}. Likewise, if (56) holds with constant α(WW*), then for every pmf µ ∈ Pq:

∀n ∈ N, D(µW^n ||π) ≤ (1 − α(WW*))^n D(µ||π)    (58)

as mentioned in [25, Remark, page 725] and proved in [42]. This is also a measure of ergodicity of the Markov chain W.
Although LSIs have many useful consequences, LSI constants are difficult to compute analytically. Fortunately, the
LSI constant corresponding to Estd has been computed in [25, Appendix, Theorem A.1]. Therefore, using the relation
in (18), we can compute LSI constants for q-ary symmetric channels as well. The next proposition collects the LSI
constants for q-ary symmetric channels (which are irreducible for δ ∈ (0, 1]) as well as some other related quantities.
Proposition 12 (Constants of Symmetric Channels). The q-ary symmetric channel Wδ ∈ Rq×q_sto with q ≥ 2 has:
1) LSI constant:

α(Wδ) = δ for q = 2, and α(Wδ) = (q − 2)δ / ((q − 1) log(q − 1)) for q > 2,

for δ ∈ (0, 1].
2) discrete LSI constant:

α(Wδ Wδ*) = α(Wδ′) = 2δ(1 − δ) for q = 2, and α(Wδ Wδ*) = (q − 2)(2q − 2 − qδ)δ / ((q − 1)² log(q − 1)) for q > 2,

for δ ∈ (0, 1], where δ′ = δ(2 − qδ/(q − 1)).
3) Hirschfeld-Gebelein-Rényi maximal correlation corresponding to the uniform stationary distribution u ∈ Pq:

ρmax(u, Wδ) = 1 − δ − δ/(q − 1)

for δ ∈ [0, 1], where for any channel W ∈ Rq×r_sto and any source pmf PX ∈ Pq, we define the maximal correlation between the input random variable X ∈ [q] and the output random variable Y ∈ [r] (with joint pmf PX,Y(x, y) = PX(x)WY|X(y|x)) as:

ρmax(PX, W) ≜ sup E[f(X)g(Y)], where the supremum is over all f : [q] → R and g : [r] → R such that E[f(X)] = E[g(Y)] = 0 and E[f²(X)] = E[g²(Y)] = 1.

4) contraction coefficient for KL divergence bounded by:

(1 − δ − δ/(q − 1))² ≤ ηKL(Wδ) ≤ 1 − δ − δ/(q − 1)

for δ ∈ [0, 1].
Proof. See Appendix B.
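The constants in parts 1 and 2 have simple closed forms and are easy to evaluate; a small sketch of ours (numpy assumed, natural logarithms):

import numpy as np

def lsi_constant(q, delta):
    # part 1 of Proposition 12
    if q == 2:
        return delta
    return (q - 2) * delta / ((q - 1) * np.log(q - 1))

def discrete_lsi_constant(q, delta):
    # part 2 of Proposition 12: alpha(W_delta W_delta^*) = alpha(W_{delta'})
    delta_prime = delta * (2 - q * delta / (q - 1))
    return lsi_constant(q, delta_prime)

# the constant appearing in (19) for q > 2 is 1 / alpha(W_delta):
q, delta = 5, 0.3
assert np.isclose(1 / lsi_constant(q, delta),
                  (q - 1) * np.log(q - 1) / ((q - 2) * delta))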
In view of Proposition 12 and the intractability of computing LSI constants for general Markov chains, we often “compare” a given irreducible channel V ∈ Rq×q_sto with a q-ary symmetric channel Wδ ∈ Rq×q_sto to try and establish an LSI for it. We assume for the sake of simplicity that V is doubly stochastic and has uniform stationary pmf (just like q-ary symmetric channels). Usually, such a comparison between Wδ and V requires us to prove domination of Dirichlet forms, such as:

∀f ∈ L2(X, u), EV(f, f) ≥ EWδ(f, f) = (qδ/(q−1)) Estd(f, f)    (59)

where we use (18). Such pointwise domination results immediately produce LSIs, (54) and (56), for V. Furthermore, they also lower bound the LSI constants of V; for example:

α(V) ≥ α(Wδ).    (60)

This in turn begets other results such as (57) and (58) for the channel V (albeit with worse constants in the exponents since the LSI constants of Wδ are used instead of those for V). More general versions of Dirichlet form domination between Markov chains on different state spaces with different stationary distributions, and the resulting bounds on their LSI constants are presented in [25, Lemmata 3.3 and 3.4]. We next illustrate that the information theoretic notion of less noisy domination is a sufficient condition for various kinds of pointwise Dirichlet form domination.
Theorem 4′ (Domination of Dirichlet Forms). Let W, V ∈ Rq×q_sto be doubly stochastic channels, and π = u be the uniform stationary distribution. Then, the following are true:
1) If W ln V, then:

∀f ∈ L2(X, u), EVV*(f, f) ≥ EWW*(f, f).

2) If W ∈ Rq×q_⪰0 is positive semidefinite, V is normal (i.e. V V^T = V^T V), and W ln V, then:

∀f ∈ L2(X, u), EV(f, f) ≥ EW(f, f).

3) If W = Wδ ∈ Rq×q_sto is any q-ary symmetric channel with δ ∈ [0, (q−1)/q] and Wδ ln V, then:

∀f ∈ L2(X, u), EV(f, f) ≥ (qδ/(q−1)) Estd(f, f).
Proof.
Part 1: First observe that:
1 T
f Iq − W W T f
q
1 T
2
∀f ∈ L (X , u) , EV V ∗ (f, f ) = f Iq − V V T f
q
∀f ∈ L2 (X , u) , EW W ∗ (f, f ) =
where we use the facts that W T = W ∗ and V T = V ∗ because the stationary distribution is uniform. This implies
that EV V ∗ (f, f ) ≥ EW W ∗ (f, f ) for every f ∈ L2 (X , u) if and only if Iq − V V T PSD Iq − W W T , which is true if
and only if W W T PSD V V T . Since W ln V , we get W W T PSD V V T from part 2 of Proposition 8 after letting
P X = u = PX W = P X V .
Part 2: Once again, we first observe using (52) that:
1 T
W + WT
2
∀f ∈ L (X , u) , EW (f, f ) = f
Iq −
f,
q
2
V +VT
1
f.
∀f ∈ L2 (X , u) , EV (f, f ) = f T Iq −
q
2
So, EV (f, f ) ≥ EW (f, f ) for every f ∈ L2 (X , u) if and only if W + W T /2 PSD V + V T /2. Since W W T PSD
V V T from the proof of part 1, it is sufficient to prove that:
W W T PSD V V T ⇒
W + WT
V +VT
PSD
.
2
2
(61)
Lemma 2 in Appendix C establishes the claim in (61) because W ∈ Rq×q
0 and V is a normal matrix.
q−1
Part 3: We note that when V is a normal matrix, this result follows from part 2 because Wδ ∈ Rq×q
,
0 for δ ∈ 0, q
as can be seen from part 2 of Proposition 4. For a general doubly stochastic channel V , we need to prove that
qδ
EV (f, f ) ≥ EWδ (f, f ) = q−1
Estd (f, f ) for every f ∈ L2 (X , u) (where we use (18)). Following the proof of part 2,
it is sufficient to prove (61) with W = Wδ :11
V +VT
Wδ2 PSD V V T ⇒ Wδ PSD
2
where Wδ2 = Wδ WδT and Wδ = Wδ + WδT /2. Recall the Löwner-Heinz theorem [43], [44], (cf. [45, Section 6.6,
Problem 17]), which states that for A, B ∈ Rq×q
0 and 0 ≤ p ≤ 1:
A PSD B ⇒ Ap PSD B p
(62)
or equivalently, f : [0, ∞) → R, f (x) = xp is an operator monotone function for p ∈ [0, 1]. Using (62) with p = 21
(cf. [19, Corollary 7.7.4 (b)]), we have:
1
Wδ2 PSD V V T ⇒ Wδ PSD V V T 2
1
T 2
because the Gramian matrix V V T ∈ Rq×q
is the unique positive semidefinite square root matrix of
0 . (Here, V V
V V T .)
Let V V T = QΛQT and (V + V T )/2 = U ΣU T be the spectral decompositions of V V T and (V + V T )/2, where Q
and U are orthogonal matrices with eigenvectors as columns, and Λ and Σ are diagonal matrices of eigenvalues. Since
√
V V T and (V + V T )/2 are both doubly stochastic, they both have the unit norm eigenvector 1/ q corresponding to
the maximum eigenvalue of unity. In fact, we have:
1
1
V +VT
1
1
T 2 1
VV
and
√ =√
√ =√
q
q
2
q
q
1
1
1
where we use the fact that (V V T ) 2 = QΛ 2 QT is the spectral decomposition of (V V T ) 2 . For any matrix A ∈ Rq×q
sym , let
λ1 (A) ≥ λ2 (A) ≥ · · · ≥ λq (A) denote the eigenvalues of A in descending order. Without loss of generality, we assume
1
that [Λ]j,j = λj (V V T ) and [Σ]j,j = λj ((V + V T )/2) for every 1 ≤ j ≤ q. So, λ1 ((V V T ) 2 ) = λ1 ((V + V T )/2) = 1,
√
and the first columns of both Q and U are equal to 1/ q.
From part 2 of Proposition 4, we have Wδ = QDQT = U DU T , where D is the diagonal matrix of eigenvalues
δ
such that [D]1,1 = λ1 (Wδ ) = 1 and [D]j,j = λj (Wδ ) = 1 − δ − q−1
for 2 ≤ j ≤ q. Note that we may use either of
√
the eigenbases, Q or U , because they both have first column 1/ q, which is the eigenvector of Wδ corresponding to
λ1 (Wδ ) = 1 since Wδ is doubly stochastic, and the remaining eigenvector columns are permitted to be any orthonormal
√
δ
basis of span(1/ q)⊥ as λj (Wδ ) = 1 − δ − q−1
for 2 ≤ j ≤ q. Hence, we have:
Wδ PSD V V T
Wδ PSD
21
1
1
⇔ QDQT PSD QΛ 2 QT ⇔ D PSD Λ 2 ,
V +VT
⇔ U DU T PSD U ΣU T ⇔ D PSD Σ.
2
1
1
In order to show that D PSD Λ 2 ⇒ D PSD Σ, it suffices to prove that Λ 2 PSD Σ. Recall from [45, Corollary
3.1.5] that for any matrix A ∈ Rq×q , we have:12
1
A + AT
T 2
∀i ∈ {1, . . . , q} , λi AA
.
(63)
≥ λi
2
1
Hence, Λ 2 PSD Σ is true, cf. [46, Lemma 2.5]. This completes the proof.
Theorem 4′ includes Theorem 4 from section II as part 3, and also provides two other useful pointwise Dirichlet form domination results. Part 1 of Theorem 4′ states that less noisy domination implies discrete Dirichlet form domination. In particular, if we have W_δ ⪰_ln V for some irreducible q-ary symmetric channel W_δ ∈ R^{q×q}_{sto} and irreducible doubly stochastic channel V ∈ R^{q×q}_{sto}, then part 1 implies that:
∀n ∈ ℕ,  D(µVⁿ || u) ≤ (1 − α(W_δ W_δ*))ⁿ D(µ || u)   (64)
for all pmfs µ ∈ Pq , where α(Wδ Wδ∗ ) is computed in part 2 of Proposition 12. However, it is worth mentioning that
(58) for Wδ and Proposition 1 directly produce (64). So, such ergodicity results for the discrete-time Markov chain V
do not require the full power of the Dirichlet form domination in part 1. Regardless, Dirichlet form domination results,
such as in parts 2 and 3, yield several functional inequalities (like Poincaré inequalities and LSIs) which have many
other potent consequences as well.
Parts 2 and 3 of Theorem 4′ convey that less noisy domination also implies the usual (continuous) Dirichlet form
domination under regularity conditions. We note that in part 2, the channel W is more general than that in part 3, but
the channel V is restricted to be normal (which includes the case where V is an additive noise channel). The proofs of
these parts essentially consist of two segments. The first segment uses part 1, and the second segment illustrates that
pointwise domination of discrete Dirichlet forms implies pointwise domination of Dirichlet forms (as shown in (59)).
This latter segment is encapsulated in Lemma 2 of Appendix C for part 2, and requires a slightly more sophisticated
proof pertaining to q-ary symmetric channels in part 3.
VIII. CONCLUSION
In closing, we briefly reiterate our main results by delineating a possible program for proving LSIs for certain Markov chains. Given an arbitrary irreducible doubly stochastic channel V ∈ R^{q×q}_{sto} with minimum entry ν = min{[V]_{i,j} : 1 ≤ i, j ≤ q} > 0 and q ≥ 2, we can first use Theorem 2 to generate a q-ary symmetric channel W_δ ∈ R^{q×q}_{sto} with δ = ν/(1 − (q−1)ν + ν/(q−1)) such that W_δ ⪰_deg V. This also means that W_δ ⪰_ln V, using Proposition 3. Moreover, the δ parameter can be improved using Theorem 3 (or Propositions 10 and 11) if V is an additive noise channel. We can then use Theorem 4′ to deduce a pointwise domination of Dirichlet forms. Since W_δ satisfies the LSIs (54) and (56) with corresponding LSI constants given in Proposition 12, Theorem 4′ establishes the following LSIs for V:
D(f²u || u) ≤ (1/α(W_δ)) E_V(f, f)   (65)
D(f²u || u) ≤ (1/α(W_δ W_δ*)) E_{VV*}(f, f)   (66)
for every f ∈ L²(X, u) such that ||f||_u = 1. These inequalities can be used to derive a myriad of important facts about V. We note that the equivalent characterizations of the less noisy preorder in Theorem 1 and Proposition 8 are particularly useful for proving some of these results. Finally, we accentuate that Theorems 2 and 3 address our motivation in subsection I-D by providing analogs of the relationship between less noisy domination by q-ary erasure channels and contraction coefficients in the context of q-ary symmetric channels.
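To make the delineated program concrete, the following sketch computes the two quantities that drive it, the δ of Theorem 2 and the LSI constant α(W_δ) from part 1 of Proposition 12, for a given doubly stochastic V. The helper name and the example matrix are our own, and the natural logarithm is assumed in the LSI constant.

    import numpy as np

    def lsi_program_constants(V):
        """Sketch of the program above; V is assumed irreducible, doubly
        stochastic and strictly positive entrywise."""
        q = V.shape[0]
        nu = V.min()                                   # minimum entry of V
        # Theorem 2: delta of the dominating q-ary symmetric channel W_delta.
        delta = nu / (1.0 - (q - 1) * nu + nu / (q - 1))
        # Proposition 12, part 1: LSI constant of W_delta (natural log assumed).
        alpha_1u = 0.5 if q == 2 else (1 - 2.0 / q) / np.log(q - 1)
        alpha_W = q * delta / (q - 1) * alpha_1u
        return delta, alpha_W

    V = np.array([[0.6, 0.2, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.2, 0.2, 0.6]])
    print(lsi_program_constants(V))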
APPENDIX A
BASICS OF MAJORIZATION THEORY
Since we use some majorization arguments in our analysis, we briefly introduce the notion of group majorization
over row vectors in Rq (with q ∈ N) in this appendix. Given a group G ⊆ Rq×q of matrices (with the operation of
matrix multiplication), we may define a preorder called G-majorization over row vectors in Rq . For two row vectors
x, y ∈ Rq , we say that x G-majorizes y if y ∈ conv ({xG : G ∈ G}), where {xG : G ∈ G} is the orbit of x under the
group G. Group majorization intuitively captures a notion of “spread” of vectors. So, x G-majorizes y when x is more
spread out than y with respect to G. We refer readers to [9, Chapter 14, Section C] and the references therein for a
thorough treatment of group majorization. If we let G be the symmetric group of all permutation matrices in Rq×q ,
then G-majorization corresponds to traditional majorization of vectors in Rq as introduced in [39]. The next proposition
collects some results about traditional majorization.
Proposition 13 (Majorization [9], [39]). Given row vectors x = (x1 , . . . , xq ) , y = (y1 , . . . , yq ) ∈ Rq , let x(1) ≤ · · · ≤
x(q) and y(1) ≤ · · · ≤ y(q) denote the re-orderings of x and y in ascending order. Then, the following are equivalent:
1) x majorizes y, or equivalently, y resides in the convex hull of all permutations of x.
2) y = xD for some doubly stochastic matrix D ∈ R^{q×q}_{sto}.
3) The entries of x and y satisfy:
Σ_{i=1}^{k} x_(i) ≤ Σ_{i=1}^{k} y_(i)  for k = 1, . . . , q − 1,   and   Σ_{i=1}^{q} x_(i) = Σ_{i=1}^{q} y_(i).
When these conditions are true, we will write x ⪰_maj y.
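A small numerical check of the equivalence between conditions 2 and 3 of Proposition 13; the helper below and its tolerance are our own choices.

    import numpy as np

    def majorizes(x, y, tol=1e-12):
        """Condition 3 of Proposition 13: ascending-order partial sums of x are
        bounded by those of y, with equal total sums."""
        xs, ys = np.sort(x), np.sort(y)            # ascending re-orderings x_(i), y_(i)
        px, py = np.cumsum(xs), np.cumsum(ys)
        return bool(np.all(px[:-1] <= py[:-1] + tol) and abs(px[-1] - py[-1]) <= tol)

    x = np.array([4.0, 1.0, 1.0])
    D = np.full((3, 3), 1.0 / 3.0)                 # a doubly stochastic matrix
    y = x @ D                                      # condition 2 of Proposition 13
    print(majorizes(x, y), majorizes(y, x))        # True, False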
In the context of subsection I-C, given an Abelian group (X, ⊕) of order q, another useful notion of G-majorization can be obtained by letting G = {P_x ∈ R^{q×q} : x ∈ X} be the group of permutation matrices defined in (4) that is isomorphic to (X, ⊕). For such a choice of G, we write x ⪰_X y when x G-majorizes (or X-majorizes) y for any two row vectors x, y ∈ R^q. We will only require one fact about such group majorization, which we present in the next proposition.
Proposition 14 (Group Majorization). Given two row vectors x, y ∈ R^q, x ⪰_X y if and only if there exists λ ∈ P_q such that y = x circ_X(λ).
Proof. Observe that:
x ⪰_X y ⇔ y ∈ conv({xP_z : z ∈ X})
⇔ y = λ circ_X(x) for some λ ∈ P_q
⇔ y = x circ_X(λ) for some λ ∈ P_q
where the second step follows from (7), and the final step follows from the commutativity of X-circular convolution.
Proposition 14 parallels the equivalence between parts 1 and 2 of Proposition 13, because circ_X(λ) is a doubly stochastic matrix for every pmf λ ∈ P_q. In closing this appendix, we mention a well-known special case of such group majorization. When (X, ⊕) is the cyclic Abelian group Z/qZ of integers with addition modulo q, G = {I_q, P_q, P_q², . . . , P_q^{q−1}} is the group of all cyclic permutation matrices in R^{q×q}, where P_q ∈ R^{q×q} is defined in (8). The corresponding notion of G-majorization is known as cyclic majorization, cf. [47].
APPENDIX B
PROOFS OF PROPOSITIONS 4 AND 12
Proof of Proposition 4.
Part 1: This is obvious from (10).
Part 2: Since the DFT matrix jointly diagonalizes all circulant matrices, it diagonalizes every W_δ for δ ∈ R (using part 1). The corresponding eigenvalues are all real because W_δ is symmetric. To explicitly compute these eigenvalues, we refer to [19, Problem 2.2.P10]. Observe that for any row vector x = (x₀, . . . , x_{q−1}) ∈ R^q, the corresponding circulant matrix satisfies:
circ_{Z/qZ}(x) = Σ_{k=0}^{q−1} x_k P_q^k = F_q (Σ_{k=0}^{q−1} x_k D_q^k) F_q^H = F_q diag(√q x F_q) F_q^H
where the first equality follows from (6) for the group Z/qZ [19, Section 0.9.6], D_q = diag((1, ω, ω², . . . , ω^{q−1})), and P_q = F_q D_q F_q^H ∈ R^{q×q} is defined in (8). Hence, we have:
λ_j(W_δ) = Σ_{k=1}^{q} (w_δ)_k ω^{(j−1)(k−1)} = 1 for j = 1, and = 1 − δ − δ/(q−1) for j = 2, . . . , q,
where w_δ = (1 − δ, δ/(q−1), . . . , δ/(q−1)).
Part 3: This is also obvious from (10)–recall that a square stochastic matrix is doubly stochastic if and only if its
stationary distribution is uniform [19, Section 8.7].
Part 4: For δ ≠ (q−1)/q, we can verify that W_τ W_δ = I_q when τ = −δ/(1 − δ − δ/(q−1)) by direct computation:
[W_τ W_δ]_{j,j} = (1 − τ)(1 − δ) + (q − 1) · (τ/(q−1)) · (δ/(q−1)) = 1,  for j = 1, . . . , q,
[W_τ W_δ]_{j,k} = δ(1 − τ)/(q−1) + τ(1 − δ)/(q−1) + (q − 2) τδ/(q−1)² = 0,  for j ≠ k and 1 ≤ j, k ≤ q.
The δ = (q−1)/q case follows from (10).
Part 5: The set {W_δ : δ ∈ R\{(q−1)/q}} is closed over matrix multiplication. Indeed, for ε, δ ∈ R\{(q−1)/q}, we can straightforwardly verify that W_ε W_δ = W_τ with τ = ε + δ − εδ − εδ/(q−1). Moreover, τ ≠ (q−1)/q because W_τ is invertible (since W_ε and W_δ are invertible using part 4). The set also includes the identity matrix as W₀ = I_q, and multiplicative inverses (using part 4). Finally, the associativity of matrix multiplication and the commutativity of circulant matrices proves that {W_δ : δ ∈ R\{(q−1)/q}} is an Abelian group.
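The computations in parts 2, 4 and 5 above are easy to verify numerically. The following sketch constructs W_δ directly from its description in the text (1 − δ on the diagonal, δ/(q−1) elsewhere; this is our reading of (10)) and checks the stated eigenvalues, the inverse and the composition rule.

    import numpy as np

    def W(delta, q):
        """q-ary symmetric channel W_delta as described above (a sketch)."""
        return (1 - delta) * np.eye(q) + delta / (q - 1) * (np.ones((q, q)) - np.eye(q))

    q, delta = 4, 0.3
    Wd = W(delta, q)

    # Part 2: eigenvalues are 1 (once) and 1 - delta - delta/(q-1) (q-1 times).
    print(np.sort(np.linalg.eigvalsh(Wd)))

    # Part 4: W_tau inverts W_delta for tau = -delta/(1 - delta - delta/(q-1)).
    tau = -delta / (1 - delta - delta / (q - 1))
    print(np.allclose(W(tau, q) @ Wd, np.eye(q)))

    # Part 5: W_eps W_delta = W_tau' with tau' = eps + delta - eps*delta - eps*delta/(q-1).
    eps = 0.2
    tau2 = eps + delta - eps * delta - eps * delta / (q - 1)
    print(np.allclose(W(eps, q) @ Wd, W(tau2, q)))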
Proof of Proposition 12.
Part 1: We first recall from [25, Appendix, Theorem A.1] that the Markov chain 1u ∈ R^{q×q}_{sto} with uniform stationary distribution π = u ∈ P_q has LSI constant:
α(1u) = inf_{f ∈ L²(X,u): ||f||_u = 1, D(f²u||u) ≠ 0}  E_std(f, f)/D(f²u || u)  =  1/2 for q = 2,  and  (1 − 2/q)/log(q−1) for q > 2.
Now using (18), α(W_δ) = (qδ/(q−1)) α(1u), which proves part 1.
Part 2: Observe that W_δ W_δ* = W_δᵀ W_δ = W_δ² = W_{δ′}, where the first equality holds because W_δ has uniform stationary pmf, and δ′ = 2δ − qδ²/(q−1) using the proof of part 5 of Proposition 4. As a result, the discrete LSI constant α(W_δ W_δ*) = α(W_{δ′}), which we can calculate using part 1 of this proposition.
Part 3: It is well-known in the literature that ρ_max(u, W_δ) equals the second largest singular value of the divergence transition matrix diag(√u)^{−1} W_δ diag(√u) = W_δ (see [36, Subsection I-B] and the references therein). Hence, from part 2 of Proposition 4, we have ρ_max(u, W_δ) = 1 − δ − δ/(q−1).
Part 4: First recall the Dobrushin contraction coefficient (for total variation distance) for any channel W ∈ R^{q×r}_{sto}:
η_TV(W) ≜ sup_{P_X, Q_X ∈ P_q : P_X ≠ Q_X}  ||P_X W − Q_X W||_{ℓ¹} / ||P_X − Q_X||_{ℓ¹}   (67)
= (1/2) max_{x, x′ ∈ [q]} ||W_{Y|X}(·|x) − W_{Y|X}(·|x′)||_{ℓ¹}   (68)
where ||·||_{ℓ¹} denotes the ℓ¹-norm, and the second equality is Dobrushin's two-point characterization of η_TV [48]. Using this characterization, we have:
η_TV(W_δ) = (1/2) max_{x, x′ ∈ [q]} ||w_δ P_q^x − w_δ P_q^{x′}||_{ℓ¹} = 1 − δ − δ/(q−1)
where w_δ is the noise pmf of W_δ for δ ∈ [0, 1], and P_q ∈ R^{q×q} is defined in (8). It is well-known in the literature (see e.g. the introduction of [49] and the references therein) that:
ρ_max(u, W_δ)² ≤ η_KL(W_δ) ≤ η_TV(W_δ).   (69)
Hence, the value of η_TV(W_δ) and part 3 of this proposition establish part 4. This completes the proof.
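For illustration, the two-point characterization (68) can be evaluated directly. The sketch below checks that η_TV(W_δ) matches 1 − δ − δ/(q−1) for a sample choice of q and δ; the function name is ours.

    import numpy as np

    def eta_tv(W):
        """Dobrushin contraction coefficient via the two-point characterization (68)."""
        q = W.shape[0]
        return 0.5 * max(np.abs(W[x] - W[xp]).sum() for x in range(q) for xp in range(q))

    q, delta = 4, 0.3
    Wd = (1 - delta) * np.eye(q) + delta / (q - 1) * (np.ones((q, q)) - np.eye(q))
    print(eta_tv(Wd), 1 - delta - delta / (q - 1))   # both equal 0.6 here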
APPENDIX C
AUXILIARY RESULTS
Proposition 15 (Properties of Domination Factor Function). Given a channel V ∈ R^{q×r}_{sto} that is strictly positive entrywise, its domination factor function µ_V : (0, (q−1)/q) → R⁺ is continuous, convex, and strictly increasing. Moreover, we have lim_{δ→(q−1)/q} µ_V(δ) = +∞.
Proof. We first prove that µ_V is finite on (0, (q−1)/q). For any P_X, Q_X ∈ P_q and any δ ∈ (0, (q−1)/q), we have:
D(P_X V || Q_X V) ≤ χ²(P_X V || Q_X V) ≤ (1/ν)||(P_X − Q_X)V||²_{ℓ²} ≤ (1/ν)||P_X − Q_X||²_{ℓ²} ||V||²_{op}
where the first inequality is well-known (see e.g. [36, Lemma 8]) and ν = min{[V]_{i,j} : 1 ≤ i ≤ q, 1 ≤ j ≤ r}, and:
D(P_X W_δ || Q_X W_δ) ≥ (1/2)||(P_X − Q_X)W_δ||²_{ℓ²} ≥ (1/2)||P_X − Q_X||²_{ℓ²} (1 − δ − δ/(q−1))²
where the first inequality follows from Pinsker's inequality (see e.g. [36, Proof of Lemma 6]), and the second inequality follows from part 2 of Proposition 4. Hence, we get:
∀δ ∈ (0, (q−1)/q),  µ_V(δ) ≤ 2||V||²_{op} / (ν (1 − δ − δ/(q−1))²).   (70)
To prove that µ_V is strictly increasing, observe that W_{δ′} ⪰_deg W_δ for 0 < δ′ < δ < (q−1)/q, because W_δ = W_{δ′} W_p with:
p = (δ − δ′)/(1 − δ′ − δ′/(q−1)) ∈ (0, (q−1)/q)
where we use part 4 of Proposition 4, the proof of part 5 of Proposition 4 in Appendix B, and the fact that W_p = W_{δ′}^{−1} W_δ. As a result, we have for every P_X, Q_X ∈ P_q:
D(P_X W_δ || Q_X W_δ) ≤ η_KL(W_p) D(P_X W_{δ′} || Q_X W_{δ′})
using the SDPI for KL divergence, where part 4 of Proposition 12 reveals that η_KL(W_p) ∈ (0, 1) since p ∈ (0, (q−1)/q). Hence, we have for 0 < δ′ < δ < (q−1)/q:
µ_V(δ′) ≤ η_KL(W_p) µ_V(δ)   (71)
using (43), and the fact that 0 < D(P_X W_{δ′} || Q_X W_{δ′}) < +∞ if and only if 0 < D(P_X W_δ || Q_X W_δ) < +∞. This implies that µ_V is strictly increasing.
We next establish that µ_V is convex and continuous. For any fixed P_X, Q_X ∈ P_q such that P_X ≠ Q_X, consider the function δ ↦ D(P_X V || Q_X V)/D(P_X W_δ || Q_X W_δ) with domain (0, (q−1)/q). This function is convex, because δ ↦ D(P_X W_δ || Q_X W_δ) is convex by the convexity of KL divergence, and the reciprocal of a non-negative convex function is convex. Therefore, µ_V is convex since (43) defines it as a pointwise supremum of a collection of convex functions. Furthermore, we note that µ_V is also continuous since a convex function is continuous on the interior of its domain.
Finally, observe that:
liminf_{δ→(q−1)/q} µ_V(δ) ≥ sup_{P_X, Q_X ∈ P_q : P_X ≠ Q_X}  liminf_{δ→(q−1)/q}  D(P_X V || Q_X V) / D(P_X W_δ || Q_X W_δ)
= sup_{P_X, Q_X ∈ P_q : P_X ≠ Q_X}  D(P_X V || Q_X V) / limsup_{δ→(q−1)/q} D(P_X W_δ || Q_X W_δ)
= +∞
where the first inequality follows from the minimax inequality and (43) (note that 0 < D(P_X W_δ || Q_X W_δ) < +∞ for P_X ≠ Q_X and δ close to (q−1)/q), and the final equality holds because P_X W_{(q−1)/q} = u for every P_X ∈ P_q.
Lemma 2 (Gramian Löwner Domination implies Symmetric Part Löwner Domination). Given A ∈ R^{q×q}_{⪰0} and B ∈ R^{q×q} that is normal, we have:
A² = AAᵀ ⪰_PSD BBᵀ  ⇒  A = (A + Aᵀ)/2 ⪰_PSD (B + Bᵀ)/2.
Proof. Since AAᵀ ⪰_PSD BBᵀ ⪰_PSD 0, using the Löwner-Heinz theorem (presented in (62)) with p = 1/2, we get:
A = (AAᵀ)^{1/2} ⪰_PSD (BBᵀ)^{1/2} ⪰_PSD 0
where the first equality holds because A ∈ R^{q×q}_{⪰0}. It suffices to now prove that (BBᵀ)^{1/2} ⪰_PSD (B + Bᵀ)/2, as the transitive property of ⪰_PSD will produce A ⪰_PSD (B + Bᵀ)/2. Since B is normal, B = UDU^H by the complex spectral theorem [38, Theorem 7.9], where U is a unitary matrix and D is a complex diagonal matrix. Using this unitary diagonalization, we have:
(BBᵀ)^{1/2} = U|D|U^H ⪰_PSD U Re{D} U^H = (B + Bᵀ)/2
since |D| ⪰_PSD Re{D}, where |D| and Re{D} denote the element-wise magnitude and real part of D, respectively. This completes the proof.
ACKNOWLEDGMENT
We would like to thank an anonymous reviewer and the Associate Editor, Chandra Nair, for bringing the reference
[12] to our attention. Y.P. would like to thank Dr. Ziv Goldfeld for discussions on secrecy capacity.
REFERENCES
[1] A. Makur and Y. Polyanskiy, “Less noisy domination by symmetric channels,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany,
June 25-30 2017, pp. 2463–2467.
[2] R. Ahlswede and P. Gács, “Spreading of sets in product spaces and hypercontraction of the Markov operator,” Ann. Probab., vol. 4, no. 6, pp.
925–939, December 1976.
[3] Y. Polyanskiy and Y. Wu, “Strong data-processing inequalities for channels and Bayesian networks,” in Convexity and Concentration, ser. The
IMA Volumes in Mathematics and its Applications, E. Carlen, M. Madiman, and E. M. Werner, Eds., vol. 161. New York: Springer, 2017,
pp. 211–249.
[4] C. E. Shannon, “A note on a partial ordering for communication channels,” Information and Control, vol. 1, pp. 390–397, 1958.
[5] ——, “The zero error capacity of a noisy channel,” IRE Trans. Inform. Theory, vol. 2, no. 3, pp. 706–715, September 1956.
[6] J. E. Cohen, J. H. B. Kemperman, and G. Zbăganu, Comparisons of Stochastic Matrices with Applications in Information Theory, Statistics,
Economics and Population Sciences. Ann Arbor: Birkhäuser, 1998.
[7] T. M. Cover, “Broadcast channels,” IEEE Trans. Inform. Theory, vol. IT-18, no. 1, pp. 2–14, January 1972.
[8] P. P. Bergmans, “Random coding theorem for broadcast channels with degraded components,” IEEE Trans. Inform. Theory, vol. IT-19, no. 2,
pp. 197–207, March 1973.
[9] A. W. Marshall, I. Olkin, and B. C. Arnold, Inequalities: Theory of Majorization and Its Applications, 2nd ed., ser. Springer Series in Statistics.
New York: Springer, 2011.
[10] J. Körner and K. Marton, “Comparison of two noisy channels,” in Topics in Information Theory (Second Colloq., Keszthely, 1975), Amsterdam:
North-Holland, 1977, pp. 411–423.
[11] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed. New York: Cambridge University
Press, 2011.
[12] M. van Dijk, “On a special class of broadcast channels with confidential messages,” IEEE Trans. Inform. Theory, vol. 43, no. 2, pp. 712–714,
March 1997.
[13] A. El Gamal, “The capacity of a class of broadcast channels,” IEEE Trans. Inform. Theory, vol. IT-25, no. 2, pp. 166–169, March 1979.
[14] C. Nair, “Capacity regions of two new classes of 2-receiver broadcast channels,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Seoul, South
Korea, June 28-July 3 2009, pp. 1839–1843.
[15] Y. Geng, C. Nair, S. Shamai, and Z. V. Wang, “On broadcast channels with binary inputs and symmetric outputs,” IEEE Trans. Inform. Theory,
vol. 59, no. 11, pp. 6980–6989, November 2013.
[16] D. Sutter and J. M. Renes, “Universal polar codes for more capable and less noisy channels and sources,” April 2014, arXiv:1312.5990v3
[].
[17] F. Unger, “Better gates can make fault-tolerant computation impossible,” Electronic Colloquium on Computational Complexity (ECCC), no.
164, November 2010.
[18] M. Artin, Algebra, 2nd ed. New Jersey: Pearson Prentice Hall, 2011.
[19] R. A. Horn and C. R. Johnson, Matrix Analysis, 2nd ed. New York: Cambridge University Press, 2013.
[20] P. Diaconis, Group Representations in Probability and Statistics, S. S. Gupta, Ed. USA: Inst. Math. Stat. Monogr., 1988, vol. 11.
[21] Y. Polyanskiy, “Saddle point in the minimax converse for channel coding,” IEEE Trans. Inform. Theory, vol. 59, no. 5, pp. 2576–2595, May
2013.
[22] J. E. Cohen, Y. Iwasa, G. Rautu, M. B. Ruskai, E. Seneta, and G. Zbăganu, “Relative entropy under mappings by stochastic matrices,” Linear
Algebra Appl., vol. 179, pp. 211–235, January 1993.
[23] M. Raginsky, “Strong data processing inequalities and Φ-Sobolev inequalities for discrete channels,” IEEE Trans. Inform. Theory, vol. 62,
no. 6, pp. 3355–3389, June 2016.
[24] I. Csiszár and J. Körner, “Broadcast channels with confidential messages,” IEEE Trans. Inform. Theory, vol. IT-24, no. 3, pp. 339–348, May
1978.
[25] P. Diaconis and L. Saloff-Coste, “Logarithmic Sobolev inequalities for finite Markov chains,” Ann. Appl. Probab., vol. 6, no. 3, pp. 695–750,
1996.
[26] R. Montenegro and P. Tetali, Mathematical Aspects of Mixing Times in Markov Chains, ser. Found. Trends Theor. Comput. Sci., M. Sudan,
Ed. Boston-Delft: now Publishers Inc., 2006, vol. 1, no. 3.
[27] E. C. Posner, “Random coding strategies for minimum entropy,” IEEE Trans. Inform. Theory, vol. IT-21, no. 4, pp. 388–391, July 1975.
[28] Y. Polyanskiy and Y. Wu, “Lecture notes on information theory,” August 2017, Lecture Notes 6.441, Department of Electrical Engineering and
Computer Science, MIT, Cambridge, MA, USA.
[29] W. Rudin, Principles of Mathematical Analysis, 3rd ed., ser. International Series in Pure and Applied Mathematics. New York: McGraw-Hill,
Inc., 1976.
[30] V. Anantharam, A. A. Gohari, S. Kamath, and C. Nair, “On hypercontractivity and a data processing inequality,” in Proc. IEEE Int. Symp. Inf.
Theory (ISIT), Honolulu, HI, USA, June 29-July 4 2014, pp. 3022–3026.
[31] M.-D. Choi, M. B. Ruskai, and E. Seneta, “Equivalence of certain entropy contraction coefficients,” Linear Algebra Appl., vol. 208-209, pp.
29–36, September 1994.
[32] S. Boyd and L. Vandenberghe, Convex Optimization. New York: Cambridge University Press, 2004.
[33] C. Stępniak, “Ordering of nonnegative definite matrices with application to comparison of linear models,” Linear Algebra Appl., vol. 70, pp.
67–71, October 1985.
[34] R. G. Gallager, Information Theory and Reliable Communication. New York: John Wiley & Sons, Inc., 1968.
[35] A. Rényi, “On measures of dependence,” Acta Math. Hungar., vol. 10, no. 3-4, pp. 441–451, 1959.
[36] A. Makur and L. Zheng, “Bounds between contraction coefficients,” in Proc. 53rd Allerton Conference, Allerton House, UIUC, Illinois, USA,
September 29-October 2 2015, pp. 1422–1429.
[37] O. V. Sarmanov, “Maximal correlation coefficient (non-symmetric case),” Dokl. Akad. Nauk, vol. 121, no. 1, pp. 52–55, 1958.
[38] S. Axler, Linear Algebra Done Right, 2nd ed., ser. Undergraduate Texts in Mathematics. New York: Springer, 2004.
[39] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, 1st ed. London: Cambridge University Press, 1934.
[40] M. Ledoux, “Concentration of measure and logarithmic Sobolev inequalities,” in Séminaire de Probabilités XXXIII, ser. Lecture Notes in
Mathematics, J. Azéma, M. Émery, M. Ledoux, and M. Yor, Eds., vol. 1709. Berlin, Heidelberg: Springer, 1999, pp. 120–216.
[41] D. Bakry, “Functional inequalities for Markov semigroups,” in Probability Measures on Groups: Recent Directions and Trends, ser. Proceedings
of the CIMPA-TIFR School, Tata Institute of Fundamental Research, Mumbai, India, 2002, S. G. Dani and P. Graczyk, Eds. New Delhi,
India: Narosa Publishing House, 2006, pp. 91–147.
[42] L. Miclo, “Remarques sur l’hypercontractivité et l’évolution de l’entropie pour des chaînes de Markov finies,” Séminaire de probabilités de
Strasbourg, vol. 31, pp. 136–167, 1997.
[43] K. Löwner, “Über monotone matrixfunktionen,” Math. Z., vol. 38, no. 1, pp. 177–216, 1934.
[44] E. Heinz, “Beiträge zur störungstheorie der spektralzerlegung,” Math. Ann., vol. 123, pp. 415–438, 1951.
[45] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. New York: Cambridge University Press, 1991.
[46] P. Diaconis and L. Saloff-Coste, “Nash inequalities for finite Markov chains,” J. Theoret. Probab., vol. 9, no. 2, pp. 459–510, 1996.
[47] A. Giovagnoli and H. P. Wynn, “Cyclic majorization and smoothing operators,” Linear Algebra Appl., vol. 239, pp. 215–225, May 1996.
[48] R. L. Dobrushin, “Central limit theorem for nonstationary Markov chains. I,” Theory Probab. Appl., vol. 1, no. 1, pp. 65–80, 1956.
[49] Y. Polyanskiy and Y. Wu, “Dissipation of information in channels with input constraints,” IEEE Trans. Inform. Theory, vol. 62, no. 1, pp.
35–55, January 2016.
| 10 |
On Time-Bandwidth Product of Multi-Soliton Pulses
Alexander Span∗, Vahid Aref† , Henning Bülow† , and Stephan ten Brink∗
arXiv:1705.09468v1 [] 26 May 2017
∗ Institute of Telecommunications, University of Stuttgart, Stuttgart, Germany
† Nokia Bell Labs, Stuttgart, Germany
Abstract—Multi-soliton pulses are potential candidates for
fiber optical transmission where the information is modulated
and recovered in the so-called nonlinear Fourier domain. While
this is an elegant technique to account for the channel nonlinearity, the obtained spectral efficiency, so far, is not competitive
with the classic Nyquist-based schemes. In this paper, we study
the evolution of the time-bandwidth product of multi-solitons
as they propagate along the optical fiber. For second and third
order soliton pulses, we numerically optimize the pulse shapes
to achieve the smallest time-bandwidth product when the phase
of the spectral amplitudes is used for modulation. Moreover, we
analytically estimate the pulse-duration and bandwidth of multi-solitons in some practically important cases. Those estimations
enable us to approximate the time-bandwidth product for higher
order solitons.
I. INTRODUCTION
Advances made over the past decade in coherent optical
technology have significantly improved transmission capacities
to a point where Kerr nonlinearity once again becomes the
limiting factor. The equalization of nonlinear effects is usually
very complex and has a limited gain due to the mixing of
signal and noise on the channel. The optical channel is usually modeled by the Nonlinear Schrödinger Equation (NLSE)
which describes the interplay between Kerr nonlinearity and
chromatic dispersion along the fiber.
The Nonlinear Fourier Transform (NFT) is a potential way
of generating pulses matched to a channel governed by the
NLSE. It maps a pulse to the nonlinear Fourier spectrum with
some beneficial properties. This elegant technique, known also
as inverse scattering method [1], has found applications in
fiber optics when the on-off keying of first order solitons was
developed in the 1970s [2]. Following [3], [4], it has regained
attention as coherent technology allows to exploit all degrees
of freedom offered by the nonlinear spectrum.
Multi-soliton pulses are specific solutions of the NLSE.
Using the NFT, an N −th order soliton, denoted here by
N −soliton, is mapped to a set of N distinct nonlinear frequencies, called eigenvalues, and the corresponding spectral
amplitudes. The key advantage of this representation is that
the complex pulse evolution along the fiber can be expressed
in terms of spectral amplitudes which evolve linearly in the
nonlinear spectrum. Moreover, the transformation is independent of the other spectral amplitudes and eigenvalues. These
properties motivate to modulate data using spectral amplitudes.
On-off keying of 1-soliton pulses, also called fundamental
solitons, has been intensively studied two decades ago for
different optical applications (see [2] and reference therein). To
increase spectral efficiency, it has been proposed to modulate
multi-solitons [3]. One possibility is the independent on-off
keying of N predefined eigenvalues. The concept has been
experimentally demonstrated for up to 10 eigenvalues in [5], [6].
The other possibility is to modulate the spectral amplitudes of
N eigenvalues. The QPSK modulation of spectral amplitudes
has been verified experimentally up to 7 eigenvalues in [7],
[8], [9]. All of these works have a small spectral efficiency.
Characterizing the spectral efficiency of multi-soliton pulses
is still an open problem. First, the statistics of noisy received
pulses in the nonlinear spectrum have not yet been fully
understood, even though there are insightful studies for some
special cases and under some assumptions [10], [11], [12].
Second, the bandwidth and the pulse-duration change as a
multi-soliton propagates along a fiber or as spectral amplitudes
are modulated. The nonlinear evolution makes it hard to
estimate the time-bandwidth product of a multi-soliton.
In this paper, we study the evolution of pulse-duration
and bandwidth of multi-soliton pulses along an optical fiber
link. We numerically optimize the time-bandwidth product
of N −soliton pulses for N = 2 and 3. The results provide
some guidelines for N > 3. We focus on scenarios where the
phases of N spectral amplitudes are modulated independently.
However, our results can also be applied to on-off keying
modulation schemes. We assume that the link is long enough
so that the pulse-duration and bandwidth can reach their respective maximum. We also neglect inter-symbol interference.
Our results show that the optimization of [13] is suboptimal
when the evolution along the fiber is taken into account.
We further introduce a class of N −solitons which are
provably symmetric. A subset of these pulses are already
used in [7], [8], [13]. We derive an analytic approximation
of their pulse-duration. Numerical observations exhibit that
the approximation is tight and can serve as a lower-bound for
other N −solitons. To the best of our knowledge, this is the
first result on the pulse-duration of multi-solitons. We also
approximate the time-bandwidth product by lower-bounding
the maximal bandwidth.
II. PRELIMINARIES ON MULTI-SOLITON PULSES
In this section, we briefly explain the nonlinear Fourier
transform (NFT), the characterization of multi-soliton pulses
in the corresponding nonlinear spectrum and how they can be
generated via the inverse NFT.
A. Nonlinear Fourier Transform
The pulse propagation along an ideally lossless and noiseless fiber is characterized using the standard Nonlinear Schrödinger Equation (NLSE)
∂q(t, z)/∂z + j ∂²q(t, z)/∂t² + 2j|q(t, z)|² q(t, z) = 0.   (1)
The physical pulse Q(τ, ℓ) at location ℓ along the fiber is then described by
Q(τ, ℓ) = √P₀ · q(τ/T₀, ℓ|β₂|/(2T₀²))   with   P₀ · T₀² = |β₂|/γ,
where β₂ < 0 is the chromatic dispersion and γ is the Kerr nonlinearity of the fiber, and T₀ determines the symbol rate.
The closed-form solutions of the NLSE (1) can be described in a nonlinear spectrum defined by the following so-called Zakharov-Shabat system [1]
∂/∂t (ϑ₁(t; z), ϑ₂(t; z))ᵀ = [ −jλ, q(t, z); −q*(t, z), jλ ] (ϑ₁(t; z), ϑ₂(t; z))ᵀ,   (2)
with the boundary condition
(ϑ₁(t; z), ϑ₂(t; z))ᵀ → (1, 0)ᵀ exp(−jλt)   for t → −∞
under the assumption that q(t; z) → 0 decays sufficiently fast as |t| → ∞ (faster than any polynomial). The nonlinear Fourier coefficients (Jost coefficients) are defined as
a(λ; z) = lim_{t→∞} ϑ₁(t; z) exp(jλt)
b(λ; z) = lim_{t→∞} ϑ₂(t; z) exp(−jλt).
The set Ω denotes the set of simple roots of a(λ; z) with positive imaginary part, which are called eigenvalues as they do not change in terms of z, i.e. λk(z) = λk. The nonlinear spectrum is usually described by the following two parts:
(i) Continuous Part: the spectral amplitude Qc(λ; z) = b(λ; z)/a(λ; z) for real frequencies λ ∈ R.
(ii) Discrete Part: {λk, Qd(λk; z)} where λk ∈ Ω, i.e. a(λk; z) = 0, and Qd(λk; z) = b(λk; z) / (∂a(λ; z)/∂λ |_{λ=λk}).
An N−soliton pulse is described by the discrete part only and the continuous part is equal to zero (for any z). The discrete part contains N pairs of eigenvalue and corresponding spectral amplitude, i.e. {λk, Qd(λk; z)}, 1 ≤ k ≤ N.
An important property of the nonlinear spectrum is its simple linear evolution given by [3]
Qd(λk; z) = Qd(λk) exp(−4jλk² z),   (3)
where we define Qd(λk) = Qd(λk; z = 0). The transformation is linear and depends only on its own eigenvalue λk. This property motivates the modulation of data over independently evolving spectral amplitudes.
Note that there are several methods to compute the nonlinear spectrum by numerically solving the Zakharov-Shabat system. Some of these methods are summarized in [3], [14].
B. Inverse NFT
The Inverse NFT (INFT) maps the given nonlinear spectrum to the corresponding pulse in time-domain. For the special case of the spectrum without the continuous part, the Darboux Transformation can be applied to generate the corresponding multi-soliton pulse [15]. Algorithm 1 shows the pseudo-code of the inverse transform, as described in [16]. It generates an N−soliton q(t) recursively by adding a pair {λk, Qd(λk)} in each recursion. The main advantage of this algorithm is that it is exact with a low computational complexity and it can be used to derive some properties of multi-soliton pulses.
Algorithm 1: INFT from Darboux Transform [16]
Input : Discrete Spectrum {λk, Qd(λk)}; k = 1, . . . , N
Output: N−soliton waveform q(t)
begin
    for k ← 1 to N do
        ρk^(0)(t) ← [Qd(λk)/(λk − λk*)] ∏_{m=1, m≠k}^{N} [(λk − λm)/(λk − λm*)] e^{2jλk t};
    q^(0)(t) ← 0;
    for k ← 1 to N do
        ρ(t) ← ρk^(k−1)(t);
        q^(k)(t) ← q^(k−1)(t) + 2j(λk − λk*) ρ*(t)/(1 + |ρ(t)|²);   (4)
        for m ← k + 1 to N do
            ρm^(k)(t) ← [ (λm − λk) ρm^(k−1)(t) + ((λk − λk*)/(1 + |ρ(t)|²)) (ρm^(k−1)(t) − ρ(t)) ] / [ λm − λk* − ((λk − λk*)/(1 + |ρ(t)|²)) (1 + ρ*(t) ρm^(k−1)(t)) ];   (5)
(λ* denotes the complex conjugate of λ)
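For illustration, a direct transcription of Algorithm 1 into Python/NumPy might look as follows. This is a sketch based on our reading of the recursion above (function and variable names are ours), not code from [16].

    import numpy as np

    def darboux_inft(lams, Qd, t):
        """Build an N-soliton q(t) on the time grid t from the discrete spectrum
        {lambda_k, Qd(lambda_k)} via the Darboux recursion of Algorithm 1."""
        lams = np.asarray(lams, dtype=complex)
        N = len(lams)
        # Initialization of rho_k^(0)(t).
        rho = []
        for k in range(N):
            c = Qd[k] / (lams[k] - lams[k].conj())
            for m in range(N):
                if m != k:
                    c *= (lams[k] - lams[m]) / (lams[k] - lams[m].conj())
            rho.append(c * np.exp(2j * lams[k] * t))
        q = np.zeros_like(t, dtype=complex)
        # Add one eigenvalue per iteration, using update rules (4) and (5).
        for k in range(N):
            r = rho[k]
            denom = 1.0 + np.abs(r) ** 2
            q = q + 2j * (lams[k] - lams[k].conj()) * r.conj() / denom
            for m in range(k + 1, N):
                num = (lams[m] - lams[k]) * rho[m] \
                      + (lams[k] - lams[k].conj()) * (rho[m] - r) / denom
                den = (lams[m] - lams[k].conj()) \
                      - (lams[k] - lams[k].conj()) * (1 + r.conj() * rho[m]) / denom
                rho[m] = num / den
        return q

    # Example: a 1-soliton with eigenvalue 0.5j and Qd = 1 gives |q(t)| close to sech(t).
    t = np.linspace(-15, 15, 2001)
    q1 = darboux_inft([0.5j], [1.0 + 0j], t)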
C. Definition of Pulse Duration and Bandwidth
In this paper, we consider an N−soliton with the eigenvalues on the imaginary axis, i.e. {λk = jσk}_{k=1}^{N} and σk ∈ R⁺. Without loss of generality, we assume that σk < σk+1. As such an N−soliton propagates along the fiber, the pulse does not disperse and the pulse shape can be repeated periodically.
An N−soliton pulse has unbounded support and exponentially decreasing tails in time and (linear) frequency domain. As this pulse is transformed according to the NLSE, e.g. propagation along the ideal optical fiber, its shape can drastically change as all Qd(λk; z) evolve in z. Despite the nontrivial pulse variation and varying peak powers, the energy of the pulse remains fixed and equal to Etotal = 4 Σ_{k=1}^{N} Im{λk}.
As a result, the pulse-duration and the bandwidth of a
multi-soliton pulse are well-defined if they are characterized
in terms of energy: the pulse duration Tw (and bandwidth
Bw , respectively) is defined as the smallest interval (frequency
band) containing Etrunc = (1 − ε)Etotal of the soliton energy.
Note that truncation causes small perturbations of eigenvalues.
In practical applications, the perturbations become even larger
due to inter-symbol-interference (ISI) when a train of truncated
soliton pulses is used for fiber optical communication. Thus,
there is a trade-off: ε must be kept small such that the
truncation causes only small perturbations, but large enough
to have a relatively small time-bandwidth product.
Note that truncating a signal in time-domain may slightly
change its linear Fourier spectrum in practice. For simplicity,
we however computed Tw and Bw with respect to the original
pulse as the difference is negligible for ε ≪ 1.
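As an illustration of this energy-based definition, the following sketch computes the duration of a sampled waveform as the smallest window containing a (1 − ε) fraction of its energy, and reuses the same routine on the FFT to approximate the bandwidth. The helper names and the two-pointer search are our own choices, and the FFT-based bandwidth is only a discrete approximation of the continuous definition.

    import numpy as np

    def width_from_energy(x, dx, eps):
        """Smallest window (in units of dx) containing a (1-eps) energy fraction."""
        e = np.abs(x) ** 2
        total = e.sum()
        csum = np.concatenate(([0.0], np.cumsum(e)))
        best, j = len(e), 0
        for i in range(len(e)):
            j = max(j, i)
            # grow the window [i, j) until it holds enough energy
            while j < len(e) and csum[j] - csum[i] < (1 - eps) * total:
                j += 1
            if csum[j] - csum[i] >= (1 - eps) * total:
                best = min(best, j - i)
        return best * dx

    def duration_and_bandwidth(q, dt, eps):
        Tw = width_from_energy(q, dt, eps)
        Q = np.fft.fftshift(np.fft.fft(q))
        df = 1.0 / (len(q) * dt)          # frequency resolution
        Bw = width_from_energy(Q, df, eps)
        return Tw, Bw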
III. SYMMETRIC MULTI-SOLITON PULSES
In this section, we address the special family of multi-soliton
pulses which are symmetric in time domain. An application
of such solitons for optical fiber transmission is studied in [7]
where the symmetric 2-solitons are used for data modulation.
Theorem 1. Let Ω = {jσ1, jσ2, . . . , jσN} be the set of eigenvalues on the imaginary axis where σk ∈ R⁺, for 1 ≤ k ≤ N. The corresponding N−soliton q(t) is a symmetric pulse, i.e. q(t) = q(−t), and keeps this property during the propagation in z, if and only if the spectral amplitudes are chosen as
|Qd,sym(jσk)| = 2σk ∏_{m=1; m≠k}^{N} |(σk + σm)/(σk − σm)|.   (6)
Sketch of Proof. The proof is based on Algorithm 1 with the following steps: (i) g(t) = ρ*(t)/(1 + |ρ(t)|²) is symmetric, if
ρ*(−t)ρ(t) = 1.   (7)
(ii) The update rule (5) preserves the property (7): if ρ(t) and ρm^(k−1)(t) satisfy (7), then ρm^(k)(t) will satisfy (7) as well.
(iii) Because of (6), ρk^(0)(t) satisfies (7) for all k.
(iv) Using induction, ρm^(k) satisfies (7) for all m and k.
(v) According to (4) and step (i), q(t) is symmetric.
It is already mentioned in [17] that (6) leads to a symmetric multi-soliton in amplitude. Theorem 1 implies that (6) is not only sufficient but also necessary to have q(t) = q(−t).
As it is shown in the next section, we numerically observe that these symmetric pulses have the smallest pulse duration¹ among all solitons with the same set of eigenvalues Ω (but different |Qd(λk)|). Assuming σ1 = min_k{σk}, this minimum pulse-duration can be well approximated by
Tsym(ε) ≈ (1/(2σ1)) ( 2 Σ_{m=2}^{N} ln((σm + σ1)/(σm − σ1)) − ln((Σ_{m=1}^{N} σm)/σ1) + ln(2/ε) ),   (8)
where ε is defined earlier as the energy threshold. The approximation becomes tight as ε → 0 and is only valid if ε ≪ σ1/Σ_{m=1}^{N} σm. Verification of (8) follows readily by describing an N−soliton by the sum of N terms according to (4), and showing that in the limit |t| → ∞, the dominant term behaves as sech(2σ1(|t| − t0)) for some t0 and all other terms decay exponentially faster.
¹ It is correct when ε is small enough.
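A small numerical helper evaluating the approximation (8); the function name and the ascending sort of the σk (so that σ1 is the smallest eigenvalue) are our own choices.

    import numpy as np

    def tsym_approx(sigmas, eps):
        """Approximation (8) of the pulse duration of a symmetric N-soliton with
        distinct eigenvalues j*sigma_k."""
        s = np.sort(np.asarray(sigmas, dtype=float))
        s1 = s[0]
        interaction = 2 * np.sum(np.log((s[1:] + s1) / (s[1:] - s1)))
        return (interaction - np.log(s.sum() / s1) + np.log(2.0 / eps)) / (2 * s1)

    print(tsym_approx([0.5, 1.0], 1e-4))   # roughly 11 for the pair {0.5j, 1j}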
IV. TIME-BANDWIDTH PRODUCT
Consider the transmission of an N−soliton with eigenvalues {jσk}_{k=1}^{N} over an ideal fiber of length zL. Each spectral amplitude Qd(jσk; z) = |Qd(jσk; z)| exp(jφk(z)) is transformed along the fiber according to (3). Equivalently,
|Qd(jσk; z)| = |Qd(jσk; z = 0)|
φk(z) = φk(0) + 4σk² z
for z ≤ zL. It means that φk(z) changes with a distinct speed proportional to σk². Different phase combinations correspond to different soliton pulse shapes with generally different pulse-duration and bandwidth. It implies that Tw and Bw of a pulse are changing along the transmission. Furthermore, if the φk(0) are independently modulated for each eigenvalue with a constellation of size M, e.g. M−PSK, this results in M^N initial phase combinations (N log₂(M) bits per soliton) associated with different initial pulse shapes. Such transmission scenarios
N = 7 [8]. To avoid a considerable ISI between neighboring
pulses in a train of N -solitons for transmission in time or
frequency, we should consider Tw and Bw larger than their
respective maximum along the link.
For a given set of eigenvalues and fixed |Qd(jσk; z = 0)|, the maxima depend on the M^N initial phase combinations and the fiber length zL. To avoid these constraints, we maximize Tw and Bw over all possible phase combinations:
Tmax = max_{φk, 1≤k≤N} Tw   and   Bmax = max_{φk, 1≤k≤N} Bw.
These quantities occur in the worst case but can be reached in their vicinity when N is small, e.g. 2 or 3, or when M is very large, or the transmission length zL is large enough.
In the rest of this section, we address the following fundamental questions:
(i) How do Tmax and Bmax change in terms of {jσk }N
k=1
and {|Qd (jσk )|}N
k=1 ?
(ii) What is the smallest time-bandwidth product for a given
N , i.e.
(Tmax Bmax )⋆ =
min
min
σk ,1≤k≤N |Qd (jσk )|,1≤k≤N
Tmax Bmax
and which is the optimal choice for {jσk⋆ }N
k=1 and
.
{|Q⋆d (jσk⋆ )|}N
k=1
The following properties preserving the time-bandwidth
product decrease the number of parameters to optimize:
(i) If q(t) has eigenvalues {jσk }N
k=1 , then 1/σ1 · q(t/σ1 ) will
with
the same time-bandwidth
have eigenvalues {j σσk1 }N
k=1
product. It implies that Tmax Bmax only depends on the N − 1
eigenvalue ratios σk /σ1 .
N
(ii) If {φk }N
k=1 corresponds to q(t), then {φk − φ1 }k=1
corresponds to q(t) exp(jφ1 ). Thus, we assume φ1 = 0.
(iii) Instead of directly optimizing {|Qd (jσk )|}N
k=1 , it is
equivalent to optimize ηk > 0 defined by
|Qd (jσk )| = ηk |Qd,sym (jσk )|.
Using {ηk }N
k=1 has two advantages. The first one is the
generalization of Theorem 1. If {ηk }N
k=1 corresponds to q(t),
then {1/ηk }N
corresponds
to
q(−t).
The proof is similar to
k=1
the one of Theorem 1. Moreover, {e−2σk t0 ηk }N
k=1 corresponds
to q(t+t0 ). Thus, it suffices to assume η1 = 1 and η2 ∈ (0, 1].
A. Optimization of Spectral Amplitudes
Consider a given set of eigenvalues Ω = {jσk}_{k=1}^{N}. We want to optimize {ηk}_{k=2}^{N} to minimize Tmax Bmax. Recall that {|Qd(jσk)|}_{k=1}^{N}, and thus {ηk}_{k=1}^{N}, do not change along z.
We present the optimization method for N = 2. In this case, there are two parameters to optimize: φ2 and η2 ∈ (0, 1]. Consider a given energy threshold ε. For each chosen η2, we find Tmax(ε) and Bmax(ε) by exhaustive search. The phase φ2 ∈ [0, 2π) is first quantized uniformly by 64 phases. At each phase, a 2-soliton is generated using Algorithm 1 and then Tw(ε) and Bw(ε) are computed. To estimate Tmax(ε), another round of search is performed with a finer resolution around the quantized phase with the largest Tw(ε). Similarly, Bmax(ε) is estimated.
Fig. 1 illustrates Tmax(ε) and Bmax(ε) in terms of log(η2) for different energy thresholds ε when Ω = {0.5j, 1j}. We also depict Bmin(ε), the minimum bandwidth of 2-soliton pulses with a given η2 and various φ2. Fig. 1 indicates the following features, which we observed for any pair {jσ1, jσ2}.
We can see that for any ε, the smallest Tmax is attained at η2 = 1 (log(η2) = 0), which corresponds to the symmetric 2-soliton defined in Sec. III. We also observe that Bmax reaches the largest value at η2 = 1 while Bmin reaches its minimum. As log(η2) decreases, Tmax increases gradually up to some point and then it increases linearly in |log(η2)|. The behaviour of Bmax is the opposite. It decreases very fast in |log(η2)| up to some η2 and then converges slowly to the bandwidth defined by the 1-soliton spectrum with λ = jσ2. In fact, we have two separate 1-solitons without any interaction when η2 = 0. As η2 increases to 1, the distance between these two 1-solitons decreases, resulting in more nonlinear interaction but smaller Tmax. The largest Bmax − Bmin at η2 = 1 indicates the largest amount of interaction.
The above features seem general for N−solitons. In particular, Tmax becomes minimum if the N−soliton is symmetric. Moreover, Bmax can be lower-bounded by
Bsep(ε) = (2σN/π²) ( ln(2/ε) − ln((Σ_{k=1}^{N} σk)/σN) )
with σN = max_k{σk}. The bound becomes tight when an N−soliton is the linear superposition of N separate 1-solitons. We performed such a numerical optimization for N = 2 and N = 3 and for different {σk/σ1}_{k=2}^{N}. For each ε, we found the optimal {ηk⋆}_{k=2}^{N} with the smallest Tmax(ε)Bmax(ε).
[Fig. 1. (a) Pulse duration Tmax and (b) bandwidth Bmax/Bmin for the 2-soliton pulse (λ1 = 0.5j, λ2 = 1j) when maximized (minimized) over all phase combinations of the spectral amplitudes; curves shown versus log(η2) for ε = 10⁻³, 10⁻⁴, 10⁻⁶, 10⁻¹⁰.]
B. Optimization of Eigenvalues
In general, an N−soliton has a larger Tmax Bmax than a 1−soliton, but it also has N times more dimensions, e.g. Qd(jσk), for encoding data. To have a fair comparison, we use a notion of "time-bandwidth product per eigenvalue" defined as
T·B_N({σk/σ1}_{k=2}^{N}) = (1/N) Tmax Bmax({σk/σ1}_{k=2}^{N}, {ηk⋆}_{k=2}^{N})
where Tmax Bmax is already optimized in terms of {ηk}, separately for each eigenvalue combination. This is an important parameter as the spectral efficiency will be O(1/T·B_N). For a 1-soliton with "sech" shape in time and frequency domain, we have
T·B_1 = Tw(ε)Bw(ε) = π⁻² ln²(2/ε),
where ε is the energy threshold defined in Section II-C.
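To make the exhaustive search of Sec. IV-A concrete, the following sketch mimics it for N = 2. It assumes the darboux_inft and duration_and_bandwidth helpers sketched earlier, uses η2 to scale |Qd(jσ2)| relative to the symmetric choice (6), and the grid sizes are purely illustrative.

    import numpy as np

    # assumed available from the earlier sketches: darboux_inft, duration_and_bandwidth
    def tmax_bmax(sig1, sig2, eta2, eps, n_phase=64, dt=0.01, span=60.0):
        t = np.arange(-span / 2, span / 2, dt)
        qd1 = 2 * sig1 * abs((sig1 + sig2) / (sig1 - sig2))          # symmetric |Qd|, eq. (6)
        qd2 = 2 * sig2 * abs((sig2 + sig1) / (sig2 - sig1)) * eta2
        Tmax = Bmax = 0.0
        for phi2 in np.linspace(0.0, 2 * np.pi, n_phase, endpoint=False):
            q = darboux_inft([1j * sig1, 1j * sig2],
                             [qd1, qd2 * np.exp(1j * phi2)], t)
            Tw, Bw = duration_and_bandwidth(q, dt, eps)
            Tmax, Bmax = max(Tmax, Tw), max(Bmax, Bw)
        return Tmax, Bmax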
For N = 2 and N = 3, we numerically optimized Tmax(ε)Bmax(ε) for different values of {σk/σ1}_{k=2}^{N} and {ηk}_{k=2}^{N}. Fig. 2-(a) shows the numerical optimization of T·B_2 in terms of σ2/σ1 for different choices of ε, where the best {ηk⋆}_{k=2}^{N} were chosen for each eigenvalue ratio. We normalized T·B_2 by T·B_1 to see how much the "time-bandwidth product per eigenvalue" can be decreased. Fig. 2-(b) shows a similar numerical optimization for N = 3 and ε = 10⁻⁴. We have the following observations:
(i) T·B_N is sensitive to the choice of eigenvalues. For instance, equidistant eigenvalues, i.e. σk = kσ1, are a bad choice in terms of spectral efficiency.
(ii) The ratio T·B_N/T·B_1 gets smaller as ε vanishes. The intuitive reason is that as ε → 0, we get Tmax ≈ (1/(2σ1)) ln(2/ε) (see (8)), which is the pulse-duration of the 1-soliton.
(iii) For a practical value of ε ∼ 10⁻⁴ to 10⁻³, T·B_N decreases very slowly in N. Moreover, the optimal σk⋆ are close. This can make the detection challenging in presence of noise. For ε = 10⁻⁴,
T·B_2/T·B_1 = 0.87 for σ2⋆/σ1⋆ = 1.11
T·B_3/T·B_1 = 0.83 for σ2⋆/σ1⋆ = 1.28, σ3⋆/σ1⋆ = 1.35
(iv) Choosing the above optimal {σk⋆/σ1⋆} and the optimal {|Qd⋆(jσk⋆)|}, the resulting solitons for N = 2, 3 are shown in Fig. 3 for different phase combinations and two energy thresholds ε. This figure gives some guidelines for a larger N: the optimal N−soliton has eigenvalues close to each other and significantly separated pulse centers, which is why the optimum pulse looks similar to a train of 1-solitons with eigenvalues close to each other. The pulse centers should be close to minimize Tmax but not too close to avoid a large interaction which comes along with a growth of Bmax.
[Fig. 2. Gain of time-bandwidth product per eigenvalue of (a) second and (b) third order solitons with eigenvalues jσk in relation to first order pulses. Panel (a) plots T·B_2/T·B_1 versus σ2/σ1 for ε = 10⁻³, 10⁻⁴, 10⁻⁶, 10⁻¹⁰ (simulation and approximation (9)); panel (b) plots T·B_3/T·B_1 versus σ3/σ1 and σ2/σ1 for ε = 10⁻⁴.]
[Fig. 3. Time domain signal |q(t)| of the optimum second and third order soliton pulses for different phase combinations of the spectral amplitudes (same color), shown for ε = 10⁻⁴ and ε = 10⁻¹⁰.]
For ε ≪ 1, an estimate on T·B_N at optimal {ηk⋆} can be given by (9), where Tmax and Bmax are estimated by Tsym(ε) and Bsep(ε), respectively (see Fig. 1):
T·B_N ≈ Tsym(ε) Bsep(ε) / N.   (9)
For the second order case, these approximations for various ε are plotted in Fig. 2-(a) by dashed lines. We see that the approximation becomes better for small ε. This approximation can be used to predict T·B_N for a large N.
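A hedged numerical reading of this estimate: combining the approximation (8) with the Bsep expression gives a quick predictor of T·B_N. The π² normalization below is our assumption, chosen to be consistent with T·B_1 = π⁻² ln²(2/ε), and tsym_approx is the helper sketched earlier.

    import numpy as np

    def bsep_approx(sigmas, eps):
        """Bandwidth lower bound Bsep of Sec. IV-A, driven by the largest eigenvalue."""
        s = np.asarray(sigmas, dtype=float)
        sN = s.max()
        return 2 * sN / np.pi ** 2 * (np.log(2.0 / eps) - np.log(s.sum() / sN))

    def tb_per_eigenvalue_estimate(sigmas, eps):
        """Estimate (9): T.B_N ~ Tsym(eps) * Bsep(eps) / N."""
        return tsym_approx(sigmas, eps) * bsep_approx(sigmas, eps) / len(sigmas)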
V. CONCLUSION
We studied the evolution of the pulse-duration and the
bandwidth of N −soliton pulses along the optical fiber. We
focused on solitons with eigenvalues located on the imaginary
axis. The class of symmetric soliton pulses was introduced
and an analytical approximation of their pulse-duration was
derived.
The phase of the spectral amplitudes was assumed to be
used for modulation while their magnitudes were kept fixed.
We numerically optimized the location of eigenvalues and
the magnitudes of spectral amplitudes for 2− and 3−solitons
in order to minimize the time-bandwidth product. It can
be observed that the time-bandwidth product per eigenvalue
improves in the soliton order N , but very slowly. Another
observation is, that the optimal N −soliton pulse looks similar
to a train of first-order pulses.
There are some remarks about our optimization. As an
N −soliton propagates, the phases of the spectral amplitudes
change with different speeds. We assumed that all possible
combinations of phases occur during transmission. This is the
worst case scenario which is likely to happen for N = 2 and
N = 3 but becomes less probable for large N . Moreover, the
same magnitudes of spectral amplitudes are used for any phase
combination while they can be tuned according to the phases.
Without these assumptions, the time-bandwidth product will
decrease. However, it becomes harder to estimate as there are
many more parameters to optimize.
REFERENCES
[1] A. Shabat and V. Zakharov, “Exact theory of two-dimensional selffocusing and one-dimensional self-modulation of waves in nonlinear
media,” Soviet physics JETP, vol. 34, no. 1, p. 62, 1972.
[2] L. F. Mollenauer and J. P. Gordon, Solitons in optical fibers: fundamentals and applications. Academic Press, 2006.
[3] M. I. Yousefi and F. R. Kschischang, “Information transmission using the nonlinear Fourier transform: I-III,” IEEE Trans. Inf. Theory, vol. 60, 2014.
[4] J. E. Prilepsky, S. A. Derevyanko, K. J. Blow, I. Gabitov, and S. K. Turitsyn, “Nonlinear inverse synthesis and eigenvalue division multiplexing
in optical fiber channels,” Phys. review lett., vol. 113, no. 1, 2014.
[5] Z. Dong, S. Hari, et al., “Nonlinear frequency division multiplexed
transmissions based on nft,” IEEE Photon. Technol. Lett., vol. 27, 2015.
[6] V. Aref, Z. Dong, and H. Buelow, “Design aspects of multi-soliton pulses
for optical fiber transmission,” in IEEE Photon. Conf. (IPC), 2016.
[7] V. Aref, H. Buelow, K. Schuh, and W. Idler, “Experimental demonstration of nonlinear frequency division multiplexed transmission,” in 41st
Europ. Conf. on Optical Comm. (ECOC), 2015.
[8] H. Buelow, V. Aref, and W. Idler, “Transmission of waveforms determined by 7 eigenvalues with psk-modulated spectral amplitudes,” in
42nd Europ. Conf. on Optical Comm. (ECOC), 2016.
[9] A. Geisler and C. Schaeffer, “Experimental nonlinear frequency division
multiplexed transmission using eigenvalues with symmetric real part,”
in 42nd Europ. Conf. on Optical Comm. (ECOC), 2016.
[10] S. A. Derevyanko, S. K. Turitsyn, and D. A. Yakushev, “Fokker-planck
equation approach to the description of soliton statistics in optical fiber
transmission systems,” JOSA B, vol. 22, no. 4, 2005.
[11] Q. Zhang and T. H. Chan, “A spectral domain noise model for optical
fibre channels,” in IEEE Int. Symp. on Inf. Theory (ISIT), 2015, 2015.
[12] S. Wahls, “Second order statistics of the scattering vector defining the d-t nonlinear fourier transform,” in Int. ITG Conf. on Systems, Comm. and Coding (ITG SCC), 2017.
[13] S. Hari, M. I. Yousefi, and F. R. Kschischang, “Multieigenvalue communication,” J. Lightw. Technol., vol. 34, no. 13, 2016.
[14] S. Wahls and H. V. Poor, “Fast numerical nonlinear fourier transforms,”
IEEE Trans. Inf. Theory, vol. 61, no. 12, 2015.
[15] V. B. Matveev and V. Matveev, Darboux transformations and solitons.
Springer-Verlag, 1991.
[16] V. Aref, “Control and detection of discrete spectral amplitudes in
nonlinear fourier spectrum,” arXiv preprint arXiv:1605.06328, 2016.
[17] H. A. Haus and M. N. Islam, “Theory of the soliton laser,”
IEEE J. Quantum Electron., vol. QE-21, no. 8, 1985.
| 7 |
Logical Methods in Computer Science
Vol. 8 (2:18) 2012, pp. 1–42
www.lmcs-online.org
Submitted: Oct. 19, 2011; Published: Jun. 28, 2012
SOFTWARE MODEL CHECKING WITH EXPLICIT SCHEDULER AND
SYMBOLIC THREADS
ALESSANDRO CIMATTI, IMAN NARASAMDYA, AND MARCO ROVERI
Fondazione Bruno Kessler
e-mail address: {cimatti,narasamdya,roveri}@fbk.eu
Abstract. In many practical application domains, the software is organized into a set of
threads, whose activation is exclusive and controlled by a cooperative scheduling policy:
threads execute, without any interruption, until they either terminate or yield the control
explicitly to the scheduler.
The formal verification of such software poses significant challenges. On the one side,
each thread may have infinite state space, and might call for abstraction. On the other
side, the scheduling policy is often important for correctness, and an approach based on
abstracting the scheduler may result in loss of precision and false positives. Unfortunately,
the translation of the problem into a purely sequential software model checking problem
turns out to be highly inefficient for the available technologies.
We propose a software model checking technique that exploits the intrinsic structure of
these programs. Each thread is translated into a separate sequential program and explored
symbolically with lazy abstraction, while the overall verification is orchestrated by the
direct execution of the scheduler. The approach is optimized by filtering the exploration
of the scheduler with the integration of partial-order reduction.
The technique, called ESST (Explicit Scheduler, Symbolic Threads), has been implemented and experimentally evaluated on a significant set of benchmarks. The results
demonstrate that the ESST technique is significantly more effective than software model checking applied to the sequentialized programs, and that partial-order reduction can lead to further
performance improvements.
1. Introduction
In many practical application domains, the software is organized into a set of threads that
are activated by a scheduler implementing a set of domain-specific rules. Particularly relevant is the case of multi-threaded programs with cooperative scheduling, shared-variables and
with mutually-exclusive thread execution. With cooperative scheduling, there is no preemption: a thread executes, without interruption, until it either terminates or explicitly yields
the control to the scheduler. This programming model, simply called cooperative threads
1998 ACM Subject Classification: D.2.4.
Key words and phrases: Software Model Checking, Counter-Example Guided Abstraction Refinement,
Lazy Predicate Abstraction, Multi-threaded program, Partial-Order Reduction.
DOI: 10.2168/LMCS-8 (2:18) 2012. © A. Cimatti, I. Narasamdya, and M. Roveri; Creative Commons (CC).
in the following, is used in several software paradigms for embedded systems (e.g., SystemC [Ope05], FairThreads [Bou06], OSEK/VDX [OSE05], SpecC [GDPG01]), and also
in other domains (e.g., [CGM+ 98]).
Such applications are often critical, and it is thus important to provide highly effective
verification techniques. In this paper, we consider the use of formal techniques for the
verification of cooperative threads. We face two key difficulties: on the one side, we must
deal with the potentially infinite state space of the threads, which often requires the use of
abstractions; on the other side, the overall correctness often depends on the details of the
scheduling policy, and thus the use of abstractions in the verification process may result in
false positives.
Unfortunately, the state of the art in verification is unable to deal with such challenges. Previous attempts to apply various software model checking techniques to cooperative threads (in specific domains) have demonstrated limited effectiveness. For example, techinques like [KS05, TCMM07, CJK07] abstract away significant aspects of the
scheduler and synchronization primitives, and thus they may report too many false positives, due to loss of precision, and their applicability is also limited. Symbolic techniques,
like [MMMC05, HFG08], show poor scalability because too many details of the scheduler are
included in the model. Explicit-state techniques, like [CCNR11], are effective in handling
the details of the scheduler and in exploring possible thread interleavings, but are unable
to counter the infinite nature of the state space of the threads [GV04]. Unfortunately, for
explicit-state techniques, a finite-state abstraction is not easily available in general.
Another approach could be to reduce the verification of cooperative threads to the
verification of sequential programs. This approach relies on a translation from (or sequentialization of) the cooperative threads to the (possibly non-deterministic) sequential
programs that contain both the mapping of the threads in the form of functions and the
encoding of the scheduler. The sequentialized program can be analyzed by means of “offthe-shelf” software model checking techniques, such as [CKSY05, McM06, BHJM07], that
are based on the counter-example guided abstraction refinement (CEGAR) [CGJ+ 03] paradigm. However, this approach turns out to be problematic. General purpose analysis
techniques are unable to exploit the intrinsic structures of the combination of scheduler and
threads, hidden by the translation into a single program. For instance, abstraction-based
techniques are inefficient because the abstraction of the scheduler is often too aggressive,
and many refinements are needed to re-introduce necessary details.
In this paper we propose a verification technique which is tailored to the verification
of cooperative threads. The technique translates each thread into a separate sequential
program; each thread is analyzed, as if it were a sequential program, with the lazy predicate
abstraction approach [HJMS02, BHJM07]. The overall verification is orchestrated by the
direct execution of the scheduler, with techniques similar to explicit-state model checking.
This technique, in the following referred to as Explicit-Scheduler/Symbolic Threads (ESST)
model checking, lifts the lazy predicate abstraction for sequential software to the more
general case of multi-threaded software with cooperative scheduling.
Furthermore, we enhance ESST with partial-order reduction [God96, Pel93, Val91]. In
fact, despite its relative effectiveness, ESST often requires the exploration of a large number
of thread interleavings, many of which are redundant, with subsequent degradations in the
run time performance and high memory consumption [CMNR10]. POR essentially exploits
the commutativity of concurrent transitions that result in the same state when they are executed in different orders. We integrate within ESST two complementary POR techniques,
persistent sets and sleep sets. The POR techniques in ESST limit the expansion of the
transitions in the explicit scheduler, while leave the nature of the symbolic analysis of the
threads unchanged. The integration of POR in ESST algorithm is only seemingly trivial,
because POR could in principle interact negatively with the lazy predicate abstraction used
for analyzing the threads.
The ESST algorithm has been implemented within the Kratos software model checker
[CGM+ 11]. Kratos has a generic structure, encompassing the cooperative threads framework, and has been specialized for the verification of SystemC programs [Ope05] and of
FairThreads programs [Bou06]. Both SystemC and FairThreads fall within the paradigm
of cooperative threads, but they have significant differences. This indicates that the ESST
approach is highly general, and can be adapted to specific frameworks with moderate effort.
We carried out an extensive experimental evaluation over a significant set of benchmarks
taken and adapted from the literature. We first compare ESST with the verification of
sequentialized benchmarks, and then analyze the impact of partial-order reduction. The
results clearly show that ESST dramatically outperforms the approach based on sequentialization, and that both POR techniques are very effective in further boosting the performance of ESST.
This paper presents in a general and coherent manner material from [CMNR10] and
from [CNR11]. While in [CMNR10] and in [CNR11] the focus is on SystemC, the framework presented in this paper deals with the general case of cooperative threads, without
focussing on a specific programming framework. In order to emphasize the generality of the
approach, the experimental evaluation in this paper has been carried out in a completely
different setting than the one used in [CMNR10] and in [CNR11], namely the FairThreads
programming framework. We also considered a set of new benchmarks from [Bou06] and
from [WH08], in addition to adapting some of the benchmarks used in [CNR11] to the
FairThreads scheduling policy. We also provide proofs of correctness of the proposed techniques in Appendix A.
The structure of this paper is as follows. Section 2 provides some background in software
model checking via the lazy predicate abstraction. Section 3 introduces the programming
model to which ESST can be applied. Section 4 presents the ESST algorithm. Section 5
explains how to extend ESST with POR techniques. Section 6 shows the experimental
evaluation. Section 7 discusses some related work. Finally, Section 8 draws conclusions and
outlines some future work.
2. Background
In this section we provide some background on software model checking via the lazy predicate abstraction for sequential programs.
2.1. Sequential Programs. We consider sequential programs written in a simple imperative programming language over a finite set Var of integer variables, with basic control-flow
constructs (e.g., sequence, if-then-else, iterative loops) where each operation is either an
assignment or an assumption. An assignment is of the form x := exp, where x is a variable
and exp is either a variable, an integer constant, an explicit nondeterministic construct ∗,
or an arithmetic operation. To simplify the presentation, we assume that the considered
programs do not contain function calls. Function calls can be removed by inlining, under
Figure 1: An example of a control-flow graph (locations l0, . . . , l8 and error location le; edges are labelled by assignments such as x := *, y := x, y := -x and by assumptions such as [x < 0], [x >= 0], [y < 0], [y >= 0]).
the assumption that there are no recursive calls (a typical assumption in embedded software). An assumption is of the form [bexp], where bexp is a Boolean expression that can
be a relational operation or an operation involving Boolean operators. Subsequently, we
denote by Ops the set of program operations.
Without loss of generality, we represent a program P by a control-flow graph (CFG).
Definition 2.1 (Control-Flow Graph). A control-flow graph G for a program P is a tuple
(L, E, l0 , Lerr ) where
(1) L is the set of program locations,
(2) E ⊆ L × Ops × L is the set of directed edges labelled by a program operation from the
set Ops,
(3) l0 ∈ L is the unique entry location such that, for any location l ∈ L and any operation
op ∈ Ops, the set E does not contain any edge (l, op, l0 ), and
(4) Lerr ⊆ L is the set of error locations such that, for each le ∈ Lerr, we have (le, op, l) ∉
E for all op ∈ Ops and for all l ∈ L.
In this paper we are interested in verifying safety properties by reducing the verification
problem to the reachability of error locations.
Example 2.2. Figure 1 depicts an example of a CFG. Typical program assertions can be
represented by branches going to error locations. For example, the branches going out of l6
can be the representation of assert(y >= 0).
A state s of a program is a mapping from variables to their values (in this case integers).
Let State be the set of states; thus s ∈ State = Var → Z. We denote by Dom(s) the
domain of a state s. We also denote by s[x1 7→ v1 , . . . , xn 7→ vn ] the state obtained from
s by substituting the image of xi in s by vi for all i = 1, . . . , n. Let G = (L, E, l0 , Lerr )
be the CFG for a program P . A configuration γ of P is a pair (l, s), where l ∈ L and s
is a state. We assume some first-order language in which one can represent a set of states
symbolically. We write s |= ϕ to mean the formula ϕ is true in the state s, and also say
that s satisfies ϕ, or that ϕ holds at s. A data region r ⊆ State is a set of states. A data
region r can be represented symbolically by a first-order formula ϕr , with free variables
from Var , such that all states in r satisfy ϕr ; that is, r = {s | s |= ϕr }. When the context is
clear, we also call the formula ϕr a data region. An atomic region, or simply a region,
is a pair (l, ϕ), where l ∈ L and ϕ is a data region, such that the pair represents the set
{(l, s) | s |= ϕ} of program configurations. When the context is clear, we often refer to
both kinds of regions simply as regions.
The semantics of an operation op ∈ Ops can be defined by the strongest post-operator
SP op . For a formula ϕ representing a region, the strongest post-condition SP op (ϕ) represents
the set of states that are reachable from any of the states in the region represented by ϕ after
the execution of the operation op. The semantics of assignment and assumption operations
are as follows:
SP x:=exp(ϕ) = ∃x′. ϕ[x/x′] ∧ (x = exp[x/x′]), for exp ≠ ∗,
SP x:=∗(ϕ) = ∃x′. ϕ[x/x′] ∧ (x = a), where a is a fresh variable, and
SP [bexp](ϕ) = ϕ ∧ bexp,
where ϕ[x/x′] and exp[x/x′] denote, respectively, the formula obtained from ϕ and the
expression obtained from exp by substituting x′ for x. We define the application
of the strongest post-operator to a finite sequence σ = op1, . . . , opn of operations as the
successive application of the strongest post-operator to each operation: SP σ(ϕ) =
SP opn(. . . SP op1(ϕ) . . .).
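As an illustration only (our own sketch, not part of the paper's tooling), the strongest post-condition of an assignment can be computed with an SMT solver; here we assume the Z3 Python bindings and a naive fresh-variable naming scheme.

# Sketch of SP for assignments and assumptions, assuming the Z3 Python API.
from z3 import Int, Exists, And, substitute, simplify

def sp_assign(phi, x, exp):
    # SP_{x:=exp}(phi) = exists x'. phi[x/x'] and x = exp[x/x']   (exp != *)
    xp = Int(x.decl().name() + "_prime")   # naive choice of a fresh copy of x
    return Exists([xp], And(substitute(phi, (x, xp)),
                            x == substitute(exp, (x, xp))))

def sp_assume(phi, bexp):
    # SP_{[bexp]}(phi) = phi and bexp
    return And(phi, bexp)

x, y = Int("x"), Int("y")
# SP for the sequence: x := x + y; [y < x], starting from x > 0
post = sp_assume(sp_assign(x > 0, x, x + y), y < x)
print(simplify(post))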
2.2. Predicate Abstraction. A program can be viewed as a transition system with transitions between configurations. The set of configurations can potentially be infinite because
the states can be infinite. Predicate abstraction [GS97] is a technique for extracting a finite
transition system from a potentially infinite one by approximating possibly infinite sets of
states of the latter system by Boolean combinations of some predicates.
Let Π be a set of predicates over program variables in some quantifier-free theory T . A
precision π is a finite subset of Π. A predicate abstraction ϕπ of a formula ϕ over a precision
π is a Boolean formula over π that is entailed by ϕ in T , that is, the formula ϕ ⇒ ϕπ is
valid in T . To avoid losing precision, we are interested in the strongest Boolean combination
ϕπ, which is called the Boolean predicate abstraction [LNO06]. As described in [LNO06], for a
formula ϕ, the more predicates we have in the precision π, the more expensive the computation of the Boolean predicate abstraction becomes. We refer the reader to [LNO06, CCF+ 07, CDJR09]
for the descriptions of advanced techniques for computing predicate abstractions based on
Satisfiability Modulo Theory (SMT) [BSST09].
Given a precision π, we can define the abstract strongest post-operator SP^π_op for an operation op. That is, the abstract strongest post-condition SP^π_op(ϕ) is the formula (SP op(ϕ))^π.
2.3. Predicate-Abstraction based Software Model Checking. One prominent software model checking technique is the lazy predicate abstraction [BHJM07] technique. This
technique is a counter-example guided abstraction refinement (CEGAR) [CGJ+ 03] technique based on on-the-fly construction of an abstract reachability tree (ART). An ART
describes the reachable abstract states of the program: a node in an ART is a region (l, ϕ)
describing an abstract state. Children of an ART node (or abstract successors) are obtained
by unwinding the CFG and by computing the abstract post-conditions of the node’s data
region with respect to the unwound CFG edge and some precision π. That is, the abstract
successors of a node (l, ϕ) is the set {(l1 , ϕ1 ), . . . , (ln , ϕn )}, where, for i = 1, . . . , n, we have
(l, opi , li ) is a CFG edge, and ϕi = SP πopi i (ϕ) for some precision πi . The precision πi can be
associated with the location li or can be associated globally with the CFG itself. The ART
edge connecting a node (l, ϕ) with its child (l′ , ϕ′ ) is labelled by the operation op of the
CFG edge (l, op, l′ ). In this paper computing abstract successors of an ART node is also
called node expansion. An ART node (l, ϕ) is covered by another ART node (l′ , ϕ′ ) if l = l′
and ϕ entails ϕ′ . A node (l, ϕ) can be expanded if it is not covered by another node and its
data region ϕ is satisfiable. An ART is complete if no further node expansion is possible.
An ART node (l, ϕ) is an error node if ϕ is satisfiable and l is an error location. An ART
is safe if it is complete and does not contain any error node. Obtaining a safe ART implies
that the program is safe.
The construction of an ART for the CFG G = (L, E, l0, Lerr) of a program P starts
from its root (l0 , ⊤). During the construction, when an error node is reached, we check if
the path from the root to the error node is feasible. An ART path ρ is a finite sequence
ε1 , . . . , εn of edges in the ART such that, for every i = 1, . . . , n − 1, the target node of εi
is the source node of εi+1 . Note that, the ART path ρ corresponds to a path in the CFG.
We denote by σρ the sequence of operations labelling the edges of the ART path ρ. A
counter-example path is an ART path ε1 , . . . , εn such that the source node of ε1 is the root
of the ART and the target node of εn is an error node. A counter-example path ρ is feasible
if and only if SP σρ (true) is satisfiable. An infeasible counter-example path is also called
spurious counter-example. A feasible counter-example path witnesses that the program P
is unsafe.
An alternative way of checking feasibility of a counter-example path ρ is to create a
path formula that corresponds to the path. This is achieved by first transforming the sequence σρ = op1 , . . . , opn of operations labelling ρ into its single-static assignment (SSA)
form [CFR+ 91], where there is only one single assignment to each variable. Next, a constraint for each operation is generated by rewriting each assignment x := exp into the
equality x = exp, with nondeterministic construct ∗ being translated into a fresh variable,
and by turning each assumption [bexp] into the constraint bexp. The path formula is the conjunction of the constraints generated by the operations. A counter-example path ρ is feasible
if and only if its corresponding path formula is satisfiable.
Example 2.3. Suppose that the operations labelling a counter-example path are
x := y, [x > 0], x := x + 1, y := x, [y < 0],
then, to check the feasibility of the path, we check the satisfiability of the following formula:
x1 = y0 ∧ x1 > 0 ∧ x2 = x1 + 1 ∧ y1 = x2 ∧ y1 < 0.
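For instance, the formula of Example 2.3 can be checked with any SMT solver; the following sketch assumes the Z3 Python bindings (the tools discussed in this paper rely on their own SMT back-ends).

# Replaying the feasibility check of Example 2.3, assuming Z3 as the back-end.
from z3 import Ints, Solver, And, unsat

x1, x2, y0, y1 = Ints("x1 x2 y0 y1")
path_formula = And(x1 == y0, x1 > 0, x2 == x1 + 1, y1 == x2, y1 < 0)

solver = Solver()
solver.add(path_formula)
# An unsatisfiable path formula means the counter-example path is spurious.
print("feasible" if solver.check() != unsat else "spurious")   # prints: spurious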
If the counter-example path is infeasible, then it has to be removed from the constructed
ART by refining the precisions. Such a refinement amounts to analyzing the path and
extracting new predicates from it. One successful method for extracting relevant predicates
at certain locations of the CFG is based on the computation of Craig interpolants [Cra57], as
shown in [HJMM04]. Given a pair of formulas (ϕ− , ϕ+ ) such that ϕ− ∧ ϕ+ is unsatisfiable,
a Craig interpolant of (ϕ− , ϕ+ ) is a formula ψ such that ϕ− ⇒ ψ is valid, ψ ∧ ϕ+ is
unsatisfiable, and ψ contains only variables that are common to both ϕ− and ϕ+ . Given
an infeasible counter-example ρ, the predicates can be extracted from interpolants in the
following way:
(1) Let σρ = op1, . . . , opn, and let the sub-path σρ^{i,j}, for i ≤ j, denote the sub-sequence
opi, opi+1, . . . , opj of σρ.
(2) For every k = 1, . . . , n − 1, let ϕ1,k be the path formula for the sub-path σρ^{1,k} and
ϕk+1,n be the path formula for the sub-path σρ^{k+1,n}; we generate an interpolant ψ^k of
(ϕ1,k, ϕk+1,n).
Figure 2: Programming model: the threads T1, . . . , TN of a threaded sequential program communicate with the scheduler through primitive functions (update/query the scheduler state, set/get state), and the scheduler passes control to the threads.
(3) The predicates are the (un-SSA) atoms in the interpolants ψ^k for k = 1, . . . , n − 1.
The discovered predicates are then added to the precisions that are associated with some
locations in the CFG. Let p be a predicate extracted from the interpolant ψ^k of (ϕ1,k, ϕk+1,n)
for 1 ≤ k < n. Let ε1 , . . . , εn be the sequence of edges labelled by the operations op1 , . . . , opn ,
that is, for i = 1, . . . , n, the edge εi is labelled by opi . Let the nodes (l, ϕ) and (l′ , ϕ′ ) be
the source and target nodes of the edge εk . The predicate p can be added to the precision
associated with the location l′ .
Once the precisions have been refined, the constructed ART is analyzed to remove the
sub part containing the infeasible counter-example path, and then the ART is reconstructed
using the refined precisions.
Lazy predicate abstraction has been implemented in several software model checkers,
including Blast [BHJM07], CpaChecker [BK11], and Kratos [CGM+ 11]. For details
and in-depth illustrations of ART constructions, we refer the reader to [BHJM07].
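As a rough sketch of the refinement step described above (not the implementation of any of the cited tools), the interpolation loop can be organized as follows; compute_interpolant stands for an interpolating SMT solver such as MathSAT, and atoms and un_ssa are hypothetical syntactic helpers.

# Sketch of interpolation-based predicate discovery for a spurious path.
# compute_interpolant, atoms, and un_ssa are assumed helpers supplied by an
# interpolating SMT solver and by simple syntactic utilities, respectively.
def discover_predicates(constraints, compute_interpolant, atoms, un_ssa):
    """constraints[i] is the SSA constraint of operation op_{i+1} of the path.

    Returns a map k -> predicates extracted from the interpolant psi_k of
    (phi_{1,k}, phi_{k+1,n}), following the scheme of Section 2.3."""
    n = len(constraints)
    predicates = {}
    for k in range(1, n):
        prefix = constraints[:k]        # conjuncts of phi_{1,k}
        suffix = constraints[k:]        # conjuncts of phi_{k+1,n}
        psi_k = compute_interpolant(prefix, suffix)
        predicates[k] = {un_ssa(atom) for atom in atoms(psi_k)}
    return predicates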
3. Programming Model
In this paper we analyze shared-variable multi-threaded programs with an exclusive running thread
(there is at most one running thread at a time) and a cooperative scheduling policy (the
scheduler never preempts the running thread, but waits until the running thread cooperatively yields control back to the scheduler). At the moment we do not deal with dynamic
thread creation. This restriction is not severe because typically multi-threaded programs
for embedded system designs are such that all threads are known and created a priori, and
there are no dynamic thread creations.
Our programming model is depicted in Figure 2. It consists of three components: a socalled threaded sequential program, a scheduler, and a set of primitive functions. A threaded
sequential program (or threaded program) P is a multi-threaded program consisting of a set
of sequential programs T1, . . . , TN such that each sequential program Ti represents a thread.
From now on, we will refer to the sequential programs in the threaded programs as threads.
We assume that the threaded program has a main thread, denoted by main, from which
the execution starts. The main thread is responsible for initializing the shared variables.
Let P be a threaded program; we denote by GVar the set of shared (or global) variables
of P and by LVar T the set of local variables of the thread T in P . We assume that
LVar T ∩ GVar = ∅ for every thread T and LVar Ti ∩ LVar Tj = ∅ for each two threads Ti
and Tj such that i 6= j. We denote by GT the CFG for the thread T . All operations in GT
only access variables in LVar T ∪ GVar .
The scheduler governs the executions of threads. It employs a cooperative scheduling
policy that only allows at most one running thread at a time. The scheduler keeps track of a
set of variables that are necessary to orchestrate the thread executions and synchronizations.
We denote such a set by SVar . For example, the scheduler can keep track of the states
of threads and events, and also the time delays of event notifications. The mapping from
variables in SVar to their values forms a scheduler state. Passing control to a thread can
be done, for example, by simply setting the state of the thread to running. Such a control
passing is represented by the dashed line in Figure 2.
Primitive functions are special functions used by the threads to communicate with the
scheduler by querying or updating the scheduler state. To allow threads to call primitive
functions, we simply extend the form of assignment described in Section 2.1 as follows: the
expression exp of an assignment x := exp can also be a call to a primitive function. We
assume that such a function call is the top-level expression exp and not nested in another
expression. Calls to primitive functions do not modify the values of variables occurring in
the threaded program. Note that, as primitive function calls only occur on the right-hand
side of assignments, we implicitly assume that every primitive function has a return value.
The primitive functions can be thought of as a programming interface between the
threads and the scheduler. For example, for event-based synchronizations, one can have a
primitive function wait event(e) that is parametrized by an event name e. This function
suspends the calling thread by telling the scheduler that it is now waiting for the notification
of event e. Another example is the function notify event(e) that triggers the notification
of event e by updating the event’s state, which is tracked by the scheduler, to a value
indicating that it has been notified. In turn, the scheduler can wake up the threads that
are waiting for the notification of e by making them runnable.
We now provide a formal semantics for our programming model. Evaluating expressions
in program operations involves three kinds of state:
(1) The state si of local variables of some thread Ti (Dom(si ) = LVar Ti ).
(2) The state gs of global variables (Dom(gs) = GVar ).
(3) The scheduler state S (Dom(S) = SVar ).
The evaluation of the right-hand side expression of an assignment requires a scheduler state
because the expression can be a call to a primitive function whose evaluation depends on
and can update the scheduler state.
We require that, for each thread T, there is a variable stT ∈ Dom(S) that indicates the state
of T . We consider the set {Running, Runnable, Waiting} as the domain of stT , where each
element in the set has an obvious meaning. The elements Running, Runnable, and Waiting
can be thought of as enumerations that denote different integers. We say that the thread
T is running, runnable, or waiting in a scheduler state S if S(stT ) is, respectively, Running,
Runnable, or Waiting . We denote by SState the set of all scheduler states. Given a threaded
program with N threads T1, . . . , TN, by the exclusive running thread property, we have, for
every state S ∈ SState, if, for some i, we have S(stTi) = Running, then S(stTj) ≠ Running
for all j ≠ i, where 1 ≤ i, j ≤ N.
The semantics of expressions in program operations are given by the following two
evaluation functions
[[·]]E : exp → ((State × State × SState) → (Z × SState))
[[·]]B : bexp → ((State × State × SState) → {true, false}).
The function [[·]]E takes as arguments an expression occurring on the right-hand side of
an assignment and the above three kinds of state, and returns the value of evaluating the
expression over the states along with the possible updated scheduler state. The function
[[·]]B takes as arguments a boolean expression and the local and global states, and returns
the valuation of the boolean expression. Figure 3 shows the semantics of expressions in
program operations, given by the evaluation functions [[·]]E and [[·]]B.

• Variable: [[x]]E(s, gs, S) = (v, S), where v = s(x) if x ∈ Dom(s) or v = gs(x) if x ∈ Dom(gs).
• Integer constant: [[c]]E(s, gs, S) = (c, S).
• Nondeterministic construct: [[∗]]E(s, gs, S) = (v, S), for some v ∈ Z.
• Binary arithmetic operation: [[exp1 ⊗ exp2]]E(s, gs, S) = (v1 ⊗ v2, S), where v1 = proj1([[exp1]]E(s, gs, S)) and v2 = proj1([[exp2]]E(s, gs, S)).
• Primitive function call: [[f(exp1, . . . , expn)]]E(s, gs, S) = (v, S′), where (v, S′) = f′(v1, . . . , vn, S) and vi = proj1([[expi]]E(s, gs, S)), for i = 1, . . . , n.
• Relational operation: [[exp1 ⊙ exp2]]B(s, gs, S) = v1 ⊙ v2, where v1 = proj1([[exp1]]E(s, gs, S)) and v2 = proj1([[exp2]]E(s, gs, S)).
• Binary boolean operation: [[bexp1 ⋆ bexp2]]B(s, gs, S) = v1 ⋆ v2, where v1 = [[bexp1]]B(s, gs, S) and v2 = [[bexp2]]B(s, gs, S).

Figure 3: Semantics of expressions in program operations.

To extract the result of an evaluation function, we use the standard projection function proji to get the i-th value of a tuple.
be defined similarly to their binary counterparts. For primitive functions, we assume that
every n-ary primitive function f is associated with an (n + 1)-ary function f ′ such that the
first n arguments of f ′ are the values resulting from the evaluations of the arguments of f ,
and the (n + 1)-th argument of f ′ is a scheduler state. The function f ′ returns a pair of
a value and an updated scheduler state.
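A small Python rendering of the evaluation function [[·]]E of Figure 3 (our own sketch: the nested-tuple expression encoding and the primitives dictionary are assumptions, not the representation used in the paper or in Kratos).

import random

# Sketch of [[.]]_E over a local state s, a global state gs, and a scheduler
# state S. Expressions are nested tuples, e.g. ("+", ("var", "x"), ("const", 1))
# or ("call", "wait_event", [("const", 0)]); `primitives` maps a primitive
# function name to a Python function taking (argument_values, S) and returning
# (value, updated_S), mirroring the function f' of Figure 3.
def eval_E(exp, s, gs, S, primitives):
    kind = exp[0]
    if kind == "var":
        name = exp[1]
        return (s[name] if name in s else gs[name]), S
    if kind == "const":
        return exp[1], S
    if kind == "nondet":                       # the construct *
        return random.randint(-2**31, 2**31 - 1), S
    if kind == "+":                            # one binary arithmetic operation
        v1, _ = eval_E(exp[1], s, gs, S, primitives)
        v2, _ = eval_E(exp[2], s, gs, S, primitives)
        return v1 + v2, S
    if kind == "call":                         # primitive function call
        name, args = exp[1], exp[2]
        values = [eval_E(a, s, gs, S, primitives)[0] for a in args]
        return primitives[name](values, S)     # may update the scheduler state
    raise ValueError("unknown expression kind: " + str(kind))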
Next, we define the meaning of a threaded program by using the operational semantics
in terms of the CFGs of the threads. The main ingredient of the semantics is the notion
of run-time configuration. Let GT = (L, E, l0 , Lerr ) be the CFG for a thread T . A thread
configuration γT of T is a pair (l, s), where l ∈ L and s is a state such that Dom(s) = LVar T .
Definition 3.1 (Configuration). A configuration γ of a threaded program P with N threads
T1, . . . , TN is a tuple ⟨γT1, . . . , γTN, gs, S⟩ where
• each γTi is a thread configuration of thread Ti ,
• gs is the state of global variables, and
• S is the scheduler state.
For succinctness, we often refer to the thread configuration γTi = (l, s) of the thread Ti as
the indexed pair (l, s)i. A configuration ⟨γT1, . . . , γTN, gs, S⟩ is an initial configuration for
a threaded program if, for each i = 1, . . . , N, the location l of γTi = (l, s) is the entry of the
CFG GTi of Ti, and S(stmain) = Running and S(stTi) ≠ Running for all Ti ≠ main.
Let SState No ⊂ SState be the set of scheduler states such that every state in SState No
has no running thread, and SState One ⊂ SState be the set of scheduler states such that
every state in SState One has exactly one running thread. A scheduler with a cooperative
scheduling policy can simply be defined as a function Sched : SState No → P(SState One).
The transitions of the semantics are of two forms: edge transitions γ →^op γ′ and scheduler transitions γ →^· γ′,
where γ, γ ′ are configurations and op is the operation labelling an edge. Figure 4 shows the
semantics of threaded programs. The first three rules show that transitions over edges of the
In the following rules, GTi = (L, E, l0, Lerr) denotes the CFG of the running thread Ti.

(1) Assumption: if (l, [bexp], l′) ∈ E, S(stTi) = Running, and [[bexp]]B(s, gs, S) = true, then
⟨γT1, . . . , (l, s)i, . . . , γTN, gs, S⟩ →^[bexp] ⟨γT1, . . . , (l′, s)i, . . . , γTN, gs, S⟩.

(2) Assignment to a local variable: if (l, x := exp, l′) ∈ E, S(stTi) = Running, x ∈ LVar Ti, [[x := exp]]E(s, gs, S) = (v, S′), and s′ = s[x ↦ v], then
⟨γT1, . . . , (l, s)i, . . . , γTN, gs, S⟩ →^{x:=exp} ⟨γT1, . . . , (l′, s′)i, . . . , γTN, gs, S′⟩.

(3) Assignment to a global variable: if (l, x := exp, l′) ∈ E, S(stTi) = Running, x ∈ GVar, [[x := exp]]E(s, gs, S) = (v, S′), and gs′ = gs[x ↦ v], then
⟨γT1, . . . , (l, s)i, . . . , γTN, gs, S⟩ →^{x:=exp} ⟨γT1, . . . , (l′, s)i, . . . , γTN, gs′, S′⟩.

(4) Scheduler: if S(stTi) ≠ Running for all i and S′ ∈ Sched(S), then
⟨γT1, . . . , γTN, gs, S⟩ →^· ⟨γT1, . . . , γTN, gs, S′⟩.

Figure 4: Operational semantics of threaded sequential programs.
CFG GT of a thread T are defined if and only if T is running, as indicated by the scheduler
state. The first rule shows that a transition over an edge labelled by an assumption is
defined if the boolean expression of the assumption evaluates to true. The second and third
rules show the updates of the states caused by the assignment. Finally, the fourth rule
describes the running of the scheduler.
Definition 3.2 (Computation Sequence, Run, Reachable Configuration). A computation
sequence γ0, γ1, . . . of a threaded program P is either a finite or an infinite sequence of
configurations of P such that, for all i, either γi →^op γi+1 for some operation op or γi →^· γi+1.
A run of a threaded program P is a computation sequence γ0 , γ1 , . . . such that γ0 is an
initial configuration. A configuration γ of P is reachable from a configuration γ ′ if there
is a computation sequence γ0 , . . . , γn such that γ0 = γ ′ and γn = γ. A configuration γ is
reachable in P if it is reachable from an initial configuration.
A configuration ⟨γT1, . . . , (l, s)i, . . . , γTN, gs, S⟩ of a threaded program P is an error
configuration if GTi = (L, E, l0, Lerr) is the CFG of Ti and l ∈ Lerr. We say that a threaded program P is
safe iff no error configuration is reachable in P ; otherwise, P is unsafe.
4. Explicit-Scheduler Symbolic-Thread (ESST)
In this section we present our novel technique for verifying threaded programs. We call
our technique Explicit-Scheduler Symbolic-Thread (ESST) [CMNR10]. This technique is a
CEGAR-based technique that combines explicit-state techniques with the lazy predicate
abstraction described in Section 2.3. In the same way as the lazy predicate abstraction,
ESST analyzes the data path of the threads by means of predicate abstraction and analyzes the flow of control of each thread with explicit-state techniques. Additionally, ESST
includes the scheduler as part of its model checking algorithm and analyzes the state of the
scheduler with explicit-state techniques.
4.1. Abstract Reachability Forest (ARF). The ESST technique is based on the onthe-fly construction and analysis of an abstract reachability forest (ARF). An ARF describes the reachable abstract states of the threaded program. It consists of connected
abstract reachability trees (ARTs), each describing the reachable abstract states of the running thread. The connections between one ART and the others in an ARF describe possible
thread interleavings from the currently running thread to the next running thread.
Let P be a threaded program with N threads T1 , . . . , TN . A thread region for the thread
Ti , for 1 ≤ i ≤ N , is a set of thread configurations such that the domain of the states of the
configurations is LVar Ti ∪ GVar. A global region for a threaded program P is a set of states
whose domain is LVar T1 ∪ · · · ∪ LVar TN ∪ GVar.
Definition 4.1 (ARF Node). An ARF node for a threaded program P with N threads
T1 , . . . , TN is a tuple
(⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S),
where (li , ϕi ), for i = 1, . . . , N , is a thread region for Ti , ϕ is a global region, and S is the
scheduler state.
Note that, by definition, the global region, along with the program locations and the
scheduler state, is sufficient for representing the abstract state of a threaded program.
However, such a representation will incur some inefficiencies in computing the predicate
abstraction. That is, without any thread regions, the precision is only associated with
the global region. Such a precision will undoubtedly contain a lot of predicates about the
variables occurring in the threaded program. However, when we are interested in computing
an abstraction of a thread region, we often do not need the predicates consisting only of
variables that are local to some other threads.
In ESST we can associate a precision with a location li of the CFG GT for thread T ,
denoted by πli , with a thread T , denoted by πT , or the global region ϕ, denoted by π. For a
precision πT and for every location l of GT , we have πT ⊆ πl for the precision πl associated
with the location l. Given a predicate ψ and a location l of the CFG GTi, and letting fvar(ψ)
be the set of free variables of ψ, we can add ψ into the following precisions:
• If fvar(ψ) ⊆ LVar Ti, then ψ can be added into π, πTi, or πl.
• If fvar(ψ) ⊆ LVar Ti ∪ GVar, then ψ can be added into π, πTi, or πl.
• If fvar(ψ) ⊆ LVar T1 ∪ · · · ∪ LVar TN ∪ GVar, then ψ can be added into π.
4.2. Primitive Executor and Scheduler. As indicated by the operational semantics of
threaded programs, besides computing abstract post-conditions, we need to execute calls
to primitive functions and to explore all possible schedules (or interleavings) during the
construction of an ARF. For the calls to primitive functions, we assume that the values
passed as arguments to the primitive functions are known statically. This is a limitation of
the current ESST algorithm, and we will address this limitation in our future work.
Recall that, SState denotes the set of scheduler states, and let PrimitiveCall be the set
of calls to primitive functions. To implement the semantic function [[exp]]E , where exp is a
primitive function call, we introduce the function
Sexec : (SState × PrimitiveCall ) → (Z × SState).
This function takes as inputs a scheduler state and a call f(~x) to a primitive function f, and
returns a value and an updated scheduler state resulting from the execution of f on the
arguments ~x. That is, Sexec(S, f (~x)) essentially computes [[f (~x)]]E (·, ·, S). Since we assume
that the values of ~x are known statically, we deliberately ignore, by ·, the states of local
and global variables.
Example 4.2. Let us consider a primitive function call wait event(e) that suspends a
running thread T and makes the thread wait for a notification of an event e. Let evT be
the variable in the scheduler state that keeps track of the event whose notification is waited
for by T. The state S′ of (·, S′) = Sexec(S, wait event(e)) is obtained from the state S by
changing the status of the running thread to Waiting and noting that the thread is waiting for
event e, that is, S′ = S[sT ↦ Waiting, evT ↦ e].
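A sketch of Sexec for the wait_event primitive of Example 4.2, with the scheduler state modelled as a plain Python dictionary; the key names are our own convention.

WAITING, RUNNABLE, RUNNING = "Waiting", "Runnable", "Running"

def sexec_wait_event(S, thread, event):
    # Sexec(S, wait_event(e)) for the running thread `thread`: suspend the
    # thread and record the event whose notification it is waiting for.
    S2 = dict(S)                    # scheduler states are treated as values
    S2["st_" + thread] = WAITING
    S2["ev_" + thread] = event
    return None, S2                 # wait_event has no meaningful return value

# Example: the running thread "producer" starts waiting for event "e".
S = {"st_producer": RUNNING, "st_consumer": RUNNABLE}
_, S_new = sexec_wait_event(S, "producer", "e")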
Finally, to implement the scheduler function Sched in the operational semantics, and
to explore all possible schedules, we introduce the function
Sched : SState No → P(SState One).
This function takes as an input a scheduler state and returns a set of scheduler states that
represent all possible schedules.
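A sketch of Sched for a basic cooperative policy: from a scheduler state with no running thread, produce one successor state per runnable thread (our own simplification; a real scheduler, e.g. for FairThreads, must also handle events, delays, and fairness).

def sched(S):
    # Sched : SState_No -> P(SState_One): one candidate schedule per runnable thread.
    runnable = [k for k, v in S.items() if k.startswith("st_") and v == "Runnable"]
    successors = []
    for key in runnable:
        S2 = dict(S)
        S2[key] = "Running"         # exactly one running thread in each successor
        successors.append(S2)
    return successors

# With two runnable threads, plain ESST must explore both schedules.
print(len(sched({"st_t1": "Runnable", "st_t2": "Runnable"})))   # prints: 2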
4.3. ARF Construction. We expand an ARF node by unwinding the CFG of the running
thread and by running the scheduler. Given an ARF node
(⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S),
we expand the node by the following rules [CMNR10]:
E1. If there is a running thread Ti in S such that the thread performs an operation op and
(li , op, li′ ) is an edge of the CFG GTi of thread Ti , then we have two cases:
• If op is not a call to a primitive function, then the successor node is
(⟨l1, ϕ′1⟩, . . . , ⟨li′, ϕ′i⟩, . . . , ⟨lN, ϕ′N⟩, ϕ′, S),
where
(i) ϕ′i = SP^{πli′}_{op}(ϕi ∧ ϕ) and πli′ is the precision associated with li′,
(ii) ϕ′j = SP^{πlj}_{havoc(op)}(ϕj ∧ ϕ) for j ≠ i, where πlj is the precision associated with lj, if op possibly updates global variables; otherwise ϕ′j = ϕj, and
(iii) ϕ′ = SP^{π}_{op}(ϕ) and π is the precision associated with the global region.
The function havoc collects all global variables possibly updated by op, and builds
a new operation where these variables are assigned fresh variables. The edge
connecting the original node and the resulting successor node is labelled by the
operation op.
• If op is a primitive function call x := f(~y), then the successor node is
(⟨l1, ϕ′1⟩, . . . , ⟨li′, ϕ′i⟩, . . . , ⟨lN, ϕ′N⟩, ϕ′, S′),
where
(i) (v, S′) = Sexec(S, f(~y)),
(ii) op′ is the assignment x := v,
(iii) ϕ′i = SP^{πli′}_{op′}(ϕi ∧ ϕ) and πli′ is the precision associated with li′,
(iv) ϕ′j = SP^{πlj}_{havoc(op′)}(ϕj ∧ ϕ) for j ≠ i, where πlj is the precision associated with lj, if op possibly updates global variables; otherwise ϕ′j = ϕj, and
(v) ϕ′ = SP^{π}_{op′}(ϕ) and π is the precision associated with the global region.
The edge connecting the original node and the resulting successor node is labelled
by the operation op′ .
E2. If there is no running thread in S, then, for each S′ ∈ Sched(S), we create a successor
node
(⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S′).
We call such a connection between two nodes an ARF connector.
Note that, the rule E1 constructs the ART that belongs to the running thread, while the
connections between the ARTs that are established by ARF connectors in the rule E2
represent possible thread interleavings or context switches.
An ARF node (⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S) is the initial node if, for all i = 1, . . . , N, the
location li is the entry location of the CFG GTi of thread Ti and ϕi is true, ϕ is true, and
S(smain) = Running and S(sTi) ≠ Running for all Ti ≠ main.
We construct an ARF by applying the rules E1 and E2 starting from the initial node.
A node can be expanded if the node is not covered by other nodes and if the conjunction
of all its thread regions and the global region is satisfiable.
Definition 4.3 (Node Coverage). An ARF node (⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S) is covered by
another ARF node (⟨l1′, ϕ′1⟩, . . . , ⟨lN′, ϕ′N⟩, ϕ′, S′) if li = li′ for i = 1, . . . , N, S = S′, and
ϕ ⇒ ϕ′ and (ϕ1 ⇒ ϕ′1) ∧ · · · ∧ (ϕN ⇒ ϕ′N) are valid.
An ARF is complete if it is closed under the expansion of rules E1 and E2. An ARF
node (⟨l1, ϕ1⟩, . . . , ⟨lN, ϕN⟩, ϕ, S) is an error node if ϕ ∧ ϕ1 ∧ · · · ∧ ϕN is satisfiable and at
least one of the locations l1, . . . , lN is an error location. An ARF is safe if it is complete
and does not contain any error node.
4.4. Counter-example Analysis. Similar to the lazy predicate abstraction for sequential
programs, during the construction of an ARF, when we reach an error node, we check if
the path in the ARF from the initial node to the error node is feasible.
Definition 4.4 (ARF Path). An ARF path ρ̂ = ρ1 , κ1 , ρ2 , . . . , κn−1 , ρn is a finite sequence
of ART paths ρi connected by ARF connectors κj , such that
(1) ρi , for i = 1, . . . , n, is an ART path,
(2) κj , for j = 1, . . . , n − 1, is an ARF connector, and
(3) for every j = 1, . . . , n − 1, such that ρj = ε^j_1, . . . , ε^j_m and ρj+1 = ε^{j+1}_1, . . . , ε^{j+1}_l, the
target node of ε^j_m is the source node of κj and the source node of ε^{j+1}_1 is the target
node of κj.
A suppressed ARF path sup(ρ̂) of ρ̂ is the sequence ρ1 , . . . , ρn .
A counter-example path ρ̂ is an ARF path such that the source node of ε1 of ρ1 =
ε1 , . . . , εm is the initial node, and the target node of ε′k of ρn = ε′1 , . . . , ε′k is an error node.
Let σsup(ρ̂) denote the sequence of operations labelling the edges in sup(ρ̂). We say that a
counter-example path ρ̂ is feasible if and only if SP σsup(ρ̂) (true) is satisfiable. Similar to the
case of sequential programs, one can check the feasibility of ρ̂ by checking the satisfiability
of the path formula corresponding to the SSA form of σsup(ρ̂) .
Example 4.5. Suppose that the top path in Figure 5 is a counter-example path (the target
node of the last edge is an error node). The bottom path is the suppressed version of the
top one. The dashed edge is an ARF connector. To check feasibility of the path by means of
Figure 5: An example of a counter-example path (top) and its suppressed version (bottom), with edges labelled x := x+y, y := 7, x := z, and [x < y+z].
satisfiability of the corresponding path formula, we check the satisfiability of the following
formula:
x1 = x0 + y0 ∧ y1 = 7 ∧ x2 = z0 ∧ x2 < y1 + z0.
4.5. ARF Refinement. When the counter-example path ρ̂ is infeasible, we need to rule
out such a path by refining the precision of nodes in the ARF. ARF refinement amounts to
finding additional predicates to refine the precisions. Similar to the case of sequential programs, these additional predicates can be extracted from the path formula corresponding to
sequence σsup(ρ̂) by using the Craig interpolant refinement method described in Section 2.3.
As described in Section 4.1, newly discovered predicates can be added to the precisions
associated with locations, threads, or the global region. Consider again the Craig interpolant
method in Section 2.3. Let ε1 , . . . , εn be the sequence of edges labelled by the operations
op1 , . . . , opn of σsup(ρ̂) , that is, for i = 1, . . . , n, the edge εi is labelled by opi . Let p be a
predicate extracted from the interpolant ψ k of (ϕ1,k , ϕk+1,n ) for 1 ≤ k < n, and let the
nodes
(⟨l1, ϕ1⟩, . . . , ⟨li, ϕi⟩, . . . , ⟨lN, ϕN⟩, ϕ, S)
and
(⟨l1, ϕ′1⟩, . . . , ⟨li′, ϕ′i⟩, . . . , ⟨lN, ϕ′N⟩, ϕ′, S′)
be, respectively, the source and target nodes of the edge εk such that the running thread
in the source node’s scheduler state is the thread Ti . If p contains only variables local to
Ti, then we can add p to the precision associated with the location li′, to the precision
associated with Ti, or to the precision associated with the global region. Other precision
refinement strategies are applicable. For example, one might add a predicate into the
precision associated with the global region if and only if the predicate contains variables
local to several threads.
Similar to the ART refinement in the case of sequential programs, once the precisions
are refined, we refine the ARF by removing the infeasible counter-example path or by
removing part of the ARF that contains the infeasible path, and then reconstruct again the
ARF using the refined precisions.
4.6. Havocked Operations. Computing the abstract strongest post-conditions with respect to the havocked operation in the rule E1 is necessary, not only to keep the regions of
the ARF node consistent, but, more importantly, to maintain soundness: the analysis must never report safe
for an unsafe case. Suppose that the region of a non-running thread T is the formula x = g,
where x is a variable local to T and g is a shared global variable. Suppose further that the
global region is true. If the running thread T ′ updates the value of g with, for example,
the assignment g := w, for some variable w local to T ′ , then the region x = g of T might
no longer hold, and has to be invalidated. Otherwise, when T resumes and, for example,
checks an assertion assert(x = g), no assertion violation can be detected. One way to
keep the region of T consistent is to update the region using the havoc(g := w) operation,
as shown in the rule E1. That is, we compute the successor region of T as SP^{πl}_{g:=a}(x = g),
where a is a fresh variable and l is the current location of T. The fresh variable a essentially
denotes an arbitrary value that is assigned to g.
Note that, by using a havoc(op) operation, we do not leak variables local to the running
thread when we update the regions of non-running threads. Unfortunately, the use of
havoc(op) can cause loss of precision. One way to address this issue is to add predicates
containing local and global variables to the precision associated with the global region. An
alternative approach, as described in [DKKW11], is to simply use the operation op (leaking
the local variables) when updating the regions of non-running threads.
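A sketch of the havoc transformation used in rule E1, with operations modelled as simple tuples and the set of global variables passed explicitly; this is our own rendering, not the Kratos implementation.

import itertools

_fresh_counter = itertools.count()

def havoc(op, global_vars):
    # Build havoc(op): for every global variable that op may update, produce an
    # assignment of a fresh variable to it. Operations are modelled as
    # ("assign", x, exp) or ("assume", bexp); only assignments write variables.
    written = {op[1]} if op[0] == "assign" else set()
    return [("assign", g, ("var", "__fresh_%d" % next(_fresh_counter)))
            for g in sorted(written & set(global_vars))]

# havoc of `g := w` with g global: a single assignment g := fresh
print(havoc(("assign", "g", ("var", "w")), global_vars={"g"}))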
4.7. Summary of ESST. The ESST algorithm takes a threaded program P as an input
and, when its execution terminates, returns either a feasible counter-example path and
reports that P is unsafe, or a safe ARF and reports that P is safe. The execution of
ESST(P ) can be illustrated in Figure 6:
(1) Start with an ARF consisting only of the initial node, as shown in Figure 6(a).
(2) Pick an ARF node that can be expanded and apply the rules E1 or E2 to grow the ARF,
as shown in Figures 6(b) and 6(c). The different colors denote the different threads to
which the ARTs belong.
(3) If we reach an error node, as shown by the red line in Figure 6(d), we analyze the
counter-example path.
(a) If the path is feasible, then report that P is unsafe.
(b) If the path is spurious, then refine the ARF:
(i) Discover new predicates to refine abstractions.
(ii) Undo part of the ARF, as shown in Figure 6(e).
(iii) Goto (2) to reconstruct the ARF.
(4) If the ARF is safe, as shown in Figure 6(f), then report that P is safe.
4.8. Correctness of ESST. To prove the correctness of ESST, we need to introduce
several notions and notations that relate the ESST algorithm with the operational semantics
in Section 3. Given two states s1 and s2 whose domains are disjoint, we denote by s1 ∪ s2
the union of two states such that Dom(s1 ∪ s2 ) is Dom(s1 ) ∪ Dom(s2 ), and, for every
x ∈ Dom(s1 ∪ s2), we have (s1 ∪ s2)(x) = s1(x) if x ∈ Dom(s1), and (s1 ∪ s2)(x) = s2(x) otherwise.
Let P be a threaded program with N threads, and γ be a configuration
⟨(l1, s1), . . . , (lN, sN), gs, S⟩
of P. Let η be an ARF node
(⟨l1′, ϕ1⟩, . . . , ⟨lN′, ϕN⟩, ϕ, S′)
for P. We say that the configuration γ satisfies the ARF node η, denoted by γ |= η, if and
only if for all i = 1, . . . , N, we have li = li′ and si ∪ gs |= ϕi, s1 ∪ · · · ∪ sN ∪ gs |= ϕ, and
S = S′.
Figure 6: ARF construction in ESST (panels (a)-(f); the different colors denote the ARTs of the different threads, starting from the ART of the main thread).
By the above definition, it is easy to see that, for any initial configuration γ0 of P , we
have γ0 |= η0 for the initial ARF node η0 . In the sequel we refer to the configurations of
P and the ARF nodes (or connectors) for P when we speak about configurations and ARF
nodes (or connectors), respectively.
We now show that the node expansion rules E1 and E2 create successor nodes that are
over-approximations of the configurations reachable by performing operations considered in
the rules.
Lemma 4.6. Let η and η ′ be ARF nodes for a threaded program P such that η ′ is a successor
node of η. Let γ be a configuration of P such that γ |= η. The following properties hold:
(1) If η′ is obtained from η by the rule E1 with the performed operation op, then, for any
configuration γ′ of P such that γ →^op γ′, we have γ′ |= η′.
(2) If η′ is obtained from η by the rule E2, then, for any configuration γ′ of P such that
γ →^· γ′ and the scheduler states of η′ and γ′ coincide, we have γ′ |= η′.
Let ε be an ART edge with source node
η = (⟨l1, ϕ1⟩, . . . , ⟨li, ϕi⟩, . . . , ⟨lN, ϕN⟩, ϕ, S)
and target node
η′ = (⟨l1, ϕ′1⟩, . . . , ⟨li′, ϕ′i⟩, . . . , ⟨lN, ϕ′N⟩, ϕ′, S′),
such that S(sTi) = Running and, for all j ≠ i, we have S(sTj) ≠ Running. Let GTi =
(L, E, l0, Lerr) be the CFG for Ti such that (li, op, li′) ∈ E. Let γ and γ′ be configurations.
We denote by γ →^ε γ′ if γ |= η, γ′ |= η′, and γ →^op γ′. Note that, the operation op is the
operation labelling the edge of the CFG, not the one labelling the ART edge ε. Similarly, we
denote by γ →^κ γ′ for an ARF connector κ if γ |= η, γ′ |= η′, and γ →^· γ′. Let ρ̂ = ξ1, . . . , ξm
be an ARF path. That is, for each i = 1, . . . , m, the element ξi is either an ART edge or an
ARF connector. We denote by γ →^ρ̂ γ′ if there exists a computation sequence γ1, . . . , γm+1
such that γi →^{ξi} γi+1 for all i = 1, . . . , m, and γ = γ1 and γ′ = γm+1.
In Section 3 the notion of strongest post-condition is defined as a set of reachable states
after executing some operation. We now try to relate the notion of configuration with the
notion of strongest post-condition. Let γ be a configuration
γ = ⟨(l1, s1), . . . , (li, si), . . . , (lN, sN), gs, S⟩,
and ϕ be a formula whose free variables range over Dom(s1) ∪ · · · ∪ Dom(sN) ∪ Dom(gs). We say
that the configuration satisfies the formula ϕ, denoted by γ |= ϕ, if s1 ∪ · · · ∪ sN ∪ gs |= ϕ.
Suppose that in the above configuration γ we have S(sTi) = Running and S(sTj) ≠ Running
for all j ≠ i. Let GTi = (L, E, l0, Lerr) be the CFG for Ti such that (li, op, li′) ∈ E. Let ôp
be op if op does not contain any primitive function call; otherwise, let ôp be op′ as in the second
case of the expansion rule E1. Then, for any configuration
γ′ = ⟨(l1, s1), . . . , (li′, s′i), . . . , (lN, sN), gs′, S′⟩,
such that γ →^op γ′, we have γ′ |= SP ôp(ϕ). Note that, the scheduler states S and S′ are not
constrained by, respectively, ϕ and SP ôp(ϕ), and so they can be different.
When ESST(P ) terminates and reports that P is safe, we require that, for every
configuration γ reachable in P, there is a node in the constructed ARF F such that the configuration satisfies the
node. We denote by Reach(P ) the set of configurations reachable in P , and by Nodes(F)
the set of nodes in F.
Theorem 4.7 (Correctness). Let P be a threaded program. For every terminating execution
of ESST(P ), we have the following properties:
(1) If ESST(P) returns a feasible counter-example path ρ̂, then we have γ →^ρ̂ γ′ for an
initial configuration γ and an error configuration γ′ of P.
(2) If ESST(P ) returns a safe ARF F, then for every configuration γ ∈ Reach(P ), there
is an ARF node η ∈ Nodes(F) such that γ |= η.
5. ESST + Partial-Order Reduction
The ESST algorithm often has to explore a large number of possible thread interleavings.
However, some of them might be redundant because the order of interleavings of some
threads is irrelevant. Given N threads such that each of them accesses a disjoint set of
variables, there are N! possible interleavings that ESST has to explore. The constructed
ARF will consist of 2^N abstract states (or nodes). Unfortunately, the more abstract states
to explore, the more computations of abstract strongest post-conditions are needed, and
the more coverage checks are involved. Moreover, the more interleavings to explore, the
more possible spurious counter-example paths to rule out, and thus the more refinements are
needed. As refinements result in keeping track of additional predicates, the computations of
abstract strongest post-conditions become expensive. Consequently, exploring all possible
interleavings degrades the performance of ESST and leads to state explosion.
Partial-order reduction techniques (POR) [God96, Pel93, Val91] have been successfully
applied in explicit-state software model checkers like SPIN [Hol05] and VeriSoft [God05]
to avoid exploring redundant interleavings. POR has also been applied to symbolic model
checking techniques as shown in [KGS06, WYKG08, ABH+ 01]. In this section we will extend
the ESST algorithm with POR techniques. However, as we will see, such an integration
is not trivial because we need to ensure that in the construction of the ARF the POR
techniques do not make ESST unsound.
5.1. Partial-Order Reduction (POR). Partial-order reduction (POR) is a model checking technique that is aimed at combating the state explosion by exploring only a representative subset of all possible interleavings. POR exploits the commutativity of concurrent
transitions that result in the same state when they are executed in different orders.
We present POR using the standard notions and notations used in [God96, CGP99].
We model a concurrent program as a transition system M = (S, S0 , T ), where S is the finite
set of states, S0 ⊂ S is the set of initial states, and T is a set of transitions such that for
each α ∈ T, we have α ⊂ S × S. We say that α(s, s′) holds, and often write it as s →^α s′,
if (s, s′) ∈ α. A state s′ is a successor of a state s if s →^α s′ for some transition α ∈ T. In
the following we will only consider deterministic transitions, and often write s′ = α(s) for
α(s, s′). A transition α is enabled in a state s if there is a state s′ such that α(s, s′) holds.
The set of transitions enabled in a state s is denoted by enabled(s). A path from a state s
in a transition system is a finite or infinite sequence s0 →^{α0} s1 →^{α1} · · · such that s = s0 and
si →^{αi} si+1 for all i. A path is empty if the sequence consists only of a single state. The
length of a finite path is the number of transitions in the path.
Let M = (S, S0, T) be a transition system; we denote by Reach(S0, T) ⊆ S the set of
states reachable from the states in S0 by the transitions in T: for a state s ∈ Reach(S0, T),
there is a finite path s0 →^{α0} · · · →^{αn−1} sn in the transition system such that s0 ∈ S0 and s = sn. In this work we
are interested in verifying safety properties in the form of program assertions. To this end,
we assume that there is a set Terr ⊆ T of error transitions such that the set
EM,Terr = {s ∈ S | ∃s′ ∈ S.∃α ∈ Terr . α(s′ , s) holds }
is the set of error states of M with respect to Terr . A transition system M = (S, S0 , T ) is
safe with respect to the set Terr ⊆ T of error transitions iff Reach(S0 , T ) ∩ EM,Terr = ∅.
Selective search in POR exploits the commutativity of concurrent transitions. The
concept of commutativity of concurrent transitions can be formulated by defining an independence relation on pairs of transitions.
Definition 5.1 (Independence Relation, Independent Transitions). An independence relation I ⊆ T × T is a symmetric, anti-reflexive relation such that for each state s ∈ S and for
each (α, β) ∈ I the following conditions are satisfied:
Enabledness: If α is in enabled (s), then β is in enabled (s) iff β is in enabled (α(s)).
Commutativity: If α and β are in enabled (s), then α(β(s)) = β(α(s)).
We say that two transitions α and β are independent of each other if for every state s they
satisfy the enabledness and commutativity conditions. We also say that two transitions
α and β are independent of each other in a state s if they satisfy the enabledness and
commutativity conditions in s.
In the sequel we will use the notion of valid dependence relation to select a representative
subset of transitions that need to be explored.
Definition 5.2 (Valid Dependence Relation). A valid dependence relation D ⊆ T × T is
a symmetric, reflexive relation such that for every (α, β) ∉ D, the transitions α and β are
independent of each other.
5.1.1. The Persistent Set Approach. To reduce the number of possible interleavings, in every
state visited during the state space exploration one only explores a representative subset
of transitions that are enabled in that state. However, to select such a subset we have to
avoid possible dependencies that can happen in the future. To this end, we appeal to the
notion of persistent set [God96].
Definition 5.3 (Persistent Set). A set P ⊆ T of enabled transitions in a state s is persistent
in s if, for every finite non-empty path s = s0 →^{α0} s1 →^{α1} · · · →^{αn−1} sn →^{αn} sn+1 such that αi ∉ P
for all i = 0, . . . , n, we have αn independent of any transition in P in sn.
Note that the persistent set in a state is not unique. To guarantee the existence of
a successor state, we impose the successor-state condition on the persistent set: the persistent
set in s is empty iff so is enabled(s). In the sequel we assume that persistent sets satisfy the
successor-state condition. We say that a state s is fully expanded if the persistent set in s
equals enabled (s). It is easy to see that, for any transition α not in the persistent set P in
a state s, the transition α is disabled in s or independent of any transition in P .
We denote by Reach red (S0 , T ) ⊆ S the set of states reachable from the states in S0
by the transitions in T such that, during the state space exploration, in every visited state
we only explore the transitions in the persistent set in that state. That is, for a state
s ∈ Reach red(S0, T), there is a finite path s0 →^{α0} · · · →^{αn−1} sn in the transition system such
that s0 ∈ S0 and s = sn, and αi is in the persistent set of si, for i = 0, . . . , n − 1. It is easy
to see that Reach red (S0 , T ) ⊆ Reach(S0 , T ).
To preserve safety properties of a transition system, we need to guarantee that the
reduction by means of persistent sets does not remove all interleavings that lead to an
error state. To this end, we impose the cycle condition on Reach red (S0 , T ) [CGP99, Pel93]:
a cycle is not allowed if it contains a state in which a transition α is enabled, but α is
never included in the persistent set of any state s on the cycle. That is, if there is a cycle
s0 →^{α0} · · · →^{αn−1} sn = s0 induced by the states s0, . . . , sn−1 in Reach red(S0, T) such that αi is
in the persistent set of si, for i = 0, . . . , n − 1, and α ∈ enabled(sj) for some 0 ≤ j < n, then α must
be in the persistent set of some of s0, . . . , sn−1.
Theorem 5.4. A transition system M = (S, S0, T) is safe w.r.t. a set Terr ⊆ T of error
transitions iff Reach red(S0, T), computed so that the cycle condition is satisfied, does not contain any error
state from EM,Terr.
5.1.2. The Sleep Set Approach. The sleep set POR technique exploits independencies of
enabled transitions in the current state. For example, suppose that in some state s there
are two enabled transitions α and β, and they are independent of each other. Suppose
further that the search explores α first from s. Then, when the search explores β from
s, such that s →^β s′ for some state s′, we associate with s′ a sleep set containing only α.
From s′ the search only explores transitions that are not in the sleep set of s′ . That is,
although the transition α is still enabled in s′ , it will not be explored. Both persistent
set and sleep set techniques are orthogonal and complementary, and thus can be applied
simultaneously. Note that the sleep set technique only removes transitions, and not states.
Thus, Theorem 5.4 still holds when the sleep set technique is applied.
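A compact sketch of the sleep set bookkeeping during a depth-first exploration; independent(a, b) is an assumed oracle derived from a valid dependence relation, and the interaction with state caching is deliberately simplified.

def explore(state, sleep, enabled, successor, independent, visited=None):
    # DFS with sleep sets: transitions in `sleep` are pruned, and the sleep set
    # passed to a successor is (sleep + already explored alternatives) restricted
    # to the transitions that are independent of the transition just taken.
    visited = set() if visited is None else visited
    if state in visited:
        return visited
    visited.add(state)
    explored = []
    for t in enabled(state):
        if t in sleep:
            continue                                  # pruned by the sleep set
        child_sleep = {u for u in (sleep | set(explored)) if independent(t, u)}
        explore(successor(state, t), child_sleep, enabled, successor,
                independent, visited)
        explored.append(t)
    return visited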
5.2. Applying POR to ESST. The key idea of applying POR to ESST is to select a
representative subset of scheduler states output by the scheduler in ESST. That is, instead
of creating successor nodes with all scheduler states from {S1 , . . . , Sn } = Sched(S), for some
state S, we create successor nodes with the representative subset of {S1 , . . . , Sn }. However,
such an application is non-trivial. The ESST algorithm is based on the construction of
an ARF that describes the reachable abstract states, while the exposition of POR above
is based on the analysis of reachable concrete states. As we will see later, some POR
properties that hold in the concrete state space do not hold in the abstract state space.
Nevertheless, in applying POR to ESST one needs to guarantee that the original ARF is
safe if and only if the reduced ARF, obtained by the restriction on the scheduler’s output,
is safe. In particular, the construction of reduced ARF has to check if the cycle condition
is satisfied in its concretization.
To integrate POR techniques into the ESST algorithm, we first need to identify fragments in the threaded program that count as transitions in the transition system. In the
previous description of POR the execution of a transition is atomic, that is, its execution
cannot be interleaved with the executions of other transitions. We introduce the notion of
atomic block as the notion of transition in the threaded program. Intuitively, an atomic
block is a block of operations between calls to primitive functions that can suspend the
thread. Let us call such primitive functions blocking functions.
An atomic block of a thread is a rooted subgraph of the CFG such that the subgraph
satisfies the following conditions:
(1) its unique entry is the entry of the CFG or the location that immediately follows a call
to a blocking function;
(2) each of its exits is either the exit of the CFG or a location that immediately follows a call to a
blocking function; and
(3) there is no call to a blocking function in any CFG path from the entry to an exit except
the one that precedes the exit.
Figure 7: Identifying atomic blocks: (a) the CFG of a thread with a call x := wait(. . .) to a blocking function; (b) and (c) the two atomic blocks of the thread.
Note that an atomic block has a unique entry, but can have multiple exits. We often identify
an atomic block by its entry. Furthermore, we denote by ABlock the set of atomic blocks.
Example 5.5. Consider a thread whose CFG is depicted in Figure 7(a). Let wait(. . .) be
the only call to a blocking function in the CFG. Figures 7(b) and (c) depict the atomic
blocks of the thread. The atomic block in Figure 7(b) starts from l0 and exits at l5 and l7 ,
while the one in Figure 7(c) starts from l5 and exits at l5 and l7 .
Note that, an atomic block can span over multiple basic blocks or even multiple large
blocks in the basic block or large block encoding [BCG+ 09]. In the sequel we will use the
terms transition and atomic block interchangeably.
Prior to computing persistent sets, we need to compute valid dependence relations.
The criteria for two transitions being dependent are different from one application domain
to the other. Cooperative threads in many embedded system domains employ event-based
synchronizations through event waits and notifications. Different domains can have different
types of event notification. For generality, we anticipate two kinds of notification: immediate
and delayed notifications. An immediate notification is materialized immediately at the
current time or at the current cycle (for cycle-based semantics). Threads that are waiting
for the notified events are made runnable upon the notification. A delayed notification is
scheduled to be materialized at some future time or at the end of the current cycle. In some
domains delayed notifications can be cancelled before they are triggered.
For example, in a system design language that supports event-based synchronization, a
pair (α, β) of atomic blocks is in a valid dependence relation if one of the following criteria
is satisfied: (1) the atomic block α contains a write to a shared (or global) variable g, and
the atomic block β contains a write or a read to g; (2) the atomic block α contains an
immediate notification of an event e, and the atomic block β contains a wait for e; (3) the
atomic block α contains a delayed notification of an event e, and the atomic block β contains
a cancellation of a notification of e. Note that the first criterion is a standard criterion for
two blocks to become dependent on each other. That is, the order of executions of the
two blocks is relevant because different orders yield different values assigned to variables.
The second and the third criteria are specific to event-based synchronization language. An
event notification can make runnable a thread that is waiting for a notification of the event.
A waiting thread misses an event notification if the thread waited for such a notification
Algorithm 1 Persistent sets.
Input: a set Ben of enabled atomic blocks.
Output: a persistent set P .
(1) Let B := {α}, where α ∈ Ben .
(2) For each atomic block α ∈ B:
(a) If α ∈ Ben (α is enabled):
• Add into B every atomic block β such that (α, β) ∈ D.
(b) If α 6∈ Ben (α is disabled):
• Add into B a necessary enabling set for α with Ben .
(3) Repeat step 2 until no more atomic blocks can be added into B.
(4) P := B ∩ Ben .
after another thread had made the notification. Thus, the order of executions of atomic
blocks containing event waits and event notifications is relevant. Similarly for the delayed
notification in the third criterion. Given criteria for being dependent, one can use static
analysis techniques to compute a valid dependence relation.
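To make the three criteria concrete, the following C sketch checks whether two atomic blocks are dependent, assuming that a syntactic analysis has already produced, for each block, an over-approximate summary of the shared variables it reads and writes and of the events it waits for, notifies, or cancels. The summary structure and all field names are illustrative assumptions, not the data structures of the implementation.

#include <stdbool.h>
#include <stdio.h>

#define MAX_VAR   8
#define MAX_EVENT 8

/* Static summary of one atomic block, as could be produced by a simple
   syntactic analysis of its operations (an over-approximation). */
struct summary {
    bool reads[MAX_VAR], writes[MAX_VAR];   /* shared (global) variables        */
    bool waits[MAX_EVENT];                  /* waits for event e                */
    bool notifies_now[MAX_EVENT];           /* immediate notification of e      */
    bool notifies_later[MAX_EVENT];         /* delayed notification of e        */
    bool cancels[MAX_EVENT];                /* cancels a delayed notification   */
};

/* Criteria (1)-(3) from the text, checked in the direction a -> b. */
static bool one_way(const struct summary *a, const struct summary *b)
{
    for (int v = 0; v < MAX_VAR; v++)                              /* (1) */
        if (a->writes[v] && (b->writes[v] || b->reads[v])) return true;
    for (int e = 0; e < MAX_EVENT; e++) {
        if (a->notifies_now[e] && b->waits[e])     return true;    /* (2) */
        if (a->notifies_later[e] && b->cancels[e]) return true;    /* (3) */
    }
    return false;
}

/* D is kept symmetric by checking both directions. */
static bool dependent(const struct summary *a, const struct summary *b)
{
    return one_way(a, b) || one_way(b, a);
}

int main(void)
{
    struct summary alpha = {0}, beta = {0};
    alpha.notifies_now[0] = true;   /* alpha immediately notifies event 0 */
    beta.waits[0] = true;           /* beta waits for event 0             */
    printf("dependent: %d\n", dependent(&alpha, &beta));   /* prints 1 */
    return 0;
}

Computing a valid dependence relation D then amounts to calling dependent on every pair of atomic blocks.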
To have small persistent sets, we need to know whether a disabled transition that has
a dependence relation with the currently enabled ones can be made enabled in the future.
To this end, we use the notion of necessary enabling set introduced in [God96].
Definition 5.6 (Necessary Enabling Set). Let M = (S, S0 , T ) be a transition system such
that a transition α ∈ T is disabled in a state s ∈ S. A set Tα,s ⊆ T is a necessary enabling
set for α in s if for every finite path s = s0 →α0 s1 →α1 · · · →αn−1 sn in M such that α is
disabled in si , for all 0 ≤ i < n, but is enabled in sn , a transition αj , for some 0 ≤ j ≤ n − 1,
is in Tα,s . A set Tα,Ten ⊆ T , for Ten ⊆ T , is a necessary enabling set for α with Ten if Tα,Ten
is a necessary enabling set for α in every state s such that Ten is the set of enabled transitions
in s.
Intuitively, a necessary enabling set Tα,s for a transition α in a state s is a set of transitions
such that α cannot become enabled in the future before at least one transition in Tα,s is
executed. For instance, in a purely event-based setting, an atomic block that is blocked
waiting for an event e typically cannot become enabled before some block that notifies e is
executed, so the set of blocks notifying e is a necessary enabling set for it.
Algorithm 1 computes persistent sets using a valid dependence relation D. It is easy to
see that the persistent set computed by the algorithm satisfies the successor-state condition.
The algorithm is also a variant of the stubborn set algorithm presented in [God96], that is,
we use a valid dependence relation as the interference relation used in the latter algorithm.
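A minimal C rendering of Algorithm 1 is sketched below. It assumes that the dependence relation D is given as a Boolean matrix over atomic blocks and that a necessary-enabling-set oracle is available; the stub used here conservatively adds every block, which is always sound but gives no reduction, whereas a real implementation would return, for instance, only the blocks that can notify the event a disabled block is waiting for. The encoding as fixed-size arrays is an illustrative simplification.

#include <stdbool.h>
#include <stdio.h>

#define MAX_BLOCK 8

static bool D[MAX_BLOCK][MAX_BLOCK];   /* valid dependence relation */
static bool enabled[MAX_BLOCK];        /* membership in B_en        */
static int  n_blocks;

/* Necessary enabling set for a disabled block b: this stub conservatively
   adds every block (always sound); it returns true if B grew. */
static bool add_nes(bool *B, int b)
{
    bool added = false;
    (void)b;
    for (int i = 0; i < n_blocks; i++)
        if (!B[i]) { B[i] = true; added = true; }
    return added;
}

/* Algorithm 1: seed B with one enabled block and saturate. */
static void persistent_set(int seed, bool *P)
{
    bool B[MAX_BLOCK] = {false};
    bool changed = true;

    B[seed] = true;
    while (changed) {                                   /* step (3)  */
        changed = false;
        for (int a = 0; a < n_blocks; a++) {
            if (!B[a]) continue;
            if (enabled[a]) {                           /* step (2a) */
                for (int b = 0; b < n_blocks; b++)
                    if (D[a][b] && !B[b]) { B[b] = true; changed = true; }
            } else if (add_nes(B, a)) {                 /* step (2b) */
                changed = true;
            }
        }
    }
    for (int i = 0; i < n_blocks; i++)                  /* step (4)  */
        P[i] = B[i] && enabled[i];
}

int main(void)
{
    bool P[MAX_BLOCK];
    n_blocks = 3;
    enabled[0] = enabled[1] = enabled[2] = true;
    D[0][1] = D[1][0] = true;          /* blocks 0 and 1 are dependent */

    persistent_set(0, P);
    for (int i = 0; i < n_blocks; i++)
        if (P[i]) printf("block %d is in the persistent set\n", i);
    /* prints blocks 0 and 1; the independent block 2 can be deferred */
    return 0;
}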
We apply POR to the ESST algorithm by modifying the ARF node expansion rule E2,
described in Section 4, in two steps. First, we compute a persistent set from a set of scheduler
states output by the function Sched. Second, we ensure that the cycle condition is satisfied
by the concretization of the constructed ARF.
We introduce the function Persistent that computes a persistent set of a set of scheduler states. Persistent takes as inputs an ARF node and a set S of scheduler states, and
outputs a subset S ′ of S. The input ARF node keeps track of the thread locations, which
are used to identify atomic blocks, while the input scheduler states keep track of the status
of the threads. From the ARF node and the set S, the function Persistent extracts the
set Ben of enabled atomic blocks. Persistent then computes a persistent set P from Ben
using Algorithm 1. Finally, Persistent constructs back a subset S ′ of the input set S of
scheduler states from the persistent set P .
Algorithm 2 ARF expansion algorithm for non-running node.
Input: a non-running ARF node η that contains no error locations.
(1) Let NonRunning(ARFPath(η, F)) be η0 , . . . , ηm such that η = ηm .
(2) If there exists i < m such that ηi covers η:
    (a) Let ηm−1 = (⟨l′1 , ϕ′1 ⟩, . . . , ⟨l′N , ϕ′N ⟩, ϕ′ , S′ ).
    (b) If Persistent(ηm−1 , Sched(S′ )) ⊂ Sched(S′ ):
        • For all S′′ ∈ Sched(S′ ) \ Persistent(ηm−1 , Sched(S′ )):
          − Create a new ART with root node (⟨l′1 , ϕ′1 ⟩, . . . , ⟨l′N , ϕ′N ⟩, ϕ′ , S′′ ).
(3) If η is covered: Mark η as covered.
(4) If η is not covered: Expand η by rule E2′.
Let η = (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S) be an ARF node that is going to be expanded. We
replace the rule E2 in the following way: instead of creating a new ART for each state
S′ ∈ Sched(S), we create a new ART whose root is the node (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S′ )
for each state S′ ∈ Persistent(η, Sched(S)) (rule E2′).
To guarantee the preservation of safety properties, we have to check that the cycle
condition is satisfied. Following [CGP99], we check a stronger condition: at least one state
along the cycle is fully expanded. In the ESST algorithm a potential cycle occurs if an ARF
node is covered by one of its predecessors in the ARF. Let η = (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S)
be an ARF node. We say that the scheduler state S is running if there is a running thread
in S. We also say that the node η is running if its scheduler state S is. Note that during
ARF expansion the input of Sched is always a non-running scheduler state. A path in an
ARF can be represented as a sequence η0 , . . . , ηm of ARF nodes such that, for all i, either
ηi+1 is a successor of ηi in the same ART or there is an ARF connector from ηi to ηi+1 .
Given an ARF node η of an ARF F, we denote by ARFPath(η, F) the ARF path η0 , . . . , ηm
such that η0 has neither a predecessor ARF node nor an incoming ARF connector, and
ηm = η. Let ρ̂ be an ARF path; we denote by NonRunning(ρ̂) the maximal subsequence of
non-running nodes in ρ̂.
Algorithm 2 shows how a non-running ARF node η is expanded in the presence of
POR. We assume that η is not an error node. The algorithm fully expands the immediate
non-running predecessor node of η when a potential cycle is detected. Otherwise the node
is expanded as usual.
Our POR technique slightly differs from that of [CGP99]. On computing the successor
states of a state s, the technique in [CGP99] tries to compute a persistent set P in s that
does not create a cycle. That is, particularly for the depth-first search (DFS) exploration,
for every α in P , the successor state α(s) is not in the DFS stack. If it does not succeed, then
it fully expands the state. Because the technique in [CGP99] is applied to the explicit-state
model checking, computing the successor state α(s) is cheap.
In our context, to detect a cycle, one has to expand an ARF node by a transition (or
an atomic block) that can span over multiple operations in the CFG, and thus may require
multiple applications of the rule E1. As the rule involves expensive computations of abstract
strongest post-conditions, detecting a cycle using the technique in [CGP99] is bound to be
expensive.
In addition to the coverage check, in the above algorithm one can also check whether the
detected cycle is spurious. We then fully expand a node only if the detected cycle is not
spurious. When cycles are rare, the benefit of POR can be defeated by the price of generating
and solving the constraints representing the cycle.

Algorithm 3 Sleep sets.
Input:
• a set Ben of enabled atomic blocks.
• a sleep set Z.
Output:
• a reduced set Bred ⊆ Ben of enabled atomic blocks.
• a mapping MZ : Bred → P(ABlock )
(1) Bred := Ben \ Z.
(2) For all α ∈ Bred :
    (a) For all β ∈ Z:
        • If (α, β) ∉ D (α and β are independent): MZ [α] := MZ [α] ∪ {β}.
    (b) Z := Z ∪ {α}.
POR based on sleep sets can also be applied to ESST. First, we extend the nodes of the
ARF to include a sleep set. That is, an ARF node is a tuple (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S, Z),
where the sleep set Z is a set of atomic blocks. The sleep set is ignored during coverage
check. Second, from the set of enabled atomic blocks and the sleep set of the current node,
we compute a subset of enabled atomic blocks and a mapping from every atomic block in
the former subset to a successor sleep set.
Let D be a valid dependence relation. Algorithm 3 shows how to compute a reduced set
of enabled transitions Bred and a mapping MZ to successor sleep sets using D. The input
of the algorithm is a set Ben of enabled atomic blocks and the sleep set Z of the current
node. Note that the set Ben can be a persistent set obtained by Algorithm 1.
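The following C sketch mirrors Algorithm 3 under the same illustrative conventions as the persistent-set sketch above (atomic blocks as indices, D as a symmetric Boolean matrix); the mapping MZ is represented as one Boolean row per selected block. It is only a sketch of the algorithm, not the implementation used in Kratos.

#include <stdbool.h>
#include <stdio.h>

#define MAX_BLOCK 8

static bool D[MAX_BLOCK][MAX_BLOCK];   /* valid (symmetric) dependence relation */
static int  n_blocks;

/* Algorithm 3: from the enabled blocks Ben and the current sleep set Z,
   compute the reduced set Bred and, for every block a in Bred, the
   successor sleep set MZ[a] (MZ must be zero-initialized by the caller).
   Z is updated in place, as in step (2b). */
static void sleep_sets(const bool *Ben, bool *Z,
                       bool *Bred, bool MZ[][MAX_BLOCK])
{
    for (int a = 0; a < n_blocks; a++)            /* step (1)            */
        Bred[a] = Ben[a] && !Z[a];

    for (int a = 0; a < n_blocks; a++) {
        if (!Bred[a]) continue;
        for (int b = 0; b < n_blocks; b++)        /* step (2a)           */
            if (Z[b] && !D[a][b])                 /* a and b independent */
                MZ[a][b] = true;
        Z[a] = true;                              /* step (2b)           */
    }
}

int main(void)
{
    bool Ben[MAX_BLOCK] = {false}, Z[MAX_BLOCK] = {false};
    bool Bred[MAX_BLOCK] = {false}, MZ[MAX_BLOCK][MAX_BLOCK] = {{false}};

    n_blocks = 2;
    Ben[0] = Ben[1] = true;            /* both blocks enabled, D empty   */
    sleep_sets(Ben, Z, Bred, MZ);

    /* Block 1 gets block 0 in its successor sleep set: after exploring
       block 0 first, re-exploring it right after block 1 is redundant. */
    printf("Bred = {%d, %d},  MZ[1] contains 0: %d\n",
           Bred[0], Bred[1], MZ[1][0]);
    return 0;
}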
Similar to the persistent set technique, we introduce the function Sleep that takes as
inputs an ARF node η and a set of scheduler states S, and outputs a subset S ′ of S along
with the above mapping MZ . From the ARF node and the scheduler states, Sleep extracts
the set Ben of enabled atomic blocks and the current sleep set. Sleep then computes
a subset Bred of Ben of enabled atomic blocks and the mapping MZ using Algorithm 3.
Finally, Sleep constructs back a subset S ′ of the input set S of scheduler states from the
set Bred of enabled atomic blocks.
Let η = (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S, Z) be an ARF node that is going to be expanded. We
replace the rule E2 in the following way: let (S ′ , MZ ) = Sleep(η, Sched(S)), and create a
new ART whose root is the node (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S′ , MZ [l′ ]) for each S′ ∈ S ′ such
that l′ is the atomic block of the running thread in S′ (rule E2′′).
One can easily combine persistent and sleep sets by replacing the above computation
(S ′ , MZ ) = Sleep(η, Sched(S)) by (S ′ , MZ ) = Sleep(η, Persistent(η, Sched(S))).
5.3. Correctness of ESST + POR. The correctness of POR with respect to verifying
program assertions in transition systems has been shown in Theorem 5.4. The correctness
proof relies on the enabledness and commutativity of independent transitions. However,
the proof is applied in the concrete state space of the transition system, while the ESST
algorithm works in the abstract state space represented by the ARF. The following observation shows that two transitions that are independent in the concrete state space may not
commute in the abstract state space.
[Figure 8 shows two abstract executions from η1 (p): applying α and then β leads through η2 (p) to η3 (p ∨ q), while applying β and then α leads through η4 (p ∨ q) to η5 (p).]
Figure 8: Independent transitions do not commute in the abstract state space.
For simplicity of presentation, we represent an abstract state by a formula representing
a region. Let g1 , g2 be global variables, and p, q be predicates such that p ⇔ (g1 < g2 ) and
q ⇔ (g1 = g2). Let α be the transition g1 := g1 - 1 and β be the transition g2 := g2 - 1.
It is obvious that α and β are independent of each other. However, Figure 8 shows that the
two transitions do not commute when we start from an abstract state η1 such that η1 ⇔ p.
The edges in the figure represent the computation of abstract strongest post-condition of
the corresponding abstract states and transitions.
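For concreteness, the abstract posts in Figure 8 can be computed as follows. This derivation is only an expansion of the example above, assuming integer-valued g1 and g2, writing g_i^0 for the value of g_i before the assignment, and taking the abstract post to be the strongest Boolean combination of p and q implied by the concrete strongest post-condition:

\[
\begin{aligned}
\mathrm{SP}_{\alpha}(p) &\equiv \exists g_1^{0}.\ \bigl(g_1^{0} < g_2 \wedge g_1 = g_1^{0} - 1\bigr) \equiv g_1 + 1 < g_2, & \text{so } \eta_2 &\equiv p,\\
\mathrm{SP}_{\beta}(p) &\equiv \exists g_2^{0}.\ \bigl(g_1 < g_2^{0} \wedge g_2 = g_2^{0} - 1\bigr) \equiv g_1 \le g_2, & \text{so } \eta_3 \equiv \eta_4 &\equiv p \vee q,\\
\mathrm{SP}_{\alpha}(p \vee q) &\equiv \exists g_1^{0}.\ \bigl(g_1^{0} \le g_2 \wedge g_1 = g_1^{0} - 1\bigr) \equiv g_1 + 1 \le g_2, & \text{so } \eta_5 &\equiv p.
\end{aligned}
\]

The two orders therefore end in different abstract states, p ∨ q versus p, even though the underlying concrete transitions commute.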
Even though two independent transitions do not commute in the abstract state space,
they still commute in the concrete state space overapproximated by the abstract state space,
as shown by the lemma below.
Lemma 5.7. Let α and β be transitions that are independent of each other such that
for concrete states s1 , s2 , s3 and an abstract state η we have s1 |= η, and both α(s1 , s2 ) and
β(s2 , s3 ) hold. Let η ′ be the abstract successor state of η obtained by applying the abstract
strongest post-operator to η and β, and η ′′ be the abstract successor state of η ′ obtained by
applying the abstract strongest post-operator to η ′ and α. Then, there are concrete states
s4 and s5 such that: β(s1 , s4 ) holds, s4 |= η ′ , α(s4 , s5 ) holds, s5 |= η ′′ , and s3 = s5 .
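Schematically, and writing SP# for the abstract strongest post-operator, the lemma asserts the following commutation (this diagram is only a restatement of the statement above):

\[
\begin{array}{ccccc}
s_1 & \xrightarrow{\ \alpha\ } & s_2 & \xrightarrow{\ \beta\ } & s_3\\[2pt]
s_1 & \xrightarrow{\ \beta\ } & s_4 & \xrightarrow{\ \alpha\ } & s_5 = s_3\\[2pt]
\eta & \xrightarrow{\ \mathrm{SP}^{\#}_{\beta}\ } & \eta' & \xrightarrow{\ \mathrm{SP}^{\#}_{\alpha}\ } & \eta''
\end{array}
\qquad
s_1 \models \eta,\quad s_4 \models \eta',\quad s_5 \models \eta''.
\]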
The above lemma shows that POR can be applied in the abstract state space. Let
ESSTPOR be the ESST algorithm with POR. The correctness of POR in ESST is stated
by the following theorem:
Theorem 5.8. Let P be a threaded sequential program. For all terminating executions of
ESST(P) and ESSTPOR(P), we have that ESST(P) reports safe iff ESSTPOR(P) does.
6. Experimental Evaluation
In this section we present an experimental evaluation of the ESST algorithm in the verification
of multi-threaded programs in the FairThreads [Bou06] programming framework. The aim
of this evaluation is to show the effectiveness of ESST and of the partial-order reduction
applied to ESST. By following the same methodology, the ESST algorithm can be adapted
to other programming frameworks, like SpecC [GDPG01] and OSEK/VDX [OSE05], with
moderate effort.
6.1. Verifying FairThreads. FairThreads is a framework for programming multi-threaded
software that allows for mixing both cooperative and preemptive threads. As we want to
apply ESST, we only deal with the cooperative threads. FairThreads includes a scheduler
that executes threads according to a simple round-robin policy. FairThreads also provides
a programming interface that allows threads to synchronize and communicate with each
other. Examples of synchronization primitives of FairThreads are as follows: await(e) for
waiting for the notification of event e if such a notification does not exist, generate(e) for
generating a notification of event e, cooperate for yielding the control back to the scheduler,
and join(t) for waiting for the termination of thread t.

[Figure 9 shows the FairThreads scheduler as a state machine: within an instant it selects and runs runnable threads, which return control through await, join, or cooperate; when no runnable thread is left it performs the end-of-instant phase and then either starts a new instant or ends.]
Figure 9: The scheduler of FairThreads.
The scheduler of FairThreads is shown in Figure 9. At the beginning all threads are
set to be runnable. The executions of threads consist of a series of instants in which the
scheduler runs all runnable threads, in a deterministic round-robin fashion, until there are
no more runnable threads.
A running thread can yield the control back to the scheduler either by waiting for an
event notification (await), by cooperating (cooperate), or by waiting for another thread to
terminate (join). A thread that executes the primitive await(e) can observe the notification
of e even though the notification occurs long before the execution of the primitive, so long
as the execution of await(e) is still in the same instant as the notification of e. Thus, the
execution of await does not necessarily yield the control back to the scheduler.
When there are no more runnable threads, the scheduler enters the end-of-instant phase.
In this phase the scheduler wakes up all threads that had cooperated during the last instant,
and also clears all event notifications. The scheduler then starts a new instant if there are
runnable threads; otherwise the execution ends.
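As a tiny example of this programming style, consider the following C sketch of two cooperative threads communicating through an event. Here await, generate, and cooperate stand for the FairThreads primitives discussed above; they are given no-op stub definitions only so that the fragment is self-contained, whereas in a threaded program they are primitive functions interpreted by the scheduler.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the FairThreads primitives named in the text.  The stubs
   below only make the fragment compile and run; the real primitives block,
   notify, and yield under the control of the (explicit) scheduler. */
static bool notified[4];

static void generate(int e)  { notified[e] = true; }      /* immediate notification */
static void await(int e)     { assert(notified[e]); }     /* would block until e    */
static void cooperate(void)  { /* would yield to the scheduler */ }

enum { EV_READY = 0 };

static int shared = 0;

/* Producer thread: updates the shared variable and notifies EV_READY. */
static void producer(void)
{
    shared = 1;
    generate(EV_READY);
    cooperate();
}

/* Consumer thread: waits for EV_READY, then checks the safety property. */
static void consumer(void)
{
    await(EV_READY);
    assert(shared == 1);     /* the assertion to be verified */
}

int main(void)
{
    /* One round-robin instant in which the producer runs first. */
    producer();
    consumer();
    printf("assertion holds in this interleaving\n");
    return 0;
}

The verification problem is to establish the assertion (or find a violation) over all admissible schedules and inputs, not just over the single interleaving executed here.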
The operational semantics of cooperative FairThreads has been described in [Bou02].
However, it is not clear from the semantics whether the round-robin order of the thread
executions remains the same from one instant to the other. Here, we assume that the order
is the same from one instant to the other. The operational semantics also does not specify
the initial round-robin order of the thread executions. Thus, for the verification, one
needs to explore all possible round-robin orders. This situation could easily degrade the
performance of ESST and possibly lead to state explosion. The POR techniques described
in Section 5 could in principle address this problem.
In this section we evaluate two software model checking approaches for the verification
of FairThreads programs. In the first approach we rely on a translation from FairThreads
into sequential programs (or sequentialization), such that the resulting sequential programs
contain both the mapping of the cooperative threads in the form of functions and the
encoding of the FairThreads scheduler. The thread activations are encoded as function
calls from the scheduler function to the functions that correspond to the threads. The
program can be thought of as jumping back and forth between the “control level” imposed
by the scheduler, and the “logical level” implemented by the threads. Having the sequential
program, we then use off-the-shelf software model checkers to verify the programs.
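The following C sketch conveys only the general shape of such a sequentialization for the two-thread example above: each thread becomes a function that returns to the scheduler at every yield point through an explicit program counter, and an encoded round-robin scheduler drives the instants. The actual translation used in the experiments encodes the full FairThreads scheduler and is considerably more detailed.

#include <assert.h>
#include <stdbool.h>

/* Thread status as tracked by the encoded scheduler. */
enum status { RUNNABLE, WAITING, TERMINATED };

static enum status st[2] = { RUNNABLE, RUNNABLE };
static int pc[2];            /* saved location of each thread          */
static bool ev_notified;     /* one event, cleared at end of instant   */
static int shared;

/* Thread 0: the producer, returning to the scheduler at each yield point. */
static void thread0(void)
{
    switch (pc[0]) {
    case 0:
        shared = 1;
        ev_notified = true;          /* generate(EV_READY) */
        pc[0] = 1; return;           /* cooperate()        */
    case 1:
        st[0] = TERMINATED; return;
    }
}

/* Thread 1: the consumer. */
static void thread1(void)
{
    switch (pc[1]) {
    case 0:
        if (!ev_notified) { st[1] = WAITING; return; }  /* await(EV_READY) */
        assert(shared == 1);
        st[1] = TERMINATED; return;
    }
}

/* The encoded round-robin scheduler: one loop iteration per instant. */
int main(void)
{
    bool progress = true;
    while (progress) {
        progress = false;
        for (int t = 0; t < 2; t++) {
            if (st[t] == WAITING && ev_notified) st[t] = RUNNABLE;
            if (st[t] == RUNNABLE) {
                if (t == 0) thread0(); else thread1();
                progress = true;
            }
        }
        ev_notified = false;         /* end-of-instant: clear notifications */
    }
    return 0;
}

Even in this toy version, the scheduler bookkeeping (statuses, saved locations, event flags) becomes part of the state of the sequential program, which illustrates why the encoded scheduler inflates the number of predicates needed by the sequential model checkers.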
In the second approach we apply the ESST algorithm to verify FairThreads programs.
In this approach we define a set of primitive functions that implement FairThreads synchronization functions, and instantiate the scheduler of ESST with the FairThreads scheduler.
We then translate the FairThreads program into a threaded program such that there is
a one-to-one correspondence between the threads in the FairThreads program and in the
resulting threaded program. Furthermore, each call to a FairThreads synchronization function is translated into a call to the corresponding primitive function. The ESST algorithm
is then applied to the resulting threaded program.
6.2. Experimental evaluation setup. The ESST algorithm has been implemented in the
Kratos software model checker [CGM+ 11]. In this work we have extended Kratos with
the FairThreads scheduler and the primitive functions that correspond to the FairThreads
synchronization functions.
We have carried out a significant experimental evaluation on a set of benchmarks taken
and adapted from the literature on verification of cooperative threads. For example, the
fact* benchmarks are extracted from [JBGT10], which describes a synchronous approach to
verifying the absence of deadlocks in FairThreads programs. We adapted the benchmarks
by recoding the bad synchronization, which can cause deadlocks, as an unreachable false
assertion. The gear-box benchmark is taken from the case study in [WH08]. This case
study is about an automated gearbox control system that consists of a five-speed gearbox
and a dry clutch. Our adaptation of this benchmark does not model the timing behavior of
the components and gives the same priority to all tasks (or threads) of the control system.
In our case we considered the verification of safety properties that do not depend on the
timing behavior. Ignoring the timing behavior in this case results in more non-determinism
than that of the original case study. The ft-pc-sfifo* and ft-token-ring* benchmarks
are taken and adapted from, respectively, the pc-sfifo* and token-ring* benchmarks used
in [CMNR10, CNR11]. All considered benchmarks satisfy the restriction of ESST: the
arguments passed to every call to a primitive function are constants.
For the sequentialized version of FairThreads programs, we experimented with several
state-of-the-art predicate-abstraction based software model checkers, including SatAbs3.0 [CKSY05], CpaChecker [BK11], and the sequential analysis of Kratos [CGM+ 11].
We also experimented with CBMC-4.0 [CKL04] for bug hunting with bounded model checking (BMC) [BCCZ99]. For the BMC experiment, we set the size of loop unwindings to 5
and consider only the unsafe benchmarks. All benchmarks and tools’ setup are available at
http://es.fbk.eu/people/roveri/tests/jlmcs-esst.
We ran the experiments on a Linux machine with Intel-Xeon DC 3GHz processor and
4GB of RAM. We fixed the time limit to 1000 seconds, and the memory limit to 4GB.
6.3. Results of Experiments. The results of experiments are shown in Table 1, for the
run times, and in Table 2, for the numbers of explored abstract states by ESST. The
column V indicates the status of the benchmarks: S for safe and U for unsafe. In the
experiments we also enable the POR techniques in ESST. The column No-POR indicates
that during the experiments POR is not enabled. The column P-POR indicates that only
the persistent set technique is enabled, while the column S-POR indicates that only the
sleep set technique is enabled. The column PS-POR indicates that both the persistent set
and the sleep set techniques are enabled. We mark the best results with bold letters, and
denote the out-of-time results by T.O.
Name                  V   SatAbs  CpaChecker  Kratos     CBMC   No-POR    P-POR   S-POR  PS-POR
fact1                 S     9.07       14.26    2.90        -     0.01     0.01    0.01    0.01
fact1-bug             U    22.18        8.06    0.39    15.09     0.01     0.01    0.01    0.03
fact1-mod             S     4.41        8.18    0.50        -     0.40     0.40    0.39    0.39
fact2                 S    69.05       17.25   15.40        -     0.01     0.01    0.01    0.01
gear-box              S      T.O         T.O     T.O        -      T.O   473.55   44.89   44.19
ft-pc-sfifo1          S    57.08       56.56   44.49        -     0.30     0.30    0.29    0.29
ft-pc-sfifo2          S   715.31         T.O     T.O        -     0.39     0.39    0.30    0.39
ft-token-ring.3       S   115.66         T.O     T.O        -     0.48     0.29    0.20    0.20
ft-token-ring.4       S   448.86         T.O     T.O        -     5.20     1.10    0.29    0.29
ft-token-ring.5       S      T.O         T.O     T.O        -   213.37     6.20    0.50    0.40
ft-token-ring.6       S      T.O         T.O     T.O        -      T.O    92.39    0.69    0.49
ft-token-ring.7       S      T.O         T.O     T.O        -      T.O      T.O    0.99    0.80
ft-token-ring.8       S      T.O         T.O     T.O        -      T.O      T.O    1.80    0.89
ft-token-ring.9       S      T.O         T.O     T.O        -      T.O      T.O    3.89    1.70
ft-token-ring.10      S      T.O         T.O     T.O        -      T.O      T.O    9.60    2.10
ft-token-ring-bug.3   U   111.10         T.O     T.O   158.76     0.10     0.10    0.10    0.10
ft-token-ring-bug.4   U   306.41         T.O     T.O  *407.36     1.70     0.30    0.10    0.10
ft-token-ring-bug.5   U   860.29         T.O     T.O  *751.44    66.09     1.80    0.10    0.10
ft-token-ring-bug.6   U      T.O         T.O     T.O      T.O      T.O    26.29    0.20    0.10
ft-token-ring-bug.7   U      T.O         T.O     T.O      T.O      T.O      T.O    0.30    0.20
ft-token-ring-bug.8   U      T.O         T.O     T.O      T.O      T.O      T.O    0.60    0.29
ft-token-ring-bug.9   U      T.O         T.O     T.O      T.O      T.O      T.O    1.40    0.60
ft-token-ring-bug.10  U      T.O         T.O     T.O      T.O      T.O      T.O    3.60    0.79
Table 1: Run time results of the experimental evaluation (in seconds). The SatAbs, CpaChecker, and Kratos columns refer to the sequentialization approach, the CBMC column to bounded model checking of the unsafe benchmarks, and the remaining columns to ESST.

The results clearly show that ESST outperforms the predicate abstraction based sequentialization approach. The main bottleneck in the latter approach is the number of
predicates that the model checkers need to keep track of to model details of the scheduler.
For example, on the ft-pc-sfifo1.c benchmark SatAbs, CpaChecker, and the sequential
analysis of Kratos need to keep track of, respectively, 71, 37, and 45 predicates. On the
other hand, ESST only needs to keep track of 8 predicates on the same benchmark.

Regarding the refinement steps, ESST needs fewer abstraction-refinement iterations than
the other techniques. For example, starting with the empty precision, the sequential analysis
of Kratos needs 8 abstraction-refinement iterations to verify fact2, and 35 abstraction-refinement
iterations to verify ft-pc-sfifo1. ESST, on the other hand, verifies fact2 without performing
any refinements at all, and verifies ft-pc-sfifo1 with only 3 abstraction-refinement iterations.
The BMC approach, represented by CBMC, is ineffective on our benchmarks. First,
the breadth-first nature of the BMC approach creates big formulas on which the satisfiability
problems are hard. In particular, CBMC employs bit-precise semantics, which contributes
to the hardness of the problems. Second, for our benchmarks, it is not feasible to identify the
size of loop unwindings that is sufficient for finding the bug. For example, due to insufficient
loop unwindings, CBMC reports safe for the unsafe benchmarks ft-token-ring-bug.4 and
ft-token-ring-bug.5 (marked with “*”). Increasing the size of loop unwindings only results
in time out.
Table 1 also shows that the POR techniques boost the performance of ESST and allow
us to verify benchmarks that could not be verified given the resource limits. In particular
we get the best results when the persistent set and sleep set techniques are applied together.
Additionally, Table 2 shows that the POR techniques reduce the number of abstract states
explored by ESST. This reduction also implies reductions in the number of abstract
post computations and in the number of coverage checks.
Name                  No-POR   P-POR  S-POR  PS-POR
fact1                     66      66     66      66
fact1-bug                 49      49     49      49
fact1-mod                269     269    269     269
fact2                     49      49     29      29
gear-box                   -  204178  60823   58846
ft-pc-sfifo1             180     180    180     180
ft-pc-sfifo2             540     287    310     287
ft-token-ring.3         1304     575    228     180
ft-token-ring.4         7464    2483    375     266
ft-token-ring.5        50364    7880    699     395
ft-token-ring.6            -   32578   1239     518
ft-token-ring.7            -       -   2195     963
ft-token-ring.8            -       -   4290    1088
ft-token-ring.9            -       -   8863    2628
ft-token-ring.10           -       -  16109    3292
ft-token-ring-bug.3      496     223    113      89
ft-token-ring-bug.4     2698     914    179     125
ft-token-ring-bug.5    17428    2801    328     181
ft-token-ring-bug.6        -   11302    611     251
ft-token-ring-bug.7        -       -   1064     457
ft-token-ring-bug.8        -       -   2133     533
ft-token-ring-bug.9        -       -   4310    1281
ft-token-ring-bug.10       -       -   8039    1632
Table 2: Numbers of explored abstract states (a dash marks runs that did not terminate within the time limit; cf. Table 1).
Despite the effectiveness shown by the obtained results, the following remarks are
in order. POR, in principle, could interact negatively with the ESST algorithm. The
construction of ARF in ESST is sensitive to the explored scheduler states and to the
tracked predicates. POR prunes some scheduler states that ESST has to explore. However,
exploring such scheduler states can yield a smaller ARF than if they are omitted. In
particular, for an unsafe benchmark, exploring omitted scheduler states can lead to the
shortest counter-example path. Furthermore, exploring the omitted scheduler states could
lead to spurious counter-example ARF paths that yield predicates that allow ESST to
perform fewer refinements and construct a smaller ARF.
6.4. Verifying SystemC. SystemC is a C++ library that has widely been used to write
executable models of systems-on-chips. The library consists of a language to model the
component architecture of the system and also to model the parallel behavior of the system
by means of sequential threads. Similar to FairThreads, the SystemC scheduler employs a
cooperative scheduling, and the execution of the scheduler is divided into a series of so-called
delta cycles, which correspond to the notion of instant.
Despite their similarities, the scheduling policy and the behavior of synchronization
primitives of SystemC and FairThreads have significant differences. For example, the
FairThreads scheduler employs a round-robin scheduling, while the SystemC scheduler can
execute any runnable thread. Also, in FairThreads a notification of an event performed by
some thread can later still be observed by another thread, as long as the execution of the
other thread is still in the same instant as the notification. In SystemC the latter thread
will simply miss the notification.
In [CMNR10, CNR11], we report on the application of ESST to the verification of SystemC models. We follow a similar approach, comparing ESST and the sequentialization
approach, and also experimenting with POR in ESST. The results of those experiments
show the same patterns as the results reported here for FairThreads: the ESST approach
outperforms the sequentialization approach, and the POR techniques further improve the
performance of ESST in terms of run time and the number of visited abstract states.
These results allow us to conclude that the ESST algorithm, along with the POR techniques, is a very effective and general technique for the verification of cooperative threads.
7. Related Work
There has been a plethora of work on developing techniques for the verification of multi-threaded programs, both for general ones and for those with specific scheduling policies.
Similar to the work in this paper, many of these existing techniques are concerned with
verifying safety properties. In this section we review some of these techniques and describe
how they are related to our work.
7.1. Verification of Cooperative Threads. Techniques for verifying multi-threaded programs with cooperative scheduling policy have been considered in different application
domains: [MMMC05, GD05, KS05, TCMM07, HFG08, BK08, CMNR10] for SystemC,
[JBGT10] for FairThreads, [WH08] for OSEK/VDX, and [CJK07] for SpecC. Most of
these techniques either embed details of the scheduler in the programs under verification
or simply abstract away those details. As shown in [CMNR10], verification techniques that
embed details of the scheduler show poor scalability. On the other hand, abstracting away
the scheduler not only makes the techniques report too many false positives, but also limits their applicability. The techniques described in [MMMC05, TCMM07, HFG08] only
employ explicit-state model checking techniques, and thus they cannot effectively handle
infinite-domain inputs for threads. ESST addresses these issues by analyzing the threads
symbolically and by orchestrating the overall verification by direct execution of the scheduler
that can be modeled faithfully.
7.2. Thread-modular Model Checking. In the traditional verification methods, such
as the one described in [OG76], safety properties are proved with the help of assertions
that annotate program statements. These annotations form the pre- and post-conditions
for the statements. The correctness of the assertions is then proved by proof rules that are
similar to the Floyd-Hoare proof rules [Flo67, Hoa83] for sequential programs. The method
in [OG76] requires a so-called interference freedom test to ensure that no assertions used in
the proof of one thread are invalidated by the execution of another thread. Such a freedom
test makes this method non-modular (each thread cannot be verified in isolation from other
threads).
Jones [Jon83] introduces thread-modular reasoning that verifies each thread separately
using assumptions about the other threads. In this work the interference information is incorporated into the specifications as environment assumptions and guarantee relations. The
environment assumptions model the interleaved transitions of other threads by describing
their possible updates of shared variables. The guarantee relations describe the global state
updates of the whole program. However, the formulation of the environment assumptions
in [Jon83] and [OG76] incurs a significant verification cost.
Flanagan and Qadeer [FQ03] describe a thread-modular model checking technique that
automatically infers environment assumptions. First, a guarantee relation for each of the
threads is inferred. The assumption relation for a thread is then the disjunction of all the
guarantee relations of the other threads. Similar to ESST, this technique computes an overapproximation of the reachable concrete states of the multi-threaded program by abstraction
using the environment assumptions. However, unlike ESST, the thread-modular model
checking technique is incomplete since it can report false positives.
Similar to ESST, the work in [HJMQ03] describes a CEGAR-based thread-modular
model checking technique, that analyzes the data-flow of each thread symbolically using
predicate abstraction, starting from a very coarse over-approximation of the thread’s data
states and successively refining the approximation using predicates discovered during the
CEGAR loop. Unlike ESST, the thread-modular algorithm also analyzes the environment
assumption symbolically starting with an empty environment assumption and subsequently
weakening it using the refined abstractions of threads’ data states.
Chaki et al. [COYC03] describe another CEGAR-based model checking technique.
Like ESST, the programs considered by this technique have a fixed number of threads.
But, unlike other previous techniques that deal with shared-variable multi-threaded programs, the threads considered by this technique use message passing as the synchronization
mechanism. This technique uses two levels of abstractions over each individual thread.
The first abstraction level is predicate abstraction. The second one, which is applied to
the result of the first abstraction, is action-guided abstraction. The parallel composition
of the threads is performed after the second abstraction has been applied. Compositional
reasoning is used during the check for spuriousness of a counter-example by projecting and
examining the counter-example on each individual thread separately.
Recently, Gupta et al. [GPR11] have proposed a new predicate abstraction and refinement technique for verifying multi-threaded programs. Similar to ESST, the technique
constructs an ART for each thread. But unlike ESST, branches in the constructed ART
might not correspond to a CFG unwinding but correspond to transitions of the environment.
The technique uses a declarative formulation of the refinement to describe constraints on
the desired predicates for thread reachability and environment transition. Depending on
the declarative formulation, the technique can generate a non-modular proof as in [OG76]
or a modular proof as in [FQ03].
7.3. Bounded Model Checking. Another approach to verifying multi-threaded programs
is by bounded model checking (BMC) [BCCZ99]. For multi-threaded programs, the bound
is concerned, not only with the length (or depth) of CFG unwinding, as in the case of
sequential programs, but also with the number of scheduler invocations or the number of
context switches. This approach is sound and complete, but only up to the given bound.
Prominent techniques that exploit the BMC approach include [God05] and [QR05].
The work in [God05] limits the number of scheduler invocations, while the work in [QR05]
bounds the number of context switches. That is, given a bound k, the latter technique verifies
if a multi-threaded program can fail an assertion through an execution with at most k
context switches. This technique relies on regular push-down systems [Sch00] for a finite
representation of the unbounded number of stack configurations. The ESST algorithm can
easily be made depth bounded or context-switch bounded by not expanding the constructed
ARF node when the number of ARF connectors leading to the node has reached the bound.
The above depth bounded and context-switch bounded model checking techniques are
ineffective in finding errors that appear only after each thread has a chance to complete
its execution. To overcome this limitation, Musuvathi and Qadeer [MQ07] have proposed
a BMC technique that bounds the number of context switches caused only by scheduler
preemptions. Such a bound gives the opportunity for each thread to complete its execution.
The state-space complexity imposed by the previously described BMC techniques grows
with the number of threads. Thus, those techniques are ineffective for verifying multi-threaded programs that allow for dynamic creation of threads. Recently a technique called
delay bounded scheduling has been proposed in [EQR11]. Given a bound k, a deterministic
scheduler is made non-deterministic by allowing the scheduler to delay its next executed
thread at most k times. The bound k is chosen independently of the number of threads.
This technique has been used for the analysis and testing of concurrent programs [MQ06].
SAT/SMT-based BMC has also been applied to the verification of multi-threaded programs. In [RG05] a SAT-based BMC that bounds the number of context switches has been
described. In this work, for each thread, a set of constraints describing the thread is generated using BMC techniques for sequential programs [CKL04]. Constraints for concurrency
describing both the number of context switches and the reading or writing of global variables
are then added to the previous sets of constraints. The work in [GG08] is also concerned
with efficient modeling of multi-threaded programs using SMT-based BMC. Unlike [RG05],
in this work the constraints for concurrency are added lazily during the BMC unrolling.
7.4. Verification via Sequentialization. Yet another approach used for verifying multithreaded programs is by reducing bounded concurrent analysis to sequential analysis. In
this approach the multi-threaded program is translated into a sequential program such
that the latter over-approximates the bounded reachability of the former. The resulting
sequential program can then be analyzed using any existing model checker for sequential
programs.
This approach has been pioneered by the work in [QW04]. In this work a multi-threaded
program is converted to a sequential one that simulates all the interleavings generated by
multiple stacks of the multi-threaded program using its single stack. The simulation itself
is bounded by the size of a multiset that holds existing runnable threads at any time during
the execution of a thread.
Lal and Reps [LR09] propose a translation from multi-threaded programs to sequential
programs that reduces the context-bounded reachability of the former to the reachability of
the latter for any context bound. Given a bound k, the translation constructs a sequential
program that tracks, at any point, only the local state of one thread, the global state,
and k copies of the global state. In the translation each thread is processed separately
from the others, and updates of global states caused by context switches in the processed
thread are modeled by guessing future states using prophecy variables and constraining
these variables at an appropriate control point in the execution. Due to the prophecy
variables, the resulting sequential program explores more reachable states than that of the
original multi-threaded program. A similar translation has been proposed in [TMP09]. But
this translation requires the sequential program to call the individual thread multiple times
from scratch to recompute the local states at context switches.
As shown in Section 6, and also in [CMNR10], the verification of multi-threaded programs via sequentialization and abstraction-based software model checking techniques turns
out to suffer from several inefficiencies. First, the encoding of the scheduler makes the sequential program more complex and harder to verify. Second, details of the scheduler are
often needed to verify the properties, and thus abstraction-based techniques require many
abstraction-refinement iterations to re-introduce the abstracted details.
7.5. Partial-Order Reduction. POR is an effective technique for reducing the search
space by avoiding visiting redundant executions. It has been mostly adopted in explicit-state
model checkers, like SPIN [Hol05, HP95, Pel96], VeriSoft [God05], and Zing [AQR+ 04].
Despite their inability to handle infinite-domain inputs, the maturity of these model checkers, in particular the support for POR, has attracted research on encodings of multithreaded programs into the language that the model checkers accept. In [CCNR11] we
verify SystemC models by encoding them in Promela, the language accepted by the SPIN
model checker. The work shows that the resulting encodings lose the intrinsic structures of
the multi-threaded programs that are important to enable optimizations like POR.
There have been several attempts at applying POR to symbolic model checking techniques [ABH+ 01, KGS06, WYKG08]. In these applications POR is achieved by statically
adding constraints describing the reduction technique into the encoding of the program.
The work in [ABH+ 01] applies POR to symbolic BDD-based invariant checking, while the
work in [WYKG08] describes an approach that can be considered as a symbolic
sleep-set based technique. They introduce the notion of guarded independence relation,
where a pair of transitions are independent of each other if certain conditions specified in
the pair’s guards are satisfied. The POR techniques applied in ESST can be extended
to use the guarded independence relation by exploiting the thread and global regions. Finally,
the work in [KGS06] uses patterns of lock acquisition to refine the notion of independence
transition, which subsequently yields better reductions.
8. Conclusions and Future Work
In this paper we have presented a new technique, called ESST, for the verification of shared-variable multi-threaded programs with cooperative scheduling. The ESST algorithm uses
explicit-state model checking techniques to handle the scheduler, while it analyzes the threads
using symbolic techniques based on lazy predicate abstraction. Such a combination allows
the ESST algorithm to have a precise model of the scheduler, to handle it efficiently, and
also to benefit from the effectiveness of explicit-state techniques in systematic exploration
of thread interleavings. At the same time, the use of symbolic techniques allows the ESST
algorithm to deal with threads that potentially have infinite state space. ESST is further
enhanced with POR techniques that prevent the exploration of redundant thread interleavings. The results of experiments carried out on a general class of benchmarks for
SystemC and FairThreads cooperative threads clearly show that ESST outperforms the
verification approach based on sequentialization, and that POR can effectively improve the
performance.
As future work, we will proceed along different directions. We will experiment with lazy
abstraction with interpolants [McM06], to improve the performance of predicate abstraction
when there are too many predicates to keep track of. We will also investigate the possibility
of applying symmetry reduction [DKKW11] to deal with cases where there are multiple
threads of the same type, and possibly with parametrized configurations.
We will extend the ESST algorithm to deal with primitive function calls whose arguments cannot be inferred statically. This requires a generalization of the scheduler exploration with a hybrid (explicit-symbolic or semi-symbolic) approach, and the use of SMT
techniques to enumerate all possible next states of the scheduler. Finally, we will look into
the possibility of applying the ESST algorithm to the verification of general multi-threaded
programs. This work amounts to identifying important program locations in threads where
the control must be returned to the scheduler.
References
[ABH+ 01]
R. Alur, R. K. Brayton, T. A. Henzinger, S. Qadeer, and S. K. Rajamani. Partial-order reduction
in symbolic state-space exploration. Formal Methods in System Design, 18(2):97–116, 2001.
[AQR+ 04] T. Andrews, S. Qadeer, S. K. Rajamani, J. Rehof, and Y. Xie. Zing: A model checker for
concurrent software. In R. Alur and D. A. Peled, editors, CAV, volume 3114 of LNCS, pages
484–487. Springer, 2004.
[BCCZ99] A. Biere, A. Cimatti, E. M. Clarke, and Y. Zhu. Symbolic Model Checking without BDDs. In
R. Cleaveland, editor, TACAS, volume 1579 of LNCS, pages 193–207. Springer, 1999.
[BCG+ 09] D. Beyer, A. Cimatti, A. Griggio, M. E. Keremoglu, and R. Sebastiani. Software model checking
via large-block encoding. In FMCAD, pages 25–32. IEEE, 2009.
[BHJM07] D. Beyer, T. A. Henzinger, R. Jhala, and R. Majumdar. The software model checker Blast.
STTT, 9(5-6):505–525, 2007.
[BK08]
N. Blanc and D. Kroening. Race analysis for SystemC using model checking. In S. R. Nassif
and J. S. Roychowdhury, editors, ICCAD, pages 356–363. IEEE, 2008.
[BK11]
D. Beyer and M. E. Keremoglu. CPAchecker: A Tool for Configurable Software Verification.
In G. Gopalakrishnan and S. Qadeer, editors, CAV, volume 6806 of LNCS, pages 184–190.
Springer, 2011.
[Bou02]
F. Boussinot. Operational Semantics of Cooperative Fair Threads, 2002. http://wwwsop.inria.fr/meije/rp/FairThreads/FTC/documentation/semantics.pdf.
[Bou06]
F. Boussinot. FairThreads: mixing cooperative and preemptive threads in C. Concurrency and
Computation: Practice and Experience, 18(5):445–469, 2006.
[BSST09] C. W. Barrett, R. Sebastiani, S. A. Seshia, and C. Tinelli. Satisfiability modulo theories. In
A. Biere, M. Heule, H. van Maaren, and T. Walsh, editors, Handbook of Satisfiability, volume
185 of Frontiers in Art. Int. and Applications, pages 825–885. IOS Press, 2009.
[CCF+ 07] R. Cavada, A. Cimatti, A. Franzén, K. Kalyanasundaram, M. Roveri, and R. K. Shyamasundar.
Computing Predicate Abstractions by Integrating BDDs and SMT Solvers. In FMCAD, pages
69–76. IEEE, 2007.
[CCNR11] D. Campana, A. Cimatti, I. Narasamdya, and M. Roveri. An analytic evaluation of SystemC
encodings in promela. In A. Groce and M. Musuvathi, editors, SPIN, volume 6823 of LNCS,
pages 90–107. Springer, 2011.
[CDJR09] A. Cimatti, J. Dubrovin, T. Junttila, and M. Roveri. Structure-aware computation of predicate
abstraction. In FMCAD, pages 9–16. IEEE, 2009.
[CFR+ 91] R. Cytron, J. Ferrante, B. K. Rosen, M. N. Wegman, and F. K Zadeck. Efficiently computing
static single assignment form and the control dependence graph. ACM Trans. Program. Lang.
Syst., 13(4):451–490, 1991.
[CGJ+ 03] E. M. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith. Counterexample-guided abstraction
refinement for symbolic model checking. J. ACM, 50(5):752–794, 2003.
[CGM+ 98] A. Cimatti, F. Giunchiglia, G. Mongardi, D. Romano, F. Torielli, and P. Traverso. Formal verification of a railway interlocking system using model checking. Formal Asp. Comput., 10(4):361–
380, 1998.
[CGM+ 11] A. Cimatti, A. Griggio, A. Micheli, I. Narasamdya, and M. Roveri. Kratos - a software model
checker for SystemC. In G. Gopalakrishnan and S. Qadeer, editors, CAV, volume 6806 of LNCS,
pages 310–316. Springer, 2011.
[CGP99]
E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press, 1999.
[CJK07] E. M. Clarke, H. Jain, and D. Kroening. Verification of SpecC using predicate abstraction. Formal Methods in System Design, 30(1):5–28, 2007.
[CKL04] E. M. Clarke, D. Kroening, and F. Lerda. A Tool for Checking ANSI-C Programs. In K. Jensen and A. Podelski, editors, TACAS, volume 2988 of LNCS, pages 168–176. Springer, 2004.
[CKSY05] E. M. Clarke, D. Kroening, N. Sharygina, and K. Yorav. SATABS: SAT-Based Predicate Abstraction for ANSI-C. In N. Halbwachs and L. D. Zuck, editors, TACAS, volume 3440 of LNCS, pages 570–574. Springer, 2005.
[CMNR10] A. Cimatti, A. Micheli, I. Narasamdya, and M. Roveri. Verifying SystemC: A software model checking approach. In R. Bloem and N. Sharygina, editors, FMCAD, pages 51–59. IEEE, 2010.
[CNR11] A. Cimatti, I. Narasamdya, and M. Roveri. Boosting lazy abstraction for SystemC with partial order reduction. In P. A. Abdulla and K. R. M. Leino, editors, TACAS, volume 6605 of LNCS, pages 341–356. Springer, 2011.
[COYC03] S. Chaki, J. Ouaknine, K. Yorav, and E. M. Clarke. Automated compositional abstraction refinement for concurrent C programs: A two-level approach. ENTCS, 89(3):417–432, 2003.
[Cra57] W. Craig. Linear reasoning. A new form of the Herbrand-Gentzen theorem. Journal of Symbolic Logic, 22:250–268, 1957.
[DKKW11] A. F. Donaldson, A. Kaiser, D. Kroening, and T. Wahl. Symmetry-aware predicate abstraction for shared-variable concurrent programs. In G. Gopalakrishnan and S. Qadeer, editors, CAV, volume 6806 of LNCS, pages 356–371. Springer, 2011.
[EQR11] M. Emmi, S. Qadeer, and Z. Rakamaric. Delay-bounded scheduling. In T. Ball and M. Sagiv, editors, POPL, pages 411–422. ACM, 2011.
[Flo67] R. W. Floyd. Assigning meaning to programs. In J. T. Schwartz, editor, Proceedings of Symposium in Applied Mathematics, pages 19–32, 1967.
[FQ03] C. Flanagan and S. Qadeer. Thread-modular model checking. In T. Ball and S. K. Rajamani, editors, SPIN, volume 2648 of LNCS, pages 213–224. Springer, 2003.
[GD05] D. Große and R. Drechsler. CheckSyC: an efficient property checker for RTL SystemC designs. In ISCAS (4), pages 4167–4170. IEEE, 2005.
[GDPG01] A. Gerstlauer, R. Doemer, J. Peng, and D. D. Gajski. System Design: A Practical Guide with SpecC. Kluwer Academic Publishers, Boston, MA, USA, June 2001.
[GG08] M. K. Ganai and A. Gupta. Efficient modeling of concurrent systems in BMC. In K. Havelund, R. Majumdar, and J. Palsberg, editors, SPIN, volume 5156 of LNCS, pages 114–133. Springer, 2008.
[God96] P. Godefroid. Partial-Order Methods for the Verification of Concurrent Systems - An Approach to the State-Explosion Problem, volume 1032 of LNCS. Springer, 1996.
[God05] P. Godefroid. Software Model Checking: The VeriSoft Approach. Formal Methods in System Design, 26(2):77–101, 2005.
[GPR11] A. Gupta, C. Popeea, and A. Rybalchenko. Predicate abstraction and refinement for verifying multi-threaded programs. In T. Ball and M. Sagiv, editors, POPL, pages 331–344. ACM, 2011.
[GS97] S. Graf and H. Saïdi. Construction of abstract state graphs with PVS. In O. Grumberg, editor, CAV, volume 1254 of LNCS, pages 72–83. Springer, 1997.
[GV04] A. Groce and W. Visser. Heuristics for model checking Java programs. STTT, 6(4):260–276, 2004.
[HFG08] P. Herber, J. Fellmuth, and S. Glesner. Model checking SystemC designs using timed automata. In C. H. Gebotys and G. Martin, editors, CODES+ISSS, pages 131–136. ACM, 2008.
[HJMM04] T. A. Henzinger, R. Jhala, R. Majumdar, and K. L. McMillan. Abstractions from proofs. In N. D. Jones and X. Leroy, editors, POPL, pages 232–244. ACM, 2004.
[HJMQ03] T. A. Henzinger, R. Jhala, R. Majumdar, and S. Qadeer. Thread-modular abstraction refinement. In W. A. Hunt Jr. and F. Somenzi, editors, CAV, volume 2725 of LNCS, pages 262–274. Springer, 2003.
[HJMS02] T. A. Henzinger, R. Jhala, R. Majumdar, and G. Sutre. Lazy abstraction. In POPL, pages 58–70. ACM, 2002.
[Hoa83] C. A. R. Hoare. An axiomatic basis for computer programming (reprint). Commun. ACM, 26(1):53–56, 1983.
[Hol05] G. J. Holzmann. Software model checking with SPIN. Advances in Computers, 65:78–109, 2005.
[HP95]
G. J. Holzmann and D. A. Peled. An improvement in formal verification. In 7th IFIP WG6.1
Int. Conf. on Formal Description Techniques VII, pages 197–211, London, UK, UK, 1995.
[JBGT10] K. Johnson, L. Besnard, T. Gautier, and J. P. Talpin. A synchronous approach to threaded
program verification. In Proc. of the 10th International Workshop on Automated Verification of
Critical Systems, 2010.
[Jon83]
C. B. Jones. Tentative steps toward a development method for interfering programs. ACM
Trans. Program. Lang. Syst., 5(4):596–619, 1983.
[KGS06]
V. Kahlon, A. Gupta, and N. Sinha. Symbolic model checking of concurrent programs using
partial orders and on-the-fly transactions. In T. Ball and R. B. Jones, editors, CAV, volume
4144 of LNCS, pages 286–299. Springer, 2006.
[KS05]
D. Kroening and N. Sharygina. Formal verification of SystemC by automatic hardware/software
partitioning. In MEMOCODE, pages 101–110. IEEE, 2005.
[LNO06]
S. K. Lahiri, R. Nieuwenhuis, and A. Oliveras. SMT techniques for fast predicate abstraction.
In T. Ball and R. B. Jones, editors, CAV, volume 4144 of LNCS, pages 424–437. Springer, 2006.
[LR09]
A. Lal and T. W. Reps. Reducing concurrent analysis under a context bound to sequential
analysis. Formal Methods in System Design, 35(1):73–97, 2009.
[McM06]
K. L. McMillan. Lazy abstraction with interpolants. In T. Ball and R. B. Jones, editors, CAV,
volume 4144 of LNCS, pages 123–136. Springer, 2006.
[MMMC05] M. Moy, F. Maraninchi, and L. Maillet-Contoz. Lussy: A toolbox for the analysis of systemson-a-chip at the transactional level. In ACSD, pages 26–35. IEEE, 2005.
[MQ06]
M. Musuvathi and S. Qadeer. Chess: Systematic stress testing of concurrent software. In
G. Puebla, editor, LOPSTR, volume 4407 of LNCS, pages 15–16. Springer, 2006.
[MQ07]
M. Musuvathi and S. Qadeer. Iterative context bounding for systematic testing of multithreaded
programs. In J. Ferrante and K. S. McKinley, editors, PLDI, pages 446–455. ACM, 2007.
[OG76]
S. S. Owicki and D. Gries. An axiomatic proof technique for parallel programs. Acta Inf., 6:319–
340, 1976.
[Ope05]
IEEE 1666: SystemC language Reference Manual, 2005.
[OSE05]
OSEK. OSEK/VDX Operating System Specification 2.2.3, 2005. http://www.osek-vdx.org.
[Pel93]
D. A. Peled. All from one, one for all: on model checking using representatives. In CAV, volume
697 of LNCS, pages 409–423. Springer, 1993.
[Pel96]
D. A. Peled. Combining partial order reductions with on-the-fly model-checking. Formal Methods in System Design, 8(1):39–64, 1996.
[QR05]
S. Qadeer and J. Rehof. Context-bounded model checking of concurrent software. In N. Halbwachs and L. D. Zuck, editors, TACAS, volume 3440 of LNCS, pages 93–107. Springer, 2005.
[QW04]
S. Qadeer and D. Wu. Kiss: keep it simple and sequential. In W. Pugh and C. Chambers,
editors, PLDI, pages 14–24. ACM, 2004.
[RG05]
I. Rabinovitz and O. Grumberg. Bounded model checking of concurrent programs. In K. Etessami and S. K. Rajamani, editors, CAV, volume 3576 of LNCS, pages 82–97. Springer, 2005.
[Sch00]
S. Schwoon. Model-Checking Pushdown Systems. PhD thesis, Lehrstuhl für informatik VII der
Technischen Universität München, 2000.
[TCMM07] C. Traulsen, J. Cornet, M. Moy, and F. Maraninchi. A SystemC/TLM Semantics in Promela
and Its Possible Applications. In D. Bosnacki and S. Edelkamp, editors, SPIN, volume 4595 of
LNCS, pages 204–222. Springer, 2007.
[TMP09]
S. L. Torre, P. Madhusudan, and G. Parlato. Reducing context-bounded concurrent reachability
to sequential reachability. In A. Bouajjani and O. Maler, editors, CAV, volume 5643 of LNCS,
pages 477–492. Springer, 2009.
[Val91]
A. Valmari. Stubborn sets for reduced state generation. In APN 90: Proceedings on Advances
in Petri nets 1990, pages 491–515, New York, NY, USA, 1991. Springer-Verlag.
[WH08]
L. Waszniowski and Z. Hanzálek. Formal verification of multitasking applications based on
timed automata model. Real-Time Systems, 38(1):39–65, 2008.
[WYKG08] C. Wang, Z. Yang, V. Kahlon, and A. Gupta. Peephole partial order reduction. In C. Ramakrishnan and J. Rehof, editors, TACAS, volume 4963 of LNCS, pages 382–396. Springer,
2008.
Appendix A. Proofs of Lemmas and Theorems.
Lemma (4.6). Let η and η ′ be ARF nodes for a threaded program P such that η ′ is a
successor node of η. Let γ be a configuration of P such that γ |= η. The following properties
hold:
(1) If η ′ is obtained from η by the rule E1 with the performed operation op, then, for any
configuration γ ′ of P such that γ →op γ ′ , we have γ ′ |= η ′ .
(2) If η ′ is obtained from η by the rule E2, then, for any configuration γ ′ of P such that
γ →· γ ′ and the scheduler states of η ′ and γ ′ coincide, we have γ ′ |= η ′ .
Proof. We first prove property (1). Let η and η ′ be as follows:
η = (⟨l1 , ϕ1 ⟩, . . . , ⟨li , ϕi ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S)
η ′ = (⟨l1 , ϕ′1 ⟩, . . . , ⟨l′i , ϕ′i ⟩, . . . , ⟨lN , ϕ′N ⟩, ϕ′ , S′ ),
such that S(sTi ) = Running and for all j ≠ i, we have S(sTj ) ≠ Running. Let GTi =
(L, E, l0 , Lerr ) be the CFG for Ti such that (li , op, l′i ) ∈ E. Let γ and γ ′ be as follows:
γ = ⟨(l1 , s1 ), . . . , (li , si ), . . . , (lN , sN ), gs, S⟩
γ ′ = ⟨(l1 , s1 ), . . . , (l′i , s′i ), . . . , (lN , sN ), gs′ , S′′ ⟩,
such that γ →op γ ′ . We need to prove that γ ′ |= η ′ . Let op̂ be op if op contains no primitive
function call, or be op′ as in the second case of the rule E1. First, from γ |= η, we have
si ∪ gs |= ϕi . By the definition of the operational semantics of op̂ and the definition of
SP_op̂(ϕi ), it follows that s′i ∪ gs′ |= SP_op̂(ϕi ). Since SP_op̂(ϕi ) implies SP^π_op̂(ϕi ) for
any precision π, and ϕ′i is SP^π_{l′i}_op̂(ϕi ) for some precision π_{l′i} associated with l′i , it
follows that s′i ∪ gs′ |= ϕ′i . A similar reasoning can be applied to prove that s′j ∪ gs′ |= ϕ′j
for j ≠ i and ⋃_{i=1,...,N} s′i ∪ gs′ |= ϕ′ .
We remark that the havoc(op̂) operation only makes the values of global variables possibly
assigned in op̂ unconstrained. To prove that γ ′ |= η ′ , it remains to show that S′ and S′′
coincide. Now, consider the case where op̂ does not contain any call to a primitive function. It
is then trivial that S′ = S′′ . Otherwise, if op̂ contains a call to a primitive function, then, since
the primitive executor follows the operational semantics, that is, Sexec(S, f (~x)) computes
[[f (~x)]](·, ·, S), we have S′ = S′′ . Hence, we have proven that γ ′ |= η ′ .
For property (2), let η and η ′ be as follows:
η = (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S)
η ′ = (⟨l1 , ϕ1 ⟩, . . . , ⟨lN , ϕN ⟩, ϕ, S′ ),
such that S(sTi ) ≠ Running for all i = 1, . . . , N . Let γ and γ ′ be as follows:
γ = ⟨(l1 , s1 ), . . . , (lN , sN ), gs, S⟩
γ ′ = ⟨(l1 , s′1 ), . . . , (lN , s′N ), gs′ , S′′ ⟩.
By the operational semantics, we have si = s′i for all i = 1, . . . , N , and gs = gs′ . Since
S′ = S′′ , it follows from γ |= η that γ ′ |= η ′ .
Theorem (4.7). Let P be a threaded program. For every terminating execution of ESST(P ),
we have the following properties:
(1) If ESST(P ) returns a feasible counter-example path ρ̂, then we have γ →ρ̂ γ ′ for an
initial configuration γ and an error configuration γ ′ of P .
(2) If ESST(P ) returns a safe ARF F, then for every configuration γ ∈ Reach(P ), there
is an ARF node η ∈ Nodes(F) such that γ |= η.
Proof. We first prove property (1). Let the counter-example path ρ̂ be the sequence
ξ1 , . . . , ξm , such that, for each i = 1, . . . , m, the element ξi is either an ART edge or an
ARF connector. We need to show the existence of a computation sequence γ1 , . . . , γm+1
ξi
such that γi → γi+1 for all i = 1, . . . , m, and γ = γ1 and γ ′ = γm+1 . Let ρ̂j , for 0 ≤ j ≤ m,
denote the prefix ξ1 , . . . , ξj of ρ̂. Let ψ j be the strongest post-condition after performing the
operations in the suppressed version of ρ̂j . That is, ψ j is SP σsup(ρ̂j ) (true). For k = 1, . . . , m,
we need to show that, for any configuration γk satisfying ψ k−1 and the source node of ξk ,
there is a configuration γk+1 such that γk+1 satisfies ψ k and the target node of ξk .
First, any configuration satisfies true, and thus γ |= true. By definition of counterexample path, the source node of ξ1 is an initial node η0 . Any initial configuration satisfies
the initial node, and thus γ |= η0 . Second, take any 1 ≤ k ≤ m, and assume that we have
a configuration γk satisfying ψ k−1 and the source node of ξk . Consider the case where ξk
is an ART edge obtained by unwinding CFG edge labelled by an operation op. Let op
ˆ be
the label of the ART edge. That is, op
ˆ = op if op has no primitive function call; otherwise
op
ˆ = op′ where op′ is defined in the second case of rule E1. Since ψ m is satisfiable, then so
op
ˆ
is ψ k . It means that there is a configuration γ ′ such that γk → γ ′ and γ ′ |= ψ k . Recall that
the scheduler state of γk+1 is not constrained by ψ k and primitive function calls can only
modify scheduler states. Thus, there is a configuration γk+1 that differs from γ ′ only in the
scheduler state, such that γk →op γk+1 and γk+1 |= ψ k . When op has no primitive function call, then we simply take γ′ as γk+1 . By Lemma 4.6, it follows that γk+1 satisfies the target node of ξk , and hence we have γk →ξk γk+1 , as required.
Consider now the case where ξk is an ARF connector. The connector ξk is suppressed
in the computation of the strongest post-condition, that is ψ k is ψ k−1 . We obtain γk+1
from γk by replacing γk ’s scheduler state with the scheduler state in the target node of ξk .
Since free variables of ψ k do not range over variables tracked by the scheduler state and
γk |= ψ k−1 , we have γk+1 |= ψ k . By the construction of γk+1 and by Lemma 4.6, it follows
that γk+1 satisfies the target node of ξk , and hence we have γk →ξk γk+1 , as required.
We now prove property (2). We prove that, for any run γ0 , γ1 , . . . of P and for any
configuration γi in the run, there is a node η ∈ Nodes(F) such that γi |= η. We prove the
property by induction on the length l of the run:
Case l = 1: This case is trivial because the initial configuration γ0 satisfies the initial node,
and the construction of an ARF starts with the initial node.
Case l > 1: Let η ∈ Nodes(F) be an ARF node such that the configuration γl |= η. If η
is covered by another node η ′ ∈ Nodes(F), then, by Definition 4.3 of node coverage, we
have γl |= η ′ . Thus, we pick such an ARF node η such that it is not covered by other
nodes.
Consider the transition γl →op γl+1 from γl to γl+1 . By the rule E1, the node η has
a successor node η ′ obtained by performing the operation op. By Lemma 4.6, we have
γl+1 |= η ′ , as required.
Now, consider the transition γl →· γl+1 . Because the scheduler Sched implements
the function Sched in the operational semantics, then, by the rule E2, the node η has
a successor node η′ whose scheduler state coincides with that of γl+1 . By Lemma 4.6, we have
γl+1 |= η ′ , as required.
Theorem (5.4). A transition system M = (S, S0 , T ) is safe w.r.t. a set Terr ⊆ T of error
transitions iff Reach red (S0 , T ) that satisfies the cycle condition does not contain any error
state from EM,Terr .
Proof. If the transition system M is safe w.r.t. Terr , then Reach red (S0 , T ) ∩ EM,Terr = ∅
follows obviously because Reach(S0 , T ) ∩ EM,Terr = ∅ and Reach red (S0 , T ) ⊆ Reach(S0 , T ).
For the other direction, let us assume the transition system M being unsafe w.r.t. Terr .
Without loss of generality we also assume that Terr = {α}. We prove that for every state
s0 ∈ S such that there is a path of length n > 0 leading to an error state se , then there
is a path from s0 to an error state s′e such that the path consists only of transitions in the
persistent sets of visited states. When the state s0 is in S0 , then the states visited by the
latter path are only states in Reach red (S0 , T ). We first show the proof for n = 1 and n = 2,
and then we generalize it for arbitrary n > 1.
Case n = 1: Let s0 ∈ S be such that s0 →α se holds for an error state se . By the successor-state condition, the persistent set in s0 is non-empty. If the only persistent set in s0 is the singleton set {α}, then the path s0 →α se is the path leading to an error state and the
path consists only of transitions in the persistent sets of visited states. Suppose that the
transition α is not in the persistent set in s0 . Take the greatest m > 0 such that there is
a path
s0 →γ0 s1 →γ1 · · · →γm−1 sm ,
where for all i = 0, . . . , m − 1, the set Pi is the persistent set in state si , the transition
γi is in Pi , and the transition α is not in Pi (see Figure 10). First, the above path exists
because of the successor-state condition and it must be finite because the set S of states
is finite. The path cannot form a cycle, otherwise by the cycle condition the transition
α will have been in the persistent set in one of the states that form the cycle. That
is, by the above path, we delay the exploration of α as long as possible. Second, since
the transition α is enabled in s0 and is independent in si of any transition in Pi for all
i = 0, . . . , m − 1 (otherwise Pi is not a persistent set), then α remains enabled in sj for
j = 1, . . . , m. Third, since m is the greatest number, we have α in the persistent set in
the state sm , and furthermore sm →α s′e holds for an error state s′e . Thus, the path
s0 →γ0 · · · →γm−1 sm →α s′e
is the path from s0 leading to an error state s′e involving only transitions in the persistent
sets of visited states.
Case n = 2: Let s0 ∈ S be such that there is a path
s0 →β0 s′1 →β1=α se
for some state s′1 and an error state se . By the successor-state condition, the persistent
set in s0 is non-empty. If the only persistent set in s0 is the singleton set {β0 }, then the
path s0 →β0 s′1 consists only of transitions in the persistent set. By the case n = 1, it is
guaranteed that there is a path from s′1 leading to an error state s′e such that the path
consists only of transitions in the persistent sets of visited states. Thus, there is a path
from s0 leading to an error state s′e such that the path consists only of transitions in the
persistent sets of visited states.
Suppose that the transition β0 is not in the persistent set in s0 . Take the greatest
m > 0 such that there is a path
s0 →γ0 s1 →γ1 · · · →γm−1 sm ,
where for all i = 0, . . . , m − 1, the set Pi is the persistent set in state si , the transition γi
is in Pi , and the transition β0 is not in Pi (see Figure 10). With the same reasoning as
in the case of n = 1, the above path exists, and is finite and acyclic. That is, we delay
the exploration of β0 as long as possible.
Consider now the path
s0 →γ0 s1 →γ1 · · · →γm−1 sm →β0 s′m+1 .
We show that an error state is reachable from the state s′m+1 . First, since the transitions
γ0 and β0 are independent in s0 , the transitions γ0 and β0 are enabled, respectively,
in the states s′1 and s1 , and they commute in the state s′2 . The transition γ0 is also
independent of the transition α in s′1 , otherwise P0 is not a persistent set in s0 . Thus,
the transition α is enabled in s′2 . Second, since the transitions γ1 and β0 are independent
in s1 , the transitions γ1 and β0 are enabled, respectively, in the states s′2 and s2 , and
they commute in the state s′3 . The transition γ1 is independent of the transition α in s′2 ,
otherwise P1 is not a persistent set in s1 . Thus, the transition α is enabled in s′3 .
By repeatedly applying the above reasoning, it follows that the transition α is enabled
in the state s′m+1 . If the singleton set {α} is the only persistent set in s′m+1 , then we are
done. That is, the path
s0 →γ0 s1 →γ1 · · · →γm−1 sm →β0 s′m+1 →α s′e
is the path from s0 leading to an error state s′e such that it consists only of transitions
in the persistent sets of visited states.
In the same way as in the case of n = 1, if the transition α is not in the persistent set
in s′m+1 , then we can delay α as long as possible by taking the greatest k > 0 such that
there is a path
s′m+1 →γm s′m+2 →γm+1 · · · →γm+k−1 s′m+k+1 ,
where for all l = 1, . . . , k+1, the set Pm+l is the persistent set in state s′m+l , the transition
γm+l−1 is in Pm+l , and the transition α is not in Pm+l . Thus, the path
s0 →γ0 s1 →γ1 · · · →γm−1 sm →β0 s′m+1 · · · →γm+k−1 s′m+k+1 →α s′e
is the path from s0 leading to an error state s′e such that it consists only of transitions
in the persistent sets of visited states.
Case n > 1: Let s0 ∈ S be such that there is a path
s0 →β0 s′1 →β1 · · · →βn−1=α se
for some state s′1 and an error state se . By the successor-state condition, the persistent
set in s0 is non-empty. If the only persistent set in s0 is the singleton set {β0 }, then the
path s0 →β0 s′1 consists only of transitions in the persistent set. By the case n − 1, it is
guaranteed that there is a path from s′1 leading to an error state s′e such that the path
consists only of transitions in the persistent sets of visited states. Thus, there is a path
from s0 leading to an error state s′e such that the path consists only of transitions in the
persistent sets of visited states.
Figure 10: Cases of the proof of Theorem 5.4.
Suppose that the transition β0 is not in the persistent set in s0 . Take the greatest
m > 0 such that there is a path
s0 →γ0 s1 →γ1 · · · →γm−1 sm ,
where for all i = 0, . . . , m − 1, the set Pi is the persistent set in state si , the transition
γi is in Pi , and the transition β0 is not in Pi (see Figure 10). That is, we delay the
exploration of β0 as long as possible.
Consider now the path
s0 →γ0 s1 →γ1 · · · →γm−1 sm →β0 s′m+1 .
With the same reasoning as in the case of n = 2, we have the transition β1 enabled in
the state s′m+1 , and we can postpone the exploration of β1 as long as possible. When β1
gets explored, the transition β2 is enabled in the successor state. By repeatedly applying
the same reasoning for transitions βk for k = 2, . . . , n − 1, the path formed in a similar
way to that of the case of n = 2 is the path from s0 leading to an error state s′e such that
the path consists only of transitions in the persistent sets of visited states.
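To make the object of the proof concrete, the following small Python sketch computes a reduced reachable set in the spirit of Reach_red(S0 , T ): from every visited state, only the transitions in that state's persistent set are expanded. The transition system, the persistent-set choice and all names below are invented for illustration, and the sketch does not enforce the successor-state and cycle conditions assumed by the theorem.

# Minimal sketch of selective search with persistent sets (toy inputs only).
def reduced_reach(s0, persistent, apply):
    # persistent(s): the persistent set chosen in state s
    # apply(s, t):   successor state of s under transition t
    visited, stack = {s0}, [s0]
    while stack:
        s = stack.pop()
        for t in persistent(s):
            s2 = apply(s, t)
            if s2 not in visited:
                visited.add(s2)
                stack.append(s2)
    return visited

# Toy example: two independent transitions "a" and "b" over a pair of counters.
apply = lambda s, t: (s[0] + 1, s[1]) if t == "a" else (s[0], s[1] + 1)
enabled = lambda s: [t for t in ("a", "b") if (t == "a" and s[0] < 2) or (t == "b" and s[1] < 2)]
persistent = lambda s: enabled(s)[:1]   # naive choice, purely illustrative
print(sorted(reduced_reach((0, 0), persistent, apply)))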
Lemma (5.7). Let α and β be transitions that are independent of each other such that for concrete states s1 , s2 , s3 and abstract state η we have s1 |= η, and both α(s1 , s2 ) and β(s2 , s3 ) hold. Let η′ be the abstract successor state of η by applying the abstract strongest post-operator to η and β, and η′′ be the abstract successor state of η′ by applying the abstract strongest post-operator to η′ and α. Then, there are concrete states s4 and s5 such that: β(s1 , s4 ) holds, s4 |= η′ , α(s4 , s5 ) holds, s5 |= η′′ , and s3 = s5 .
Proof. By the independence of α and β, β(s1 , s4 ) holds. By the abstract strongest post-operator, we have s4 |= η′ . By the independence of α and β, α(s4 , s5 ) holds. By the abstract strongest post-operator and the fact that s4 |= η′ , we have s5 |= η′′ . Finally, by the independence of α and β, we have s3 = s5 .
Theorem (5.8). Let P be a threaded sequential program. For all terminating executions of ESST(P ) and ESSTPOR (P ), we have that ESST(P ) reports safe iff ESSTPOR (P ) does.
Proof. We first prove the left-to-right direction of the iff, and then prove the other direction.
(=⇒) : Assume that ESST(P ) returns a safe ARF F. Assume to the contrary that
ESSTPOR reports unsafe and returns a counter-example path ρ̂. By Theorem 4.7, we have γ →ρ̂ γ′ for an initial configuration γ and an error configuration γ′ of P . That is, the error
configuration is in Reach(P ). Again, by Theorem 4.7, there is an ARF node η ∈ Nodes(F)
such that γ ′ |= η. But then the node η is an error node, and F is not safe, which contradicts
our assumption that F is safe.
(⇐=) : We lift Theorem 5.4 and its proof to the case of abstract transition system
or abstract state space with the help of Lemma 5.7. The lifting amounts to establishing
correspondences between the transition system M = (S, S0 , T ) in Theorem 5.4 and the ARF
constructed by ESST and ESSTPOR . First, since the executions are terminating, the set of
reachable scheduler states is finite. Now let the set of ARF nodes reachable by the rules E1
and E2 correspond to the set S. That is, the set S is now the set of ARF nodes. The set
S0 contains only the initial node. A transition in T represents either an ART path ρ that
starts from the root of the ART and ends with a leaf of the ART, or an ARF connector.
The error transitions Terr contains every transition in T such that the transition represents
an ART path ρ with an error node as the end node. The set EM,Terr consists of error nodes.
Every path s0 →α0 s1 →α1 · · · →αn−1 sn corresponds to the following path in the ARF:
(1) for i = 0, . . . , n, the node si is a node in the ARF,
(2) for i = 0, . . . , n − 1, there is an ARF path from si to si+1 that is represented by the
transition αi , and
(3) for i = 0, . . . , n − 1, if the transition αi leads to a node s covered by another node s′ ,
then si+1 is s′ .
We now exemplify how we address the issue of commutativity in the proof of Theorem 5.4.
Consider the case n = 2 where transitions γ0 , β0 and β0 , γ0 commute in s′2 . In the case
of abstract state space, they might not commute. However, by Lemma 5.7, they commute
in the concrete state space. Thus, the transition α is still enabled after performing the
transitions γ0 , β0 .
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a
letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or
Eisenacher Strasse 2, 10777 Berlin, Germany
| 6 |
ACCEPTED FOR PUBLICATION, IEEE TRANSACTIONS ON SIGNAL PROCESSING
1
Multiple scan data association by
convex variational inference (extended version)
arXiv:1607.07942v2 [] 23 Jan 2018
Jason L. Williams, Senior Member, IEEE and Roslyn A. Lau, Student Member, IEEE
Abstract—Data association, the reasoning over correspondence
between targets and measurements, is a problem of fundamental
importance in target tracking. Recently, belief propagation (BP)
has emerged as a promising method for estimating the marginal
probabilities of measurement to target association, providing fast,
accurate estimates. The excellent performance of BP in the particular formulation used may be attributed to the convexity of the
underlying free energy which it implicitly optimises. This paper
studies multiple scan data association problems, i.e., problems
that reason over correspondence between targets and several sets
of measurements, which may correspond to different sensors or
different time steps. We find that the multiple scan extension of
the single scan BP formulation is non-convex and demonstrate
the undesirable behaviour that can result. A convex free energy
is constructed using the recently proposed fractional free energy
(FFE). A convergent, BP-like algorithm is provided for the single
scan FFE, and employed in optimising the multiple scan free
energy using primal-dual coordinate ascent. Finally, based on a
variational interpretation of joint probabilistic data association
(JPDA), we develop a sequential variant of the algorithm that is
similar to JPDA, but retains consistency constraints from prior
scans. The performance of the proposed methods is demonstrated
on a bearings only target localisation problem.
I. INTRODUCTION
Multiple target tracking is complicated by data association,
the unknown correspondence between measurements and
targets. The classical problem arises under the assumption
that measurements are received in scans (i.e., a collection of
measurements made at a single time), and that within each
scan, each target corresponds to at most one measurement, and
each measurement corresponds to at most one target.
Techniques for addressing data association may be classified
as either single scan (considering a single scan of data at a
time) or multiple scan (simultaneously considering multiple
scans), and as either maximum a posteriori (MAP) (finding the
most likely correspondence), or marginal-based (calculating the
full marginal distribution for each target). Common methods
include:
1) Global nearest neighbour (GNN), e.g., [1], is a single
scan MAP method, which finds the MAP correspondence
in the latest scan, and proceeds to the next scan assuming
that correspondence was correct
J. L. Williams (e-mail: [email protected]) is with the
National Security, Intelligence, Surveillance and Reconnaissance Division,
Defence Science and Technology Group, Australia, and the School of Electrical
Engineering and Computer Science, Queensland University of Technology,
Australia. R. A. Lau (e-mail: [email protected]) is with the
Maritime Division, Defence Science and Technology Group, Australia and
the Research School of Computer Science, Australian National University,
Australia.
2) Multiple hypothesis tracking (MHT) [2]–[4] is a multiple
scan MAP method, which in each scan seeks to find the
MAP correspondence over a recent history of scans
3) Joint probabilistic data association (JPDA) [5] is a
single scan marginal-based method, which calculates
the marginal distribution of each target, and proceeds by
approximating the joint distribution as the product of its
marginals
Classical JPDA additionally approximates the distribution of
each target as a moment-matched Gaussian distribution; in
this paper, we use the term JPDA more generally to refer to
the approach that retains the full marginal distribution of each
target in a manner similar to [6]–[10].
Compared to GNN, MHT and JPDA, multiple scan variants
of JPDA, e.g., [11], have received less attention. One may posit
that this is due to their formidable computational complexity:
While there exist fast approximations to the multiple scan
MAP problem such as Lagrangian relaxation [12], [13], no
such equivalents have existed for either single scan or multiple
scan JPDA.
Variational inference (e.g., [14], [15]) describes the collection
of methods that use optimisation (or calculus of variations)
to approximate difficult inference problems in probabilistic
graphical models (PGMs). Methods within this framework
include belief propagation (BP) [16], [17], mean field (MF)
[18],1 hybrid BP/MF approaches [19], [20], tree-reweighted
sum product (TRSP) [21] and norm-product BP (NPBP) [22].
Excellent performance has been demonstrated in a variety of
problems, typified by the recognition that turbo coding is an
instance of BP [23].
Variational inference was first applied to data association in
[24]–[26], addressing the problem of distributed tracking using
wireless sensor networks. The problem was formulated with
vertices corresponding to targets and sensors, where sensor
nodes represent the joint association of all sensor measurements
in the scan to targets. A related sensor network application
was studied in [27].
In contrast to these methods, which hypothesise the joint
association of all sensor measurements via a single variable,
the approach in [28]–[32] formulates the single scan problem
in terms of a bipartite graph, where vertices hypothesise
the measurement associated with a particular target, or the
target associated with a particular measurement. Empirically, it
was found that BP converges reliably and produces excellent
estimates of both the marginal association probabilities [30],
1 Mean field is also referred to as variational Bayes; following [14], we
use the term variational inference more generally, to refer to the entire family
of optimisation-based methods.
[32] and the partition function [31]. Convergence of BP in the
II. BACKGROUND AND MODEL
two related formulations was proven in [33], [34].
We use the abbreviated notation p(x), p(z|x), etc, to
In [35] it was shown that, when correctly parameterised, represent the probability density function (PDF) or point mass
the variational inference problem that underlies the bipartite function (PMF) of the random variable corresponding to the
formulation is convex. This may be understood to be the source value x, z conditioned on x, etc.
of the empirically observed robustness of the approach when
applied to the bipartite model. For example, it was shown in
[36] that the approximations produced by BP tend to be either A. Probabilistic graphical models and variational inference
very good, or very bad. This may be understood through the PGMs [14], [15], [47] aim to represent and manipulate the
intuition that BP converges to a “good” approximation if the joint probability distributions of many variables efficiently
objective function that it implicitly optimises is locally convex by exploiting factorisation. The Kalman filter [48] and the
in the area between the starting point and the optimal solution, hidden Markov model (HMM) [49] are two examples of
and a “bad” approximation if it is non-convex. Accordingly, if algorithms that exploit sparsity of a particular kind (i.e., a
the problem is globally convex, a major source of degenerate Markov chain) to efficiently conduct inference on systems
cases is eliminated.
involving many random variables. Inference methods based on
Various applications merging the BP formulation in [32] with the PGM framework generalise these algorithms to a wider
MF (as proposed in [20]) and expectation maximisation (EM) variety of state spaces and dependency structures.
PGMs have been developed for undirected graphical models
were examined in [37]–[41]. PGM methods provide a path for
extending the bipartite model to multiple scan problems; this (Markov random fields), directed graphical models (Bayes
has been studied in [42], [43], which extends the single scan BP nets) and factor graphs. In this work we consider a subclass
formulation of [32] to multiple scans using a restricted message of pairwise undirected models, involving vertices (i.e., random
passing schedule, rather than optimising the variational problem variables) v ∈ V, and edges (i.e., dependencies) e ∈ E ⊂ V ×V,
to convergence. An alternative PGM formulation of the same and where the joint distribution can be written as:2
p(xV ) ∝ ∏_{v∈V} ψv (xv ) ∏_{(i,j)∈E} ψi,j (xi , xj ).
problem was utilised in [44] for the purpose of parameter identification. However, these approaches lose the convexity
property of the single scan bipartite formulation, which is
understood to be the source of the robustness of this special As an example, a Markov chain involving variables
case.
(x1 , . . . , xT ) may be formulated by setting ψ1 (x1 ) = p(x1 ),
ψt (xt ) = 1 for t > 1, and edges ψt−1,t (xt−1 , xt ) =
p(xt |xt−1 ), t ∈ {2, . . . , T } representing the Markov transition
A. Contributions
kernels, although other formulations are possible.
Exact inference can be conducted on tree-structured graphs
This paper addresses the multiple scan data association
using belief propagation (BP), which operates by passing mesproblem using a convexification of the multiple scan model.
sages between neighbouring vertices. We denote by µi→j (xj )
We consider methods that optimise the variational problem to
the message sent from vertex i ∈ V to vertex j ∈ V where
convergence, rather than using restricted message schedules as
(i, j) ∈ E. The iterative update equations are then:
proposed in [42], [43]. A preliminary look at the convergence
X
Y
and performance of BP in multiple scan problems was also
µi→j (xj ) ∝
ψi,j (xi , xj )ψi (xi )
µj 0 →i (xi ).
included in [30]. In section IV, we perform a thorough
xi
(j 0 ,i)∈E,j 0 6=j
evaluation of these methods on a bearings only localisation
(1)
problem, and find that each of these methods can, in a This is also known as the sum-product algorithm. If the
challenging environment, give vanishingly small likelihood summations are replaced with maximisations, then we arrive
to the true solution in a significant portion of cases.
at max-product BP, which generalises the Viterbi algorithm
While our preliminary study [45] applying an extension of [50], providing the MAP joint state of all variables in the
the fractional free energy (FFE) (introduced in [46]) to tracking tree-structured graph. At convergence of sum-product BP, the
problems showed promise, the absence of a rapidly converging, marginal distribution at a vertex v can be calculated as:
Y
BP-like algorithm for solving it has limited its practical use.
p(x
)
∝
ψ
(x
)
µi→v (xv ).
(2)
v
v
v
In this paper, we provide such an algorithm, and prove its
(v,i)∈E
convergence.
Subsequently, the FFE is used as a building block in a In the case of a Markov chain, if all vertices are jointly
multiple scan formulation, which is shown in section IV to Gaussian, BP is equivalent to a Kalman smoother. Similarly,
address the undesirable behaviour of the previous methods. The if all vertices are discrete, BP is equivalent to inference on an
proposed method results in a convex variational problem for the HMM using the forward-backward algorithm. BP unifies these
multiple scan model, and a convergent algorithm for solving algorithms and extends them from chains to trees.
the problem is developed. Importantly, association consistency
2 In the general setting, the joint distribution is a product of maximal
constraints from previous scans are retained. The sequential
cliques [14, p9]. Since the graph is undirected, we assume that E is symmetric,
version of the algorithm is motivated by a new variational i.e., if (i, j) ∈ E then (j, i) ∈ E. We need only incorporate one of these two
interpretation of JPDA.
factors in the distribution.
Inference in cyclic graphs (graphs that have cycles, i.e., that For tree-structured graphs, BP can be shown to converge to the
are not tree-structured) is far more challenging. Conceptually, optimal value of this convex, variational optimisation problem.
one can always convert an arbitrary cyclic graph to a tree by It has further been shown that the feasible set described by
merging vertices (e.g., so-called junction tree representations) (8)-(11) is exact, i.e., any feasible solution can be obtained by
[15], [47], but in practical problems, the dimensionality of a valid joint distribution, and any valid joint distribution maps
the agglomerated variables tends to be prohibitive. BP may to a feasible solution.
If a graph contains a leaf vertex xi that is connected only
be applied to cyclic graphs; practically, this simply involves
repeated application of (1) until convergence occurs (i.e., until to vertex xj via factor ψi,j (xi , xj ),3 then inference can be
the maximum change between subsequent messages is less than performed equivalently by eliminating vertex xi , and replacing
a pre-set threshold). Unfortunately, this is neither guaranteed the vertex factor for xj with [15, ch 9]
ψ̃j (xj ) = Σ_{xi} ψi,j (xi , xj ).   (12)
to converge to the right answer, nor to converge at all. Nevertheless, it has exhibited excellent empirical performance
in many practical problems [36]. For example, the popular
iterative turbo decoding algorithm has been shown to be an Given the resulting marginal distribution for p(xj ) (or an
approximation thereof), the pairwise joint distribution (or belief)
instance of BP applied to a cyclic graph [23].
The current understanding of BP in cyclic graphs stems of (xi , xj ) can be reconstructed as:
from [17]. It has been shown (e.g., [14, Theorem 3.4]) that one
p(xi , xj ) = p(xj ) ψi,j (xi , xj ) / ψ̃j (xj ).   (13)
can recover exact marginal probabilities from an optimisation
of a convex function known as the Gibbs free energy. In the
single-vertex case (or if all variables are merged into a single vertex), this can be written as described in lemma 1.
Lemma 1. The Gibbs free energy variational problem for a single random variable x can be written as:
minimise_{q(x)} − H(x) − E[log ψ(x)]   (3)
subject to q(x) ≥ 0, Σ_x q(x) = 1,   (4)
where E[log ψ(x)] ≜ Σ_x q(x) log ψ(x), and H(x) = −E[log q(x)] is the entropy of the distribution q(x) (all expectations and entropies are under the distribution q). The solution of the optimisation is q(x) = ψ(x) / Σ_{x′} ψ(x′ ) ∝ ψ(x).
Lemma 2 provides a variational viewpoint of vertex elimination, interpreting it as a partial minimisation of the pairwise joint of the neighbour and leaf, conditioned on the marginal distribution of the neighbour of the leaf. The theorem uses the conditional entropy, defined as: [51]
H(xi |xj ) = − Σ_{xj} q(xj ) Σ_{xi} q(xi |xj ) log q(xi |xj )   (14)
= − Σ_{xi} Σ_{xj} q(xi , xj ) log [ q(xi , xj ) / Σ_{x′i} q(x′i , xj ) ].   (15)
Conditional entropy was shown to be a concave function of the joint in [52]. Note that, due to marginalisation constraints,
H(xi |xj ) = H(xi , xj ) − H(xj ) = H(xi ) − I(xi ; xj ). (16)
Similar expressions apply for continuous random variables, replacing sums with integrals. The objective in (3) can be recognised as the Kullback-Leibler (KL) divergence between q(x) and the (unnormalised) distribution ψ(x). If the graph is a tree, the entropy can be decomposed as:
H(x) = Σ_{v∈V} H(xv ) − Σ_{(i,j)∈E} I(xi ; xj ),   (5)
I(xi ; xj ) = H(xi ) + H(xj ) − H(xi , xj ).   (6)
I(xi ; xj ) is the mutual information between xi and xj [51]. Accordingly, the variational problem can be written as:
minimise_{q(xv ), q(xi ,xj )} − Σ_{v∈V} {H(xv ) + E[log ψv (xv )]} − Σ_{(i,j)∈E} {−I(xi ; xj ) + E[log ψi,j (xi , xj )]}   (7)
subject to q(xi , xj ) ≥ 0 ∀ (i, j) ∈ E, ∀ xi , xj   (8)
Σ_{xv} q(xv ) = 1 ∀ v ∈ V   (9)
Σ_{xj} q(xi , xj ) = q(xi ) ∀ (i, j) ∈ E   (10)
Σ_{xi} q(xi , xj ) = q(xj ) ∀ (i, j) ∈ E.   (11)
Lemma 2. Let J[q(xj )] be the solution of the following optimisation problem:
J[q(xj )] = minimise_{q(xi ,xj )≥0} − H(xi |xj ) − E[log ψi,j (xi , xj )]   (17)
subject to Σ_{xi} q(xi , xj ) = q(xj ) ∀ xj .   (18)
Then
J[q(xj )] = − Σ_{xj} q(xj ) log Σ_{xi} ψi,j (xi , xj ),   (19)
and the minimum of the optimisation of (17) is attained at q(xi , xj ) = q(xj )q(xi |xj ), where
q(xi |xj ) = ψi,j (xi , xj ) / Σ_{x′i} ψi,j (x′i , xj ).   (20)
i
Proof of this result can be found in [22, App C]. On treestructured graphs, BP may be viewed as successive applications
of variable elimination, followed by reconstruction.
If the graph has cycles (i.e., is not a tree), then the entropy
does not decompose into the form in (5)-(7). Furthermore, the
3 Without loss of generality, assume that neither vertex has a vertex factor,
as this can be incorporated into the edge factor.
feasible set in (8)-(11) is an outer bound to the true feasible set, state of all targets is X = (x1 , . . . , xn ). We denote the
i.e., there are feasible combinations of marginal distributions set of measurements received in scan s ∈ S = {1, . . . , S}
s
that do not correspond to a valid joint distribution. Nevertheless, by Zs = {z 1s , . . . , z m
s }. We consider a batch-processing
as an approximation, one may solve the optimisation in (7)- algorithm, where all scans s ∈ S are processed at once, and a
(11). The objective in (7) is referred to as the Bethe free energy sequential method, where scans are introduced incrementally,
(BFE) after [53], a connection identified in [17]. For a cyclic but some reprocessing is performed on each scan in the
graph, it differs from the Gibbs free energy, but is a commonly window after a new scan is revealed. Our goal is to avoid the
utilised approximation.
need to explicitly enumerate or reason over global association
It was shown in [17] that, if it converges, the solution hypotheses, i.e., hypotheses in the joint state space of all
obtained by BP is a local minimum of the BFE. The BP targets. Instead, marginal distributions of each target are stored,
message iterates in (1) can be viewed as a general iterative and dependencies between targets are accounted for using
method for solving a series of fixed point equations derived variational methods.
from the optimality conditions of the BFE variational problem
We assume that the prior information for each target is
(see [14, 4.1.3]). The marginal probability estimates obtained independent, such that the prior distribution of X is
using BP, denoted in this paper by the symbol q, are referred
n
Y
to as beliefs.
p(X) ∝
ψ i (xi ),
(21)
TRSP [14], [21] provides a convex alternative to the BFE, by
i=1
applying weights γi,j ∈ [0, 1] to the mutual information terms
i
i
I(xi ; xj ). If the weights correspond to a convex combination where the factors ψ (x ) collectively represent the joint.
We use the symbol i ∈ {1, . . . , n} to refer to a target index,
of embedded trees (i.e., a convex combination of weighted,
j
∈
{1, . . . , ms } to refer to a measurement index, and s ∈
tree-structured sub-graphs, where in a given graph γi,j = 1 if
S
to
refer to a measurement scan index. Each target i is
the edge is included, and γi,j = 0 otherwise), then the resulting
detected
in each scan s with probability Psd (xi ), target-related
free energy is convex. A rigorous method for minimising energy
measurements follow the model ps (z s |xi ), and false alarms
functions of this form was provided in [22].
fa
Finally, MF [18] approaches the problem by approximating occur according to a PPP with intensity λs (z s ).
The relationship between targets and measurements is
the entropy and expectation in (3) assuming that the joint
described
via a set of latent association variables, comprising:
distribution is in a tractable form, e.g., the product of the
1) For each target i ∈ {1, . . . , n}, an association variable
marginal distributions. Consequently, the expectations (the
ais ∈ {0, 1, . . . , ms }, the value of which is an index to
second term of (3)) are non-convex, and resulting methods
the measurement with which the target is hypothesised to
tend to underestimate the support of the true distribution, e.g.,
be associated in scan s (zero if the target is hypothesised
finding a single mode of a multi-modal distribution. It is also
to have not been detected)
possible to use a hybrid of MF and BP, as proposed in [19],
2) For each measurement j ∈ {1, . . . , ms }, an association
[20].
variable bjs ∈ {0, 1, . . . , n}, the value of which is an index
B. Multiple scan data association
to the target with which the measurement is hypothesised
to be associated (zero if the measurement is hypothesised
The problem we consider is that of data association across
to be a false alarm)
multiple scans, involving many targets, the state of which is
to be estimated through point measurements. We assume that This redundant representation implicitly ensures that each
many targets are present, each target gives rise to at most one measurement corresponds to at most one target, and each target
measurement (excluding so-called extended target problems, corresponds to at most one measurement. It was shown in [32]
where targets may produce multiple measurements), and each that, for the single scan case, this choice of formulation results
measurement is related to at most one target (excluding so- in an approximate algorithm with guaranteed convergence and
called merged measurement problems, e.g., where multiple remarkable accuracy.
s
targets fall within a resolution cell). We assume that false
Denoting as = (a1s , . . . , ans ) and bs = (b1s , . . . , bm
s ), the
alarms occur according to a Poisson point process (PPP).
joint distribution of the measurements and association variables
For clarity of presentation, we assume that all measurements for scan s can be written as:
related to the i-th target are independent conditioned on the
Y
target state xi . If xi is the target state vector at a given time,
i
as i
d
i
p(Z
,
a
,
b
|X)
∝
P
(x
)p
(z
|x
)
s
s s
s s
s
this effectively restricts the problem such that the multiple
i
i|as >0
scans correspond to different sensors at the same time instant,
or the target state is static. Problems involving multiple time
Y
Y
j
steps can be addressed by replacing xi with the joint state
×
[1 − Psd (xi )] ×
λfa
(z
)
s
s
i
j
over a time window, e.g., xi = (xi1 , . . . , xit ). An alternative
i|as =0
j|bs =0
approach involving association history hypotheses is discussed
× ψs (as , bs ), (22)
in [54, app A].
We assume that the number of targets n is known, though the where ψs (as , bs ) = 1 if as and bs form a consistent
method can easily be extended to an unknown, time-varying association event (i.e., if ais = j > 0 then bjs = i and vice
number of targets using the ideas in [10], [55]. The joint versa), and ψs (as , bs ) = 0 otherwise.
Fig. 2. Bipartite formulation of a single scan data association problem.
optimisation, but first we review the single scan formulation
and the underlying variational problem.
Following similar lines to [35], we show in [54, app B-A] that
the Bethe variational problem for the single scan formulation
can be solved by minimising the objective:
Fig. 1. Graphical model formulation of multiple scan problem.
The quantity of interest is the posterior distribution
Y
p(X, aS , bS |ZS ) ∝ p(X)
p(Zs , as , bs |X).
minimise
i,j
qs
(23)
s∈S
fa j
j=1 λs (z s )
Dividing (22) through by
(since the measurement
values are constants in (23)), this can be written as
p(X, aS , bS |ZS ) ∝
ms
n
Y
Y
Y
ψsi (xi , ais )
ψ i (xi )
ψsi,j (ais , bjs ) , (24)
j=1
s∈S
qsi,j log
i=1 j=0
ms
n X
X
where
qsi,j
wsi,j
+
ms
X
qs0,j log qs0,j
j=1
(1 − qsi,j ) log(1 − qsi,j )
−
Qms
i=1
ms
n X
X
(27)
i=1 j=1
subject to
ms
X
j=0
n
X
qsi,j = 1 ∀ i ∈ {1, . . . , n}
(28)
qsi,j = 1 ∀ j ∈ {1, . . . , ms }
(29)
i=0
0 ≤ qsi,j ≤ 1
(30)
where qsi,j = q(ais = j) = q(bjs = i) is the belief that target i
is associated with measurement j (or, if j = 0, that target i
ψsi (xi , ais ) =
(25)
is missed, or if i = 0, that measurement j is a false alarm4 ),
1 − Psd (xi ),
ais = 0
and wsi,j = ψ i (ais = j) is the node factor, which will be
and the functions
defined subsequently in (32). The constraints in (28) and
(
i
j
j
i
(29)
are referred to as consistency constraints, as they are
0, as = j, bs 6= i or bs = i, as 6= j
(26) a necessary condition for the solution to correspond to a valid
ψsi,j (ais , bjs ) =
1, otherwise
joint association event distribution.
It is not obvious that the optimisation in (27)-(30) is convex,
provide a factored form of ψs (as , bs ), collectively ensuring
that the redundant sets of association variables (a1s , . . . , ans ) but it can be proven using the result in [35], which shows that
s
a closely related objective (excluding terms involving qsi,0 and
and (b1s , . . . , bm
s ) are consistent (i.e., setting the probability
of any event in which the collections are inconsistent to zero). qs0,j ) is convex on the subset in which (30) and either (28) or
A graphical model representation of (24) is illustrated in (29) apply. Details can be found in the proof of lemma 3.
In [46] it was shown that, if the correct fractional coefficient
figure 1.
Over multiple time steps, JPDA operates by calculating the γ ∈ [−1, 1] is incorporated on the final term in the objective
marginal distribution of each target, pi (xi ), fitting a Gaussian (27) (excluding false alarms and missed detections, and setting
to this distribution, and proceeding to the next time step ms = n), the value of the modified objective function at
approximating the joint prior distribution by the product of the optimum is the same as the Gibbs free energy objective.
these approximated marginal distributions. It may be applied In the formulation that incorporates false alarms and missed
similarly to multiple sensor problems, introducing an arbitrary detections, the fractional free energy (FFE) objective is:5
order to the sensors.
ms
ms
n X
X
X
q i,j
C. Single scan BP data association
The single scan version of the model in section II-B was studied in [30], [32], with similar formulations (excluding false alarms and missed detections) examined in [31], [35]; the
FBγ ([qsi,j ]) = Σ_{i=1}^{n} Σ_{j=0}^{ms} qsi,j log( qsi,j / wsi,j ) + γ Σ_{j=1}^{ms} qs0,j log qs0,j − γ Σ_{i=1}^{n} Σ_{j=1}^{ms} (1 − qsi,j ) log(1 − qsi,j ).   (31)
graphical model for the single scan data association problem
is illustrated in figure 2. In [32], [34], simplified BP equations
4 If j = 0 then there is no corresponding q(bj ), and if i = 0 then there
s
were provided, and convergence was proven; these results do
is no corresponding q(ais ).
not apply to the multiple scan problem. In the present work, we
5 Note that we reverse the sign of γ in comparison to [46], so that γ = 1
seek to address the multiple scan problem using ideas in convex yields the regular BFE.
( P d (xi )p
s
j
i
s (z s |x )
j
λfa
(z
)
s
s
,
ais = j > 0
Despite the fact that the “right” value of γ is not known for any
particular problem, our preliminary investigations in [45], and
the results in section IV, show that fractional values γ ∈ [0, 1]
can yield improved beliefs.6 Practically, the value could be
chosen a priori based on the problem parameters. It is straightforward to show that the inclusion of the fractional coefficient
γ ∈ [−1, 1] retains convexity of (31) (on the appropriate
subset); again, details are in the proof of lemma 3.
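For reference, the FFE objective (31) is cheap to evaluate directly for a candidate set of beliefs. The following sketch does exactly that; the function and variable names are mine, and setting gamma = 1 recovers the Bethe objective (27).

# Direct evaluation of the fractional free energy objective (31).
import numpy as np

def xlogx(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, t * np.log(np.maximum(t, 1e-300)), 0.0)

def ffe(q, q0, w, gamma):
    # q:  (n, m+1) beliefs q_s^{i,j}, column 0 = missed detection
    # q0: (m,) false-alarm beliefs q_s^{0,j};  w: (n, m+1) node factors w_s^{i,j}
    kl_term = np.sum(xlogx(q) - q * np.log(w))
    fa_term = gamma * np.sum(xlogx(q0))
    concave_term = -gamma * np.sum(xlogx(1.0 - q[:, 1:]))
    return kl_term + fa_term + concave_term

# Tiny usage example with uniform beliefs and unit weights.
n, m = 2, 2
w = np.ones((n, m + 1)); q = np.full((n, m + 1), 1.0 / (m + 1)); q0 = np.full(m, 0.5)
print(ffe(q, q0, w, gamma=0.55))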
We now consider how these results may be applied to solve
the single scan problem incorporating the kinematic states xi ,
as illustrated in figure 3(a) (or, equivalently, the single scan
version of figure 1). Whilst a solution for the belief of xi
can be recovered from a solution of (27)-(30) using lemma
2, a variational problem formulation incorporating xi admits
extension to multiple scan problems. As in (24), this model
involves factors ψ i (xi ) and ψsi (xi , ais ), where in (27),
Z
wsi,j = ψsi (ais = j) = ψ i (xi )ψsi (xi , ais = j)dxi . (32)
It is shown in [54, app B-A] that we can arrive at the problem in
(27)-(30) by performing a partial minimisation of the following
objective over q i (xi ) and the pairwise joint qsi (xi , ais ):
FB ([q i (xi )], [qsi (xi , ais )], [qsi,j ]) = −
n
X
H(xi )
f (q) is closed and convex, then the conjugate dual of f ∗ (λ) is
the original function f (q), thus f (q) and f ∗ (λ) are alternate
representations of the same object.
Dual functions are useful in constrained convex optimisation
since the optimal value of the primal
min f (q)
q
max −f ∗ (AT λ),
i=1 j=1
subject to the constraints:
and if f (q) is strictly convex then, given the optimal solution
λ∗ of the dual, the optimal value q ∗ of the primal can be
recovered as the solution of the unconstrained optimisation
min f (q) − (AT λ∗ )T q.
qsi,j
Z
=
n
X
qs0,j
≥ 0,
= q i (xi ),
(34)
(35)
qsi (xi , j)dxi ,
(36)
qsi,j = 1,
(37)
One additional usefulness of conjugate duality over Lagrangian duality is its ability to tractably address objectives
that decompose additively. In this work, we utilise the primaldual framework developed in [22], which addresses problems
of the form:
n
X
min f (q) +
hi (q),
(43)
i=1
where f (q) and hi (q) are proper, closed, convex functions.
Constraints are addressed by admitting extended real-valued
functions. It is shown in [22], [57] that the dual of (43) is
λ1 ,...,λn
Pn
i=1 λi ) −
n
X
h∗i (λi ).
(44)
i=1
Thus, assuming smoothness of f ∗ (or strict convexity of f ),
the dual optimisation can be performed via block coordinate
ascent, iteratively performing the following steps for each i:
X
µ :=
λj ,
(45)
j6=i
i=0
ms Z
X
(42)
q
max −f ∗ (−
qsi (xi , ais ) ≥ 0,
ms
X
qsi (xi , ais )
i
as =0
(41)
λ
q
+ E[log ψ i (xi )] + H(ais |xi ) + E[log ψsi (xi , ais )]
m
ms
n X
s
X
X
0,j
0,j
+
qs log qs −
(1 − qsi,j ) log(1 − qsi,j ), (33)
(40)
is the same as the optimal value of the dual optimisation
i=1
j=1
subject to Aq = 0
λi := arg max −f ∗ (−λi − µ) − h∗i (λi ).
(46)
λi
qsi (xi , ais )dxi = 1.
(38)
ais =0
D. Conjugate duality and primal-dual coordinate ascent
The Fenchel-Legendre conjugate dual of a function f (q) is
defined as: [56]
∗
T
f (λ) = sup λ q − f (q).
(39)
q
The dual f ∗ (λ) is convex regardless of convexity of f (q),
since it is constructed as the supremum of a family of linear
functions. The key outcome of conjugate duality is that, if
6 e.g., in high SNR cases (very high P d , very low λfa ), the BFE objective
tends to yield solutions that are almost integral, i.e., are closer to MAP
solutions rather than marginal probabilities, as illustrated later in figure 6(a).
The inclusion of the γ coefficient on the term involving qs0,j retains the
property of the BFE that the optimisation provides a near-exact result when
targets are well-spaced.
The method in [22] shows that the block optimisations
required in (46) can be performed via primal minimisations, i.e.,
the updated value λi can be obtained through the optimisation
q ∗ := arg min f (q) + hi (q) + µT q,
(47)
q
λi := −µ − ∇f (q ∗ ).
(48)
In [22], the authors develop the norm-product belief propagation algorithm for convexifications of general PGM inference
problems. While these methods could be applied to the problem
of interest, they do not exploit the unique problem structure and
resulting convexity discussed in section II-C. Consequently,
the necessary convexification procedure would produce an
unnecessarily large change to the BFE. Thus we adopt the
optimisation framework of [22], but the solution does not
exactly fit the NPBP algorithm, and so it is necessary to develop
it from the basic framework.
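The primal-dual coordinate ascent recipe (45)-(48) can be illustrated on a toy problem in which every block minimisation has a closed form. The quadratic choices of f and hi below are purely illustrative and are not related to the free energies used later.

# Dual block coordinate ascent via primal minimisations, cf. (45), (47), (48).
import numpy as np

c = [np.array([0.0, 2.0]), np.array([3.0, -1.0])]   # data for h_i(q) = a_i/2 ||q - c_i||^2
a = [1.0, 2.0]
f_grad = lambda q: q                                # f(q) = 1/2 ||q||^2, strictly convex

lam = [np.zeros(2), np.zeros(2)]
q = np.zeros(2)
for sweep in range(50):
    for i in range(2):
        mu = sum(lam[j] for j in range(2) if j != i)          # (45)
        q = (a[i] * c[i] - mu) / (1.0 + a[i])                 # (47): argmin f + h_i + mu^T q
        lam[i] = -mu - f_grad(q)                              # (48)

q_closed_form = sum(ai * ci for ai, ci in zip(a, c)) / (1.0 + sum(a))
print(q, q_closed_form)    # the iterates approach the joint minimiser of f + h_1 + h_2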
III. VARIATIONAL MULTIPLE SCAN DATA ASSOCIATION
It can be shown that the modification of the factors ψ̄1i (xi , ai1 )
effectively incorporates the Lagrange multipliers for the constraints that are being relaxed, and a linearisation of the concave
term (the third line in the objective in figure 3(a)), such that
the change in 3) above attempts to nullify changes 1) and
2), i.e., the solution of the modified variational problem is
the same as the original one. The linearisation of the concave
term is reminiscent of a single iteration of the convex-concave
procedure in [58].
The proposed solution is illustrated in figure 3(d). Compared
to JPDA-BP (figure 3(b)), it makes the following modifications:
The standard approach to tracking using JPDA and related
methods is to calculate the marginal distribution of each target,
and proceed to the next scan approximating the posterior as
the product of the single-target marginal distributions.7 In
many cases, this approach is surprisingly effective. The method
proposed in this work is based on a variational interpretation of
the JPDA approach, which gives rise to a family of formulations
that includes the JPDA-like approach and MSBP. We refer to the
JPDA-like approach using the BP approximation of marginal
association probabilities (beliefs), as JPDA-BP. In comparison,
true JPDA uses exact marginal association probabilities, and
1) As in the FFE objective in (31), weP
applyPfractional
n
ms
approximates the posterior as a Gaussian at each step.
weights γs to the concave terms − i=1 j=1
(1 −
P
i,j
i,j
In particular, we examine the single scan BP approach (e.g.,
qs ) log(1 − qs ), such that if
s γs < 1 then the
[10]) which calculates association beliefs, and approximates
objective function willPbe strictly convex
n
the joint as the product of the beliefs (although the steps taken
2) We retain constraints i=0 qsi,j = 1 for both scans
are not unique to the BP estimate of marginal probabilities).
3) If the coefficient γs differs from the value used for that
Consider the example in figure 3(a), where in scan 1 (S = {1})
scan at the previous time step, we modify the factor
the beliefs are calculated through the optimisation of (33).
ψsi (xi , ais ) such that the solution remains unchanged
Denote the solution as q i (xi , ai1 ) and q1i,j . JPDA-BP moves
(before the following scan is incorporated)
forward to the next scan approximating the joint as:
The reason for the latter step is that it was found to be desirable
n
Y
to use values γs closer to 1 in the current (most recent) scan,
p(X, a1 ) ≈
q i (xi , ai1 ).
(49) and reducing to zero in earlier scans. In our experiments, we use
i=1
the selection γs = 0.55 in the current scan, and γs = 0 in past
This approximated joint distribution can be formulated as a scans. In this case, the algorithm approximates the objective
PGM using the graph in the left-hand side of figure 3(b), function in a similar manner to JPDA-BP, but the consistency
where the factor ψ̄1i (xi , ai1 ) is modified such that the simplified constraints from previous scans are retained. Thus, unlike
formulation results in the same solution as the original problem JPDA, when later scans are processed, the constraints which
in figure 3(a). This does not mean to say that the objective on ensure that the origin of past measurements is consistently
the left-hand side of figure 3(b) is equivalent to that in figure explained remain enforced. In section IV, we will see that this
3(a). Note that we could eliminate the nodes ai1 from the graph can result in improved performance.
as they will subsequently remain leaf nodes, but we choose to
JPDA and JPDA-BP operate sequentially, at each stage
retain them for comparison to the proposed algorithm.
introducing a new set of nodes, as in figure 3(b). Following
When a second scan of measurements is introduced (S = approximation, association variables from past scans are leaf
{1, 2}), JPDA-BP effectively solves the problem in the right- nodes, and thus can be eliminated. Like MSBP, the proposed
hand side of figure 3(b) where, for the newest scan (s = method maintains past association variables, and introduces a
2), ψ̄2i (xi , ai2 ) , ψ2i (xi , ai2 ). Although the data for q1i,j (i.e., new set in each scan. The sequential variant of the algorithm
ψ̄1i (xi , ai1 )) is unchanged from the left-hand side of figure 3(b), operates by performing one or more forward-backward sweeps
introduction of new information in scan 2 will, in general, over recent scans. Complexity could be mitigated by performing
modify the belief values for the first scan, q1i,j . Consistency the operations on past scans intermittently, rather than upon
constraints (specifically, (37)) will no longer hold. If BP was receipt of every scan.
applied directly without the approximations made by JPDA (i.e.,
In what follows (and in figure 3), we assume that the target
(49)), we would arrive at the problem in figure 3(c). Comparing state xi is discrete. This can be achieved by using a particle
the variational problems at scan 2 in figures 3(b) (JPDA-BP) representation, such that the prior distribution ψ i (xi ) contains
and 3(c) (MSBP), we note the following differences:
the
weights of the particles. An expression such as
P prior
Pn Pms
i
i
i,j
i,j
ψ
(x
)ψsi (xi , ais ) should be interpreted as the sum of
1) The concave term − i=1 j=1 (1 − qs ) log(1 − qs )
xi
appears only for scan 2 in figure 3(b), but for both scans the function evaluated at the particle locations. The output
of the inference procedure developed is used to reweight the
in figure 3(c) P
n
2) The constraint i=0 qsi,j = 1 appears only for scan 2 in particles, e.g., recovering an estimate of the PDF of continuous
kinematic state ξ i as:
figure 3(b), but for both scans in figure 3(c)
X
3) The factors ψ̄1i (xi , ai1 ) have been modified in figure 3(b)
q i (ξ i ) ≈
q i (xi )δ(ξ i − xi ).
(50)
but remain at their original values in figure 3(c)
4) The objective function in figure 3(b) is convex, whereas
the objective in figure 3(c) is non-convex
7 It can be shown (e.g., [15, p277]) that the product of the marginals is
the distribution with independent targets that best matches the exact joint
distribution.
xi
The association history formulation of appendix A may be
applied simply by replacing xi with the single target association
history aiS . Some simplifications can be made in this case since
H(ais |aiS ) = 0.
Fig. 3. Probabilistic graphical model and Bethe free energy (a) single scan formulation, (b) multiple scan formulation using JPDA-BP approximation, (c)
standard multiple scan approach using MSBP, and (d) convex multiple scan approach, where shading depicts the weighting of the corresponding entropy and
mutual information terms. The formulation in the diagrams excludes false alarm events (qs0,j ) for simplicity; these events are modelled in the formulation in
the text. For the newest scan (s = 2) in the right of (b) and (d), ψ̄2i (xi , ai2 ) , ψ2i (xi , ai2 ).
A. Bethe free energy function
In [54, app B-B], we show that the Bethe free energy for
the multiple scan formulation in section II-B (and figure 3(c))
can be written as:
FB ([q i (xi )], [qsi (xi , ais )], [qsi,j ]) =
n
X
H(xi ) + E[log ψ i (xi )]
−
i=1
−
n
XX
H(ais |xi ) + E[log ψsi (xi , ais )]
s∈S i=1
+
ms
XX
qs0,j log qs0,j −
s∈S j=1
ms
n X
XX
(1 − qsi,j ) log(1 − qsi,j ),
s∈S i=1 j=1
(51)
The use of re-weighting to obtain a convex free energy is
closely related to the TRSP algorithm [21].
In the development that follows, we provide a decomposition
of the convex free energy that permits application of primaldual coordinate ascent (PDCA). First we give the basic form,
and show that the components are convex. Then we state
weights which ensure that the objective is the same as (58).
Then, in section III-C, we provide algorithms to minimise each
block that needs to be solved in PDCA.
In order to optimise (58), we consider decompositions of
the expression of the form:
FBγ,β ([q i (xi )], [qsi (xi , ais )]) = f ([q i (xi )], [qsi (xi , ais )])
Xn
+
hs,1 ([q i (xi )], [qsi (xi , ais )])
s∈S
subject to the constraints:
o
+ hs,2 ([q i (xi )], [qsi (xi , ais )]) , (60)
qsi (xi , ais ) ≥ 0, q i (xi ) ≥ 0
ms
XX
qsi (xi , ais ) = 1
(52)
where
(53)
xi ais =0
ms
X
qsi (xi , ais ) = q i (xi ) ∀ i, xi ∀ s ∈ S
(54)
f ([q i (xi )], [qsi (xi , ais )])
n
X
=−
κf,x H(xi ) + E[log ψ i (xi )]
i=1
ais =0
qsi,j
=
X
qsi (xi , j)
xi
i,j
qs ≥
n
X
qsi,j
i=0
∀ i, j, ∀ s ∈ S
(55)
(56)
= 1 ∀ j, ∀ s ∈ S,
(57)
B. Convexification of energy function
In this work, we consider convex free energies of the form:
FBγ,β ([q i (xi )], [qsi (xi , ais )], [qsi,j ])
n
X
i
=
i=1
−
H(ais |xi ) + E[log ψsi (xi , ais )]
s∈S i=1
+
X
s∈S
−
X
s∈S
γs
βs
ms
X
qs0,j log qs0,j
j=1
ms
n X
X
(1 −
qsi,j ) log(1
−
qsi,j ).
(58)
i=1 j=1
The difference between (58) and (51) is the incorporation of
the coefficients γs ∈ [0, 1) and βs ∈ (0, 1] in the final two
terms. We will show that (58) is convex if η ≥ 0 (strictly
convex if η > 0), where
X
η =1−
γs .
(59)
s∈S
hs,1 ([q i (xi )], [qsi (xi , ais )])
n
X
=−
κs,1,x H(xi ) + κs,1,s H(xi , ais )
i=1
ms
X
+ βs
qs0,j log qs0,j
j=1
− γs
ms
n X
X
(1 − qsi,j ) log(1 − qsi,j ),
i=1 j=1
i
i
hs,2 ([q (x )], [qsi (xi , ais )])
n
X
κs,2,x H(xi )
=−
i=1
+ κs,2,s H(xi , ais ) .
(62)
(63)
Note that we do not consider (60) or (62) to depend on [qsi,j ]
as these are uniquely determined from [qsi (xi , ais )] by the
constraints in (55)–(57) (which will be enforced whenever
we consider the block hs,1 , the only block that includes these
variables).
Immediate statements that can be made regarding convexity
of (61)–(63) include:
H(x ) + E[log ψ i (xi )]
n
XX
κf,s H(xi , ais ) + E[log ψsi (xi , ais )] , (61)
s∈S i=1
0 ∀ i, j, ∀ s ∈ S
where q i (xi ) is the belief (i.e., approximate probability) of the
state of target i, qsi,j is the belief that target i is associated
with measurement z js , qs0,j is the belief that measurement z jS
is not associated with any target, and the set S indexes the
measurement scans under consideration.
−
−
n
XX
1) f is strictly convex if κf,x > 0 and κf,s > 0; this is the
consequence of the convexity of entropy
2) hs,2 is convex if the consistency constraints (54) are
enforced for scan s, κs,2,s ≥ 0, and κs,2,x + κs,2,s ≥ 0;
this is the consequence of the convexity of entropy, and
of conditional entropy
Convexity of hs,1 is proven in the lemma below. Subsequently,
the decomposition in terms of f , hs,1 and hs,2 (for each s ∈ S)
is utilised in the PDCA framework introduced in section II-D.
Lemma 3. If constraints (52)–(57) are enforced for scan s,
κs,1,s ≥ 0, βs ≥ 0 and κs,1,s + κs,1,x ≥ γs , then hs,1 is
convex.
The proof of lemma 3 is in [54, app C]. The following
lemma provides coefficients which fit (58) into the form (60),
in order to permit solution using PDCA.
Lemma 4. Given the following, (60) is equivalent to (58):
κf,x = κf,s = η/(S + 1),   (64)
κs,1,x = −η/(S + 1),   (65)
κs,1,s = γs + η/(S + 1),   (66)
κs,2,x = −(1 − γs − 2η/(S + 1)),   (67)
κs,2,s = 1 − γs − 2η/(S + 1),   (68)
where S = |S| is the number of scans in the problem
Lemma 4 can be shown by substituting the coefficients into
(60), and showing that the coefficients of H(xi ) sum to −(S −
1), and the coefficients of H(xi , ais ) sum to 1. Examining the
values in lemma 4, we find (60) is convex (under the constraints
(52)–(57)). This in turn shows that (58) is convex.
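The bookkeeping behind Lemma 4 is easy to check numerically: per scan, the coefficients multiplying H(xi , ais ) should sum to 1, and across f and all blocks the coefficients multiplying H(xi ) should sum to −(S − 1). The short sketch below performs this check for an illustrative choice of γs (the values are mine).

# Sanity check of the Lemma 4 coefficients (64)-(68).
S = 3
gammas = [0.55, 0.0, 0.0]                    # illustrative choice (cf. section III)
eta = 1.0 - sum(gammas)                      # (59)

kf_x = kf_s = eta / (S + 1)                  # (64)
coeff_x = kf_x
for g in gammas:
    ks1_x = -eta / (S + 1)                   # (65)
    ks1_s = g + eta / (S + 1)                # (66)
    ks2_x = -(1.0 - g - 2 * eta / (S + 1))   # (67)
    ks2_s = 1.0 - g - 2 * eta / (S + 1)      # (68)
    coeff_x += ks1_x + ks2_x
    print("H(x,a) coefficient for this scan:", round(kf_s + ks1_s + ks2_s, 12))  # -> 1.0
print("H(x) coefficient:", round(coeff_x, 12), "expected", -(S - 1))             # -> -(S-1)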
C. Solution of convex energy

The convex free energy in (60) is of the form (43), so we propose a solution using PDCA. As discussed in section II-D, this is achieved by iterating (45), (47) and (48), commencing with λ_s = 0 ∀ s ∈ S. To begin, we note that if the gradient of a block h_i with respect to a subset of variables is zero (as discussed in [22], constraints for each block can be incorporated into h_i, so this condition also implies that no constraints are incorporated for the subset of variables in block h_i), then those updated dual variables in (48) will be zero.

The algorithm proceeds by repeatedly cycling through blocks h_{s,1} and h_{s,2} for each s ∈ S. The following lemmas provide algorithms for solving each block in turn; the proofs can be found in [54, app. C].

Lemma 5. Consider the problem to be solved in block (s, 1):

minimise  f([q^i(x^i)], [q_s^i(x^i, a_s^i)]) + h_{s,1}([q^i(x^i)], [q_s^i(x^i, a_s^i)]) + Σ_{i=1}^{n} Σ_{x^i} µ^i(x^i) q^i(x^i) + Σ_{τ∈S} Σ_{i=1}^{n} Σ_{x^i} Σ_{a_τ^i=0}^{m_τ} µ_τ^i(x^i, a_τ^i) q_τ^i(x^i, a_τ^i)  (69)

under the constraints (52)–(57), where (54)–(57) is applied only for scan s, and κ_{f,x} + κ_{s,1,x} = 0. The solution of this problem is given by:

q_s^i(x^i, a_s^i) = q_s^i(a_s^i) × exp{ [φ^i(x^i) + φ_s^i(x^i, a_s^i)] / (κ_{f,s} + κ_{s,1,s}) } / exp{ φ̃_s^i(a_s^i) },  (70)

where

φ^i(x^i) = log ψ^i(x^i) − µ^i(x^i),  (71)
φ_s^i(x^i, a_s^i) = log ψ_s^i(x^i, a_s^i) − µ_s^i(x^i, a_s^i),  (72)
φ̃_s^i(a_s^i) = log Σ_{x^i} exp{ [φ^i(x^i) + φ_s^i(x^i, a_s^i)] / (κ_{f,s} + κ_{s,1,s}) }.  (73)

The single scan marginal q_s^i(a_s^i) is the solution of the following sub-problem:

minimise  Σ_{i=1}^{n} Σ_{j=0}^{m_s} q_s^{i,j} log( q_s^{i,j} / w_s^{i,j} ) + β̃_s Σ_{j=1}^{m_s} q_s^{0,j} log q_s^{0,j} − γ̃_s Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}),  (74)

subject to (56)–(57) and the additional constraint Σ_{j=0}^{m_s} q_s^{i,j} = 1 ∀ i, derived from (53) and (55). In (74), β̃_s = β_s / (κ_{f,s} + κ_{s,1,s}), γ̃_s = γ_s / (κ_{f,s} + κ_{s,1,s}), and log w_s^{i,j} = φ̃_s^i(j). This sub-problem is studied in theorem 1. The update to λ is

λ_{s,1,x}^i(x^i) ≐ φ^i(x^i) − κ_{f,x} log q^i(x^i),  (75)
λ_{s,1,s}^i(x^i, a_s^i) ≐ φ_s^i(x^i, a_s^i) − κ_{f,s} log q_s^i(x^i, a_s^i),  (76)
q^i(x^i) = Σ_{a_s^i=0}^{m_s} q_s^i(x^i, a_s^i),  (77)

where ≐ denotes equality up to an additive constant. For other scans τ ∈ S, τ ≠ s, λ_{s,1,τ}^i(x^i, a_τ^i) = 0.

Lemma 6. The solution of block (s, 2):

minimise  f([q^i(x^i)], [q_s^i(x^i, a_s^i)]) + h_{s,2}([q^i(x^i)], [q_s^i(x^i, a_s^i)]) + Σ_{i=1}^{n} Σ_{x^i} µ^i(x^i) q^i(x^i) + Σ_{τ∈S} Σ_{i=1}^{n} Σ_{x^i} Σ_{a_τ^i=0}^{m_τ} µ_τ^i(x^i, a_τ^i) q_τ^i(x^i, a_τ^i),  (78)

under the constraints (52)–(54) (including (54) only for scan s) is:

q^i(x^i) ∝ exp{ [φ^i(x^i) + φ̃_s^i(x^i)] / (κ_{f,x} + κ_{f,s} + κ_{s,2,x} + κ_{s,2,s}) },  (79)

q_s^i(x^i, a_s^i) = q^i(x^i) × exp{ φ_s^i(x^i, a_s^i) / (κ_{f,s} + κ_{s,2,s}) } / exp{ φ̃_s^i(x^i) / (κ_{f,s} + κ_{s,2,s}) },  (80)

where

φ^i(x^i) = log ψ^i(x^i) − µ^i(x^i),  (81)
φ_s^i(x^i, a_s^i) = log ψ_s^i(x^i, a_s^i) − µ_s^i(x^i, a_s^i),  (82)
φ̃_s^i(x^i) = (κ_{f,s} + κ_{s,2,s}) log Σ_{a_s^i=0}^{m_s} exp{ φ_s^i(x^i, a_s^i) / (κ_{f,s} + κ_{s,2,s}) }.  (83)
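As a concrete illustration of (70)–(73) (and of the log-sum-exp in (83)), the following sketch computes the required quantities on a discretised state grid with NumPy. It is our own illustration, with assumed array shapes (states along axis 0, association events along axis 1), not code from the paper.

    import numpy as np

    def tilde_phi(phi_x, phi_xa, kappa):
        # Log-sum-exp over the state grid, as in (73): one value per event a_s^i.
        # phi_x : (X,) array of phi^i(x^i); phi_xa : (X, m+1) array of phi_s^i(x^i, a).
        # kappa : the scalar kappa_{f,s} + kappa_{s,1,s}.
        z = (phi_x[:, None] + phi_xa) / kappa
        zmax = z.max(axis=0)
        return zmax + np.log(np.exp(z - zmax).sum(axis=0))   # stable log-sum-exp

    def joint_from_marginal(q_a, phi_x, phi_xa, kappa):
        # Recover q_s^i(x^i, a_s^i) from the single scan marginal q_s^i(a_s^i), eq. (70).
        log_num = (phi_x[:, None] + phi_xa) / kappa
        return q_a[None, :] * np.exp(log_num - tilde_phi(phi_x, phi_xa, kappa)[None, :])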
input:  n, m ∈ N;  w^{i,j} ∀ i ∈ {0, ..., n}, j ∈ {0, ..., m};  γ ∈ [0, 1);  α ∈ (0.5, ∞) ∩ [γ, ∞);  β ∈ (0.5, ∞) ∩ [γ, ∞)
output: q^{i,j} ∀ i ∈ {0, ..., n}, j ∈ {0, ..., m}
κ := −1 − γ + α + β
Initialise:  y^{i,j} := 1 ∀ i ∈ {0, ..., n}, j ∈ {0, ..., m}
Iterate to convergence:
repeat
    x^{i,j} := ( w^{0,j} y^{0,j} + Σ_{i'} w^{i',j} y^{i',j} )^{−(1−γ)} × ( w^{0,j} y^{0,j} + Σ_{i'≠i} w^{i',j} y^{i',j} )^{−γ} × e^{κ}   ∀ i, j
    x^{i,0} := ( y^{i,0} )^{1/α − 1}   ∀ i
    x^{0,j} := ( w^{0,j} y^{0,j} + Σ_{i'} w^{i',j} y^{i',j} )^{−1}   ∀ j
    y^{i,j} := ( w^{i,0} x^{i,0} + Σ_{j'} w^{i,j'} x^{i,j'} )^{−(1−γ)} × ( w^{i,0} x^{i,0} + Σ_{j'≠j} w^{i,j'} x^{i,j'} )^{−γ} × e^{κ}   ∀ i, j
    y^{i,0} := ( w^{i,0} x^{i,0} + Σ_{j'} w^{i,j'} x^{i,j'} )^{−1}   ∀ i
    y^{0,j} := ( x^{0,j} )^{1/β − 1}   ∀ j
until sufficiently small change in y^{i,j}, y^{i,0}, y^{0,j}
Calculate outputs (q^{0,0} = 0):
    q^{i,j} := w^{i,j} y^{i,j} x^{0,j}   ∀ i, j
    q^{i,0} := w^{i,0} ( y^{i,0} )^{1/α}   ∀ i
    q^{0,j} := w^{0,j} ( y^{0,j} )^{1/(1−β)}   ∀ j

Fig. 4. Fractional BP algorithm for optimising single scan fractional free energy (74), where α is the coefficient of q^{i,0} log( q^{i,0} / w^{i,0} ) (i.e., α = 1 in this instance). Calculations marked ∀i or Σ_{i'} are over the range i ∈ {1, ..., n} (excluding i = 0), while those marked ∀j or Σ_{j'} are over the range j ∈ {1, ..., m} (excluding j = 0).
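A direct NumPy transcription of the listing in figure 4 is sketched below. It is our own rendering: the update order and the output expressions follow the reconstruction above and should be treated as assumptions where the source listing was garbled, and the paper's recommendation of a log-domain implementation is not applied here.

    import numpy as np

    def fractional_bp(w, gamma, alpha, beta, tol=1e-9, max_iter=10000):
        # w is an (n+1) x (m+1) array of weights w^{i,j}; row 0 holds false-alarm
        # weights w^{0,j}, column 0 holds missed-detection weights w^{i,0}.
        # The final output line assumes beta != 1.
        w = np.asarray(w, dtype=float)
        kappa = -1.0 - gamma + alpha + beta
        x = np.ones_like(w)
        y = np.ones_like(w)
        for _ in range(max_iter):
            y_old = y.copy()
            col = (w[:, 1:] * y[:, 1:]).sum(axis=0)              # sum over i' = 0..n
            x[1:, 1:] = col ** -(1 - gamma) \
                * (col - w[1:, 1:] * y[1:, 1:]) ** -gamma * np.exp(kappa)
            x[1:, 0] = y[1:, 0] ** (1.0 / alpha - 1.0)
            x[0, 1:] = col ** -1.0
            row = (w[1:, :] * x[1:, :]).sum(axis=1, keepdims=True)   # sum over j' = 0..m
            y[1:, 1:] = row ** -(1 - gamma) \
                * (row - w[1:, 1:] * x[1:, 1:]) ** -gamma * np.exp(kappa)
            y[1:, 0] = row[:, 0] ** -1.0
            y[0, 1:] = x[0, 1:] ** (1.0 / beta - 1.0)
            if np.max(np.abs(y - y_old)) < tol:
                break
        q = np.zeros_like(w)
        q[1:, 1:] = w[1:, 1:] * y[1:, 1:] * x[0, 1:]
        q[1:, 0] = w[1:, 0] * y[1:, 0] ** (1.0 / alpha)
        q[0, 1:] = w[0, 1:] * y[0, 1:] ** (1.0 / (1.0 - beta))
        return q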
input:  S;  ψ^i(x^i), ψ_s^i(x^i, a_s^i);  γ_s, β_s, ∆γ_s, ∆β_s
output: q^i(x^i), q_s^i(x^i, a_s^i), ψ̄_s^i(x^i, a_s^i)
Initialise:
    Set κ values according to lemma 4
    λ_{s,1,x}^i(x^i) := 0,  λ_{s,1,s}^i(x^i, a_s^i) := 0
    λ_{s,2,x}^i(x^i) := 0,  λ_{s,2,s}^i(x^i, a_s^i) := 0
Perform primal-dual coordinate ascent until convergence:
repeat
    for s ∈ S do
        Solve block (s, 1):
            µ_s(x^i) := Σ_{τ∈S\{s}} λ_{τ,1,x}^i(x^i) + Σ_{τ∈S} λ_{τ,2,x}^i(x^i)
            µ_s(x^i, a_s^i) := λ_{s,2,s}^i(x^i, a_s^i)
            Calculate λ_{s,1,x}^i(x^i), λ_{s,1,s}^i(x^i, a_s^i) using lemma 5
        Solve block (s, 2):
            µ_s(x^i) := Σ_{τ∈S} λ_{τ,1,x}^i(x^i) + Σ_{τ∈S\{s}} λ_{τ,2,x}^i(x^i)
            µ_s(x^i, a_s^i) := λ_{s,1,s}^i(x^i, a_s^i)
            Calculate λ_{s,2,x}^i(x^i), λ_{s,2,s}^i(x^i, a_s^i), q^i(x^i) and q_s^i(x^i, a_s^i) using lemma 6
    end
until sufficiently small change in λ_{s,1,x}^i(x^i), λ_{s,2,x}^i(x^i), λ_{s,1,s}^i(x^i, a_s^i), λ_{s,2,s}^i(x^i, a_s^i)
Calculate modified ψ̄_s^i(x^i, a_s^i) using theorem 3 (if required)

Fig. 5. PDCA algorithm for minimising convex free energy based on decomposition described in lemma 4.
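A purely structural sketch of the outer PDCA loop of figure 5 is given below; it is our own pseudocode-style rendering and not part of the paper. The callables solve_block_1 and solve_block_2 are hypothetical placeholders for the closed-form updates of lemmas 5 and 6, and scalars stand in for the dual-variable functions.

    def pdca(scans, solve_block_1, solve_block_2, n_sweeps=50):
        # Cycle through blocks (s,1) and (s,2), collecting the other blocks'
        # dual variables into mu before each block is solved.
        lam1 = {s: 0.0 for s in scans}   # stand-ins for lambda_{s,1,x}, lambda_{s,1,s}
        lam2 = {s: 0.0 for s in scans}   # stand-ins for lambda_{s,2,x}, lambda_{s,2,s}
        beliefs = None
        for _ in range(n_sweeps):
            for s in scans:
                mu = sum(lam1[t] for t in scans if t != s) + sum(lam2[t] for t in scans)
                lam1[s] = solve_block_1(s, mu)
                mu = sum(lam1[t] for t in scans) + sum(lam2[t] for t in scans if t != s)
                lam2[s], beliefs = solve_block_2(s, mu)
        return beliefs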
The update to λ for block (s, 2) is

λ_{s,2,x}^i(x^i) ≐ φ^i(x^i) − κ_{f,x} log q^i(x^i),  (84)
λ_{s,2,s}^i(x^i, a_s^i) ≐ φ_s^i(x^i, a_s^i) − κ_{f,s} log q_s^i(x^i, a_s^i).  (85)

For other scans τ ∈ S, τ ≠ s, λ_{s,2,τ}^i(x^i, a_τ^i) = 0.

Theorem 1. The iterative procedure in figure 4 converges to the minimum of the problem in (74), provided that β̃_s > 0.5, γ̃_s ∈ [0, 1), and a feasible interior solution exists.

This theorem is proven in [54, app. D]. Implementation of the algorithm can be challenging due to numerical underflow. This can be mitigated by implementing the updates in the log domain using well-known numerical optimisations for log-sum-exp [59, p. 844]. An alternative method for solving this form of problem based on Newton's method was provided in [45]; the iterative BP-like method in figure 4 is significantly faster in most cases.

Theorem 2. The iterative procedure in figure 5 converges to the minimum of the overall convex free energy, provided that weights are as given in lemma 4, γ_s ≥ 0, and Σ_{s∈S} γ_s < 1.

This theorem is a corollary of claim 8 in [22], recognising that the algorithms are an instance of this framework.

D. Modification of factors between scans

As discussed at the beginning of this section (and illustrated in figure 3), we propose solving a problem at scan S involving a recent history of scans of measurements s ∈ S, with fractional weights configured to give high accuracy in the newest scan, and using lower values in earlier scans. The main goal of retaining historical scans is to ensure that consistency constraints from past scans remain enforced.

Suppose that we solve the multiple scan problem at scan S. When we move to scan S' = S + 1, we will set γ_S = 0, this time using the larger value for γ_{S'}. Thus we seek to modify the problem parameters at time S to counteract the change of reducing γ_s to zero. This is analogous to the approximation that JPDA makes, approximating the posterior as the product of the marginals.

More generally, suppose we have been using weights γ_s, s ∈ S, and at the next time, we will change these to γ̄_s. Similarly, suppose that the coefficients of the terms involving q_s^{0,j} were β_s, and will be changed to β̄_s. The following theorem gives the modification to the problem parameters necessary to ensure that the solution of the problem remains unchanged.

Theorem 3. Let [q^i(x^i)], [q_s^i(x^i, a_s^i)] and [q_s^{i,j}] be the solution of the problem in (58) using fractional weights γ_s and β_s, s ∈ S. Suppose that the weights are changed to γ̄_s = γ_s + ∆γ_s and β̄_s = β_s + ∆β_s, and the problem parameters are changed as follows:

log ψ̄_s^i(x^i, a_s^i = j) = log ψ_s^i(x^i, j) + ∆γ_s [1 + log(1 − q_s^{i,j})] − ∆β_s [1 + log q_s^{0,j}]  (86)

for j > 0, and log ψ̄_s^i(x^i, 0) = log ψ_s^i(x^i, 0) remains unchanged. Then the solution of the modified problem, denoted [q̄^i(x^i)], [q̄_s^i(x^i, a_s^i)] and [q̄_s^{i,j}], is unchanged.

The proof of the theorem is in [54, app. E]. Figure 5 includes a step to incorporate these modifications.
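The modification (86) is a simple per-element adjustment of the log factors; the sketch below is our own illustration (array layout and names are ours) of applying it for a single target and scan.

    import numpy as np

    def modify_log_factors(log_psi, q_ij, q_0j, d_gamma, d_beta):
        # Apply the factor modification of theorem 3, eq. (86), for one scan s.
        # log_psi : (X, m+1) array of log psi_s^i(x^i, j), column 0 = missed detection.
        # q_ij    : (m+1,) association beliefs q_s^{i,j} for target i (index 0 unused).
        # q_0j    : (m+1,) false-alarm beliefs q_s^{0,j} (index 0 unused).
        # Only columns j > 0 are modified; column 0 is left unchanged.
        out = np.array(log_psi, dtype=float, copy=True)
        j = np.arange(1, log_psi.shape[1])
        out[:, j] += d_gamma * (1.0 + np.log(1.0 - q_ij[j])) \
                   - d_beta * (1.0 + np.log(q_0j[j]))
        return out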
E. Uses and limitations of proposed method
The proposed method seeks to estimate marginal distributions of target states. This provides a complete summary of the information available when considering each target separately, and is useful, for example, when seeking to provide a confidence region for the target location, or when deciding whether it is necessary to execute sensor actions which will provide clarifying information. The experiments in the following section demonstrate that the proposed methods provide a scalable approach for solving problems of this type, addressing limitations experienced using existing methods.

In some tracking problems, a particular type of uncertainty arises, in which multiple modes appear in the joint distribution which essentially correspond to exchanges of target identity; the coalescence problem in JPDA is a well-known example of this (e.g., [60]). In such instances, multiple modes will appear in the estimates of the marginal distributions; this is by design, and is a correct summary of the uncertainty which exists. In these cases, extracting point estimates from marginal distributions is not straight-forward, but can be performed using methods such as the variational minimum mean optimal subpattern assignment (VMMOSPA) estimator [61]. Alternatively, if a point estimate is all that is required, MAP-based methods can be used. Likewise, MF or hybrid MF-BP methods (e.g., [20], applied to tracking in [38], [39]) tend to provide good estimates of a particular mode (and hence point estimates), at the expense of not characterising the full multi-modal uncertainty which exists (see [39]).

Past experiments (e.g., [32]) have shown that the accuracy of the beliefs provided by BP is highest when SNR is low, e.g., high false alarm rate and/or low probability of detection. Conversely, accuracy is lowest in high SNR conditions, e.g., low false alarm rate, high probability of detection. As demonstrated in the next section, this can now be mitigated through the use of FFE. This behaviour is complementary to traditional solution techniques, which perform very well in high SNR conditions (when ambiguity is the least, permitting tractable, exact solution) but fail in low SNR conditions where many targets are interdependent.

IV. EXPERIMENTS

The proposed method is demonstrated through a simulation which seeks to estimate the marginal distribution of several targets using bearings only measurements. The region of interest is the square [−100, 100]^2 ⊂ R^2. Tracks are initialised using a single accurate bearing measurement from one sensor, corrupted by Gaussian noise with 0.1° standard deviation (e.g., as may be provided if there is an accurate, low false alarm rate sensor providing bearing measurements from a single location); particle filter representations of each track are initialised by randomly sampling from the posterior calculated by combining these measurements with a uniform prior on the region of interest. The proposed algorithm is then utilised to refine these estimates using measurements from the remaining sensor positions. Target-originated bearing measurements are corrupted with 1° standard deviation Gaussian noise. False alarms are uniform over the bearing range covering the region of interest.

We compare to two variants of JPDA, both of which maintain a particle representation of each target location, and utilise BP to approximate data association probabilities. The first variant (which we refer to as JPDA-PBP, with 'P' denoting parallel) processes measurements from different sensors in parallel, solving each single-sensor problem once, as in [43] (this is better suited to maintaining track rather than initial localisation). The second variant (JPDA-BP) approaches multi-sensor data by sequentially processing individual sensors, similar to the IC-TOMB/P approach in [43].

For the proposed method, we compare:
1) γ_s = β_s = 1/(|S|+1) for each scan, weights according to lemma 4, and solving using figure 5, not utilising sequential modification (i.e., the final line of the algorithm); we refer to this as the convex variational (CV) algorithm.
2) The method using the sequential modification of section III-D, introducing a new scan at each step with γ_s = 0.55, for past scans setting γ_s = 0, and with β_s = κ_{f,s} + κ_{s,1,s}, again solving using figure 5; we refer to this as the CV-sequential (CVS) algorithm.

A. Illustrative example

The result in figure 6 illustrates the behaviour on a simplified version of the problem, with three sensors, two targets, and a low false alarm rate (10^{-6}). The top row (a)-(e) shows the results for different algorithms utilising the first two sensors, where the first sensor initialises the distribution for each target (drawing particles along each bearing line), and the second permits triangulation. The sensor locations are shown as triangles, while true target locations are shown as crosses. Measurements are illustrated as dotted grey lines. The marginal distributions of the two targets, as estimated by the various algorithms studied, are shown in the background image.
• Due to the low false alarm rate, JPDA-BP (shown in (a)) essentially provides a MAP association. The marginal distribution estimates indicate high confidence for each target, in an incorrect (ghost) location, assigning near-zero probability density to the true target location.
• JPDA-FBP(0.55) (shown in (b)) utilises the fractional BP method, based on figure 4 (for which convergence is proven as theorem 1), with γ = 0.55. The figure demonstrates correct characterisation of the uncertainty in the problem, with each marginal distribution estimate showing significant probability density in both the true target location and the ghost location.
• Since tracks are initialised using sensor 1, and updated using sensor 2, the two sensor problem is a single scan association problem, and JPDA-PBP and MSBP (shown in (c) and (d)) are identical to JPDA-BP, similarly indicating high confidence for each target in a ghost location, and assigning near-zero probability density to the true target location.
• Similarly, CV (shown in (e)) is identical to JPDA-FBP, and again correctly characterises the uncertainty in the problem.
[Figure 6: background images of the estimated marginal distributions for the two targets. Top row (two sensors): (a) JPDA-BP, (b) JPDA-FBP(0.55), (c) JPDA-PBP, (d) MSBP, (e) CV. Bottom row (three sensors): (f) JPDA-BP, (g) JPDA-FBP(0.55), (h) JPDA-PBP, (i) MSBP, (j) CV.]
Fig. 6. Example problem involving two targets and two or three sensors. Targets are marked as '+', and sensors as '△', and bearing measurements are illustrated as dotted grey lines. Tracks are initialised with measurements from sensor 1, and updated with two measurements from sensor 2 (top row, (a)-(e)). The bottom row, (f)-(j), incorporates an additional scan from sensor 3, in which a single measurement was received. The background image shows marginal distribution estimates for the two tracks overlaid.
The bottom row (f)-(j) shows the results for the same algorithms introducing a third sensor, which receives a measurement
on one of the two targets.
• Utilising the measurement from the third sensor, JPDA-BP (shown in (f)) correctly localises one of the two targets, but the second remains invalid, indicating high confidence in a ghost location, and assigning near-zero probability to the true target location.
• JPDA-FBP is shown in (g) to correctly localise one of the two targets, but a bimodal distribution remains for the second. This is unnecessary: localisation of the first target effectively confirms that the upper measurement from sensor 2 belongs to that target, which in turn confirms that the lower measurement belongs to the other target. Thus the distribution exhibits unnecessarily high uncertainty as a consequence of not enforcing past association feasibility constraints.
• Because data from sensor 2 is not used when interpreting data from sensor 3, JPDA-PBP (shown in (h)) still indicates high confidence in a ghost location for both targets, and near-zero probability in the true location.
• By simultaneously optimising over multiple scans of data, MSBP (shown in (i)) is able to recover from the incorrect solution in (d) and arrive at the correct solution.
• Likewise, by retaining association feasibility constraints from previous scans, CV (shown in (j)) is able to utilise the confirmation of the location of one target to resolve the bimodal uncertainty in the other target, correctly localising both targets.
Of the methods shown, JPDA-BP, JPDA-PBP and MSBP
exhibit false confidence in ghost locations in figures (a), (c),
(d), (f) and (h), and JPDA-FBP fails to localise the distribution
to the extent possible in (g). CV is the only method shown
that is able to correctly characterise the uncertainty present in
both cases.
B. Quantitative analysis
As illustrated through the example in figure 6, the goal in
this work is to produce a faithful estimate of the marginal
probability distributions. This is quite different to problems in
which the aim is to produce a point estimate of target location,
for which the standard performance measure is mean square
error (MSE). In 200 Monte Carlo trials of the scenario in figure
6(a)-(e), JPDA-BP, JPDA-PBP and MSBP resolve uncertainty
to a single mode for each target, which is correct 76% of the
time, and incorrect (as in figure 6(a), (c) and (d)) 24% of the
time. This incorrect resolution of uncertainty could lead to dire
outcomes if the decision is made to take an action based on the
incorrect characterisation of uncertainty (which indicates high
confidence in a single, incorrect mode, as in figure 6(a), (c) and
(d)), rather than wait until further information is obtained (as
the characterisation in figure 6(b) and (e) would direct). Since
MSE is a measure only of the proximity of the single point
estimate to the true location, it does not capture the correctness
of the uncertainty characterised in the marginal probability
distribution, and it is not an adequate measure in this class of
problem.
Instead, we utilise two performance measures, which directly
measure occurrences of the undesirable outcomes in figure
6(a), (c), (d), (f), (g) and (h), and which are known to behave
consistently in the presence of multi-modal uncertainty. The
performance measures are the entropy of the beliefs produced
by each method, and the high probability density (HPD) value
in which the true location lies. The HPD value is defined as
the total probability under a distribution that is more likely
than a given point; e.g., given a point x∗ (the true location of
the target) and a distribution p, the HPD value is
HPD(p, x*) = ∫_{x : p(x) ≥ p(x*)} p(x) dx.  (87)
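For a particle or gridded representation of p, (87) can be approximated directly; the short sketch below is our own illustration (not from the paper), assuming a set of weighted samples with associated density values.

    import numpy as np

    def hpd_value(weights, density, density_at_truth):
        # Approximate HPD(p, x*) of eq. (87) from weighted samples of p.
        # weights          : (N,) sample weights (need not be normalised).
        # density          : (N,) value of p(x) at each sample.
        # density_at_truth : scalar p(x*) at the true target location.
        w = np.asarray(weights, float)
        w = w / w.sum()
        return float(w[np.asarray(density, float) >= density_at_truth].sum())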
[Figure 7: three example distributions with the true target location x* falling at points where HPD(p, x*) = 0.21, 0.79 and 0.98 respectively.]

Fig. 7. Example of HPD value for different true target locations (x*). HPD is the area of probability that is more likely than a given point, i.e., the area of the shaded region in each diagram.

Three examples of this are illustrated in figure 7; in each case, the true target location is marked as x*, and the HPD value is the area of the shaded region.
Thus if HPD(p, x*) ≈ 1 then x* is in the distant tails of p(x) and is assigned very low likelihood, as in figure 6(a), (c), and (d), while if HPD(p, x*) = 0 then x* falls on the most likely value of p(x) (i.e., it is a MAP estimate). Under mild conditions, it can be shown that if x* ∼ p(x) (i.e., if p(x) correctly characterises the uncertainty in x*) then HPD(p, x*) ∼ U[0, 1] [62, section 9.7.2]. If the distribution of HPD values is concentrated at the lower end, then the beliefs generated are conservative, i.e., they overestimate uncertainty in such a way that the true value rarely falls in the tails. If the distribution of HPD values is concentrated at the higher end, then the beliefs are non-conservative, i.e., they underestimate uncertainty, and the true value is often falling in the tails.

[Figure 8: CDF of the HPD value of the true location (top) and of the belief entropy (bottom) for JPDA-BP, JPDA-PBP, CV, CVS(0.55) and MSBP on the twelve sensor problem; marked points A: 12.4%, B: 10.6%, C: 8.26%.]

Fig. 8. CDF of HPD value of true location and entropy for beliefs of each target over 200 Monte Carlo trials. Points A, B and C mark the percent of cases in which the true target location is less likely than 99% of the marginal distribution estimates produced by MSBP, JPDA-BP and JPDA-PBP, i.e., it is in the tails in a similar manner to figure 6(a), (c), (d), (f) and (h).
The entropy of the distribution characterises its uncertainty; for example, the entropy of a multivariate Gaussian distribution with covariance P is 0.5 log |2πeP|. Entropy is often preferred over variance for multi-modal distributions as it is not affected by the distance between well-spaced modes (whereas the distance between the modes will dominate the variance). The HPD value is not sufficient, e.g., since an estimator based purely on the prior distribution should produce a uniform HPD distribution, but this would have a much higher entropy than a solution which utilises all available measurement data. Likewise entropy is not sufficient, since one could devise a method of approximating beliefs which reports an arbitrarily small uncertainty; this would report large HPD values. There is not a single measure which adequately characterises performance in this class of problem. This pair of values is necessary to capture the undesirability of the behaviour in figure 6(a), (c) and (d) (assigning near-zero likelihood to the true target location, and producing a HPD(p, x*) ≈ 1), as well as the undesirability of the behaviour in figure 6(g) (not resolving uncertainty when adequate information exists to do so, and thus increasing entropy).

For the quantitative experiment, twelve sensors are spaced equally around a circle with radius 100 units. False alarms follow a Poisson distribution with one per scan on average, and targets are detected with probability 0.9. The number of targets is Poisson distributed with expected value of twelve (but in each simulation the true number is known by the estimator; the method can be extended to accommodate an unknown number of targets using [10]).

The results in figures 8–9 show the cumulative distribution function (CDF) of the HPD value (top) and entropy (bottom) for the various methods. Figure 8 shows that MSBP produces significantly non-conservative results. Although the entropies of the MSBP beliefs are significantly smaller than the other methods, the point labelled as 'A' in the top figure reveals that the true location is less likely than 99% of the belief for 12.4% of targets (treating each target in each Monte Carlo simulation as a sample). This indicates that if the MSBP belief is used to construct a 99% confidence region for the location of a particular target, the target does not lie within the region 12.4% of the time. An instance of this is illustrated in figure 6(d), where the beliefs effectively rule out the location of the true target. These results may be useful in applications where smaller entropy is desirable, but when consistency and accuracy of the beliefs is essential, it is unacceptable.

JPDA-BP and JPDA-PBP also produce non-conservative results, since the CDF of the HPD value consistently lies below the x = y line (i.e., the CDF of a uniform distribution). The points labelled as 'B' and 'C' reveal that the true location is less likely than 99% of the belief for 10.6% of targets for JPDA-BP, and 8.26% of targets for JPDA-PBP. Again, this indicates that if the respective beliefs are used to construct 99% confidence regions for a particular target, the target does not lie within the region for 10.6% or 8.26% of the time, as illustrated in figures 6(a) and (c).

The convex variational (CV) method is shown to produce conservative beliefs, since the CDF of the HPD value consistently lies above the x = y line. The cost of this conservatism is beliefs with higher entropy; in many applications this may be preferable. The CVS method with γ_s = 0.55 is also shown to significantly reduce instances where the target is in a very low likelihood area of the belief. By tuning the value of γ_s a trade-off between conservatism and entropy can be obtained.
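As a concrete reference for the entropy measure used here, the sketch below (our own, not from the paper) computes the Gaussian entropy 0.5 log |2πeP| quoted above and a simple discrete entropy for a gridded belief.

    import numpy as np

    def gaussian_entropy(P):
        # Entropy of a multivariate Gaussian with covariance P: 0.5*log|2*pi*e*P|.
        P = np.atleast_2d(np.asarray(P, float))
        d = P.shape[0]
        return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(P)[1])

    def discrete_entropy(belief, cell_area=1.0):
        # Entropy of a gridded belief (probability mass per cell), in nats.
        # The correction log(cell_area) makes the value comparable with the
        # differential entropy of the Gaussian expression above.
        p = np.asarray(belief, float).ravel()
        p = p / p.sum()
        nz = p > 0
        return float(-(p[nz] * np.log(p[nz])).sum() + np.log(cell_area))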
[Figure 9: CDF of the HPD value of the true location (top) and of the belief entropy (bottom) for the heuristic algorithms, comparing CV, CVH, CVS(0.55), CVSH(0.55) and CVSH1(0.55).]

Fig. 9. CDF of HPD value of true location and entropy for beliefs of each target over 200 Monte Carlo trials.
A large family of heuristic methods can be developed by employing the algorithm in figure 5 with weights that do not ensure that each component (62), (63) is convex. We consider an instance of this, which sets κ_{f,x} = κ_{f,s} = 1, κ_{s,1,x} = −1, and κ_{s,1,s} = κ_{s,2,x} = κ_{s,2,s} = 0. As long as κ_{f,x} + Σ_{s∈S} [κ_{s,1,x} + κ_{s,2,x}] = −|S| + 1 and κ_{f,s} + κ_{s,1,s} + κ_{s,2,s} = 1 ∀ s ∈ S, the original objective remains unchanged, and if the algorithm converges, the result is optimal (assuming convexity). Experimentally, convergence appears to be both reliable and rapid, though not guaranteed. In the rare case that convergence is not obtained, weights may be reverted (immediately or via a homotopy) to the form in lemma 4, for which convergence is guaranteed, but slower in practice. With γ_s = 1 ∀ s ∈ S, this can be seen to be equivalent to multiple scan BP (which is non-convex since Σ_{s∈S} γ_s > 1).
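The two consistency conditions above are easy to check mechanically; the snippet below is our own helper (names are ours) for validating a candidate set of heuristic weights, including the instance κ_{f,x} = κ_{f,s} = 1, κ_{s,1,x} = −1 quoted above.

    def valid_heuristic_weights(k_fx, k_fs, k_s1x, k_s1s, k_s2x, k_s2s, tol=1e-12):
        # Check the two conditions that leave the original objective unchanged.
        # The per-scan coefficients are given as equal-length lists (one per scan).
        S = len(k_s1x)
        cond_x = abs(k_fx + sum(a + b for a, b in zip(k_s1x, k_s2x)) - (-S + 1)) < tol
        cond_s = all(abs(k_fs + a + b - 1.0) < tol for a, b in zip(k_s1s, k_s2s))
        return cond_x and cond_s

    # The heuristic instance from the text, for three scans:
    S = 3
    assert valid_heuristic_weights(1.0, 1.0, [-1.0] * S, [0.0] * S, [0.0] * S, [0.0] * S)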
Figure 9 shows the results of the heuristic approach. The
algorithms with guaranteed convergence are marked as CV and
CVS(0.55); the heuristic equivalents are CVH and CVSH(0.55)
respectively. CVSH1(0.55) uses a single backward-forward
sweep after introducing each new sensor. The slight difference
between the method with guaranteed performance and the
heuristic method is caused by the different values of βs used
(since we must ensure that β̃s > 0.5; in each case we select
βs to set β̃s = 1). The results demonstrate that very similar
performance can be obtained with a single sweep.
V. CONCLUSION

This paper has shown how the BP data association method of [32] can be extended to multiple scans in a manner which preserves convexity, using convex optimisation alongside a convergent, BP-like method for optimising the FFE. In doing so, it was demonstrated that conservative beliefs can be obtained, whereas the estimates provided by MSBP and JPDA-BP are significantly non-conservative, and can provide beliefs which effectively rule out the true target location a significant proportion of the time. The result is a scalable, reliable algorithm for estimating marginal probability distributions using multiple scans.

ACKNOWLEDGEMENTS

The authors would like to thank the anonymous reviewers for suggestions that helped to clarify many points.
REFERENCES
[1] S. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking
Systems. Norwood, MA: Artech House, 1999.
[2] D. B. Reid, “An algorithm for tracking multiple targets,” IEEE Trans.
Autom. Control, vol. AC-24, no. 6, pp. 843–854, December 1979.
[3] T. Kurien, “Issues in the design of practical multitarget tracking
algorithms,” in Multitarget-Multisensor Tracking: Advanced Applications,
Y. Bar-Shalom, Ed. Norwood, MA: Artech-House, 1990, pp. 43–83.
[4] S. S. Blackman, “Multiple hypothesis tracking for multiple target
tracking,” IEEE Aerospace and Electronic Systems Magazine, vol. 19,
pp. 5–18, Jan. 2004.
[5] T. Fortmann, Y. Bar-Shalom, and M. Scheffe, “Sonar tracking of multiple
targets using joint probabilistic data association,” IEEE J. Ocean. Eng.,
vol. 8, no. 3, pp. 173–184, Jul 1983.
[6] L. Y. Pao, “Multisensor multitarget mixture reduction algorithms for
tracking,” Journal of Guidance, Control, and Dynamics, vol. 17, no. 6,
pp. 1205–1211, 1994.
[7] J. Vermaak, S. Maskell, and M. Briers, “A unifying framework for multitarget tracking and existence,” in Proc. 8th International Conference on
Information Fusion, July 2005.
[8] P. Horridge and S. Maskell, “Real-time tracking of hundreds of targets
with efficient exact JPDAF implementation,” in Proc. 9th International
Conference on Information Fusion, July 2006.
[9] D. Musicki and R. Evans, “Multiscan multitarget tracking in clutter
with integrated track splitting filter,” IEEE Trans. Aerosp. Electron. Syst.,
vol. 45, no. 4, pp. 1432–1447, October 2009.
[10] J. L. Williams, “Marginal multi-Bernoulli filters: RFS derivation of MHT,
JIPDA and association-based MeMBer,” IEEE Trans. Aerosp. Electron.
Syst., vol. 51, no. 3, July 2015.
[11] J. Roecker, “Multiple scan joint probabilistic data association,” IEEE
Trans. Aerosp. Electron. Syst., vol. 31, no. 3, pp. 1204–1210, Jul. 1995.
[12] K. Pattipati, R. Popp, and T. Kirubarajan, “Survey of assignment
techniques for multitarget tracking,” in Multitarget-Multisensor Tracking:
Applications and Advances, Y. Bar-Shalom and W. D. Blair, Eds.
Norwood, MA: Artech-House, 2000, vol. 3, ch. 2, pp. 77–159.
[13] A. B. Poore and S. Gadaleta, “Some assignment problems arising from
multiple target tracking,” Mathematical and Computer Modelling, vol. 43,
no. 9–10, pp. 1074–1091, 2006.
[14] M. J. Wainwright and M. I. Jordan, “Graphical models, exponential
families, and variational inference,” Foundations and Trends in Machine
Learning, vol. 1, no. 1–2, pp. 1–305, 2008.
[15] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles
and Techniques. Cambridge, MA, USA: MIT Press, 2009.
[16] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Francisco,
CA: Morgan Kaufmann, 1988.
[17] J. S. Yedidia, W. T. Freeman, and Y. Weiss, “Understanding belief
propagation and its generalizations,” Exploring artificial intelligence in
the new millennium, pp. 239–269, 2003.
[18] T. S. Jaakkola, “Tutorial on variational approximation methods,” in
Advanced mean field methods: theory and practice. MIT Press, 2000,
pp. 139–160.
[19] G. E. Kirkelund, C. N. Manchón, L. P. Christensen, E. Riegler, and B. H.
Fleury, “Variational message-passing for joint channel estimation and
decoding in MIMO-OFDM,” in Proc. 2010 Global Telecommunications
Conference, 2010.
[20] E. Riegler, G. E. Kirkelund, C. N. Manchón, M.-A. Badiu, and B. H.
Fleury, “Merging belief propagation and the mean field approximation:
A free energy approach,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp.
588–602, 2013.
[21] M. Wainwright, T. Jaakkola, and A. Willsky, “A new class of upper
bounds on the log partition function,” IEEE Trans. Inf. Theory, vol. 51,
no. 7, pp. 2313–2335, July 2005.
[22] T. Hazan and A. Shashua, “Norm-product belief propagation: Primal-dual
message-passing for approximate inference,” IEEE Trans. Inf. Theory,
vol. 56, no. 12, pp. 6294–6316, December 2010.
[23] R. McEliece, D. MacKay, and J.-F. Cheng, “Turbo decoding as an
instance of Pearl’s “belief propagation” algorithm,” IEEE J. Sel. Areas
Commun., vol. 16, no. 2, pp. 140–152, Feb 1998.
[24] L. Chen, M. J. Wainwright, M. Çetin, and A. S. Willsky, “Multitargetmultisensor data association using the tree-reweighted max-product
algorithm,” in Proc SPIE Signal Processing, Sensor Fusion, and Target
Recognition, vol. 5096, August 2003, pp. 127–138.
[25] L. Chen, M. Çetin, and A. S. Willsky, “Distributed data association
for multi-target tracking in sensor networks,” in Proc. 8th International
Conference on Information Fusion, July 2005.
[26] L. Chen, M. J. Wainwright, M. Çetin, and A. S. Willsky, “Data association
based on optimization in graphical models with application to sensor
networks,” Mathematical and Computer Modelling, vol. 43, no. 9–10,
pp. 1114–1135, 2006.
[27] A. Gning and L. Mihaylova, “Dynamic clustering and belief propagation
for distributed inference in random sensor networks with deficient links,”
in Proc. 12th International Conference on Information Fusion, July 2009,
pp. 656–663.
[28] M. Chertkov, L. Kroc, and M. Vergassola, “Belief propagation and beyond
for particle tracking,” arXiv, e-print arXiv:0806.1199v1, June 2008.
[29] B. Huang and T. Jebara, “Approximating the permanent with belief
propagation,” arXiv, e-print arXiv:0908.1769v1, August 2009.
[30] J. L. Williams and R. A. Lau, “Data association by loopy belief
propagation,” in Proc. 13th International Conference on Information
Fusion, Edinburgh, UK, July 2010.
[31] M. Chertkov, L. Kroc, F. Krzakala, M. Vergassola, and L. Zdeborov,
“Inference in particle tracking experiments by passing messages between
images,” Proceedings of the National Academy of Sciences, vol. 107,
no. 17, pp. 7663–7668, 2010.
[32] J. L. Williams and R. A. Lau, “Approximate evaluation of marginal
association probabilities with belief propagation,” IEEE Trans. Aerosp.
Electron. Syst., vol. 50, no. 4, October 2014.
[33] P. O. Vontobel, “The Bethe permanent of a non-negative matrix,” in Proc.
48th Allerton Conference on Communication, Control, and Computing,
Urbana-Champaign, IL, September/October 2010, pp. 341–346.
[34] J. L. Williams and R. A. Lau, “Convergence of loopy belief propagation
for data association,” in Proc. 6th International Conference on Intelligent Sensors, Sensor Networks and Information Processing, Brisbane,
Australia, December 2010, pp. 175–180.
[35] P. Vontobel, “The Bethe permanent of a nonnegative matrix,” IEEE Trans.
Inf. Theory, vol. 59, no. 3, pp. 1866–1901, 2013.
[36] K. P. Murphy, Y. Weiss, and M. I. Jordan, “Loopy belief propagation for
approximate inference: An empirical study,” in Proc. 15th Conference
on Uncertainty in Artificial Intelligence, 1999, pp. 467–476.
[37] R. A. Lau and J. L. Williams, “Tracking a coordinated group using
expectation maximisation,” in Proc. 8th International Conference on
Intelligent Sensors, Sensor Networks and Information Processing, Melbourne, Australia, April 2013.
[38] R. D. Turner, S. Bottone, and B. Avasarala, “A complete variational
tracker,” in Advances in Neural Information Processing Systems 27, 2014,
pp. 496–504.
[39] R. A. Lau and J. L. Williams, “A structured mean field approach for
existence-based multiple target tracking,” in Proc. 19th International
Conference on Information Fusion, July 2016.
[40] A. S. Rahmathullah, R. Selvan, and L. Svensson, “A batch algorithm for
estimating trajectories of point targets using expectation maximization,”
IEEE Trans Signal Process, vol. 64, no. 18, pp. 4792–4804, Sept 2016.
[41] H. Lan, Q. Pan, F. Yang, S. Sun, and L. Li, “Variational Bayesian
approach for joint multitarget tracking of multiple detection systems,” in
Proc 19th International Conference on Information Fusion, July 2016,
pp. 1260–1267.
[42] F. Meyer, P. Braca, P. Willett, and F. Hlawatsch, “Scalable multitarget
tracking using multiple sensors: A belief propagation approach,” in Proc.
18th International Conference on Information Fusion, July 2015.
[43] ——, “A scalable algorithm for tracking an unknown number of targets
using multiple sensors,” IEEE Trans Signal Process, vol. 65, no. 13, pp.
3478–3493, July 2017.
[44] A. Frank, P. Smyth, and A. Ihler, “Beyond MAP estimation with the
track-oriented multiple hypothesis tracker,” IEEE Trans. Signal Process.,
vol. 62, no. 9, pp. 2413–2423, May 2014.
[45] J. L. Williams, “Interior point solution of fractional Bethe permanent,”
in Proc. IEEE Workshop on Statistical Signal Processing, Gold Coast,
Australia, July 2014.
[46] M. Chertkov and A. B. Yedidia, “Approximating the permanent with
fractional belief propagation,” Journal of Machine Learning Research,
vol. 14, pp. 2029–2066, 2013.
[47] S. L. Lauritzen, Graphical Models. Oxford, UK: Clarendon Press, 1996,
vol. 17.
[48] R. E. Kalman, “A new approach to linear filtering and prediction
problems,” Transactions of the ASME Journal of Basic Engineering,
vol. 82, no. Series D, pp. 35–45, 1960.
[49] L. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2,
pp. 257–286, Feb 1989.
[50] A. Viterbi, “Error bounds for convolutional codes and an asymptotically
optimum decoding algorithm,” IEEE Trans. Inf. Theory, vol. 13, no. 2,
pp. 260–269, April 1967.
[51] T. M. Cover and J. A. Thomas, Elements of Information Theory. New
York, NY: John Wiley and Sons, 1991.
[52] A. Globerson and T. Jaakkola, “Approximate inference using conditional entropy decompositions,” Journal of Machine Learning Research:
Workshop and Conference Proceedings, vol. 2, pp. 131–138, 2007.
[53] H. A. Bethe, “Statistical theory of superlattices,” Proceedings of the
Royal Society of London, Series A—Mathematical and Physical Sciences,
vol. 150, no. 871, pp. 552–575, 1935.
[54] J. L. Williams and R. A. Lau, “Multiple scan data association by convex
variational inference (extended version),” arXiv, e-print arXiv:1607.07942,
January 2018.
[55] D. Musicki and R. J. Evans, “Joint integrated probabilistic data association: JIPDA,” IEEE Trans. Aerosp. Electron. Syst., vol. 40, no. 3, pp.
1093–1099, July 2004.
[56] R. T. Rockafellar, Convex Analysis. Princeton, NJ: Princeton University
Press, 1970.
[57] P. Tseng, “Dual coordinate ascent methods for non-strictly convex
minimization,” Mathematical Programming, vol. 59, no. 1–3, pp. 231–
247, 1993.
[58] A. L. Yuille, “CCCP algorithms to minimize the Bethe and Kikuchi
free energies: Convergent alternatives to belief propagation,” Neural
Computation, vol. 14, no. 7, pp. 1691–1722, June 2002.
[59] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery,
Numerical Recipes: The Art of Scientific Computing, 3rd ed. New York,
NY, USA: Cambridge University Press, 2007.
[60] H. A. Blom and E. A. Bloem, “Probabilistic data association avoiding
track coalescence,” IEEE Trans. Autom. Control, vol. 45, no. 2, pp.
247–259, February 2000.
[61] J. L. Williams, “An efficient, variational approximation of the best fitting
multi-Bernoulli filter,” IEEE Trans. Signal Process., vol. 63, no. 1, pp.
258–273, January 2015.
[62] S. Davey, N. Gordon, I. Holland, M. Rutten, and J. Williams, Bayesian
Methods in the Search for MH370, ser. SpringerBriefs in Electrical and
Computer Engineering. Singapore: Springer, 2016.
[63] S. Julier and J. Uhlmann, “Unscented filtering and nonlinear estimation,”
Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar. 2004.
[64] N. Gordon, D. J. Salmond, and A. Smith, “Novel approach to non-linear
and non-Gaussian Bayesian state estimation,” IEE Proceedings F: Radar
and Signal Processing, vol. 140, pp. 107–113, 1993.
[65] D. P. Bertsekas, Nonlinear Programming, 2nd ed. Belmont, MA: Athena
Scientific, 1999.
[66] E. Kohlberg and J. W. Pratt, “The contraction mapping approach to
the Perron-Frobenius theory: Why Hilbert’s metric?” Mathematics of
Operations Research, vol. 7, no. 2, pp. 198–210, 1982.
APPENDIX A
ASSOCIATION HISTORY MODEL
In section II-B, the problem of interest is formulated to incorporate both continuous states x^i and discrete association hypothesis variables a_s^i. Alternatively, we may formulate the problem by defining association history hypotheses a_S^i = (a_1^i, ..., a_S^i), which detail which measurement corresponds to the target in each scan. The role of the variational algorithm is to determine the marginal association distribution p^i(a_S^i) for each target. Calculation of the kinematic distribution conditioned on an association hypothesis, p^{i,a_S^i}(x^i), can utilise well-studied methods such as the Kalman filter (KF) [48], extended Kalman filter (EKF), unscented Kalman filter (UKF) [63], or the particle filter (PF) [64].
Definition 1. A global association history hypothesis (or global hypothesis for short) is a hypothesis for the origin of every measurement received so far, i.e., for each measurement it specifies from which target it originated, or if it was a false alarm.

Definition 2. A single target association history hypothesis (or single target hypothesis for short) is a sequence of time-stamped measurements that are hypothesised to correspond to the same target.

A global hypothesis for the scans in set S may be represented as a_S = (a_S^1, ..., a_S^n). Each single target hypothesis is equipped with a hypothesis weight w^{i,a_S^i} (utilised in the calculation of the probability of the global hypotheses), and the target state probability density function (PDF) conditioned on the hypothesis, p^{i,a_S^i}(x^i). Under this model, prediction steps may be easily introduced to incorporate a stochastic state model.

We denote by A_S^i the set of feasible single-target hypotheses for target i ∈ {1, ..., n} in the scans in S. The set of all feasible global hypotheses (i.e., those in which no two targets utilise the same measurement) can be written as:

A_S = { (a_S^1, ..., a_S^n) | a_S^i = (a_1^i, ..., a_S^i) ∈ A_S^i,  a_s^i ≠ a_s^j ∀ s, i, j s.t. i ≠ j, a_s^i ≠ 0 }.  (88)

The joint distribution of states and hypotheses conditioned on measurements may be written as:

p(X, a_S, b_S | Z_S) ∝ { Π_{i=1}^{n} w^{i,a_S^i} p^{i,a_S^i}(x^i) } { Π_{s∈S} ψ_s(a_s, b_s) }.  (89)

Marginalising the kinematic states, the probability of a global hypothesis a_S = (a_S^1, ..., a_S^n) ∈ A_S (and the corresponding b_S) can be written in the form:

p(a_S, b_S | Z_S) ∝ { Π_{i=1}^{n} w^{i,a_S^i} } { Π_{s∈S} ψ_s(a_s, b_s) }.  (90)

The joint PDF of all targets can be represented through a total probability expansion over all global hypotheses:

p(X) = Σ_{a_S ∈ A_S} p(a_S) Π_{i=1}^{n} p^{i,a_S^i}(x^i),  (91)

where, for notational simplicity, we drop the explicit conditioning on Z_S from p(X) and p(a_S). It is of interest to obtain the marginal distributions of global hypothesis probabilities:

p^i(a_S^i) = Σ_{ã_S = (ã_S^1, ..., ã_S^n) ∈ A_S | ã_S^i = a_S^i} p(ã_S).  (92)

From these marginal association distributions, we can find the marginal state PDF of each target:

p^i(x^i) = Σ_{a_S^i ∈ A_S^i} p^i(a_S^i) p^{i,a_S^i}(x^i).  (93)

We now describe the updates which occur when a new scan of measurements is received, i.e., when S = {1, ..., S} is replaced by S' = {1, ..., S'}, where S' = S + 1. If the new scan represents a new time step, each hypothesis-conditioned PDF p^{i,a_S^i}(x^i) first undergoes prediction according to standard KF/EKF/UKF/PF expressions. A new single-target hypothesis a_{S'}^i = (a_S^i, a_{S'}^i) is generated for each combination of an old single-target hypothesis a_S^i, and choice of event in the new scan, a_{S'}^i, where a_{S'}^i = 0 denotes a missed detection, and a_{S'}^i = j ∈ {1, ..., m_{S'}} indicates that target i corresponded to measurement j. The parameters for the hypothesis a_{S'}^i = (a_S^i, 0) can be calculated using the expressions:

w^{i,a_{S'}^i} = w^{i,a_S^i} ∫ [1 − P_{S'}^d(x^i)] p^{i,a_S^i}(x^i) dx^i,  (94)
p^{i,a_{S'}^i}(x^i) ∝ [1 − P_{S'}^d(x^i)] p^{i,a_S^i}(x^i).  (95)

The hypothesis a_{S'}^i = (a_S^i, j), which updates the old single-target hypothesis a_S^i with measurement z_{S'}^j, is calculated using the expressions:

w^{i,a_{S'}^i} = w^{i,a_S^i} ∫ p_{S'}(z_{S'}^j | x^i) P_{S'}^d(x^i) p^{i,a_S^i}(x^i) dx^i / λ_{S'}^{fa}(z_{S'}^j),  (96)
p^{i,a_{S'}^i}(x^i) ∝ p_{S'}(z_{S'}^j | x^i) P_{S'}^d(x^i) p^{i,a_S^i}(x^i).  (97)

The extension of these steps to accommodate an unknown, time-varying number of targets can be found in [10].

The association history model can be written in a graphical model form as:

p(X, a_S, b_S | Z_S) ∝ Π_{i=1}^{n} { ψ^i(x^i, a_S) ψ^i(a_S) Π_{s∈S} [ ψ_s^i(a_S^i, a_s^i) Π_{j=1}^{m_s} ψ_s^{i,j}(a_s^i, b_s^j) ] },  (98)

where ψ^i(x^i, a_S) = p^{i,a_S^i}(x^i), ψ^i(a_S) = w^{i,a_S^i}, and ψ_s^i(a_S^i, a_s^i) ensures that a_S^i and a_s^i are in agreement:

ψ_s^i(a_S^i, a_s^i) = 1 if a_S^i = (ã_1^i, ..., ã_S^i) with ã_s^i = a_s^i, and 0 otherwise.  (99)

This graph is illustrated in figure 10.

[Figure 10.]
Fig. 10. Graphical model of association history formulation of multiple scan problem.

Marginalising the kinematic states x^i (which can be done simply since they are leaves), we arrive at the representation

p(a_S, b_S | Z_S) ∝ Π_{i=1}^{n} { ψ^i(a_S) Π_{s∈S} [ ψ_s^i(a_S^i, a_s^i) Π_{j=1}^{m_s} ψ_s^{i,j}(a_s^i, b_s^j) ] }.  (100)

The derivation of section III may be applied directly to the model in (100), substituting a_S^i in place of x^i. The kinematic distribution can then be recovered as

q(x^i, a_S^i) = q(a_S^i) ψ^i(x^i, a_S) = q(a_S^i) p^{i,a_S^i}(x^i).  (101)
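The hypothesis expansion in (94)–(97) is straightforward to implement with a particle representation of p^{i,a_S^i}(x^i); the sketch below is our own illustration, where detect_prob, meas_lik and fa_intensity are assumed user-supplied functions corresponding to P_{S'}^d(x), p_{S'}(z|x) and λ_{S'}^{fa}(z).

    import numpy as np

    def expand_hypothesis(weight, particles, part_w, z_list,
                          detect_prob, meas_lik, fa_intensity):
        # Expand one single-target hypothesis with the events of a new scan, per (94)-(97).
        # particles : (N, d) states sampled from p^{i,a_S^i}; part_w : (N,) normalised weights.
        # Returns a list of (hypothesis weight, updated particle weights) pairs:
        # entry 0 is the missed-detection event, entry j the update with z_list[j-1].
        pd = detect_prob(particles)                        # (N,) values of P^d(x)
        out = []
        w_miss = part_w * (1.0 - pd)                       # eq. (95), unnormalised
        out.append((weight * w_miss.sum(), w_miss / w_miss.sum()))   # eq. (94)
        for z in z_list:
            lik = meas_lik(z, particles) * pd              # p(z|x) * P^d(x)
            w_upd = part_w * lik                           # eq. (97), unnormalised
            out.append((weight * w_upd.sum() / fa_intensity(z),      # eq. (96)
                        w_upd / w_upd.sum()))
        return out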
APPENDIX B
DERIVATION OF BETHE FREE ENERGY FORM

A. Single scan

In this section, we present two formulations for the single scan problem, (102)-(105) and (117)-(120), and show they are equivalent to the formulation of (27)-(30) after partial minimisation. The formulation of (27)-(30) is similar to the formulation in [35]; the difference is our formulation includes the belief that target i is not detected, q_s^{i,0}, and the belief that measurement z_s^j is not used by any target, q_s^{0,j}, whereas the formulation in [35] is for the matrix permanent problem, which excludes q_s^{i,0} and q_s^{0,j} (and is thus constrained such that n = m_s).

The Bethe variational problem in section II-C involving random variables a_s^i and b_s^j can be solved by minimising:

F_B([q_s^i(a_s^i)], [q_s^{i,j}(a_s^i, b_s^j)], [q_s^j(b_s^j)]) = −Σ_{i=1}^{n} { H(a_s^i) + E[log ψ_s^i(a_s^i)] } − Σ_{j=1}^{m_s} H(b_s^j) − Σ_{i=1}^{n} Σ_{j=1}^{m_s} { −I(a_s^i; b_s^j) + E[log ψ_s^{i,j}(a_s^i, b_s^j)] },  (102)

subject to the constraints:

q_s^{i,j}(a_s^i, b_s^j) ≥ 0,  q_s^i(a_s^i) ≥ 0,  q_s^j(b_s^j) ≥ 0,  (103)

Σ_{b_s^j=0}^{n} q_s^{i,j}(a_s^i, b_s^j) = q_s^i(a_s^i),  Σ_{a_s^i=0}^{m_s} q_s^{i,j}(a_s^i, b_s^j) = q_s^j(b_s^j),  (104)

Σ_{a_s^i=0}^{m_s} Σ_{b_s^j=0}^{n} q_s^{i,j}(a_s^i, b_s^j) = 1,  Σ_{a_s^i=0}^{m_s} q_s^i(a_s^i) = 1,  Σ_{b_s^j=0}^{n} q_s^j(b_s^j) = 1,  (105)

where ψ_s^i(a_s^i) is defined by (32), I(a_s^i; b_s^j) is defined in (6), and ψ_s^{i,j}(a_s^i, b_s^j) is in (26). Note that there is some redundancy in these constraints, which is retained to reinforce that the constraints in (103) and (105) are retained when the marginal constraints (104) are relaxed in the next step.

Let the marginals q_s^i(a_s^i) and q_s^j(b_s^j) be fixed and feasible. Because the marginals are fixed, the Bethe variational problem (102)-(105) is convex with respect to q_s^{i,j}(a_s^i, b_s^j). In addition, since the marginals are feasible, then q_s^i(a_s^i = j) = q_s^j(b_s^j = i) ≜ q_s^{i,j}. Relaxing the marginal constraints (104), the dual function for the partial minimisation can be written as:

minimise over q_s^{i,j}(a_s^i, b_s^j):  F_B([q_s^i(a_s^i)], [q_s^{i,j}(a_s^i, b_s^j)], [q_s^j(b_s^j)]) + Σ_{i=1}^{n} Σ_{j=1}^{m_s} Σ_{a_s^i=0}^{m_s} λ_s^{i,j}(a_s^i) [ Σ_{b_s^j=0}^{n} q_s^{i,j}(a_s^i, b_s^j) − q_s^i(a_s^i) ] + Σ_{i=1}^{n} Σ_{j=1}^{m_s} Σ_{b_s^j=0}^{n} λ_s^{i,j}(b_s^j) [ Σ_{a_s^i=0}^{m_s} q_s^{i,j}(a_s^i, b_s^j) − q_s^j(b_s^j) ],

where λ_s^{i,j}(a_s^i) and λ_s^{i,j}(b_s^j) are dual variables. Solving the dual function yields the solution:

q_s^{i,j}(a_s^i, b_s^j) = (1/c^{i,j}) ψ_s^{i,j}(a_s^i, b_s^j) exp{ −λ_s^{i,j}(a_s^i) − λ_s^{i,j}(b_s^j) },  (106)

where c^{i,j} is the normalisation constant.

Using the marginalisation constraints (104), the pairwise joint (106) and the definition of ψ_s^{i,j}(a_s^i, b_s^j) in (26) (which ensures that q_s^{i,j}(a_s^i, b_s^j) = 0 if a_s^i = j, b_s^j ≠ i or b_s^j = i, a_s^i ≠ j), we find that q_s^{i,j} = q_s^{i,j}(a_s^i = j, b_s^j = i), which is related to the dual variables by

q_s^{i,j} = (1/c^{i,j}) exp{ −λ_s^{i,j}(a_s^i = j) } exp{ −λ_s^{i,j}(b_s^j = i) }.  (107)

Secondly, for i' ≠ i and j' ≠ j, we find that q_s^i(a_s^i = j') = q_s^{i,j'} and q_s^j(b_s^j = i') = q_s^{i',j} are related to the dual variables by

q_s^{i,j'} = (1/c^{i,j}) exp{ −λ_s^{i,j}(a_s^i = j') } Σ_{i'=0, i'≠i}^{n} exp{ −λ_s^{i,j}(b_s^j = i') },  (108)

q_s^{i',j} = (1/c^{i,j}) exp{ −λ_s^{i,j}(b_s^j = i') } Σ_{j'=0, j'≠j}^{m_s} exp{ −λ_s^{i,j}(a_s^i = j') }.  (109)

As we have already seen, q_s^{i,j} = q_s^{i,j}(a_s^i = j, b_s^j = i). In the remaining case, a_s^i = j' ≠ j, b_s^j = i' ≠ i, we again exploit the structure of ψ_s^{i,j}(a_s^i, b_s^j) to obtain

q_s^{i,j}(a_s^i = j', b_s^j = i') = (1/c^{i,j}) exp{ −λ_s^{i,j}(a_s^i = j') } exp{ −λ_s^{i,j}(b_s^j = i') }.  (110)

Substituting (107)–(110) into the pairwise normalisation constraint (105) yields:

q_s^{i,j} + (1/c^{i,j}) Σ_{i'=0, i'≠i}^{n} exp{ −λ_s^{i,j}(b_s^j = i') } × Σ_{j'=0, j'≠j}^{m_s} exp{ −λ_s^{i,j}(a_s^i = j') } = 1.  (111)

Subsequently, we find for i' ≠ i, j' ≠ j,

q_s^{i,j}(a_s^i = j', b_s^j = i') = q_s^{i,j'} q_s^{i',j} / (1 − q_s^{i,j}).  (112)

Substituting the marginals and the pairwise joint into the entropies H(a_s^i), H(b_s^j) and H(a_s^i, b_s^j) yields:

−H(a_s^i) = Σ_{j=0}^{m_s} q_s^{i,j} log q_s^{i,j},  (113)

−H(b_s^j) = Σ_{i=0}^{n} q_s^{i,j} log q_s^{i,j},  (114)

−H(a_s^i, b_s^j) = q_s^{i,j} log q_s^{i,j} + Σ_{i'=0, i'≠i}^{n} Σ_{j'=0, j'≠j}^{m_s} [ q_s^{i,j'} q_s^{i',j} / (1 − q_s^{i,j}) ] log[ q_s^{i,j'} q_s^{i',j} / (1 − q_s^{i,j}) ] = q_s^{i,j} log q_s^{i,j} − (1 − q_s^{i,j}) log(1 − q_s^{i,j}) + Σ_{j'=0, j'≠j}^{m_s} q_s^{i,j'} log q_s^{i,j'} + Σ_{i'=0, i'≠i}^{n} q_s^{i',j} log q_s^{i',j},  (115)

so that the mutual information (6) is:

I(a_s^i; b_s^j) = −q_s^{i,j} log q_s^{i,j} − (1 − q_s^{i,j}) log(1 − q_s^{i,j}).  (116)

Substituting (113)-(116) into the single scan formulation (102)-(105), we arrive at the equivalent Bethe variational problem (27)-(30) where w_s^{i,j} = ψ_s^i(a_s^i = j).

The Bethe variational problem in section II-C involving random variables x^i, a_s^i and b_s^j (illustrated in figure 3(a)) can be solved by minimising:

F_B([q^i(x^i)], [q_s^i(x^i, a_s^i)], [q_s^i(a_s^i)], [q_s^{i,j}(a_s^i, b_s^j)], [q_s^j(b_s^j)]) = −Σ_{i=1}^{n} { H(x^i) + E[log ψ^i(x^i)] } − Σ_{i=1}^{n} H(a_s^i) − Σ_{j=1}^{m_s} H(b_s^j) − Σ_{i=1}^{n} { −I(x^i; a_s^i) + E[log ψ_s^i(x^i, a_s^i)] } − Σ_{i=1}^{n} Σ_{j=1}^{m_s} { −I(a_s^i; b_s^j) + E[log ψ_s^{i,j}(a_s^i, b_s^j)] },  (117)

subject to the constraints (103)-(105) and

q_s^i(x^i, a_s^i) ≥ 0,  q_s^i(x^i) ≥ 0,  (118)

Σ_{a_s^i=0}^{m_s} q_s^i(x^i, a_s^i) = q_s^i(x^i),  Σ_{x^i} q_s^i(x^i, a_s^i) = q_s^i(a_s^i),  (119)

Σ_{x^i} Σ_{a_s^i=0}^{m_s} q_s^i(x^i, a_s^i) = 1,  Σ_{x^i} q_s^i(x^i) = 1.  (120)

Partial minimisation over the pairwise joint q_s^{i,j}(a_s^i, b_s^j) arrives at the Bethe variational problem (33)-(38). A rearrangement of the Bethe free energy (33) is:

F_B([q_s^i(a_s^i)], [q_s^i(x^i, a_s^i)], [q_s^{i,j}]) = −Σ_{i=1}^{n} { H(a_s^i) + H(x^i | a_s^i) + E[log ψ^i(x^i) ψ_s^i(x^i, a_s^i)] } + Σ_{j=1}^{m_s} q_s^{0,j} log q_s^{0,j} − Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}).  (121)

Let q_s^i(a_s^i) and q_s^{i,j} be fixed and feasible. Minimising the Bethe free energy (121) with respect to q_s^i(x^i, a_s^i) subject to the constraints (34)-(38) yields the solution (using (13)):

q_s^i(x^i, a_s^i) = q_s^i(a_s^i) ψ^i(x^i) ψ_s^i(x^i, a_s^i) / Σ_{x^{i'}} ψ^i(x^{i'}) ψ_s^i(x^{i'}, a_s^i).  (122)

Substituting q_s^i(x^i, a_s^i) into the Bethe free energy (121) results in the equivalent Bethe variational problem (27)-(30) where w_s^{i,j} = Σ_{x^i} ψ^i(x^i) ψ_s^i(x^i, j), and q_s^i(a_s^i = j) = q_s^{i,j}.

B. Multiple scans

The Bethe variational problem in section II-B, which is represented by figure 3(c), can be solved by minimising:

F_B([q^i(x^i)], [q_s^i(x^i, a_s^i)], [q_s^i(a_s^i)], [q_s^{i,j}(a_s^i, b_s^j)], [q_s^j(b_s^j)]) = −Σ_{i=1}^{n} { H(x^i) + E[log ψ^i(x^i)] } − Σ_{s∈S} Σ_{i=1}^{n} H(a_s^i) − Σ_{s∈S} Σ_{j=1}^{m_s} H(b_s^j) − Σ_{s∈S} Σ_{i=1}^{n} { −I(x^i; a_s^i) + E[log ψ_s^i(x^i, a_s^i)] } − Σ_{s∈S} Σ_{i=1}^{n} Σ_{j=1}^{m_s} { −I(a_s^i; b_s^j) + E[log ψ_s^{i,j}(a_s^i, b_s^j)] },  (123)

subject to the constraints (103)-(105) and (118)-(120) for s ∈ S. Partial minimisation over q_s^{i,j}(a_s^i, b_s^j) and rearrangement produces the equivalent variational problem (51)-(57).
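On a discretised state space, the reduction stated after (122) amounts to a single weighted sum over the grid; the snippet below is our own illustration of computing w_s^{i,j} = Σ_{x^i} ψ^i(x^i) ψ_s^i(x^i, j), with assumed array shapes.

    import numpy as np

    def single_scan_weights(psi_x, psi_xa):
        # Compute w_s^{i,j} = sum_x psi^i(x) * psi_s^i(x, j) for one target i.
        # psi_x  : (X,) array of psi^i(x^i) on a state grid.
        # psi_xa : (X, m+1) array of psi_s^i(x^i, j), column 0 = missed detection.
        # Returns a (m+1,) array of single scan weights.
        return np.asarray(psi_x, float) @ np.asarray(psi_xa, float)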
APPENDIX C
PROOF OF ALGORITHMS FOR MINIMISING PDCA SUB-PROBLEMS

In this section, we prove lemmas 5 and 6, i.e., we derive algorithms for minimising the blocks utilised in the PDCA algorithm. Before we begin, we prove the preliminary result in lemma 3, which shows that the block h_{s,1} is convex (convexity of f and h_{s,2} is straight-forward).

Proof of lemma 3: If κ_{s,1,x} ≥ 0, then convexity with respect to q^i(x^i) is immediate. Otherwise, −κ_{s,1,x} > 0; let κ̃ = κ_{s,1,s} + κ_{s,1,x} ≥ 0, and rewrite the first line of (62) as:

−Σ_{i=1}^{n} { −κ_{s,1,x} H(a_s^i | x^i) + κ̃ H(x^i | a_s^i) + κ̃ H(a_s^i) }.

The first two terms are convex by definition of conditional entropy, as is the second line in (62), so we focus on the remainder of the expression:

−κ̃ Σ_{i=1}^{n} H(a_s^i) − γ_s Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}),  (124)

where −H(a_s^i) = Σ_{j=0}^{m_s} q_s^{i,j} log q_s^{i,j}. Recognising (124) as:

γ_s [ Σ_{i=1}^{n} Σ_{j=0}^{m_s} q_s^{i,j} log q_s^{i,j} − Σ_{i=1}^{n} Σ_{j=0}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}) ] + (κ̃ − γ_s) Σ_{i=1}^{n} Σ_{j=0}^{m_s} q_s^{i,j} log q_s^{i,j} + γ_s Σ_{i=1}^{n} (1 − q_s^{i,0}) log(1 − q_s^{i,0}),  (125)

we obtain the desired result; the first line is convex by theorem 20 in [35], which shows that the function S(ξ) = Σ_j ξ_j log ξ_j − Σ_j (1 − ξ_j) log(1 − ξ_j) is convex on the domain ξ_j ≥ 0, Σ_j ξ_j = 1; the second line is convex by convexity of x log x.

Proof of lemma 5: Collecting terms, the objective to be minimised is:

F_{s,1}^{µ}([q^i(x^i)], [q_s^i(x^i, a_s^i)]) = −Σ_{i=1}^{n} { (κ_{f,x} + κ_{s,1,x}) H(x^i) + E[φ^i(x^i)] } − Σ_{i=1}^{n} { (κ_{f,s} + κ_{s,1,s}) H(x^i, a_s^i) + E[φ_s^i(x^i, a_s^i)] } − Σ_{τ∈S\{s}} Σ_{i=1}^{n} { κ_{f,τ} H(x^i, a_τ^i) + E[φ_τ^i(x^i, a_τ^i)] } + β_s Σ_{j=1}^{m_s} q_s^{0,j} log q_s^{0,j} − γ_s Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}).  (126)

For τ ∈ S\{s}, h_{s,1} is constant with respect to q_τ^i(x^i, a_τ^i), so λ_{s,1,τ}^i(x^i, a_τ^i) = 0. If we define

φ̃_s^i(x^i, a_s^i) = φ^i(x^i) + φ_s^i(x^i, a_s^i),  (127)
q^i(x^i) = Σ_{a_s^i=0}^{m_s} q_s^i(x^i, a_s^i),  (128)

then the terms in (126) that depend on q_s^i(x^i, a_s^i) can be written as

−Σ_{i=1}^{n} (κ_{f,s} + κ_{s,1,s}) [ H(x^i | a_s^i) + H(a_s^i) ] − Σ_{i=1}^{n} E[φ̃_s^i(x^i, a_s^i)] + β_s Σ_{j=1}^{m_s} q_s^{0,j} log q_s^{0,j} − γ_s Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}).  (129)

Using lemma 2 to minimise with respect to q_s^i(x^i, a_s^i) while holding q_s^i(a_s^i) fixed, we find that the optimisation becomes

(κ_{f,s} + κ_{s,1,s}) [ −Σ_{i=1}^{n} { H(a_s^i) + E[φ̃_s^i(a_s^i)] } + β̃_s Σ_{j=1}^{m_s} q_s^{0,j} log q_s^{0,j} − γ̃_s Σ_{i=1}^{n} Σ_{j=1}^{m_s} (1 − q_s^{i,j}) log(1 − q_s^{i,j}) ],  (130)

while q_s^i(x^i, a_s^i) can be recovered via (70). Dividing by (κ_{f,s} + κ_{s,1,s}), we obtain (74). Finally, since

∇_{q^i(x^i)} f = κ_{f,x} log q^i(x^i) + κ_{f,x} − log ψ^i(x^i),  (131)

we find that the update in (48) reduces to

λ_{s,1,x}^i(x^i) = −µ^i(x^i) − κ_{f,x} log q^i(x^i) − κ_{f,x} + log ψ^i(x^i) = φ^i(x^i) − κ_{f,x} log q^i(x^i) − κ_{f,x},  (132)

which is the result in (75). Following identical steps for q_s^i(x^i, a_s^i) gives the result in (76).

Proof of lemma 6: Collecting terms, the objective to be minimised is:

F_{s,2}^{µ}([q^i(x^i)], [q_τ^i(x^i, a_τ^i)]) = −Σ_{i=1}^{n} { (κ_{f,x} + κ_{s,2,x}) H(x^i) + E[φ^i(x^i)] } − Σ_{τ∈S} Σ_{i=1}^{n} { κ_{f,τ} H(x^i, a_τ^i) + E[φ_τ^i(x^i, a_τ^i)] } − Σ_{i=1}^{n} κ_{s,2,s} H(x^i, a_s^i).  (133)

For τ ∈ S\{s}, (54) is not enforced, and h_{s,2} is constant with respect to q_τ^i(x^i, a_τ^i), so λ_{s,2,τ}^i(x^i, a_τ^i) = 0. Since the constraint (54) is enforced for time s, (133) can be written equivalently as

F_{s,2}^{µ}([q^i(x^i)], [q_τ^i(x^i, a_τ^i)]) = −Σ_{i=1}^{n} { κ̃ H(x^i) + E[φ^i(x^i)] } − Σ_{i=1}^{n} { (κ_{f,s} + κ_{s,2,s}) H(a_s^i | x^i) + E[φ_s^i(x^i, a_s^i)] } − Σ_{τ∈S\{s}} Σ_{i=1}^{n} { κ_{f,τ} H(x^i, a_τ^i) + E[φ_τ^i(x^i, a_τ^i)] },  (134)

where κ̃ = κ_{f,x} + κ_{f,s} + κ_{s,2,x} + κ_{s,2,s}. Using lemma 2 to perform a partial minimisation of (134) with respect to q_τ^i(x^i, a_τ^i), holding q^i(x^i) fixed, we find the result in (80) and the remaining problem:

F_{s,2}^{µ}([q^i(x^i)]) ≐ −Σ_{i=1}^{n} { κ̃ H(x^i) + E[ φ^i(x^i) + φ̃_s^i(x^i) ] }.  (135)

Using lemma 1, we obtain the result in (79). Following similar steps to (132) gives the updates for λ_{s,2,x}^i(x^i) and λ_{s,2,s}^i(x^i, a_s^i).
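The convexity of S(ξ) = Σ_j ξ_j log ξ_j − Σ_j (1 − ξ_j) log(1 − ξ_j) on the simplex (theorem 20 of [35], used in the proof of lemma 3 above) is easy to probe numerically; the snippet below is our own sanity check along random chords of the simplex, not a proof.

    import numpy as np

    def S(xi, eps=1e-12):
        # S(xi) = sum xi*log(xi) - sum (1-xi)*log(1-xi), with values clipped for stability.
        xi = np.clip(np.asarray(xi, float), eps, 1.0 - eps)
        return float((xi * np.log(xi)).sum() - ((1 - xi) * np.log(1 - xi)).sum())

    rng = np.random.default_rng(0)
    for _ in range(1000):
        a = rng.dirichlet(np.ones(5))        # two random points on the simplex
        b = rng.dirichlet(np.ones(5))
        lam = rng.uniform()
        # chord inequality for convexity (small tolerance for rounding)
        assert S(lam * a + (1 - lam) * b) <= lam * S(a) + (1 - lam) * S(b) + 1e-9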
APPENDIX D
PROOF OF CONVERGENCE OF SINGLE SCAN ITERATION

In this section, we prove convergence of an iterative algorithm for solving the single scan block, illustrated in figure 2:

minimise  Σ_{i=1}^{n} Σ_{j=1}^{m} q_{i,j} log( q_{i,j} / w_{i,j} ) + α Σ_{i=1}^{n} q_{i,0} log( q_{i,0} / w_{i,0} ) + β Σ_{j=1}^{m} q_{0,j} log( q_{0,j} / w_{0,j} ) − γ Σ_{i=1}^{n} Σ_{j=1}^{m} (1 − q_{i,j}) log(1 − q_{i,j})  (136)

subject to

Σ_{j=0}^{m} q_{i,j} = 1  ∀ i ∈ {1, ..., n},  (137)

Σ_{i=0}^{n} q_{i,j} = 1  ∀ j ∈ {1, ..., m},  (138)

0 ≤ q_{i,j} ≤ 1,  (139)

where γ ∈ [0, α] ∩ [0, β] ∩ [0, 1), α ∈ (0.5, ∞), β ∈ (0.5, ∞). In our analysis, we permit values w_{i,j} = 0, maintaining a finite objective by fixing the corresponding q_{i,j} = 0, and defining q_{i,j}/w_{i,j} ≜ 1; since these take on fixed values, we do not consider them to be optimisation variables. While we state the algorithm more generally, we prove convergence for three cases:
1) n = m and w_{i,0} = w_{0,j} = 0 (i.e., no missed detection/false alarm events)
2) w_{i,0} > 0 ∀ i, w_{0,j} = 0 ∀ j, α = 1 (i.e., missed detections but no false alarms)
3) w_{i,0} > 0 ∀ i, w_{0,j} > 0 ∀ j, α = 1 (i.e., missed detections and false alarms)
In case 1 above, α and β have no effect since q_{i,0} = 0 and q_{0,j} = 0. Similarly, in case 2, β has no effect since q_{0,j} = 0.

Assumption 1 ensures that the problem has a relative interior (again, we exclude the q_{i,j} variables for which w_{i,j} = 0, since they are fixed to zero).

Assumption 1. There exists a feasible point in the relative interior, i.e., there exists q_{i,j} satisfying the constraints (137)-(139) such that 0 < q_{i,j} < 1 ∀ (i, j) s.t. w_{i,j} > 0.

Assumption 2. The graph is connected, i.e., we can travel from any left-hand side vertex a_i, i ∈ {1, ..., n} to any right-hand side vertex b_j, j ∈ {1, ..., m} by following a path consisting of edges (i', j') with w_{i',j'} > 0.

Assumption 1 can easily be shown to be satisfied if w_{i,0} > 0 ∀ i and w_{0,j} > 0 ∀ j (i.e., missed detection and false alarm likelihoods are non-zero). In problems without false alarms or missed detections, the condition excludes infeasible problems (e.g., where two measurements can only be associated with a single target), and problems with trivial components (e.g., where a measurement can only be associated with one target, so that measurement and target can be removed and the smaller problem solved via optimisation). Assumption 2 ensures that the problem is connected; this property is utilised in the proof of case 1. Any problem in which the graph is not connected can be solved more efficiently by solving each connected component separately.

Lemma 7. The solution of (136)-(139) lies in the relative interior, i.e., 0 < q_{i,j} < 1 ∀ (i, j) s.t. w_{i,j} > 0.

Proof. Rewrite the objective in (136) in the form:

(1 − γ) Σ_{i=1}^{n} Σ_{j=1}^{m} q_{i,j} log( q_{i,j} / w_{i,j} ) + (α − γ) Σ_{i=1}^{n} q_{i,0} log( q_{i,0} / w_{i,0} ) + β Σ_{j=1}^{m} q_{0,j} log( q_{0,j} / w_{0,j} ) + γ [ Σ_{i=1}^{n} Σ_{j=0}^{m} q_{i,j} log( q_{i,j} / w_{i,j} ) − Σ_{i=1}^{n} Σ_{j=1}^{m} (1 − q_{i,j}) log(1 − q_{i,j}) ].  (140)

Consider two feasible points q^0 and q^1, where q^0 is on the boundary and q^1 is in the relative interior (such a point exists by assumption 1). Let q^λ = λ q^1 + (1 − λ) q^0, and denote the objective evaluated at q^λ by

f(λ) = g(λ) + h(λ),

where g(λ) is the first two lines of (140) evaluated at q^λ, and h(λ) is the final line. Lemma 3 shows that h(λ) is convex, therefore its gradient is monotonically non-decreasing. Consequently it must be the case that:

lim_{λ↓0} h'(λ) = c < ∞.  (141)

The derivative of g(λ) is given by:

g'(λ) = (1 − γ) Σ_{i=1}^{n} Σ_{j=1}^{m} (q_{i,j}^1 − q_{i,j}^0) [ log( ( λ q_{i,j}^1 + (1 − λ) q_{i,j}^0 ) / w_{i,j} ) + 1 ] + (α − γ) Σ_{i=1}^{n} (q_{i,0}^1 − q_{i,0}^0) [ log( ( λ q_{i,0}^1 + (1 − λ) q_{i,0}^0 ) / w_{i,0} ) + 1 ] + β Σ_{j=1}^{m} (q_{0,j}^1 − q_{0,j}^0) [ log( ( λ q_{0,j}^1 + (1 − λ) q_{0,j}^0 ) / w_{0,j} ) + 1 ].

Since q^0 is on the boundary and q^1 is not, we must have:

lim_{λ↓0} g'(λ) = −∞.  (142)

By (141) and (142), we thus have that f'(λ) < 0 ∀ λ ∈ (0, ε) for some ε > 0. Thus the optimum cannot lie on the boundary.

Lemma 8. The Karush-Kuhn-Tucker (KKT) optimality conditions [65] for the problem in (136) are:

log( q_{i,j} / w_{i,j} ) + γ log(1 − q_{i,j}) + 1 + γ − λ_i − µ_j = 0  ∀ i, j > 0,  (143)

α log( q_{i,0} / w_{i,0} ) + α − λ_i = 0  ∀ i > 0,  (144)

β log( q_{0,j} / w_{0,j} ) + β − µ_j = 0  ∀ j > 0,  (145)

as well as the primal feasibility conditions (137)-(139). The conditions are necessary and sufficient for optimality. In case 1 (where w_{i,0} = 0 ∀ i and w_{0,j} = 0 ∀ j) the solution is unique up to a constant c being added to λ_i ∀ i and subtracted from µ_j ∀ j. In other cases, the solution is unique.

Proof. One complication is that the objective is not convex on R^{(n+1)×(m+1)} but rather only on the subspace in which either (137) or (138) is satisfied. We show that the regular KKT conditions are still necessary and sufficient in this case. Relaxing the non-negativity condition, the problem can be expressed as:

minimise  f(q)   subject to  A_1 q = b_1,  A_2 q = b_2.

The optimisation methodology we adopt is to define an iterative method and prove that it converges to a point that satisfies the KKT conditions, motivated by analysis of the BP iteration in [32], [35]. Defining λ̄_i = λ_i − α, µ̄_i = µ_i − β and κ = −1 − γ + α + β, the KKT conditions in (143)-(145) can be rewritten as:
The KKT conditions for this problem are:
∇f (q) − AT1 λ − AT2 µ = 0,
A1 q = b1 ,
A2 q = b2 .
(146)
(147)
Given a solution q 0 that satisfies A1 q 0 = b1 , we can
express any feasible q as q 0 + P(q − q 0 ) where P =
I − AT1 (A1 AT1 )−1 A1 is the matrix that projects onto the
null-space of A1 . Thus we can equivalently solve
A2 q = b2 ,
(148)
(149)
where, after taking the gradient of f in (148), we substitute
q 0 + P(q − q 0 ) = q since the point must satisfy the constraints
(149). The projection of the gradient is:
−1
m
X
wi,j exp{µ̄j + κ}
,
exp λ̄i = wi,0 exp{( α1 − 1)λ̄i } +
(1 − qi,j )γ
j=1
"
(155)
where the RHS values of qi,j , λ̄i and µ̄j refer to the previous
iterates. The updates in (154) and (155) are applied alternately.
After each update, the values of qi,j are recalculated using
(151)-(153).
The iteration may be written equivalently in terms of the
parameterisation xi,j and yi,j , where
Similarly, given a point satisfying the KKT conditions for the
original problem, we can find a corresponding point satisfying
the modified KKT conditions (148)-(149) by inverting (150).
Thus points satisfying the KKT conditions for the original
problem (146)-(147) and the modified problem (148)-(149) are
in direct correspondence.
The expressions in (143)-(145) are found by forming the
Lagrangian and taking gradients. Uniqueness of the solution
comes from strict convexity of f . The freedom to choose
a constant offset is the result of linear dependence of the
constraints
in case 1 (since each set of constraints implies that
P
q
=
n).
i,j i,j
exp{µ̄j + κ}
,
(1 − qi,j )γ
= exp{( α1 − 1)λ̄i }, x0,j = exp{µ̄j },
exp{λ̄i + κ}
yi,j =
,
(1 − qi,j )γ
= exp{λ̄i }, y0,j = exp{( β1 − 1)µ̄j }.
xi,j =
xi,0
(150)
#−1
(1 − qi,j )γ
i=1
∗
λ̃ = λ∗ + (A1 AT1 )−1 A1 ∇f (q).
n
X
wi,j exp{λ̄i + κ}
exp µ̄j = w0,j exp{( β1 − 1)µ̄j } +
Thus a point (q , λ , µ ) satisfying the KKT conditions for
the modified problem (148)-(149) corresponds to a point
∗
(q ∗ , λ̃ , µ∗ ) in the KKT conditions for the original problem
(146)-(147), where
∗
(153)
While these expressions do not permit us to immediately solve
for qi,j , they permit application of an iterative method, in which
we repeatedly calculate new LHS values of qi,j by updating
either λ̄i via the equation:
P∇f (q) = ∇f (q) − AT1 (A1 AT1 )−1 A1 ∇f (q).
∗
∀ j > 0.
A2 q = b2 .
P∇f (q) − AT1 λ − AT2 µ = 0,
∗
(152)
or µ̄j via the equation:
Since the argument of f lies in the feasible subspace for the
first constraint, this problem is convex, and under Assumption 1
the Slater condition [65] is satisfied, so the KKT conditions are
necessary and sufficient. The KKT conditions for this modified
problem are:
A1 q = b1 ,
w0,j exp{ β1 µ̄j }
(151)
(154)
minimise f (q 0 + P(q − q 0 ))
subject to A1 q = b1 ,
wi,j exp{λ̄i + µ̄j + κ}
∀ i, j > 0,
(1 − qi,j )γ
= wi,0 exp{ α1 λ̄i } ∀ i > 0,
qi,j =
yi,0
(156)
(157)
Algebraic manipulation yields equivalent iterations in terms of
xi,j and yi,j as:
!−(1−γ)
(k+1)
xi,j
= ri,j (y
(k)
(k)
w0,j y0,j
),
+
X
(k)
wi0 ,j yi0 ,j
i0
−γ
(k)
× w0,j y0,j +
X
(k)
wi0 ,j yi0 ,j
× eκ , (158)
i0 6=i
(k+1)
xi,0
(k)
1
= ri,0 (y (k) ) , (yi,0 ) α −1 ,
(159)
!−1
(k+1)
x0,j
= r0,j (y (k) ) ,
(k)
w0,j y0,j +
X
(k)
wi0 ,j yi0 ,j
,
i0
9 Alternatively,
define f (q) = ∞ for points violating the constraint.
(160)
,
WILLIAMS AND LAU: MULTIPLE SCAN DATA ASSOCIATION BY CONVEX VARIATIONAL INFERENCE
23
thus
Fig. 11. Structure of iterative solution, alternating between half-iterations
x(k+1) = r(y (k) ) and y (k+1) = s(x(k+1) ).
and
i0 6=i
(k+1)
(k+1)
= si,j (x(k+1) ) , wi,0 xi,0
+
X
(k+1)
Raising this to the power γ and multiplying by exp{(1−γ)µ̄∗j +
κ}, we obtain (158). Similar steps show (161) and (162), while
(159) and (163) are immediate.
wi,j 0 xi,j 0
j0
−γ
(k+1)
× wi,0 xi,0
+
X
(k+1)
wi,j 0 xi,j 0
× eκ ,
j 0 6=j
(161)
−1
(k+1)
yi,0
(168)
−(1−γ)
yi,j
"
#−1
exp{µ̄∗j }
wi,j exp{λ̄∗i + κ}
∗
(167)
= exp{−µ̄j } −
∗
∗ )γ
1 − qi,j
(1 − qi,j
−1
n
∗
X wi0 ,j exp{λ̄ 0 + κ}
i
= w0,j exp{( β1 − 1)µ̄∗j } +
∗ )γ
(1
−
q
0
i ,j
i0 =1
(k+1)
= si,0 (x(k+1) ) , wi,0 xi,0
+
X
(k+1)
wi,j 0 xi,j 0
Our goal in what follows is to prove that the composite
operator r(s(·)) is a contraction, as defined below. We utilise
the same distance metric as in [32]:
,
d(x, x̃) = max log
i,j
j0
(162)
(k+1)
y0,j
(k+1)
= s0,j (x(k+1) ) , (x0,j
1
) β −1 .
(163)
0
The shorthand
i0Prepresents the sum over the set i ∈
{1, . . . , n}, while
the same summation,
i0 6=i represents P
excluding the i-th element. Similarly,
P j 0 represents the sum
over the set j 0 ∈ {1, . . . , m}, while j 0 6=j represents the same
summation, excluding the j-th element. The structure of this
iterative method is illustrated in figure 11. Note that if γ = 1,
this reduces to the BP iteration of [32].
At this point, we have stated but not derived the iteration
(158)–(163). The validity of the expressions is established by
proving that the solution of the KKT conditions is a fixed point
of the iteration (in lemma 9), and then showing that repeated
application of the expressions yields a contraction, which is
guaranteed to converge to the unique fixed point.
P
∗
Lemma 9. Let (qi,j
, λ̄∗i , µ̄∗j ) be the solution of the KKT
∗
conditions in lemma 8, and let x∗i,j and yi,j
be the values
∗
∗
∗
calculated from (qi,j , λ̄i , µ̄j ) using (156)–(157). Then x∗i,j and
∗
yi,j
are a fixed point of r(·) and s(·) in (158)-(163).
Pn
Proof. Feasibility implies that i=0 qi,j = 1 ∀ j. Therefore:
w0,j exp{ β1 µ̄∗j } +
n
X
wi,j exp{λ̄∗i + µ̄∗j + κ}
(1 −
i=1
∗ )γ
qi,j
= 1,
(164)
where we define 0/0 = 1.
Definition 3. An operation g(x) is a contraction with respect
to d(·, ·) if there exists α ∈ [0, 1) such that for all x, x̃
d[g(x), g(x̃)] ≤ αd(x, x̃).
(169)
If the expression is satisfied for α = 1, then g(x) is a nonexpansion.
Lemma 10. Let g(·) be the operator taking the weighted
combination with non-negative weights wi,j ≥ 0:
X
gk,l (x) =
wi,j,k,l xi,j .
i,j
g(·) is non-expansive with respect to d.
Proof. Let L = exp{d(x, x̃)} < ∞ (otherwise there is nothing
to prove), so that L1 xi,j ≤ x̃i,j ≤ Lxi,j . Then
X
X
gk,l (x̃) =
wi,j,k,l x̃i,j ≤ L
wi,j,k,l xi,j = Lgk,l (x),
i,j
i,j
(170)
and
gk,l (x̃) =
X
i,j
wi,j,k,l x̃i,j ≥
1
L
X
wi,j,k,l xi,j =
1
L gk,l (x).
i,j
(171)
or
"
exp{µ̄∗j }
xi,j
,
x̃i,j
=
w0,j exp{( β1
−
1)µ̄∗j }
+
n
X
wi,j exp{λ̄∗ + κ}
#−1
i
∗ )γ
(1 − qi,j
(165)
Equating terms, this proves (160), and shows that the first
factor in (158) is exp{(1 − γ)µ̄∗j }. To prove (158), note that
i=1
1
∗ =
1 − qi,j
1−
1
∗
wi,j exp{λ̄∗
i +µ̄j +κ}
∗ )γ
(1−qi,j
,
(166)
Lemma 11. Let f (·) be formed from two operators g(·) and
h(·) as
fi,j (x) = gi,j (x)ρg hi,j (x)ρh .
Suppose that g(·) and h(·) are contractions or non-expansions
with coefficients αg and αh . If αf = αg |ρg | + αh |ρh | < 1,
then f (·) is a contraction with respect to d. If αf = 1 then
f (·) is a non-expansion.
24
ACCEPTED FOR PUBLICATION, IEEE TRANSACTIONS ON SIGNAL PROCESSING
(k−1)
∗
maxi,j [yi,j /yi,j
] = L for some L with 1 < L < ∞ (if
L = 1 then convergence has occurred, and L = ∞ will only
d[f (x), f (x̃)]
∗
occur if yi,j
= 0, which contradicts lemma 7). The result is
fi,j (x)
similar
to
that
obtained by changing the distance to Hilbert’s
= max log
(172)
i,j
fi,j (x̃)
projective metric (e.g., [66]). We emphasise that this rescaling
does not need to be performed in the online calculation; rather
gi,j (x)ρg hi,j (x)ρh
(173) we are exploiting an equivalence to aid in proving convergence.
= max log
i,j
gi,j (x̃)ρg hi,j (x̃)ρh
Lemmas 13 and 14 establish an induction which shows
gi,j (x)
hi,j (x)
≤ |ρg | max log
+ |ρh | max log
(174) that after n steps, we are guaranteed to have reduced
i,j
i,j
gi,j (x̃)
hi,j (x̃)
(k+n)
∗
maxi,j [yi,j /yi,j
]. The induction commences with a single
≤ (|ρg |αg + |ρh |αh )d(x, x̃).
(175)
(k−1)
∗
edge with yi,j /yi,j
= 1, setting T (k−1) = {(i, j)}, and
v (k−1) = 1. As the induction proceeds, the set T (k) (or,
alternately, S (k) ) represents the edges for which improvement
The following results immediately from lemmas 10 and 11.
in the bound L is guaranteed, and v (k) < L (or, alternately,
Corollary 1. The operators r(·) and s(·) defined in (158)- u(k) ) represents the amount of improvement that is guaranteed.
The induction proceeds by alternately visiting the left-hand
(162) are non-expansions.
vertices and right-hand vertices (e.g., in figure 2), at each stage
Lemma 12. If d(x, x̃) ≤ log L̄ < ∞, then the operator s(·) adding to the set S (k) edges (i, j) for which w > 0, and an
i,j
is a contraction in cases 2 and 3 with a contraction factor edge that is incident on the vertex j is in T (k) (or, alternatively,
dependent on L̄ and wi,j .
adding to T (k) edges that could be traversed by starting from
a vertex i represented by an edge in S (k) ).
Proof. Lemma 2 in [32] shows that the update:
w
13. At iteration (k − 1), suppose that 1 ≤
P i,j
(176) Lemma
yi,j =
(k−1)
(k−1)
∗
∗
1 + j 0 6=j wi,j 0 xi,j 0
yi,j /yi,j
≤ L < ∞ ∀ i, j, and yi,j /yi,j
≤ v (k−1) <
Proof.
(k)
is a contraction. The proof of lemma 2 in [32] may be trivially
modified to show that the updates:
c
P 1
,
(177)
c2 + j 0 6=j wi,j 0 xi,j 0
c
P 1
(178)
c2 + j 0 wi,j 0 xi,j 0
L ∀ (i, j) ∈ T (k−1) . Then 1/L ≤ xi,j /x∗i,j ≤ 1 ∀ (i, j), and
(k)
1/u(k) ≤ xi,j /x∗i,j ≤ 1 ∀ (i, j) ∈ S (k) , where
are also contractions for any c1 > 0, c2 > 0. In cases 2 and
3, α = 1, so xi,0 = 1. Therefore, these results combined with
lemma 11, show that (161) and (162) are contractions. Since
| β1 − 1| < 1, (163) is also a contraction.
and
This is adequate to prove convergence in cases 2 and 3: s(·)
is a contraction, and r(·) is a non-expansion, so the composite
operator is a contraction. Combined with lemma 8, this shows
that the iteration converges to the unique solution of the KKT
conditions. The fact that the contraction factor in lemma 12
depends on an upper bound on the distance L̄ is not of concern;
since the combined operation is a contraction, the contraction
factor for the upper bound L̄ that we begin with will apply
throughout.
The final step is to prove convergence in case 1. For this,
we prove that n successive iterations of applying operators r(·)
and s(·) collectively form a contraction. The proof is based on
[35], but adapts it to address γ < 1, and to admit cases where
some edges have wi,j = 0.
As discussed in lemma 8, the solution in case 1 is not
changed by adding any constant c to λ̄i ∀ i and subtracting
it from µ̄j ∀ j; this is clear from (151), and was termed
message gauge invariance in [35, remark 30]. Incorporating
any such constant simply offsets all future iterations by the
value, having no impact on the qi,j iterates produced. Thus,
for the purpose of proving convergence, when analysing r(·),
(k−1)
∗
∗
we scale yi,j
such that mini,j [yi,j /yi,j
] = 1, and we denote
S (k) =
(i, j) ∈ {1, . . . , n}2 ∃i0 s.t. (i0 , j) ∈ T (k−1) , wi,j > 0 ,
(179)
(k−1) (k−1)
u(k) = max[θj
j
v
(k−1)
+(1−θj
)L]1−γ Lγ < L, (180)
where
(k−1)
θj
X
=
∗
wi,j yi,j
.X
∗
wi,j yi,j
.
(181)
i
i|(i,j)∈T (k−1)
Proof. Consider the sum in the first factor in (158) (remembering that w0,j = 0):
X
X
(k)
(k−1)
∗
σj∗ =
wi,j yi,j
, σj =
wi,j yi,j ,
(182)
i
i
(k)
so that σj /σj∗ ≥ 1, and
P
P
(k)
∗
∗
v (k−1) i|(i,j)∈T (k−1) wi,j yi,j
+ L i|(i,j)∈T
σj
/ (k−1) wi,j yi,j
P
≤
∗
σj∗
i wi,j yi,j
(k−1) (k−1)
= θj
v
(k−1)
+ (1 − θj
)L.
While a similar analysis could be applied to the second factor
of the expression for r(y) (as in [35]), to prove convergence
in the case with γ < 1, it is adequate to simply bound it by:
P
(k−1)
0
i0 6=i wi ,j yi0 ,j
1≤ P
≤ L.
(183)
∗
0
i0 6=i wi ,j yi0 ,j
Substituting these bounds into (158) gives the desired result.
WILLIAMS AND LAU: MULTIPLE SCAN DATA ASSOCIATION BY CONVEX VARIATIONAL INFERENCE
(k)
Lemma 14. At iteration k, suppose that 0 < L1 ≤ xi,j /x∗i,j ≤
(k)
1 ∀ i, j, and 1/L < 1/u(k) ≤ xi,j /x∗i,j ∀ (i, j) ∈ S (k) . Then
(k)
(k)
∗
∗
1 ≤ yi,j /yi,j
≤ L ∀ (i, j) and 1 ≤ yi,j /yi,j
≤ v (k) ∀ (i, j) ∈
(k)
T , where
T (k) = (i, j) ∈ {1, . . . , n}2 ∃j 0 s.t. (i, j 0 ) ∈ S (k) , wi,j > 0
(184)
and
"
#1−γ
(k)
(k)
ωi
(1 − ωi )
1
1
1
= min (k) +
> ,
(185)
γ
(k)
i
L
L
L
v
u
where
(k)
ωi
X
=
wi,j x∗i,j
.X
wi,j x∗i,j .
(186)
j
j|(i,j)∈S (k)
Proof. Taking either the left or right derivative of α(L) in
(189):
L
1
∂v(L) · log
v(L) − L log v(L)
.
(190)
∂α(L) =
log2 L
Thus it suffices to show that (omitting the iteration index
superscript from v)
j
so that
(k)
τi /τi∗
1
P
ũj
∂+ v(L) ≥
(k+l−1) (k−l−1)
(L) = θj
v
(k+l−1)
(L) + (1 − θj
(k+l)
(k+l)
ũ(k+l) (L) = max ũj
j
≤ 1, and
j|(i,j)∈S (k)
)L.
By the second result in lemma 16, ũj
will satisfy the
property (191) for each j. The pointwise maximum in (180):
j
P
∗
wi,j x∗i,j + L1 j|(i,j)∈S
/ (k) wi,j yi,j
u(k)
P
≥
∗
j wi,j yi,j
1
(k) 1
(k)
+ (1 − ωi ) .
= ωi
L
u(k−1)
The second factor in (161) can be bounded by the expression
P
(k)
0
1
j 0 6=j wi,j xi,j 0
≤P
≤ 1.
(188)
∗
0
L
j 0 6=j wi,j xi,j 0
(k)
τi
τi∗
v(L) log v(L)
,
L log L
v(L) log v(L)
.
L log L
(191)
We prove by induction, showing that the property in (191) is
maintained through the recursion in (180) and (185). The base
case is established by noting that if v (k−1) = 1, then (191)
holds. Now suppose that the property (191) is held for some
iteration (k + l − 1) with upper bound v (k+l−1) (L). Then let
∂− v(L) ≥
(k+l)
Proof. Following similar steps to the proof of lemma 13, we
define:
X
X
(k)
(k)
τi∗ =
wi,j x∗i,j , τi =
wi,j xi,j ,
(187)
25
(L)
will introduce a finite number of points where ũ(k+l) (L) is
continuous but the derivative is discontinuous. However, the
one-sided derivative at any of these points will satisfy (191)
since each component in the pointwise maximum satisfied it.
Finally, the result of (180) is:
u(k+l) (L) = [ũ(k+l) (L)]1−γ Lγ .
The first result in lemma 16 shows that this will satisfy (191).
Now we need to prove that the other half-iteration (185)
maintains the property (191). First, note that if L̄ = 1/L and
Substituting these bounds into (161) gives the desired result. ū(k+l) = 1/u(k+l) then the final result in lemma 16 shows
that if u(k+l) (L) satisfies (191), then:
We employ lemmas 13 and 14 by setting v (k−1) = 1 (scaling
∗
yi,j
accordingly), and T (k−1) to contain the edge(s) with
(k−1)
∗
yi,j /yi,j
= 1. Iteratively applying the lemmas for n steps,
(k+n)
∗
we find that 1 ≤ yi,j /yi,j
≤ v (k+n) < L ∀ (i, j) since
(k+n)
we will have T
containing all edges (since the graph is
connected). To prove linear convergence, we first need to show
that the distance is reduced by at least a constant, or that
α(L) ,
log v (k+n) (L)
< 1.
log L
ū(k+l) (L̄) log ū(k+l) (L̄)
,
L̄ log L̄
ū(k+l) (L̄) log ū(k+l) (L̄)
∂+ ū(k+l) (L̄) ≥
.
L̄ log L̄
∂− ū(k+l) (L̄) ≥
(192)
Subsequently, if:
(k+l)
ṽi
(k+l) (k−l)
(L̄) = ωi
ū
(k+l)
(L̄) + (1 − ωi
)L̄,
(k+l)
(189)
then the second result in lemma 16 establishes that ṽi
will
satisfy the property (192) for each i. As with the pointwise
maximum in the the previous case, the pointwise minimum
(185):
(k+l)
ṽ (k+l) (L̄) = min ṽi
(L̄)
where v (k+n) (L) depends on L through the recursion in (180)
and (185). The inequality in (189) can be established simply
i
by commencing from v (k−1) = 1, and observing that if
(k+l−1)
(k+l)
(k+l)
v
< L then u
< L and v
< L for any l > 0. will introduce a finite number of points where ṽ (k+l) (L̄) is
This is not adequate to prove convergence; we further need continuous but the derivative is discontinuous. However, the
to show that the constant α(L) is non-decreasing in L. This one-sided derivative at any of these points will satisfy (192).
ensures that the initial contraction rate (for the first n iterations) The first result in lemma 16 shows that the composition:
applies, at least, in all subsequent n-step iteration blocks.
v̄ (k+l) (L̄) = [ṽ (k+l) (L̄)]1−γ L̄γ
Lemma 15. α(L) is continuous and non-decreasing in L, i.e.,
its left and right derivatives everywhere satisfy
will also satisfy (192). Finally, the result of (185) is:
∂− α(L) ≥ 0,
∂+ α(L) ≥ 0.
v (k+l) (L) = 1/v̄ (k+l) (1/L)
26
ACCEPTED FOR PUBLICATION, IEEE TRANSACTIONS ON SIGNAL PROCESSING
which will satisfy (192) by the final result in lemma 16.
du(x)
dx
u(x) log u(x)
.
x log x
Lemma 16. Suppose that
≥
Then if v(x)
is given by any of the following:
1) v(x) = u(x)1−γ xγ ,
2) v(x) = θu(x) + (1 − θ)x
v(x) log v(x)
then dv(x)
. Finally, if y = 1/x and
dx ≥
x log x
v(y) = 1/u(x) = 1/u(1/y),
then
dv(y)
dy
≥
where
φis (j) = ∆γs [1 + log(1 − qsi,j )] − ∆βs [1 + log qs0,j ]. (194)
Consider the KKT conditions relating to [qsi,j ], since all
modifications relate to these variables. The conditions in the
original problem are:
νsi,j + ρis + σsj + γs log(1 − qsi,j ) + γs = 0,
νs0,j + σsj + βs log qs0,j + βs = 0,
v(y) log v(y)
.
y log y
(195)
(196)
where νsi,j is the dual variable for the
in (55), ρis is
Pconstraint
ms i,j
the dual variable for the constraint j=0 qs = 1, and σsj is
the dual variable for (57). For the modified problem, the same
two KKT conditions are:
Proof. For the first case:
dv(x)
du(x)
= (1 − γ)
u(x)−γ xγ + γu(x)1−γ xγ−1
dx
dx
u(x) log u(x)
≥ (1 − γ)
u(x)−γ xγ + γu(x)1−γ xγ−1
ν̄si,j + ρ̄is + σ̄sj + γ̄s log(1 − q̄si,j ) + γ¯s − φis (j) = 0, (197)
x log x
ν̄s0,j + σ̄sj + β̄s log q̄s0,j + β̄s = 0.
(198)
u(x)1−γ xγ [(1 − γ) log u(x) + γ log x]
=
x log x
Substituting in (194) and expanding γ̄s and β̄s , we find:
v(x) log v(x)
.
=
ν̄si,j + ρ̄is + σ̄sj + [γs + ∆γs ] log(1 − q̄si,j ) + γs + ∆γs
x log x
− ∆γs [1 + log(1 − qsi,j )] + ∆βs [1 + log qs0,j ] = 0, (199)
For the second case:
dv(x)
du(x)
Subsequently, by setting q̄si,j = qsi,j , q̄s0,j = qs0,j , ν̄si,j = νsi,j ,
=θ
+ (1 − θ)
dx
dx
ρ̄is = ρis and
u(x) log u(x)
+ (1 − θ)
≥θ
σ̄sj = σsj − ∆βs [1 + log qs0,j ],
(200)
x log x
θu(x) log u(x) + (1 − θ)x log x
we find a primal-dual solution (with identical primal values
=
x log x
[qsi,j ]) that satisfies the KKT conditions for the modified
v(x) log v(x)
problem, providing a certificate of optimality.
,
≥
x log x
where the final inequality is the result of convexity of x log x.
For the final result, let f (x) = 1/u(x) and g(y) = 1/y and
apply the chain rule:
dv(y)
= f 0 (g(y)) × g 0 (y)
dy
u0 (1/y)
1
=−
×− 2
2
u(1/y)
y
1
u(1/y) log u(1/y)
≥
×
u(1/y)2 y 2
(1/y) log(1/y)
[1/u(1/y)] log[1/u(1/y)]
v(y) log v(y)
=
=
.
y log y
y log y
A PPENDIX E
P ROOF OF SEQUENTIAL MODIFICATION
This section proves theorem 3, i.e., that the solution of the
problem in (58) is the same as the solution of the modified
problem of the same form, changing γs to γ̄s = γs + ∆γs , βs
to β̄s = βs + ∆βs , and ψsi (xi , ais ) as described in (86).
Proof of theorem 3: Let FBγ,β be the original problem
(in (58)) and F̄Bγ̄,β̄ be the modified problem. Note that the
modifying term in (86) depends only on ais , so we can
equivalently implement the modification by retaining the
unmodified ψsi (xi , ais ) and incorporating an additive term
− E[φis (ais )] = −
ms
X
j=0
qsi,j φis (j),
(193)
Jason L. Williams (S’01–M’07–SM’16) received
degrees of BE(Electronics)/BInfTech from Queensland University of Technology in 1999, MSEE from
the United States Air Force Institute of Technology
in 2003, and PhD from Massachusetts Institute of
Technology in 2007. He worked for several years as
an engineering officer in the Royal Australian Air
Force, before joining Australia’s Defence Science
and Technology Group in 2007. He is also an
adjunct associate professor at Queensland University
of Technology. His research interests include target
tracking, sensor resource management, Markov random fields and convex
optimisation.
Roslyn A. Lau (S’14) received the degrees of BE
(Computer Systems)/BMa&CS (Statistics) in 2005,
and MS (Signal Processing) in 2009, all from the
University of Adelaide, Adelaide, Australia. She is
currently a PhD candidate at the Australian National
University. She is also a scientist at the Defence
Science and Technology Group, Australia. Her research interests include target tracking, probabilistic
graphical models, and variational inference.
| 2 |
1
Sharing Storage in a Smart Grid: A Coalitional
Game Approach
arXiv:1712.02909v1 [] 8 Dec 2017
Pratyush Chakraborty∗, Enrique Baeyens∗, Kameshwar Poolla, Pramod P. Khargonekar, and Pravin Varaiya
Abstract—Sharing economy is a transformative socio-economic
phenomenon built around the idea of sharing underused resources and services, e.g. transportation and housing, thereby
reducing costs and extracting value. Anticipating continued
reduction in the cost of electricity storage, we look into the
potential opportunity in electrical power system where consumers
share storage with each other. We consider two different scenarios. In the first scenario, consumers are assumed to already
have individual storage devices and they explore cooperation
to minimize the realized electricity consumption cost. In the
second scenario, a group of consumers is interested to invest in
joint storage capacity and operate it cooperatively. The resulting
system problems are modeled using cooperative game theory. In
both cases, the cooperative games are shown to have non-empty
cores and we develop efficient cost allocations in the core with
analytical expressions. Thus, sharing of storage in cooperative
manner is shown to be very effective for the electric power system.
Index Terms—Storage Sharing, Cooperative Game Theory,
Cost Allocation
I. I NTRODUCTION
A. Motivation
The sharing economy is disruptive and transformative socioeconomic trend that has already impacted transportation and
housing [1]. People rent out (rooms in) their houses and
use their cars to provide transportation services. The business
model of sharing economy leverages under utilized resources.
Like these sectors, many of the resources in electricity grid
is also under-utilized or under-exploited. There is potential
benefit in sharing the excess generation by rooftop solar
panels, sharing flexible demand, sharing unused capacity in
the storage services, etc. Motivated by the recent studies [2]
predicting a fast drop in battery storage prices, we focus on
sharing electric energy storage among consumers.
B. Literature Review
Storage prices are projected to decrease by more than 30%
by 2020. The arbitrage value and welfare effects of storage in
This research is supported by the National Science Foundation under grants
EAGER-1549945, CPS-1646612, CNS-1723856 and by the National Research
Foundation of Singapore under a grant to the Berkeley Alliance for Research
in Singapore
∗ The first two authors contribute equally
Corresponding author P. Chakraborty is with the Department of Mechanical
Engineering, University of California, Berkeley, CA, USA
E. Baeyens is with the Instituto de las Tecnologı́as Avanzadas de la
Producción, Universidad de Valladolid, Valladolid, Spain
K. Poolla and P. Varaiya are with the Department of Electrical Engineering
and Computer Science, University of California, Berkeley, CA, USA
P. P. Khargonekar is with the Department of Electrical Engineering and
Computer Science, University of California, Irvine, CA, USA
electricity markets has been explored in literature. In [3], the
value of storage arbitrage was studied in deregulated markets.
In [4], the authors studied the role of storage in wholesale
electricity markets. The economic viability of the storage
elements through price arbitrage was examined in [5]. Agentbased models to explore the tariff arbitrage opportunities for
residential storage systems were introduced in [6]. In [7], [8],
authors address the optimal control and coordination of energy
storage. All these works explore the economic value of storage
to an individual, not for shared services. Sharing of storage
among firms has been analyzed using non-cooperative game
theory in [9]. But the framework needs a spot market among
the consumers and also coordination is needed among the firms
that are originally strategic.
In this paper, we explore sharing storage in a cooperative
manner among consumers. Cooperative game theory has significant potential to model resource sharing effectively [10].
Cooperation and aggregation of renewable energy sources
bidding in a two settlement market to maximize expected
and realized profit has been analyzed using cooperative game
theory in [11]–[13]. Under a cooperative set-up, the cost
allocation to all the agents is a crucial task. A framework
for allocating cost in a fair and stable way was introduced
in [14]. Cooperative game theoretic analysis of multiple demand response aggregators in a virtual power plant and their
cost allocation has been tackled in [15]. In [16], sharing opportunities of photovoltaic systems (PV) under various billing
mechanisms were explored using cooperative game theory.
C. Contributions and Paper Organization
In this paper, we investigate the sharing of storage systems
in a time of use (TOU) price set-up using cooperative game
theory. We consider two scenarios. In the first one, a group of
consumers already own storage systems and they are willing to
operate all together to minimize their electricity consumption
cost. In a second scenario, a group of consumers wish to
invest in a shared common storage system and get benefit
for long term operation in a cooperative manner. We model
both the cases using cooperative game theory. We prove that
the resulting games developed have non-empty cores, i.e.,
cooperation is shown to be beneficial in both the cases. We also
derive closed-form and easy to compute expressions for cost
allocations in the core in both the cases. Our results suggest
that sharing of electricity storage in a cooperative manner is
an effective way to amortize storage costs and to increase
its utilization. In addition, it can be very much helpful for
consumers and at the same time to integrate renewables in the
2
Consumer 1
Consumer 2
Consumer 3
Storage
Storage
Storage
Consumer 1
Consumer 2
Consumer 3
which corresponds to γi ∈ [0, 1]. The consumers discharge
their storage during peak hours and charge them during offpeak hours.
The daily cost of storage of a consumer i ∈ N for the peak
period consumption xi depends on the capacity investment Ci
and is given by
Storage
J(xi , Ci ) = πi Ci + πh (xi − Ci )+ + πℓ min{Ci , xi },
Fig. 1. Configuration of three consumers in the two analyzed scenarios
system, because off-peak periods correspond to large presence
of renewables that can be stored for consumption during peak
periods.
The remainder of the paper is organized as follows. In
Section II, we formulate the cooperative storage problems. A
brief review of cooperative game theory is presented in Section
III. In Section IV, we state and explain our main results. A
case study illustrating our results using real data from Pecan
St. Project is presented in Section VI. Finally, we conclude
the paper in Section VII.
II. P ROBLEM F ORMULATION
A. System Model
We consider a set of consumers indexed by i ∈ N :=
{1, 2, . . . , N }. The consumers invest in storage. The consumers cooperate and share their storage with each other. We
consider two scenarios here. In the scenario I, the consumers
already have storage and they operate with storage devices
connected to each other. In the scenario II, the consumers
wish to invest in a common storage. There is a single meter
for this group of consumers. We assume that there is necessary
electrical connection between all the consumers for effective
sharing. We ignore here the capacity constraints, topology
or losses in the connecting network. The configuration of
the scenarios with three consumers are depicted in Figure 1.
Examples of the situations considered here include consumers
in an industrial park, office buildings on a campus, or homes
in a residential complex.
(4)
where πi Ci is the capital cost of acquiring Ci units of storage
capacity, πh (xi − Ci )+ is the daily cost of the electricity
purchase during peak price period, and πℓ min{xi , Ci } is the
daily cost of the electricity purchase during off-peak period to
be stored for consumption during the peak period. We ignore
the off-peak period electricity consumption of the consumer
from the expression of J as its expression is independent of
the storage capacity. The daily peak consumption of electricity
is not known in advance and we assume it to be a random
variable. Let F be the joint cumulative distribution function
(CDF) of the collection of random variables {xi : i ∈ N } that
represents the consumptions of the consumers in N . If S ⊆ N
is a subset of consumers, then xS denotes the aggregated peak
consumption of S and its CDF is FS .
The daily cost of storage of a group of P
consumers S ⊆ N
with aggregated peak consumption xS = i∈S xi and joint
storage capacity CS is
J(xS , CS ) = πS CS + πh (xS − CS )+ + πℓ min{CS , xS }
(5)
where πS is the daily capital cost of aggregated storage of the
group amortized during its life span. Note that the individual
storage costs (4) are obtained from (5) for the singleton sets
S = {i}.
The daily cost of storage given by (4) and (5) are random
variables with expected values
JS (CS ) = EJS (xS , CS ),
S ⊆ N.
(6)
In the sequel, we will distinguish between the random variables and their realized values by using bold face fonts xS for
the random variables and normal fonts xS for their realized
values.
B. Cost of Storage
Each day is divided into two periods –peak and off-peak.
There is a time-of-use pricing. The peak and off-peak period
prices are denoted by πh and πℓ respectively. The prices are
fixed and known to all the consumers.
Let πi be the daily capital cost of storage of the consumer
i ∈ N amortized over its life span. Let the arbitrage price be
defined by
πδ := πh − πℓ
(1)
and define the arbitrage constant γi as follows:
γi :=
πδ − πi
πδ
(2)
In order to have a viable arbitrage opportunity, we need
πi ≤ πδ
(3)
C. Quantifying the Benefit of Cooperation Benefit
We are interested in studying and quantifying the benefit of
cooperation in the two scenarios. In the first scenario, the consumers already have installed storage capacity {Ci : i ∈ N }
that they acquired in the past. Each of the consumers can have
a different storage technology that was acquired at a different
time compared to the other consumers. Consequently, each
consumer has a different daily capital cost πi . The consumers
aggregate their storage capacities and they operate using the
same strategy, they use the aggregated storage capacity to store
energy during off-peak periods that they will later use during
peak periods. By aggregating storage devices, the unused
capacity of some consumers is used by others producing
cost savings for the group. We analyze this scenario using
cooperative game theory and develop an efficient allocation
3
rule of the daily storage cost that is satisfactory for every
consumer.
In the second scenario, we consider a group of consumers
that join to buy storage capacity that they want to use in a
cooperative way. First, the group of consumers have to make
a decision about how much storage capacity they need to
acquire and then they have to share the expected cost among
the group participants. The decision problem is modeled as an
optimization problem where the group of consumers minimize
the expected cost of daily storage. The problem of sharing the
expected cost is modeled using cooperative game theory. We
quantify the reduction in the expected cost of storage for the
group and develop a mechanism to allocate the expected cost
among the participants that is satisfactory for all of them.
III. BACKGROUND : C OALITIONAL G AME T HEORY
C OST S HARING
FOR
Game theory deals with rational behavior of economic
agents in a mutually interactive setting [17]. Broadly speaking,
there are two major categories of games: non-cooperative
games and cooperative games. Cooperative games (or coalitional games) have been used extensively in diverse disciplines
such as social science, economics, philosophy, psychology
and communication networks [10], [18]. Here, we focus on
cooperative games for cost sharing [19].
Let N := {1, 2, . . . , N } denote a finite collection of
players. In a cooperative game for cost sharing, the players
want to minimize their joint cost and share the resulting cost
cooperatively.
Definition 1 (Coalition): A coalition is any subset S ⊆ N .
The number of players in a coalition S is denoted by its
cardinality, |S|. The set of all possible coalitions is defined
as the power set 2N of N . The grand coalition is the set of
all players, N .
Definition 2 (Game and Value): A cooperative game is
defined by a pair (N , v) where v : 2N → R is the value
function that assigns a real value to each coalition S ⊆ N .
Hence, the value of coalition S is given by v(S). For the cost
sharing game, v(S) is the total cost of the coalition.
Definition 3 (Subadditive Game): A cooperative game
(N , v) is subadditive if, for any pair of disjoint coalitions
S, T ⊂ N with S ∩T = ∅, we have v(S)+v(T ) ≥ v(S ∪T ).
Here we consider the value of the coalition v(S) is transferable among players. The central question for a subadditive
cost sharing game with transferrable value is how to fairly
distribute the coalition value among the coalition members.
Definition 4 (Cost Allocation): A cost allocation for the
coalition S ⊆ N is a vector x ∈ RN whose entry xi represents
the allocation to member i ∈ S (xi = 0, i ∈
/ S).
For any coalition S ⊆ N , let xS denote the sum
P of cost
allocations for every coalition member, i.e. xS = i∈S xi .
Definition 5 (Imputation): A cost allocation x for the grand
coalition N is said to be an imputation if it is simultaneously
efficient –i.e. v(N ) = xN , and individually rational –i.e.
v(i) ≥ xi , ∀i ∈ N . Let I denote the set of all imputations.
The fundamental solution concept for cooperative games is
the core [17].
Definition 6 (The Core): The core C for the cooperative
game (N , v) with transferable cost is defined as the set of
cost allocations such that no coalition can have cost which is
lower than the sum of the members current costs under the
given allocation.
(7)
C := x ∈ I : v(S) ≥ xS , ∀S ∈ 2N .
A classical result in cooperative game theory, known as
Bondareva-Shapley theorem, gives a necessary and sufficient
condition for a game to have nonempty core. To state this
theorem, we need the following definition.
Definition 7 (Balanced Game and Balanced Map): A cooperative game (NP
, v) for cost sharing is balanced if for any
balanced map α, S∈2N α(S)v(S) ≥ v(N ) where the map
α : 2N →
P [0, 1] is said to be balanced if for all i ∈ N ,
we have S∈2N α(S)1S (i) = 1, where 1S is the indicator
function of the set S, i.e. 1S (i) = 1 if i ∈ S and 1S (i) = 0
if i 6∈ S.
Next we state the Bondareva-Shapley theorem.
Theorem 1 (Bondareva-Shapley Theorem [10]): A coalitional game has a nonempty core if and only if it is balanced.
If a game is balanced, the nucleolus [18] is a solution that
is always in the core.
IV. M AIN R ESULTS
A. Scenario I: Realized Cost Minimization with Already Procured Storage Elements
Our first concern is to study if there is some benefit in
cooperation of the consumers by sharing the storage capacity
that they already have. To analyze this scenario we shall
formulate our problem as a coalitional game.
1) Coalitional Game and Its Properties: The players of the
cooperative game are the consumers that share their storage
and want to reduce their realized joint storage investment cost.
For any coalition S ⊆ N , the cost of the coalition is u(S)
which
is the realized cost of the joint storage investment CS =
P
C
i∈S i . Each consumer may have a different daily capital
cost of storage {πi : i ∈ N }, because they did not necessarily
their storage systems at the same time or at the same price for
KW. The realized cost
P of the joint storage for the peak period
consumption xS = i∈S xi is given by
u(S) = J(xS , CS )
(8)
where J was defined in (5). Since we are using the realized
value of the aggregated peak consumption xS , J(xS , CS ) is
not longer a random variable.
In order to show that cooperation is advantageous for the
members of the group, we have to prove that the game
is subadditive. In such a case, the joint daily investment
cost of the consumers is never greater that the sum of the
individual daily investment costs. Subadditivity of the cost
sharing coalitional game is established in Theorem 2.
Theorem 2: The cooperative game for storage investment
cost sharing (N , u) with the cost function u defined in (8) is
subadditive.
4
Proof: See appendix.
However, subadditivity is not enough to provide satisfaction
of the coalition members. We need a stabilizing allocation
mechanism of the aggregated cost. Under a stabilizing cost
sharing mechanism no member in the coalition is impelled
to break up the coalition. Such a mechanism exists if the cost
sharing coalitional game is balanced. Balancedness of the cost
sharing coalitional game is established in Theorem 3.
Theorem 3: The cooperative game for storage investment
cost sharing (N , u) with the cost function u defined in (8) is
balanced.
Proof: See the appendix.
2) Sharing of Realized Cost: Since the cost sharing cooperative game (N , u) is balanced, its core is nonempty and there
always exist cost allocations that stabilize the grand coalition.
One of this coalitions is the nucleolus while another one is the
allocation that minimizes the worst case excess [12]. However,
computing these allocations requires solving linear programs
with a number of constraints that grows exponentially with
the cardinality of the grand coalition and they can be only
applied for coalitions of moderate size. As an alternative to
these computationally intensive cost allocations, we propose
the following cost allocation.
Allocation 1: Define the cost allocation {ξi : i ∈ N } as
follows:
πi Ci + πh (xi − Ci ) + πℓ Ci , if xN ≥ CN
ξi :=
(9)
πi Ci + πℓ xi ,
if xN < CN
for all i ∈ N .
We establish in Theorem 4, this cost allocation belongs to
the core of the cost sharing cooperative game.
Theorem 4: The cost allocation {ξi : i ∈ N } defined in
Allocation 1 belongs to the core of the cost sharing cooperative
game (N , u).
Proof: See appendix.
Unlike the nucleolus or the cost allocation minimizing the
worst-case excess, Allocation 1 has an analytical expression
and can be easily obtained without any costly computation.
Thus, we have developed a strategy such that consumers
that independently invested in storage, and are subject to a
two period (peak and off-peak) TOU pricing mechanism can
reduce their costs by sharing their storage devices. Moreover,
we have proposed a cost sharing allocation rule that stabilizes
the grand coalition. This strategy can be considered a weak
cooperation because each consumer acquired its storage capacity independently of each other, but they agree to share the
joint storage capacity.
In the next section we consider a stronger cooperation
problem, where a group of consumers decide to invest jointly
in storage capacity.
B. Scenario II: Expected Cost Minimization for Joint Storage
Investment
In this scenario, we consider a group of consumers indexed
by i ∈ N , that decide to jointly invest in storage capacity.
We are interested in studying whether cooperation provides a
benefit for the coalition members for the long term.
1) Coalitional Game and Its Properties: Similar to the
previous case, only the peak consumption is relevant in the
investment decision. Let xi denote the daily peak period
consumption of consumer i ∈ N . Unlike the previous scenario, here xi is a random variable with marginal cumulative
distribution function (CDF) Fi . The daily cost of the consumer
i ∈ N depends on the storage capacity investment of the
consumer as per (4). This cost is also a random variable. If
the consumer is risk neutral, it acquires the storage capacity
Ci∗ that minimizes the expected value of the daily cost
Ci∗ = arg min Ji (Ci ),
(10)
Ji (Ci ) = EJ(xi , Ci ),
(11)
Ci ≥0
where
and πS is the daily capital cost of storage amortized over its
lifespan that in this case is the same for each of the consumers
–i.e. πi = πS for all i ∈ N , because we assume that they buy
storage devices of the same technology at the same time. This
problem has been previously solved in [9] and its solution is
given by Theorem 5.
Theorem 5 ( [9]): The storage capacity of a consumer i ∈ N
that minimizes its daily expected cost is Ci∗ , where
πδ − πS
= γS
Fi (Ci∗ ) =
πδ
and the resulting optimal cost is
Ji∗ = Ji (Ci∗ ) = πℓ E[xi ] + πS E[xi | xi ≥ Ci∗ ].
(12)
Let us consider a group of consumers S ⊆ N that decide
to join to invest in joint storage capacity.
P The joint peak
consumption of the coalition is xS =
i∈S xi with CDF
FS . We also assume that the joint CDF of all the agent’s
peak consumptions F is known or can be estimated from
historical data. By applying Theorem 5, the optimal investment
in storage capacity of the coalition S ⊆ N is CS∗ such that
FS (CS∗ ) = γS and the optimal cost is
JS∗ = JS (CS∗ ) = πℓ E[xS ] + πS E[xS | xS ≥ CS∗ ].
(13)
Consider the cost sharing cooperative game (N , v) where
the cost function v : 2N → R is defined as follows
v(S) = JS∗ = arg min JS (CS ),
CS ≥0
(14)
where JS∗ was defined in (13).
Similar to the case of consumers that already own storage
capacity and decide to join to reduce their costs, here we prove
that the cooperative game is subadditive so that the consumer
obtain a reduction of cost. This is the result in Theorem 6.
Theorem 6: The cooperative game for storage investment
cost sharing (N , v) with the cost function v defined in (14) is
subadditive.
Proof: See appendix.
We also need a cost allocation rule that is stabilizing.
Theorem 7 establishes that the game is balanced and has a
stabilizing allocation.
Theorem 7: The cooperative game for storage investment
cost sharing (N , v) with the cost function v defined in (14) is
balanced.
5
Proof: See appendix.
2) Stable Sharing of Expected Cost: Similar to the previous
scenario, we were able to develop a cost allocation rule that is
in the core. This cost allocation rule has an analytical formula
and can be efficiently computed. This allocation rule is defined
as follows.
Allocation 2: Define the cost allocation {ζi : i ∈ N } as
follows:
∗
ζi := πℓ E[xi ] + πS E[xi | xN ≥ CN
], i ∈ N .
(15)
In the next theorem, we prove that Allocation 2 provides
a sharing mechanism of the expected daily storage cost of a
coalition of agents that is in the core of the cooperative game.
Theorem 8: The cost allocation {ζi : i ∈ N } defined in
Allocation 2 belongs to the core of the cost sharing cooperative
game (N , v).
Proof: See appendix.
3) Sharing of Realized Cost: Based on the above results,
the consumers can invest on joint storage and they will make
savings for long term. But the cost allocation ζi defined by
(15) is in expectation. The realized allocation will be different
due to the randomness of the daily consumption. Here we
develop a daily cost allocation for the k-th day as
k
ρki = βi πN
,
(16)
k
where πN
is the realized cost for the grand coalition on the
k-th day and βi = PNζi ζ .
i=1 i
PN
P
N
k
As i=1 βi = 1, i=1 ρki = πN
and the cost allocation
is budget
balanced.
Also
using
strong
law of large numbers,
PK
1
k
ρ
→
ζ
as
K
→
∞
and
the
realized allocation is
i
k=1 i
K
strongly consistent with the fixed allocation ζi .
V. B ENEFIT OF C OOPERATION
A. Scenario I
The benefit of cooperation by joint operation of storage
reflected in the total reduction of cost is given by
X
X
Ji − JS = πh ( (xi − Ci )+ − (xS − CS )+ )+
i∈S
i∈S
X
πℓ (
min{Ci , xi } − min{CS , xS }),
(17)
i∈S
where the reduction for individual agent with cost allocation
(9) is
πδ (Ci − xi )+ , if xN ≥ CN
Ji − ζi :=
(18)
πδ (xi − Ci )+ , if xN < CN
B. Scenario II
The benefit of cooperation given by the reduction in the
expected cost that the coalition S obtains by jointly acquiring
and exploiting the storage is
X
Ji∗ − JS∗ =
i∈S
πS
X
i∈S
E[xi | xi ≥ Ci∗ ] − πS E[xS | xS ≥ CS∗ ],
(19)
Fig. 2. Estimated CDFs of the peak consumption of the five households and
their aggregated consumption
TABLE I
C ORRELATION COEFFICIENTS FOR THE FIVE
1
2
3
4
5
1
1.000000
0.363586
0.297733
0.292073
0.486665
2
0.363586
1.000000
0.132320
0.453056
0.157210
3
0.297733
0.132320
1.000000
0.085869
0.365212
HOUSEHOLDS
4
0.292073
0.453056
0.085868
1.000000
-0.056696
5
0.486665
0.157210
0.365212
-0.056696
1.000000
and the reduction in expected cost of each participant assuming
that the expected cost of the coalition is split using cost
allocation (15) is
Ji∗ − ζi = πS E[xi | xi ≥ Ci∗ ] − πS E[xi | xS ≥ CS∗ ]. (20)
VI. C ASE S TUDY
We develop a case study to illustrate our results. For this
case study, we used data from the Pecan St project [20]. We
consider a two-period ToU tariff with πh = 55¢/KWh, and
πℓ = 20¢/KWh. Electricity storage is currently expensive. The
amortized cost of Tesla’s Powerwall Lithium-ion battery is
around 25¢/KWh per day. But storage prize is projected to
reduce by 30% by 2020 [21]. Keeping in mind this projection,
we consider πS = 15¢/KWh.
A group of five household decide to join to acquire storage.
Using historical data of 2016, we estimate the individual CDFs
of their daily peak consumptions and the CDF of the daily
joint peak consumption. Peak consumption period in Texas
corresponds to non-holidays and non-weekends from 7h to
23h. The estimated CDFs for peak consumption are depicted
in Figure 2. From this figure, we can see that the shape of the
CDFs are quite similar for the five households. The correlation
coefficients of these five households are given in Table I.
Although the shape of the CDFs are very similar, the peak
consumptions are not completely dependent. This means that
there is room for reduction in cost by making a coalition.
The optimal investments in storage for the five households
and for the grand coalition are given in Table II. Also in
this table, we show the allocation of the expected storage
cost given by (15). The reduction in cost for the consumers
coalition is about 5%, however those with less correlation with
the other, have a larger reduction. Consumers 3 and 4 have
cost reductions higher than 7%, while consumer 1, whose
consumption is more correlated with the other, have about
2.4% of cost reduction.
6
TABLE II
O PTIMAL STORAGE CAPACITY INVESTMENTS ( IN KW H ), MINIMAL
EXPECTED STORAGE COST ( IN $) AND EXPECTED COST ALLOCATION OF
THE GRAND COALITION ( IN $)
Ci∗
Ji∗
ζi
1
22.98
899.76
882.45
2
14.09
579.79
543.10
3
12.64
600.88
550.02
4
13.21
525.51
488.20
5
29.82
1189.41
1140.35
N
95.58
3604.13
3604.13
TABLE III
A LLOCATION OF THE REALIZED COST FOR S CENARIO I FOR THE FIRST
TEN DAYS OF THE YEAR ( IN $)
Day
1
2
3
4
5
6
7
8
9
10
ξ1
492.66
464.89
541.21
675.74
761.41
646.05
654.47
583.59
640.46
604.49
ξ2
612.83
624.96
482.61
373.95
403.49
516.53
760.99
411.25
394.04
446.14
ξ3
436.88
343.61
299.84
377.64
405.52
404.89
387.80
533.00
482.85
475.46
ξ4
549.61
567.21
541.40
418.01
371.64
573.17
536.92
455.56
483.24
310.22
ξ5
904.69
947.27
820.46
734.10
799.23
812.54
797.46
831.97
787.20
791.60
R EFERENCES
Now, we assume that the five households buy storage independently and then decide to cooperate by sharing their storage
to reduce the realized cost. This corresponds to Scenario I.
For simplicity of computation and comparison with scenario
II, we consider πi = πS for all i. The realized cost is allocated
using (9). In Table III, we show the allocation of the realized
aggregated cost for the ten first days of 2016, assuming that
the households have storage capacities {Ci∗ : i ∈ N }.
Finally, in Figure 3, we depict the evolution of the average
allocation of the realized cost of storage to each household for
the 2016 year. The average allocation for D days is given by
D
1 X
ξi , i ∈ N ,
ξ¯i (D) =
D i=1
(21)
where D is the number of days. The average cost allocation is
compared to the optimal expected costs Ji∗ . Assuming stationarity of the peak consumptions random variables, the expected
allocation converge to some values ξi∞ = limD→∞ ξ̄i (D) ≤
Ji∗ for i ∈ N , as it is shown in Figure 3.
VII. C ONCLUSIONS
In this paper, we explored sharing opportunities of electricity storage elements among a group of consumers. We
Fig. 3. Average allocation of the realized storage cost
used cooperative game theory as a tool for modeling. Our
results prove that cooperation is beneficial for agents that
either already have storage capacity or want to acquire storage
capacity. In the first scenario, the different agents only need
the infrastructure to share their storage devices. In such a case
the operative scheme is really simple, because each agent only
has to storage at off-peak periods as much as possible energy
that they will consume during peak periods. At the end of the
day, the realized cost is shared among the participants. In the
second scenario, the coalition members can take an optimal
decision about how much capacity they jointly acquire by
minimizing the expected daily storage cost. We showed that
the cooperative games in both the cases are balanced. We also
developed allocation rules with analytical formulas in both
the cases. Thus, our results suggest that sharing of storage in
a cooperative way is very much useful for all the agents and
the society.
[1] H. Heinrichs, “Sharing economy: a potential new pathway to sustainability,” Gaia, vol. 22, no. 4, p. 228, 2013.
[2] N. Kittner, F. Lill, and D. M. Kammen, “Energy storage
deployment and innovation for the clean energy transition,”
Nature Energy, vol. 2, p. 17125, 07 2017. [Online]. Available:
http://dx.doi.org/10.1038/nenergy.2017.125
[3] F. Graves, T. Jenkin, and D. Murphy, “Opportunities for electricity
storage in deregulating markets,” The Electricity Journal, vol. 12, no. 8,
pp. 46–56, 1999.
[4] R. Sioshansi, P. Denholm, T. Jenkin, and J. Weiss, “Estimating the value
of electricity storage in pjm: Arbitrage and some welfare effects,” Energy
economics, vol. 31, no. 2, pp. 269–277, 2009.
[5] K. Bradbury, L. Pratson, and D. Patiño-Echeverri, “Economic viability
of energy storage systems based on price arbitrage potential in real-time
us electricity markets,” Applied Energy, vol. 114, pp. 512–519, 2014.
[6] M. Zheng, C. J. Meinrenken, and K. S. Lackner, “Agent-based model
for electricity consumption and storage to evaluate economic viability of
tariff arbitrage for residential sector demand response,” Applied Energy,
vol. 126, pp. 297–306, 2014.
[7] D. Wu, T. Yang, A. A. Stoorvogel, and J. Stoustrup, “Distributed optimal
coordination for distributed energy resources in power systems,” IEEE
Transactions on Automation Science and Engineering, vol. 14, no. 2,
pp. 414–424, 2017.
[8] P. M. van de Ven, N. Hegde, L. Massoulié, and T. Salonidis, “Optimal
control of end-user energy storage,” IEEE Transactions on Smart Grid,
vol. 4, no. 2, pp. 789–797, 2013.
[9] C. Wu, D. Kalathil, K. Poolla, and P. Varaiya, “Sharing electricity
storage,” in Decision and Control (CDC), 2016 IEEE 55th Conference
on. IEEE, 2016, pp. 813–820.
[10] W. Saad, Z. Han, M. Debbah, A. Hjørungnes, and T. Başar, “Coalitional
game theory for communication networks,” IEEE Signal Processing
Magazine, vol. 26, no. 5, pp. 77–97, 2009.
[11] P. Chakraborty, E. Baeyens, P. P. Khargonekar, and K. Poolla, “A
cooperative game for the realized profit of an aggregation of renewable
energy producers,” in 2016 IEEE 55th Conference on Decision and
Control (CDC), Dec 2016, pp. 5805–5812.
[12] E. Baeyens, E. Y. Bitar, P. P. Khargonekar, and K. Poolla, “Coalitional
aggregation of wind power,” IEEE Transactions on Power Systems,
vol. 28, no. 4, pp. 3774–3784, 2013.
[13] P. Chakraborty, “Optimization and control of flexible demand and
renewable supply in a smart power grid,” Ph.D. dissertation, University
of Florida, 2016.
[14] P. Chakraborty, E. Baeyens, and P. P. Khargonekar, “Cost causation
based allocations of costs for market integration of renewable energy,”
IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1–1, 2017.
[15] H. Nguyen and L. Le, “Bi-objective based cost allocation for cooperative
demand-side resource aggregators,” IEEE Transactions on Smart Grid,
2017.
[16] P. Chakraborty, E. Baeyens, and P. P. Khargonekar, “Analysis of solar
energy aggregation under various billing mechanisms,” arXiv preprint
arXiv:1708.05889, 2017.
7
[17] J. Von Neumann and O. Morgenstern, Theory of Games and Economic
Behavior. Princeton University Press, 1944.
[18] R. B. Myerson, Game Theory: Analysis of Conflict. Harvard University
Press, 2013.
[19] K. Jain and M. Mahdian, “Cost sharing,” Algorithmic game theory, pp.
385–410, 2007.
[20] Pecan St. Project. [Online]. Available: http://www.pecanstreet.org/
[21] How
Cheap
Can
Energy
Storage
Get?
Pretty
Darn
Cheap.
[Online].
Available:
http://rameznaam.com/2015/10/14/how-cheap-can-energy-storage-get/
APPENDIX

A. Proof of Theorem 2

We shall prove that J defined by (4) is a subadditive function. For any nonnegative real numbers xS, xT, CS, CT, we define JS = J(xS, CS), JT = J(xT, CT), JS∪T = J(xS + xT, CS + CT); then

JS = Σ_{i∈S} πi Ci + πh (xS − CS)+ + πℓ min{CS, xS},
JT = Σ_{i∈T} πi Ci + πh (xT − CT)+ + πℓ min{CT, xT},
JS∪T = Σ_{i∈S∪T} πi Ci + πh (xS + xT − CS − CT)+ + πℓ min{CS + CT, xS + xT}.

We can distinguish four cases¹: (a) xS ≥ CS and xT ≥ CT; (b) xS ≥ CS, xT < CT and xS + xT ≥ CS + CT; (c) xS ≥ CS, xT < CT and xS + xT < CS + CT; and (d) xS < CS and xT < CT. Using simple algebra it is easy to see that in all of these cases JS∪T ≤ JS + JT, or equivalently,

J(xS + xT, CS + CT) ≤ J(xS, CS) + J(xT, CT),    (22)

and this proves subadditivity of J. Since the storage cost function u(S) = J(xS, CS), the cost sharing cooperative game (N, u) is subadditive.

¹Since xS, xT, CS and CT are arbitrary nonnegative real numbers, any other possible case can be recast as one of these four cases by interchanging S and T.

B. Proof of Theorem 3

We notice that the function J is positive homogeneous, i.e., for any α ≥ 0, J(αxS, αCS) = αJ(xS, CS). J is also subadditive as per Theorem 2. Thus for any arbitrary balanced map α : 2^N → [0, 1],

Σ_{S∈2^N} α(S)u(S) = Σ_{S∈2^N} α(S)J(xS, CS)
  = Σ_{S∈2^N} J(α(S)xS, α(S)CS)    [positive homogeneity]
  ≥ J(Σ_{S∈2^N} α(S)xS, Σ_{S∈2^N} α(S)CS)    [subadditivity]
  = J(Σ_{i∈N} Σ_{S∈2^N} α(S)1S(i)xi, Σ_{i∈N} Σ_{S∈2^N} α(S)1S(i)Ci)
  = J(xN, CN) = u(N),

and this proves that the cost sharing game (N, u) is balanced.

C. Proof of Theorem 4

We begin by proving that the cost allocation (9) is an imputation, i.e. ξ ∈ I. An imputation is a cost allocation satisfying budget balance and individual rationality.

If xN ≥ CN:
Σ_{i∈N} ξi = Σ_{i∈N} πi Ci + πh (xN − CN) + πℓ CN = u(N).

If xN < CN:
Σ_{i∈N} ξi = Σ_{i∈N} πi Ci + πℓ xN = u(N).

Thus, Σ_{i∈N} ξi = u(N) and the cost allocation {ξi : i ∈ N} satisfies budget balance.

The individual cost is:
u({i}) = πi Ci + πh (xi − Ci) + πℓ Ci   if xi ≥ Ci,
u({i}) = πi Ci + πℓ xi                  if xi < Ci.

If xN ≥ CN:
ξi = πi Ci + πh (xi − Ci) + πℓ Ci
   = πi Ci + πℓ xi − πδ (Ci − xi)
   = u({i}) − πδ (Ci − xi)+.

If xN < CN:
ξi = πi Ci + πℓ xi
   = u({i}) − πδ (xi − Ci)+.

Thus, ξi ≤ u({i}) for all i ∈ N, and the cost allocation ξ is individually rational. Since it is also budget balanced, it is an imputation, i.e. ξ ∈ I.

Finally, to prove that the cost allocation ξ belongs to the core of the cooperative game, we have to prove that Σ_{i∈S} ξi ≤ u(S) for any coalition S ⊆ N.

If xN ≥ CN:
Σ_{i∈S} ξi = Σ_{i∈S} πi Ci + πh (xS − CS) + πℓ CS
           = Σ_{i∈S} πi Ci + πℓ xS − πδ (CS − xS)
           = u(S) − πδ (CS − xS)+.

If xN < CN:
Σ_{i∈S} ξi = Σ_{i∈S} πi Ci + πℓ xS
           = u(S) − πδ (xS − CS)+.

Thus, Σ_{i∈S} ξi ≤ u(S) for any S ⊆ N and the cost allocation ξ is in the core of the cooperative game (N, u).
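The four-case argument behind inequality (22) reduces to elementary algebra and can be spot-checked numerically. The sketch below is only illustrative: the prices πh, πℓ and the per-member capacity cost term are made-up stand-ins, and J is implemented directly in the form reproduced above.

```python
# Numerical spot-check of the subadditivity inequality (22).
# Prices and demands are made up; J follows the form used above:
# J(x_S, C_S) = sum_{i in S} pi_i C_i + pi_h (x_S - C_S)^+ + pi_l min(C_S, x_S).
import random

def J(cap_cost, x, C, pi_h, pi_l):
    # cap_cost stands in for the per-member capacity cost term sum_{i in S} pi_i C_i.
    return cap_cost + pi_h * max(x - C, 0.0) + pi_l * min(C, x)

random.seed(0)
pi_h, pi_l, pi_cap = 0.30, 0.10, 0.05   # assumed high/low energy prices and capacity price
for _ in range(10_000):
    xS, xT = random.uniform(0, 10), random.uniform(0, 10)
    CS, CT = random.uniform(0, 10), random.uniform(0, 10)
    JS = J(pi_cap * CS, xS, CS, pi_h, pi_l)
    JT = J(pi_cap * CT, xT, CT, pi_h, pi_l)
    JST = J(pi_cap * (CS + CT), xS + xT, CS + CT, pi_h, pi_l)
    assert JST <= JS + JT + 1e-9        # inequality (22)
print("subadditivity (22) held on all sampled cases")
```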
D. Proof of Theorem 6

Let S and T be two arbitrary nonempty disjoint coalitions, i.e. S, T ⊆ N such that S ∩ T = ∅. Define

Φ(xS) = min_{CS ≥ 0} E J(xS, CS).    (23)

We shall prove that Φ(xS) is a subadditive function. Let CS* and CT* denote minimizers in (23) for S and T, respectively. From the definition of J given in (4),

J(xS, CS*) + J(xT, CT*) ≥ J(xS + xT, CS* + CT*).

Taking expectations on both sides,

Φ(xS) + Φ(xT) ≥ E J(xS + xT, CS* + CT*)
             ≥ min_{C ≥ 0} E J(xS + xT, C)
             = Φ(xS + xT),

and this proves subadditivity of Φ. Subadditivity of the cost sharing cooperative game (N, v) is a consequence of the subadditivity of Φ because v(S) = Φ(xS) for any S ⊆ N.

E. Proof of Theorem 7

First, we prove that the function Φ defined by (23) is positive homogeneous. Observe that if a random variable z has CDF F, then the scaled random variable αz with α > 0 has CDF Fα(θ) = P{αz ≤ θ} = F(θ/α). Then, for any α ≥ 0 and γ ∈ [0, 1], γ = F(C) if and only if γ = Fα(αC). This means that if CS* is such that Φ(xS) = E J(xS, CS*), then Φ(αxS) = E J(αxS, αCS*).

For any α ≥ 0, and from the definition of the daily storage cost J in (4), J(αxS, αCS*) = αJ(xS, CS*). Taking expectations on both sides, Φ(αxS) = αΦ(xS), and this proves positive homogeneity of Φ.

Now, balancedness of the cost sharing cooperative game (N, v) is a consequence of the properties of the function Φ:

Σ_{S∈2^N} α(S)v(S) = Σ_{S∈2^N} α(S)Φ(xS)
  = Σ_{S∈2^N} Φ(α(S)xS)    [positive homogeneity]
  ≥ Φ(Σ_{S∈2^N} α(S)xS)    [subadditivity]
  = Φ(Σ_{i∈N} Σ_{S∈2^N} α(S)1S(i)xi)
  = Φ(xN) = v(N).

F. Proof of Theorem 8

We begin by proving that the cost allocation given by (9) satisfies budget balance:

Σ_{i∈N} ζi = Σ_{i∈N} πℓ E[xi] + Σ_{i∈N} πS E[xi | xN ≥ CN*]
           = πℓ E[Σ_{i∈N} xi] + πS E[Σ_{i∈N} xi | xN ≥ CN*]
           = πℓ E[xN] + πS E[xN | xN ≥ CN*]
           = v(N).

The cost allocation is in the core if we prove that v(S) ≥ Σ_{i∈S} ζi for any coalition S ⊂ N; note that individual rationality is included in this condition.

Let us define the sets A+ = {xN ∈ R+ | xN ≥ CN*}, A− = R+ \ A+, and the auxiliary function ψ(xN) as follows:

ψ(xN) = πh if xN ∈ A+,  ψ(xN) = πℓ if xN ∈ A−.

The storage cost for a coalition S ⊂ N is

v(S) = πℓ E[xS] + πS E[xS | xS ≥ CS*]
     = πS CS* + πh E[(xS − CS*)+] + πℓ E[min{CS*, xS}].

Note that

πh (xS − CS*)+ + πℓ min{CS*, xS} ≥ πh (xS − CS*) + πℓ CS*,

and therefore,

πS CS* + πh E[(xS − CS*)+] + πℓ E[min{CS*, xS}] ≥ πS CS* + πh E[(xS − CS*)] + πℓ CS*.

Let F(xS, xN) be the joint distribution function of the peak consumptions (xS, xN); then

E[ψ(xN)(xS − CS*)] = πℓ ∫_{R+} ∫_{A−} (xS − CS*) dF(xS, xN) + πh ∫_{R+} ∫_{A+} (xS − CS*) dF(xS, xN)
                   ≤ πh ∫_{R+} ∫_{A+ ∪ A−} (xS − CS*) dF(xS, xN)
                   = πh E[(xS − CS*)],

and consequently,

πS CS* + πh E[(xS − CS*)] + πℓ CS* ≥ πS CS* + E[ψ(xN)(xS − CS*)] + πℓ CS*.

Now, we prove that the right-hand side of the previous expression equals Σ_{i∈S} ζi:

πS CS* + E[ψ(xN)(xS − CS*)] + πℓ CS*
  = πS CS* + ∫_{R+} ∫_{R+} ψ(xN)(xS − CS*) dF(xS, xN) + πℓ CS*
  = πS CS* + πℓ ∫_{R+} ∫_{A− ∪ A+} (xS − CS*) dF(xS, xN) + (πh − πℓ) ∫_{R+} ∫_{A+} (xS − CS*) dF(xS, xN) + πℓ CS*
  = πS CS* + πδ ∫_{R+} ∫_{A+} (xS − CS*) dF(xS, xN) + πℓ E[xS]
  = πS CS* + (πS / P{xN ≥ CN*}) ∫_{R+} ∫_{A+} (xS − CS*) dF(xS, xN) + πℓ E[xS]
  = πS (1 / P{xN ≥ CN*}) ∫_{R+} ∫_{A+} xS dF(xS, xN) + πℓ E[xS]
  = πS E[xS | xN ≥ CN*] + πℓ E[xS]
  = Σ_{i∈S} ζi,

where the fourth equality uses the optimality condition πδ P{xN ≥ CN*} = πS for the capacity CN*.

Thus, Σ_{i∈S} ζi ≤ v(S) and the cost allocation {ζi : i ∈ N} is an imputation in the core.
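The expected-cost game used in Theorems 6-8 can also be explored numerically. The sketch below is a rough Monte Carlo illustration, not the model of the paper: the demand distributions, the prices, and the grid search over the capacity C are all assumptions made for the example.

```python
# Illustrative Monte Carlo sketch of Phi(x_S) = min_{C >= 0} E[J(x_S, C)], as used in
# the proofs above. Distributions and prices are made-up stand-ins; the point is only
# to show the capacity optimization and to spot-check subadditivity of Phi.
import random

random.seed(1)
pi_cap, pi_h, pi_l = 0.05, 0.30, 0.10      # assumed capacity / high / low prices

def expected_cost(C, peaks):
    # E[J(x, C)] estimated over sampled peak demands x.
    return pi_cap * C + sum(pi_h * max(x - C, 0.0) + pi_l * min(C, x)
                            for x in peaks) / len(peaks)

def phi(peaks, grid_max=30.0, steps=300):
    # Grid search over the storage capacity C >= 0.
    return min(expected_cost(grid_max * k / steps, peaks) for k in range(steps + 1))

# Two disjoint "coalitions" with independent random daily peaks (made-up model).
xS = [random.gauss(8, 2) for _ in range(2000)]
xT = [random.gauss(5, 1) for _ in range(2000)]
xST = [a + b for a, b in zip(xS, xT)]       # pooled peak of the merged coalition

print(phi(xS), phi(xT), phi(xST))
assert phi(xST) <= phi(xS) + phi(xT) + 1e-6  # subadditivity, cf. Theorem 6
```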
| 3 |
From Imitation to Prediction, Data Compression vs
Recurrent Neural Networks for Natural Language
Processing
Juan Andrés Laura∗ , Gabriel Masi† and Luis Argerich‡
arXiv:1705.00697v1 [cs.CL] 1 May 2017
Departamento de Computación, Facultad de Ingeniería
Universidad de Buenos Aires
Email: ∗ [email protected], † [email protected], ‡ [email protected]
Abstract—In recent studies [1][13][12] Recurrent Neural Networks were used for generative processes and their surprising performance can be explained by their ability to create good predictions. In addition, data compression is also based on prediction. The question, then, is whether a data compressor could be used to perform as well as recurrent neural networks in natural language processing tasks and, if this is possible, whether a compression algorithm is even more intelligent than a neural network in specific tasks related to human language. In our journey we discovered what we think is the fundamental difference between a Data Compression Algorithm and a Recurrent Neural Network.
I. INTRODUCTION
One of the most interesting goals of Artificial Intelligence
is the simulation of different human creative processes like
speech recognition, sentiment analysis, image recognition,
automatic text generation, etc. In order to achieve such goals,
a program should be able to create a model that reflects how
humans think about these problems.
Researchers think that Recurrent Neural Networks (RNN)
are capable of understanding the way some tasks are done such
as music composition, writing of texts, etc. Moreover, RNNs
can be trained for sequence generation by processing real data
sequences one step at a time and predicting what comes next
[1][13].
Compression algorithms are also capable of understanding
and representing different sequences and that is why the compression of a string could be achieved. However, a compression
algorithm might be used not only to compress a string but also
to do non-conventional tasks in the same way as neural nets
(e.g. a compression algorithm could be used for clustering
[11], sequence generation or music composition).
Both neural networks and data compressors have something
in common: they should be able to learn from the input data
to do the tasks for which they are designed. In this way, we
could argue that a data compressor can be used to generate
sequences or a neural network can be used to compress data.
In consequence, if we use the best data compressor to generate sequences, then the results obtained should be better than the ones obtained by a neural network; if this is not true, then the neural network should compress better than the state of the art in data compression.
Our hypothesis is that, if compression is based on learning
from the input data set, then the best compressor for a given
data set should be able to compete with other algorithms in
natural language processing tasks. In the present work, we
will analyze this hypothesis for two given scenarios: sentiment
analysis and automatic text generation.
II. DATA COMPRESSION AS AN ARTIFICIAL INTELLIGENCE FIELD
For many authors there is a very strong relationship between
Data Compression and Artificial Intelligence [8][9]. Data
Compression is about making good predictions which is also
the goal of Machine Learning, a field of Artificial Intelligence.
We can say that data compression involves two steps:
modeling and coding. Coding is a solved problem using
arithmetic compression. The difficult task is modeling. In
modeling the goal is to build a description of the data using the
most compact representation; this is again directly related to
Artificial Intelligence. Using the Minimum Description Length principle [10], the efficiency of a good Machine Learning algorithm can be measured in terms of how well it compresses the training data plus the size of the model itself.
If we have a file containing the digits of π, we can compress the file with a very short program able to generate those digits: gigabytes of information can be compressed into a few thousand bytes. The problem is having a program capable of understanding that our input file contains the digits of π. We can then say that, in order to achieve the best compression
level, the program should be able to always find the most
compact model to represent the data and that is clearly an
indication of intelligence, perhaps even of General Artificial
Intelligence.
III. RNNS FOR DATA COMPRESSION
Recurrent Neural Networks and in particular LSTMs were
used for predictive tasks [7] and for Data Compression [14].
While the LSTMs were brilliant in their text[13], music[12]
and image generation[18] tasks they were never able to defeat
the state of the art algorithms in Data Compression[14].
This might indicate that there is a fundamental difference
between Data Compression and Generative Processes and
between Data Compression Algorithms and Recurrent Neural
Networks. After our experiments we will show that there is indeed a fundamental difference that explains why an RNN can be
the state of the art in a generative process but not in Data
Compression.
IV. SENTIMENT ANALYSIS
A. A Qualitative Approach
The Sentiment of people can be determined according to
what they write in many social networks such as Facebook,
Twitter, etc. It looks like an easy task for humans. However, it may not be so easy for a computer to automatically determine the sentiment behind a piece of writing.
The task of guessing the sentiment of text using a computer
is known as sentiment analysis and one of the most popular
approaches for this task is to use neural networks. In fact,
Stanford University created a powerful neural network for
sentiment analysis [3] which is used to predict the sentiment
of movie reviews taking into account not only the words
in isolation but also the order in which they appear. In our
first experiment, the Stanford neural network and a PAQ
compressor [2] will be used for doing sentiment analysis of
movie reviews in order to determine whether a user likes or not
a given movie. After that, results obtained will be compared.
Both algorithms will use a public data set for movie reviews
[17].
It is important to understand how sentiment analysis could
be done with a data compressor. We start introducing the
concept of using Data Compression to compute the distance
between two strings using the Normalized Compression Distance [16].
NCD(x, y) = [C(xy) − min{C(x), C(y)}] / max{C(x), C(y)}
Where C(x) is the size of applying the best possible
compressor to x and C(xy) is the size of applying the best
possible compressor to the concatenation of x and y.
The NCD is an approximation to the Kolmogorov distance
between two strings using a Compression Algorithm to approximate the complexity of a string because the Kolmogorov
Complexity is uncomputable.
The principle behind the NCD is simple: when we concatenate string y after x then if y is very similar to x we
should be able to compress it a lot because the information in
x contains everything we need to describe y. An observation is
that C(xx) should be equal to C(x), up to a minimal overhead, because the Kolmogorov complexity of a string
concatenated to itself is equal to the Kolmogorov complexity
of the string.
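As a concrete illustration, the following sketch computes the NCD with Python's built-in lzma module standing in for the best possible compressor; the example strings are made up, and a stronger compressor such as PAQ would give a closer approximation to C(·).

```python
# Minimal NCD sketch. lzma is used here only as a convenient stand-in for the
# much stronger PAQ compressor used in the paper; C(.) is approximated by the
# length of the compressed byte string.
import lzma

def C(data: bytes) -> int:
    return len(lzma.compress(data, preset=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x = b"the quick brown fox jumps over the lazy dog " * 50
y = b"the quick brown fox jumps over the lazy cat " * 50
z = b"completely unrelated text about tensors and gradients " * 50
print(ncd(x, y))   # small: near-duplicate strings
print(ncd(x, z))   # larger: unrelated strings
```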
As introduced, a data compressor performs well when it is
capable of understanding the data set that will be compressed.
This understanding often grows when the data set becomes
bigger, and in consequence the compression rate improves. However, this is not the case when future data (i.e. data that has not been compressed yet) has no relation to the already compressed data, because the more similar the new information is to what has already been seen, the better the compression rate.
Let C(X1 , X2 ...Xn ) be a compression algorithm that
compresses a set of n files denoted by X1 , X2 ...Xn . Let
P1 , P2 ...Pn and N1 , N2 ...Nm be a set of n positive reviews
and m negative reviews respectively. Then, a review R can be
predicted positive or negative using the following inequality:
C(P1 , ...Pn , R)−C(P1 , ..., Pn ) < C(N1 , ..., Nm , R)−C(N1 , ..., Nm )
The formula is a direct derivation from the NCD. When
the inequality is not true, we say that a review is predicted
negative.
The order in which files are compressed must be considered.
As you could see from the proposed formula, the review R is
compressed last.
Some people may ask why this inequality works to predict
whether a review is positive or negative. So it is important
to understand this inequality. Suppose that the review R is a
positive review but we want a compressor to predict whether it
is positive or negative. If R is compressed after a set of positive
reviews then the compression rate should be better than the
one obtained if R is compressed after a set of negative reviews
because the review R has more related information with the set
of positive reviews and in consequence should be compressed
better. Interestingly, both the positive and negative set could
have different sizes and that is why it is important to subtract
the compressed file size of both sets in the inequality.
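A minimal sketch of this decision rule is shown below, again with lzma as a stand-in for PAQ and with two tiny made-up training sets; it is meant only to make the inequality concrete, not to reproduce the accuracy reported later.

```python
# Sketch of the compression-based sentiment rule described above.
# A review R is labeled positive when C(P + R) - C(P) < C(N + R) - C(N).
# The "training" sets are tiny made-up strings, so the byte-level sizes are
# only indicative; the paper uses PAQ and a full movie-review dataset.
import lzma

def C(data: bytes) -> int:
    return len(lzma.compress(data, preset=9))

positives = b"great movie wonderful acting loved every minute superb direction " * 20
negatives = b"terrible movie awful acting boring plot waste of time dreadful " * 20

def classify(review: str) -> str:
    r = review.encode()
    pos_gain = C(positives + r) - C(positives)
    neg_gain = C(negatives + r) - C(negatives)
    return "positive" if pos_gain < neg_gain else "negative"

print(classify("loved the acting, a wonderful and superb film"))
print(classify("boring, dreadful plot, a waste of time"))
```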
B. Data Set Preparation
We used the Large Movie Review Dataset [17] which is a
popular dataset for doing sentiment analysis, it has been used
by Kaggle for Sentiment Analysis competitions.
We describe the quantity of movie reviews used in the
following table.
            Total     Training    Test
Positive    12491     9999        2492
Negative    12499     9999        2500
C. PAQ for Sentiment Analysis
The idea of using Data Compression for Sentiment Analysis
is not new, it has been already proposed in [5] but the authors
did not use PAQ.
We chose PAQ [2] because at the time of this writing it was
the best Data Compression algorithm in several benchmarks.
The code for PAQ is available and that was important to
be able to run the experiments. For the Sentiment Analysis
task we used PAQ to compress the positive train set and the
negative train set storing PAQ’s data model for each set. Then
we compressed each test review after loading the positive and
negative models comparing the size to decide if the review
was positive or negative.
D. Experiment Results

In this section, the results obtained are explained, giving a comparison between the data compressor and Stanford's Neural Network for Sentiment Analysis. The following table shows the results obtained:

               PAQ       RNN
Correct        77.20%    70.93%
Incorrect      18.41%    23.60%
Inconclusive    4.39%     5.47%

As can be seen from the table, 77.20% of the movie reviews were correctly classified by the PAQ compressor, whereas 70.93% were correctly classified by Stanford's Neural Network.
There are two main points to highlight according to the
result obtained:
1) Sentiment Analysis could be achieved with a PAQ
compression algorithm with high accuracy ratio.
2) In this particular case, a higher precision can be achieved
using PAQ rather than the Stanford Neural Network for
Sentiment Analysis.
We observed that PAQ was very accurate in determining whether a review was positive or negative; the misclassifications were always difficult reviews, and in some particular cases the compressor outdid the human label. For example, consider the following review:
"The piano part was so simple it could have been picked out with one hand while the player whacked away at the gong with the other. This is one of the most bewilderedly trance-state inducing bad movies of the year so far for me."

This review was labeled positive, but PAQ correctly predicted it as negative; since the review is mislabeled, it counted as a miss in the automated test.

V. AUTOMATIC TEXT GENERATION

This module's goal is to generate automatic text with a PAQ series compressor and compare it with the RNN's results, using specific metrics and scenarios.

The ability of good compressors to make predictions is more than evident. It just requires an entry text (training set) to be compressed. At compression time, each future symbol gets a probability of occurrence: the greater the probability, the better the compression rate when that prediction succeeds; on the other hand, each failure case takes a penalty. At the end of this process, a probability distribution is associated with that entry data.

As a result of that probabilistic model, it is possible to simulate new samples, in other words, to generate automatic text.

A. Data Model

PAQ series compressors use arithmetic coding [2], which encodes symbols according to a probability distribution. This probability lies in the interval [0,1) and, when it comes to arithmetic coding, there are only two possible symbols: 0 and 1. Moreover, this compressor uses contexts, a main part of compression algorithms. They are built from the previous history and can be accessed to make predictions; for example, the last ten symbols can be used to compute the prediction of the eleventh.

Figure 1. We can see that PAQ splits the [0,1) interval giving 1/4 of probability to the bit 0 and 3/4 of probability to the bit 1. When a random number is sampled in this interval it is likely that PAQ will generate a 1 bit. After 8 bits are generated we have a character. Each bit generated is used as context but is not used to update the PAQ models because PAQ should not learn from the text it is randomly generating. PAQ learns from the training text and then generates random text using that model.

PAQ uses an ensemble of several different models to compute how likely a bit 1 or 0 is next. Some of these models are based on the previous n characters or m bits of seen text, other models use whole words as contexts, etc. In order to weight the prediction performed by each model, a neural network is used to determine the weight of each model [15]:

P(1|c) = Σ_{i=1}^{n} P_i(1|c) W_i

where P(1|c) is the probability of the bit 1 with context "c", P_i(1|c) is the probability of a bit 1 in context "c" for model i, and W_i is the weight of model i.

In addition, each model adjusts its predictions based on the new information. When compressing, our input text is processed bit by bit. On every bit, the compressor updates the context of each model and adjusts the weights of the neural network.

Generally, as you compress more information, the predictions will be better.
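The following toy sketch illustrates the form of the mixing equation above: a weighted sum of per-model bit probabilities, with weights nudged after each observed bit. The three "models" and the update rule are simplified stand-ins; real PAQ mixes many context models and adapts its weights differently.

```python
# Toy illustration of the mixing equation P(1|c) = sum_i P_i(1|c) W_i.
# The per-model predictions and the weight update below are made-up stand-ins.
def mix(predictions, weights):
    # predictions[i] = P_i(1 | context), weights[i] = W_i
    p = sum(p_i * w_i for p_i, w_i in zip(predictions, weights))
    return min(max(p, 1e-6), 1 - 1e-6)        # clamp to a valid probability

def update_weights(weights, predictions, bit, lr=0.05):
    # Nudge each weight toward the models that predicted the observed bit well.
    err = bit - mix(predictions, weights)
    return [w + lr * err * p_i for w, p_i in zip(weights, predictions)]

weights = [1 / 3] * 3
for bit, preds in [(1, [0.9, 0.6, 0.2]), (1, [0.8, 0.7, 0.3]), (0, [0.4, 0.5, 0.9])]:
    print(round(mix(preds, weights), 3), "-> observed bit", bit)
    weights = update_weights(weights, preds, bit)
```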
B. Text Generation
When data set compression is over, PAQ is ready to generate
automatic text.
A random number in the [0, 1) interval is sampled and
transformed into a bit zero or one using Inverse Transform
Sampling. In other words, if the random number falls within
the probability range of symbol 1, bit 1 will be generated,
otherwise, bit 0.
Once that bit is generated, it will be compressed to reset
every context for the following prediction.
What we want to achieve here is updating models in a way
that if you get the same context in two different samples,
probabilities will be the same, if not, this could compute and
propagate errors. Seeing that, it was necessary to turn off the
training process and the weight adjustment of each model in
generation time. This was also possible because the source
code for PAQ is available.
We observed that granting too much freedom to our compressor could result in a large accumulation of bad predictions
that led to poor text generation. Therefore, it is proposed to
make the text generation more conservative adding a parameter called “temperature” reducing the possible range of the
random number.
On maximum temperature, the random number will be
generated in the interval [0,1), giving the compressor maximum degree of freedom to make errors, whereas when the
temperature parameter turns minimum, the “random” number
will always be 0.5, removing the compressor the degree of
freedom to commit errors (in this scenario, the symbol with
greater probability will always be generated).
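A small sketch of this sampling step, with the temperature restriction described above, might look as follows; the bit probability used here is a placeholder rather than an actual PAQ prediction.

```python
# Sketch of the bit-sampling step described above. p1 is the model's probability of
# the next bit being 1 (a stand-in value); "temperature" narrows the interval from
# which the random number is drawn, so low temperature always picks the more
# probable symbol.
import random

def sample_bit(p1: float, temperature: float) -> int:
    # temperature = 1.0 -> u uniform on [0, 1); temperature = 0.0 -> u fixed at 0.5
    u = 0.5 + (random.random() - 0.5) * temperature
    return 1 if u < p1 else 0

random.seed(42)
for temp in (1.0, 0.5, 0.0):
    bits = [sample_bit(p1=0.75, temperature=temp) for _ in range(20)]
    print(temp, bits)
```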
1) Pearson’s Chi-Squared: How likely it is that any observed difference between the sets arose by chance.
The chi-square is computed using the following formula:
X² = Σ_{i=1}^{n} (O_i − E_i)² / E_i
Where Oi is the observed ith value and Ei is the expected
ith value.
A value of 0 means equality.
2) Total Variation: Each n-gram’s observed frequency can
be denoted like a probability if it is divided by the sum of
all frequencies, P(i) on the real text and Q(i) on the generated
one. Total variation distance can be computed according to the
following formula:
δ(P, Q) = (1/2) Σ_{i=1}^{n} |P_i − Q_i|
In other words, the total variation distance is the largest
possible difference between the probabilities that the two
probability distributions can assign to the same event.
3) Generalized Jaccard Similarity: It is the size of the
intersection divided by the size of the union of the sample
sets.
Figure 2. Effect of temperature on the Jaccard similarity: very high temperatures produce text that is not so similar to the training text; temperatures that are too low are not optimal either; the best value is usually an intermediate temperature.
When temperature is around 0.5 the results are very legible
even if they are not as similar as the original text using the
proposed metrics. This can be seen in the following fragment
of randomly generated Harry Potter.
”What happened?” said Harry, and she was standing
at him. ”He is short, and continued to take the
shallows, and the three before he did something
to happen again. Harry could hear him. He was
shaking his head, and then to the castle, and the
golden thread broke; he should have been a back at
him, and the common room, and as he should have
to the good one that had been conjured her that the
top of his wand too before he said and the looking
at him, and he was shaking his head and the many
of the giants who would be hot and leafy, its flower
beds turned into the song, and said, ”I can took the
goblet and sniffed it. He saw another long for him.”
J(G, T) = |G ∩ T| / |G ∪ T|

A value of 1 means both texts are equal.
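For concreteness, the three metrics can be computed over n-gram counts as in the following sketch; the two short strings are placeholders for the real and generated texts used in the experiments.

```python
# Sketch of the three similarity metrics above, computed over n-gram counts of a
# reference text and a generated text. The texts are placeholders.
from collections import Counter

def ngrams(text: str, n: int = 4) -> Counter:
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chi_squared(obs: Counter, exp: Counter) -> float:
    return sum((obs[g] - exp[g]) ** 2 / exp[g] for g in exp)

def total_variation(p: Counter, q: Counter) -> float:
    np_, nq = sum(p.values()), sum(q.values())
    return 0.5 * sum(abs(p[g] / np_ - q[g] / nq) for g in set(p) | set(q))

def jaccard(p: Counter, q: Counter) -> float:
    grams = set(p) | set(q)
    inter = sum(min(p[g], q[g]) for g in grams)
    union = sum(max(p[g], q[g]) for g in grams)
    return inter / union

real = ngrams("it was the best of times, it was the worst of times")
fake = ngrams("it was the best of crimes, it was the worst of rhymes")
print(chi_squared(fake, real), total_variation(real, fake), jaccard(real, fake))
```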
D. Results
Turning off the training process and the weights adjustment
of each model, freezes the compressor’s global context on the
last part of the training set. As a consequence of this event,
the last piece of the entry text will be considered as a “big
seed”.
For example, The King James Version of the Holy Bible
includes an index at the end of the text, a bad seed for text
generation. If we generate random text after compressing the
Bible and its index we get:
55And if meat is broken behold I will love for the
foresaid shall appear, and heard anguish, and height
coming in the face as a brightness is for God shall
give thee angels to come fruit.
56But whoso shall admonish them were dim
born also for the gift before God out the least was
in the Spirit into the company
C. Metrics
A simple transformation is applied to each text in order to
compute metrics.
It consists in counting the number of occurrences of each
n-gram in the input (i.e. every time a n-gram ”WXYZ” is
detected, it increases its number of occurrences)
Then three different metrics were considered:
[67Blessed shall be loosed in heaven.)
We noticed this when we compared different segments of
each input file against each other, we observed that in some
files the last segment was significantly different than the rest
of the text. If we remove the index at the end of the file and
ask PAQ to generate random text after compressing the Bible
we get the following:
12The flesh which worship him, he of our Lord
Jesus Christ be with you most holy faith, Lord,
Let not the blood of fire burning our habitation of
merciful, and over the whole of life with mine own
righteousness, shall increased their goods to forgive
us our out of the city in the sight of the kings of
the wise, and the last, and these in the temple of
the blind.
He was like working, his eyes. He doing you
were draped in fear of them to study of your
families to kill, that the beetle, he time. Karkaroff
looked like this. It was less frightening you.
”Sight what’s Fred cauldron bottle to wish
you reckon? Binding him to with his head was
handle.” Once and ask Harry where commands and
this thought you were rolling one stationed to do.
The stone. Harry said, battered.
13For which the like unto the souls to the
saints salvation, I saw in the place which when
they that be of the bridegroom, and holy partly, and
as of the temple of men, so we say a shame for a
worshipped his face: I will come from his place,
declaring into the glory to the behold a good; and
loosed.
”The you,” said Ron, and Harry in the doorway.
Come whatever Hagrid was looking from understood
page. ”So, hardly to you,” said Fred, no in the
morning. ”They’re not enough, we’ll to have all
through her explain, and the others had relicious
importance,” said Dumbledore, he wouldn’t say
anything.”
The difference is remarkable. It was very interesting to
notice that for the RNN the index at the end of the bible
did not result in a noticeable difference for the generated text.
This was the first hint that the compressor and the RNN were
proceeding in different ways.
While the text may not make sense it certainly follows the
style, syntax and writing conventions of the training text.
Analyzers based on words like the Stanford Analyzer tend to
have difficulties when the review contains a lot of uncommon
words. It was surprising to find that PAQ was able to correctly
predict these reviews.
Consider the following review:
Figure 3. The effect of the chosen seed in the Chi Squared metric. In Orange
the metric variation by temperature using a random seed. In Blue the same
metric with a chosen seed.
In some cases the compressor generated text that was
surprisingly well written. This is an example of random text
generated by PAQ8L after compressing ”Harry Potter”
CHAPTER THIRTY-SEVEN - THE GOBLET
OF LORD VOLDEMORT OF THE FIREBOLT
MARE!”
Harry looked around. Harry knew exactly who
lopsided, looking out parents. They had happened
on satin’ keep his tables.”
Dumbledore stopped their way down in days
and after her winged around him.
”The author sets out on a ”journey of discovery”
of his ”roots” in the southern tobacco industry
because he believes that the (completely and
deservedly forgotten) movie ”Bright Leaf” is about
an ancestor of his. Its not, and he in fact discovers
nothing of even mild interest in this absolutely silly
and self-indulgent glorified home movie, suitable
for screening at (the director’s) drunken family
reunions but certainly not for commercial - or even
non-commercial release. A good reminder of why
most independent films are not picked up by major
studios - because they are boring, irrelevant and of
no interest to anyone but the director and his/her
immediate circles. Avoid at all costs!”
This was classified as positive by the Stanford Analyzer,
probably because of words such as ”interest, suitable, family,
commercial, good, picked”, the Compressor however was able
to read the real sentiment of the review and predicted a
negative label. In cases like this the Compressor shows the
ability to truly understand data.
E. Metric Results
We show the results of both PAQ and an RNN for text generation, using the mentioned metrics to evaluate how similar
the generated text is to the original text used for training.
It can be seen that the compressor got better results for all
texts except Poe and Game of Thrones.
                    PAQ8L     RNN
Game of Thrones     47790     44935
Harry Potter        46195     83011
Paulo Coelho        45821     86854
Bible               47833     52898
Poe                 61945     57022
Shakespeare         60585     84858
Math Collection     84758     135798
War and Peace       46699     47590
Linux Kernel        136058    175293

Table I. Chi-Squared results (lower value is better)
                    PAQ8L     RNN
Game of Thrones     25.21     24.59
Harry Potter        25.58     37.40
Paulo Coelho        25.15     34.80
Bible               25.15     25.88
Poe                 30.23     27.88
Shakespeare         27.94     30.71
Math Collection     31.05     35.85
War and Peace       24.63     25.07
Linux Kernel        44.74     45.22

Table II. Total variation (lower value is better)
The results of this metric were almost identical to the results
of the Chi-Squared test.
                    PAQ8L     RNN
Game of Thrones     0.06118   0.0638
Harry Potter        0.1095    0.0387
Paulo Coelho        0.0825    0.0367
Bible               0.1419    0.1310
Poe                 0.0602    0.0605
Shakespeare         0.0333    0.04016
Math Collection     0.21      0.1626
War and Peace       0.0753    0.0689
Linux Kernel        0.0738    0.0713

Table III. Jaccard similarity (higher is better)
In the Jaccard similarity, results were again good for PAQ except for "Poe", "Shakespeare" and "Game of Thrones". There is a subtle reason why Poe was won by the RNN in all metrics, and we will explain it in our conclusions.
VI. CONCLUSIONS

In the sentiment analysis task we have noticed an improvement using PAQ over a Neural Network. We can then argue that a Data Compression algorithm has the intelligence to understand text up to the point of being able to predict its sentiment with similar or better results than the state of the art in sentiment analysis. In some cases the precision improvement was up to 6%, which is a lot.

We argue that sentiment analysis is a predictive task: the goal is to predict sentiment based on previously seen samples of both positive and negative sentiment, and in this regard a compression algorithm seems to be a better predictor than an RNN.
In the text generation task, the use of a right seed is needed for a Data Compression algorithm to be able to generate good text; this was evident in the example we showed about the Bible. This result is consistent with the sentiment analysis result, because the seed acts like the previously seen reviews: if the seed is not in sync with the text, then the results will not be similar to the original text.
The text generation task showed the critical difference between a Data Compression algorithm and a Recurrent Neural
Network and we believe this is the most important result of
our work: Data Compression algorithms are predictors while
Recurrent Neural Networks are imitators.
The text generated by a RNN looks in general better than the
text generated by a Data Compressor but if we only generate
one paragraph the Data Compressor is clearly better. The Data
Compressor learns from the previously seen text and creates
a model that is optimal for predicting what is next, that is
why they work so well for Data Compression and that is why
they are also very good for Sentiment Analysis or to create a paragraph after seeing the training text.
On the other hand the RNN is a great imitator of what
it learned, it can replicate style, syntax and other writing
conventions with a surprising level of detail but what the RNN
generates is based in the whole text used for training without
weighting recent text as more relevant. In this sense we can
argue that the RNN is better for random text generation while
the Compression algorithm should be better for random text
extension or completion.
If we concatenate the text of Romeo & Juliet after Shakespeare and ask both methods to generate a new paragraph, the
Data Compressor will create a new paragraph of Romeo and
Juliet while the RNN will generate a Shakespeare-like piece
of text. Data Compressors are better for local predictions
and RNNs are better for global predictions.
This explains why in the text generation process PAQ and the RNN obtained different results for different training texts. PAQ struggled with "Poe" or "Game of Thrones" but was very good with "Coelho" or the Linux Kernel. What really happened was that we measured how predictable each author was! If the text is very predictable, then the best predictor will win: PAQ defeated the RNN by a clear margin with the Linux Kernel and Paulo Coelho. When the text is not predictable, then the RNN's ability to imitate defeated PAQ. This can be used
as a wonderful tool to evaluate the predictability of different
authors comparing if the Compressor or the RNN works better
to generate similar text. In our experiment we conclude that
Coelho is more Predictable than Poe and it makes all the sense
in the world!
As our final conclusion, we have shown that Data Compression algorithms show rational behaviour and that they are based on the accurate prediction of what will follow, given what they have learnt recently. RNNs learn a global model from the training data and can then replicate it. That is why we say that Data Compression algorithms are great predictors while Recurrent Neural Networks are great imitators.
Depending on which ability is needed one or the other may
provide the better results.
VII. FUTURE WORK
We believe that Data Compression algorithms can be used with a certain degree of optimality for any Natural Language Processing task where predictions are needed based on recent local context. Completion of text, seed-based text generation, sentiment analysis and text clustering are some of the areas where compressors might play a significant role in the near future.
We have also shown that the difference between a Compressor and an RNN can be used as a way to evaluate the predictability of the writing style of a given text. This might be extended to algorithms that can analyze the level of creativity in a text and can be applied to books or movie scripts.
REFERENCES
[1] Graves, Alex. Generating Sequences With Recurrent Neural Networks.
arXiv:1308.0850v5
[2] Mahoney, Matt. The PAQ Data Compression series. http://mattmahoney.net/dc/paq.html
[3] Socher Richard; Perelygin, Alex; Wu, Jean; Chuang, Jason; Manning,
Christopher; Ng, Andrew and Potts, Christopher: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. EMNLP
(2013)
[4] Matthew Mahoney. Adaptive Weighing of Context Models for Lossless
Data Compression.
[5] Dominique ZiegelMayer, Rainer Schrader. Sentiment polarity classification using statistical data compression models.
[6] Graves, Alex. Supervised sequence labelling with recurrent neural
networks. Vol. 385. Springer, 2012.
[7] Gers, F. A., Schmidhuber, J.,Cummins, F. (2000). Learning to forget
Continual prediction with LSTM.
[8] Arthur Franz. Artificial general intelligence through recursive data
compression and grounded reasoning: a position paper.
[9] Ofir David,Shay Moran,Amir Yehudayoff. On statistical learning via the
lens of compression.
[10] Peter Grunwald. A tutorial introduction to the minimum description
length principle.
[11] Rudi Cilibrasi, Paul Vitanyi. Clustering by compression.
[12] Oliver Bown,Sebastian Lexer Continuous-Time Recurrent Neural Networks for Generative and Interactive Musical Performance.
[13] Ilya Sutskever,James Martens,Geoffrey Hinton Generating Text with
Recurrent Neural Networks.
[14] Schmidhuber, Jürgen, and Heil, Stefan, Sequential Neural Text Compression, IEEE Trans. on Neural Networks7(1): 142-146, 1996.
[15] Matthew Mahoney Fast Text Compression with Neural Networks.
[16] Paul Vitanyi. The Kolmogorov Complexity and its Applications.
[17] Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang,
Dan and Ng, Andrew Y. and Potts, Christopher. Learning Word Vectors
for Sentiment Analysis.
[18] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende,
Daan Wierstra DRAW: A Recurrent Neural Network For Image Generation
| 2 |
Characterizations and Effective Computation of
Supremal Relatively Observable Sublanguages*
arXiv:1609.02251v1 [] 8 Sep 2016
Kai Cai1 , Renyuan Zhang2 , and W.M. Wonham3
Abstract
Recently we proposed relative observability for supervisory control of discrete-event systems under
partial observation. Relative observability is closed under set unions and hence there exists the supremal
relatively observable sublanguage of a given language. In this paper we present a new characterization
of relative observability, based on which an operator on languages is proposed whose largest fixpoint
is the supremal relatively observable sublanguage. Iteratively applying this operator yields a monotone
sequence of languages; exploiting the linguistic concept of support based on Nerode equivalence, we
prove for regular languages that the sequence converges finitely to the supremal relatively observable
sublanguage, and the operator is effectively computable. Moreover, for the purpose of control, we propose
a second operator that in the regular case computes the supremal relatively observable and controllable
sublanguage. The computational effectiveness of the operator is demonstrated on a case study.
Keywords
Supervisory control, partial-observation, relative observability, regular language, Nerode equivalence
relation, support relation, discrete-event systems, automata
I. INTRODUCTION
In [3] we proposed relative observability for supervisory control of discrete-event systems (DES)
under partial observation. The essence of relative observability is to set a fixed ambient language relative
*This work was supported in part by JSPS KAKENHI Grant no. JP16K18122 and Program to Disseminate Tenure Tracking
System, MEXT, Japan; the National Nature Science Foundation, China, Grant no. 61403308; the Natural Sciences and
Engineering Research Council, Canada, Grant no. 7399.
1
K. Cai is with Urban Research Plaza, Osaka City University, Japan ([email protected])
2
R. Zhang is with School of Automation, Northwestern Polytechnical University, China ([email protected])
3
W.M. Wonham is with the Systems Control Group, Department of Electrical and Computer Engineering, University of
Toronto, Canada ([email protected]).
to which the standard observability conditions [8] are tested. Relative observability is proved to be
stronger than observability [5], [8], weaker than normality [5], [8], and closed under arbitrary set unions.
Therefore the supremal relatively observable sublanguage of a given language exists, and we developed
an automaton-based algorithm to compute the supremal sublanguage.
In this paper and its conference precursor [2], we present a new characterization of relative observability.
The original definition of relative observability in [3] was formulated in terms of strings, while the
new characterization is given in languages. Based on this characterization, we propose an operator on
languages, whose largest fixpoint is precisely the supremal relatively observable sublanguage. Iteratively
applying this operator yields a monotone sequence of languages. In the case where the relevant languages are regular, we prove that the sequence converges finitely to the supremal relatively observable
sublanguage, and the operator is effectively computable.
This new computation scheme for the supremal sublanguage is given entirely in terms of languages, and
the convergence proof systematically exploits the concept of support ( [9, Section 2.8]) based on Nerode
equivalence relations [7]. The solution therefore separates out the linguistic essence of the problem from
the implementational aspects of state computation using automaton models. This approach is in the same
spirit as [10] for controllability, namely operator fixpoint and successive approximation.
Moreover, the proposed language-based scheme allows more straightforward implementation, as compared to the automaton-based algorithm in [3]. In particular, we show that the language operator used in
each iteration of the language-based scheme may be decomposed into a series of standard or well-known
language operations (e.g. complement, union, subset construction); therefore off-the-shelf algorithms may
be suitably assembled to implement the computation scheme. On the other hand, both the language
and automaton-based algorithms have (at least) exponential complexity in the worst case, which is the
unfortunate nature of supervisor synthesis under partial observation. Our previous experience with the
automaton-based algorithm in [3] suggests that computing the supremal relatively observable sublanguage
is fairly delicate and thus prone to error. Hence, it is advantageous to have two algorithms at hand so
that one can double check the computation results, thereby ensuring presumed correctness based on
consistency.
Finally, for the purpose of supervisory control under partial observation, we combine relative observability with controllability. In particular, we propose an operator which in the regular case effectively
computes the supremal relatively observable and controllable sublanguage. We have implemented this
operator and tested its effectiveness on a case study.
The rest of the paper is organized as follows. In Section II we present a new characterization of relative
observability, and an operator on languages that yields an iterative scheme to compute the supremal
relatively observable sublanguage. In Section III we prove that in the case of regular languages, the
iterative scheme generates a monotone sequence of languages that is finitely convergent to the supremal
relatively observable sublanguage. In Section IV we combine relative observability and controllability,
and propose an operator that effectively computes the supremal relatively observable and controllable
sublanguage. Section V presents illustrative examples, and finally in Section VI we state conclusions.
This paper extends its conference precursor [2] in the following respects. (1) In the main result of Section III, Theorem 1, the bound on the size of the supremal sublanguage is tightened and the corresponding
proof given. (2) The effective computability of the proposed operator is shown in Subsection III-C. (3)
Relative observability is combined with controllability in Section IV, and a new operator is presented
that effectively computes the supremal relatively observable and controllable sublanguage. (4) A case
study is given in Subsection V-B to demonstrate the effectiveness of the newly proposed computation
schemes.
II. CHARACTERIZATIONS OF RELATIVE OBSERVABILITY AND ITS SUPREMAL ELEMENT
In this section, the concept of relative observability proposed in [3] is first reviewed. Then we present
a new characterization of relative observability, together with a fixpoint characterization of the supremal
relatively observable sublanguage.
A. Relative Observability
Let Σ be a finite event set. A string s ∈ Σ∗ is a prefix of another string t ∈ Σ∗ , written s ≤ t,
if there exists u ∈ Σ∗ such that su = t. Let L ⊆ Σ∗ be a language. The (prefix) closure of L is
L := {s ∈ Σ∗ | (∃t ∈ L) s ≤ t}. For partial observation, let the event set Σ be partitioned into Σo , the observable event subset, and Σuo , the unobservable subset (i.e. Σ = Σo ∪ Σuo with Σo ∩ Σuo = ∅). Bring in the natural projection P : Σ∗ → Σ∗o defined according to
P (ǫ) = ǫ, where ǫ is the empty string;
P (σ) = ǫ if σ ∉ Σo , and P (σ) = σ if σ ∈ Σo ;
P (sσ) = P (s)P (σ), s ∈ Σ∗ , σ ∈ Σ.    (1)
In the usual way, P is extended to P : P wr(Σ∗ ) → P wr(Σ∗o ), where P wr(·) denotes powerset. Write
P −1 : P wr(Σ∗o ) → P wr(Σ∗ ) for the inverse-image function of P .
Throughout the paper, let M denote the marked behavior of the plant to be controlled, and C ⊆ M
an imposed specification language. Let K ⊆ C . We say that K is relatively observable (with respect to
M , C , and P ), or simply C -observable, if the following two conditions hold:
(i) (∀s, s′ ∈ Σ∗ , ∀σ ∈ Σ) sσ ∈ K, s′ ∈ C, s′ σ ∈ M , P (s) = P (s′ ) ⇒ s′ σ ∈ K
(ii) (∀s, s′ ∈ Σ∗ ) s ∈ K, s′ ∈ C ∩ M, P (s) = P (s′ ) ⇒ s′ ∈ K.
In words, relative observability of K requires for every lookalike pair (s, s′ ) in C that (i) s and s′ have
identical one-step continuations, if allowed in M , with respect to membership in K ; and (ii) if each
string is in M and one actually belongs to K , then so does the other. Note that the tests for relative
observability of K are not limited to the strings in K (as with standard observability [5], [8]), but apply
to all strings in C ; for this reason, one may think of C as the ambient language, relative to which the
conditions (i) and (ii) are tested.
We have proved in [3] that in general, relative observability is stronger than observability, weaker than
normality, and closed under arbitrary set unions. Write
O(C) = {K ⊆ C | K is C -observable }
(2)
for the family of all C -observable sublanguages of C . Then O(C) is nonempty (the empty language ∅
belongs) and contains a unique supremal element
sup O(C) := ∪ {K | K ∈ O(C)}    (3)
i.e. the supremal relatively observable sublanguage of C .
B. Characterization of Relative Observability
For N ⊆ Σ∗ , write [N ] for P −1 P (N ), namely the set of all lookalike strings to strings in N . A
language N is normal with respect to M if [N ] ∩ M = N . For K ⊆ Σ∗ write
N (K, M ) = {K ′ ⊆ K | [K ′ ] ∩ M = K ′ }.
(4)
Since normality is closed under union, N (K, M ) has a unique supremal element sup N (K, M ) which
may be effectively computed [1], [4].
Write
C.σ := {sσ | s ∈ C}, σ ∈ Σ.    (5)
Let K ⊆ C and define
D(K) := ∪ { [K ∩ C.σ] ∩ C.σ | σ ∈ Σ }.    (6)
Thus D(K) is the collection of strings in the form tσ (t ∈ C , σ ∈ Σ), that are lookalike to the strings
in K ending with the same event σ . Note that if K = ∅ then D(K) = ∅. This language D(K) turns out
to be key to the following characterization of relative observability.
Proposition 1. Let K ⊆ C ⊆ M . Then K is C -observable if and only if
(i′ ) D(K) ∩ M ⊆ K
(ii′ ) [K] ∩ C ∩ M = K.
Note that condition (i′ ) is in a form similar to controllability of K [10] (i.e. KΣu ∩ M ⊆ K , where
Σu is the uncontrollable event set), although the expression D(K) appearing here is more complicated
owing to the presence of the normality operator [·]. Condition (ii′ ) is normality of K with respect to
C ∩ M.
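On finite language fragments, conditions (i′) and (ii′) can be checked directly. The following toy sketch does so with single-character events and prefix closure in place of the overbar; the languages and the observable subset are made-up illustrations, not examples from the paper.

```python
# Toy check of the characterization (i') and (ii') on finite language fragments.
def project(s, obs):
    return "".join(e for e in s if e in obs)

def closure(L):
    return {s[:i] for s in L for i in range(len(s) + 1)}

def lookalikes(N, pool, obs):
    # [N] intersected with a finite pool: pool strings sharing a projection with N.
    images = {project(s, obs) for s in N}
    return {t for t in pool if project(t, obs) in images}

def D(K, C, obs, events):
    Kc, Cc, out = closure(K), closure(C), set()
    for sigma in events:
        C_sigma = {s + sigma for s in Cc}
        out |= lookalikes(Kc & C_sigma, C_sigma, obs)
    return out

def relatively_observable(K, C, M, obs, events):
    Kc, Cc, Mc = closure(K), closure(C), closure(M)
    cond_i = D(K, C, obs, events) & Mc <= Kc            # condition (i')
    cond_ii = lookalikes(K, Cc & M, obs) == K           # condition (ii')
    return cond_i and cond_ii

events, obs = {"a", "b", "u"}, {"a", "b"}   # "u" unobservable (assumption)
M = {"ab", "uab", "aub", "a", "ua"}
C = {"ab", "uab", "a", "ua"}
print(relatively_observable({"ab", "uab"}, C, M, obs, events))
```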
Proof of Proposition 1. We first show that (i′ ) ⇔ (i), and then (ii′ ) ⇔ (ii).
1. (i′ ) ⇒ (i). Let s, s′ ∈ Σ∗ , σ ∈ Σ, and assume that sσ ∈ K , s′ ∈ C , s′ σ ∈ M , and P (s) = P (s′ ). It
will be shown that s′ σ ∈ K . Since K ⊆ C , we have K ⊆ C and
sσ ∈ K ⇒ sσ ∈ K ∩ C.σ
⇒ s′ σ ∈ [K ∩ C.σ]
⇒ s′ σ ∈ [K ∩ C.σ] ∩ C.σ
⇒ s′ σ ∈ D(K)
⇒ s′ σ ∈ D(K) ∩ M
⇒ s′ σ ∈ K
(by (i′ )).
2. (i′ ) ⇐ (i). Let s ∈ D(K) ∩ M . According to (6), ǫ ∉ D(K); thus s ≠ ǫ. Let s = tσ for some t ∈ Σ∗
and σ ∈ Σ. Then
s ∈ D(K) ∩ M ⇒ tσ ∈ [K ∩ C.σ] ∩ C.σ ∩ M
⇒ t ∈ C, tσ ∈ M ,
(∃t′ ∈ Σ∗ )(P (t) = P (t′ ), t′ σ ∈ K ∩ C.σ)
⇒ tσ ∈ K,
(by (i))
⇒ s ∈ K.
3. (ii′ ) ⇒ (ii). Let s, s′ ∈ Σ∗ and assume that s ∈ K , s′ ∈ C ∩ M , and P (s) = P (s′ ). Then
s ∈ K ⇒ s′ ∈ [K]
⇒ s′ ∈ [K] ∩ C ∩ M
⇒ s′ σ ∈ K
(by (ii′ )).
4. (ii) ⇒ (ii′ ). (⊇) holds because K ⊆ [K] and K ⊆ C ∩ M . To show (⊆), let s ∈ [K] and s ∈ C ∩ M .
Then there exists s′ ∈ K such that P (s) = P (s′ ). Therefore by (ii) we derive s ∈ K .
Thanks to the characterization of relative observability in Proposition 1, we rewrite O(C) in (2) as
follows:
O(C) = {K ⊆ C | D(K) ∩ M ⊆ K & [K] ∩ C ∩ M = K}.
(7)
In the next subsection, we will characterize the supremal element sup O(C) as the largest fixpoint of a
language operator.
C. Fixpoint Characterization of sup O(C)
For a string s ∈ Σ∗ , write s̄ for {s}, the set of prefixes of s. Given a language K ⊆ Σ∗ , let
F (K) := {s ∈ K | D(s̄) ∩ M ⊆ K}.
(8)
Lemma 1. F (K) is closed, i.e. F (K) = F (K). Moreover, if K ∈ O(C), then F (K) = K .
Proof. First, let s ∈ F (K); then there exists w ∈ Σ∗ such that sw ∈ F (K), i.e. sw ∈ K and
D(sw) ∩ M ⊆ K . It follows that s ∈ K and D(s) ∩ M ⊆ K , namely s ∈ F (K). This shows that
F (K) ⊆ F (K); the other direction F (K) ⊇ F (K) is automatic.
Next, suppose that K ∈ O(C); by (7) we have D(K) ∩ M ⊆ K . Let s ∈ K ; it will be shown that
D(s̄) ∩ M ⊆ K . Taking an arbitrary string t ∈ D(s̄) ∩ M , we derive
t∈
[
[s ∩ C.σ] ∩ C.σ | σ ∈ Σ ∩ M
⇒t ∈
[
[K ∩ C.σ] ∩ C.σ | σ ∈ Σ ∩ M
⇒t ∈ K.
This shows that s ∈ F (K) by (8), and hence K ⊆ F (K). The other direction F (K) ⊇ K is automatic.
Now define an operator Ω : P wr(Σ∗ ) → P wr(Σ∗ ) according to
Ω(K) := sup N (K ∩ F (K), C ∩ M ),    K ∈ Pwr(Σ∗ ).    (9)
A language K such that K = Ω(K) is called a fixpoint of the operator Ω. The following proposition
characterizes sup O(C) as the largest fixpoint of Ω.
Proposition 2. sup O(C) = Ω(sup O(C)), and sup O(C) ⊇ K for every K such that K = Ω(K).
Proof. Since sup O(C) ∈ O(C), we have
Ω(sup O(C)) = sup N (sup O(C) ∩ F (sup O(C)), C ∩ M )
= sup N (sup O(C) ∩ sup O(C), C ∩ M )
= sup N (sup O(C), C ∩ M )
= sup O(C).
Next let K be such that K = Ω(K). To show that K ⊆ sup O(C), it suffices to show that K ∈ O(C).
From
K = Ω(K) := sup N (K ∩ F (K), C ∩ M )
we have K ⊆ K ∩ F (K). But K ∩ F (K) ⊆ K . Hence, in fact, K = K ∩ F (K). This implies that
K = sup N (K, C ∩ M ); namely K is normal with respect to C ∩ M .
On the other hand, by K = K ∩ F (K) ⊆ F (K), we have K ⊆ F (K) = F (K). But F (K) ⊆ K by
definition; therefore K = F (K). In what follows it will be shown that D(F (K)) ∩ M ⊆ F (K), which
is equivalent to D(K) ∩ M ⊆ K . Let s ∈ D(F (K)) ∩ M . As in the proof of Proposition 1 (item 2), we
know that s ≠ ǫ. So let s = tσ for some t ∈ Σ∗ and σ ∈ Σ. Then
s ∈ D(F (K)) ∩ M ⇒ tσ ∈ [F (K) ∩ C.σ] ∩ C.σ ∩ M
⇒ (∃t′ ∈ C)P (t) = P (t′ ), t′ σ ∈ F (K)
⇒ D(t′ σ) ∩ M ⊆ K (by definition of F (K)).
Then by (6)
∪ { [t′ σ ∩ C.σ] ∩ C.σ | σ ∈ Σ } ∩ M ⊆ K.
Since tσ belongs to the left-hand-side of the above inequality, we have tσ ∈ K = F (K). Therefore
D(F (K)) ∩ M ⊆ F (K); equivalently D(K) ∩ M ⊆ K . This completes the proof of K ∈ O(C).
In view of Proposition 2, it is natural to attempt to compute sup O(C) by iteration of Ω as follows:
(∀j ≥ 1) Kj = Ω(Kj−1 ),    K0 = C.    (10)
It is readily verified that Ω(K) ⊆ K ; hence
K0 ⊇ K1 ⊇ K2 ⊇ · · ·
Namely the sequence {Kj } (j ≥ 1) is a monotone (descending) sequence of languages. This implies that
the (set-theoretic) limit
K∞ := lim_{j→∞} Kj = ∩_{j=0}^{∞} Kj    (11)
exists. The following result asserts that if K∞ is reached in a finite number of steps, then K∞ is precisely
the supremal relatively observable sublanguage of C , i.e. sup O(C).
Proposition 3. If K∞ in (11) is reached in a finite number of steps, then
K∞ = sup O(C).
Proof. Suppose that the limit K∞ is reached in a finite number of steps. Then K∞ = Ω(K∞ ). As in
the proof of Proposition 2, we derive that K∞ ∈ O(C).
It remains to show that K∞ is the supremal element of O(C). Let K ′ ∈ O(C); it will be shown that
K ′ ⊆ K∞ by induction. The base case K ′ ⊆ K0 holds because K ′ ⊆ C and K0 = C . Suppose that
K ′ ⊆ Kj−1 . Let s ∈ K ′ . Then s ∈ Kj−1 and
D(s) ∩ M ⊆ D(K ′ ) ∩ M
(by K ′ ∈ O(C))
⊆ K′
⊆ Kj−1 .
Hence s ∈ F (Kj−1 ). This shows that
K ′ ⊆ F (Kj−1 )
⇒ K ′ ⊆ F (Kj−1 )
⇒ K ′ ⊆ Kj−1 ∩ F (Kj−1 ).
Moreover, since K ′ ∈ O(C), K ′ is normal with respect to C ∩ M . Thus K ′ ⊆ sup N (Kj−1 ∩ F (Kj−1 ), C ∩ M ) = Kj . This completes the proof of the induction step, and therefore confirms that
K ′ ⊆ K∞ .
In the next section, we shall establish that, when the given languages M and C are regular, the limit
K∞ in (11) is indeed reached in a finite number of steps.
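To make the iteration (10) concrete, the following toy sketch runs it on finite language fragments, with a direct set construction of the supremal normal sublanguage and the operator F implemented from (8); all the example languages are made up, and the code is only a finite illustration of the scheme, not the automaton-based algorithm of [3].

```python
# Toy, finite-language sketch of the fixpoint iteration (10). Prefix closure plays
# the role of the overbar, and lookalike classes are taken inside a finite ambient
# set. The plant M, specification C and observable subset are made-up examples.
def project(s, obs):
    return "".join(e for e in s if e in obs)

def closure(L):
    return {s[:i] for s in L for i in range(len(s) + 1)}

def lookalikes(N, pool, obs):
    images = {project(s, obs) for s in N}
    return {t for t in pool if project(t, obs) in images}

def D(K, C, obs, events):
    Kc, Cc, out = closure(K), closure(C), set()
    for sigma in events:
        C_sigma = {s + sigma for s in Cc}
        out |= lookalikes(Kc & C_sigma, C_sigma, obs)
    return out

def F(K, C, M, obs, events):
    # F(K) of (8): prefixes s of K with D(prefixes(s)) intersect closure(M) inside closure(K).
    Kc, Mc = closure(K), closure(M)
    return {s for s in Kc if D({s}, C, obs, events) & Mc <= Kc}

def sup_normal(A, B, obs):
    # Largest A' in A with lookalikes (taken inside the ambient set B) staying in A'.
    return {a for a in A & B if lookalikes({a}, B, obs) <= A}

def sup_relatively_observable(C, M, obs, events, max_iter=50):
    K, Cc = set(C), closure(C)
    for _ in range(max_iter):
        K_next = sup_normal(K & F(K, C, M, obs, events), Cc & M, obs)
        if K_next == K:
            return K
        K = K_next
    return K

events, obs = {"a", "b", "u"}, {"a", "b"}   # "u" is unobservable (assumption)
M = {"ab", "uab", "aub", "ua", "a"}
C = {"ab", "uab", "ua", "a"}
print(sup_relatively_observable(C, M, obs, events))
```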
III. EFFECTIVE COMPUTATION OF sup O(C) IN THE REGULAR CASE
In this section, we first review the concept of Nerode equivalence relation and a finite convergence
result for a sequence of regular languages. Based on these, we then prove that the sequence generated by
(10) converges to the supremal relatively observable sublanguage sup O(C) in a finite number of steps.
Finally, we show that the computation of sup O(C) is effective.
A. Preliminaries
Let π be an arbitrary equivalence relation on Σ∗ . Denote by Σ∗ /π the set of equivalence classes of
π , and write |π| for the cardinality of Σ∗ /π . Define the canonical projection Pπ : Σ∗ → Σ∗ /π , namely
the surjective function mapping any s ∈ Σ∗ onto its equivalence class Pπ (s) ∈ Σ∗ /π .
Let π1 , π2 be two equivalence relations on Σ∗ . The partial order π1 ≤ π2 holds if
(∀s1 , s2 ∈ Σ∗ ) s1 ≡ s2 (mod π1 ) ⇒ s1 ≡ s2 (mod π2 ).
The meet π1 ∧ π2 is defined by
(∀s1 , s2 ∈ Σ∗ ) s1 ≡ s2 (mod π1 ∧ π2 ) iff s1 ≡ s2 (mod π1 ) & s1 ≡ s2 (mod π2 ).
For a language L ⊆ Σ∗ , write Ner(L) for the Nerode equivalence relation [7] on Σ∗ with respect to
L; namely for all s1 , s2 ∈ Σ∗ , s1 ≡ s2 (mod Ner(L)) provided
(∀w ∈ Σ∗ ) s1 w ∈ L ⇔ s2 w ∈ L.
Write ||L|| for the cardinality of the set of equivalence classes of Ner(L), i.e. ||L|| := |Ner(L)|. The
language L is said to be regular [7] if ||L|| < ∞. Henceforth, we assume that the given languages M
and C are regular.
An equivalence relation ρ is a right congruence on Σ∗ if
(∀s1 , s2 , t ∈ Σ∗ ) s1 ≡ s2 (mod ρ) ⇒ s1 t ≡ s2 t(mod ρ).
Any Nerode equivalence relation is a right congruence. For a right congruence ρ and languages L1 , L2 ⊆
Σ∗ , we say that L1 is ρ-supported on L2 [9, Section 2.8] if L1 ⊆ L2 and
{L1 , Σ∗ − L1 } ∧ ρ ∧ Ner(L2 ) ≤ Ner(L1 ).
(12)
The ρ-support relation is transitive: namely, if L1 is ρ-supported on L2 , and L2 is ρ-supported on L3 , then
L1 is ρ-supported on L3 . The following lemma is central to establish finite convergence of a monotone
language sequence.
Lemma 2. [9, Theorem 2.8.11] Given a monotone sequence of languages K0 ⊇ K1 ⊇ K2 ⊇ · · · with
K0 regular, and a fixed right congruence ρ on Σ∗ with |ρ| < ∞, suppose that Kj is ρ-supported on
Kj−1 for all j ≥ 1. Then each Kj is regular, and the sequence is finitely convergent to a sublanguage
K . Furthermore, K is supported on K0 and
||K|| ≤ |ρ| · ||K0 || + 1.
In view of this lemma, to show finite convergence of the sequence in (10), it suffices to find a fixed
right congruence ρ with |ρ| < ∞ such that Kj is ρ-supported on Kj−1 for all j ≥ 1. To this end, we
need the following notation.
Let µ := Ner(M ), η := Ner(C) be Nerode equivalence relations and
ϕj := {F (Kj ), Σ∗ − F (Kj )}, κj := {Kj , Σ∗ − Kj }
(j ≥ 1)
also stand for the equivalence relations corresponding to these partitions. Then |µ| < ∞, |η| < ∞, and
|ϕj | = |κj | = 2. Let π be an equivalence relation on Σ∗ , and define fπ : Σ∗ → Pwr(Σ∗ /π) according to
(∀s ∈ Σ∗ ) fπ (s) = {Pπ (s′ ) | s′ ∈ [s] ∩ C ∩ M }    (13)
where [s] = P −1 P ({s}). Write ℘(π) := ker fπ . The size of ℘(π) is |℘(π)| ≤ 2^{|π|} [9, Ex. 1.4.21].
Another property of ℘(·) we shall use later is [9, Ex. 1.4.21]:
℘(π1 ∧ ℘(π2 )) = ℘(π1 ∧ π2 ) = ℘(℘(π1 ) ∧ π2 )
where π1 , π2 are equivalence relations on Σ∗ .
B. Convergence Result
First, we present a key result on support relation of the sequence {Kj } generated by (10).
Proposition 4. Consider the sequence {Kj } generated by (10). For each j ≥ 1, there holds that Kj is
ρ-supported on Kj−1 , where
ρ := µ ∧ η ∧ ℘(µ ∧ η).
(14)
Let us postpone the proof of Proposition 4, and present immediately our main result.
Theorem 1. Consider the sequence {Kj } generated by (10), and suppose that the given languages M
and C are regular. Then the sequence {Kj } is finitely convergent to sup O(C), and sup O(C) is a regular
language with
|| sup O(C)|| ≤ ||M || · ||C|| · 2^{||M ||·||C||} + 1.
Proof. Let ρ = µ ∧ η ∧ ℘(µ ∧ η) as in (14). Since µ and η are right congruences, so are µ ∧ η and
℘(µ ∧ η) ( [9, Example 6.1.25]). Hence ρ is a right congruence, with
|ρ| ≤ |µ| · |η| · 2^{|µ|·|η|} = ||M || · ||C|| · 2^{||M ||·||C||}.
Since the languages M and C are regular, i.e. ||M ||, ||C|| < ∞, we derive that |ρ| < ∞.
It then follows from Lemmas 3 and 2 that the sequence {Kj } is finitely convergent to sup O(C), and
sup O(C) is ρ-supported on K0 , i.e.
Ner(sup O(C)) ≥ {sup O(C), Σ∗ − sup O(C)} ∧ ρ ∧ Ner(K0 )
= {sup O(C), Σ∗ − sup O(C)} ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ Ner(K0 )
= {sup O(C), Σ∗ − sup O(C)} ∧ µ ∧ ℘(µ ∧ η) ∧ Ner(K0 ).
Hence sup O(C) is in fact (µ ∧ ℘(µ ∧ η))-supported on K0 , which implies
|| sup O(C)|| ≤ |µ ∧ ℘(µ ∧ η)| · ||K0 || + 1 ≤ ||M || · ||C|| · 2^{||M ||·||C||} + 1 < ∞.
Therefore sup O(C) is itself a regular language.
Theorem 1 establishes the finite convergence of the sequence {Kj } in (10), as well as the fact that an
upper bound of || sup O(C)|| is exponential in the product of ||M || and ||C||.
In the sequel we prove Proposition 4, for which we need two lemmas.
Lemma 3. For each j ≥ 1, the Nerode equivalence relation on Σ∗ with respect to F (Kj−1 ) satisfies
Ner(F (Kj−1 )) ≥ ϕj ∧ Ner(Kj−1 ) ∧ ℘(Ner(Kj−1 ) ∧ µ ∧ η).
Proof. First, let s1 , s2 ∈ Σ∗ − F (Kj−1 ); then for all w ∈ Σ∗ it holds that s1 w, s2 w ∈ Σ∗ − F (Kj−1 ).
Thus s1 ≡ s2 (mod Ner(F (Kj−1 ))).
Next, let s1 , s2 ∈ F (Kj−1 ) and assume that
s1 ≡ s2 (mod Ner(Kj−1 ) ∧ ℘(Ner(Kj−1 ) ∧ µ ∧ η)).
Also let w ∈ Σ∗ be such that s1 w ∈ F (Kj−1 ). It will be shown that s2 w ∈ F (Kj−1 ). Note first that
s2 w ∈ Kj−1 , since s1 w ∈ F (Kj−1 ) ⊆ Kj−1 and s1 ≡ s2 (mod Ner(Kj−1 )). Hence it is left to show that
D(s2w) ∩ M ⊆ Kj−1, i.e.
⋃{[s2w ∩ C.σ] ∩ C.σ | σ ∈ Σ} ∩ M ⊆ Kj−1.
It follows from s2 ∈ F(Kj−1) that
⋃{[s2 ∩ C.σ] ∩ C.σ | σ ∈ Σ} ∩ M ⊆ Kj−1.
Thus let s′2 ∈ [s2 ], x′ ∈ [w], and s′2 x′ ∈ [s2 w ∩C.σ]∩C.σ ∩M for some σ ∈ Σ. Write x′ := y ′ σ , y ′ ∈ Σ∗ .
Since s1 ≡ s2 (mod ℘(Ner(Kj−1 ) ∧ µ ∧ η)), there exists s′1 ∈ [s1 ] such that s′1 ≡ s′2 (mod Ner(Kj−1 ) ∧
µ ∧ η). Hence s′1 x′ ∈ M and s′1 y ′ ∈ C , and we derive that s′1 x′ = s′1 y ′ σ ∈ [{s1 w} ∩ C.σ] ∩ C.σ ∩ M .
It then follows from s1 w ∈ F (Kj−1 ) that s′1 x′ ∈ Kj−1 , which in turn implies that s′2 x′ ∈ Kj−1 . This
completes the proof of s2 w ∈ F (Kj−1 ), as required.
Lemma 4. For Kj (j ≥ 1) generated by (10), the following statements hold:
Kj = ⋃{[s] ∩ C ∩ M | s ∈ Σ∗ & [s] ∩ C ∩ M ⊆ Kj−1 ∩ F(Kj−1)};
Ner(Kj) ≥ µ ∧ η ∧ ℘(Ner(Kj−1) ∧ Ner(F(Kj−1)) ∧ µ ∧ η).
Proof. By (9) we know that Kj is the supremal normal sublanguage of Kj−1 ∩ F (Kj−1 ) with respect
to C ∩ M . Thus the conclusions follow immediately from Example 6.1.25 of [9].
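To see the first formula of Lemma 4 in concrete terms, here is a minimal brute-force sketch for finite explicit languages: it returns the union of the cells [s] ∩ C ∩ M that are entirely contained in a given language L, which is how Kj is built from Kj−1 ∩ F(Kj−1). The set-of-strings representation, the projection P, and the toy data below are illustrative assumptions, not from the paper; everything is restricted to a finite universe of candidate strings.

```python
from typing import Callable, Set

def cell(s: str, C: Set[str], M: Set[str], P: Callable[[str], str],
         universe: Set[str]) -> Set[str]:
    """The cell [s] ∩ C ∩ M, with [s] = P^{-1}P({s}) computed inside a finite universe."""
    return {t for t in universe if P(t) == P(s)} & C & M

def cells_inside(L: Set[str], C: Set[str], M: Set[str],
                 P: Callable[[str], str], universe: Set[str]) -> Set[str]:
    """Union of all cells [s] ∩ C ∩ M lying entirely inside L (shape of Kj in Lemma 4)."""
    result: Set[str] = set()
    for s in universe:
        c = cell(s, C, M, P, universe)
        if c and c <= L:
            result |= c
    return result

# Toy usage: lowercase letters are unobservable (erased by P), uppercase are observable.
P = lambda s: "".join(ch for ch in s if ch.isupper())
M = {"", "A", "aA", "AB"}
C = {"", "A", "aA"}
universe = M | C
print(cells_inside(C, C, M, P, universe))  # cells of C ∩ M fully contained in C
```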
Now we are ready to prove Proposition 4.
Proof of Proposition 4. To prove that Kj is ρ-supported on Kj−1 (j ≥ 1), by definition we must show
that
Ner(Kj ) ≥ κj ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ Ner(Kj−1 ).
It suffices to show the following:
Ner(Kj ) ≥ κj ∧ µ ∧ η ∧ ℘(µ ∧ η).
We prove this statement by induction. First, we show the base case (j = 1)
Ner(K1 ) ≥ κ1 ∧ µ ∧ η ∧ ℘(µ ∧ η).
From Lemma 3 and K0 = C (thus Ner(K0 ) = η ) we have
Ner(F (K0 )) ≥ ϕ1 ∧ Ner(K0 ) ∧ ℘(Ner(K0 ) ∧ µ ∧ η)
= ϕ1 ∧ η ∧ ℘(µ ∧ η).
It then follows from Lemma 4 that
Ner(K1 ) ≥ µ ∧ η ∧ ℘(Ner(K0 ) ∧ Ner(F (K0 )) ∧ µ ∧ η)
≥ µ ∧ η ∧ ℘(η ∧ ϕ1 ∧ η ∧ ℘(µ ∧ η) ∧ µ ∧ η)
= µ ∧ η ∧ ℘(ϕ1 ∧ µ ∧ η) ∧ ℘(µ ∧ η)
= µ ∧ η ∧ ℘(ϕ1 ∧ µ ∧ η).
(15)
We claim that
Ner(K1 ) ≥ κ1 ∧ µ ∧ η ∧ ℘(µ ∧ η).
To show this, let s1 , s2 ∈ Σ∗ and assume that s1 ≡ s2 (mod κ1 ∧ µ ∧ η ∧ ℘(µ ∧ η)). If s1 , s2 ∈ Σ∗ − K1 ,
then for all w ∈ Σ∗ , s1 w, s2 w ∈ Σ∗ − K1 ; thus s1 ≡ s2 (mod Ner(K1 )). Now let s1 , s2 ∈ K1 . By
Lemma 4 we derive that for all s′1 ∈ [s1 ] ∩ C ∩ M and s′2 ∈ [s2 ] ∩ C ∩ M , s′1 , s′2 ∈ K1 . Since
K1 ⊆ F (K0 ), s′1 , s′2 ∈ F (K0 ) and hence
{Pϕ1 ∧µ∧η (s′1 ) | s′1 ∈ [s1 ] ∩ C ∩ M } = {Pϕ1 ∧µ∧η (s′2 ) | s′2 ∈ [s2 ] ∩ C ∩ M }.
Namely s1 ≡ s2 (mod ℘(ϕ1 ∧ µ ∧ η)). This implies that s1 ≡ s2 (mod Ner(K1 )) by (15). Hence the above
claim is established, and the base case is proved.
For the induction step, suppose that for j ≥ 2, there holds
Ner(Kj−1 ) ≥ κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η).
Again by Lemma 3 we have
Ner(F (Kj−1 )) ≥ ϕj−1 ∧ Ner(Kj−1 ) ∧ ℘(Ner(Kj−1 ) ∧ µ ∧ η)
≥ ϕj−1 ∧ κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ ℘(κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ µ ∧ η)
= ϕj−1 ∧ κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ ℘(κj−1 ∧ µ ∧ η)
= ϕj−1 ∧ κj−1 ∧ µ ∧ η ∧ ℘(κj−1 ∧ µ ∧ η)
Then by Lemma 4,
Ner(Kj ) ≥ µ ∧ η ∧ ℘(Ner(Kj−1 ) ∧ Ner(F (Kj−1 )) ∧ µ ∧ η)
≥ µ ∧ η ∧ ℘(ϕj−1 ∧ κj−1 ∧ µ ∧ η ∧ ℘(κj−1 ∧ µ ∧ η))
= µ ∧ η ∧ ℘(ϕj−1 ∧ κj−1 ∧ µ ∧ η).
(16)
We claim that
Ner(Kj ) ≥ κj ∧ µ ∧ η ∧ ℘(µ ∧ η).
To show this, let s1 , s2 ∈ Σ∗ and assume that s1 ≡ s2 (mod κj ∧ µ ∧ η ∧ ℘(µ ∧ η)). If s1 , s2 ∈ Σ∗ − Kj ,
then for all w ∈ Σ∗ , s1 w, s2 w ∈ Σ∗ − Kj ; hence s1 ≡ s2 (mod Ner(Kj )). Now let s1 , s2 ∈ Kj . By
Lemma 4 we derive that for all s′1 ∈ [s1 ] ∩ C ∩ M and s′2 ∈ [s2 ] ∩ C ∩ M , s′1 , s′2 ∈ Kj . Since
Kj ⊆ F (Kj−1 ) ⊆ Kj−1 ,
{Pϕj−1 ∧κj−1 ∧µ∧η (s′1 ) | s′1 ∈ [s1 ] ∩ C ∩ M }
={Pϕj−1 ∧κj−1 ∧µ∧η (s′2 ) | s′2 ∈ [s2 ] ∩ C ∩ M }.
Namely s1 ≡ s2 (mod ℘(ϕj−1 ∧ κj−1 ∧ µ ∧ η)). This implies that s1 ≡ s2 (mod Ner(Kj )) by (16). Therefore
the above claim is established, and the induction step is completed.
C. Effective Computability of Ω
We conclude this section by showing that the iteration scheme in (10) yields an effective procedure
for the computation of sup O(C), when the given languages M and C are regular. For this, owing to
Theorem 1, it suffices to prove that the operator Ω in (9) is effectively computable.
Recall that a language L ⊆ Σ∗ is regular if and only if there exists a finite-state automaton G =
(Q, Σ, δ, q0 , Qm ) such that
Lm (G) = {s ∈ Σ∗ | δ(q0 , s) ∈ Qm } = L.
Let O : (P wr(Σ∗ ))k → (P wr(Σ∗ )) be an operator that preserves regularity; namely L1 , ..., Lk regular
implies O(L1 , ..., Lk ) regular. We say that O is effectively computable if from each k-tuple (L1 , ..., Lk )
of regular languages, one can construct a finite-state automaton G with Lm (G) = O(L1 , ..., Lk ).
The standard operators of language closure, complement (for a language L ⊆ Σ∗, its complement, written L^c, is Σ∗ − L), union, and intersection all preserve regularity
and are effectively computable [6]. Moreover, both the operator sup N : Pwr(Σ∗) → Pwr(Σ∗) given by
sup N(L) := ⋃{L′ ⊆ L | [L′] ∩ H = L′},   for some fixed H ⊆ Σ∗,
and the operator sup F : Pwr(Σ∗) → Pwr(Σ∗) given by
sup F(L) := ⋃{L′ ⊆ L | L′ is prefix-closed}
preserve regularity and are effectively computable (see [4] and [10], respectively).
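Assuming, as in the reconstruction above, that sup F returns the supremal prefix-closed sublanguage, its brute-force analogue on a finite explicit language is a one-line filter; the set-of-strings representation is an illustrative assumption, not from the paper.

```python
from typing import Set

def prefixes(s: str) -> Set[str]:
    """All prefixes of s, including the empty string and s itself."""
    return {s[:i] for i in range(len(s) + 1)}

def sup_F(L: Set[str]) -> Set[str]:
    """Largest prefix-closed sublanguage of a finite language L."""
    return {s for s in L if prefixes(s) <= L}

print(sup_F({"", "a", "ab", "bc"}))  # drops "bc" since its prefix "b" is missing
```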
The main result of this subsection is the following theorem.
Theorem 2. Suppose that M and C are regular. Then the operator Ω in (9) preserves regularity and is
effectively computable.
The following proposition is a key fact.
Proposition 5. For each K ⊆ Σ∗,
F(K) = K ∩ sup F(⋂{sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c | σ ∈ Σ}).
Proof. By (8) and (6),
F(K) = {s ∈ K | ⋃{[s ∩ C.σ] ∩ C.σ | σ ∈ Σ} ∩ M ⊆ K}.
Hence
s ∈ F(K) ⇔ s ∈ K and ⋃{[s ∩ C.σ] ∩ C.σ | σ ∈ Σ} ∩ M ⊆ K
⇔ s ∈ K and ⋃{[s ∩ C.σ] ∩ C.σ | σ ∈ Σ} ⊆ K ∪ (M)^c
⇔ s ∈ K and (∀σ ∈ Σ) [s ∩ C.σ] ∩ C.σ ⊆ K ∪ (M)^c
⇔ s ∈ K and (∀σ ∈ Σ) [s ∩ C.σ] ⊆ K ∪ (M)^c ∪ (C.σ)^c
⇔ s ∈ K and (∀σ ∈ Σ) [s ∩ C.σ] ⊆ K ∪ (M ∩ C.σ)^c
⇔ s ∈ K and (∀σ ∈ Σ) s ∩ C.σ ⊆ sup N(K ∪ (M ∩ C.σ)^c)
⇔ s ∈ K and (∀σ ∈ Σ) s ⊆ sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c
⇔ s ∈ K and s ⊆ ⋂{sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c | σ ∈ Σ}
⇔ s ∈ K and s ∈ sup F(⋂{sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c | σ ∈ Σ})
⇔ s ∈ K ∩ sup F(⋂{sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c | σ ∈ Σ}).
We also need the following lemma.
Lemma 5. Let σ ∈ Σ be fixed. Then the operator Bσ : P wr(Σ∗ ) → P wr(Σ∗ ) given by
Bσ (L) := L.σ = {sσ | s ∈ L}
preserves regularity and is effectively computable.
Proof. Let G = (Q, Σ, δ, q0 , Qm ) be a finite-state automaton with Lm (G) = L. We will construct a
new finite-state automaton H such that Lm (H) = Bσ (L). The construction is in two steps. First, let q ∗
be a new state (i.e. q∗ ∉ Q), and define G′ = (Q′, Σ, δ′, q0, Q′m) where
Q′ := Q ∪ {q ∗ },
δ′ := δ ∪ {(q, σ, q ∗ )|q ∈ Q},
Q′m := {q ∗ }.
Thus G′ is a finite-state automaton with Lm (G′ ) = Bσ (L). However, G′ is nondeterministic, inasmuch
as δ′ (q, σ) = {q ′ , q ∗ } whenever δ(q, σ) is defined and δ(q, σ) = q ′ . The second step is hence to apply the
standard subset construction to convert the nondeterministic G′ to a deterministic finite-state automaton
H with Lm (H) = Lm (G′ ) = Bσ (L). This completes the proof.
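A minimal, self-contained sketch of this two-step construction, using a dictionary-based automaton encoding (an illustrative representation, not from the paper): a fresh marked state q∗ is added and reached by new σ-transitions, and the result is determinized by the standard subset construction. In this sketch the new σ-transitions are attached to the marked states of the input automaton, which is what makes the marked language of the output equal Lm(G).σ = Bσ(L).

```python
from itertools import chain

def b_sigma(dfa, sigma):
    """DFA for B_sigma(L) = L.sigma, where L = Lm(dfa).

    A DFA is a dict with keys "alphabet", "delta" ((state, event) -> state, partial),
    "q0" and "marked" (set of states)."""
    qstar = object()  # fresh marked state, distinct from all existing states

    def moves(state, event):
        """Nondeterministic successor set in the intermediate automaton G'."""
        out = []
        if (state, event) in dfa["delta"]:
            out.append(dfa["delta"][(state, event)])
        if event == sigma and state in dfa["marked"]:
            out.append(qstar)   # new sigma-transition into q*
        return out

    alphabet = set(dfa["alphabet"]) | {sigma}
    start = frozenset({dfa["q0"]})
    states, delta, todo = {start}, {}, [start]
    while todo:  # subset construction over reachable subsets only
        S = todo.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(moves(q, a) for q in S))
            if not T:
                continue
            delta[(S, a)] = T
            if T not in states:
                states.add(T)
                todo.append(T)
    return {"alphabet": alphabet, "delta": delta, "q0": start,
            "marked": {S for S in states if qstar in S}}

# Toy check: L = {"a"} over {a}; then L.b = {"ab"}.
G = {"alphabet": {"a"}, "delta": {("s0", "a"): "s1"}, "q0": "s0", "marked": {"s1"}}
H = b_sigma(G, "b")
state = H["q0"]
for ev in "ab":
    state = H["delta"][(state, ev)]
print(state in H["marked"])  # True
```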
Finally we present the proof of Theorem 2.
Proof of Theorem 2. By Proposition 5 and the definition of Ω : P wr(Σ∗ ) → P wr(Σ∗ ) in (9), for each
K ⊆ Σ∗ we derive
Ω(K) = sup N(K ∩ sup F(⋂{sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c | σ ∈ Σ})).
Since the language closure, complement, union, intersection, sup N , sup F and C.σ (by Lemma 5)
all preserve regularity and are effectively computable, the same conclusion for the operator Ω follows
immediately.
In the proof, we see that the operator Ω in (9) is decomposed into a sequence of standard or well-known
language operations. This allows straightforward implementation of Ω using off-the-shelf algorithms.
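The decomposition just described translates directly into code once automaton-level implementations of the primitive operations are available. The sketch below only records the composition; the primitives (complement, union, intersection, sup N with respect to the fixed H = C ∩ M, and sup F), the languages C.σ, and all names are supplied by the caller and are illustrative assumptions rather than implementations from the paper.

```python
from typing import Callable, Dict, TypeVar

Lang = TypeVar("Lang")  # opaque handle for a regular language (e.g. a finite automaton)

def omega(K: Lang,
          M: Lang,
          C_dot: Dict[str, Lang],                 # sigma -> C.sigma (computable by Lemma 5)
          complement: Callable[[Lang], Lang],
          intersect: Callable[[Lang, Lang], Lang],
          union: Callable[[Lang, Lang], Lang],
          sup_N: Callable[[Lang], Lang],          # w.r.t. the fixed H = C ∩ M
          sup_F: Callable[[Lang], Lang]) -> Lang:
    """One application of Omega, composed exactly as in the displayed formula above:
       Omega(K) = sup N( K ∩ sup F( ⋂_σ [ sup N(K ∪ (M ∩ C.σ)^c) ∪ (C.σ)^c ] ) )."""
    if not C_dot:
        raise ValueError("the alphabet must be nonempty")
    inner = None
    for C_sigma in C_dot.values():
        term = union(sup_N(union(K, complement(intersect(M, C_sigma)))),
                     complement(C_sigma))
        inner = term if inner is None else intersect(inner, term)
    return sup_N(intersect(K, sup_F(inner)))
```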
IV. RELATIVE OBSERVABILITY AND CONTROLLABILITY
For the purpose of supervisory control under partial observation, we combine relative observability
with controllability and provide a fixpoint characterization of the supremal relatively observable and
controllable sublanguage.
Let the alphabet Σ be partitioned into Σc , the subset of controllable events, and Σu , the subset of
uncontrollable events. For the given M and C , we say that C is controllable with respect to M if
CΣu ∩ M ⊆ C.
Whether or not C is controllable, write C(C) for the family of all controllable sublanguages of C . Then
the supremal element sup C(C) exists and is effectively computable [10].
Now write CO(C) for the family of controllable and C-observable sublanguages of C. Note that the family CO(C) is nonempty inasmuch as the empty language is a member. Thanks to the closed-under-union property of both controllability and C-observability, the supremal controllable and C-observable sublanguage sup CO(C) therefore exists and is given by
sup CO(C) := ⋃{K | K ∈ CO(C)}.    (17)
Define the operator Γ : P wr(Σ∗ ) → P wr(Σ∗ ) by
Γ(K) := sup O(sup C(K)).
(18)
The proposition below characterizes sup CO(C) as the largest fixpoint of Γ.
Proposition 6. sup CO(C) = Γ(sup CO(C)), and sup CO(C) ⊇ K for every K such that K = Γ(K).
Proof. Since sup CO(C) ∈ CO(C), i.e. both controllable and C -observable,
Γ(sup CO(C)) = sup O(sup C(sup CO(C)))
= sup O(sup CO(C))
= sup CO(C).
Next let K be such that K = Γ(K). To show that K ⊆ sup CO(C), it suffices to show that K ∈ CO(C).
Let H := sup C(K); thus H ⊆ K . On the other hand, from K = Γ(K) = sup O(H) we have K ⊆ H .
Hence K = H . It follows that K = sup C(K) and K = sup O(K), which means that K is both
controllable and C -observable. Therefore we conclude that K ∈ CO(C).
In view of Proposition 6, we compute sup CO(C) by iteration of Γ as follows:
(∀j ≥ 1) Kj = Γ(Kj−1 ),
K0 = C.
(19)
It is readily verified that Γ(K) ⊆ K , and thus
K0 ⊇ K1 ⊇ K2 ⊇ · · ·
Namely the sequence {Kj } (j ≥ 1) is a monotone (descending) sequence of languages. Recalling the
notation from Section III-A, we have the following key result.
Proposition 7. Consider the sequence {Kj } generated by (19) and let ρ = µ ∧ η ∧ ℘(µ ∧ η). Then for
each j ≥ 1, Kj is ρ-supported on Kj−1 .
Proof. Write Hj := sup C(Kj−1 ) and ψj := {Hj , Σ∗ − Hj } for j ≥ 1. Then by [10, p. 642] there
holds
Ner(Hj ) ≥ ψj ∧ µ ∧ Ner(Kj−1 ).
We claim that for j ≥ 1,
Ner(Kj ) ≥ κj ∧ µ ∧ η ∧ ℘(µ ∧ η).
We prove this claim by induction. For the base case (j = 1),
Ner(H1 ) ≥ ψ1 ∧ µ ∧ Ner(K0 )
= ψ1 ∧ µ ∧ η
Since K1 = sup O(H1 ), we set up the following sequence to compute K1 :
(∀i ≥ 1) Ti = Ω(Ti−1),   T0 = H1.
Following the derivations in the proof of Proposition 4, it is readily shown that each Ti is ρ-supported
on H1 ; in particular,
Ner(K1 ) ≥ κ1 ∧ ρ ∧ Ner(H1 )
≥ κ1 ∧ ψ1 ∧ µ ∧ η ∧ ℘(µ ∧ η)
= κ1 ∧ µ ∧ η ∧ ℘(µ ∧ η).
This confirms the base case.
For the induction step, suppose that for j ≥ 2, there holds
Ner(Kj−1 ) ≥ κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η).
Thus
Ner(Hj ) ≥ ψj ∧ µ ∧ Ner(Kj−1 )
≥ ψj ∧ κj−1 ∧ µ ∧ η ∧ ℘(µ ∧ η)
= ψj ∧ µ ∧ η ∧ ℘(µ ∧ η).
Again set up a sequence to compute Kj as follows:
(∀i ≥ 1) Ti = Ω(Ti−1 ),
T0 = Hj .
We derive by similar calculations as in Proposition 4 that each Ti is ρ-supported on Hj ; in particular,
Ner(Kj ) ≥ κj ∧ ρ ∧ Ner(Hj )
≥ κj ∧ ψj ∧ µ ∧ η ∧ ℘(µ ∧ η)
= κj ∧ µ ∧ η ∧ ℘(µ ∧ η).
Therefore the induction step is completed, and the above claim is established. Then it follows immediately
Ner(Kj ) ≥ κj ∧ µ ∧ η ∧ ℘(µ ∧ η) ∧ Ner(Kj−1 )
= κj ∧ ρ ∧ Ner(Kj−1 ).
Namely, Kj is ρ-supported on Kj−1 , as required.
The following theorem is the main result of this section, which follows directly from Proposition 7
and Lemma 2.
Fig. 1. Example: computation of the supremal C-observable sublanguage sup O(C) by iteration of the operator Ω in (9). (The figure displays the automaton G with states q0, . . . , q11, initial state q0 and marker states as indicated, observable events Σo = {α, γ, σ}, unobservable events Σuo = {β1, β2, β3, β4, β5}, and natural projection P : (Σo ∪ Σuo)∗ → Σ∗o.)
Theorem 3. Consider the sequence {Kj } in (19), and suppose that the given languages M and C
are regular. Then the sequence {Kj } is finitely convergent to sup CO(C), and sup CO(C) is a regular
language with
|| sup CO(C)|| ≤ ||M|| · ||C|| · 2^{||M||·||C||} + 1.
Finally, sup CO(C) is effectively computable, inasmuch as the operators sup C(·) and sup O(·) are
(see [10] and Theorem 2, respectively). In particular, the operator Γ in (18) is effectively computable.
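As a minimal sketch of this computation scheme, the iteration (19) can be written as a generic fixpoint loop, with sup C, sup O and a language-equality test supplied as callables; all names here are illustrative assumptions, not from the paper.

```python
from typing import Callable, TypeVar

Lang = TypeVar("Lang")  # opaque handle for a regular language

def sup_CO(C: Lang,
           sup_C: Callable[[Lang], Lang],
           sup_O: Callable[[Lang], Lang],
           equal: Callable[[Lang, Lang], bool],
           max_iter: int = 10_000) -> Lang:
    """Iterate Gamma(K) = sup O(sup C(K)) from K0 = C, as in (18)-(19), until the
    language stabilizes.  Theorem 3 guarantees finite convergence (to sup CO(C))
    when M and C are regular; max_iter is only a safety net.  Language equality is
    passed in because it depends on the representation (for automata it is decidable)."""
    K = C
    for _ in range(max_iter):
        K_next = sup_O(sup_C(K))
        if equal(K_next, K):
            return K
        K = K_next
    raise RuntimeError("iteration did not stabilize within max_iter steps")
```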
V. EXAMPLES
In this section, we first give an example to illustrate the computation of the supremal C -observable
sublanguage sup O(C) (by iteration of the operator Ω). Then we present an empirical study on the
computation of the supremal controllable and C -observable sublanguage sup CO(C) (by iteration of the
operator Γ, which has been implemented by a computer program).
A. An Example of Computing sup O(C)
Consider the example displayed in Fig. 1. The observable event set is Σo = {α, γ, σ} and unobservable
Σuo = {β1 , β2 , β3 , β4 , β5 }; thus the natural projection is P : (Σo ∪ Σuo )∗ → Σ∗o . Let
M := Lm (G) = {ǫ, α, γ, ασ, γσ, β1 ασ, β2 α, β2 αβ5 σ, β3 γ,
β3 γβ5 σ, β4 , β4 α, β4 γ, β4 αβ5 , β4 γβ5 }
and the specification language
C := M − {β4 αβ5 , β4 γβ5 }.
Both M and C are regular languages.
Now apply the operator Ω in (9). Initialize K0 = C . The first iteration j = 1 starts with
F (K0 ) = {s ∈ K0 | D(s) ∩ M ⊆ K0 }
= {ǫ, α, γ, ασ, γσ, β1 , β1 α, β1 ασ, β2 , β2 α, β3 , β3 γ, β4 , β4 α, β4 γ}
= K0 − {β2 αβ5 , β2 αβ5 σ, β3 γβ5 , β3 γβ5 σ}.
Note that since β2αβ5σ ∈ K0, strings β2αβ5, β2αβ5σ ∈ K0. But β2αβ5, β2αβ5σ ∉ F(K0); this is because the string β4αβ5 belongs to D(β2αβ5) ∩ M and D(β2αβ5σ) ∩ M, but β4αβ5 does not belong to K0. For the same reason, β3γβ5, β3γβ5σ ∈ K0 but β3γβ5, β3γβ5σ ∉ F(K0). Next calculate
F (K0 ) ∩ K0 = {ǫ, α, γ, ασ, γσ, β1 ασ, β2 α, β3 γ, β4 , β4 α, β4 γ}
= K0 − {β2 αβ5 σ, β3 γβ5 σ}.
Removing strings β2 αβ5 σ, β3 γβ5 σ from K0 makes F (K0 ) ∩ K0 not normal with respect to C ∩ M .
Indeed, ασ, β1 ασ ∈ [β2 αβ5 σ] ∩ C ∩ M and γσ ∈ [β3 γβ5 σ] ∩ C ∩ M violate the normality condition and
therefore must also be removed. Hence,
K1 = sup N (F (K0 ) ∩ K0 , C ∩ Lm (G))
= {ǫ, α, γ, β2 α, β3 γ, β4 , β4 α, β4 γ}
= (F (K0 ) ∩ K0 ) − {ασ, β1 ασ, γσ}.
This completes the first iteration j = 1.
Since K1 ⊊ K0, we proceed to j = 2,
F (K1 ) = {s ∈ K1 | D(s) ∩ M ⊆ K1 }
= {ǫ, γ, β2 , β3 , β3 γ, β4 , β4 γ}
= K1 − {α, β2 α, β4 α}.
We see that α, β2α, β4α ∈ K1 but α, β2α, β4α ∉ F(K1). This is because the string β1α belongs to D(α) ∩ M, D(β2α) ∩ M, and D(β4α) ∩ M, but β1α ∉ K1. Note that β1α was in K0 since β1ασ ∈ K0, but β1ασ was removed so as to ensure normality of K1; this in turn removed β1α, which now causes removal of strings α, β2α, β4α altogether. Continuing,
F (K1 ) ∩ K1 = {ǫ, γ, β3 γ, β4 , β4 γ}
= K1 − {α, β2 α, β4 α}.
Removing strings α, β2 α, β4 α does not destroy normality of K1 . Indeed F (K1 ) ∩ K1 is normal with
respect to C ∩ M and we have
K2 = sup N (F (K1 ) ∩ K1 , C ∩ M )
= {ǫ, γ, β3 γ, β4 , β4 γ}
= F (K1 ) ∩ K1 .
This completes the second iteration j = 2.
Since K2 ⊊ K1, we proceed to j = 3 as follows:
F (K2 ) = {s ∈ K2 | D(s) ∩ M ⊆ K2 }
= {ǫ, γ, β3 , β3 γ, β4 , β4 γ} = K2 ;
F (K2 ) ∩ K2 = K2 ∩ K2 = K2 ;
K3 = sup N (F (K2 ) ∩ K2 , C ∩ M )
= sup N (K2 , C ∩ M ) = K2 .
Since K3 = K2 , the limit of the sequence in (10) is reached. Therefore
K3 = {ǫ, γ, β3 γ, β4 , β4 γ}
is the supremal C -observable sublanguage of C .
B. A Case Study of Computing sup CO(C)
Consider the same case study as in [3, Section V-B], namely a manufacturing workcell served by five
automated guided vehicles (AGV). Adopting the same settings, we apply the implemented Γ operator to
compute the supremal relatively observable and controllable sublanguage sup CO(C), as represented by
a finite-state automaton, say SUPO. That is,
Lm (SUPO) = sup CO(C).
For this case study, the full-observation supervisor (representing the supremal controllable sublanguage) has 4406 states and 11338 transitions. Selecting different subsets of unobservable events, the
TABLE I. SUPO COMPUTED FOR DIFFERENT SUBSETS OF UNOBSERVABLE EVENTS IN THE AGV CASE STUDY

Σuo = Σ − Σo                      | State #, transition # of SUPO
{13}                              | (4406, 11338)
{21}                              | (4348, 10810)
{31}                              | (4302, 11040)
{43}                              | (4319, 10923)
{51}                              | (4400, 11296)
{12, 31}                          | (1736, 4440)
{24, 41}                          | (4122, 10311)
{31, 43}                          | (4215, 10639)
{32, 51}                          | (2692, 6596)
{41, 51}                          | (3795, 9355)
{11, 31, 41}                      | (163, 314)
{12, 33, 51}                      | (94, 140)
{12, 24, 33, 44, 53}              | (72, 112)
{12, 21, 32, 43, 51}              | (166, 314)
{13, 23, 31, 33, 41, 43, 51, 53}  | (563, 1244)
computational results for the supremal relatively observable and controllable sublanguages, or SUPO,
are listed in Table I. We see in all cases but the first (Σuo = {13}) that the state and transition numbers of
SUPO are fewer than those of the full-observation supervisor. When Σuo = {13}, in fact, the supremal
controllable sublanguage is already observable, and is therefore itself the supremal relatively observable
and controllable sublanguage.
Moreover, we have confirmed that the computation results agree with those by the algorithm in [3].
Thus the new computation scheme provides a useful alternative that can be used to cross-check results for consistency.
VI. CONCLUSIONS
We have presented a new characterization of relative observability, and an operator on languages whose
largest fixpoint is the supremal relatively observable sublanguage. In the case of regular languages and
based on the support relation, we have proved that the sequence of languages generated by the operator
converges finitely to the supremal relatively observable sublanguage, and the operator is effectively
computable.
Moreover, for the purpose of supervisory control under partial observation, we have presented a second
operator that in the regular case effectively computes the supremal relatively observable and controllable
sublanguage. Finally we have presented an example and a case study to illustrate the effectiveness of the
proposed computation schemes.
REFERENCES
[1] R. D. Brandt, V. Garg, R. Kumar, F. Lin, S. I. Marcus, and W. M. Wonham. Formulas for calculating supremal controllable and normal sublanguages. Systems & Control Letters, 15(2):111–117, 1990.
[2] K. Cai and W. M. Wonham. A new algorithm for computing the supremal relatively observable sublanguage. In Proc. Workshop on Discrete-Event Systems, pages 8–13, Xi’an, China, 2016.
[3] K. Cai, R. Zhang, and W. M. Wonham. Relative observability of discrete-event systems and its supremal sublanguages. IEEE Trans. Autom. Control, 60(3):659–670, 2015.
[4] H. Cho and S. I. Marcus. On supremal languages of classes of sublanguages that arise in supervisor synthesis problems with partial observation. Math. of Control, Signals, and Systems, 2(1):47–69, 1989.
[5] R. Cieslak, C. Desclaux, A. S. Fawaz, and P. Varaiya. Supervisory control of discrete-event processes with partial observations. IEEE Trans. Autom. Control, 33(3):249–260, 1988.
[6] S. Eilenberg. Automata, Languages and Machines, Volume A. Academic Press, 1974.
[7] J. E. Hopcroft and J. D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, 1979.
[8] F. Lin and W. M. Wonham. On observability of discrete-event systems. Inform. Sci., 44(3):173–198, 1988.
[9] W. M. Wonham. Supervisory Control of Discrete-Event Systems. Systems Control Group, Dept. of Electrical and Computer Engineering, University of Toronto, updated annually 1998–2016. Available online at http://www.control.toronto.edu/DES, 2016.
[10] W. M. Wonham and P. J. Ramadge. On the supremal controllable sublanguage of a given language. SIAM J. Control and Optimization, 25(3):637–659, 1987.
PROPERTIES OF EXTENDED ROBBA RINGS
arXiv:1709.06221v1 [math.NT] 19 Sep 2017
PETER WEAR
Abstract. We extend the analogy between the extended Robba rings of p-adic Hodge
theory and the one-dimensional affinoid algebras of rigid analytic geometry, proving some
fundamental properties that are well known in the latter case. In particular, we show
that these rings are regular and excellent. The extended Robba rings are of interest as
they are used to build the Fargues-Fontaine curve.
1. Introduction
Since being introduced in [6], the Fargues-Fontaine curve has quickly become an important object in number theory. Given a finite extension K of Qp with Galois group GK ,
an important result from p-adic Hodge theory gives an equivalence of categories between
continuous representations of GK on finite free Zp -modules and étale (φ, Γ)-modules over
the period ring AK (see [2] for an exposition of this result). In [6], Fargues and Fontaine
describe this category in terms of vector bundles on the scheme-theoretic Fargues-Fontaine
curve.
There is also an adic version of the Fargues-Fontaine curve; this is an analytification satisfying a version of the GAGA principle, as seen in [14, §4.7]. Both versions of the curve
parametrize the untilts of characteristic p perfectoid fields. Recently, Fargues has formulated a conjecture using the curve to link p-adic Hodge theory, the geometric Langlands
program and the local Langlands correspondence [5].
The adic version of the curve is built out of extended Robba rings. These rings appear
in p-adic Hodge theory ([13], for example). In [11], Kedlaya proved that they are strongly
noetherian. Kiehl’s theory of coherent sheaves on rigid analytic spaces has been extended
to a similar theory on adic spaces by Kedlaya and Liu in [14] and on rigid geometry by
Fujiwara and Kato in [7]. The strong noetherian property is required to fit the curve into
this theory.
This work suggests an analogy between the extended Robba rings and one-dimensional
affinoid algebras. In [11], Kedlaya established some finer properties of the rings suggested
by this analogy and listed some other expected properties [11, Remark 8.10]. In this paper,
we establish these properties. We hope that the extension of this analogy will help transfer
results from the theory of rigid analytic spaces to the Fargues-Fontaine curve.
We now give an outline of this paper. The extended Robba rings are completions of
rings of generalized Witt vectors. In [10], Kedlaya gave a classification of the points of the
Berkovich space associated to W (R) - the ring of Witt vectors over any perfect Fp -algebra
R. His proof demonstrates a close analogy between W (R) and the polynomial ring R[T ]
equipped with the Gauss norm. We first extend this classification to the generalized Witt
vectors by exploiting the many shared functorial properties of the two constructions. We
then consider higher rank valuations to get a description of the corresponding adic space,
again taking advantage of the analogy to R[T ]. Completing, we get the classification for
the extended Robba rings.
Using this explicit classification, we can prove that the rank-1 valuations of these rings
are dense in the constructible topology of the adic spectrum. This allows us to compute
the power bounded elements of a rational localization of these rings. We then extend these
results to the rings defined by étale morphisms of extended Robba rings.
Finally, we prove a form of the Nullstellensatz and use this to prove regularity and
excellence for these rings. In characteristic zero, our proof of excellence is an adaption of
the proof in Matsumura’s book [20, Theorem 101] that the ring of convergent power series
over R or C is excellent. In particular, we work with the derivations of the rings, proving a
Jacobian criterion. In the characteristic p case, excellence follows from a theorem of Kunz
[17, Theorem 2.5].
1.1. Acknowledgements. The author would like to thank Kiran Kedlaya for suggesting
these questions and for many helpful conversations. The author gratefully acknowledges
the support of NSF grant DMS-1502651 and UCSD.
2. Generalized Witt Vectors
Throughout this paper, we will be working in the same setup as [11]. Fix a prime p
and a power q of p. Let L be a perfect field containing Fq , complete for a multiplicative
nonarchimedean norm | • |, and let E be a complete discretely valued field with residue
field containing Fq and uniformizer ̟ ∈ E. Let oL and oE be the corresponding valuation
subrings and write W(oL)E := W(oL) ⊗_{W(Fq)} oE. Concretely, each element of W(oL)E can be uniquely written in the form Σ_{i≥0} ̟^i[x_i] with x_i ∈ oL.
This ring is treated at length in [6, Sections 5-6] with the notation WoE (oF ). Alternately,
W (oL )E is a ring of generalized Witt vectors as described in [3, Section 2] with the notation
W̟ (oL ). The generalized Witt vectors retain many useful properties of the usual p-typical
Witt vectors. In this section, we briefly go over the results we will need in the rest of the
paper.
Recall that given a perfect Fp-algebra R, we can define W(R) functorially as the unique strict p-ring W(R) for which W(R)/(p) ≅ R. The analogous statement is true for W(oL)E.
Lemma 2.1. We have W (oL )E /(̟) = oL , and W (oL )E is ̟-torsion-free, ̟-adically
complete and separated.
Proof. [3, Proposition 2.12]
Lemma 2.2. The addition law is given by
Σ_{i≥0} [x_i]̟^i + Σ_{i≥0} [y_i]̟^i = Σ_{i≥0} [z_i]̟^i
where z_i is a polynomial in x_j^{q^{j−i}}, y_j^{q^{j−i}} for j = 0, . . . , i. This polynomial has integer coefficients and is homogeneous of degree 1 for the weighting in which x_j, y_j have degree 1. The analogous statement is true for multiplication.
Proof. [6, Remarque 5.14].
We conclude this section with a result on factoring in W (oL )E .
Definition 2.3. An element x ∈ W(oL)E is stable if it has the form Σ_{i=0}^∞ ̟^i[x_i] with either |x_i| = 0 for all i ≥ 0 or |x_0| > p^{-i}|x_i| for all i > 0.
Theorem 2.4. Assume that L is algebraically closed. For x ∈ W (oL )E nonzero and not
stable, we can write x = y(̟ − [u1 ]) · · · (̟ − [un ]) for some nonzero stable y ∈ W (oL )E
and some u1 , . . . , un ∈ oL with |u1 |, . . . , |un | ≤ p−1 .
Proof. [6, Théorème 6.46].
3. The Berkovich spectrum of the generalized Witt vectors
Let R be a perfect Fp -algebra, equipped with the trivial norm. In [10, Theorem 8.17],
Kedlaya gives an explicit classification of the points of M(W (R)). In this section, we will
extend this result to M(W (oL )E ). This entire section follows Kedlaya’s paper extremely
closely, the arguments all carry over fairly directly due to the similarities of the rings W (R)
and W (oL )E stated in Section 2. We therefore explain the differences caused by the change
of rings, but when arguments are essentially identical to the original paper we simply give
a sketch and a reference to the original proof.
We first define an analogue of the Gauss seminorm on W (oL )E .
Lemma 3.1. The function λ : W(oL)E → R given by
λ(Σ_{i=0}^∞ ̟^i[x_i]) = max_i p^{-i}|x_i|
is multiplicative and bounded by the p-adic norm.
Proof. This is the analogue of [10, Lemma 4.1] where the seminorm α (which is | · | in our
case) is assumed to be multiplicative. The proof is identical to the normal Witt vector case,
using the fact that addition and subtraction are defined on the jth Teichmuller component
as homogeneous polynomials of degree q j as in Lemma 2.2.
This acts like the (p−1 )-Gauss seminorm for the generator ̟, and the (r/p)-Gauss
seminorm can be defined by replacing p−i in the above lemma by (r/p)i . We can use this
to build analogues of Gauss seminorms for other generators and weights.
Definition 3.2. Given u ∈ oL with |u| ≤ p^{-1}, let π = ̟ − [u]. Then given r ∈ [0, 1], we define the valuation H(u, r) to be the quotient norm on W(oL)E[T]/(T − π) ≅ W(oL)E induced by the (r/p)-Gauss extension of | · | to W(R)[T].
To show that these valuations are multiplicative and to compute them easily, we define
stable presentations.
Definition 3.3. As L is complete, so is oL, and so W(oL)E is (̟, [u])-adically complete; thus any sum Σ_{i=0}^∞ x_i π^i with x_i ∈ W(oL)E converges to some limit x. We say that the sequence x_0, x_1, . . . forms a presentation of x with respect to u. If each x_i is stable (Definition 2.3), we call this a stable presentation.
Lemma 3.4. For any x ∈ W(oL)E, there exists some y = Σ_{i=0}^∞ ̟^i[y_i] ∈ W(oL)E with x ≡ y (mod π) and |y_0| ≥ |y_i| for all i > 0. By definition, y is stable.
Proof. This follows the construction of [10, Lemma 5.5] but is a bit simpler, as our π is of the form ̟ − [u] instead of some general primitive element. For any integer j ≥ 0, we can write
x = Σ_{i=0}^∞ ̟^i[x_i] ≡ Σ_{i=0}^j [u]^i[x_i] + Σ_{i=j+1}^∞ ̟^i[x_i] (mod π).
As j grows, Σ_{i=j+1}^∞ ̟^i[x_i] goes to zero. So either there exists some N > 0 such that |Σ_{i=0}^N [u]^i[x_i]| ≥ |x_n| for all n > N, or the sum Σ_{i=0}^∞ [u]^i[x_i] converges. In the first case, we let y = Σ_{i=0}^N [u]^i[x_i] + Σ_{i=N+1}^∞ ̟^i[x_i]; in the second case, we let y = Σ_{i=0}^∞ [u]^i[x_i].
Lemma 3.5. Every element of W (oL )E admits a stable presentation.
Proof. This is the analogue of [10, Lemma 5.7]. Given x, x_0, x_1, . . . , x_{i−1} ∈ W(oL)E, apply Lemma 3.4 to construct x_i congruent to (x − Σ_{j=0}^{i−1} x_j π^j)/π^i (mod π). This process yields a stable presentation x_0, x_1, . . . of x.
Theorem 3.6. The function H(u, r) is a multiplicative seminorm and bounded by λ. Given any stable presentation x_0, x_1, . . . of x ∈ W(oL)E, H(u, r)(x) = max_i {(r/p)^i λ(x_i)}.
Proof. The proof of [10, Theorem 5.11] carries over exactly as all the needed properties of
presentations in W (R) also hold in W (oL )E .
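For instance, taking u = 0 (so that π = ̟), the Teichmüller expansion x = Σ_{i≥0} ̟^i[x_i] is itself a stable presentation, since each [x_i] is stable; hence Theorem 3.6 gives H(0, r)(x) = max_i (r/p)^i |x_i|, and in particular H(0, 1) = λ.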
The following computation will be useful later.
Corollary 3.7. For u, u′ ∈ oL with |u|, |u′ | ≤ p−1 and r ∈ [0, 1],
H(u, r)(̟ − [u′ ]) = max{r/p, H(u, 0)(̟ − [u′ ])}.
Proof. [10, Lemma 5.13]
Remark 3.8. [10, Remark 5.14] As H(u, 0) is the quotient norm on W (oL )E /(π) induced
by λ, we have H(u, 0)(x) = 0 if and only if x is divisible by π.
Furthermore, any v ∈ M(W (oL )E ) with v(π) = 0 must equal H(u, 0). Given x ∈
W (oL )E , we can construct a stable presentation with respect to π. Then only the first
term of the presentation will affect v(x) as v(π) = 0, so v is exactly H(u, 0) by Theorem
3.6.
Lemma 3.9. For any v ∈ M(W (oL )E ), there exists a perfect overfield L′ of L complete
with respect to a multiplicative nonarchimedean norm extending the one on L and some
u ∈ mL′ \ {0} such that the restriction of H(u, 0) to W (oL )E equals v.
Proof. This is shown in [11, Lemma 6.3] for the rings B^I_{L,E} defined in 5.1; the exact same proof will work for W(oL)E. The analogous construction for p-typical Witt vectors is in [10, Definition 7.5].
[10, Definition 7.5].
This lemma is very important as it allows us to reduce our study of general seminorms
of W (oL )E to those in Definition 3.2.
Definition 3.10. Given v ∈ M(W (oL )E ) and ρ ∈ [0, 1], choose L′ , u as in 3.9 and define
H(v, ρ) to be the restriction of H(u, ρ) to W (oL )E . We define the radius of v to be the
largest ρ ∈ [0, 1] for which H(v, ρ) = v. This is well defined by continuity.
Lemma 3.11. This definition doesn’t depend on L′ or u and defines a continuous map
H : M(W (oL )E ) × [0, 1] → M(W (oL )E ) such that
H(H(v, ρ), σ) = H(v, max{ρ, σ})   (v ∈ M(W(oL)E); ρ, σ ∈ [0, 1]).
Proof. [10, Theorem 7.8]
Now let L̃ be a completed algebraic closure of L, there is a unique multiplicative extension
of | · | to L̃ so we will continue to call this | · |. Let oL̃ be the valuation ring of L̃ and equip
W (oL̃ ) with the multiplicative norm λ̃.
Definition 3.12. For u ∈ oL̃ with |u| ≤ p−1 and r ∈ [0, 1], let β̃u,r be the valuation H(u, r)
and let βu,r be the restriction of β̃u,r to W (oL )E .
Remark 3.13. There is a natural analogue of β̃u,r in M(K[T ]) where K is an algebraically
closed field. In that case, the seminorm can be identified with the supremum norm over
the closed disc in C of center u and radius r. An analogous statement holds here, Lemma
3.15 implies that β̃u,r dominates the supremum norm. We won’t use or prove this fact, but
it may be helpful for intuition.
We now give a very brief exposition of some useful properties of the valuations β̃u,r and
βu,r that are needed for the classification. All proofs are now identical to those in [10]. By
[10, Lemma 8.3], we have β̃u,r = β̃u′ ,r if and only if r/p ≥ β̃u′ ,0 (̟ − [u]). We can therefore
replace the center u of β̃u,r with a nearby element u′ ∈ oL̃ [10, Corollary 8.4] which we
can choose to be integral over oL [10, Corollary 8.5]. This integrality allows us to move
to βu,r : factoring the minimal polynomial of u reduces computations to checking things of
the form p − [ui ]. This type of argument implies that the radius works as expected, the
radius of βu,r is r [10, Corollary 8.8].
This brings us to the key lemma for our classification.
Lemma 3.14. Given v ∈ M(W (oL )E ) with radius r and s ∈ (r, 1], there exists u ∈ oL̃
with |u| ≤ p−1 for which H(v, s) = βu,s .
Proof. The full proof is given in [10, Lemma 8.10], we will give a sketch. The set of s such
that H(v, s) = βu,s for some u ∈ oL̃ is up-closed and nonempty. Let t be its infimum, we
will check that t ≤ r.
PROPERTIES OF EXTENDED ROBBA RINGS
6
By Lemma 3.9, we can expand L to some L′ and find w ∈ mL′ \ {0} such that v is the
restriction of H(w, 0) to W (oL )E . Taking an algebraic closure L′ lets us identify oL with
a subring of oL′ , so we can compare βw,s with any βu,s ∈ M(W (oL̃ )). If s < t then these
valuations must be distinct, so s/p < βw,0 (p − [u]).
By the computation in Corollary 3.7,
βw,s (p − [u]) = max{s/p, βw,0 (p − [u])} = βw,0 (p − [u])
when s ≤ t, so for these elements the valuation doesn’t depend on s. But because L′
is algebraically closed, by Theorem 2.4 we can factor every element of W (oL )E into the
product of a stable element and finitely many p − [ui ] . As all of the terms of this product
are independent of s, we conclude that βw,s = βw,0 = v for all s ∈ [0, t] and so t ≤ r as
desired.
Lemma 3.15. For u ∈ oL̃ with |u| ≤ p−1 and r ∈ [0, 1], let D(u, r) be the set of βv,0
dominated by βu,r . Then for r, s ∈ [0, 1], D(u, r) = D(u, s) if and only if r = s.
Proof. [10, Lemma 8.16]
Theorem 3.16. Each element of M(W(oL)E) is of exactly one of the following four types.
(1) A point of the form βu,0 for some u ∈ oL̃ with |u| ≤ p^{-1}. Such a point has radius 0.
(2) A point of the form βu,r for some u ∈ oL̃ with |u| ≤ p^{-1} and some r ∈ (0, 1] such that r/p is the norm of an element of oL̃. Such a point has radius r.
(3) A point of the form βu,r for some u ∈ oL̃ with |u| ≤ p^{-1} and some r ∈ (0, 1) such that r/p is not the norm of an element of oL̃. Such a point has radius r.
(4) The infimum of a sequence βui,ri for which the sequence D(ui, ri) is decreasing with empty intersection. Such a point has radius inf_i ri > 0.
Proof. This is [10, Lemma 8.17], we again give a sketch. Types (i), (ii), (iii) are distinct
as they have different radii, and βu,r cannot be type (iv) because βu,0 would be in each
D(ui , ti ). So the four types of points are distinct and we must check that any β not of the
form βu,s is type (iv).
Let r be the radius of β, choose a sequence 1 ≥ t1 > t2 > · · · with infimum r. Then by
3.14 we have H(β, ti ) = βui ,ti for some ui ∈ oL̃ . Then βu1 ,t1 , βu2 ,t2 , . . . is decreasing with
infimum β, so the sequence D(ui , ti ) is also decreasing. Any u in the intersection would
allow us to write β = βu,r , so the D(ui , ti ) must have empty intersection. The radius must
be nonzero because any decreasing sequence of balls with empty intersection must have
radii bounded below by a nonzero number.
4. The points of the adic spectrum
We will now add in the higher rank valuations, giving a complete description of the adic spectrum Spa(W(oL)E, W(oL)E^◦). In the lecture notes [4, Lecture 11], Conrad gives an explicit description of the points of the adic unit disk Spa(k⟨t⟩, k^◦⟨t⟩) over a non-archimedean field k.
In this section, we adapt this proof to show that all the higher rank points are the natural
equivalent of the type 5 points of the adic unit disk.
Define the abelian group Γ := R>0 × Z with the lexicographical order, the group action
given by (t1 , m1 )(t2 , m2 ) = (t1 t2 , m1 + m2 ). We define 1− := (1, −1), 1+ := (1, 1) = 1/1− ,
r − = r1− , and r + = r1+ . Intuitively, 1− is infinitesimally less than 1.
Definition 4.1. Given u ∈ oL̃ with |u| ≤ p^{-1} and r ∈ (0, 1], we define β̃u,r+ : W(oL̃)E → Γ ∪ {0} by
β̃u,r+(x) = max_i {(r^+/p)^i λ̃(x_i)}
for any stable presentation x_0, x_1, . . . of x with respect to u. We define βu,r+ to be the restriction of β̃u,r+ to W(oL)E, and we define β̃u,r− and βu,r− analogously. We call these the type 5 valuations.
We remark that this definition doesn't depend on the choice of stable presentation by the argument of [10, Theorem 5.11]; this was already used in Theorem 3.6. One can check that these valuations are continuous. If r/p isn't the norm of an element of W(oL̃)E then βx,r+ = βx,r− = βx,r. We will therefore assume from now on that |r/p| ∈ |W(oL̃)_E^×| = |o_L^×|.
Remark 4.2. Unlike the rank one case, the maximum in the definition of β̃u,r+ and β̃u,r−
is attained by a unique element. This is clear as any two terms have different powers of
1+ .
Theorem 4.3. All of the points of Spa(W (oL )E , W (oL )E ◦ ) \ M(W (oL )E ) are of type 5.
Proof. The argument is essentially that of [4, Theorem 11.3.13]; we give a sketch pointing out the differences. We assume that L is algebraically closed; as before, the general case follows by restricting valuations from the algebraic closure. Let v be a valuation of W(oL)E with rank greater than 1. Then there is some x′ ∈ W(oL)E with v(x′) ∉ |o_L^×|. As L is algebraically closed, we can use Theorem 2.4 to factor x′ = y(p − [u_1]) · · · (p − [u_n]) for y ∈ W(oL)E stable and u_i ∈ oL with |u_i| ≤ p^{-1}. We have v(y) ∈ |o_L^×|, so we must have some u_i with v(p − [u_i]) ∉ |o_L^×|. For simplicity we will call this u_i simply u, and define π = p − [u] and γ = v(π). We will show that u must act as our center.
As L is algebraically closed, |o_L^×| is divisible, so γ^m ∉ |o_L^×| for any non-zero integer m. Then given any x ∈ W(oL)E, if we construct a stable presentation x = Σ_{i=0}^∞ x_i π^i, we have
v(x) = max_i v(x_i π^i) = max_i γ^i |x_i|
as the valuations of the terms are pairwise distinct. The rest of the proof now follows exactly as in [4]. One checks that γ must be infinitesimally close to some r ∈ |o_L^×| (if this weren't the case we could construct an order-preserving homomorphism from Γ → R_{>0}, contradicting that Γ is higher rank). Then either v = βu,r+ or βu,r−, depending on whether γ > r or γ < r in Γ.
5. The extended Robba rings
We now define the rings used in the construction of the Fargues Fontaine curve and
extend the classification of 3.16 and 4.3 to them.
Definition 5.1. Following [11, Definition 2.2], we define
AL,E = W (oL )E [[x] : x ∈ L],
BL,E = AL,E ⊗oE E.
Each element of A_{L,E} (resp. B_{L,E}) can be written uniquely in the form Σ_{i∈Z} ̟^i[x_i] for
some xi ∈ L which are zero for i < 0 (resp. for i sufficiently small) and bounded for i
large. The valuations H(0, r) for r ∈ (0, 1] (defined in 3.2) therefore extend naturally to
these rings, allowing us to make the following definition.
Definition 5.2. Let A^r_{L,E} be the completion of A_{L,E} with respect to H(0, r), and define B^r_{L,E} analogously. Given a closed subinterval I = [s, r] of (0, ∞), let λ_I = max{H(0, s), H(0, r)}; this is a power-multiplicative norm as the H(0, r) are multiplicative. Let B^I_{L,E} be the completion of B_{L,E} with respect to λ_I.
Remark 5.3. In [6] the rings B^r_{L,E} are called B^b and the rings B^I_{L,E} are called B_I. In [11] the Gauss norms used to obtain these rings are written in a different form but are equivalent.
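Concretely, for x = Σ_{i∈Z} ̟^i[x_i] ∈ B_{L,E} the extended valuation satisfies H(0, t)(x) = max_i (t/p)^i |x_i|, so for I = [s, r] one has λ_I(x) = max_i max{(s/p)^i, (r/p)^i} |x_i|.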
Proposition 5.4. The points of adic spectra of all the rings defined in 5.1 and 5.2 can be
classified into types 1-5 as in 3.16 and 4.3.
Proof. Given a valuation v ∈ Spa(A_{L,E}, A^◦_{L,E}) (resp. Spa(B_{L,E}, B^◦_{L,E})) and an element x = Σ_{i∈Z} ̟^i[x_i] in A_{L,E} or B_{L,E}, there exists some y ∈ oL and k ≥ 0 such that ̟^k x[y^{-1}] ∈ W(oL)E. As v is by definition multiplicative, we have v(x) = v(̟^k x[y^{-1}]) v([y]) v(̟)^{-k}, so v(x) is uniquely determined by the restriction of v to W(oL)E. Taking completions won't add points to the adic spectrum, so the desired result also follows for A^r_{L,E}, B^r_{L,E}, and B^I_{L,E}.
6. Rational Localizations
We can now use this explicit classification to determine some properties of Spa(B^I_{L,E}, B^{I,◦}_{L,E}) that can be checked on M(B^I_{L,E}). We first set some notation. Let (B^I_{L,E}, B^{I,◦}_{L,E}) → (C, C^+) be a rational localization, so there exist elements f_1, . . . , f_n, g ∈ B_{L,E} generating the unit ideal in B^I_{L,E} such that
Spa(C, C^+) = {v ∈ Spa(B^I_{L,E}, B^{I,◦}_{L,E}) : v(f_i) ≤ v(g) ≠ 0 (i = 1, . . . , n)}.
By [13, Lemma 2.4.13a] we have
C = B^I_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n}/(gT_1 − f_1, . . . , gT_n − f_n)
(as B^I_{L,E} is strongly noetherian [11, Theorem 4.10], this ideal is already closed so we don't need to take the closure) and by definition
C^+ = {x ∈ C : v(x) ≤ 1 (v ∈ Spa(C, C^+))}.
Remark 6.1. By [13, Remark 2.4.7], we can choose the defining elements f_1, . . . , f_n, g of our rational localization to be elements of B_{L,E}. This is convenient, as it can be difficult to work with general elements of B^I_{L,E}: they aren't necessarily all of the form Σ_{i∈Z} ̟^i[x_i].
We start with a useful computation, showing that inequalities coming from a type-5
valuation continue to be true near that valuation.
Lemma 6.2. Given elements x and y in B_{L,E} and a type-5 valuation βu,r+ in Spa(B^I_{L,E}, B^{I,+}_{L,E}),
if βu,r+ (x) ≤ βu,r+ (y) then there exists some real number s > r such that for every
r ′ ∈ (r, s), βu,r′ (x) ≤ βu,r′ (y). If we instead choose βu,r− such that βu,r− (x) ≤ βu,r− (y),
then there exists some real number s < r such that for every r ′ ∈ (s, r), βu,r′ (x) ≤ βu,r′ (y).
Proof. We will just prove the first statement, the other statement follows identically. We
can assume that L is algebraically closed as the other case follows by restricting the valuations. Let π = ̟ − [u] and fix stable presentations x0 , x1 , . . . and y0 , y1 , . . . of x and y, so
βu,r+ (x) = maxi {(r + /p)i λI (xi )} and βu,r+ (y) = maxi {(r + /p)i λI (yi )}. Let j be the unique
(by Remark 4.2) index such that (r + /p)j λI (yj ) = βu,r+ (y).
For any term xi , we have (r + /p)i λI (xi ) ≤ (r + /p)j λI (yj ) by assumption. If i ≤ j then
for all r ′ > r we have (r ′ /p)i λI (xi ) ≤ (r ′ /p)j λI (yj ) so any choice of s will retain the desired
inequality in this case. If i > j, we must have a strict inequality (r/p)i λI (xi ) < (r/p)j λI (yj )
as if this were an equality moving from r to r + would increase the left side more than the
right side. For any fixed i, there is some interval (r, si ) where this inequality remains strict.
It is therefore enough to show that we only need to consider finitely many terms. But there
are only finitely many i such that (1/p)i λI (xi ) > (r/p)j λI (yj ) and these are the only terms
of our presentation of x that could ever pass the leading term of y.
Proposition 6.3. The rank one valuations M(B^I_{L,E}) are dense in Spa(B^I_{L,E}, B^{I,◦}_{L,E}) in the constructible topology.
Proof. It is known (e.g. [13, Definition 2.4.8]) that M(B^I_{L,E}) is dense in Spa(B^I_{L,E}, B^{I,◦}_{L,E}) in the standard topology. We must show that any subset of Spa(B^I_{L,E}, B^{I,◦}_{L,E}) that is locally closed in the standard topology has nonempty intersection with M(B^I_{L,E}). As we have a basis of rational subsets, it is enough to show that if we have rational subsets Spa(C, C^+) and Spa(D, D^+) of Spa(B^I_{L,E}, B^{I,◦}_{L,E}) and we have some semivaluation v ∈ Spa(C, C^+) \ Spa(D, D^+), then there is some semivaluation v′ ∈ (Spa(C, C^+) \ Spa(D, D^+)) ∩ M(B^I_{L,E}). If v ∈ M(B^I_{L,E}) there is nothing to check, so we may assume that v has rank greater than 1. By the above classification, v must be a type-5 point. We will assume that v = βu,r+; a similar argument will take care of the other option of v = βu,r−. Then if the corresponding type-2 point βu,r is in Spa(C, C^+) \ Spa(D, D^+) we are done, so we can assume this is not the case. Then we will use 6.2 to show that there is some s > r such that for all r′ ∈ (r, s), βu,r′ is in Spa(C, C^+) \ Spa(D, D^+).
As βu,r ∉ Spa(C, C^+) \ Spa(D, D^+), we either have βu,r ∈ Spa(D, D^+) or βu,r ∉ Spa(C, C^+). If we are in the first case, then by definition there are finitely many elements (possibly not all the defining elements of the rational subset) f_i, g ∈ B_{L,E} such that βu,r(f_i) ≤ βu,r(g) ≠ 0 and βu,r+(f_i) > βu,r+(g). Then for each i, by Lemma 6.2 we have some interval (r, s_i) such that for all r′ ∈ (r, s_i), βu,r′(f_i) > βu,r′(g) ≠ 0. As we are assuming that B^{I,+}_{L,E} = B^{I,◦}_{L,E}, there is some interval (r, s′) such that βu,r′ ∈ Spa(B^I_{L,E}, B^{I,◦}_{L,E}) for all r′ ∈ (r, s′). The intersection of this finite set of intervals is nonempty, giving valuations of types 2 and 3 that are also in Spa(C, C^+) \ Spa(D, D^+) as desired. The other case follows similarly.
Corollary 6.4. A finite collection of rational subspaces of Spa(B^I_{L,E}, B^{I,◦}_{L,E}) forms a covering if and only if the intersections with M(B^I_{L,E}) do so.
Corollary 6.5. A rational subspace of Spa(B^I_{L,E}, B^{I,◦}_{L,E}) is determined by its intersection with M(B^I_{L,E}).
Proposition 6.6. If B^{I,◦}_{L,E}, the ring of power-bounded elements of B^I_{L,E}, equals B^{I,+}_{L,E}, then for any rational localization (B^I_{L,E}, B^{I,+}_{L,E}) → (C, C^+), one also has C^+ = C^◦.
Proof. By definition we have C^+ ⊂ C^◦, so we must show that C^◦ ⊂ C^+. By the description of C^+ at the start of the section, this can be done by showing that for any x ∈ C^◦ and v ∈ Spa(C, C^+), v(x) ≤ 1. We claim that it is enough to check this for v ∈ Spa(C, C^+) ∩ M(B^I_{L,E}), i.e. when v is a rank-one valuation. Assume the contrary: that C^+ ≠ C^◦ but Spa(C, C^+) ∩ M(B^I_{L,E}) = Spa(C, C^◦) ∩ M(B^I_{L,E}). Then we can choose some f ∈ C^◦ \ C^+ and add the condition that v(fg) ≤ v(g) to the defining inequalities of (B^I_{L,E}, B^{I,+}_{L,E}) → (C, C^+). This will give a new rational localization (B^I_{L,E}, B^{I,+}_{L,E}) → (C′, C′^+) such that Spa(C′, C′^+) ⊊ Spa(C, C^+), as we are simply enforcing an extra nontrivial inequality. But by our assumption, this inequality was already satisfied by all the elements of Spa(C, C^+) ∩ M(B^I_{L,E}), so we have Spa(C, C^+) ∩ M(B^I_{L,E}) = Spa(C′, C′^+) ∩ M(B^I_{L,E}). This contradicts Corollary 6.5, so we must have C^◦ = C^+ as desired. We remark that Spa(C, C^◦) need not a priori be a rational localization; this is why we had to construct Spa(C′, C′^+).
Now fix some v ∈ Spa(C, C^+) ∩ M(B^I_{L,E}). By [11, Lemma 6.3], v is the restriction of a norm of the form H(u, 0) on some perfect overfield L′ of L (compare with Lemma 3.9). The inclusion L → L′ gives an inclusion B^I_{L,E} → B^I_{L′,E} such that B^{I,◦}_{L′,E} ∩ B^I_{L,E} = B^{I,◦}_{L,E}. Let (C′, C′^+) denote the base extension of (C, C^+) along (B^I_{L,E}, B^{I,◦}_{L,E}) → (B^I_{L′,E}, B^{I,◦}_{L′,E}), so
C′ = B^I_{L′,E}{T_1/ρ_1, . . . , T_n/ρ_n}/(gT_1 − f_1, . . . , gT_n − f_n)
where f_1, . . . , f_n, g are simply the corresponding elements of B^I_{L,E} viewed as elements of B^I_{L′,E}. As C′^◦ ∩ C = C^◦ and v′ ∈ Spa(C′, C′^+) restricts to v on C, checking that v′(x) ≤ 1 for all x ∈ C′^◦ will give our result.
By [11, Remark 5.14], the norm v′ on B^I_{L′,E} is just the quotient norm on B^I_{L′,E}/(̟ − [u]) = H(v′) (compare with 3.8). So extending v′ to C′, we get the quotient norm on C′/(̟ − [u]). By [11, Lemma 7.3], the map B^I_{L′,E}/(̟ − [u]) → C′/(̟ − [u]) is an isomorphism, so we have reduced to looking at a multiplicative norm on a field. In this case, it is clear that power bounded elements have norm at most 1, so we are done.
7. Étale Morphisms
Étale morphisms of adic spaces were first defined and studied by Huber in [9]. In [11,
Section 8], Kedlaya gives some results on étale morphisms of extended Robba rings. In
this section, we recall the setup of Huber and some of Kedlaya’s results. We then extend
the results of the previous sections to étale morphisms.
Hypothesis 7.1. Let (B^I_{L,E}, B^{I,+}_{L,E}) → (C, C^+) be a morphism of adic Banach rings which is étale in the sense of Huber [9, Definition 1.6.5]. In particular, C is a quotient of B^I_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n} for some n, so it is strongly noetherian by [11, Theorem 4.10]. By definition, we get induced morphisms Spa(C, C^+) → Spa(B^I_{L,E}, B^{I,+}_{L,E}) and M(C) → M(B^I_{L,E}).
Lemma 7.2. There exist finitely many rational localizations {(C, C^+) → (D_i, D_i^+)}_i such that ∪_i Spa(D_i, D_i^+) = Spa(C, C^+) and for each i, (B^I_{L,E}, B^{I,+}_{L,E}) → (D_i, D_i^+) factors as a connected rational localization (B^I_{L,E}, B^{I,+}_{L,E}) → (C_i, C_i^+) followed by a finite étale morphism (C_i, C_i^+) → (D_i, D_i^+) with D_i also connected.
Proof. [9, Lemma 2.2.8]
The above construction commutes with base extension, so when we extend from L to L′
as in Lemma 3.9 the above result is retained.
Proposition 7.3. The rings Ci are all principal ideal domains and the rings Di are all
Dedekind domains.
Proof. See [11, Theorem 7.11(c)] and [11, Theorem 8.3(b)].
We now start working towards a classification of the points of Spa(C, C + ). We have a
map Spa(C, C^+) → Spa(B^I_{L,E}, B^{I,+}_{L,E}) and a good understanding of the points of Spa(B^I_{L,E}, B^{I,+}_{L,E}) from Proposition 5.4, so given a point v ∈ Spa(B^I_{L,E}, B^{I,+}_{L,E}) we will look at the preimage
{wj }j∈J . We will show that this is a finite set of valuations with the same rank and radius
as v.
Proposition 7.4. Given a valuation w ∈ Spa(C, C^+) mapping to v ∈ Spa(B^I_{L,E}, B^{I,+}_{L,E}), the rank of w is the same as the rank of v.
Proof. By Lemma 7.2 we can assume that B^I_{L,E} → C is a finite map of rings, so C is a finitely generated B^I_{L,E}-module. Let the value group of v be H and the value group of w
I , B I,+ ), the preimage {γ }
Proposition 7.5. Given a valuation β ∈ Spa(BL,E
j j∈J is a finite
L,E
set.
Proof. By Lemma 7.2, we can choose a neighborhood of β so that we are working with a
finite étale morphism. This now follows from [9, Lemma 1.5.2c].
We can now extend the definition of radius to valuations in M(C). Retaining the
notation of the proof of 7.5, the radius of β is the maximal r ∈ [0, 1] such that the restriction
I
of βu,r to BL,E
is β. It is therefore natural to define the radius rj of γj to be the maximal
rj ∈ [0, 1] such that some element of the preimage of βu,r restricts to γj on C.
I ) ⊂ Spa(B I , B I,+ ) with radius r and γ in the
Proposition 7.6. Given β ∈ M(BL,E
j
L,E
L,E
preimage of β with radius rj , we have r = rj .
I
Proof. If s > r, βu,s doesn’t restrict to β on BL,E
so the preimage of βu,s won’t contain
γj . We therefore have r ≥ rj . To show that r ≤ rj , we note that extending the radius is
continuous and that the preimage of βu,s in Spa(C ′ , C ′+ ) maps to a subset of the finite set
{γj } ⊂ Spa(C, C + ) for all s ∈ [0, r]. So by continuity, every γj in the preimage of β is the
restriction of some element of the preimage of βu,r as desired.
We finally extend to higher rank valuations.
Proposition 7.7. Let βu,r± be a type 5 valuation as in Definition 4.1. Then the preimage
of βu,r± is in bijection with the preimage of βu,r .
Proof. This follows from continuity and the fact that the size of the fibers is locally constant.
I , B I,◦ ), the rank one valuations M(C) are
Proposition 7.8. If (C, C + ) is étale over (BL,E
L,E
dense in Spa(C, C + ) in the constructible topology.
I , B I,◦ ) is locally the composition of open immerProof. The map Spa(C, C + ) → Spa(BL,E
L,E
sions and finite étale maps. Étale maps are smooth and therefore open by [9, Proposition
1.7.8], and finite maps are closed by [9, Lemma 1.4.5], so finite étale maps send locally
closed subsets to locally closed subsets. The same is certainly true of open immersions, so
any subset U in Spa(C, C + ) that is locally closed under the standard topology is mapped
I , B I,◦ ). By Proposition 6.3, there is some rank
to a locally closed subset V in Spa(BL,E
L,E
one valuation v ∈ V , and by Proposition 7.4 the preimage of v is made up of rank one
valuations.
I , B I,◦ ), a finite collection of rational subCorollary 7.9. If (C, C + ) is étale over (BL,E
L,E
spaces of Spa(C, C + ) forms a covering if and only if the intersections with M(C) do so.
I , B I,◦ ), a rational subspace of Spa(C, C + )
Corollary 7.10. If (C, C + ) is étale over (BL,E
L,E
is determined by its intersection with M(C).
Proposition 7.11. If C + equals C ◦ , then for any rational localization (C, C + ) → (D, D + ),
one also has D + = D ◦ .
Proof. The proof of 6.6 carries over. By 7.8, we can again reduce to checking specific
inequalities on rank one valuations. We can therefore use Lemma 7.2 and Proposition 7.3
to assume that C is a Dedekind domain. By extending L to some L′ , we reduce to checking
norm with nonempty kernel. As C is a Dedekind domain, the kernel is a maximal ideal.
We are therefore again dealing with power bounded elements in a field with a multiplicative
norm, where the desired inequalities are clear.
8. Consequences of the strong noetherian property
In [11, Theorem 3.2], Kedlaya gives a proof that the ring ArL,E is strongly noetherian, i.e. that ArL,E {T1 /ρ1 , . . . , Tn /ρn } is noetherian for any nonnegative integer n and
ρ1 , . . . , ρn > 0. This is done very explicitly, using the theory of Gröbner bases to construct generators for a given ideal. In this section, we adapt this proof to give a version
of the Nullstellensatz for the rings A{T1 /ρ1 , . . . , Tn /ρn }. We then use the Nullstellensatz
to prove that these rings are regular. In the next section, we will use regularity to prove
that the rings A{T_1/ρ_1, . . . , T_n/ρ_n} are excellent. We also show that A^r_{L,E} is strictly noetherian. As in [11], these arguments can be generalized to the rings B^I_{L,E} and rings C coming from étale extensions of B^I_{L,E} as in Definition 7.1.
Nullstellensatz condition if every maximal ideal of R restricts to a maximal ideal of A.
Remark 8.2. We use this name because Munshi proved and then used this property for
(F [x1 , . . . , xn ], F [x1 ]) for F a field to give a proof of Hilbert’s Nullstellensatz, his proof is
the subject of [19].
Theorem 8.3. Let A be a nonarchimedean Banach ring with a multiplicative norm | · |.
Assume further that A is a strongly noetherian Euclidean domain; in particular, this holds
for A = ArL,E . Let n be a positive integer and ρ = (ρ1 , . . . , ρn ) an n-tuple of positive real
numbers such that the value group of A{T1 /ρ1 , . . . , Tn /ρn } \ {0} has finite index over the
value group of A× . Then A{T1 /ρ1 , . . . , Tn /ρn } satisfies the Nullstellensatz condition with
respect to A.
Proof. We begin with two reductions. For each Ti , there is some positive integer ei such
that |T_i^{e_i}| ∈ |A^×|, so we can write A{T_1/ρ_1, . . . , T_n/ρ_n} as an integral extension of the ring A^r_{L,E}{(T_1/ρ_1)^{e_1}, . . . , (T_n/ρ_n)^{e_n}}. This ring has the same value group as A^×, and by going up, maximal ideals of A{T_1/ρ_1, . . . , T_n/ρ_n} restrict to maximal ideals of this ring. We can therefore assume that the value groups of A{T_1/ρ_1, . . . , T_n/ρ_n} and A^× are equal.
We also note that it is enough to show that m ∩ A ≠ 0 whenever A is not a field. Given x_1 ∈ m ∩ A with x_1 ≠ 0, the ring A{T_1/ρ_1, . . . , T_n/ρ_n}/(x_1) = A/(x_1){T_1/ρ_1, . . . , T_n/ρ_n}
is strongly noetherian and m/(x1 ) is a maximal ideal. We can therefore find x2 ∈ m/(x1 ) ∩
A/(x1 ) and iterate this process until A/(x1 , . . . , xn ) is a field. This must eventually happen
as A is noetherian, then m∩A = (x1 , . . . , xn ) is a maximal ideal of A as desired. We remark
that it isn’t clear that A/(x1 , . . . , xn ) must be a nonarchimedean field.
To show that m ∩ A 6= 0, we will be combining the proofs of [11, Theorem 3.2] (via
[11, Lemma 3.8]) and Lemma [12, 3.8]. The first proof deals specifically with the rings
A{T1 /ρ1 , . . . , Tn /ρn } while the second proves this version of the Nullstellensatz for a similar
ring.
The proof will use ideas from the theory of Gröbner bases and an idea of Munshi [19].
We therefore begin by setting up the combinatorial construction.
Hypothesis 8.4. Let I = (i1 , . . . , in ) and J = (j1 , . . . , jn ) denote elements of the additive
monoid Zn≥0 of n-tuples of nonnegative integers.
Definition 8.5. We equip Z^n_{≥0} with the componentwise partial order ≤, where I ≤ J if and only if i_k ≤ j_k for k = 1, . . . , n. This is a well-quasi-ordering: any infinite sequence contains an infinite nondecreasing subsequence.
We also equip Z^n_{≥0} with the graded lexicographic total order ⪯, for which I ≺ J if either i_1 + · · · + i_n < j_1 + · · · + j_n, or i_1 + · · · + i_n = j_1 + · · · + j_n and there exists k ∈ {1, . . . , n} such that i_ℓ = j_ℓ for ℓ < k and i_k < j_k. Since ⪯ is a refinement of ≤, it is a well-ordering.
The key properties for the proof are that ⪯ is a well-ordering refining ≤ and that for any I, there are only finitely many J with J ⪯ I.
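The following small sketch (ours, for illustration only; the function names are ad hoc) spells out the two orders and the finiteness property concretely.

from itertools import product

def leq_componentwise(I, J):
    # the partial order <= of Definition 8.5
    return all(i <= j for i, j in zip(I, J))

def graded_lex_key(I):
    # compare total degree first, then lexicographically
    return (sum(I), I)

def graded_lex_leq(I, J):
    return graded_lex_key(I) <= graded_lex_key(J)

I = (1, 2)
# finiteness: any J with J preceq I has coordinate sum at most sum(I),
# so enumerating coordinates up to sum(I) already catches all of them
below = [J for J in product(range(sum(I) + 1), repeat=len(I)) if graded_lex_leq(J, I)]
print(len(below), "tuples J with J preceq I for I =", I)
# refinement: J <= I componentwise implies J preceq I
assert all(graded_lex_leq(J, I)
           for J in product(range(sum(I) + 1), repeat=len(I))
           if leq_componentwise(J, I))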
Definition 8.6. For x = Σ_I x_I T^I ∈ A{T_1/ρ_1, . . . , T_n/ρ_n}, define the leading index of x to be the index I which is maximal under ⪯ for the property that |x_I T^I| = |x|, and define the leading coefficient of x to be the corresponding coefficient x_I.
We proceed by contradiction: suppose that m is a maximal ideal of A{T1 /ρ1 , . . . , Tn /ρn }
with m ∩ A = 0. As we assumed that A is strongly noetherian, A{T1 /ρ1 , . . . , Tn /ρn } is
noetherian so m is closed by [1, Proposition 3.7.2/2].
Define the projection map ψ forgetting the constant term of x ∈ A{T_1/ρ_1, . . . , T_n/ρ_n}; it
is a bounded surjective morphism of Banach spaces with kernel A. Then m + A is a closed
subspace of A{T1 /ρ1 , . . . , Tn /ρn }, and V = ψ(m+A) is closed by the open mapping theorem
[1, § 2.8.1]. So ψ induces a bounded bijective map of Banach spaces m → V ; by the open
mapping theorem ψ −1 is also bounded. Define the nonconstant degree deg′ (x) = deg(ψ(x))
to be the leading index of ψ(x). Define the leading nonconstant coefficient of x to be xdeg′ (x) .
We now follow the proof of [11, 3.2], but using deg′ instead of deg in [11, 3.7] and beyond, and using |ψ(·)| instead of |·|_ρ. We obtain a finite set of generators m_I = Σ_J m_{I,J} T^J for m such that the leading index of ψ(m_I) is I. The key fact here is that for any x ∈ m with leading index J, there is some m_I with I ⪯ J.
Scale the m_I by elements of A so that they all have norm 1. Let a_I = m_{I,I} be the leading coefficient of m_I, so now |a_I| = 1. Define ǫ < 1 as in [11, 3.8] to be the largest possible norm of some coefficient m_{I,J} with I ≺ J. As the norm on A is multiplicative, the ring o_A/I_A has no nonzero nilpotents, so the nilradical is {0}. We can therefore choose a nonzero prime ideal p of o_A/I_A not containing ∏_{I∈S} a_I. Choose any ̟ ∈ o_A reducing to a nonzero element of p, so |̟| = 1. As m ∩ A = 0, we have ̟ ∉ m, so by maximality we can find x_0 ∈ A{T_1/ρ_1, . . . , T_n/ρ_n} such that 1 + ̟x_0 ∈ m.
Lemma 8.7. Let S be the multiplicative system generated by the aI . Given any c ∈
S and x ∈ A{T1 /ρ1 , . . . , Tn /ρn } with c + ̟x ∈ m, there exists some c′ ∈ S, x′ ∈
A{T1 /ρ1 , . . . , Tn /ρn } with c′ + ̟x′ ∈ m and |ψ(x′ )| ≤ ǫ|ψ(x)|.
Proof. We will construct c′ and x′ iteratively. Given any cℓ ∈ S, xℓ ∈ A{T1 /ρ1 , . . . , Tn /ρn }
with cℓ + ̟xℓ ∈ m, we will construct cℓ+1 ∈ S, xℓ+1 ∈ A{T1 /ρ1 , . . . , Tn /ρn } with cℓ+1 +
̟xℓ+1 ∈ m and |ψ(xℓ+1 )| ≤ |ψ(xℓ )|. We will then use this construction to get c′ and x′ .
Choose λ ∈ A^× so that |ψ(λ(c_ℓ + ̟x_ℓ))| = 1, and let e_{I_ℓ} T^{I_ℓ} be the leading term of ψ(λx_ℓ). Note that the leading index I_ℓ is the same as the leading index of ψ(λ(c_ℓ + ̟x_ℓ)), as ψ causes the contribution of c_ℓ to be forgotten and ̟ is a nonzero element of A, so it won't affect the leading index. Then by the construction of the m_I there is some m_ℓ with leading index J_ℓ ≤ I_ℓ; let a_ℓ be the leading coefficient of m_ℓ. Define
y_ℓ = a_ℓ λ(c_ℓ + ̟x_ℓ) − ̟ e_{I_ℓ} m_ℓ T^{I_ℓ − J_ℓ} ∈ m.
This has been chosen so that the coefficient of T^{I_ℓ} in y_ℓ is 0 and |ψ(y_ℓ)| ≤ 1. Let
x_{ℓ+1} = (λ^{-1} y_ℓ − a_ℓ c_ℓ)/̟ = a_ℓ x_ℓ − λ^{-1} e_{I_ℓ} m_ℓ T^{I_ℓ − J_ℓ},   c_{ℓ+1} = a_ℓ c_ℓ.
Clearly c_{ℓ+1} ∈ S, c_{ℓ+1} + ̟x_{ℓ+1} = λ^{-1} y_ℓ ∈ m, and |ψ(x_{ℓ+1})| ≤ |ψ(x_ℓ)|.
As ⪯ is a well-ordering, we have a bijection between indices and positive integers; call the m-th index I_m. As ψ(x_ℓ) is a convergent power series, there are only finitely many terms of ψ(x_ℓ) with coefficient norm greater than ǫ|ψ(x)|. We can therefore associate a unique integer n_ℓ to each ψ(x_ℓ) such that the m-th digit in the binary representation of n_ℓ is 1 exactly when the coefficient of T^{I_m} in ψ(x_ℓ) has norm greater than ǫ|ψ(x)|. We claim that n_ℓ > n_{ℓ+1} whenever n_ℓ > 0, so after finitely many steps we must have n_k = 0. By definition, this means that after finitely many steps every term of ψ(x_k) will have coefficient with norm at most ǫ|ψ(x)|, so |ψ(x_k)| ≤ ǫ|ψ(x)| as desired.
By the construction of ǫ, adding the multiple of mℓ required to go from xℓ to xℓ+1
won’t introduce any coefficients with norm greater than ǫ|ψ(x)| and index J ≻ Iℓ . By the
construction of xℓ+1 , the coefficient of T Iℓ in xℓ+1 is 0. So when we move from nℓ to nℓ+1 ,
the digit corresponding to Iℓ is changed from 1 to 0 and no higher digits are changed. So
nℓ > nℓ+1 as desired.
Starting with c0 = 1, we can iterate this process to get sequences {cℓ } ⊂ S, {xℓ } ⊂
A{T1 , . . . , Tn } so that for all ℓ, yℓ := cℓ + ̟xℓ ∈ m and |ψ(yℓ )| → 0. As the inverse of ψ is
bounded, we must have |yℓ | → 0. This implies that |cℓ + ̟xℓ,0 | → 0 as this is the constant
term of yℓ , which implies that for ℓ large cℓ − ̟xℓ,0 ∈ IA . This is a contradiction as we
chose ̟ so that cℓ is never divisible by ̟ in oA /IA .
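The termination argument via the integers n_ℓ in the proof of Lemma 8.7 is elementary but easy to miss; the following toy sketch (ours, not from the source) shows why clearing the digit of the leading index while touching only strictly lower digits forces a strict decrease.

def one_step(n, leading_bit, new_lower_bits=0):
    # the digit of the leading index is cleared, digits below it may be rewritten
    # arbitrarily, and digits above it are untouched, so the result is < n
    assert (n >> leading_bit) & 1, "the leading index must currently carry a large coefficient"
    high = n & ~((1 << (leading_bit + 1)) - 1)       # digits above the leading index, unchanged
    low = new_lower_bits & ((1 << leading_bit) - 1)  # arbitrary new lower digits
    return high | low

n = 0b101101   # "large" coefficients at the 0th, 2nd, 3rd and 5th indices
for noise in (0b000, 0b111, 0b101):
    assert one_step(n, leading_bit=3, new_lower_bits=noise) < n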
Remark 8.8. We note that in our proof, it was essential that we could scale elements
of A{T1 /ρ1 , . . . , Tn /ρn } by elements of A× to get elements of norm 1. The result isn’t
generally true if we allow infinite extensions in the value group. For example, if we let
A = Q_p{T_1/ρ} where ρ is irrational, the ring A{T_2/ρ^{-1}} has (T_1 T_2 − 1) as a maximal ideal,
but A ∩ (T1 T2 − 1) = {0}. This is analogous to the more standard example of the maximal
ideal (px − 1) of Zp [x].
Remark 8.9. This result can be extended to some rings of the form B^I_{L,E}{T_1/ρ'_1, . . . , T_m/ρ'_m} by using [11, Lemma 4.9] to rewrite them as quotients of some A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n}.
As mentioned in Remark 8.8, this argument will not hold for all of the B^I_{L,E}, but it will work for intervals I = [s, r] with s ∈ Q. If the result does hold for B^I_{L,E}, then it will also hold for any étale extension C.
Corollary 8.10. The rings A^r_{L,E}{T_1, . . . , T_n} and B^I_{L,E}{T_1, . . . , T_n} are regular, for I as in Remark 8.9.
Proof. We just show this for B^I_{L,E}{T_1, . . . , T_n}; the other case follows similarly. We must show that for any maximal ideal m ⊂ B^I_{L,E}{T_1, . . . , T_n}, the localization at m is a regular local ring. By Theorem 8.3, m ∩ B^I_{L,E} = (m) for some maximal ideal (m) of the principal ideal domain B^I_{L,E}. We claim that B^I_{L,E}/(m) is a nonarchimedean field; we must check that M(B^I_{L,E}/(m)) is a single point. By [11, Lemma 7.10], M(B^I_{L,E}/(m)) is a finite discrete topological space. By [13, Proposition 2.6.4], any disconnection of M(B^I_{L,E}/(m)) would induce a disconnection of B^I_{L,E}/(m). As B^I_{L,E}/(m) is a field, this is impossible, so M(B^I_{L,E}/(m)) must be a point as desired. The ring (B^I_{L,E}/(m)){T_1, . . . , T_n} is therefore a classical affinoid algebra, so it is regular [15]. The result now follows from a general commutative algebra statement: given a local ring (R, m) and an element m ∈ m \ m^2, if R/(m) is regular then so is R. This is clear since, in passing from R to R/(m), the dimension of the local ring is reduced by 1 by Krull's principal ideal theorem and the k-dimension of m/m^2 is reduced by 1 by our assumption on m.
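As a concrete instance of the general statement invoked at the end of the proof (this illustration is ours): take R = k[[x, y]] with maximal ideal m = (x, y) and the element x ∈ m \ m^2; then R/(x) ≅ k[[y]] is regular and
\[
\dim k[[x,y]] = \dim k[[y]] + 1 = 2, \qquad \dim_k \mathfrak m/\mathfrak m^2 = \dim_k (y)/(y)^2 + 1 = 2,
\]
so the two reductions by 1 match exactly as in the argument above.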
Using similar ideas, we show that ArL,E is strictly noetherian. We first recall the necessary
definitions.
Definition 8.11. Let (A, A+ ) be a Huber pair with A Tate. An A+ -module N is almost
finitely generated if for every topologically nilpotent unit u in A, there is a finitely generated
A+ -submodule N ′ of N such that uN is contained in N ′ .
Definition 8.12. A Huber pair (A, A+ ) is strictly noetherian if for every finite A+ -module
M , every A+ -submodule N of M is almost finitely generated.
Remark 8.13. We note that if (A, A+ ) is strictly noetherian, A must be noetherian.
Given an ideal H of A and topologically nilpotent unit u, u(H ∩ A+ ) is contained in some
finitely generated ideal ⟨x_1, . . . , x_n⟩ of A^+. For any h ∈ H, multiplying by a sufficiently large power of u will give u^n h ∈ u(H ∩ A^+), so we have u^n h = Σ_i a_i x_i with the a_i in A^+, and so h = Σ_i u^{-n} a_i x_i, and so the x_i generate H.
Remark 8.14. Kiehl was the first to consider the strict noetherian property, showing that
affinoid algebras are strictly noetherian in [16, Satz 5.1].
Proposition 8.15. For any nonnegative integer n and ρ1 , . . . , ρn ∈ R>0 , the pair (R, R◦ ) :=
(ArL,E {T1 /ρ1 , . . . , Tn /ρn }, ArL,E {T1 /ρ1 , . . . , Tn /ρn }◦ ) is strictly noetherian.
Proof. Quotients and direct sums preserve the almost finitely generated property, so it is
enough to check that every ideal H of R◦ is almost finitely generated. Fix any ideal H ⊂ R◦
and topologically nilpotent unit u; we will construct a finitely generated ideal H′ ⊂ H such
that uH ⊂ H′. Exactly following the construction of [11, Theorem 3.2] gives a finite subset {x_I} of H such that for all y ∈ H, there exist a_I ∈ R such that |a_I|_ρ |x_I|_ρ ≤ |y|_ρ for all I and y = Σ_I a_I x_I. Letting δ := min{|x_I|_ρ}, we see that if |y|_ρ ≤ δ we have a_I ∈ R° for all I. The set {x_I} therefore generates all of the elements of uH with norm at most δ.
Let c = |u|_ρ and let m = ⌈log_c δ⌉. As u is topologically nilpotent we have c < 1. For each index I and k ∈ {1, . . . , m}, let d_{I,k} be the smallest possible degree of the leading coefficient of an element of H with leading index I and weighted Gauss norm c^k. Following [11, Definition 3.7], for each nonnegative integer d and k ∈ {1, . . . , m}, define S_{d,k} to be the (finite) set of I which are minimal with respect to ≤ for the property that d_{I,k} = d, and let S_k be the union of the S_{d,k}. For each I ∈ S_k, choose x_{I,k} ∈ H \ {0} with leading index I, weighted Gauss norm c^k, and leading coefficient c_{I,k} of degree d_{I,k}.
We claim that the finite set {x_I} ∪ {x_{I,1} : I ∈ S_1} ∪ · · · ∪ {x_{I,m} : I ∈ S_m} generates every element y ∈ uH. In the original proof, the key property of the chosen generators is that for any y = Σ_J y_J T^J in the ideal with leading term y_{J′} T^{J′}, there is some x_I such that I ≤ J′ and deg(c_I) ≤ deg(y_{J′}). This allows for a series of approximations that can be shown to converge using the fact that A^r_{L,E} is a Euclidean domain.
This argument continues to hold in our case, but we must also use generators with norm at least that of y, so that at each step of the approximation we are multiplying x_I by an element of R°. Let j = ⌈log_c(|y|_ρ)⌉ and let y_{J′} T^{J′} be the leading term of y. If j > m, then |y|_ρ ≤ δ, so y can be generated by elements of {x_I}. Otherwise, we have u^{-1} y ∈ H and |u^{-1} y|_ρ > c^k, so there is some unit v ∈ A^r_{L,E} ∩ R° with |v u^{-1} y|_ρ = c^k. Multiplication in R by units will not change leading indices or leading degrees, so by construction, we can find some x_{I,j} with I ≤ J′ and deg(c_{I,j}) ≤ deg(y_{J′}). As |x_{I,j}|_ρ = c^k ≥ |y|_ρ, this is the desired element; the rest of the proof is identical to [11].
Corollary 8.16. The rings B^I_{L,E} and C are strictly noetherian.
Proof. By [11, Lemma 4.9] and Hypothesis 7.1, these rings are quotients of some ring of the form A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n}.
9. Excellence
Finally, we show that the rings A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n}, B^I_{L,E}, and C are excellent when the ρ_i are chosen as in Theorem 8.3, adapting the argument of [20, Theorem 101]. The idea is that the n partial derivatives ∂/∂T_i of A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n} give us derivations which satisfy a Jacobian criterion that implies excellence.
Remark 9.1. The methods of this section only work for rings of characteristic 0, but if E has characteristic p then our rings will also have characteristic p. In this case, A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n} is a ring of convergent power series over L. As L is perfect of characteristic p, A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n} has a finite p-basis given by ̟, T_1, . . . , T_n. It is therefore excellent by a theorem of Kunz [17, Theorem 2.5]. Excellence of the other rings of interest follows as in the characteristic 0 case; see Corollary 9.15.
We begin the mixed characteristic case by recalling some results from Matsumura [20,
Sections 32, 40].
Definition 9.2. A noetherian ring R is J-0 if Reg(Spec(R)) contains a non-empty open
subset of Spec(R), and J-1 if Reg(Spec(R)) is open in Spec(R).
Lemma 9.3. For a noetherian ring R, the following conditions are equivalent:
(1) Any finitely generated R-algebra S is J-1;
(2) Any finite R-algebra S is J-1;
(3) For any p ∈ Spec(R), and for any finite radical extension K ′ of the residue field
κ(p), there exists a finite R-algebra R′ satisfying R/p ⊆ R′ ⊆ K ′ which is J-0 and
whose quotient field is K ′ .
If these conditions are satisfied, we say that R is J-2.
Proof. [20, Theorem 73]
Corollary 9.4. Given a noetherian ring R containing Q, if R/p is J-1 for all p ∈ Spec(R)
then R is J-2.
Proof. By [20, Chapter 32, Lemma 1], the first condition in Lemma 9.3 is equivalent to
the following condition: Let S be a domain which is finitely generated over R/p for some
p ∈ Spec(R), then S is J-0. Let k and k′ be the quotient fields of R and S respectively. As
Q ⊂ R, k′ is a separable extension of k by [20, Paragraph 23.E]. This is now Case 1 of the
proof of [20, Theorem 73], so the result follows.
Our goal is to understand when quotients of a regular local ring are again regular. The
following lemmas explain how to use derivations to do this. We first set some notation,
following [20, Section 40].
Given a ring A, elements x1 , . . . , xr ∈ A, and derivations D1 , . . . , Ds ∈ Der(A), we write
J(x1 , . . . , xr ; D1 , . . . , Ds ) for the Jacobian matrix (Di xj ). Given a prime ideal p ⊂ A, we
write J(x1 , . . . , xr ; D1 , . . . , Ds )(p) for the reduction of the Jacobian mod p. If p contains
x1 , . . . , xr , the rank of the Jacobian mod p depends only on the ideal I generated by the
xi , so we denote it rank J(I; D1 , . . . , Ds )(p). Given a set ∆ of derivations of A, we define
rank J(I; ∆)(p) to be the supremum of rank J(I; D1 , . . . , Ds )(p) over all finite subsets
{D_1, . . . , D_s} ⊂ ∆.
Lemma 9.5. Let (R, m) be a regular local ring, let p be a prime ideal of height r and ∆
be a subset of Der(R). Then:
(1) rank J(p; ∆)(m) ≤ rank J(p; ∆)(p) ≤ r,
(2) if rank J(f1 , . . . , fr ; D1 , . . . , Dr )(m) = r and f1 , . . . , fr ∈ p, then p = (f1 , . . . , fr )
and R/p is regular.
Proof. [20, Theorem 94]
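As a quick sanity check (our own example, not taken from [20]), let R = k[x, y]_{(x,y)} over a field k of characteristic 0, with maximal ideal m = (x, y). Then
\[
p=(y-x^{2}),\quad r=1,\quad D_{1}=\partial/\partial y:\qquad J(y-x^{2};D_{1})=(1),\qquad \operatorname{rank} J(y-x^{2};D_{1})(\mathfrak m)=1=r,
\]
so Lemma 9.5(2) recovers that R/(y − x^2) ≅ k[x]_{(x)} is regular. By contrast, for the cusp p = (y^2 − x^3) every derivation D = a ∂/∂x + b ∂/∂y satisfies D(y^2 − x^3) = −3a x^2 + 2b y ∈ m, so rank J(p; Der(R))(m) = 0 < 1 and the criterion gives no conclusion, consistent with R/p being singular at the origin.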
Lemma 9.6. Let R, p, and ∆ be as in the preceding lemma. Then the following two
conditions are equivalent:
(1) rank J(p; ∆)(p) = ht p,
(2) let q be a prime ideal contained in p, then Rp /qRp is regular if and only if rank J(q; ∆)(p) =
ht q.
Proof. [20, Theorem 95]
Definition 9.7. The weak Jacobian condition holds in a regular ring R if for every p in
Spec(R), rank J(p; Der(R))(p) = ht p. In this case we say that (WJ) holds in R.
Mizutani and Nomura showed that rings satisfying (WJ) and containing Q are excellent:
their proof is [20, Theorem 101]. A key step in the proof is the following proposition; we go through it in detail as we will adapt it when showing that A{T_1, . . . , T_n} is excellent.
Proposition 9.8. Every regular ring R satisfying (WJ) is J-2.
Proof. This is roughly the argument of [20, Paragraph 40.D]. By Corollary 9.4, it is enough
to show that for every q ∈ Spec(R), the set Reg(R/q) is open in Spec(R/q). Fix any prime
p ⊇ q such that the image P of p in Spec(R/q) is regular. We will construct an open set
around P contained in Reg(R/q).
As (WJ) holds in R, we have rank J(p; Der(R))(p) = rank J(p; Der(Rp )) = ht(p). By
Lemma 9.6, this implies that rank J(q; Der(Rp ))(p) = ht(q) as Rp /qRp is regular. Let
r = ht(q), then we have f1 , . . . , fr ∈ q and D1 , . . . , Dr ∈ Der(R) such that det(Di fj ) 6∈ (p).
By Lemma 9.5, this implies that qRp = (f1 , . . . , fr )Rp . We therefore have some g ∈ R − p
such that qRg = (f1 , . . . , fr )Rg . Let h = det(Di fj ). By definition g and h are not in p or
q, so the reduction gh is nonzero in R/q. For any prime p′ reducing to some P ′ ∈ D(gh) ⊂
Spec(R/q), we have rank J(f1 , . . . , fr ; D1 , . . . , Dr )(p′ ) = r as h 6∈ P ′ , so by Lemma 9.5
ht(qRp′ ) ≥ r. As g 6∈ P ′ , f1 , . . . , fr generate qRp′ so ht(qRp′ ) ≤ r. So ht(qRp′ ) = r and we
can apply Lemma 9.5 to see that Rp′ /qRp′ is regular. So Reg(R/qR) contains the open set
D(gh) containing P , so it is open in Spec(R/qR) as desired.
We can now state the hypotheses we need to prove excellence.
Hypothesis 9.9. Let A be a regular integral domain containing Q, and let R be a ring such that A[T_1, . . . , T_n] ⊂ R ⊂ A[[T_1, . . . , T_n]] and such that R is stable under the n derivations ∂/∂T_i. Assume that (R, A) satisfies the Nullstellensatz condition of Definition 8.1 and that R ⊗_A Frac(A) is weakly Jacobian as in Definition 9.7.
It is clear that for (WJ) to hold in R we must have dimR (Der(R)) ≥ dim(R). We
have n natural derivations to work with in our setup, so we’d prefer to work with rings
of dimension at most n. We therefore tensor with Frac(A) to reduce the dimension to
something that (WJ) can apply to. We now check that the rings of interest satisfy our
hypothesis.
Proposition 9.10. Let A = A^r_{L,E} and R = A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n}; then Hypothesis 9.9 is satisfied.
Proof. Everything but the weak Jacobian condition has already been checked or is clear. Choose p ∈ Spec(R ⊗_A Frac(A)) and let the height of p be h. Let p′ ∈ Spec(R) be the
contraction of p; it also has height h. Let q ∈ Spec(R) be a maximal ideal containing p′, and let Q = R_q/p′; this is a local ring of dimension n + 1 − h.
Let ∆ be the set of derivations of R induced by derivations in Der(R ⊗_A K), where K = Frac(A); then ∆ is generated by the n elements ∂/∂T_i. The derivations in ∆ are exactly the A-derivations of R.
We have rank J(p; Der(R ⊗A K)) = rank J(p′ ; ∆) ≤ h by Lemma 9.5; we must show that
we have equality. Following the proof of [20, Theorem 100], we see that rank DerA (Q) =
n − rank J(p′ ; ∆), so it is enough to show that rank DerA (Q) = n − h.
To do this, we largely follow [20, Theorem 98]. By Theorem 8.3, we have q ∩ A = (x_0) for a principal maximal ideal (x_0). We extend x_0 to a system of parameters x_0, . . . , x_r of Q; here r = n − h. By the Cohen structure theorem, we have that the completion Q̂ is an integral extension of F[[x_0, . . . , x_r]], where F = R/q. Writing F = (R/(x_0))/(q/(x_0)), we see that it is the quotient of an affinoid algebra over the field A/(x_0), so it is an integral extension of A/(x_0).
Take any A-linear derivation D of Q vanishing on x_1, . . . , x_r and extend it to Q̂. Then D vanishes on x_0 as x_0 ∈ A, and it vanishes on F as it vanishes on A/(x_0) and F is an integral extension of A/(x_0). So D vanishes on all of F[[x_0, . . . , x_r]], so it must also vanish on Q̂ and therefore on Q. Indeed, given any y ∈ Q, there is some integral relation f(T) for y over F[[x_0, . . . , x_r]] of minimal degree. Then 0 = D(f(y)) = f′(y)D(y) and f′(y) is nonzero as the characteristic is 0, so D(y) = 0. So D is determined by the tuple (D(x_1), . . . , D(x_r)), and so rank Der_A(Q) ≤ n − h. As rank J(p′; ∆) ≤ h, we must have equalities in both equations as desired.
Remark 9.11. Keeping notation as in Proposition 9.10, we note we could apply [20,
Theorem 100] to get (WJ) for R ⊗A Frac(A) if we knew that every maximal ideal m ⊂
R ⊗A Frac(A) has residue field algebraic over Frac(A). While this seems plausible, we
were unable to prove it directly. It is tempting to try to prove a version of Weierstrass
preparation and division for R ⊗A Frac(A), but following the standard proof for affinoid
algebras over a field doesn’t quite work.
Proposition 9.12. Any ring R satisfying Hypothesis 9.9 is J-2.
Proof. We proceed by induction on the dimension of A. As A is an integral domain, the
base case is when A is a field. By Proposition 9.10, R is (WJ) in this case, so by Proposition
9.8 it is J-2. For the inductive step, by Corollary 9.4 it is again enough to show that for
every q ∈ Spec(R), the set Reg(R/q) is open in Spec(R/q).
Let q ∩ A = Q; then we can reduce to the case where Q = {0} by noting that R/q ≅ (R/Q)/(q/Q) and that (A/Q)[T_1, . . . , T_n] ⊂ R/Q ⊂ (A/Q)[[T_1, . . . , T_n]]. In this case, q is in the image of the injection Spec(R ⊗_A Frac(A)) ↪ Spec(R), so we can work with the preimage q′. By Hypothesis 9.9, (WJ) holds in R ⊗_A Frac(A), so Proposition 9.8 implies that this ring is J-2.
In particular, letting p = q′ in Proposition 9.8 we get a nonempty open set D(gh)
containing only regular primes. The construction of gh gives the required generators
f1 , . . . , fr ∈ p ⊗A Frac(A) and derivations D1 , . . . , Dr ∈ Der(R ⊗A Frac(A)) for Lemma
9.5 to apply. Finding a common denominator in Frac(A) gives an element d ∈ A such that
the entire argument works in Rd , so the set Reg((R/q)d ) is open.
To complete the proof, we just need to show that Reg(R/q) ∩ V(d) is open in Spec(R/q) ∩ V(d). This is equivalent to checking that Reg((R/(d))/(q, d)) is open in Spec((R/(d))/(q, d)). We just need to check this for each of the finitely many minimal primes. As R/(d) satisfies Hypothesis 9.9 with A replaced by A/(d), and dim(A/(d)) < dim(A), these follow from the inductive hypothesis.
Proposition 9.13. Any ring R satisfying Hypothesis 9.9 is a G-ring.
Proof. Here we are adapting [20, Theorem 101]. By [20, Theorem 75], it is enough to show
that for every maximal ideal m of R, the local ring Rm has geometrically regular formal
fibers. As Q ⊂ R, it is enough to check that the formal fibers are regular; the argument
is the same as in Corollary 9.4. Concretely, it is enough to show that for every prime p ∈ Spec(R̂_m), the local ring (R̂_m)_p/p is regular. As in Proposition 9.12, we can reduce to the case where p ∩ A = {0} by replacing A with A/(p ∩ A).
When p ∩ A = (0), we look at the image p′ of p in R ⊗A Frac(A). Here (WJ) holds
by Proposition 9.10, so we get derivations D1′ , . . . , Dr′ and f1′ , . . . , fr′ ∈ p ⊗A Frac(A) such
that rank J(f1′ , . . . , fr′ ; D1′ , . . . , Dr′ )(p′ ) = r. We can multiply by an element of A to clear
the denominators of the matrix and restrict the derivations to R. This gives f1 , . . . , fr ∈ p
with rank J(f1 , . . . , fr ; D1 , . . . , Dr )(p) = r.
We can extend the derivations to R̂_m and view the f_i as elements of p(R̂_m)_P to get rank J(f_1, . . . , f_r; D_1, . . . , D_r)(p(R̂_m)_P) = r. By [20, Theorem 19], we have ht p R̂_m = ht p = r, so we can again apply Lemma 9.5 to see that (R̂_m)_P/p is regular as desired.
Combining these gives the desired theorem.
Theorem 9.14. Any ring R satisfying Hypothesis 9.9 is excellent.
Proof. This follows from 8.10, 9.12, and 9.13.
Corollary 9.15. The rings A^r_{L,E}{T_1/ρ_1, . . . , T_n/ρ_n} and B^I_{L,E}{T_1, . . . , T_n} are excellent.
As excellence is stable under passage to finitely generated algebras, this implies that the rings C of Hypothesis 7.1 arising from étale morphisms are also excellent.
Corollary 9.16. The stalks of the adic Fargues-Fontaine curve are noetherian.
Proof. Temkin proved this for rigid analytic spaces, making essential use of the fact that
affinoid algebras are excellent. His proof works just as well for the Fargues-Fontaine curve
now that we have proven that the extended Robba rings are excellent. We give a brief
sketch of the proof; a more detailed version is given in [4, Proposition 15.1.1].
The local ring O_x at a point x of the Fargues-Fontaine curve can be written as the direct limit of a directed system (A_i) of rational domains Spa(A_i, A_i^+) containing x, where the A_i are all extended Robba rings. Let m denote the maximal ideal of the local ring O_x, and let m_i ∈ Spec(A_i) be the image of m in the map Spec(O_x) → Spec(A_i) coming from the direct limit. Letting B_i = (A_i)_{m_i}, the directed system of the B_i also has limit O_x.
Huber showed that the transition maps Ai → Aj are flat [8, II.1.iv]. The directed system
(B_i) therefore consists of local noetherian rings with flat local transition maps; it is shown
in EGA that the limit is noetherian if for sufficiently large i, we have mi Bj = mj for all
j ≥ i. As the transition maps are flat, we have
dim(Bj ) = dim(Bi ) + dim(Bj /mi Bj ) ≥ dim(Bi )
for j ≥ i. As the dimension of the Bj is bounded above, we must have some i0 such that
dim(Bi ) = dim(Bi0 ) for all i ≥ i0 and so dim(Bi /mi0 Bi ) = 0 for i ≥ i0 .
So it is enough to show that Bi /mi0 Bi must be reduced. This is the localization of a
fiber algebra of a map of excellent rings, so it is reduced by the argument in [4].
References
[1] S. Bosch, U. Güntzer, and R. Remmert, Non-Archimedean Analysis, Grundlehren der Math. Wiss. 261,
Springer-Verlag, Berlin, 1984.
[2] O. Brinon and B. Conrad, CMI summer school notes on p-adic Hodge theory, available at http://math.stanford.edu/~conrad/.
[3] B. Cais and C. Davis, Canonical Cohen Rings for Norm Fields, Int. Math. Res. Notices (2014), doi:10.1093/imrn/rnu098.
[4] B. Conrad, Lecture notes on perfectoid spaces, available at http://math.stanford.edu/~conrad/Perfseminar
[5] L. Fargues, Geometrization of the Local Langlands Correspondence: an Overview, in preparation; available at http://webusers.imj-prg.fr/~laurent.fargues/.
[6] L. Fargues and J.-M. Fontaine, Courbes et fibrés vectoriels en théorie de Hodge p-adique, in preparation; available at http://webusers.imj-prg.fr/~laurent.fargues/.
[7] K. Fujiwara and F. Kato, Foundations of Rigid Geometry I, arXiv:1308.4734v5 (2017).
[8] R. Huber, A generalization of formal schemes and rigid analytic varieties, Math. Z. 217 (1994), 513-551.
[9] R. Huber, Étale Cohomology of Rigid Analytic Varieties and Adic Spaces, Aspects of Mathematics,
E30, Friedr. Vieweg & Sohn, Braunschweig, 1996.
[10] K.S. Kedlaya, Nonarchimedean Geometry of Witt Vectors, Nagoya Math. J. 209 (2013), 111-165.
[11] K.S. Kedlaya, Noetherian Properties of Fargues-Fontaine Curves, Int. Math. Res. Notices (2015), article
ID rnv227.
[12] K.S. Kedlaya and R. Liu, On families of (φ, Γ)-modules, Algebra and Number Theory 4 (2010), 943-967.
[13] K.S. Kedlaya and R. Liu, Relative p-adic Hodge theory: Foundations, Astérisque 371 (2015), 239 pages.
[14] K.S. Kedlaya and R. Liu, Relative p-adic Hodge theory II: Imperfect period rings, arXiv:1602.06899v2
(2016).
[15] R. Kiehl, Ausgezeichnete Ringe in der nichtarchimedischen analytischen Geometrie, Journal für Mathematik 234 (1969), 89-98.
[16] R. Kiehl, Der Endlichkeitssatz für eigentliche Abbildungen in der nichtarchimedischen Funktionentheorie, Invent. Math. 2 (1967), 191-214.
[17] E. Kunz, On Noetherian rings of characteristic p, Am. J. Math., 98(4):999-1013, (1976).
[18] K. Kurano and K. Shimomoto, Ideal-Adic Completion of Quasi-Excellent Rings (After Gabber)
arxiv:1609.09246 (2016).
[19] J.P. May, Munshi’s proof of the Nullstellensatz, Amer. Math. Monthly 110 (2003), 133-140.
[20] H. Matsumura, Commutative Algebra 2nd ed. W.A. Benjamin, New York, 1980.
arXiv:1510.08517v1 [cs.LO] 28 Oct 2015
Algorithmic Analysis of Qualitative and Quantitative
Termination Problems for Affine Probabilistic Programs ∗
Krishnendu Chatterjee (IST Austria, [email protected])
Hongfei Fu † (IST Austria; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, [email protected])
Petr Novotný (IST Austria, [email protected])
Rouzbeh Hasheminezhad (Sharif University of Technology, [email protected])
Abstract
In this paper, we consider termination of probabilistic programs with real-valued variables. The questions concerned are:
1. qualitative ones that ask (i) whether the program terminates with probability 1 (almost-sure termination) and (ii) whether the expected termination time is finite (finite termination);
2. quantitative ones that ask (i) to approximate the expected termination time (expectation problem) and (ii) to compute a bound B such that the probability to terminate after B steps decreases exponentially (concentration problem).
To solve these questions, we utilize the notion of ranking supermartingales which is a powerful approach for proving termination of probabilistic programs. In detail, we focus on algorithmic synthesis of linear ranking-supermartingales over affine probabilistic programs (A PP's) with both angelic and demonic non-determinism. An important subclass of A PP's is LRA PP which is defined as the class of all A PP's over which a linear ranking-supermartingale exists.
Our main contributions are as follows. Firstly, we show that the membership problem of LRA PP (i) can be decided in polynomial time for A PP's with at most demonic non-determinism, and (ii) is NP-hard and in PSPACE for A PP's with angelic non-determinism; moreover, the NP-hardness result holds already for A PP's without probability and demonic non-determinism. Secondly, we show that the concentration problem over LRA PP can be solved in the same complexity as for the membership problem of LRA PP. Finally, we show that the expectation problem over LRA PP can be solved in 2EXPTIME and is PSPACE-hard even for A PP's without probability and non-determinism (i.e., deterministic programs). Our experimental results demonstrate the effectiveness of our approach to answer the qualitative and quantitative questions over A PP's with at most demonic non-determinism.
∗ The research was partly supported by Austrian Science Fund (FWF) Grant No P23499-N23, FWF NFN Grant No S11407-N23 (RiSE/SHiNE), and ERC Start grant (279307: Graph Games). The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement No [291734].
† Supported by the Natural Science Foundation of China (NSFC) under Grant No. 61532019.
1. Introduction
Probabilistic Programs Probabilistic programs extend the classical imperative programs with random-value generators that produce random values according to some desired probability distribution. They provide a rich framework to model a wide variety of applications ranging from randomized algorithms [17, 39], to stochastic network protocols [3, 33], to robot planning [29, 32], just to mention a few. The formal analysis of probabilistic systems in general and probabilistic programs in particular has received a lot of attention in different areas, such as probability theory and statistics [18, 27, 31, 42, 44], formal methods [3, 33], artificial intelligence [28, 29], and programming languages [12, 20, 22, 46].
Qualitative and Quantitative Termination Questions The most
basic, yet important, notion of liveness for programs is termination. For non-probabilistic programs, proving termination is equivalent to synthesizing ranking functions [23], and many different approaches exist for synthesis of ranking functions over nonprobabilistic programs [9, 14, 43, 50]. While a ranking function
guarantees termination of a non-probabilistic program with certainty in a finite number of steps, there are many natural extensions
of the termination problem in the presence of probability. In general, we can classify the termination questions over probabilistic
programs as qualitative and quantitative ones. The relevant questions studied in this paper are illustrated as follows.
1. Qualitative Questions. The most basic qualitative question is on
almost-sure termination which asks whether a program terminates with probability 1. Another fundamental question is about
finite termination (aka positive almost-sure termination [7, 22])
which asks whether the expected termination time is finite. Note
that finite expected termination time implies almost-sure termination, whereas the converse does not hold in general.
2. Quantitative Questions. We consider two quantitative questions, namely expectation and concentration questions. The expectation question asks to approximate the expected termination time of a probabilistic program (within some additive or relative error), provided that the expected termination time is finite. The concentration problem asks to compute a bound B such that the probability that the termination time is below B is concentrated, or in other words, the probability that the termination time exceeds the bound B decreases exponentially.
• Infinite Probabilistic Choices with Non-determinism. The situation changes significantly in the presence of non-determinism. The Lyapunov-ranking-function method as well as the ranking-supermartingale method are sound but not complete in the presence of non-determinism for finite termination [22]. However, for a subclass of probabilistic programs with at most demonic non-determinism, a sound and complete characterization for finite termination through ranking-supermartingale is obtained in [22].
Our Focus We focus on ranking-supermartingale based algorithmic study for qualitative and quantitative questions on termination
analysis of probabilistic programs with non-determinism. In view
of the existing results, there are at least three important classes
of open questions, namely (i) efficient algorithms, (ii) quantitative
questions and (iii) complexity in presence of two different types of
non-determinism. Firstly, while [22] presents a fundamental result
on ranking supermartingales over probabilistic programs with nondeterminism, the generality of the result makes it difficult to obtain
efficient algorithms; hence an important open question that has not
been addressed before is whether efficient algorithmic approaches
can be developed for synthesizing ranking supermartingales of simple form over probabilistic programs with non-determinism. The
second class of open questions asks whether ranking supermartingales can be used to answer quantitative questions, which have not
been tackled at all to our knowledge. Finally, no previous work considers complexity to analyze probabilistic programs with both the
two types of non-determinism (as required for the synthesis problem with abstraction).
Besides, we would like to note that there exist other quantitative
questions such as bounded-termination question which asks to approximate the probability to terminate after a given number of steps
(cf. [38] etc.).
Non-determinism in Probabilistic Programs Along with probability, another fundamental aspect in modelling is non-determinism.
In programs, there can be two types of non-determinism: (i) demonic non-determinism that is adversarial (e.g., to be resolved
to ensure non-termination or to increase the expected termination
time, etc.) and (ii) angelic non-determinism that is favourable (e.g.,
to be resolved to ensure termination or to decrease the expected
termination time, etc.). The demonic non-determinism is necessary
in many cases, and a classic example is abstraction: for efficient
static analysis of large programs, it is infeasible to track all variables of the program; the key technique in such cases is abstraction of variables, where certain variables are not considered for the
analysis and they are instead assumed to induce a worst-case behaviour, which exactly corresponds to demonic non-determinism.
On the other hand, angelic non-determinism is relevant in synthesis. In program sketching (or programs with holes as studied extensively in [51]), certain expressions can be synthesized which helps
in termination, and this corresponds to resolving non-determinism
in an angelic way. The consideration of the two types of nondeterminism gives the following classes:
1. probabilistic programs without non-determinism;
2. probabilistic programs with at most demonic non-determinism;
3. probabilistic programs with at most angelic non-determinism;
4. probabilistic programs with both angelic and demonic non-determinism.
Our Contributions In this paper, we consider a subclass of probabilistic programs called affine probabilistic programs (A PP's) which involve both demonic and angelic non-determinism. In general, an A PP is a probabilistic program all of whose arithmetic expressions are linear. Our goal is to analyse the simplest class of ranking supermartingales over A PP's, namely, linear ranking supermartingales. We denote by LRA PP the set of all A PP's that admit a linear ranking supermartingale. Our main contributions are as follows:
1. Qualitative Questions. Our results are as follows.
Algorithm. We present an algorithm for probabilistic programs
with both angelic and demonic non-determinism that decides
whether a given instance of an A PP belongs to LRA PP (i.e.,
whether a linear ranking supermartingale exists), and if yes,
then synthesize a linear ranking supermartingale (for proving
almost-sure termination). We also show that almost-sure termination coincides with finite termination over LRA PP. Our
result generalizes the one [12] for probabilistic programs without non-determinism to probabilistic programs with both the
two types of non-determinism. Moreover, in [12] even for
affine probabilistic programs without non-determinism, possible quadratic constraints may be constructed; in contrast, we
show that for affine probabilistic programs with at most demonic non-determinism, a set of linear constraints suffice, leading to polynomial-time decidability (cf. Remark 2).
Complexity. We establish a number of complexity results as
well. For programs in LRA PP with at most demonic nondeterminism our algorithm runs in polynomial time by reduction to solving a set of linear constraints. In contrast, we show
that for probabilistic programs in A PP’s with only angelic nondeterminism even deciding whether a given instance belongs
to LRA PP is NP-hard. In fact our hardness proof applies even
in the case when there are no probabilities but only angelic
non-determinism. Finally, for A PP’s with two types of nondeterminism (which is NP-hard as the special case with only
angelic non-determinism is NP-hard) our algorithm reduces to
Previous Results We discuss the relevant previous results for
termination analysis of probabilistic programs.
• Discrete Probabilistic Choices. In [36, 37], McIver and Morgan
presented quantitative invariants to establish termination, which
works for probabilistic programs with non-determinism, but
restricted only to discrete probabilistic choices.
• Infinite Probabilistic Choices without Non-determinism. On
one hand, the approach of [36, 37] was extended in [12] to ranking supermartingales resulting in a sound (but not complete)
approach to prove almost-sure termination for infinite-state
probabilistic programs with integer- and real-valued random
variables drawn from distributions including uniform, Gaussian, and Poison; the approach was only for probabilistic programs without non-determinism. On the other hand, Bournez
and Garnier [7] related the termination of probabilistic programs without non-determinism to Lyapunov ranking functions. For probabilistic programs with countable state-space and
without non-determinism, the Lyapunov ranking functions provide a sound and complete method for proving finite termination [7, 24]. Another relevant approach [38] is to explore the exponential decrease of probabilities upon bounded-termination
through abstract interpretation [16], resulting in a sound method
for proving almost-sure termination.
Questions / Models                              | Prob prog without nondet | Prob prog with demonic nondet | Prob prog with angelic nondet | Prob prog with both nondet
Qualitative (Almost-sure, Finite termination)   | PTIME                    | PTIME                         | NP-hard; PSPACE (QCQP)        | NP-hard; PSPACE (QCQP)
Quantitative (Expectation)                      | PSPACE-hard; 2EXPTIME    | PSPACE-hard; 2EXPTIME         | PSPACE-hard; 2EXPTIME†        | PSPACE-hard; 2EXPTIME†
Table 1. Computational complexity of qualitative and quantitative questions for termination of probabilistic programs in LRA PP, where the complexity of quantitative questions is for bounded LRA PPs with discrete probability choices. Results marked by † were obtained under additional assumptions.
quadratic constraint solving. The problem of quadratic constraint solving is also NP-hard and can be solved in PSPACE;
we note that developing practical approaches to quadratic constraint solving (such as using semidefinite relaxation) is an active research area [6].
2. Quantitative Questions. We present three types of results. To
the best of our knowledge, we present the first complexity results (summarized in Table 1) for quantitative questions. First,
we show that the expected termination time is irrational in general for programs in LRA PP. Hence we focus on the approximation questions. For concentration results to be applicable, we
consider the class bounded LRA PP which consists of programs
that admit a linear ranking supermartingale with bounded difference. Our results are as follows.
2.
Preliminaries
2.1
Basic Notations
For a set A we denote by |A| the cardinality of A. We denote by N,
N0 , Z, and R the sets of all positive integers, non-negative integers,
integers, and real numbers, respectively. We use boldface notation
for vectors, e.g. x, y, etc, and we denote an i-th component of a
vector x by x[i].
An affine expression is an expression of the form d + a_1 x_1 + · · · + a_n x_n,
where x1 , . . . , xn are variables and d, a1 , . . . , an are real-valued
constants. Following the terminology of [30] we fix the following
nomenclature:
• Linear Constraint. A linear constraint is a formula of the form ψ or ¬ψ, where ψ is a non-strict inequality between affine expressions.
• Linear Assertion. A linear assertion is a finite conjunction of linear constraints.
• Propositionally Linear Predicate. A propositionally linear predicate is a finite disjunction of linear assertions.
Hardness Result. We show that the expectation problem is PSPACE-hard even for deterministic programs in bounded LRA PP.
Concentration Result on Termination Time. We present the
first concentration result on termination time through linear ranking-supermartingales over probabilistic programs in
bounded LRA PP. We show that by solving a variant version of
the problem for the qualitative questions, we can obtain a bound
B such that the probability that the termination time exceeds
n ≥ B decreases exponentially in n. Moreover, the bound B
computed is at most exponential. As a consequence, unfolding
a program upto O(B) steps and approximating the expected
termination time explicitly upto O(B) steps, imply approximability (in 2EXPTIME) for the expectation problem.
In this paper, we deem any linear assertion equivalently as a polyhedron defined by the linear assertion (i.e., the set of points satisfying
the assertion). It will be always clear from the context whether a
linear assertion is deemed as a logical formula or as a polyhedron.
2.2
Syntax of Affine Probabilistic Programs
In this subsection, we illustrate the syntax of programs that we
study. We refer to this class of programs as affine probabilistic
programs since it involves solely affine expressions.
Finer Concentration Inequalities. Finally, in analysis of supermartingales for probabilistic programs only Azuma’s inequality [2] has been proposed in the literature [12]. We show how to
obtain much finer concentration inequalities using Hoeffding’s
inequality [26, 35] (for all programs in bounded LRA PP) and
Bernstein’s inequalities [4, 35] (for incremental programs in
LRA PP, where all updates are increments/decrements by some
affine expression over random variables). Bernstein’s inequality
is based on the deep mathematical results in measure theory on
spin glasses [4], and we show how they can be used for analysis
of probabilistic programs.
Let X and R be countable collections of program and random variables, respectively. The abstract syntax of affine probabilistic programs (A PPs) is given by the grammar in Figure 1, where the expressions hpvari and hrvari range over X and R, respectively. The
grammar is such that hexpri and hrexpri may evaluate to an arbitrary affine expression over the program variables, and the program
and random variables, respectively (note that random variables can
only be used in the RHS of an assignment). Next, hbexpri may
evaluate to an arbitrary propositionally linear predicate.
The guard of each if-then-else statement is either a keyword angel
(intuitively, this means that the fork is non-deterministic and the
non-determinism is resolved angelically; see also the definition of
semantics below), a keyword demon (demonic resolution of nondeterminism), keyword prob(p), where p ∈ [0, 1] is a number
given in decimal representation (represents probabilistic choice,
where the if-branch is executed with probability p and the thenbranch with probability 1 − p), or the guard is a propositionally
linear predicate, in which case the statement represents a standard
deterministic conditional branching.
Example 1. We present an example of an affine probabilistic program shown in Figure 2. The program variable is x, and there is a
while loop, where given a probabilistic choice one of two statement
Experimental Results. We show the effectiveness of our approach to answer qualitative and concentration questions on
several classical problems, such as random walk in one dimension, adversarial random walk in one dimension and two
dimensions (that involves both probability and demonic nondeterminism).
Note that the most restricted class we consider is bounded LRA PP,
but we show that several classical problems, such as random walks
in one dimension, queuing processes, belong to bounded LRA PP,
for which our results provide a practical approach.
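As a concrete illustration of the linear-constraint approach for programs with at most demonic non-determinism, the following sketch (ours; it is not the implementation used in the experiments, and the constraints are specialized by hand rather than generated from the program as a real synthesizer would do) searches for a linear ranking supermartingale η(x) = a·x + b for a one-dimensional biased random walk, one of the classical examples mentioned above.

# Minimal sketch (assumption: not the paper's tool) for the loop
#   while x >= 0 do  x := x + 1 with prob 0.4;  x := x - 1 with prob 0.6  od
# LRSM conditions for the linear template eta(x) = a*x + b, specialized by hand:
#   (expected decrease)  E[eta(next x)] <= eta(x) - eps  for all x >= 0
#   (nonnegativity)      eta(x) >= 0                     for all x >= 0
# The x-dependence cancels in the first condition, leaving the linear constraints below.
from scipy.optimize import linprog

eps = 1.0
# Variables: [a, b].  E[eta(x + step)] - eta(x) = a*(0.4*1 + 0.6*(-1)) = -0.2*a <= -eps.
A_ub = [[-0.2, 0.0]]
b_ub = [-eps]
# Requiring a >= 0 and b >= 0 guarantees eta(x) >= 0 on the guard x >= 0.
res = linprog(c=[1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, None), (0.0, None)], method="highs")
a, b = res.x
print(f"linear ranking supermartingale: eta(x) = {a:g}*x + {b:g}")   # eta(x) = 5*x + 0

Since the drift is −0.2 per iteration, any a ≥ 5 yields a valid witness; the solver simply returns the smallest such coefficient.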
A random variable in a probability space (Ω, F, P) is an Fmeasurable function X : Ω → R ∪ {∞}, i.e., a function such that
for every x ∈ R ∪ {∞} the set {ω ∈ Ω | X(ω) ≤ x} belongs
to F. We denote by E(X) the expected value of a random variable
X, i.e. the Lebesgue integral of X with respect to the probability
measure P. The precise definition of the Lebesgue integral of X is
somewhat technical and we omit it here, see, e.g., [45, Chapter 4],
or [5, Chapter 5] for a formal definition. A filtration of a probability
space (Ω, F, P) is a sequence {Fi }∞
i=0 of σ-algebras over Ω such
that F0 ⊆ F1 ⊆ · · · ⊆ Fn ⊆ · · · ⊆ F.
hstmti ::= hpvari ’:=’ hrexpri
| ’if’ hndbexpri ’then’ hstmti ’else’ hstmti ’fi’
| ’while’ hbexpri ’do’ hstmti ’od’
| hstmti ’;’ hstmti | ’skip’
hexpri ::= hconstanti | hpvari | hconstanti ’∗’ hpvari
| hexpri ’+’ hexpri | hexpri ’−’ hexpri
hrexpri ::= hexpri | hrvari | hpvari | hconstanti ’∗’ hrvari
| hconstanti ’∗’ hpvari | hrexpri ’+’ hrexpri
Stochastic Game Structures There are several ways in which one
can express the semantics of A PP’s with (angelic and demonic)
non-determinism [12, 22]. In this paper we take an operational approach, viewing our programs as 2-player stochastic games, where
one-player represents the angelic non-determinism, and the other
player (the opponent) the demonic non-determinism.
Definition 1. A stochastic game structure (SGS) is a tuple G =
(L, (X, R), `0 , x0 , 7→, Pr, G), where
| hrexpri ’−’ hrexpri
hbexpri := haffexpri | haffexpri ’or’ hbexpri
haffexpri ::= hliterali | hliterali ’and’ haffexpri
hliterali ::= hexpri ’≤’ hexpri | hexpri ’≥’ hexpri
| ¬hliterali
hndbexpri ::= ’angel’ | ’demon’ | ’prob(p)’ | hbexpri
• L is a finite set of locations partitioned into four pairwise
Figure 1. Syntax of affine probabilistic programs (A PP’s).
•
x := 0 ;
w h i l e x ≥ 0 do
i f prob ( 0 . 6 ) then
i f angel then
x := x + 1
else
x := x − 1
fi
else
i f demon t h e n
x := x + 1
else
x := x − 1
fi
fi
od
•
•
•
•
We stipulate that each location has at least one outgoing transition. Moreover, for every deterministic location ` we assume the
following: if τ1 , . . . , τk are all transitions outgoing from `, then
G(τ1 ) ∨ · · · ∨ G(τk ) ≡ true and G(τi ) ∧ G(τj ) ≡ false for each
1 ≤ i < j ≤ k. And we assume that each coordinate of D represents an integrable random variable (i.e., the expected value of the
absolute value of the random variable exists).
Figure 2. An example of a probabilistic program
blocks Q1 or Q2 is executed. The block Q1 (resp. Q2 ) is executed
if the probabilistic choice is at least 0.6 (resp. less than 0.4). The
statement block Q1 (resp., Q2 ) is an angelic (resp. demonic) conditional statement to either increment or decrement x.
2.3
For notational convenience we assume that the sets X and R are
endowed with some fixed linear ordering, which allows us to write
X = {x1 , x2 , . . . , x|X| } and R = {r1 , r2 , . . . , r|R| }. Every
update function f in a stochastic game can then be viewed as
a tuple (f1 , . . . , f|X| ), where each fi is of type R|X∪R| → R.
|X|
|R|
We denote by x = (x[i])i=1 and r = (r[i])i=1 the vectors of
concrete valuations of program and random variables, respectively.
In particular, we assume that each component of r lies within the
range of the corresponding random variable. We use the following
succinct notation for special update functions: by id we denote a
function which does not change the program variables at all, i.e.
for every 1 ≤ i ≤ |X| we have fi (x, r) = x[i]. For a function
g over the program and random variables we denote by [xj /g] the
update function f such that fj (x, r) = g(x, r) and fi (x, r) = x[i]
for all i 6= j.
Semantics of Affine Probabilistic Programs
We now formally define the semantics of A PP’s. In order to do this,
we first recall some fundamental concepts from probability theory.
Basics of Probability Theory The crucial notion is of the probability space. A probability space is a triple (Ω, F, P), where Ω is a
non-empty set (so called sample space), F is a sigma-algebra over
Ω, i.e. a collection of subsets of Ω that contains the empty set ∅, and
that is closed under complementation and countable unions, and P
is a probability measure on F, i.e., a function P : F → [0, 1] such
that
• P(∅) = 0,
• for all A ∈ F it holds P(Ω r A) = 1 − P(A), and
• for all pairwise disjoint countable set sequences
PA1 , A2 , · · · ∈
FS
(i.e., Ai ∩ Aj = ∅ for all i 6= j) we have
∞
P( i=1 Ai ).
∞
i=1
disjoint subsets LA , LD , LP , and LS of angelic, demonic,
probabilistic, and standard (deterministic) locations;
X and R are finite disjoint sets of real-valued program and
random variables, respectively. We denote by D the joint distribution of variables in R;
`0 is an initial location and x0 is an initial valuation of program
variables;
7→ is a transition relation, whose every member is a tuple of the
form (`, f, `0 ), where ` and `0 are source and target program
locations, respectively, and f : R|X∪R| → R|X| is an update
function;
Pr = {Pr ` }`∈LP is a collection of probability distributions,
where each Pr ` is a discrete probability distribution on the set
of all transitions outgoing from `.
G is a function assigning a propositionally linear predicates
(guards) to each transition outgoing from deterministic locations.
We say that an SGS G is normalized if all guards of all transitions
in G are in a disjunctive normal form.
Example 2. Figure 8 shows an example of stochastic game structure. Deterministic locations are represented by boxes, angelic locations by triangles, demonic locations by diamonds, and stochas-
P(Ai ) =
4
2015/10/30
that we typically omit from the notation, i.e. if there are no angelic
locations we write only Pπ etc.
tic locations by circles. Transitions are labelled with update functions, while guards and probabilities of transitions outgoing from
deterministic and stochastic locations, respectively, are given in
rounded rectangles on these transitions. For the sake of succinctness we do not picture tautological guards and identity update functions. Note that the SGS is normalized. We will describe in Example 3 how the stochastic game structure shown corresponds to the
program described in Example 1.
From Programs to Games To every affine probabilistic program
P we can assign a stochastic game structure GP whose locations
correspond to the values of the program counter of P and whose
transition relation captures the behaviour of P . The game GP has
the same program and random variables as P , with the initial
valuation x0 of the former and the distribution D of the latter being
specified in the program’s preamble. The construction of the state
space of GP can be described inductively. For each program P
out
the game GP contains two distinguished locations, `in
P and `P ,
the latter one being always deterministic, that intuitively represent
the state of the program counter before and after executing P ,
respectively.
Dynamics of Stochastic Games A configuration of an SGS G
is a tuple (`, x), where ` is a location of G and x is a valuation
of program variables. We say that a transition τ is enabled in a
configuration (`, x) if ` is the source location of τ and in addition,
x |= G(τ ) provided that ` is deterministic.
The possible behaviours of the system modelled by G are represented by runs in G. Formally, a finite path (or execution fragment)
in G is a finite sequence of configurations (`0 , x0 ) · · · (`k , xk ) such
that for each 0 ≤ i < k there is a transition (`i , f, `i+1 ) enabled in (`i , xi ) and a valuation r of random variables such that
xi+1 = f (xi , r). A run (or execution) of G is an infinite sequence
of configurations whose every finite prefix is a finite path. A configuration (`, x) is reachable from the start configuration (`0 , x0 )
if there is a finite path starting at (`0 , x0 ) that ends in (`, x).
1. Expression and Skips. For P = x:=E where x is a program
variable and E is an arithmetic expression, or P = skip, the
out
game GP consists only locations `in
P and `P (both determinisout
in
out
)
or
(`
,
[x/E],
`
tic) and a transition (`in
P , id, `P ), respecP
P
tively.
2. Sequential Statements. For P = Q1 ; Q2 we take the games
GQ1 , GQ2 and join them by identifying the location `out
Q1 with
out
out
in
in
`in
Q2 , putting `P = `Q1 and `P = `Q2 .
3. While Statements. For P = while φ do Q od we add a new
out
deterministic location `in
P which we identify with `Q , a new
in
out
deterministic location `P , and transitions τ = (`P , id, `in
Q ),
0
out
τ 0 = (`in
P , id, `P ) such that G(τ ) = φ and G(τ ) = ¬φ.
4. If Statements. Finally, for P = if ndb then Q1 else Q2 fi we
add a new location `in
P together with two transitions τ1 =
in
in
in
(`in
P , id, `Q1 ), τ2 = (`P , id, `Q2 ), and we identify the locaout
out
out
tions `Q1 and `Q1 with `P . In this case the newly added location `in
P is angelic/demonic if and only if ndb is the keyword ’angel’/’demon’, respectively. If ndb is of the form
prob(p), the location `in
P is probabilistic with Pr `in (τ1 ) = p
P
and Pr `in (τ2 ) = 1 − p. Otherwise (i.e. if ndb is a proposiP
tionally linear predicate), `in
P is a deterministic location with
G(τ1 ) = ndb and G(τ2 ) = ¬ndb.
Once the game GP is constructed using the above rules, we put G(τ) = true for all transitions τ outgoing from deterministic locations whose guard was not set in the process, and finally we add a self-loop on the location ℓ^out_P. This ensures that the assumptions in Definition 1 are satisfied. Furthermore, note that for an SGS obtained from a program P, since the only branching is conditional branching, every location ℓ has at most two successors ℓ1, ℓ2.

Example 3. We now illustrate step by step how the SGS of Example 2 corresponds to the program of Example 1. We first consider the statements Q1 and Q2 (Figure 3), and show the corresponding SGSs in Figure 4. Then consider the statement block Q3, which is a probabilistic choice between Q1 and Q2 (Figure 5). The corresponding SGS (Figure 6) is obtained from the previous two SGSs as follows: we consider a probabilistic start location with a probabilistic branch to the start locations of the SGSs of Q1 and Q2, and the SGS ends in a location with only a self-loop. Finally, we consider the whole program as Q4 (Figure 7), and the corresponding SGS in Figure 8. This SGS is obtained from the SGS for Q3, with the self-loop replaced by a transition back to the probabilistic location (with guard x ≥ 0) and an edge to the final location (with guard x < 0). The start location of the whole program is a new location, with a transition labelled x := 0, to the start of the while-loop location. We label the locations in Figure 8 so that we can refer to them later.

Due to the presence of non-determinism and probabilistic choices, an SGS G may exhibit a multitude of possible behaviours. The probabilistic behaviour of G can be captured by constructing a suitable probability measure over the set of all its runs. However, before this can be done, the non-determinism in G needs to be resolved. To do this, we utilize the standard notion of a scheduler.

Definition 2. An angelic (resp., demonic) scheduler in an SGS G is a function which assigns to every finite path in G that ends in an angelic (resp., demonic) configuration (ℓ, x) a transition outgoing from ℓ.

Intuitively, we view the behaviour of G as a game played between two players, angel and demon, with angelic and demonic schedulers representing the strategies of the respective players. That is, schedulers are blueprints for the players that tell them how to play the game. The behaviour of G under an angelic scheduler σ and a demonic scheduler π can then be intuitively described as follows: The game starts in the initial configuration (ℓ0, x0). In every step i, assuming the current configuration is (ℓi, xi), the following happens:
• A valuation vector r for the random variables of G is sampled according to the distribution D.
• A transition τ = (ℓi, f, ℓ′) enabled in (ℓi, xi) is chosen according to the following rules:
  If ℓi is angelic (resp., demonic), then τ is chosen deterministically by the scheduler σ (resp., π). That is, if ℓi is angelic (resp., demonic) and c0 c1 · · · ci is the sequence of configurations observed so far, then τ equals σ(c0 c1 · · · ci) (resp., π(c0 c1 · · · ci)).
  If ℓi is probabilistic, then τ is chosen randomly according to the distribution Pr_{ℓi}.
  If ℓi is deterministic, then by the definition of an SGS there is exactly one enabled transition outgoing from ℓi, and this transition is chosen as τ.
• The transition τ is traversed and the game enters a new configuration (ℓi+1, xi+1) = (ℓ′, f(xi, r)).
In this way, the players and random choices eventually produce a random run in G. The above intuitive explanation can be formalized by showing that the schedulers σ and π induce a unique probability measure Pσ,π over a suitable σ-algebra having the runs of G as its sample space. If G does not have any angelic/demonic locations, there is only one angelic/demonic scheduler (an empty function).
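As a concrete illustration of the three rules above (our own sketch, not from the formal development; the game encoding, the function names, and the simplification that updates ignore the sampled valuation r are assumptions), the following Python snippet samples one run of a stochastic game under given angelic and demonic schedulers.

```python
import random

def sample_run(start, kind, transitions, prob, sigma, pi, max_steps=1000):
    """Sample one run. kind[l] is 'angel', 'demon', 'prob' or 'det';
    transitions[l] lists (guard, update, successor); prob[l][i] is the
    probability of the i-th transition of a probabilistic location;
    sigma/pi map the history of configurations to an index into the
    enabled transitions."""
    loc, x = start
    history = [(loc, x)]
    for _ in range(max_steps):
        enabled = [(i, t) for i, t in enumerate(transitions.get(loc, [])) if t[0](x)]
        if not enabled:
            break                                   # terminal location (self-loop omitted)
        if kind[loc] == "angel":
            _, (g, f, succ) = enabled[sigma(history)]
        elif kind[loc] == "demon":
            _, (g, f, succ) = enabled[pi(history)]
        elif kind[loc] == "prob":
            idx = random.choices(range(len(enabled)),
                                 weights=[prob[loc][i] for i, _ in enabled])[0]
            _, (g, f, succ) = enabled[idx]
        else:                                       # deterministic: exactly one enabled transition
            _, (g, f, succ) = enabled[0]
        x = f(x)                                    # updates here depend only on x
        loc = succ
        history.append((loc, x))
    return history
```

For instance, for the SGS of Q4 one would use the locations ℓ0-ℓ9 as keys of `kind` and `transitions` and pass schedulers that, say, always pick the first enabled transition.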
Q1: if angel then x := x + 1 else x := x − 1 fi
Q2: if demon then x := x + 1 else x := x − 1 fi

Figure 3. Programs Q1 and Q2

[SGS diagrams omitted]

Figure 4. SGSs for Q1 (left) and Q2 (right)

Q3: if prob(0.6) then Q1 else Q2 fi

Figure 5. Program Q3

Q4: x := 0; while (x ≥ 0) do Q3 od

Figure 7. Program Q4

[SGS diagram omitted: locations ℓ0-ℓ9, probabilistic branch with probabilities 6/10 and 4/10, loop guard x ≥ 0 and exit guard x < 0]

Figure 8. SGS of Q4
2.4 Qualitative and Quantitative Termination Questions

We consider the most basic notion of liveness, namely termination, for probabilistic programs, and present the relevant qualitative and quantitative questions.

Note that we assume that an SGS GP for a program P is already given in normalized form, as our algorithms assume that all guards in GP are in DNF. In general, converting an SGS into a normalized SGS incurs an exponential blow-up, as this is the worst-case blowup when converting a formula into DNF. However, we note that for programs P that contain only simple guards, i.e. guards that are either conjunctions or disjunctions of linear constraints, a normalized game GP can easily be constructed in polynomial time using de Morgan's laws. In particular, we stress that all our hardness results hold already for programs with simple guards, so they do not rely on the requirement that GP be normalized.

Qualitative questions. We consider the two basic qualitative questions, namely almost-sure termination (i.e., termination with probability 1) and finite expected termination time. We formally define them below.

Given a program P, let GP be the associated SGS. A run ρ is terminating if it reaches a configuration in which the location is ℓ^out_P. Consider the random variable T which to every run ρ in GP assigns the first point in time at which a configuration with the location ℓ^out_P is encountered; if the run never reaches such a configuration, then the value assigned is ∞.

Definition 3 (Qualitative termination questions). Given a program P and its associated normalized SGS GP, we consider the following two questions:
1. Almost-Sure Termination. The program is almost-surely (a.s.) terminating if there exists an angelic scheduler σ (called a.s. terminating) such that for all demonic schedulers π we have Pσ,π({ρ | ρ is terminating}) = 1; or, equivalently, Pσ,π(T < ∞) = 1.
2. Finite Termination. The program P is finitely terminating (aka positively almost-surely terminating) if there exists an angelic scheduler σ (called finitely terminating) such that for all demonic schedulers π it holds that Eσ,π[T] < ∞.

Note that for all angelic schedulers σ and demonic schedulers π we have that Eσ,π[T] < ∞ implies Pσ,π(T < ∞) = 1; however, the converse does not hold in general. In other words, finite termination implies a.s. termination, but a.s. termination does not imply finite termination.

Definition 4 (Quantitative termination questions). Given a program P and its associated normalized SGS GP, we consider the following notions:
1. Expected Termination Time. The expected termination time of P is ET(P) = inf_σ sup_π Eσ,π[T].
2. Concentration Bound. A bound B is a concentration bound if there exist two positive constants c1 and c2 such that for all x ≥ B we have Thr(P, x) ≤ c1 · exp(−c2 · x), where Thr(P, x) = inf_σ sup_π Pσ,π(T > x) (i.e., the probability that the termination time exceeds x ≥ B decreases exponentially in x).

[SGS diagram omitted]

Figure 6. SGS of Q3

3. The Class LRAPP

For probabilistic programs a very powerful technique to establish termination is based on ranking supermartingales. The simplest form of ranking supermartingales are the linear ones. In this section we consider the class of APPs for which linear ranking supermartingales exist, and refer to it as LRAPP. Linear ranking supermartingales have been considered for probabilistic programs without any type of non-determinism [12]. We show how to extend the approach in the presence of the two types of non-determinism. We also show that in LRAPP a.s. termination coincides with finite termination (i.e., in contrast to the general case, where a.s. termination might not imply finite termination, for the well-behaved class of LRAPP a.s. termination implies finite termination). We first present the general
notion of ranking supermartingales, and will establish their role in qualitative termination.

Definition 5 (Ranking Supermartingales [22]). A discrete-time stochastic process {Xn}_{n∈N} wrt a filtration {Fn}_{n∈N} is a ranking supermartingale (RSM) if there exist K < 0 and ε > 0 such that for all n ∈ N, E(|Xn|) exists and it holds almost surely (with probability 1) that
  Xn ≥ K   and   E(Xn+1 | Fn) ≤ Xn − ε · 1_{Xn≥0},
where E(Xn+1 | Fn) is the conditional expectation of Xn+1 given the σ-algebra Fn (cf. [53, Chapter 9]).

In the following proposition we establish (with a detailed proof in the appendix) the relationship between RSMs and a certain notion of termination time.

Proposition 1. Let {Xn}_{n∈N} be an RSM wrt a filtration {Fn}_{n∈N} and let the numbers K, ε be as in Definition 5. Let Z be the random variable defined as Z := min{n ∈ N | Xn < 0}, which denotes the first time n at which the RSM Xn drops below 0. Then P(Z < ∞) = 1 and E(Z) ≤ (E(X1) − K)/ε.

Remark 1. WLOG we can assume that the constants K and ε in Definition 5 satisfy K ≤ −1 and ε ≥ 1, as an RSM can be scaled by a positive scalar to ensure that ε and the absolute value of K are sufficiently large.

For the rest of the section we fix an affine probabilistic program P and let GP = (L, (X, R), ℓ0, x0, ↦, Pr, G) be its associated SGS. We fix the filtration {Fn}_{n∈N} such that each Fn is the smallest σ-algebra on runs that makes all random variables in {θj}_{1≤j≤n}, {x_{k,j}}_{1≤k≤|X|, 1≤j≤n} measurable, where θj is the random variable representing the location at the j-th step (note that each location can be regarded as a natural number), and x_{k,j} is the random variable representing the value of the program variable xk at the j-th step.

To introduce the notion of linear ranking supermartingales, we need the notion of linear invariants, defined as follows.

Definition 6 (Linear Invariants). A linear invariant on X is a function I assigning a finite set of non-empty linear assertions on X to each location of GP such that for all configurations (ℓ, x) reachable from (ℓ0, x0) in GP it holds that x ∈ ∪I(ℓ).

Generation of linear invariants can be done through abstract interpretation [16], as adopted in [12]. We first extend the notion of pre-expectation [12] to both angelic and demonic non-determinism.

Definition 7 (Pre-Expectation). Let η : L × R^|X| → R be a function. The function preη : L × R^|X| → R is defined by:
• preη(ℓ, x) := Σ_{(ℓ,id,ℓ′)∈↦} Pr_ℓ(ℓ, id, ℓ′) · η(ℓ′, x) if ℓ is a probabilistic location;
• preη(ℓ, x) := max_{(ℓ,id,ℓ′)∈↦} η(ℓ′, x) if ℓ is a demonic location;
• preη(ℓ, x) := min_{(ℓ,id,ℓ′)∈↦} η(ℓ′, x) if ℓ is an angelic location;
• preη(ℓ, x) := η(ℓ′, E_R(f(x, r))) if ℓ is a deterministic location, (ℓ, f, ℓ′) ∈ ↦ and x ∈ G(ℓ, f, ℓ′), where E_R(f(x, r)) is the expected value of f(x, ·).

Intuitively, preη(ℓ, x) is the one-step optimal expected value of η from the configuration (ℓ, x). In view of Remark 1, the notion of linear ranking supermartingales is now defined as follows.

Definition 8 (Linear Ranking-Supermartingale Maps). A linear ranking-supermartingale map (LRSM) wrt a linear invariant I for GP is a function η : L × R^|X| → R such that the following conditions (C1-C4) hold: there exist ε ≥ 1 and K, K′ ≤ −1 such that for all ℓ ∈ L and all x ∈ R^|X| we have
• C1: the function η(ℓ, ·) : R^|X| → R is linear over the program variables X;
• C2: if ℓ ≠ ℓ^out_P and x ∈ ∪I(ℓ), then η(ℓ, x) ≥ 0;
• C3: if ℓ = ℓ^out_P and x ∈ ∪I(ℓ), then K′ ≤ η(ℓ, x) ≤ K;
• C4: ℓ ≠ ℓ^out_P ∧ x ∈ ∪I(ℓ) → preη(ℓ, x) ≤ η(ℓ, x) − ε.

We refer to the above conditions as follows: C1 is the linearity condition; C2 is the non-terminating non-negativity condition, which specifies that at every non-terminating location the RSM is non-negative; C3 is the terminating negativity condition, which specifies that at the terminating location the RSM is negative (at most −1) and bounded from below; C4 is the supermartingale difference condition, which is intuitively related to the ε difference in the RSM definition (cf. Definition 5).

Remark 2. In [12], condition C3 is written as K′ ≤ η(ℓ, x) < 0 and is handled by Motzkin's Transposition Theorem, resulting in possibly quadratic constraints. Here we replace η(ℓ, x) < 0 equivalently with η(ℓ, x) ≤ K, which allows one to obtain linear constraints through Farkas' linear assertion, where the equivalence follows from the fact that the maximal value of a linear program is attained whenever it is finite. This is crucial to our PTIME result for programs with at most demonic non-determinism.

Informally, LRSMs extend the linear expression maps defined in [12] with both angelic and demonic non-determinism. The following theorem establishes the soundness of LRSMs.

Theorem 1. If there exists an LRSM η wrt I for GP, then
1. P is a.s. terminating; and
2. ET(P) ≤ (η(ℓ0, x0) − K′)/ε. In particular, ET(P) is finite.

Key proof idea. Let η be an LRSM wrt a linear invariant I for GP. Let σ be the angelic scheduler whose decisions optimize the value of η at the last configuration of any finite path, represented by
  σ(ℓ, x) = argmin_{(ℓ,f,ℓ′)∈↦} η(ℓ′, x)
for all end configurations (ℓ, x) such that ℓ ∈ LA and x ∈ R^|X|. Fix any demonic strategy π. Let the stochastic process {Xn}_{n∈N} be defined by Xn(ω) := η(θn(ω), {x_{k,n}(ω)}_k). We show that {Xn}_{n∈N} is an RSM, and then use Proposition 1 to obtain the desired result (detailed proof in the appendix).

Remark 3. Note that the proof of Theorem 1 also provides a way to synthesize, given the LRSM, an angelic scheduler ensuring that the expected termination time is finite (as our proof gives an explicit construction of such a scheduler). Also note that the result provides an upper bound, which we denote by UB(P), on ET(P).

The class LRAPP. The class LRAPP consists of all APPs for which there exists a linear invariant I such that an LRSM exists wrt I for GP. It follows from Theorem 1 that programs in LRAPP terminate almost-surely and have finite expected termination time.

4. LRAPP: Qualitative Analysis

In this section we study the computational problems related to LRAPP. We consider the following basic computational questions regarding realizability and synthesis.

LRAPP realizability and synthesis. Given an APP P with its normalized SGS GP and a linear invariant I, we consider the following questions:
1. LRAPP realizability. Does there exist an LRSM wrt I for GP?
2. LRAPP synthesis. If the answer to the realizability question is yes, then construct a witness LRSM.
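Before turning to the algorithm, the following small Python sketch (our illustration; the coefficients, the probabilities, and the data layout are invented, not part of the paper's development) shows how the pre-expectation of Definition 7 and the pointwise versions of conditions C2 and C4 can be evaluated for a candidate linear map η on a one-variable program in the spirit of the 1D random walk of Section 6.

```python
# Candidate eta(l, x) = a[l]*x + b[l]; the walk decreases x by 1 with
# probability 0.7 and increases it by 1 with probability 0.3 (assumed numbers).
a = {"loop": 3.0}
b = {"loop": 1.0}
eps = 1.0

def eta(l, x):
    return a[l] * x + b[l]

def pre_eta(l, x, succ):
    """Pre-expectation of Definition 7; succ = (kind, items) with kind in
    {'prob', 'demon', 'angel', 'det'}."""
    kind, items = succ
    if kind == "prob":
        return sum(p * eta(l2, f(x)) for p, l2, f in items)
    if kind == "demon":
        return max(eta(l2, f(x)) for l2, f in items)
    if kind == "angel":
        return min(eta(l2, f(x)) for l2, f in items)
    (l2, f), = items                      # deterministic: exactly one enabled transition
    return eta(l2, f(x))

succ = ("prob", [(0.7, "loop", lambda x: x - 1), (0.3, "loop", lambda x: x + 1)])

# Check C2 (non-negativity) and C4 (supermartingale difference) on sample
# points of the invariant x >= 0.
for x in range(0, 100):
    assert eta("loop", x) >= 0
    assert pre_eta("loop", x, succ) <= eta("loop", x) - eps
```

A real procedure must of course discharge C2 and C4 for all x in the invariant rather than on sample points; that is exactly what the Farkas-based encoding of the next subsection automates.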
Note that the existence of an LRSM implies almost-sure and finite termination (Theorem 1), and thus presents affirmative answers to the qualitative questions. We establish the following result.

Theorem 2. The following assertions hold:
1. The LRAPP realizability and synthesis problems for programs in APPs can be solved in PSPACE, by solving a set of quadratic constraints.
2. For programs in APPs with only demonic non-determinism, the LRAPP realizability and synthesis problems can be solved in polynomial time, by solving a set of linear constraints.
3. Even for programs in APPs with simple guards, only angelic non-determinism, and no probabilistic choice, the LRAPP realizability problem is NP-hard.

Discussion and organization. The significance of our result is as follows: on the one hand it presents a practical approach (based on quadratic constraints for general APPs, and linear constraints for APPs with only demonic non-determinism) for the problem, and on the other hand it shows a sharp contrast in complexity between the case with angelic non-determinism and the case with demonic non-determinism (NP-hard vs PTIME). In Section 4.1 we present an algorithm that establishes the first two items, and we establish the hardness result in Section 4.2.

4.1 Algorithm and Upper Bounds

Solution overview. Our algorithm is based on an encoding of the conditions (C1-C4) for an LRSM into a set of universally quantified formulae. The universally quantified formulae are then translated to existentially quantified formulae, and the key technical machineries are Farkas' Lemma and Motzkin's Transposition Theorem (which we present below).

Theorem 3 (Farkas' Lemma [21, 48]). Let A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and d ∈ R. Assume that {x | Ax ≤ b} ≠ ∅. Then
  {x | Ax ≤ b} ⊆ {x | c^T x ≤ d}
iff there exists y ∈ R^m such that y ≥ 0, A^T y = c and b^T y ≤ d.

Farkas' linear assertion Φ. Farkas' Lemma transforms the inclusion testing of non-strict systems of linear inequalities into an emptiness problem. For the sake of convenience, given a polyhedron H = {x | Ax ≤ b} with A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n, d ∈ R, we define the linear assertion Φ[H, c, d](ξ) (which we refer to as Farkas' linear assertion) for Farkas' Lemma by
  Φ[H, c, d](ξ) := ξ ≥ 0 ∧ A^T ξ = c ∧ b^T ξ ≤ d,
where ξ is a column-vector variable of dimension m. Moreover, let Φ[R^n, c, d](ξ) := (c = 0 ∧ d ≥ 0); note that R^n ⊆ {x | c^T x ≤ d} iff c = 0 and d ≥ 0.

Below we show (proof in the appendix) that Farkas' Lemma can be slightly extended to strict inequalities.

Lemma 1. Let A ∈ R^{m×n}, B ∈ R^{k×n}, b ∈ R^m and d ∈ R^k. Let Z_< = {x | Ax ≤ b ∧ Bx < d} and Z_≤ = {x | Ax ≤ b ∧ Bx ≤ d}. Assume that Z_< ≠ ∅. Then for all closed subsets H ⊆ R^n we have that Z_< ⊆ H implies Z_≤ ⊆ H.

Remark 4. Lemma 1 is crucial to ensure that our approach runs in polynomial time when P does not involve angelic non-determinism.

The following theorem, Motzkin's Transposition Theorem, handles general systems of linear inequalities with strict inequalities.

Theorem 4 (Motzkin's Transposition Theorem [40]). Let A ∈ R^{m×n}, B ∈ R^{k×n} and b ∈ R^m, c ∈ R^k. Assume that {x ∈ R^n | Ax ≤ b} ≠ ∅. Then
  {x ∈ R^n | Ax ≤ b} ∩ {x ∈ R^n | Bx < c} = ∅
iff there exist y ∈ R^m and z ∈ R^k such that y, z ≥ 0, 1^T · z > 0, A^T y + B^T z = 0 and b^T y + c^T z ≤ 0.

Remark 5. The version of Motzkin's Transposition Theorem stated here is a simplified one, obtained by taking into account the assumption {x ∈ R^n | Ax ≤ b} ≠ ∅.

Motzkin assertion Ψ. Given a polyhedron H = {x | Ax ≤ b} with A ∈ R^{m×n}, b ∈ R^m, and B ∈ R^{k×n}, c ∈ R^k, we define the assertion Ψ[H, B, c](ξ, ζ) (which we refer to as the Motzkin assertion) for Motzkin's Theorem by
  Ψ[H, B, c](ξ, ζ) := ξ ≥ 0 ∧ ζ ≥ 0 ∧ 1^T · ζ > 0 ∧ A^T ξ + B^T ζ = 0 ∧ b^T ξ + c^T ζ ≤ 0,
where ξ (resp. ζ) is an m-dimensional (resp. k-dimensional) column-vector variable. Note that if all the parameters H, B and c are constant, then the assertion is linear; in general, however, the assertion is quadratic.

Handling the emptiness check. The results described so far on linear inequalities require that certain sets defined by linear inequalities be non-empty. The following lemma presents a way to detect whether such a set is empty (proof in the appendix).

Lemma 2. Let A ∈ R^{m×n}, B ∈ R^{k×n}, b ∈ R^m and d ∈ R^k. Then all of the following three problems can be decided in polynomial time in the binary encoding of A, B, b, d:
1. whether {x | Ax ≤ b} = ∅;
2. whether {x | Ax ≤ b ∧ Bx < d} = ∅;
3. whether {x | Bx < d} = ∅.

Below we fix an input APP P.

Notations 1 (Notations for Our Algorithm). Our algorithm for LRSM realizability and synthesis, which we call LRSMSynth, is notationally heavy. To present the algorithm succinctly we will use the following notations (which will be used repeatedly in the algorithm).
1. H_{k,ℓ}: We let I(ℓ) = ∪_{k=1}^{k_ℓ} {H_{k,ℓ}}, where each H_{k,ℓ} is a satisfiable linear assertion.
2. G(τ): For each transition τ (in the SGS GP), we deem the propositionally linear predicate G(τ) also as a set of linear assertions whose members are exactly the conjunctive sub-clauses of G(τ).
3. NS(H): For each linear assertion H, we define NS(H) as follows: (i) if H = ∅, then NS(H) := ∅; and (ii) otherwise, the polyhedron NS(H) is obtained by changing each appearance of '<' (resp. '>') to '≤' (resp. '≥') in H. In other words, NS(H) is the non-strict inequality version of H.
4. ↦_ℓ: We define ↦_ℓ := {(ℓ′, f, ℓ″) ∈ ↦ | ℓ′ = ℓ} to be the set of transitions from ℓ ∈ L.
5. Op(ℓ): For a location ℓ we call Op(ℓ) the following open sentence: ∀x ∈ ∪I(ℓ). preη(ℓ, x) ≤ η(ℓ, x) − ε, which specifies the condition C4 for an LRSM.
6. We will consider {a^ℓ}_{ℓ∈L} and {b^ℓ}_{ℓ∈L} as vector and scalar variables, respectively, and will use c^ℓ, d^ℓ as vector/scalar linear expressions over {a^{ℓ′}}_{ℓ′∈L} and {b^{ℓ′}}_{ℓ′∈L}, to be determined by PreExp (cf. Item 8 below). Similarly, for a transition τ we will also use c^{ℓ,τ} (resp. c^τ) and d^{ℓ,τ} (resp. d^τ) as linear expressions over {a^{ℓ′}}_{ℓ′∈L} and {b^{ℓ′}}_{ℓ′∈L}.
7. Half: We will use the notation Half for polyhedra (half-spaces) given by Half(c^ℓ, d^ℓ, ε) = {x ∈ R^|X| | (c^ℓ)^T x ≤ d^ℓ − ε}, and similarly Half(c^τ, d^τ, ε) and Half(c^{ℓ,τ}, d^{ℓ,τ}, ε).
8. PreExp: We will use PreExp(c^ℓ, d^ℓ, ε) to denote the predicate (c^ℓ)^T x ≤ d^ℓ − ε ⇔ preη(ℓ, x) ≤ η(ℓ, x) − ε, and similarly for PreExp(c^{ℓ,τ}, d^{ℓ,τ}, ε). For τ ∈ ↦ from a non-deterministic location ℓ to a target location ℓ′, we use PreExp(c^τ, d^τ, ε) to denote the predicate (c^τ)^T x ≤ d^τ − ε ⇔ η(ℓ′, x) ≤ η(ℓ, x) − ε. By Item 1 in Algorithm LRSMSynth (cf. below), one can observe that PreExp determines c^ℓ, d^ℓ, c^{ℓ,τ}, d^{ℓ,τ}, c^τ, d^τ in terms of a^ℓ, b^ℓ.

Running example. Since our algorithm is technical, we will illustrate its steps on the running example. We consider the SGS of Figure 8, and assign the invariant I such that I(ℓ0) = I(ℓ1) = true, I(ℓi) = (x ≥ 0) for 2 ≤ i ≤ 8, and I(ℓ9) = (x < 0).
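To make Farkas' linear assertion concrete, here is a small numeric sketch (our illustration with made-up numbers; it is not the LRSMSynth implementation) that checks the satisfiability of Φ[H, c, d](ξ) := ξ ≥ 0 ∧ A^T ξ = c ∧ b^T ξ ≤ d for fixed A, b, c, d with an off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def farkas_assertion_feasible(A, b, c, d):
    """Check whether some xi >= 0 satisfies A^T xi = c and b^T xi <= d,
    i.e. whether Phi[H, c, d] is satisfiable for H = {x | A x <= b}."""
    m, n = A.shape
    res = linprog(
        c=np.zeros(m),                                # pure feasibility check
        A_ub=b.reshape(1, m), b_ub=np.array([d]),     # b^T xi <= d
        A_eq=A.T, b_eq=c,                             # A^T xi = c
        bounds=[(0, None)] * m,                       # xi >= 0
        method="highs",
    )
    return res.status == 0                            # status 0: a feasible point was found

# Toy instance: H = {x in R | -x <= 0}, i.e. x >= 0.
A = np.array([[-1.0]]); b = np.array([0.0])
print(farkas_assertion_feasible(A, b, c=np.array([1.0]), d=-1.0))   # False: x >= 0 is not inside x <= -1
print(farkas_assertion_feasible(A, b, c=np.array([-1.0]), d=0.0))   # True:  x >= 0 is inside -x <= 0
```

By Theorem 3, satisfiability of the assertion is equivalent to the polyhedral inclusion H ⊆ {x | c^T x ≤ d}, which is how the algorithm below discharges the universally quantified conditions.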
Algorithm LRSMSynth. Intuitively, our algorithm transforms the conditions C1-C4 of Definition 8 into Farkas or Motzkin assertions; the transformation differs among the different types of locations.

The steps of algorithm LRSMSynth are as follows: (i) the first two steps are related to initialization; (ii) steps 3-5 then specify condition C4 of an LRSM, where step 3 considers probabilistic locations, step 4 deterministic locations, and step 5 both angelic and demonic locations; (iii) step 6 specifies condition C2 and step 7 specifies condition C3 of an LRSM; and (iv) finally, step 8 integrates all the previous steps into a set of constraints. We present the algorithm, and each of the steps 3-7 is illustrated on the running example immediately after the algorithm. Formally, the steps are as follows:

1. Template. The algorithm assigns a template η for an LRSM by setting η(ℓ, x) := (a^ℓ)^T x + b^ℓ for each ℓ and x ∈ R^|X|. This ensures the linearity condition C1 (cf. Item 6 in Notations 1).
2. Variables for martingale difference and terminating negativity. The algorithm assigns a variable ε and variables K, K′.
3. Probabilistic locations. For each probabilistic location ℓ ∈ L \ {ℓ^out_P}, the algorithm transforms the open sentence Op(ℓ) equivalently into
   ⋀_{k=1}^{k_ℓ} [H_{k,ℓ} ⊆ Half(c^ℓ, d^ℓ, ε)]
such that PreExp(c^ℓ, d^ℓ, ε) holds. Using Farkas' Lemma, Lemma 1 and Lemma 2, the algorithm further transforms it equivalently into the Farkas linear assertion
   φ_ℓ := ⋀_{k=1}^{k_ℓ} Φ[NS(H_{k,ℓ}), c^ℓ, d^ℓ − ε](ξp^{k,ℓ}),
where we have fresh variables {ξp^{k,ℓ}}_{k,ℓ} (cf. Notations 1 for the meaning of H_{k,ℓ}, PreExp, Half, NS(·), Op, etc.).
4. Deterministic locations. For each deterministic location ℓ ∈ L \ {ℓ^out_P}, the algorithm transforms the open sentence Op(ℓ) equivalently into
   ⋀_{k=1}^{k_ℓ} ⋀_{τ∈↦_ℓ} ⋀_{φ′∈G(τ)} [H_{k,ℓ} ∧ φ′ ⊆ Half(c^{ℓ,τ}, d^{ℓ,τ}, ε)]
such that the sentence ∀x ∈ G(τ). PreExp(c^{ℓ,τ}, d^{ℓ,τ}, ε) holds. Using Farkas' Lemma, Lemma 1 and Lemma 2, the algorithm further transforms it equivalently into
   φ_ℓ := ⋀_{k=1}^{k_ℓ} ⋀_{τ∈↦_ℓ} ⋀_{φ′∈G(τ)} Φ[NS(H_{k,ℓ} ∧ φ′), c^{ℓ,τ}, d^{ℓ,τ} − ε](ξdt^{k,ℓ,φ′}),
where we have fresh variables {ξdt^{k,ℓ,φ′}}.
5. Demonic and angelic locations. For each demonic (resp. angelic) location ℓ, the algorithm transforms the open sentence Op(ℓ) equivalently into
   ⋀_{k=1}^{k_ℓ} [H_{k,ℓ} ⊆ SetOp_{τ∈↦_ℓ} Half(c^τ, d^τ, ε)],
where SetOp is ∩ for a demonic location and ∪ for an angelic location, such that PreExp(c^τ, d^τ, ε) holds.
   Demonic case. Using Farkas' Lemma, Lemma 1 and Lemma 2, the algorithm further transforms the open sentence equivalently into the linear assertion
   φ_ℓ := ⋀_{k=1}^{k_ℓ} ⋀_{τ∈↦_ℓ} Φ[NS(H_{k,ℓ}), c^τ, d^τ − ε](ξdm^{k,ℓ,τ}),
where we have fresh variables {ξdm^{k,ℓ,τ}}_{1≤k≤k_ℓ, τ∈↦_ℓ}.
   Angelic case. The algorithm further transforms the sentence equivalently into
   ⋀_{k=1}^{k_ℓ} [H_{k,ℓ} ∩ ⋂_{τ∈↦_ℓ} {x ∈ R^|X| | (c^τ)^T x > d^τ − ε} = ∅];
finally, from Motzkin's Transposition Theorem, Lemma 1 and Lemma 2, the algorithm transforms the sentence equivalently into the nonlinear constraint (Motzkin assertion)
   φ_ℓ := ⋀_{k=1}^{k_ℓ} Ψ[NS(H_{k,ℓ}), C_ℓ, d_ℓ]({ξag^{k,ℓ}}_{k,ℓ}, {ζag^τ}_{τ∈↦_ℓ})
with fresh variables {ξag^{k,ℓ}}_{1≤k≤k_ℓ} and {ζag^τ}_{τ∈↦_ℓ}, where
   C_ℓ = −[(c^{τ1})^T; …; (c^{τm})^T]   and   d_ℓ = −[d^{τ1}; …; d^{τm}] + ε · 1,
with ↦_ℓ = {τ1, …, τm}.
6. Non-negativity for non-terminating locations. For each location ℓ other than the terminating location ℓ^out_P, the algorithm transforms the open sentence
   ϕ_ℓ := ∀x. x ∈ ∪I(ℓ) → η(ℓ, x) ≥ 0
equivalently into
   ⋀_{k=1}^{k_ℓ} [H_{k,ℓ} ⊆ Half(−a^ℓ, b^ℓ, 0)].
Using Farkas' Lemma, Lemma 1 and Lemma 2, the algorithm further transforms it equivalently into
   ϕ_ℓ := ⋀_{k=1}^{k_ℓ} Φ[NS(H_{k,ℓ}), −a^ℓ, b^ℓ](ξnt^{k,ℓ}),
where the ξnt^{k,ℓ} are fresh variables.
7. Negativity for the terminating location. For the terminating location ℓ^out_P, the algorithm transforms the open sentence
   ϕ_{ℓ^out_P} := ∀x. x ∈ ∪I(ℓ^out_P) → K′ ≤ η(ℓ^out_P, x) ≤ K
equivalently into
   ⋀_{k=1}^{k_{ℓ^out_P}} [H_{k,ℓ} ⊆ Half(a^{ℓ^out_P}, −b^{ℓ^out_P}, −K)] ∧ ⋀_{k=1}^{k_{ℓ^out_P}} [H_{k,ℓ} ⊆ Half(−a^{ℓ^out_P}, b^{ℓ^out_P}, K′)].
Using Farkas' Lemma, Lemma 1 and Lemma 2, the algorithm further transforms it equivalently into
   ϕ_{ℓ^out_P} := ⋀_{k=1}^{k_{ℓ^out_P}} Φ[NS(H_{k,ℓ^out_P}), a^{ℓ^out_P}, −b^{ℓ^out_P} + K](ξt^{k,ℓ^out_P}) ∧ ⋀_{k=1}^{k_{ℓ^out_P}} Φ[NS(H_{k,ℓ^out_P}), −a^{ℓ^out_P}, b^{ℓ^out_P} − K′](ξtt^{k,ℓ^out_P}),
where the ξt^{k,ℓ}'s and ξtt^{k,ℓ}'s are fresh variables.
8. Solving the constraint problem. For each location ℓ, let φ_ℓ and ϕ_ℓ be the formulae obtained in steps 3-5 and in steps 6-7, respectively. The algorithm outputs whether the following formula is satisfiable:
   Ξ_P := ε ≥ 1 ∧ K, K′ ≤ −1 ∧ ⋀_{ℓ∈L} (φ_ℓ ∧ ϕ_ℓ),
where satisfiability is interpreted over all relevant open variables in Ξ_P.

Example 4 (Illustration of algorithm LRSMSynth on the running example). We describe the steps of the algorithm on the running example. For the sake of convenience, we abbreviate a^{ℓi}, b^{ℓi} by ai, bi.
• Probabilistic location: step 3. In our example, φ_{ℓ2} = Φ[x ≥ 0, 0.6a3 + 0.4a4 − a2, b2 − 0.6b3 − 0.4b4 − ε].
• Deterministic locations: step 4. In our example, φ_{ℓ0} = Φ[R, −a0, b0 − b1 − ε], φ_{ℓ1} = Φ[x ≥ 0, a2 − a1, b1 − b2 − ε] ∧ Φ[x ≤ 0, a9 − a1, b1 − b9 − ε], φ_{ℓi} = Φ[x ≥ 0, a1 − ai, bi − b1 − ε − a1] for i ∈ {5, 7}, and φ_{ℓi} = Φ[x ≥ 0, a1 − ai, bi − b1 − ε + a1] for i ∈ {6, 8}.
• Demonic location: step 5a. In the running example, φ_{ℓ4} := Φ[x ≥ 0, a8 − a4, b4 − b8 − ε] ∧ Φ[x ≥ 0, a7 − a4, b4 − b7 − ε].
• Angelic location: step 5b. In our example, φ_{ℓ3} = Ψ[x ≥ 0, [a3 − a5; a3 − a6], [−b3 + b5 + ε; −b3 + b6 + ε]].
• Non-negativity of non-terminating locations: step 6. In our example, we have ϕ_{ℓi} = Φ[R, −ai, bi] for i ∈ {0, 1} and ϕ_{ℓi} = Φ[x ≥ 0, −ai, bi] for 2 ≤ i ≤ 8.
• Terminating location: step 7. In our example, we have ϕ_{ℓ9} = Φ[x ≤ 0, a9, −b9 + K] ∧ Φ[x ≤ 0, −a9, b9 − K′].

Remark 6. Note that it is also possible to follow the usage of Motzkin's Transposition Theorem in [30] for angelic locations: first turn the formula into conjunctive normal form and then apply Motzkin's Theorem to each disjunctive sub-clause. Instead we present a direct application of Motzkin's Theorem.

Correctness and analysis. The construction of the algorithm ensures that there exists an LRSM iff the algorithm LRSMSynth answers yes. Also note that if LRSMSynth answers yes, then a witness LRSM can be obtained (for synthesis) from the solution of the constraints. Moreover, given a witness we obtain an upper bound UB(P) on ET(P) from Theorem 1. We now argue two aspects:
1. Linear constraints. First observe that in algorithm LRSMSynth all steps, other than the one for angelic non-determinism, generate only linear constraints. Hence it follows that in the absence of angelic non-determinism we obtain a set of linear constraints that is polynomial in the size of the input. Hence we obtain the second item of Theorem 2.
2. Quadratic constraints. Finally, observe that for angelic non-determinism the application of Motzkin's Theorem generates only quadratic constraints. Since the existential first-order theory of the reals can be decided in PSPACE [11], we get the first item of Theorem 2.

4.2 Lower bound

We establish the third item of Theorem 2 (detailed proof in the appendix).

Lemma 3. The LRAPP realizability problem for APPs with angelic non-determinism is NP-hard, even for non-probabilistic non-demonic programs with simple guards.

Proof (sketch). We show a polynomial reduction from 3-SAT to the LRAPP realizability problem. For a propositional formula ψ we construct a non-probabilistic non-demonic program Pψ whose variables correspond to the variables of ψ and whose form is as follows: the program consists of a single while loop within which each variable is set to 0 or 1 via an angelic choice. The guard of the loop checks whether ψ is satisfied by the assignment: if it is not satisfied, then the program proceeds with another iteration of the loop, otherwise it terminates. The test can be performed using a propositionally linear predicate; e.g. for the formula (x1 ∨ x2 ∨ ¬x3) ∧ (¬x2 ∨ x3 ∨ x4) the loop guard will be x1 + x2 + (1 − x3) ≤ 1/2 ∨ (1 − x2) + x3 + x4 ≤ 1/2. To each location we assign a simple invariant I which says that all program variables have values between 0 and 1. The right-hand sides of the inequalities in the loop guard are set to 1/2 in order for the reduction to work with this invariant: setting them to 1, which might seem an obvious first choice, would only work for an invariant saying that all variables have value 0 or 1, but such a condition cannot be expressed by a polynomially large propositionally linear predicate.

If ψ is not satisfiable, then the while loop obviously never terminates and hence, by Theorem 1, there is no LRSM for Pψ with respect to any invariant, including I. Otherwise there is a satisfying assignment ν for ψ which can be used to construct an LRSM η with respect to I. Intuitively, η measures the distance of the current valuation of the program variables from the satisfying assignment ν. By using a scheduler σ that consecutively switches the variables to the values specified by ν, the angel ensures that η eventually decreases to zero. Since the definition of a pre-expectation is independent of the scheduler used, we must ensure that the conditions C2 and C4 of an LRSM hold also for those valuations x that are not reachable under σ. This is achieved by multiplying the distance of each given variable xi from ν by a suitable penalty factor Pen in all locations in the loop that are positioned after the branch in which xi is set. For instance, in a location that follows the choice of x1 and x2 and precedes the choice of x3 and x4, the expression assigned by η in the above example will be of the form Pen · ((1 − x1) + x2) + (1 − x3) + x4 + d, where d is a suitable number varying with program locations. This ensures that the value of η for valuations that are not reachable under σ is very large, and thus it can be easily decreased in the following steps by switching to σ.
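As a small illustration of the guard used in this reduction (our own sketch; the list-of-signed-integers encoding of clauses is an assumption), the following Python function builds the propositionally linear loop guard for a CNF formula.

```python
def loop_guard(cnf):
    """cnf: list of clauses, each a list of ints; i means variable x_i and
    -i its negation. Returns (as text) the guard that holds iff some clause
    is falsified by the current 0/1 valuation."""
    disjuncts = []
    for clause in cnf:
        terms = [f"x{v}" if v > 0 else f"(1 - x{-v})" for v in clause]
        disjuncts.append(" + ".join(terms) + " <= 1/2")
    return " or ".join(disjuncts)

# (x1 or x2 or not x3) and (not x2 or x3 or x4)
print(loop_guard([[1, 2, -3], [-2, 3, 4]]))
# -> x1 + x2 + (1 - x3) <= 1/2 or (1 - x2) + x3 + x4 <= 1/2
```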
5. LRAPP: Quantitative Analysis

In this section we consider the quantitative questions for LRAPP. We first show a program P in LRAPP with only discrete probabilistic choices such that the expected termination time ET(P) is irrational.

n := 1;
while n ≥ 1 do
  if prob(1/2) then n := n + 1
  else n := n − 1; n := n − 1 fi
od

Figure 9. An example where ET(P) is irrational.

Example 5. Consider the example in Fig. 9. The program P in the figure represents an operation of a so-called one-counter Markov chain, a very restricted class of APPs without non-determinism and with a single integer variable. It follows from results of [10] and [19] that the termination time of P is equal to a solution of a certain system of quadratic equations, which in this concrete example evaluates to 2(5 + √5), an irrational number (for the precise computation see the appendix).

Given that the expected termination time can be irrational, we focus on the problem of its approximation. To approximate the termination time we first compute concentration bounds (see Definition 4). Concentration bounds can only be applied if there exist bounds on the martingale change in every step. Hence we define the class of bounded LRAPP.

Bounded LRAPP. An LRSM η wrt an invariant I is bounded if there exists an interval [a, b] such that the following holds: for all locations ℓ and successors ℓ′ of ℓ, and all valuations x ∈ ∪I(ℓ) and x′ ∈ ∪I(ℓ′), if (ℓ′, x′) is reachable in one step from (ℓ, x), then we have (η(ℓ′, x′) − η(ℓ, x)) ∈ [a, b]. Bounded LRAPP is the subclass of LRAPP for which there exist bounded LRSMs for some invariant. For example, for a program P, if all updates are bounded by some constants (e.g., bounded-domain variables, and each probability distribution has a bounded range), then if it belongs to LRAPP it also belongs to bounded LRAPP. Note that all examples presented in this section (as well as several in Section 6) are in bounded LRAPP.

We formally define the quantitative approximation problem for LRAPPs as follows: the input is a program P in bounded LRAPP, an invariant I for P, a bounded LRSM η with a bounding interval [a, b], and a rational number δ ≥ 0. The output is a rational number ν such that |inf_{σ∈cmp(η)} sup_π ET(P) − ν| ≤ δ, where cmp(η) is the set of all angelic schedulers σ that are compatible with η, i.e. that obey the construction for an angelic scheduler illustrated below Theorem 1. Note that cmp(η) is non-empty (see Remark 3). This condition is somewhat restrictive, as it might happen that no near-optimal angelic scheduler is compatible with a martingale η computed via the methods of Section 3. On the other hand, this definition captures the problem of extracting, from a given LRSM η, as precise information about the expected termination time as possible. Note that for programs without angelic non-determinism the problem is equivalent to approximating ET(P).

Our main results are summarized below.

Theorem 5. 1. A concentration bound B can be computed in the same complexity as for the qualitative analysis (i.e., in polynomial time with only demonic non-determinism, and in PSPACE in the general case). Moreover, the bound B is at most exponential.
2. The quantitative approximation problem can be solved in doubly exponential time for bounded LRAPP with only discrete probability choices. It cannot be solved in polynomial time unless P = PSPACE, even for programs without probability or non-determinism.

Remark 7. Note that the bound B is exponential, and our result (Lemma 5) shows that there exist deterministic programs in bounded LRAPP that terminate exactly after an exponential number of steps (i.e., an exponential bound for B is asymptotically optimal for bounded LRAPP).

5.1 Concentration Results on Termination Time

In this section we present the first approach showing how LRSMs can be used to obtain concentration results on termination time for bounded LRAPP.

5.1.1 Concentration Inequalities

We first consider Azuma's Inequality [2], which serves as a basic concentration inequality on supermartingales, and then adapt finer inequalities, such as Hoeffding's Inequality [26, 35] and Bernstein's Inequality [4, 35], to supermartingales.

Theorem 6 (Azuma's Inequality [2]). Let {Xn}_{n∈N} be a supermartingale wrt some filtration {Fn}_{n∈N} and {cn}_{n∈N} be a sequence of positive numbers. If |Xn+1 − Xn| ≤ cn for all n ∈ N, then
  P(Xn − X1 ≥ λ) ≤ exp(−λ² / (2 · Σ_{k=2}^{n} c_k²))
for all n ∈ N and λ > 0.

Intuitively, Azuma's Inequality bounds the amount of actual increase of a supermartingale at a specific time point. It can be refined by Hoeffding's Inequality. The original Hoeffding's Inequality [26] works for martingales; we show how to extend it to supermartingales.

Theorem 7 (Hoeffding's Inequality on Supermartingales). Let {Xn}_{n∈N} be a supermartingale wrt some filtration {Fn}_{n∈N} and {[an, bn]}_{n∈N} be a sequence of intervals of positive length in R. If X1 is a constant random variable and Xn+1 − Xn ∈ [an, bn] a.s. for all n ∈ N, then
  P(Xn − X1 ≥ λ) ≤ exp(−2λ² / Σ_{k=2}^{n} (b_k − a_k)²)
for all n ∈ N and λ > 0.

Remark 8. By letting the interval [a, b] be [−c, c] in Hoeffding's Inequality, we obtain Azuma's Inequality. Thus, Hoeffding's Inequality is at least as tight as Azuma's Inequality, and is strictly tighter when [a, b] is not a symmetric interval.
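As a quick numeric illustration of Remark 8 (our own example with made-up step bounds, not taken from the paper), the following snippet compares the two tail bounds for an asymmetric step interval.

```python
import math

def azuma_bound(lam, c_steps):
    # c_steps[k] bounds |X_{k+1} - X_k|
    return math.exp(-lam ** 2 / (2 * sum(c * c for c in c_steps)))

def hoeffding_bound(lam, intervals):
    # intervals[k] = (a_k, b_k) with X_{k+1} - X_k in [a_k, b_k] a.s.
    return math.exp(-2 * lam ** 2 / sum((b - a) ** 2 for a, b in intervals))

n, lam = 100, 20.0
intervals = [(-2.0, 1.0)] * n          # asymmetric step bounds, as in the 2D walks of Section 6
c_steps = [2.0] * n                    # symmetric bound |X_{k+1} - X_k| <= 2 needed by Azuma
print(azuma_bound(lam, c_steps))       # exp(-400/800)  ~ 0.61
print(hoeffding_bound(lam, intervals)) # exp(-800/900)  ~ 0.41, strictly tighter
```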
If the variation and the expected value of the differences of a supermartingale are considered, then Bernstein's Inequality yields finer concentration than Hoeffding's Inequality.

Theorem 8 (Bernstein's Inequality [4, 13]). Let {Xn}_{n∈N} be a supermartingale wrt some filtration {Fn}_{n∈N} and M ≥ 0. If X1 is constant, Xn+1 − E(Xn+1 | Fn) ≤ M a.s. and Var(Xn+1 | Fn) ≤ c² for all n ≥ 1, then
  P(Xn − X1 ≥ λ) ≤ exp(−λ² / (2(n−1)c² + 2Mλ/3))
for all n ∈ N and λ > 0.

5.1.2 LRSMs for Concentration Results

The only previous work which considers concentration results for probabilistic programs is [12], which argues that Azuma's Inequality can be used to obtain bounds on deviations of program variables. However, this technique does not give a concentration result on termination time. For example, consider that we have an additional program variable to measure the number of steps. Still, the invariant I (wrt which the LRSM is constructed) can ignore the additional variable, and thus the LRSM constructed need not provide information about termination time. We show how to overcome this conceptual difficulty. For the rest of this section, we fix a program P in bounded LRAPP and its SGS GP. We first present our result for Hoeffding's Inequality (for bounded LRAPP) and then the result for Bernstein's Inequality (for a subclass of bounded LRAPP).

For the rest of this part we fix an affine probabilistic program P and let GP = (L, (X, R), ℓ0, x0, ↦, Pr, G) be its associated SGS. We fix the filtration {Fn}_{n∈N} such that each Fn is the smallest σ-algebra on runs that makes all random variables in {θj}_{1≤j≤n}, {x_{k,j}}_{1≤k≤|X|, 1≤j≤n} measurable, where we recall that θj is the random variable representing the location at the j-th step, and x_{k,j} is the random variable representing the value of the program variable xk at the j-th step. We recall that T is the termination-time random variable for P.

Constraints for LRSMs to apply Hoeffding's Inequality. Let η be an LRSM to be synthesized for P wrt a linear invariant I. Let {Xn}_{n∈N} be the stochastic process defined by Xn := η(θn, {x_{k,n}}_k) for all natural numbers n. To apply Hoeffding's Inequality, we need to synthesize constants a, b such that Xn+1 − Xn ∈ [a, b] a.s. for all natural numbers n. We encode this condition as follows:
• Probabilistic or demonic locations. For all ℓ ∈ LP ∪ LD with successor locations ℓ1, ℓ2, the following sentence holds: ∀x ∈ ∪I(ℓ). ∀i ∈ {1, 2}. η(ℓi, x) − η(ℓ, x) ∈ [a, b].
• Deterministic locations. For all ℓ ∈ LS and all τ = (ℓ, f, ℓ′) ∈ ↦_ℓ, the following sentence holds: ∀x ∈ ∪I(ℓ) ∧ G(τ). ∀r. η(ℓ′, f(x, r)) − η(ℓ, x) ∈ [a, b].
• Angelic locations. For all ℓ ∈ LA with successor locations ℓ1, ℓ2, the following condition holds: ∀x ∈ ∪I(ℓ). ∃i ∈ {1, 2}. a ≤ η(ℓi, x) − η(ℓ, x) ≤ −ε ≤ b.
• We require that −ε ∈ [a, b]. This is not restrictive since −ε reflects the supermartingale difference.

We have that if the previous conditions hold, then a, b are valid constants. Note that all the conditions above can be transformed into an existential formula over the parameters of η and a, b by Farkas' Lemma or Motzkin's Transposition Theorem, similar to the transformations in LRSMSynth in Section 4.1. Moreover, for bounded LRAPP, by definition there exist valid constants a and b.

Key supermartingale construction. We now show, given the LRSM η and the constants a, b synthesized wrt the conditions above, how to obtain concentration results on termination time. Define the stochastic process {Yn}_{n∈N} by
  Yn = Xn + ε · (min{T, n} − 1).
The following proposition shows that {Yn}_{n∈N} is a supermartingale and satisfies the requirements of Hoeffding's Inequality.

Proposition 2. {Yn}_{n∈N} is a supermartingale and Yn+1 − Yn ∈ [a + ε, b + ε] almost surely for all n ∈ N.

LRSM and supermartingale to concentration result. We now show how to use the LRSM and the supermartingale Yn to obtain the concentration result. Let W0 := Y1 = (a^{ℓ^in_P})^T x0 + b^{ℓ^in_P}. Fix an angelic strategy that fulfills the supermartingale difference and bounded change for the LRSM, and fix any demonic strategy. By Hoeffding's Inequality, for all λ > 0 we have P(Yn − W0 ≥ λ) ≤ exp(−2λ² / ((n−1)(b−a)²)). Note that T > n iff Xn ≥ 0 by conditions C2 and C3 of the LRSM. Let α = ε(n−1) − W0 and α̂ = ε(min{n, T} − 1) − W0. Note that with the conjunct T > n we have that α and α̂ coincide. Thus, for P(T > n) = P(Xn ≥ 0 ∧ T > n) we have
  P(Xn ≥ 0 ∧ T > n) = P((Xn + α ≥ α) ∧ (T > n))
                    = P((Xn + α̂ ≥ α) ∧ (T > n))
                    ≤ P(Xn + α̂ ≥ α)
                    = P(Yn − Y1 ≥ ε(n−1) − W0)
                    ≤ exp(−2(ε(n−1) − W0)² / ((n−1)(b−a)²))
for all n > W0 + 1. The first equality is obtained by simply adding α on both sides, and the second equality uses that, because of the conjunct T > n, we have min{n, T} = n, which ensures α = α̂. The first inequality is obtained by simply dropping the conjunct T > n. The following equality is by definition, and the final inequality is Hoeffding's Inequality. Note that in the exponential function the numerator is quadratic in n and the denominator is linear in n, and hence the overall function is exponentially decreasing in n.

Computational results for the concentration inequality. We have the following results, which establish the first item of Theorem 5:
• Computation. Through the synthesis of the LRSM η and a, b, a concentration bound B0 = W0 + 2 can be computed in PSPACE in general and in PTIME without angelic non-determinism (similar to the LRSMSynth algorithm).
• Optimization. In order to obtain a better concentration bound B, a binary search can be performed on the interval [0, B0] to find an optimal B ∈ [0, B0] such that ε · (B − 2) ≥ W0 is consistent with the constraints for the synthesis of η and a, b.
• Bound on B. Note that since B is computed in polynomial space, it follows that B is at most exponential.

Remark 9 (Upper bound on Thr(P, x)). We now show that our technique, along with the concentration result, also yields an upper bound for Thr(P, x), as follows. To obtain an upper bound for a given x, we first search for a large number M0 such that M0(b − a) ≤ ε(x − 1) − W0 and ε(x − 2) ≥ W0 are not consistent with the conditions for η and a, b; then we perform a binary search for an M ∈ [0, M0] such that the (linear) conditions M(b − a) ≤ ε(x−1) − W0 and ε(x−2) ≥ W0 are consistent with the conditions for η and a, b. Then Thr(P, x) ≤ e^{−2M²/(x−1)}. Note that we already provide an upper bound UB(P) for ET(P) (recall Theorem 1 and Remark 6), and the upper bound UB(P) holds for LRAPP, not only for bounded LRAPP.
Applying Bernstein's Inequality. To apply Bernstein's Inequality, the variance of the supermartingale difference needs to be evaluated, which might not exist in general for LRAPPs. We consider a subclass of LRAPPs, namely incremental LRAPPs.

Definition 9. A program P in LRAPP is incremental if all variable updates are of the form x := x + g(r), where g is some linear function of the random variables r. An LRSM is incremental if it has the same coefficients for each program variable at every location, i.e. a^ℓ = a^{ℓ′} for all ℓ, ℓ′ ∈ L.

Remark 10. The incremental condition for LRSMs can be encoded as a linear assertion.

Result. We show that for incremental LRAPP, Bernstein's Inequality can be applied to obtain concentration results on termination time, using the same technique we developed for applying Hoeffding's Inequality. The technical details are presented in the appendix.

5.2 Complexity of quantitative approximation

We now show the second item of Theorem 5. The doubly exponential upper bound is obtained by computing a concentration bound B via the aforementioned methods and unfolding the program up to O(B) steps (details in the appendix).

Lemma 4. The quantitative approximation problem can be solved in doubly exponential time for programs with only discrete probability choices.

Example               Time          B        UB(P)    Init. Config.
Int RW 1D             ≤ 0.02 sec.   47.00    46.00    5
                                    84.50    83.50    10
                                    122.00   121.00   15
                                    159.50   158.50   20
                                    197.00   196.00   25
Real RW 1D            ≤ 0.01 sec.   92.00    91.00    5
                                    167.00   166.00   10
                                    242.00   241.00   15
                                    317.00   316.00   20
                                    392.00   391.00   25
Adv RW 1D             ≤ 0.01 sec.   41.00    40.00    5
                                    74.33    73.33    10
                                    107.67   106.67   15
                                    141.00   140.00   20
                                    174.33   173.33   25
Adv RW 2D             ≤ 0.02 sec.   -        122.00   (5,10)
                                    -        152.00   (10,10)
                                    -        182.00   (15,10)
                                    -        212.00   (20,10)
                                    -        242.00   (25,10)
Adv RW 2D (Variant)   ≤ 0.02 sec.   162.00   161.00   (5,0)
                                    262.00   261.00   (10,0)
                                    362.00   361.00   (15,0)
                                    462.00   461.00   (20,0)
                                    562.00   561.00   (25,0)

Table 2. Experimental results: the first column is the example name, the second column is the time to solve the problem, and the following columns are our concentration bound and the upper bound on termination time for a given initial configuration.
For the PSPACE lower bound we use the following lemma.

Lemma 5. For every C ∈ N the following problem is PSPACE-hard: Given a program P without probability or non-determinism, with simple guards, and belonging to bounded LRAPP, and a number N ∈ N such that either ET(P) ≤ N or ET(P) ≥ N · C, decide which of these two alternatives holds.

Proof (Sketch). We first sketch the proof of item 1. Fix a number C. We show a polynomial reduction from the following problem, which is PSPACE-hard for a suitable constant K: Given a deterministic Turing machine (DTM) T such that on every input of length n the machine T uses at most K · n tape cells, and given a word w over the input alphabet of T, decide whether T accepts w.

For a given DTM T and word w we construct a program P that emulates, through updates of its variables, the computation of T on w. This is possible due to the bounded space complexity of T. The program P consists of a single while-loop whose every iteration corresponds to a single computational step of T. The loop is guarded by an expression m ≥ 1 ∧ r ≥ 1, where m, r are special variables such that r is initialized to 1 and m to C · J, where J is a number such that if T accepts w, it does so in at most J steps (J can be computed in polynomial time, again due to the bounded space complexity of T). The variable m is decremented in every iteration of the loop, which guarantees eventual termination. If it happens during the loop's execution that T (simulated by P) enters an accepting state, then r is immediately set to zero, making P terminate immediately after the current iteration of the loop. Now P can be constructed in such a way that each iteration of the loop takes the same amount W of time. If T does not accept w, then P terminates in exactly C · J · W steps. On the other hand, if T does accept w, then the program terminates in at most J · W steps. Putting N = J · W, we get the proof of the first item.

Remark 11. Both in the proof of Lemma 3 and in that of Lemma 5 we have only variables whose change in each step is bounded by 1. Hence both hardness proofs apply to bounded LRAPP.

Corollary 1. The quantitative approximation problem cannot be solved in polynomial time unless P = PSPACE. Moreover, ET(P) cannot be approximated up to any fixed additive or multiplicative error in polynomial time unless P = PSPACE.

6. Experimental Results

In this section we present our experimental results. First observe that one of the key features of our algorithm LRSMSynth is that it uses only operations that are standard (such as linear invariant generation and applying Farkas' Lemma) and have been extensively used in programming languages as well as in several tools. Thus the efficiency of our approach is similar to that of existing methods based on such operations, e.g., [12]. The purpose of this section is to demonstrate the effectiveness of our approach, i.e., to show that our approach can answer questions for which no previous automated methods exist. In this respect we show that our approach can (i) handle probabilities and demonic non-determinism together, and (ii) provide useful answers for quantitative questions, and the existing tools handle neither of these. By useful answers we mean that the concentration bound B and the upper bound UB(P) we compute provide reasonable answers. To demonstrate the effectiveness we consider several classic examples and show how our method provides an effective automated approach to reason about them. Our examples are (i) random walk in one dimension; (ii) adversarial random walk in one dimension; and (iii) adversarial random walk in two dimensions.

Random walk (RW) in one dimension (1D). We consider two variants of random walk (RW) in one dimension (1D). Consider a RW on the positive reals such that at each time step the walk moves left (decreases the value) or right (increases the value). The probability to move left is 0.7 and the probability to move right is 0.3. In the first variant, namely the integer-valued RW, every change in the value is by 1; in the second variant, namely the real-valued RW, every change is according to a uniform distribution on [0, 1]. The walk starts at value n, and terminates if value zero or less is reached. The random walk terminates almost-surely; however, similarly to Example 5, even in the integer-valued case the expected termination time is irrational.
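As a sanity check of the kind of quantity being bounded here, the following small Monte Carlo sketch (our own illustration, not part of the paper's toolchain) estimates the expected termination time of the integer-valued 1D walk; the estimates can be compared against the linear upper bounds UB(P) reported in Table 2.

```python
import random

def rw_termination_time(n, p_left=0.7, max_steps=10**6):
    """Integer-valued 1D walk: from value n, step -1 w.p. 0.7 and +1 w.p. 0.3;
    terminate when the value drops to 0 or below."""
    x, steps = n, 0
    while x > 0 and steps < max_steps:
        x += -1 if random.random() < p_left else 1
        steps += 1
    return steps

for n in (5, 10, 15, 20, 25):
    runs = [rw_termination_time(n) for _ in range(2000)]
    print(n, sum(runs) / len(runs))   # roughly n / 0.4 = 2.5 * n, below the reported UB(P)
```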
Adversarial RW in 1D. We consider an adversarial RW in 1D that models a discrete queuing system which perpetually processes tasks incoming from its environment at a known average rate. In every iteration there are r new incoming tasks, where r is a random variable taking value 0 with probability 1/2, value 1 with probability 1/4, and value 2 with probability 1/4. Then a task at the head of the queue is processed in a way determined by the type of the task, which is not known a priori and thus is assumed to be selected demonically. If an urgent task is encountered, the system solves the task rapidly, in one step, but there is a 1/8 chance that this rapid process ends in a failure that produces a new task to be handled. A standard task is processed at a more leisurely pace, in two steps, but is guaranteed to succeed. We are interested in whether, for any initial number of tasks in the queue, the program eventually terminates (queue stability) and in bounds on the expected termination time (efficiency of task processing).

Adversarial RW in 2D. We consider two variants of adversarial RW in 2D.
1. Demonic RW in 2D. We consider a RW in two dimensions, where at every time step either the x-coordinate or the y-coordinate changes, according to a uniform distribution on [−2, 1]. However, at each step the adversary decides whether it is the x-coordinate or the y-coordinate. The RW starts at a point (n1, n2), and terminates if either the x-axis or the y-axis is reached.
2. Variant RW in 2D. We consider a variant of RW in 2D as follows. There are two choices: in the first (resp. second) choice, (i) with probability 0.7 the x-coordinate (resp. y-coordinate) is incremented by a uniform distribution on [−2, 1] (resp. [2, −1]), and (ii) with probability 0.3 the y-coordinate (resp. x-coordinate) is incremented by [−2, 1] (resp. [2, −1]). In other words, in the first choice the probability to move down or left is higher than the probability to move up or right, and conversely in the second choice. At every step the demonic choice decides among the two choices. The walk starts at (n1, n2) with n1 > n2, and terminates if the x-coordinate value is at most the y-coordinate value (i.e., it terminates for values (n, n′) s.t. n ≤ n′).

Experimental results. Our experimental results are shown in Table 2 and Figure 10. Note that all examples considered, other than the demonic RW in 2D, are in bounded LRAPP (with no non-determinism or demonic non-determinism), for which all our results are polynomial time. For the demonic RW in 2D, which is not a bounded LRAPP (for an explanation why this is not a bounded LRAPP see Section F of the appendix), concentration results cannot be obtained; however, we obtain the upper bound UB(P) from our results, as the example belongs to LRAPP. Our experimental results show that the concentration bound and the upper bound on expected termination time (recall UB(P) from Remark 6) we compute are linear functions in all cases (see Fig. 10). This shows that our automated method can effectively compute, for some of the most classical random walks studied in probability theory, concentration bounds which are asymptotically tight (the expected number of steps to decrease the value of a standard asymmetric random walk by n is equal to n times the expected number of steps needed to decrease it by 1, i.e. it is linear in n). For our experimental results, the linear constraints generated by LRSMSynth were solved by CPLEX [1]. The programs with the linear invariants are presented in Section F of the appendix.

Figure 10. The plot of UB(P) vs the initial location.

Significance of our result. We now highlight the significance of our approach. The analysis of RW in 1D (even without an adversary) is a classic problem in probability theory, and the expected termination time can be irrational and involve solving complicated equations. Instead, our experimental results show that using our approach (which runs in polynomial time) we can compute an upper bound on the expected time that is a linear function. This shows that we provide a practical and computational approach for quantitative reasoning about probabilistic processes. Moreover, our approach also extends to more complicated probabilistic processes (such as RW with an adversary, as well as in 2D), and computes upper bounds which are linear, whereas precise mathematical analysis of such processes is extremely complicated.

7. Related Work

We have already discussed several related works, such as [7, 12, 22, 36, 37], in Section 1 (Previous results). We discuss other relevant works here. Termination for concurrent probabilistic programs under fairness was considered in [49]. A sound and complete characterization of almost-sure termination for countable state spaces was given in [25]. A sound and complete method for proving termination of finite-state programs was given in [20]. Termination analysis of non-probabilistic programs has received a lot of attention over the last decade as well [8, 9, 14, 15, 34, 43, 50]. The works most closely related to ours are [7, 12, 22], which consider termination of probabilistic programs via ranking Lyapunov functions and supermartingales. However, most of the previous works focus on proving a.s. termination and finite termination, and discuss soundness and completeness. In contrast, in this work we consider simple (linear) ranking supermartingales, and study the related algorithmic and complexity issues. Moreover, we present answers to the quantitative termination questions, and also consider the two types of non-determinism together, which has not been considered before.

8. Conclusion and Future Work

In this work we considered the basic algorithmic problems related to qualitative and quantitative questions for termination of probabilistic programs. Since our focus was algorithmic, we considered simple (linear) ranking supermartingales and established several complexity results. The most prominent are that for programs with demonic non-determinism the qualitative problems can be solved in polynomial time, whereas for angelic non-determinism with no probability the qualitative problems are NP-hard. We also present PSPACE-hardness results for the quantitative problems, and present the first method, through linear ranking supermartingales, to obtain concentration results on termination time. There are several directions for future work. The first is to consider special cases of non-linear ranking supermartingales and study whether efficient algorithmic approaches can be developed for them. The second interesting direction would be to use the methods of martingale theory to infer deeper insights into the behaviour of probabilistic programs, e.g. via synthesizing assertions about the distribution of program variables ("stochastic invariants").
References
[1] IBM ILOG CPLEX Optimizer. http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/, 2010.
[2] K. Azuma. Weighted sums of certain dependent random variables.
Tohoku Mathematical Journal, 19(3):357–367, 1967.
2015, Mumbai, India, January 15-17, 2015, pages 489–501. ACM,
2015. ISBN 978-1-4503-3300-9. .
[3] C. Baier and J.-P. Katoen. Principles of model checking. MIT Press,
2008. ISBN 978-0-262-02649-9.
[23] R. W. Floyd. Assigning meanings to programs. Mathematical Aspects
of Computer Science, 19:19–33, 1967.
[4] G. Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57
(297):33–45, 1962.
[24] F. G. Foster. On the stochastic matrices associated with certain queuing processes. The Annals of Mathematical Statistics, 24(3):pp. 355–
360, 1953.
[5] P. Billingsley. Probability and Measure. Wiley, 3rd edition, 1995.
[25] S. Hart and M. Sharir. Concurrent probabilistic programs, or: How to
schedule if you must. SIAM J. Comput., 14(4):991–1012, 1985.
[6] A. Bockmayr and V. Weispfenning. Solving numerical constraints.
In J. A. Robinson and A. Voronkov, editors, Handbook of Automated
Reasoning (in 2 volumes), pages 751–842. Elsevier and MIT Press,
2001. ISBN 0-444-50813-9.
[26] W. Hoeffding. Probability inequalities for sums of bounded random
variables. Journal of the American Statistical Association, 58(301):
13–30, 1963.
[7] O. Bournez and F. Garnier. Proving positive almost-sure termination.
In RTA, pages 323–337, 2005.
[27] H. Howard. Dynamic Programming and Markov Processes. MIT
Press, 1960.
[8] A. R. Bradley, Z. Manna, and H. B. Sipma. The polyranking principle.
In ICALP, pages 1349–1361, 2005.
[28] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement
learning: A survey. Journal of Artificial Intelligence Research, 4:237–
285, 1996.
[9] A. R. Bradley, Z. Manna, and H. B. Sipma. Linear ranking with
reachability. In K. Etessami and S. K. Rajamani, editors, Computer
Aided Verification, 17th International Conference, CAV 2005, Edinburgh, Scotland, UK, July 6-10, 2005, Proceedings, volume 3576 of
Lecture Notes in Computer Science, pages 491–504. Springer, 2005.
ISBN 3-540-27231-3. .
[29] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial intelligence,
101(1):99–134, 1998.
[30] J. Katoen, A. McIver, L. Meinicke, and C. C. Morgan. Linear-invariant
generation for probabilistic programs: - automated support for proofbased methods. In SAS, volume LNCS 6337, Springer, pages 390–406,
2010.
[10] T. Brázdil, J. Esparza, S. Kiefer, and A. Kučera. Analyzing Probabilistic Pushdown Automata. FMSD, 43(2):124–163, 2012.
[11] J. Canny. Some algebraic and geometric computations in pspace. In
Proceedings of the twentieth annual ACM symposium on Theory of
computing, pages 460–467. ACM, 1988.
[31] J. Kemeny, J. Snell, and A. Knapp. Denumerable Markov Chains. D.
Van Nostrand Company, 1966.
[32] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas. Temporal-logicbased reactive mission and motion planning. IEEE Transactions on
Robotics, 25(6):1370–1381, 2009.
[12] A. Chakarov and S. Sankaranarayanan. Probabilistic program analysis
with martingales. In N. Sharygina and H. Veith, editors, Computer
Aided Verification - 25th International Conference, CAV 2013, Saint
Petersburg, Russia, July 13-19, 2013. Proceedings, volume 8044 of
Lecture Notes in Computer Science, pages 511–526. Springer, 2013.
ISBN 978-3-642-39798-1. .
[33] M. Z. Kwiatkowska, G. Norman, and D. Parker. Prism 4.0: Verification
of probabilistic real-time systems. In CAV, LNCS 6806, pages 585–
591, 2011.
[13] F. Chung and L. Lu. Concentration inequalities and martingale inequalities: A survey. Internet Mathematics, 3:79–127, 2011.
[34] C. S. Lee, N. D. Jones, and A. M. Ben-Amram. The size-change
principle for program termination. In POPL, pages 81–92, 2001.
[14] M. Colón and H. Sipma. Synthesis of linear ranking functions. In
T. Margaria and W. Yi, editors, Tools and Algorithms for the Construction and Analysis of Systems, 7th International Conference, TACAS
2001 Held as Part of the Joint European Conferences on Theory and
Practice of Software, ETAPS 2001 Genova, Italy, April 2-6, 2001, Proceedings, volume 2031 of Lecture Notes in Computer Science, pages
67–81. Springer, 2001. ISBN 3-540-41865-2. .
[35] C. McDiarmid. Concentration. In Probabilistic Methods for Algorithmic Discrete Mathematics, pages 195–248. 1998.
[36] A. McIver and C. Morgan. Developing and reasoning about probabilistic programs in pGCL. In PSSE, pages 123–155, 2004.
[37] A. McIver and C. Morgan. Abstraction, Refinement and Proof for
Probabilistic Systems. Monographs in Computer Science. Springer,
2005.
[15] B. Cook, A. See, and F. Zuleger. Ramsey vs. lexicographic termination
proving. In TACAS, pages 47–61, 2013.
[38] D. Monniaux. An abstract analysis of the probabilistic termination
of programs. In P. Cousot, editor, Static Analysis, 8th International
Symposium, SAS 2001, Paris, France, July 16-18, 2001, Proceedings,
volume 2126 of Lecture Notes in Computer Science, pages 111–126.
Springer, 2001. ISBN 3-540-42314-1. . URL http://dx.doi.
org/10.1007/3-540-47764-0_7.
[16] P. Cousot and R. Cousot. Abstract interpretation: A unified lattice
model for static analysis of programs by construction or approximation of fixpoints. In R. M. Graham, M. A. Harrison, and R. Sethi, editors, Conference Record of the Fourth ACM Symposium on Principles
of Programming Languages, Los Angeles, California, USA, January
1977, pages 238–252. ACM, 1977. .
[39] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge
University Press, New York, NY, USA, 1995. ISBN 0-521-47465-5,
9780521474658.
[17] D. Dubhashi and A. Panconesi. Concentration of Measure for
the Analysis of Randomized Algorithms. Cambridge University
Press, New York, NY, USA, 1st edition, 2009. ISBN 0521884276,
9780521884273.
[40] T. S. Motzkin. Beiträge zur Theorie der linearen Ungleichungen
(German). PhD thesis, Basel, Jerusalem, 1936.
[41] M. J. Osborne and A. Rubinstein. A course in game theory. 1994.
[18] R. Durrett. Probability: Theory and Examples (Second Edition).
Duxbury Press, 1996.
[42] A. Paz. Introduction to probabilistic automata (Computer science and
applied mathematics). Academic Press, 1971.
[19] J. Esparza, A. Kučera, and R. Mayr. Quantitative Analysis of Probabilistic Pushdown Automata: Expectations and Variances. In LICS,
pages 117–126. IEEE, 2005.
[43] A. Podelski and A. Rybalchenko. A complete method for the synthesis
of linear ranking functions. In B. Steffen and G. Levi, editors, Verification, Model Checking, and Abstract Interpretation, 5th International
Conference, VMCAI 2004, Venice, January 11-13, 2004, Proceedings,
volume 2937 of Lecture Notes in Computer Science, pages 239–251.
Springer, 2004. ISBN 3-540-20803-8. .
[20] J. Esparza, A. Gaiser, and S. Kiefer. Proving termination of probabilistic programs using patterns. In CAV, pages 123–138, 2012.
[21] J. Farkas. A fourier-féle mechanikai elv alkalmazásai (Hungarian).
Mathematikaiés Természettudományi Értesitö, 12:457–472, 1894.
[44] M. Rabin. Probabilistic automata. Information and Control, 6:230–
245, 1963.
[22] L. M. F. Fioriti and H. Hermanns. Probabilistic termination: Soundness, completeness, and compositionality. In S. K. Rajamani and
D. Walker, editors, Proceedings of the 42nd Annual ACM SIGPLANSIGACT Symposium on Principles of Programming Languages, POPL
[45] J. S. Rosenthal. A First Look at Rigorous Probability Theory. World
Scientific Publishing Company, 2nd edition, 2006.
15
2015/10/30
[46] S. Sankaranarayanan, A. Chakarov, and S. Gulwani. Static analysis
for probabilistic programs: inferring whole program properties from
finitely many paths. In PLDI, pages 447–458, 2013.
[47] A. Schrijver. Theory of Linear and Integer Programming. WileyInterscience series in discrete mathematics and optimization. Wiley,
1999. ISBN 978-0-471-98232-6.
[48] A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency.
Springer, 2003. ISBN 978-3-540-44389-6.
[49] M. Sharir, A. Pnueli, and S. Hart. Verification of probabilistic programs. SIAM J. Comput., 13(2):292–314, 1984.
[50] K. Sohn and A. V. Gelder. Termination detection in logic programs
using argument sizes. In D. J. Rosenkrantz, editor, Proceedings of the
Tenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of
Database Systems, May 29-31, 1991, Denver, Colorado, USA, pages
216–226. ACM Press, 1991. ISBN 0-89791-430-9. .
[51] A. Solar-Lezama, R. M. Rabbah, R. Bodı́k, and K. Ebcioglu. Programming by sketching for bit-streaming programs. In PLDI, pages
281–294, 2005.
[52] S. A. Vavasis. Approximation algorithms for indefinite quadratic
programming. Mathematical Programming, 57:279–311, 1992.
[53] D. Williams. Probability with Martingales. Cambridge University
Press, 1991.
16
2015/10/30
A. Proofs for Section 3

Proposition 1. Let {X_n}_{n∈N} be an RSM wrt filtration {F_n}_{n∈N} and constants K, ε (cf. Definition 5). Let Z be the random variable defined as Z := min{n ∈ N | X_n < 0}, which denotes the first time n that the RSM X_n drops below 0. Then P(Z < ∞) = 1 and E(Z) ≤ (E(X_1) − K)/ε.

Proof. The proof is similar to [22, Lemma 5.5]. We first prove by induction on n ≥ 1 that

E(X_n) ≤ E(X_1) − ε · Σ_{k=1}^{n−1} P(X_k ≥ 0).

The base step n = 1 is clear. The inductive step can be carried out as follows:

E(X_{n+1}) = E(E(X_{n+1} | F_n))
           ≤ E(X_n) − ε · E(1_{X_n ≥ 0})
           ≤ E(X_1) − ε · Σ_{k=1}^{n−1} P(X_k ≥ 0) − ε · P(X_n ≥ 0)
           = E(X_1) − ε · Σ_{k=1}^{n} P(X_k ≥ 0).

The first equality is the total expectation law for conditional expectation; the first inequality is obtained from the fact that X_n is a supermartingale; the second inequality is obtained from the inductive hypothesis; and the final equality is simply rearranging terms. Since X_n ≥ K almost surely for all n ∈ N, we have that E(X_n) ≥ K for all n. Hence from the above we have that

ε · Σ_{k=1}^{n} P(X_k ≥ 0) ≤ E(X_1) − E(X_{n+1}) ≤ E(X_1) − K.

Hence the series Σ_{k=1}^{∞} P(X_k ≥ 0) converges and

Σ_{k=1}^{∞} P(X_k ≥ 0) ≤ (E(X_1) − K)/ε.

It follows from Z ≥ k ⇒ X_k ≥ 0 that

• P(Z = ∞) = lim_{k→∞} P(Z ≥ k) = 0, and
• E(Z) ≤ Σ_{k=1}^{∞} P(X_k ≥ 0) ≤ (E(X_1) − K)/ε.

The desired result follows.

Theorem 1. If there exists an LRSM η wrt I for G_P, then

1. P is a.s. terminating; and
2. the expected termination time of P is at most (η(ℓ_0, x_0) − K′)/ε and hence finite, i.e., ET(P) < ∞.

To prove Theorem 1, we need the following lemma, which specifies the relationship between pre-expectation and conditional expectation.

Lemma 6. Let η be an LRSM and σ be the angelic scheduler whose decisions optimize the value of η at the last configuration of any finite path, represented by

σ(ℓ, x) = argmin_{(ℓ,f,ℓ′) ∈ ↦_ℓ} η(ℓ′, x)

for all end configurations (ℓ, x) such that ℓ ∈ L_A and x ∈ R^{|X|}. Let π be any demonic scheduler. Let the stochastic process {X_n}_{n∈N} be defined such that

X_n := η(θ_n, {x_{k,n}}_{1≤k≤|X|}).

Then for all n ∈ N,

E^{σ,π}(X_{n+1} | F_n) ≤ preη(θ_n, {x_{k,n}}_{1≤k≤|X|}).

Proof. For all n ∈ N, from the program syntax we have

X_{n+1} = 1_{θ_n = ℓ^out_P} · X_n + Y_P + Y_S + Y_A + Y_D,

where the terms are described below:

Y_P := Σ_{ℓ ∈ L_P} 1_{θ_n = ℓ} · Σ_{i ∈ {0,1}} 1_{B_ℓ = i} · η(ℓ_{B_ℓ = i}, {x_{k,n}}_k),

where each random variable B_ℓ is the Bernoulli random variable for the decision of the probabilistic branch and ℓ_{B_ℓ=0}, ℓ_{B_ℓ=1} are the corresponding successor locations of ℓ. Note that all B_ℓ's and r's are independent of F_n. In other words, Y_P describes the semantics of probabilistic locations.

Y_S := Σ_{ℓ ∈ L_S} Σ_{(ℓ,f,ℓ′) ∈ ↦_ℓ} [1_{θ_n = ℓ ∧ {x_{k,n}}_k |= G(τ)} · η(ℓ′, f({x_{k,n}}_k, r))]

describes the semantics of deterministic locations.

Y_A := Σ_{ℓ ∈ L_A} 1_{θ_n = ℓ} · η(σ(ℓ, {x_{k,n}}_k), {x_{k,n}}_k)

describes the semantics of angelic locations, where σ(ℓ, {x_{k,n}}_k) here denotes the target location of the transition chosen by the scheduler; and similarly, for demonic locations, by replacing σ by π we have

Y_D := Σ_{ℓ ∈ L_D} 1_{θ_n = ℓ} · η(π(ρ), {x_{k,n}}_k),

where ρ is the finite path up to n steps. Then from properties of conditional expectation [53, Page 88], one obtains

E^{σ,π}(X_{n+1} | F_n) = 1_{θ_n = ℓ^out_P} · X_n + Y′_P + Y′_S + Y_A + Y_D,

because 1_{θ_n = ℓ^out_P} · X_n, Y_A, Y_D are measurable in F_n, so that E^{σ,π}(1_{θ_n = ℓ^out_P} · X_n | F_n) = 1_{θ_n = ℓ^out_P} · X_n (and similarly for Y_A and Y_D); and for Y_P and Y_S we need their expectations Y′_P and Y′_S defined below. We have

Y′_P := Σ_{ℓ ∈ L_P} 1_{θ_n = ℓ} · Σ_{i ∈ {0,1}} P(B_ℓ = i) · η(ℓ_{B_ℓ = i}, {x_{k,n}}_k),
Y′_S := Σ_{ℓ ∈ L_S} Σ_{(ℓ,f,ℓ′) ∈ ↦_ℓ} [1_{θ_n = ℓ ∧ {x_{k,n}}_k |= G(τ)} · η(ℓ′, E_R(f({x_{k,n}}_k, r)))].

Note that when θ_n ∈ L_S ∪ L_P ∪ L_A, by definition we have preη(θ_n, {x_{k,n}}_k) = 1_{θ_n = ℓ^out_P} · X_n + Y′_P + Y′_S + Y_A; and when θ_n ∈ L_D, by definition we have Y_D ≤ preη(θ_n, {x_{k,n}}_k). Hence the result follows.

Proof of Theorem 1. We establish both points. Let η be an LRSM wrt I for G_P. Define the angelic strategy σ, which solely depends on the end configuration of a finite path, as follows:

σ(ℓ, x) = argmin_{(ℓ,f,ℓ′) ∈ ↦_ℓ} η(ℓ′, x)

for all ℓ ∈ L_A and x ∈ R^{|X|}. Fix any demonic strategy π. Let the stochastic process {X_n}_{n∈N} be defined by X_n := η(θ_n, {x_{k,n}}_k). For all n ∈ N, from Lemma 6, we have

E^{σ,π}(X_{n+1} | F_n) ≤ preη(θ_n, {x_{k,n}}_k).

By condition C4,

preη(θ_n, {x_{k,n}}_k) ≤ η(θ_n, {x_{k,n}}_k) − ε · 1_{θ_n ≠ ℓ^out_P}

for some ε ≥ 1. Moreover, from C2, C3 and the fact that I is a linear invariant, it holds almost surely that θ_n ≠ ℓ^out_P iff X_n ≥ 0. Thus, we have

E^{σ,π}(X_{n+1} | F_n) ≤ X_n − ε · 1_{X_n ≥ 0}.

It follows that {X_n}_{n∈N} is an RSM. Hence by Proposition 1, it follows that G_P terminates almost surely and

E^{σ,π}(T) ≤ (η(ℓ_0, x_0) − K′)/ε.

The desired result follows.

B. Proofs for Section 4.1

Lemma 1. Let A ∈ R^{m×n}, B ∈ R^{k×n}, b ∈ R^m, d ∈ R^k, c ∈ R^n and d ∈ R. Let Z_< = {x | Ax ≤ b ∧ Bx < d} and Z_≤ = {x | Ax ≤ b ∧ Bx ≤ d}. Assume that Z_< ≠ ∅. Then for all closed subsets H ⊆ R^{|X|} we have that Z_< ⊆ H implies Z_≤ ⊆ H.

Proof. Let z be any point such that Az ≤ b, Bz ≤ d and Bz ≮ d. Let y be a point such that Ay ≤ b and By < d. Then for all λ ∈ (0, 1),

λ · y + (1 − λ) · z ∈ {x | Ax ≤ b ∧ Bx < d}.

Each such point therefore lies in Z_< ⊆ H, and these points converge to z as λ → 0. Since H is closed, we obtain that z ∈ H.

Lemma 2. Let A ∈ R^{m×n}, B ∈ R^{k×n}, b ∈ R^m and d ∈ R^k. Then all of the following three problems can be decided in polynomial time in the binary encoding of A, B, b, d:

1. {x | Ax ≤ b} =? ∅;
2. {x | Ax ≤ b ∧ Bx < d} =? ∅;
3. {x | Bx < d} =? ∅.

Proof. The polynomial-time decidability of the first problem is well-known (cf. [47]). The second problem can be solved by checking whether the optimal value of the following linear program is greater than zero:

max z subject to Ax ≤ b; Bx + z · 1 ≤ d; z ≥ 0.

The proof for the third problem is similar to the second one.

C. Proofs for Section 4.2

Lemma 3. The LRA PP realizability problem for APPs with angelic non-determinism is NP-hard, even for non-probabilistic non-demonic programs with simple guards.

Proof. We show a polynomial reduction from 3-SAT to the LRA PP realizability problem.

Let ψ be any propositional formula in conjunctive normal form with three literals per clause. Let C_1, . . . , C_m be all the clauses and x_1, . . . , x_n all the variables of ψ. For every 1 ≤ j ≤ m we write C_j ≡ ξ_{j,1} ∨ ξ_{j,2} ∨ ξ_{j,3}, where each ξ_{j,k} is either a positive or a negative literal, i.e. a variable or its negation.

We construct a program P_ψ as follows: the program variables of P_ψ correspond to the variables in ψ, and the program has no random variables. All the program variables are initially set to 1. To construct the body of the program, we define, for each literal ξ of ψ involving a variable x_i, a linear expression g_ξ which is equal either to x_i or 1 − x_i, depending on whether ξ is a positive literal or not, respectively. The body of the program then has the form

while ϕ do Q_1; · · · ; Q_n od

where

• ϕ is a finite disjunction of linear constraints of the form ⋁_{j=1}^{m} [g_{ξ_{j,1}} + g_{ξ_{j,2}} + g_{ξ_{j,3}} ≤ 1/2];
• Q_i has the form if angel then x_i := 1 else x_i := 0 fi, for all 1 ≤ i ≤ n.

Clearly, given a formula ψ the program P_ψ can be constructed in time polynomial in the size of ψ, and moreover, P_ψ is non-probabilistic and non-demonic.

Note that each valuation x, reachable from the initial configuration of G_{P_ψ}, can be viewed as a bit vector, and hence we can identify these reachable valuations with truth assignments to ψ. Also note that for such a reachable valuation x it holds that g_{ξ_{j,k}}(x) = 1 if and only if x, viewed as an assignment, satisfies the formula ξ_{j,k}, and g_{ξ_{j,k}}(x) = 0 otherwise.

Finally, let I be an invariant assigning to every location the linear assertion ⋀_{i=1}^{n} [0 ≤ x_i ∧ x_i ≤ 1]. Note that the invariant I is very simple and it is plausible that it can be easily discovered without employing any significant insights into the structure of P_ψ.

We claim that P_ψ admits a linear ranking supermartingale with respect to I if and only if there exists a satisfying assignment for ψ.

First let us assume that ψ is not satisfiable. Then, as noted above, for all reachable valuations x there is 1 ≤ j ≤ m such that for all ξ_{j,k}, 1 ≤ k ≤ 3, it holds that g_{ξ_{j,k}}(x) = 0. It follows that ϕ holds in every reachable valuation and hence the program never terminates. From Theorem 1 it follows that P_ψ does not admit an LRSM for any invariant I.

Let us now assume that there exists an assignment ν : {x_1, . . . , x_n} → {0, 1} satisfying ψ. We use ν to construct an LRSM for P_ψ with respect to I. First, let us fix the following notation for locations of G_{P_ψ}: by ℓ_i we denote the initial location of the sub-program Q_i, by ℓ_i^1 and ℓ_i^0 the locations corresponding to the "then" and "else" branches of Q_i, respectively, and by ℓ_{n+1} and ℓ_out the initial and terminal locations of G_{P_ψ}, respectively. Next, we fix a penalty constant Pen = 4n + 3 and for every pair of indexes 1 ≤ i ≤ n + 1, 1 ≤ j ≤ n we define a linear expression

h_{i,j} = 1 − x_j           if ν(x_j) = 1 and i ≤ j,
         (1 − x_j) · Pen    if ν(x_j) = 1 and i > j,
         x_j                if ν(x_j) = 0 and i ≤ j,
         x_j · Pen          if ν(x_j) = 0 and i > j.

Note that for x ∈ [0, 1]^n the value h_{i,j}(x) equals either |ν(x_j) − x[j]| or |ν(x_j) − x[j]| · Pen, depending on whether i ≤ j or not. Finally, we define an LRSM η as follows: for every 1 ≤ i ≤ n + 1 we put

η(ℓ_i, x) = Σ_{j=1}^{n} h_{i,j}(x) + (n − i + 1) · 2.
Next, for every 1 ≤ i ≤ n we put

η(ℓ_i^{ν(x_i)}, x) = Σ_{j=1}^{n} h_{i,j}(x) + (2(n − i) + 1),
η(ℓ_i^{1−ν(x_i)}, x) = n · Pen + 2n + 1.

Finally, we put η(ℓ_out, x) = −1/2.

We show that η is an LRSM wrt I. As h_{i,j}(x) ≥ 0 whenever x ∈ [0, 1]^n, it is easy to check that x ∈ I(ℓ) ⇒ η(ℓ, x) ≥ 0 for every non-terminal location ℓ. Next, for each x ∈ R^n and each angelic state ℓ_i it holds that η(ℓ_i, x) = η(ℓ_i^{ν(x_i)}, x) + 1; hence preη(ℓ_i, x) ≤ η(ℓ_i, x) − 1. Further, let us observe that for all x ∈ [0, 1]^n and all 1 ≤ i ≤ n + 1 we have η(ℓ_i, x) ≤ n · Pen + 2n; thus for all 1 ≤ i ≤ n it holds that preη(ℓ_i^{1−ν(x_i)}, x) ≤ η(ℓ_i^{1−ν(x_i)}, x) − 1. Also, for all such x and i it holds that

η(ℓ_i^{ν(x_i)}, x) = η(ℓ_{i+1}, [x_i/ν(x_i)](x)) + h_{i,i}(x) − h_{i+1,i}([x_i/ν(x_i)](x)) + 1
                   ≥ η(ℓ_{i+1}, [x_i/ν(x_i)](x)) + 1,

where the latter inequality follows from h_{i+1,i}([x_i/ν(x_i)](x)) = 0 and the non-negativity of h_{i,i}(x) for x ∈ [0, 1]^n. Hence, preη(ℓ_i^{ν(x_i)}, x) ≤ η(ℓ_i^{ν(x_i)}, x) − 1. Finally, let us focus on ℓ_{n+1}. Let x be such that x |= ϕ ∧ I(ℓ_{n+1}). It is easy to check that

preη(ℓ_{n+1}, x) = η(ℓ_{n+1}, x) + 2n − (Pen − 1) · Σ_{i=1}^{n} |ν(x_i) − x[i]|.     (1)

Now since x |= ϕ, there is 1 ≤ j ≤ m such that (g_{ξ_{j,1}} + g_{ξ_{j,2}} + g_{ξ_{j,3}})(x) ≤ 1/2, which implies g_{ξ_{j,k}}(x) ≤ 1/2 for all 1 ≤ k ≤ 3, as all these numbers are non-negative. On the other hand, there is 1 ≤ k_0 ≤ 3 such that ν(ξ_{j,k_0}) = 1, as ν is a satisfying assignment. Let x_i be the variable in ξ_{j,k_0}. If ξ_{j,k_0} is a positive literal, then ν(x_i) = 1 and x[i] = g_{ξ_{j,k_0}}(x) ≤ 1/2, and if ξ_{j,k_0} is negative, then ν(x_i) = 0 and x[i] = 1 − g_{ξ_{j,k_0}}(x) ≥ 1/2. In both cases we get |ν(x_i) − x[i]| ≥ 1/2. Plugging this into (1) and combining with the fact that (Pen − 1)/2 ≥ 2n + 1, we get that preη(ℓ_{n+1}, x) ≤ η(ℓ_{n+1}, x) − 1. Hence, η is indeed an LRSM.

D. Details of Section 5

D.1 Irrationality of expected termination time

Let us again consider the program P in Figure 9. We claim that ET(P) is irrational. Note that the stochastic game structure G_P has six locations, where the following two are of interest for us: the location corresponding to the beginning of the while loop, which we denote by while, and the location entered after executing the first assignment in the else branch, which we denote by else. Consider the following system of polynomial equations:

x_{while,else} = 1/2 · (x_{while,while}^2 + x_{while,else})
x_{while,while} = 1/2 + 1/2 · (x_{while,while} · x_{while,else})
y_{while,while} = (3 + 2·y_{while,while}·x_{while,while}^2 + (y_{while,else} + 1)·x_{while,else}) / (2·x_{while,while})
y_{while,else} = (3 + (3 + y_{while,while} + y_{while,else})·x_{while,while}·x_{while,else}) / (2·x_{while,else})     (2)

Intuitively, the variable x_{ℓ,ℓ′} represents the probability that, when starting in location ℓ with variable n having some value k, the first location in which the value of n decreases to k − 1 is ℓ′ (note that this probability is independent of k). Next, y_{ℓ,ℓ′} is the conditional expected number of steps needed to decrease n by 1, conditioned on the event that we start in ℓ and the first state in which n drops below the initial value is ℓ′. Formally, it is shown in [10] and [19] that these probabilities and expectations are the minimal non-negative solutions of the system.

It is then easy to check that the expected termination time of P is given by the formula

1 + x_{while,while}·(y_{while,while} + 1) + x_{while,else}·(y_{while,else} + 2).     (3)

Examining the solutions of (2) (we used Wolfram Alpha to identify them), we get that x_{while,while} = (√5 − 1)/2, x_{while,else} = (3 − √5)/2, y_{while,while} = (85 + 43√5)/10, and y_{while,else} = (15 + 29√5)/10. Plugging this into (3) we get that ET(P) = 2(5 + √5).

D.2 Proof of Lemma 4

Lemma 4. The quantitative approximation problem can be solved in doubly exponential time for programs with only discrete probability choices.

Proof. The algorithm proceeds via analysing a suitable finite unfolding of the input program P. Formally, an n-step unfolding of P, where n ∈ N, is an SGS Unf(P, n) such that

• the set of locations of Unf(P, n) consists of certain finite paths in G_P (i.e. in the SGS associated to P). Namely, we take those finite paths whose starting configuration is the initial configuration of G_P and whose length is at most n. The type of such a location w is determined by the type of the location in the last configuration of G_P appearing on w.
• Unf(P, n) has the same program and random variables as G_P (i.e. the same ones as P) and the initial configuration of Unf(P, n) is the tuple ((ℓ_0, x_0), x_0), where (ℓ_0, x_0) is the initial configuration of G_P (it can be identified with a finite path of length 0).
• For every pair w, w′ of locations of Unf(P, n) (i.e., w, w′ are finite paths in G_P) such that w′ can be formed by performing a single transition τ out of the last configuration of w, there is a transition (w, f, w′) in Unf(P, n), where f is the update function of τ. Moreover, for every history w of length n that is also a location of Unf(P, n) there is a transition (w, id, w) in Unf(P, n).
• The probability distributions and guard functions for probabilistic/deterministic locations w of Unf(P, n) are determined, in a natural way, by the probability distribution/guard functions of the location in the last configuration of w. (A small illustrative sketch of this unfolding construction is given below.)
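The following Python sketch illustrates the unfolding construction just described on a toy transition relation over (location, value) configurations. It only illustrates the tree-shaped unfolding and the self-loops at depth n; it does not model the full SGS formalism of this paper (guards, schedulers, probabilities), and all names in it are hypothetical.

def unfold(initial, successors, n):
    # Locations of the unfolding are finite paths, represented as tuples of configurations.
    root = (initial,)
    locations, transitions = [root], []
    frontier = [root]
    for _ in range(n):
        next_frontier = []
        for path in frontier:
            for succ in successors(path[-1]):
                child = path + (succ,)
                locations.append(child)
                transitions.append((path, child))
                next_frontier.append(child)
        frontier = next_frontier
    # Depth-n paths get a self-loop, mirroring the (w, id, w) transitions.
    transitions.extend((path, path) for path in frontier)
    return locations, transitions

# Toy example: a counter that can decrement or stay, unfolded to depth 2.
succ = lambda cfg: [("loop", cfg[1] - 1), ("loop", cfg[1])] if cfg[1] >= 0 else []
locs, trans = unfold(("loop", 1), succ, 2)
print(len(locs), len(trans))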
Note that each configuration of Unf(P, n) reachable from the initial configuration is of the form ((ℓ_0, x_0), . . . , (ℓ_i, x_i), x_i), where (ℓ_j, x_j), 0 ≤ j ≤ i, are configurations of G_P. In particular, the values of program variables in every reachable configuration (w, x) of Unf(P, n) are uniquely determined by w. To avoid confusion, in the following we will denote the locations in G_P by ℓ, ℓ′, ℓ_1, . . . and locations in Unf(P, n) by w, w_0, w_1, . . . .

There is a natural many-to-one surjective correspondence Γ between demonic schedulers in G_P and demonic schedulers in Unf(P, n) (the schedulers in G_P that have the same behaviour up to step n are mapped to a unique scheduler in Unf(P, n) which induces this behaviour). Similarly, there is a many-to-one correspondence ∆ between angelic schedulers that belong to cmp(η) in G_P and angelic schedulers in Unf(P, n).

Let Term_n be the set of all locations w in Unf(P, n) that, when interpreted as paths in G_P, end with a configuration of the form (ℓ^out_P, x) for some x. It is easy to verify that for any adversarial scheduler π in G_P the probability P^π(T ≤ x) is equal to the probability of reaching Term_n under Φ(π) in Unf(P, n).

To solve the quantitative approximation problem we proceed as follows. First we compute a number n, at most exponential in the size of P, such that the infimum (among all angelic schedulers in cmp(η)) probability of not terminating in the first n steps is "sufficiently small" in G_P. Then we construct, in time exponential in n and the size of P, the unfolding Unf(P, n) and assign a non-negative cost cost(w) to each location w of Unf(P, n). We then define a random variable W which to each run (w_0, x_0), (w_1, x_1), . . . in Unf(P, n) assigns the number Σ_{i=0}^{j} cost(w_i), where j = min{n, min{k | w_k ∈ Term_n}}. Using the fact that in G_P we terminate with very high probability in the first n steps, we will show how to construct the cost function cost in such a way that for every pair of schedulers σ, π, where σ ∈ cmp(η), it holds that |E^{σ,π}T − E^{∆(σ),Γ(π)}W| ≤ δ (the construction of cost can be done in time polynomial in the size of Unf(P, n)). Hence, to solve the quantitative approximation problem it will suffice to compute inf_{σ′} sup_{π′} E^{σ′,π′}W, where the infimum and supremum are taken over schedulers in Unf(P, n). This computation can be done in polynomial time via standard backward iteration [41]. The doubly-exponential bound then follows from the aforementioned bound on the size of Unf(P, n).

First let us show how to compute n. Since P is a bounded LRA PP and we are given the corresponding ranking supermartingale η with one-step change bounds a, b, by Theorem 5 we can compute, in polynomial time, a concentration bound B which is at most exponential in the size of P. That is, there are positive numbers c_1, c_2, which depend just on P, such that for each x ≥ B it holds that Thr(P, x) ≤ c_1 · exp(−c_2 · x). Moreover, the numbers c_1 and c_2 are also computable in polynomial time and at most exponential in the size of P, as witnessed in the proof of Theorem 5. Now denote by M_0 the value of the LRSM η in the initial configuration of P. Note that M_0, as well as the bounds a, b, are at most exponential in the size of P, as η, a and b can be obtained by solving a system of linear inequalities with coefficients determined by P. We compute (in polynomial time) the smallest integer n ≥ B such that c_1 · e^{−c_2·n} · (n + (M_0 + n · (b − a))/ε) ≤ δ, where ε is as in Definition 8 (the minimal expected decrease of the value of η in a single step). It is straightforward to verify that such an n exists and that it is polynomial in M_0, ε, a and b, and hence exponential in the size of P.

Given P and n it is possible to construct Unf(P, n) in time exponential in n and in the encoding size of P. To see this, observe that the graph of Unf(P, n) is a tree of depth n and so Unf(P, n) has at most 2^n locations. We can construct Unf(P, n) by, e.g., depth-first enumeration of all finite paths in G_P of length ≤ n. To bound the complexity of the construction we need to bound the magnitude of the numbers (i.e. variable values) appearing on these finite paths -- these are needed to determine which transition to take in deterministic locations. Note that for every APP P there is a positive number K, at most exponential in the encoding size of P, such that in every step the absolute value of any variable of P can increase by a factor of at most K. Hence, after n steps the absolute value of every variable is bounded by K^n · L, where L is the maximal absolute value of any variable of P upon initialization (L is again at most exponential in the size of P). The exponential (in n and the size of P) bound on the time needed to construct Unf(P, n) follows.

Now let us construct the cost function cost. The initial location gets cost 0 and all other locations get cost 1, except for locations w that represent paths in G_P of length n. The latter locations get a special cost C = (M_0 + n · (b − a))/ε.

After constructing Unf(P, n), we post-process the SGS by removing all transitions outgoing from angelic locations that violate the supermartingale property of η. Formally, for every angelic location w of Unf(P, n) and each of its successors w′ we denote by c and c′ the last configurations on w and w′, respectively. We then check whether preη(c) = η(c′), and if not, we remove the transition from w to w′.

We prove that for any pair of schedulers σ, π with σ ∈ cmp(η), it holds that |E^{σ,π}T − E^{∆(σ),Γ(π)}W| ≤ δ. So fix an arbitrary such pair. We start by proving that E^{σ,π}T ≤ E^{∆(σ),Γ(π)}W. Since the behaviour of the schedulers in the first n steps is mimicked by ∆(σ) and Γ(π) in Unf(P, n), and in the latter SGS we accumulate one unit of cost per each of the first n − 1 steps, it suffices to show that the expected number of steps needed to terminate with these schedulers from any configuration of G_P that is reachable in exactly n steps is bounded by C. Since the value of η can change by at most (b − a) in every step, and σ ∈ cmp(η), the value of η after n steps in G_P is at most M_0 + n · (b − a). By Theorem 1 the expected number of steps to terminate from a configuration with such an η-value is at most (M_0 + n · (b − a))/ε, as required.

To finish the proof we show that E^{∆(σ),Γ(π)}W ≤ E^{σ,π}T + δ. First note that E^{∆(σ),Γ(π)}W − E^{σ,π}T can be bounded by p · C, where p is the probability that a run in G_P does not terminate in the first n steps under these schedulers. Since n ≥ B it holds that p ≤ c_1 · e^{−c_2·n}. Thus p · C ≤ c_1 · e^{−c_2·n} · (n + (M_0 + n · (b − a))/ε) ≤ δ, the last inequality following from the choice of n.

D.3 Proofs for Section 5.2

Lemma 5. For every C ∈ N the following problem is PSPACE-hard: Given a program P without probability or non-determinism, with simple guards, and belonging to bounded LRA PP, and a number N ∈ N such that either ET(P) ≤ N or ET(P) ≥ N · C, decide which of these two alternatives holds.

Proof. Let us fix a constant C.

We start by noting that there exists a constant K such that the following K-LINEARLY BOUNDED MEMBERSHIP PROBLEM is PSPACE-hard: Given a deterministic Turing machine (DTM) T such that on every input of length n the machine T uses at most K · n tape cells, and given a word w over the input alphabet of T, decide whether T accepts w. This is because there exists K for which there is a DTM T_QBF satisfying the above condition which decides the QBF problem (T_QBF works by performing a simple recursive search of the syntax tree of the input formula). We show a polynomial-time reduction from this membership problem to our problem.

Let T, w be an instance of the membership problem. Since T has linearly bounded complexity with a known coefficient K, there is a number J, computable in time polynomial in the size of T and w, such that if T accepts w, it does so in at most J steps (note that the magnitude of J is exponential in the size of T and w). The intuition behind the reduction is as follows: we construct a deterministic affine program P simulating the computation of T on input w. The program consists of a single while-loop guarded by an expression m ≥ 1 ∧ r ≥ 1, where m is a "master" variable of the program initialized in the preamble to C · J · |w|, while r is initialized to 1. The body of the while loop encodes the transition function of T. The current configuration of T is encoded in variables of P: for every state s we have a variable x_s which is equal to 1 when the current state is s and 0 otherwise; next, for every 1 ≤ i ≤ K · |w|, where |w| is the length of w, and every symbol a of the tape alphabet of T, we have a variable x_{i,a} which is equal to 1 if a is currently on the i-th tape cell, and 0 otherwise; and finally we have a variable x_head which stores the current position of the head. Additionally, we add a variable x_step which records whether the current configuration was already updated during the current iteration of the while-loop. The variables are initialized so as to represent the initial configuration of T on input w, e.g. x_head = 1 and x_{i,a} = 1 if and only if either i ≤ |w| and a is the i-th symbol of w, or i > |w| and a is the symbol of an empty cell. It is then straightforward to encode the transition function using just assignments and if-then-else statements, see below. At the end of each iteration of the while loop the master variable m is decreased by 1. However, if the current state is also an accepting state of T, then r is immediately set to 0 and thus the program terminates.

More formally, to a transition δ of T saying that in a configuration (s, a) the state should be changed to s′, the symbol rewritten to b, and the head moved by h ∈ {−1, 0, +1}, we assign the following affine program Q_δ, in which a guard ϕ_δ = ⋁_{i=1}^{K·|w|} (x_s = 1 and x_step = 0 and x_head = i and x_{i,a} = 1) is used:

if ϕ_δ then
    x_s := 0; x_{s′} := 1; x_{i,a} := 0; x_{i,b} := 1;
    x_head := x_head + h; x_step := 1
else
    skip
fi;
if x_{s_acc} = 1 then r := 0 else skip fi

The overall program then looks as follows:

while m ≥ 1 ∧ r ≥ 1 do
    m := m − 1;
    x_step := 0;
    Q_{δ_1}; · · · ; Q_{δ_m}
od

where δ_1, . . . , δ_m are all transitions of T. Clearly the program always terminates and belongs to bounded LRA PP: there is a trivial bounded LRSM whose value at the beginning of the while loop is m, while in further locations inside the while loop it takes the form m + d, for suitably small constants d. Since m changes by at most 1 in every step, this LRSM is bounded.

Each iteration of the while loop takes the same number of steps, since exactly one of the programs Q_{δ_j} enters the if-branch (we can assume that the transition function of T is total). Let us denote this number by W. It is easy to see that if T does not accept w, then the program terminates in exactly C · J · W steps. On the other hand, if T does accept w, then the program terminates in at most J · W steps. To finish the reduction, we put the number N mentioned in item 1. equal to J · W.

E. Proofs for Section 5.1

Theorem 7. Let {X_n}_{n∈N} be a supermartingale wrt some filtration {F_n}_{n∈N} and {[a_n, b_n]}_{n∈N} be a sequence of intervals of positive length in R. If X_1 is a constant random variable and X_{n+1} − X_n ∈ [a_n, b_n] a.s. for all n ∈ N, then

P(X_n − X_1 ≥ λ) ≤ e^{−2λ² / Σ_{k=2}^{n} (b_k − a_k)²}

for all n ∈ N and λ > 0.

Proof. The proof goes through the characteristic method similar to the original proof of Hoeffding's Inequality [26, 35], and hence we present only the important details. For each n ≥ 2 and t > 0, we have (using standard arguments similar to [26, 35]):

E(e^{tX_n}) = E(E(e^{tX_n} | F_{n−1}))
            = E(E(e^{t(X_n − X_{n−1})} · e^{tX_{n−1}} | F_{n−1}))
            = E(e^{tX_{n−1}} · E(e^{t(X_n − X_{n−1})} | F_{n−1})).

Note that for all x ∈ [a_n, b_n],

e^{tx} ≤ ((b_n − x)/(b_n − a_n)) · e^{t·a_n} + ((x − a_n)/(b_n − a_n)) · e^{t·b_n}
       = (x/(b_n − a_n)) · (e^{t·b_n} − e^{t·a_n}) + (b_n·e^{t·a_n} − a_n·e^{t·b_n})/(b_n − a_n).

Then from X_n − X_{n−1} ∈ [a_n, b_n] a.s. and E(X_n | F_{n−1}) ≤ X_{n−1}, we obtain

E(e^{tX_n}) ≤ E(e^{tX_{n−1}} · (b_n·e^{t·a_n} − a_n·e^{t·b_n})/(b_n − a_n)).

By applying the analysis in the proof of [35, Lemma 2.6] (taking a = a_n and b = b_n), we obtain

(b_n·e^{t·a_n} − a_n·e^{t·b_n})/(b_n − a_n) ≤ e^{(1/8)·t²·(b_n − a_n)²}.

This implies that

E(e^{tX_n}) ≤ E(e^{tX_{n−1}}) · e^{(1/8)·t²·(b_n − a_n)²}.

By induction, it follows that

E(e^{tX_n}) ≤ E(e^{tX_1}) · e^{(1/8)·t²·Σ_{k=2}^{n}(b_k − a_k)²} = e^{t·E(X_1)} · e^{(1/8)·t²·Σ_{k=2}^{n}(b_k − a_k)²}.

Thus, by Markov's Inequality, for all λ > 0,

P(X_n − X_1 ≥ λ) = P(e^{t(X_n − X_1)} ≥ e^{tλ}) ≤ e^{−tλ} · E(e^{t(X_n − X_1)}) ≤ e^{−tλ + (1/8)·t²·Σ_{k=2}^{n}(b_k − a_k)²}.

By choosing t = 4λ / Σ_{k=2}^{n}(b_k − a_k)², we obtain

P(X_n − X_1 ≥ λ) ≤ e^{−2λ² / Σ_{k=2}^{n}(b_k − a_k)²}.

The desired result follows.
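As a small illustration of how the bound of Theorem 7 is evaluated in practice, the following Python sketch computes its right-hand side for given one-step difference intervals; the interval and λ values are arbitrary choices for illustration.

import math

def azuma_hoeffding_bound(intervals, lam):
    # Bound on P(X_n - X_1 >= lam) from Theorem 7, where `intervals` lists
    # the one-step difference ranges [a_k, b_k] for k = 2..n.
    s = sum((b - a) ** 2 for a, b in intervals)
    return math.exp(-2.0 * lam ** 2 / s)

# Example: 100 steps, each difference confined to [-1, 1].
print(azuma_hoeffding_bound([(-1.0, 1.0)] * 100, lam=30.0))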
Proposition 2. {Y_n}_{n∈N} is a supermartingale and Y_{n+1} − Y_n ∈ [a + ε, b + ε] almost surely for all n ∈ N.

Proof. Recall that Y_n = X_n + ε · (min{T, n} − 1). Consider the following random variable:

U_n = min{T, n + 1} − min{T, n},

and observe that this is equal to 1_{T > n}. From the properties of conditional expectation [53, Page 88] and the facts that (i) the event T > n is measurable in F_n (which implies that E(1_{T > n} | F_n) = 1_{T > n}); and (ii) X_n ≥ 0 iff T > n (cf. conditions C2 and C3), we have

E(Y_{n+1} | F_n) − Y_n = E(X_{n+1} | F_n) − X_n + ε · E(U_n | F_n)
                       = E(X_{n+1} | F_n) − X_n + ε · E(1_{T > n} | F_n)
                       = E(X_{n+1} | F_n) − X_n + ε · 1_{T > n}
                       ≤ −ε · 1_{X_n ≥ 0} + ε · 1_{T > n}
                       = 0.

Note that the inequality above is due to the fact that X_n is a ranking supermartingale. Moreover, since T ≤ n implies θ_n = ℓ^out_P and X_{n+1} = X_n, we have that (X_{n+1} − X_n) = 1_{T > n} · (X_{n+1} − X_n). Hence we have

Y_{n+1} − Y_n = X_{n+1} − X_n + ε · U_n
              = (X_{n+1} − X_n) + ε · 1_{T > n}
              = 1_{T > n} · (X_{n+1} − X_n + ε).

Hence Y_{n+1} − Y_n ∈ [a + ε, b + ε].

Applying Bernstein's Inequality for incremental programs in LRA PP. Let η be an incremental LRSM template to be synthesized for P and {X_n}_{n∈N} be the stochastic process defined by X_n := η(θ_n, {x_{k,n}}_k). To apply Bernstein's Inequality, we need to synthesize constants c, M such that for all n ∈ N,

√Var(X_{n+1} | F_n) ≤ c and X_{n+1} − E(X_{n+1} | F_n) ≤ M.

These conditions can be encoded by the following formulae:

• for all ℓ ∈ L_P with successor locations ℓ_1, ℓ_2 ∈ L and branch probability value p, the sentence

∀x. ∀i ∈ {1, 2}. η(ℓ_i, x) − preη(ℓ, x) ≤ M ∧ √(p(1 − p)) · |b_{ℓ_1} − b_{ℓ_2}| ≤ c

holds, where one can obtain from an easy calculation (on Bernoulli random variables) that

1_{θ_n = ℓ} · √Var(X_{n+1} | F_n) ≡ 1_{θ_n = ℓ} · √(p(1 − p)) · |b_{ℓ_1} − b_{ℓ_2}|;

• for all ℓ ∈ L_S and all τ = (ℓ, f, ℓ′) ∈ ↦_ℓ, the sentence

∀x. ∀r. η(ℓ′, f(x, r)) − η(ℓ′, E_R(f(x, r))) ≤ M ∧ a^x_ℓ · √(Var_R(f(x, r))) ≤ c

holds, for which

1_{θ_n = ℓ} · √Var(X_{n+1} | F_n) ≡ 1_{θ_n = ℓ} · a^x_ℓ · √(Var_R(f(x, r))),

where x is the updated program variable, a^x_ℓ is the coefficient variable of a_ℓ on program variable x, and Var_R(f(x, r)) is the variance on the random variables r.

Note that the formulae above can be transformed into existentially-quantified non-strict linear assertions by Farkas' Lemma. Also note that there are no conditions for angelic or demonic locations, since once the angelic and demonic strategies are fixed, there is no stochastic behaviour in one step for such locations, and hence the variance is zero.

Then we can apply Bernstein's Inequality in the same way as for Hoeffding's Inequality. Define the stochastic process {Z_n}_{n∈N} by

Z_n = X_n + ε · (min{T, n} − 1),

similar to Y_n in Proposition 2. We have the following proposition.

Proposition 3. {Z_n}_{n∈N} is a supermartingale. Moreover, Z_{n+1} − E(Z_{n+1} | F_n) ≤ M and Var(Z_{n+1} | F_n) ≤ c², for all n ∈ N.

Proof. By the same analysis as in Proposition 2, {Z_n} is a supermartingale. Consider the following random variable: U_n = min{T, n + 1} − min{T, n}, which is equal to 1_{T > n}. From the properties of conditional expectation [53, Page 88] and the facts that (i) the event T > n is measurable in F_n (which implies that Var(1_{T > n}·(X_{n+1} − X_n) | F_n) = 1_{T > n}·Var(X_{n+1} − X_n | F_n)), we have the following:

Z_{n+1} − E(Z_{n+1} | F_n) ≤ X_{n+1} − E(X_{n+1} | F_n) ≤ M

and

Var(Z_{n+1} | F_n) = Var(Z_{n+1} − Z_n | F_n)
                   = Var(X_{n+1} − X_n + ε · U_n | F_n)
                   = Var(X_{n+1} − X_n + ε · 1_{T > n} | F_n)
                   =(*) Var(1_{T > n} · (X_{n+1} − X_n + ε) | F_n)
                   = 1_{T > n} · Var(X_{n+1} − X_n | F_n)
                   ≤ c²,

where (*) follows from the fact that T ≤ n implies θ_n = ℓ^out_P and X_{n+1} = X_n, and the last inequality follows from the fact that Var(X_{n+1} − X_n | F_n) ≤ c², since c is obtained from the synthesis of the LRSM.

Then, similarly to the derivation for Hoeffding's Inequality, we have

P(T > n) ≤ P(Z_n − Z_1 ≥ ε·(n − 1) − W_0) ≤ e^{−(ε·(n−1) − W_0)² / (2c²·(n−1) + (2/3)·M·(ε·(n−1) − W_0))}.

The optimal choice of concentration threshold is the same as for Hoeffding's Inequality, and the optimality of upper bounds is reduced to a binary search on z satisfying

(ε·(n − 1) − W_0)² / (2c²·(n − 1) + (2/3)·M·(ε·(n − 1) − W_0)) ≥ z;

and the constraints for LRSMs, the constraint ε · (n − 2) ≥ W_0 and the constraints for c, M, which can be solved by quadratic programming [52].

F. Details related to Experimental Results

We present below the details of the code modeling the various random walks of Section 6, along with the invariants specified in square brackets. Also, in the description we use Unif to denote the uniform distribution.
Explanation why RW in 2D is not a bounded LRA PP. We explain why the example RW in 2D is not a bounded LRA PP: in this example, at any point either the x or the y coordinate changes by at most 2; hence, intuitively, the difference between two steps is bounded. However, to exploit this fact one needs to consider a martingale defined by min{x, y}, i.e., the minimum of the two coordinates, and such a martingale is not linear. For this example, the LRSM is not a bounded martingale, as the difference between the two coordinates can be large.

x := n;
[x ≥ −1]
while x ≥ 0 do
    [x ≥ 0]
    if prob(0.3) then
        [x ≥ 0] x := x + 1
    else
        [x ≥ 0] x := x − 1
    fi
od
[x < 0]

Figure 11. Integer-valued random walk in one dimension, along with linear invariants in square brackets.

x := n;
[x ≥ −1]
while x ≥ 0 do
    [x ≥ 0]
    if prob(0.3) then
        [x ≥ 0] x := x + Unif[0, 1]
    else
        [x ≥ 0] x := x − Unif[0, 1]
    fi
od
[x < 0]

Figure 12. Real-valued random walk in one dimension, along with linear invariants in square brackets.

[x ≥ 0]
while n ≥ 0 do
    [x ≥ 0]
    x := x + r;
    [x ≥ 0]
    if demon then
        [x ≥ 0]
        if prob(7/8) then
            [x ≥ 0] x := x − 1
        else
            [x ≥ 0] x := x + 1
        fi
    else
        [x ≥ 0] skip; [x ≥ 0] x := x − 1
    fi
od

Figure 13. A queuing example: adversarial random walk in one dimension, along with linear invariants in square brackets. The distribution of the random variable r is described on page 14.

x := n1; y := n2;
[x ≥ −2 ∧ y ≥ −3]
while (x ≥ 0 ∧ y ≥ 0) do
    [x ≥ 0 ∧ y ≥ 0]
    if demon then
        [x ≥ 0 ∧ y ≥ 0] x := x + Unif[−2, 1]
    else
        [x ≥ 0 ∧ y ≥ 0] y := y + Unif[−2, 1]
od
[x < 0 ∨ y < 0]

Figure 14. A 2D random walk example: adversarial random walk in two dimensions, along with linear invariants in square brackets.

x := n1; y := n2;
[x − y ≥ −3]
while (x ≥ y) do
    [x ≥ y]
    if demon then
        [x ≥ y]
        if prob(0.7) then
            [x ≥ y] x := x + Unif[−2, 1]
        else
            [x ≥ y] y := y + Unif[−2, 1]
    else
        [x ≥ y]
        if prob(0.7) then
            [x ≥ y] y := y + Unif[2, −1]
        else
            [x ≥ y] x := x + Unif[2, −1]
od
[x < y]

Figure 15. A 2D random walk example: a variant adversarial random walk in two dimensions, along with linear invariants in square brackets.
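For a quick sanity check of the linear growth reported above, the following Python sketch estimates the expected termination time of the integer-valued random walk of Figure 11 by simulation; the sample size is an arbitrary choice.

import random

def steps_until_negative(n, p_up=0.3):
    # One run of the Figure 11 walk: start at n, +1 with probability 0.3, -1 otherwise.
    x, steps = n, 0
    while x >= 0:
        x += 1 if random.random() < p_up else -1
        steps += 1
    return steps

def estimate_expected_termination(n, samples=20000):
    return sum(steps_until_negative(n) for _ in range(samples)) / samples

# The estimate grows linearly in n, matching the linear bounds reported in the
# experimental section: roughly (n + 1) / 0.4 for this walk.
for n in (5, 10, 20):
    print(n, round(estimate_expected_termination(n), 1))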
Gabora, L. (1992). Should I stay or should I go: Coordinating biological needs with continuously-updated assessments of the environment: A computer model. In (S. Wilson, J. A. Mayer & H. Roitblat, Eds.) Proceedings of the Second International Conference on the Simulation of Adaptive Behavior (pp. 156-162), Cambridge MA: MIT Press.

Should I Stay or Should I Go: Coordinating Biological Needs with Continuously-updated Assessments of the Environment

Liane M. Gabora
ABSTRACT
This paper presents Wanderer, a model of how autonomous adaptive systems coordinate
internal biological needs with moment-by-moment assessments of the probabilities of events
in the external world. The extent to which Wanderer moves about or explores its
environment reflects the relative activations of two competing motivational sub-systems: one
represents the need to acquire energy and it excites exploration, and the other represents the
need to avoid predators and it inhibits exploration. The environment contains food, predators,
and neutral stimuli. Wanderer responds to these events in a way that is adaptive in the short
term, and reassesses the probabilities of these events so that it can modify its long-term
behaviour appropriately. When food appears, Wanderer becomes satiated and exploration
temporarily decreases. When a predator appears, Wanderer both decreases exploration in the
short term, and becomes more "cautious" about exploring in the future. Wanderer also forms
associations between neutral features and salient ones (food and predators) when they are
present at the same time, and uses these associations to guide its behaviour.
1. INTRODUCTION
One approach to modeling animal behaviour is to create an animal that continually assesses its
needs, determines which need is most urgent, and implements the behaviour that satisfies that
need. However, in the absence of appetitive stimuli such as food or mates, or harmful stimuli
such as predators, behaviour is often not directed at the fulfillment of any particular need: an
animal either remains still or moves about, and both options have repercussions on many aspects
of survival. So the question is not "What should I do next?", but rather, "Should I stay where I
am, conserving energy and minimizing exposure to predators, or should I explore my
environment, with the possibility of finding food, mates, or shelter"?
In this paper we present a computational model of how positively or negatively
reinforcing stimuli affect an animal's decision whether or not, and if so to what extent, to explore
its environment. The model is referred to as Wanderer. The architecture of Wanderer is an
extension of an architecture used to model the mechanisms underlying exploratory behaviour in
the absence of positively or negatively reinforcing stimuli (Gabora and Colgan, 1990). The
general approach is to consider an autonomous adaptive system as an assemblage of sub-systems
specialized to take care of different aspects of survival, and what McFarland (1975) refers to as
the "final behavioural common path" is the emergent outcome of the continual process of
attempting to mutually satisfy these competing subsystems. The relative impact of each
subsystem on behaviour reflects the animal's internal state and its assessment of the dynamic
affordance probabilities of the environment (for example, how likely it seems that a predator or
food will appear). This distributed approach is similar in spirit to that of Braitenburg (1984),
Brooks (1986), Maes (1990) and Beer (1990). We make the simplifying assumption that the only
possible beneficial outcome of exploration is finding food, and the only possible harmful
outcome is an encounter with a predator. The amount of exploration that Wanderer engages in at
any moment reflects the relative activations of a subsystem that represents the need for food,
which has an excitatory effect on exploration, and a subsystem that represents the need to avoid
predators, which inhibits exploration.
The earlier version of Wanderer exhibited the increase and then decrease in activity
shown by animals in a novel environment (Welker, 1956; Dember & Earl, 1957; May, 1963;
McCall, 1974; Weisler & McCall, 1976) and all four characteristics that differentiate the pattern of
exploration exhibited by animals raised under different levels of predation were reproduced in
the model by changing the initial value of one parameter: the decay on the inhibitory subsystem,
which represents the animal's assessment of the probability that a predator could appear. In the
present paper, we first address how salient events such as the appearance of food or predators
affect exploration. Wanderer does not have direct access to the probabilities that predators and
food will appear but it continually reassesses them based on its experiences, and adjusts its
behaviour accordingly. This approach merges Gigerenzer and Murray's (1987) notion of
cognition as intuitive statistics with Roitblat's (1987) concept of optimal decision making in
animals.
We then examine how initially neutral features can come to excite or inhibit exploration
by becoming associated with salient ones (food or predators). It has long been recognized that
animals form associations of this kind between simultaneously occurring stimuli or events
(Tolman, 1932; Hull, 1943). This is useful since in the real world features are clustered — for
example, predators may dwell in a particular type of cavernous rock, so the presence of rocks of
that sort can be a useful indicator that a predator is likely to be near. Thus features of
environments that contain a lot of food are responded to with increased exploration (even when
there is no food in sight) and features of environments that contain many predators inhibit
exploration (even when there are no predators).
2. ARCHITECTURE OF WANDERER
Wanderer consists of two motivational subsystems that receive input from five sensory units and
direct their output to a motor unit, constructed in Common Lisp (Figure 1). One subsystem
represents the need to acquire and maintain energy. It has an excitatory effect on exploration and
is linked by a positive weight to the motor unit. Exploration in turn feeds back and inhibits
activation of this subsystem: this represents fatigue. Activation of the other subsystem represents
the need to avoid predators: it has an inhibitory effect on exploration and is linked by a negative
weight to the motor unit. Since every moment that passes without encountering a predator is
evidence that there is less need to be cautious, activation of the inhibitory subsystem decreases as
a function of time in the absence of predation. In addition, since moving about provides more
evidence that there are no predators nearby than does immobility, the inhibitory subsystem, like
the excitatory subsystem, receives feedback from the motor unit; its activation decreases by an
amount proportional to the amount of exploration that occurred during the previous iteration.
Perception units have binary activations. Activation of unit 0 corresponds to detection of
food, activation of unit 1 corresponds to detection of a predator, and activation of units 2, 3, and
4 correspond to detection of rock, tree, and sun respectively. Activation can spread from
perception units to subsystems, but not the other way around.
The output for each iteration is either zero, signifying immobility, or a positive number
that indicates how much exploration is taking place.
Figure 1. The architecture of Wanderer. Dark lines represent fixed connections. Fine lines represent learnable connections.
The relevant variables and their initial values are:
a_i = activation of perception unit i ∈ {0, 1}
s0 = activation of excitatory subsystem = 0.9
s1 = activation of inhibitory subsystem = 0.9
E = exploration = activation of motor unit
w_{i,j} = weight from perception unit i to subsystem j: w_{0,0} = −0.5, w_{1,1} = 0.9, all others = 0
w0 = weight from excitatory subsystem to motor unit = 0.5
w1 = weight from inhibitory subsystem to motor unit = −0.5
wf = feedback weight from motor unit to subsystems = −0.1
k0 = rate at which hunger increases = 1.05
k1 = decay on inhibitory subsystem = 0.5
Exploration is calculated using a logistic function as follows:

E = 1 / [1 + e^−(w0·s0 + w1·s1)]
Thus exploration only occurs if the activation of the excitatory subsystem is greater than
that of the inhibitory subsystem. Subsystem activations are then updated each iteration.
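The following Python sketch illustrates one iteration of this computation. The exploration formula is the logistic function given above; the subsystem-update step is only a rough guess based on the qualitative description in this section (hunger growth k0, inhibitory decay k1, motor feedback wf), since the exact update equations are not reproduced here.

import math

def exploration(s0, s1, w0=0.5, w1=-0.5):
    # Logistic combination of the excitatory (s0) and inhibitory (s1) subsystems.
    return 1.0 / (1.0 + math.exp(-(w0 * s0 + w1 * s1)))

def update_subsystems(s0, s1, e, k0=1.05, k1=0.5, wf=-0.1):
    # Assumed update, not the paper's exact equations: hunger grows by k0 and is
    # reduced by exploration (fatigue); the inhibitory subsystem decays by k1 and
    # is further reduced by exploration (evidence that no predators are nearby).
    s0_new = k0 * s0 + wf * e
    s1_new = k1 * s1 + wf * e
    return max(s0_new, 0.0), max(s1_new, 0.0)

s0, s1 = 0.9, 0.9
for step in range(3):
    e = exploration(s0, s1)
    s0, s1 = update_subsystems(s0, s1, e)
    print(step, round(e, 3))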
3. WANDERER'S ENVIRONMENT
Wanderer's environment contains three kinds of stimuli: food, predators, and features that have
no direct effect on survival, which will be referred to as neutral features. The initial presence or
absence of neutral features is random. The more exploration Wanderer engages in, the greater the
probability that a neutral feature will change from present to absent or vice versa in the next
iteration:
c1 = constant = 0.75
p(Δa)_t = c1 · E_{t−1}
Perception unit 2 detects a stimulus that is predictive of the appearance of food. Let us say
that Wanderer's primary source of food is a plant that grows on a certain kind of soil, and that a
certain kind of tree also grows only in that soil, so that the presence oil that tree is predictive of
finding food. Perception unit 2 can only turn on when perception unit 0 is on (that is, food can
only be detected when the tree is detected). Also, in accord with the harsh realities of life,
Wanderer has to explore if it is to find food. The probability of finding food is proportional to the
amount of exploration that took place during the previous iteration:
a0 = activation of food detection unit
c2 = constant specified at run-time
p(a0 = 1)_t = c2 · a2 · E_{t−1}
Perception unit 3 detects a stimulus that is predictive of the appearance of a predator. Let
us say that the animal that preys upon Wanderer lives in cavernous rocks, and this unit turns on
when rocks of that sort are detected. Perception unit 1 can only turn on when perception unit 3 is
on (that is, a predator will only appear when the rock is present). Since predators can appear even
when Wanderer is immobile, it is not necessary that Wanderer explore in order to come across a
predator.
a1 = activation of predator detection unit
c3 = constant specified at run-time
p(a1 = 1)_t = c3 · a3
Perception unit 4 detects the presence of the sun, such as when it comes out from under a
cloud. The sun is predictive of neither food nor predator.
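A small Python sketch of these environment updates under the assumptions just stated; the run-time constants c2 and c3 are given arbitrary illustrative values here, since the paper specifies them per run.

import random

def update_environment(features, e_prev, c1=0.75, c2=0.5, c3=0.5):
    # features: dict with boolean entries 'rock', 'tree', 'sun' (the neutral features).
    # Each neutral feature flips with probability c1 * E_{t-1}.
    for name in ('rock', 'tree', 'sun'):
        if random.random() < c1 * e_prev:
            features[name] = not features[name]
    # Food appears only if the tree is present and Wanderer explored last iteration.
    food = features['tree'] and random.random() < c2 * e_prev
    # A predator appears only if the rock is present; no exploration is required.
    predator = features['rock'] and random.random() < c3
    return food, predator

env = {'rock': False, 'tree': True, 'sun': True}
print(update_environment(env, e_prev=0.6))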
4. EFFECT OF SALIENT STIMULI ON MOTIVATION
4.1 IMPLEMENTATION OF SATIETY
Detection of food is represented by the activation of a single binary unit. Activation of this unit
decreases the activation of the excitatory subsystem, which in turn brings a short-term decrease
in exploration. This corresponds to satiety; once food has been found, the immediate need for
food decreases, thus exploration should decrease.
4.2 IMPLEMENTATION OF CAUTION
Detection of a predator is represented by the activation of a single binary node that is positively
linked to the inhibitory subsystem. Activation of this unit has two effects. First, it causes an
increase in the activation of the inhibitory subsystem, which results in a pronounced short-term
decrease in exploration. Second, it decreases the decay on the inhibitory subsystem. This has the
long-term effect of decreasing the rate at which exploration is disinhibited; in other words, every
encounter with a predator causes Wanderer to be more "cautious". Decay on the inhibitory
subsystem is updated each iteration as follows:
δ = 0.2
k1_min = 0.5
k1_t = max{ k1_min , k1_{t−1} + δ·[s1_t − s1_{t−1}] }
Since activation of the inhibitory subsystem increases in response to predation, decay
increases if a predator appears, and decreases when no predator is present.
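A minimal Python sketch of this decay update, using the constants given above; the example activation values are invented for illustration.

def update_decay(k1_prev, s1_now, s1_prev, delta=0.2, k1_min=0.5):
    # The decay term tracks recent changes in the inhibitory subsystem's activation:
    # it rises when a predator encounter pushes s1 up, and drifts back otherwise.
    return max(k1_min, k1_prev + delta * (s1_now - s1_prev))

# A predator encounter raises s1 from 0.4 to 1.2, pushing the decay term up.
print(update_decay(0.5, 1.2, 0.4))   # ~0.66
print(update_decay(0.66, 0.6, 1.2))  # falls back toward k1_min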
5. LEARNING ALGORITHM
If a neutral feature — rock, tree, or sun — is present when a salient feature — food or predator
— appears, an association forms between the neutral feature and subsystem that is positively
linked with the salient feature. Weights on the lines between neutral features and hidden nodes
are initialized to zero, corresponding to the state in which no associations, either positive or
negative, have formed. If food or a predator appears in the environment, and if one or more
initially-neutral features (pi) is present, weights on links connecting initially-neutral features to
subsystems are updated as follows:
η = learning rate = 0.05
w_t = w_{t−1} + η · |s_t − s_{t−1}| · a_i
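A direct transcription of this update rule in Python; the example numbers are chosen to reproduce the 0.0 to 0.015 weight change reported in Section 6 and are otherwise illustrative.

def update_weight(w_prev, s_now, s_prev, a_i, eta=0.05):
    # A neutral feature that is present (a_i = 1) when a subsystem's activation
    # changes (food or predator appears) gains association with that subsystem.
    return w_prev + eta * abs(s_now - s_prev) * a_i

# Tree present (a_i = 1) while the subsystem's activation changes by 0.3:
print(update_weight(0.0, 0.6, 0.9, 1))  # ~0.015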
6. RESULTS
Figure 2 plots exploration during a run in which the probability of finding food is high and the
probability of predation is zero. Exploration increases quickly initially as activation of the
excitatory subsystem increases and the activation of the inhibitory subsystem decreases. It falls
sharply whenever food is encountered, and then gradually increases again. Each time food is
encountered exploration falls to the same level.
Figure 2. Exploration when p(food) is high and p(predator) = 0.0.
Figure 3 plots exploration during a run in which the probability of finding food is zero and
the probability of predation is high. The appearance of a predator causes activation of the
inhibitory subsystem to increase, temporarily decreasing exploration. Since no food is present,
activation of the excitatory subsystem is high, and exploration quickly resumes. Two more
predators are encountered in quick succession. With each encounter, the response is greater,
representing an increase in the assessed probability of predation in the current location.
Exploration ceases after the third encounter.
Figure 3. Exploration when p(predator) is high and p(food) = 0.0.
Figure 4 illustrates the effect of associative learning in the presence of food when there are
no predators. The exploration curve is less regular. Rocks are present at the beginning of the run,
but disappear before food is found. Food never appears unless a tree is present. Food is first
found during iteration 24, and exploration drops sharply. At this point associations are formed
between food and both the tree and the sun, and the weights on the lines from feature detection
units 3 and 4 to subsystem 1 increase (from 0.0 to 0.015). Note that the association between sun
and food is spurious: the sun is not actually predictive of food. Exploration increases until it
reaches a plateau. It drops sharply when food is encountered during iteration 80 since at the same
time one of the cues predictive of food, the sun, disappears. (Not only is it no longer hungry, but
there is indication that there is no food around anyway.) During iteration 84, the tree, the other
feature that has been associated with food, disappears as well. Thus exploration increases very
slowly. Since little exploration is taking place, features of the environment change little. Rocks
appear in iteration 155, but since no association has been formed between rocks and food, this
has no effect on exploration. Exploration increases sharply for a brief period between iterations
176 and 183 when the sun comes out, and then again at iteration 193 when trees appear. It
plummets once again in iteration 195, with the final appearance of food.
Figure 4. Effect of associative learning when p(food) is high and p(predator) = 0.0. Below: Black bar indicates presence of neutral feature; white bar indicates absence.
The effect of associative learning on response to predation is illustrated in Figure 5. (In this
experiment, decay on the inhibitory subsystem is held constant so that exploration does not fall
quickly to zero despite the high predation rate.) Response to predation grows increasingly
variable throughout the run, reflecting the extent to which features that have become associated
with predation are present at the time a predator appears. Wanderer eventually associates all
three features of its environment with predators, and none with food. Since two of the three
features are present, exploration stops at iteration 179 and does not resume by the 200th iteration.
Since Wanderer is not moving, there is no further change in the neutral features until the end of
the run.
Figure 5. Effect of associative learning when p(predator) is high and p(food) = 0.0. Below: Black bar indicates presence of neutral feature; white bar indicates absence.
The effect of associative learning in the presence of both food and predators is illustrated in
Figure 6. Rocks become associated with predators, and trees become associated with food, as
expected. (After 200 iterations the weight on the line from Perception Unit 2 to Subsystem 0 is
0.091, and the weight on the line from Perception Unit 3 to Subsystem 1 is 0.170.) However,
spurious associations also form between neutral features and salient features (with weights on the
relevant lines ranging from 0.046 to 0.112).
Figure 6. Effect of associative learning when both p(food) and p(predator) are high. Below: Black bar indicates presence of neutral feature; white bar indicates absence.
7. DISCUSSION
Wanderer is a simple qualitative model of the mechanisms underlying how animals coordinate
internal needs with external affordances. It does not address a number of real-world
complexities: the perceptual inputs are ungrounded, and the problems associated with actively
moving about in a real environment are bypassed. Since weights never decrease under the delta
rule, once associations are formed, they cannot be unlearned. However, Wanderer responds to
events in a way that is adaptive in the short term, and reassesses the probabilities of these events
so that it can modify its long term behaviour appropriately. When food appears, Wanderer
becomes satiated and temporarily decreases exploration. When a predator appears, Wanderer
both temporarily decreases exploration to avoid being caught, and becomes more cautious in the
near future. When predators are not encountered, Wanderer becomes less cautious. Wanderer
also forms associations between neutral features of its environment and salient features
(predators and food). Since in real environments, features are clustered — neutral features often
provide reliable clues regarding the proximity of predators and food — association-forming of
this kind can help to optimize behaviour.
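To make the learning mechanism concrete, the following is a minimal illustrative sketch (in Python, not the original code) of a delta-rule update in which weights from currently active feature detectors to a salient-event subsystem can only grow. The learning rate and feature names are hypothetical, chosen only to mirror the behaviour described above.

```python
# Hypothetical sketch of the "weights never decrease" delta-rule update
# described above; the learning rate and feature names are illustrative.
def update_associations(weights, active_features, salient_event_present, lr=0.005):
    """Strengthen links from co-active feature detectors to a salient-event
    unit (food or predator); changes are clipped so weights never decrease."""
    if not salient_event_present:
        return weights
    for feature in active_features:
        w = weights.get(feature, 0.0)
        delta = lr * (1.0 - w)                   # delta rule toward a target of 1.0
        weights[feature] = w + max(0.0, delta)   # associations are never unlearned
    return weights

# Example: tree and sun are present when food is first found
food_weights = {}
food_weights = update_associations(food_weights, ["tree", "sun"], True)
print(food_weights)  # both associations grow, including the spurious sun-food link
```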
In summary, this paper illustrates how an animal can be built using a distributed approach
in which sub-systems specialized to take care of different needs coordinate internal signals with
moment-by-moment assessments of probabilities of events in the external world. The relative
activations of these subsystems determine the extent to which the animal moves about or
explores its environment.
ACKNOWLEDGEMENTS
I would like to thank Mike Gasser for discussion and Peter Todd for comments on the
manuscript. I would also like to thank the Center for the Study of the Evolution and Origin of
Life (CSEOL) at UCLA for support.
REFERENCES
Beer, R. D. and H. J. Chiel. (1990). The neural basis of behavioral choice in an artificial insect.
In J. A. Meyer and S. W. Wilson (Eds.) From Animals to Animats: Proceedings of the First
International Conference on the Simulation of Adaptive Behaviour, 247-254. London:
MIT Press.
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT
Press.
Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of
Robotics and Automation RA-2, 1, 253-262.
Dember, W. N. and R. W. Earl. (1957). Analysis of exploratory, manipulatory, and curiosity
behaviours. Psychological Review 64, 91-96.
Gabora, L. M. and P. W. Colgan. (1990). A model of the mechanisms underlying exploratory
behaviour. In J. A. Meyer and S. W. Wilson (Eds.) From Animals to Animats: Proceedings
of the First International Conference on the Simulation of Adaptive Behaviour, 475-484.
London: MIT Press.
Gigerenzer, G. and D. J. Murray. (1987). Cognition as intuitive statistics. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Hull, C. L. (1943). The problem of intervening variables in molar behavior theory. Psychological
Review, 50, 273-291.
Maes, P. (1989). The dynamics of action selection. In Proceedings of the Eleventh International
Joint Conference on AI (IJCAI-89), 991-997. Morgan Kaufmann Publishers.
May, R. B. (1963). Stimulus selection in preschool children under conditions of free choice.
Perceptual and Motor Skills, 16, 200-206.
McCall, R. B. (1974). Exploratory manipulation and play in the human infant. Monographs of
the Society for Research in Child Development, 39 (No. 2).
McFarland, D. J. and R. M. Sibly. (1975). The behavioural final common path. Philosophical
Transactions of the London Royal Society, 270B, 265-293.
Roitblat, H. L. (1987). Introduction to comparative cognition. New York: Freeman.
Tolman, E. C. (1932). Purposive behaviour in animals and men. New York: Century.
Weisler, A. and McCall, R. (1976). Exploration and play: resume and redirection. American
Psychologist 31, 492-508.
Welker, W. I. (1956a). Some determinants of play and exploration in chimpanzees. Journal of
Comparative Physiological Psychology 49, 84-89.
Welker, W. I. (1956b). Variability of play and exploratory behaviour in chimpanzees. Journal of
Comparative Physiological Psychology 49, 181-185.
Wilson, S. W. (1990). The animat path to AI. In J. A. Meyer and S. W. Wilson (Eds.) From
Animals to Animats: Proceedings of the First International Conference on the Simulation
of Adaptive Behaviour, 247-254. London: MIT Press.
| 9 |
arXiv:1610.07253v1 [] 24 Oct 2016
DUAL ORE’S THEOREM FOR DISTRIBUTIVE
INTERVALS OF SMALL INDEX
SEBASTIEN PALCOUX
Abstract. This paper proves a dual version of a theorem of Oystein Ore for every distributive interval of finite groups [H, G] of
index |G : H| < 9720, and for every boolean interval of rank < 7.
It has applications to representation theory for every finite group.
1. Introduction
Oystein Ore has proved that a finite group is cyclic if and only if its
subgroup lattice is distributive [3]. He has extended one side as follows:
Theorem 1.1 ([3]). Let [H, G] be a distributive interval of finite
groups. Then ∃g ∈ G such that ⟨Hg⟩ = G.
We have conjectured the following dual version of this theorem:
Conjecture 1.2. Let [H, G] be a distributive interval of finite groups.
Then ∃V irreducible complex representation of G, with G(V H ) = H
(Definition 3.1); this property will be called linearly primitive.
The interval [1, G] is linearly primitive if and only if G is linearly
primitive (i.e. admits a faithful irreducible complex representation).
We will see that Conjecture 1.2 reduces to the boolean case, because a
distributive interval is bottom boolean (i.e. the interval generated by its
atoms is boolean). As application, Conjecture 1.2 leads to a new bridge
between combinatorics and representation theory of finite groups:
Definition 1.3. Let [H, G] be any interval. We define the combinatorial invariant bbℓ(H, G) as the minimal length ℓ for a chain of subgroups
H = H0 < H1 < · · · < Hℓ = G
with [Hi , Hi+1 ] bottom boolean. Then, let bbℓ(G) := bbℓ(1, G).
Application 1.4. Assuming Conjecture 1.2, bbℓ(G) is a non-trivial
upper bound for the minimal number of irreducible complex representations of G generating (for ⊕ and ⊗) the left regular representation.
2010 Mathematics Subject Classification. 20D60, 05E15, 20C15, 06C15.
Key words and phrases. group; representation; lattice; distributive; boolean.
Remark 1.5. If the normal subgroups of G are also known, note that
cf ℓ(G) := min{bbℓ(H, G) | H core-free}
is a better upper bound. For more details on the applications, see [1, 4].
This paper is dedicated to prove Conjecture 1.2 for [H, G] boolean
of rank < 7, or distributive of index |G : H| < 9720. For so, we will
use the following new result together with two former results:
Theorem 1.6. Let [H, G] be a boolean interval and L a coatom with
|G : L| = 2. If [H, L] is linearly primitive, then so is [H, G].
Theorem 1.7 ([4]). A distributive interval [H, G] with
∑_{i=1}^{n} 1/|K_i : H| ≤ 2
for K_1, . . . , K_n the minimal overgroups of H, is linearly primitive.
Theorem 1.8 ([1]). A boolean interval [H, G] with a (below) nonzero
dual Euler totient, is linearly primitive.
ϕ̂(H, G) := ∑_{K∈[H,G]} (−1)^{ℓ(H,K)} |G : K|
Remark 1.9 ([1]). The Euler totient ϕ(H, G) = ∑_{K∈[H,G]} (−1)^{ℓ(K,G)} |K : H|
is the number of cosets Hg with ⟨Hg⟩ = G, so ϕ > 0 by Theorem 1.1;
but in general ϕ̂ ≠ ϕ. We extend ϕ to any distributive interval as
ϕ(H, G) = |T : H| · ϕ(T, G)
with [T, G] the top interval of [H, G], so that for n = ∏_i p_i^{n_i},
ϕ(1, Z/n) = ∏_i p_i^{n_i − 1} · (p_i − 1),
which is the usual Euler totient ϕ(n). Idem for ϕ̂ and bottom interval.
We will also translate our planar algebraic proof of Theorem 1.7 in
the group theoretic framework (one claim excepted).
Contents
1. Introduction 1
2. Preliminaries on lattice theory 3
3. A dual version of Ore's theorem 4
4. The proof for small index 7
5. Acknowledgments 15
References 15
2. Preliminaries on lattice theory
Definition 2.1. A lattice (L, ∧, ∨) is a partially ordered set (or poset)
L in which every two elements a, b have a unique supremum (or join)
a ∨ b and a unique infimum (or meet) a ∧ b.
Example 2.2. Let G be a finite group. The set of subgroups K ⊆ G
is a lattice, denoted by L(G), ordered by ⊆, with K1 ∨ K2 = ⟨K1, K2⟩
and K1 ∧ K2 = K1 ∩ K2.
Definition 2.3. A sublattice of (L, ∧, ∨) is a subset L′ ⊆ L such that
(L′ , ∧, ∨) is also a lattice. Let a, b ∈ L with a ≤ b, then the interval
[a, b] is the sublattice {c ∈ L | a ≤ c ≤ b}.
Definition 2.4. A finite lattice L admits a minimum and a maximum,
called 0̂ and 1̂.
Definition 2.5. An atom is an element a ∈ L such that
∀b ∈ L, 0̂ < b ≤ a ⇒ a = b.
A coatom is an element c ∈ L such that
∀b ∈ L, c ≤ b < 1̂ ⇒ b = c.
Definition 2.6. The top interval of a finite lattice L is the interval
[t, 1̂] with t the meet of all the coatoms. The bottom interval is the
interval [0̂, b] with b the join of all the atoms.
Definition 2.7. The length of a finite lattice L is the greatest length ℓ
of a chain 0 < a1 < a2 < · · · < aℓ = 1 with ai ∈ L.
Definition 2.8. A lattice (L, ∧, ∨) is distributive if ∀a, b, c ∈ L:
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
(or equivalently, ∀a, b, c ∈ L, a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)).
Lemma 2.9. The reverse lattice and the sublattices of a distributive
lattice are also distributive. Idem for concatenation and direct product.
Definition 2.10. A distributive lattice is called boolean if any element
b admits a unique complement b∁ (i.e. b ∧ b∁ = 0̂ and b ∨ b∁ = 1̂).
Example 2.11. The subset lattice of {1, 2, . . . , n}, for union and intersection, is called the boolean lattice Bn of rank n (see B3 below).
[Hasse diagram of B3: ∅ < {1}, {2}, {3} < {1, 2}, {1, 3}, {2, 3} < {1, 2, 3}.]
Remark 2.12. Any finite boolean lattice is isomorphic to some Bn .
Theorem 2.13 (Birkhoff’s representation theorem or FTFDL [5]).
Any finite distributive lattice embeds into a finite boolean lattice.
Corollary 2.14. The top and bottom intervals of a distributive lattice
are boolean.
Proof. See [5, items a-i p254-255], together with Lemma 2.9.
3. A dual version of Ore’s theorem
In this section, we will state the dual version of Ore’s theorem, and
prove it for any boolean interval of rank ≤ 4, after Theorem 1.7 proof.
Definition 3.1. Let W be a representation of a group G, K a subgroup
of G, and X a subspace of W . We define the fixed-point subspace
W K := {w ∈ W | kw = w , ∀k ∈ K}
and the pointwise stabilizer subgroup
G(X) := {g ∈ G | gx = x , ∀x ∈ X}
Lemma 3.2. [1, Section 3.2] Let G be a finite group, H, K two subgroups, V a complex representation of G and X, Y two subspaces. Then
(1) H ⊆ K ⇒ V K ⊆ V H
(2) X ⊆ Y ⇒ G(Y ) ⊆ G(X)
(3) V H∨K = V H ∩ V K
(4) H ⊆ G(V H )
(5) V G(V H ) = V H
(6) [H ⊆ K and V K ( V H ] ⇒ K 6⊆ G(V H )
Lemma 3.3. [1] Let V1 , . . . , Vr be the irreducible complex representations of a finite group G (up to equivalence), and H a subgroup. Then
|G : H| = ∑_{i=1}^{r} dim(V_i) dim(V_i^H).
Definition 3.4. An interval of finite groups [H, G] is called linearly
primitive if there is an irreducible complex representation V of G such
that G(V H ) = H.
Remark 3.5. The interval [1, G] is linearly primitive iff G is linearly
primitive (i.e. it admits an irreducible faithful complex representation).
The dual version of Ore’s Theorem 1.1 is the following:
Conjecture 3.6. A distributive interval [H, G] is linearly primitive.
Lemma 3.7. A boolean interval [H, G] of rank 1 is linearly primitive.
Proof. Note that [H, G] is of rank 1 iff H is a maximal subgroup of G.
Let V be a non-trivial irreducible complex representation of G with
V H 6= ∅, by Lemma 3.2 (4), H ⊆ G(V H ) . If G(V H ) = G then V must be
trivial (by irreducibility), so by maximality G(V H ) = H.
Lemma 3.8. [1, Lemma 3.37] An interval [H, G] is linearly primitive
if its bottom interval [H, B] is so (see Definition 2.6).
Proposition 3.9. An interval [H, G] satisfying
∑_{i=1}^{n} 1/|K_i : H| ≤ 1
with K1 , . . . , Kn the minimal overgroups of H, is linearly primitive.
Proof. First, by Lemmas 3.7, 3.8, we can assume n > 1. By assumption
∑_{i=1}^{n} |G : H|/|K_i : H| ≤ |G : H|, so ∑_{i=1}^{n} |G : K_i| ≤ |G : H|. Let V_1, . . . , V_r be
the irreducible complex representations of G. By Lemma 3.3
∑_{i=1}^{n} |G : K_i| = ∑_{i=1}^{n} ∑_{α=1}^{r} dim(V_α) dim(V_α^{K_i}) = ∑_{α=1}^{r} dim(V_α) [∑_{i=1}^{n} dim(V_α^{K_i})].
If ∀α, ∑_i V_α^{K_i} = V_α^H, then
∑_{i=1}^{n} dim(V_α^{K_i}) ≥ dim(V_α^H),
and so ∑_{i=1}^{n} |G : K_i| ≥ |G : H|, but ∑_{i=1}^{n} |G : K_i| ≤ |G : H|, then
∑_{i=1}^{n} |G : K_i| = |G : H|. So ∀α,
∑_{i=1}^{n} dim(V_α^{K_i}) = dim(V_α^H),
but for V_1 trivial, we get that n = ∑_{i=1}^{n} dim(V_1^{K_i}) = dim(V_1^H) = 1,
contradiction with n > 1.
Else there is α such that ∑_i V_α^{K_i} ⊊ V_α^H, then by Lemma 3.2 (6),
K_i ⊄ G(V_α^H) ∀i, which means that G(V_α^H) = H by minimality.
Corollary 3.10. If a subgroup H of G admits at most two minimal
overgroups, then [H, G] is linearly primitive. In particular, a boolean
interval of rank n ≤ 2 is linearly primitive.
Proof. ∑_i 1/|K_i : H| ≤ 1/2 + 1/2 = 1; the result follows by Proposition 3.9.
We can upgrade Proposition 3.9 in the distributive case as follows:
Theorem 3.11. A distributive interval [H, G] satisfying
∑_{i=1}^{n} 1/|K_i : H| ≤ 2
with K1 , . . . , Kn the minimal overgroups of H, is linearly primitive.
Proof. By Lemma 3.8, Corollaries 2.14 and 3.10, we can assume the
interval to be boolean of rank n > 2.
If ∃α such that
(⋆) ∑_{i,j, i≠j} V_α^{K_i ∨ K_j} ⊊ V_α^H
then by Lemma 3.2 (6), ∀i, j with i ≠ j, K_i ∨ K_j ⊄ G(V_α^H). If G(V_α^H) = H
then ok, else by the boolean structure and minimality ∃i such that
G(V_α^H) = K_i. Now L_i := K_i∁ (see Definition 2.10) is a maximal subgroup
of G, so by Lemma 3.7, there is β such that G(V_β^{L_i}) = L_i.
Claim: ∃Vγ ≤ Vα ⊗ Vβ such that Ki ∩ G(VγH ) , G(VγH ) ∩ Li ⊆ Ki ∩ Li .
Proof: See the first part of [4, Theorem 6.8] proof; it exploits (⋆) in a
tricky way (we put this reference because we didn’t find an argument
which avoids the use of planar algebras).
By H ⊆ G(VγH ) , distributivity and Claim, we conclude as follows:
G(VγH ) = G(VγH ) ∨ H = G(VγH ) ∨ (Ki ∧ Li ) = (G(VγH ) ∧ Ki ) ∨ (G(VγH ) ∧ Li )
⊆ (Ki ∧ Li ) ∨ (Ki ∧ Li ) = H ∨ H = H
Else, ∀α,
∑_{i,j, i≠j} V_α^{K_i ∨ K_j} = V_α^H.
∀k, ∀(i, j) with i ≠ j, ∃s ∈ {i, j} with s ≠ k, but V_α^{K_i ∨ K_j} ⊆ V_α^{K_s}, so
∑_{s≠k} V_α^{K_s} = V_α^H.
It follows that ∀i, ∀α,
∑_{j≠i} dim(V_α^{K_j}) ≥ dim(V_α^H).
Now if ∃α ∀i, V_α^{K_i} ⊊ V_α^H then (by Lemma 3.2 (6) and minimality)
G(V_α^H) = H. Else ∀α ∃i, V_α^{K_i} = V_α^H, but ∑_{j≠i} dim(V_α^{K_j}) ≥ dim(V_α^H), so
∑_j dim(V_α^{K_j}) ≥ 2 dim(V_α^H).
By using Lemma 3.3 and taking V1 trivial, we get
∑_i |G : K_i| = ∑_i [∑_α dim(V_α) dim(V_α^{K_i})] = ∑_α dim(V_α) [∑_i dim(V_α^{K_i})]
≥ n + 2 ∑_{α≠1} dim(V_α) dim(V_α^H) = 2|G : H| + (n − 2).
It follows that
∑_{i=1}^{n} 1/|K_i : H| ≥ 2 + (n − 2)/|G : H|
which contradicts the assumption because n > 2.
Corollary 3.12. A rank n boolean interval [H, G] with |Ki : H| ≥ n/2
for any minimal overgroup Ki of H, is linearly primitive. In particular,
a boolean interval of rank n ≤ 4 is linearly primitive.
Proof. ∑_i 1/|K_i : H| ≤ n × 2/n = 2; the result follows by Theorem 3.11.
In the next section, we get a proof at any rank n < 7.
4. The proof for small index
This section will prove dual Ore’s theorem, for any boolean interval of
rank < 7, and then for any distributive interval of index |G : H| < 9720.
Lemma 4.1. Let [H, G] be a boolean interval of rank 2 and let K, L
the atoms. Then (|G : K|, |G : L|) ≠ (2, 2) and (|K : H|, |L : H|) ≠ (2, 2).
Proof. If |G : K| = |G : L| = 2, then K and L are normal subgroups
of G, and so H = K ∧ L is also normal. So G/H is a group and
[1, G/H] = [H, G] as lattices, but a boolean lattice is distributive, so
by Ore’s theorem, G/H is cyclic; but it has two subgroups of index 2,
contradiction. If |K : H| = |L : H| = 2, then H is a normal subgroup
of K and L, so of G = K ∨ L, contradiction as above.
Note the following immediate generalization:
Remark 4.2. Let [H, G] be boolean of rank 2, with K and L the atoms.
• If H is a normal subgroup of K and L, then |K : H| ≠ |L : H|.
• If K and L are normal subgroups of G then |G : K| ≠ |G : L|.
Remark 4.3. Let G be a finite group and H, K two subgroups, then
|H| · |K| = |HK| · |H ∩ K| (Product Formula). It follows that
|H| · |K| ≤ |H ∨ K| · |H ∧ K|
Corollary 4.4. Let [H, G] be a boolean interval of finite groups and A
an atom. Any K1 , K2 ∈ [H, A∁ ] with K1 ⊂ K2 satisfy
|K1 ∨ A : K1 | ≤ |K2 ∨ A : K2 |
Moreover if |G : A∁ | = 2 then |K ∨ A : K| = 2, ∀K ∈ [H, A∁ ].
Proof. Suppose that K1 ⊂ K2 . By Remark 4.3,
|K1 ∨ A| · |K2 | ≤ |(K1 ∨ A) ∨ K2 | · |(K1 ∨ A) ∧ K2 |
but K1 ∩ K2 = K1 , K1 ∪ K2 = K2 and A ∧ K2 = H, so by distributivity
|K1 ∨ A| · |K2 | ≤ |K2 ∨ A| · |K1 |
Finally, A∁ ∨ A = G and ∀K ∈ [H, A∁ ], K ⊂ A∁ , so if |G : A∁ | = 2,
then
2 ≤ |K ∨ A : K| ≤ |A∁ ∨ A : A∁ | = 2,
It follows that |K ∨ A : K| = 2.
Lemma 4.5. Let [H, G] rank 2 boolean with K, L the atoms. Then
|K : H| = 2 ⇔ |G : L| = 2.
Proof. If |G : L| = 2 then |K : H| = 2 by Corollary 4.4.
If |K : H| = 2 then H ⊳ K and K = H ⊔ Hτ with τ H = Hτ and
(Hτ )2 = H, so Hτ 2 = H and τ 2 ∈ H. Now L ∈ (H, G) open, then
τ Lτ −1 ∈ (τ Hτ −1 , τ Gτ −1 ) = (H, G), so by assumption τ Lτ −1 ∈ {K, L}.
If τ Lτ −1 = K, then L = τ −1 Kτ = K, contradiction. So τ Lτ −1 = L.
Now H = Hτ 2 ⊂ Lτ 2 , and τ 2 ∈ H ⊂ L, so Lτ 2 = L. It follows that
hL, τ i = L ⊔ Lτ . But by assumption, G = hL, τ i, so |G : L| = 2.
Corollary 4.6. If a boolean interval [H, G] admits a subinterval [K, L]
of index 2, then there is an atom A with L = K ∨ A and |G : A∁ | = 2.
Proof. Let [K, L] be the edge of index |L : K| = 2. By the boolean
structure, there is an atom A ∈ [H, G] such that L = K ∨ A. Let
K = K1 < K2 < · · · < Kr = A∁
be a maximal chain from K to A∁ . Let Li = Ki ∨ A, then the interval
[Ki , Li+1 ] is boolean of rank 2, now |L1 : K1 | = 2, so by Lemma 4.5
2 = |L1 : K1 | = |L2 : K2 | = · · · = |Lr : Kr | = |G : A∁ |.
Remark 4.7. Let [H, G] of index |G : H| = 2. Then G = H ⋊ Z/2 if
|H| is odd, but it’s not true in general if |H| even1.
The following theorem was pointed out by Derek Holt2.
Theorem 4.8. Let G be a finite group, N a normal subgroup of prime
index p and π an irreducible complex representation of N. Exactly one
of the following occurs:
(1) π extends to an irreducible representation of G,
(2) Ind_N^G(π) is irreducible.
Proof. It is a corollary of Clifford theory, see [2] Corollary 6.19.
Theorem 4.9. Let [H, G] be a boolean interval and L a coatom with
|G : L| = 2. If [H, L] is linearly primitive, then so is [H, G].
Proof. Let the atom A := L∁ . As an immediate corollary of the proofs
of Lemma 4.5 and Corollary 4.6, there is τ ∈ A such that ∀K ∈ [H, L],
Kτ = τ K and τ 2 ∈ H ⊂ K, so K ∨ A = K ⊔ Kτ and G = L ⊔ Lτ . By
assumption, [H, L] is linearly primitive, which means the existence of
an irreducible complex representation V of L such that L(V H ) = H.
Assume that πV extends to an irreducible representation πV+ of G.
Note that G(V+H ) = H ⊔ Sτ with
S = {l ∈ L | πV+ (lτ ) · v = v, ∀v ∈ V H }
If S = ∅ then G(V+H ) = H, ok. Else S 6= ∅ and note that
πV+ (lτ ) · v = v ⇔ πV+ (τ ) · v = πV (l−1 ) · v
but πV+ (τ )(V H ) ⊂ V H and τ 2 ∈ H, so ∀l1 , l2 ∈ S and ∀v ∈ V H ,
πV (l1 l2 )−1 · v = πV+ (τ 2 ) · v = v
It follows that S 2 ⊂ H. Now, HS = S, so HS 2 = (HS)S = S 2 ,
which means that S 2 is a disjoint union of H-coset, then |H| divides
|S 2 |, but S 2 ⊂ H and S 6= ∅, so S 2 = H. Let s0 ∈ S, then the maps
S ∋ s 7→ s0 s ∈ H and H ∋ h 7→ hs0 ∈ S are injective, so |S| = |H|. If
S 6= H, then A = H ⊔Hτ and G(V+H ) = H ⊔Sτ are two different groups
containing H with index 2, contradiction with the boolean structure
by Lemma 4.1. So we can assume that H = S. Now the extension
V+ is completely characterized by πV+ (τ ), and we can make an other
irreducible extension V− characterized by πV− (τ ) = −πV+ (τ ). As above,
G(V−H ) = H ⊔ S ′ τ with
S ′ = {l ∈ L | πV− (lτ ) · v = v, ∀v ∈ V H }.
1http://math.stackexchange.com/a/1609599/84284
2http://math.stackexchange.com/a/1966655/84284
But πV− (lτ ) = −πV+ (lτ ), so
S ′ = {l ∈ L | πV+ (lτ ) · v = −v, ∀v ∈ V H }.
Then S ∩ S ′ = ∅, but S = H, so S ′ 6= H, contradiction as above.
Next, we can assume that π_V does not extend to an irreducible representation of G. So π_W := Ind_L^G(π_V) is irreducible by Theorem 4.8.
We need to check that G(W H ) = H. We can see W as V ⊕ τ V , with
πW (l) · (v1 + τ v2 ) = πV (l) · v1 + τ [πV (τ −1 lτ ) · v2 ],
with l ∈ L, and
πW (τ ) · (v1 + τ v2 ) = πV (τ 2 ) · v2 + τ v2
Then
W H = {v1 +τ v2 ∈ W | πV (h)·v1 = v1 and πV (τ −1 hτ )·v2 = v2 , ∀h ∈ H}
But τ −1 Hτ = H, so W H = V H ⊕ τ V H . Finally, according to πW (l)
and πW (τ ) above, we see that G(W H ) ⊂ L, and then G(W H ) = H.
Remark 4.10. It seems that we can extend Theorem 4.9, replacing
|G : L| = 2 by L⊳G (and so |G : L| = p prime), using Theorem 4.8 and
Remark 4.2. In the proof, we should have K ∨A = K ⊔Kτ ⊔· · ·⊔Kτ p−1 ,
τ p ∈ H, S p = H and πV− (τ ) = e2πi/p πV+ (τ ). We didn’t check the details
because we don’t need this extension.
Corollary 4.11. Let [H, G] be a boolean interval with an atom A satisfying |A : H| = 2. If [H, A∁ ] is linearly primitive, then so is [H, G].
Proof. Immediate by Corollary 4.6 and Theorem 4.9.
One of the main results of the paper is the following:
Theorem 4.12. A boolean interval [H, G] of rank n < 7, is linearly
primitive.
Proof. Let K1 , . . . , Kn be the atoms of [H, G]. By Corollary 4.11, we
can assume that |K_i : H| ≠ 2, ∀i. Now n ≤ 6 and |K_i : H| ≥ 3, then
∑_{i=1}^{n} 1/|K_i : H| ≤ 6 × 1/3 = 2.
The result follows by Theorem 3.11.
For the upper bound on the index of distributive interval we will
need a former result (proved group theoretically in [1]):
Theorem 4.13. [1, Theorem 3.24] A boolean interval [H, G] with a
(below) nonzero dual Euler totient is linearly primitive.
ϕ̂(H, G) := ∑_{K∈[H,G]} (−1)^{ℓ(H,K)} |G : K|
with ℓ(H, K) the rank of [H, K].
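As a sanity check on this definition, here is a small Python sketch (illustrative only, not part of the paper's GAP computations) that evaluates ϕ̂ directly over a boolean interval; the lattice elements are modelled as subsets of atoms, which is valid in the boolean case, and index_of is an assumed input giving |G : K_S| for each element K_S.

```python
from itertools import combinations

# Minimal sketch: dual Euler totient of a boolean interval [H, G] of rank n,
# given index_of(S) = |G : K_S| for the element K_S generated by the subset S
# of atoms.  In the boolean case the rank ℓ(H, K_S) is |S|.
def dual_euler_totient(n, index_of):
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * index_of(frozenset(S))
    return total

# Example with every edge of index p (cf. Lemma 4.17): |G : K_S| = p**(n - |S|),
# so the value should equal (p - 1)**n.
p, n = 3, 4
print(dual_euler_totient(n, lambda S: p ** (n - len(S))))  # 16 == (3 - 1)**4
```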
Conjecture 4.14. A rank n boolean interval has ϕ̂ ≥ 2n−1 .
Remark 4.15. If Conjecture 4.14 is correct, then its lower bound is
optimal, because realized by the interval [1 × S2n , S2 × S3n ].
Lemma 4.16. Let [H, G] be a boolean interval of rank n and index ∏_i p_i^{r_i} with p_i prime and ∑_i r_i = n. Then for any atom A and any K ∈ [H, A∁], |K ∨ A : K| = p_i for some i.
Proof. Let A_1, . . . , A_r be the atoms of [H, G] such that K = ⋁_{i=1}^{r} A_i, let A_{r+1} = A and A_{r+2}, . . . , A_n be all the other atoms. By considering the corresponding maximal chain we have that
|G : H| = |A_1 : H| · |A_1 ∨ A_2 : A_1| · · · |K ∨ A : K| · · · |G : A_n∁|.
This is a product of n integers > 1, and |G : H| is a product of n primes, so by the fundamental theorem of arithmetic each factor above is prime; then |K ∨ A : K| = p_i for some i.
Lemma 4.17. Let [H, G] be a boolean interval of rank n and index p^n with p prime. Then ϕ̂(H, G) = (p − 1)^n > 0.
Proof. By Lemma 4.16, ϕ̂(H, G) = ∑_k (−1)^k \binom{n}{k} p^{n−k} = (p − 1)^n.
Remark 4.18. Lemma 4.17 is coherent with Conjecture 4.14 because
if p = 2 then n = 1 by Lemma 4.1.
Proposition 4.19. Let [H, G] be a boolean interval of rank n and index
p^{n−1} q, with p, q prime and p ≤ q. Then
ϕ̂(H, G) = (p − 1)^n [1 + (q − p)/p · (1 − 1/(1 − p)^m)] ≥ (p − 1)^n > 0,
with m the number of coatoms L ∈ [H, G] with |G : L| = q.
Proof. If m = 0, then by Lemma 4.16, Corollary 4.4 and p ≤ q, for any
atom A ∈ [H, G] and ∀K ∈ [H, A∁ ], |K ∨ A : K| = p, so |G : H| = pn
and ϕ̂(H, G) = (p − 1)n by Lemma 4.17, ok.
Else m ≥ 1. We will prove the formula by induction. If n = 1, then
m = 1 and ϕ̂(H, G) = q − 1, ok. Next, assume it is true at rank < n.
Let L be a coatom with |G : L| = q, then for A = L∁ ,
ϕ̂(H, G) = q ϕ̂(H, L) − ϕ̂(A, G)
Now |L : H| = pn−1 so by Lemma 4.17, ϕ̂(H, L) = (p − 1)n−1 . But
|A : H| = p or q. If |A : H| = p then |G : A| = p^{n−2} q and by induction
ϕ̂(A, G) = (p − 1)^{n−1} [1 + (q − p)/p · (1 − 1/(1 − p)^{m−1})].
Else |A : H| = q, |G : A| = pn−1 , m = 1 and the same formula works.
Then
ϕ̂(H, G) = (p − 1)^{n−1} [q − 1 − (q − p)/p · (1 − 1/(1 − p)^{m−1})]
= (p − 1)^n [(q − 1)/(p − 1) + (q − p)/p · (1/(1 − p) − 1/(1 − p)^m)]
= (p − 1)^n [(q − 1)/(p − 1) − (q − p)/p · (1 + 1/(p − 1)) + (q − p)/p · (1 − 1/(1 − p)^m)]
= (p − 1)^n [1 + (q − p)/p · (1 − 1/(1 − p)^m)]
The result follows.
Definition 4.20. A chain H1 ⊂ · · · ⊂ Hr+1 is of type (k1 , . . . , kr ) if
∃σ ∈ Sr with kσ(i) = |Hi+1 : Hi | (so that we can choose (ki )i increasing).
Remark 4.21. The proof of Proposition 4.19 works without assuming p, q prime, but assuming type (p, . . . , p, q) for every maximal chain of [H, G]. For p prime and q = p^2 we deduce that at rank n and index p^{n+1}, there is 1 ≤ m ≤ n such that
ϕ̂(H, G) = (p − 1)^{n+1} + (p − 1)^n − (−1)^m (p − 1)^{n+1−m} ≥ (p − 1)^{n+1}.
If there is no edge of index 2, we can also take q = 2p or (p, q) = (3, 4).
Lemma 4.22. A boolean interval [H, G] of index |G : H| = a^n bc and
rank n+2 with 3 ≤ a ≤ b ≤ c ≤ 12, 1 ≤ n ≤ 6 and every maximal chain
of type (a, . . . , a, b, c), has a dual Euler totient ϕ̂(H, G) ≥ (a − 1)^{n+2}.
Proof. This is checked by computer calculation using the following iterative method. Let L be a coatom such that |G : L| = c and A = L∁.
Then ϕ̂(H, G) = c ϕ̂(H, L) − ϕ̂(A, G). Now |L : H| = a^n b so we can use
the formula of Proposition 4.19 for ϕ̂(H, L). Next there are three cases:
|A : H| = a, b or c. If |A : H| = c then, by Corollary 4.4, ∀K ∈ [H, L],
|K ∨ A : K| = c, so ϕ̂(H, G) = (c − 1)ϕ̂(H, L). If |A : H| = b, then
|G : A| = a^n c so we can use the formula of Proposition 4.19 for ϕ̂(A, G).
Else |A : H| = a and |G : A| = a^{n−1} bc, so we iterate the method.
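The iterative method just described can be sketched as follows (Python, illustrative only; it is not the original computation). It assumes, as in the lemma, chains of type (a, . . . , a, b, c), uses the closed formula of Proposition 4.19, and spot-checks only the two non-recursive cases of the proof for the index 3^5·4·8 needed later in Theorem 4.29; the case |A : H| = a iterates the same step one rank lower.

```python
# phi419: closed formula of Proposition 4.19 for an interval of index p^k * q
# whose maximal chains have type (p, ..., p, q), with m coatoms of index q.
# Recursion step of the proof: phi_hat(H, G) = c * phi_hat(H, L) - phi_hat(A, G).
def phi419(p, q, k, m):
    if m == 0:
        return (p - 1) ** k
    return (p - 1) ** k * (1 + (q - p) / p * (1 - 1 / (1 - p) ** m))

a, b, c, n = 3, 4, 8, 5            # index a^n * b * c, rank n + 2, bound (a-1)^(n+2)
bound = (a - 1) ** (n + 2)
candidates = []
for m in range(1, n + 2):           # phi_hat(H, L) with |L : H| = a^n * b
    hl = phi419(a, b, n, m)
    candidates.append((c - 1) * hl)                       # case |A : H| = c
    for m2 in range(1, n + 2):                            # case |A : H| = b
        candidates.append(c * hl - phi419(a, c, n, m2))
print(min(candidates), ">=", bound)  # 208.0 >= 128 for these two cases
```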
Remark 4.23. Let [H, G] be a boolean interval and A an atom such
that ∀K ∈ [H, A∁ ], |K ∨ A : K| = |A : H|. So ϕ̂(H, A∁ ) = ϕ̂(A, G) and
ϕ̂(H, G) = |A : H|ϕ̂(H, A∁ ) − ϕ̂(A, G) = (|A : H| − 1)ϕ̂(A, G).
Corollary 4.24. Let [H, G] be a boolean interval such that for any
atom A and ∀K ∈ [H, A∁], |K ∨ A : K| = |A : H|. Then
ϕ̂(H, G) = ∏_{i=1}^{n} (|A_i : H| − 1) > 0,
with A1 , . . . , An all the atoms of [H, G].
Proof. By Remark 4.23 and induction.
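The product formula can also be checked directly against the definition of ϕ̂: under the hypothesis of the corollary, the element generated by a subset S of atoms has index ∏_{i∉S} |A_i : H| in G, so ϕ̂ is an inclusion-exclusion sum that factors. A small Python sketch (illustrative only, with hypothetical atom indices):

```python
from itertools import combinations
from functools import reduce

# Under the hypothesis of Corollary 4.24, |G : K_S| is the product of |A_i : H|
# over the atoms A_i not in S, so phi_hat factors as prod (|A_i : H| - 1).
def phi_hat_from_atom_indices(atom_indices):
    n = len(atom_indices)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            outside = [atom_indices[i] for i in range(n) if i not in S]
            total += (-1) ** k * reduce(lambda x, y: x * y, outside, 1)
    return total

indices = [3, 4, 5, 7]                                  # hypothetical atom indices
print(phi_hat_from_atom_indices(indices))               # 144
print((3 - 1) * (4 - 1) * (5 - 1) * (7 - 1))            # 144, matching the product
```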
Lemma 4.25. Let [H, G] boolean of rank 2 and index < 32. Let K, L
be the atoms, a = |G : K|, b = |G : L|, c = |L : H| and d = |K : H|.
[Diagram: the rank-2 interval H < K, L < G with edge indices a = |G : K|, b = |G : L|, c = |L : H|, d = |K : H|.]
If a ≠ 7, then (a, b) = (c, d).
If a = 7 and a ≠ c, then a = b = 7 and c = d ∈ {3, 4}.
Proof. We can check by GAP3 that there are exactly 241 boolean intervals [H, G] of rank 2 and index |G : H| < 32 (up to equivalence).
They all satisfy (a, b) = (c, d), except [D8 , P SL2 (7)] and [S3 , P SL2 (7)],
for which (a, b) = (7, 7) and (c, d) = (3, 3) or (4, 4).
Corollary 4.26. Let [H, G] be a boolean interval having a maximal
chain such that the product of the index of two different edges is < 32,
and no edge has index 7. Then [H, G] satisfies Corollary 4.24.
Proof. Consider such a maximal chain
H = K0 ⊂ K1 ⊂ · · · ⊂ Kn = G
and A1 , . . . , An the atoms of [H, G] such that Ki = Ki−1 ∨ Ai . Now, ∀i
and ∀j < i, [Kj−1, Kj ∨ Ai ] is boolean of rank 2, so by Lemma 4.25,
|Ki : Ki−1 | = |Ki−2 ∨ Ai : Ki−2 | = |Ki−3 ∨ Ai : Ki−2 | = · · · = |Ai : H|
Next, ∀i and ∀j ≥ i, let Lj−1 = Kj ∧ A∁i , then [Lj , Kj+2 ] is boolean of
rank 2 and by Lemma 4.25,
|Ki : Ki−1 | = |Ki+1 : Li | = |Ki+2 : Li+1 | = · · · = |G : A∁i |
Finally, by Corollary 4.4, ∀K ∈ [H, A∁i ],
|Ai : H| ≤ |K ∨ Ai : K| ≤ |G : A∁i |
but |Ai : H| = |Ki : Ki−1 | = |G : A∁i |; the result follows.
3The
GAP Group, http://www.gap-system.org, version 4.8.3, 2016.
Remark 4.27. A combinatorial argument could replace the use of
Corollary 4.4 in the proof of Corollary 4.26.
Remark 4.28. Here is the list of all the numbers < 10125 which are
products of at least seven integers ≥ 3; first with exactly seven integers:
2187 = 3^7, 2916 = 3^6·4, 3645 = 3^6·5, 3888 = 3^5·4^2, 4374 = 3^6·6, 4860 = 3^5·4·5, 5103 = 3^6·7, 5184 = 3^4·4^3, 5832 = 3^6·8, 6075 = 3^5·5^2, 6480 = 3^4·4^2·5, 6561 = 3^6·9, 6804 = 3^5·4·7, 6912 = 3^3·4^4, 7290 = 3^6·10, 7776 = 3^5·4·8, 8019 = 3^6·11, 8100 = 3^4·4·5^2, 8505 = 3^5·5·7, 8640 = 3^3·4^3·5, 8748 = 3^6·12, 9072 = 3^4·4^2·7, 9216 = 3^2·4^5, 9477 = 3^6·13, 9720 = 3^5·4·10;
next with exactly eight integers: 6561 = 3^8, 8748 = 3^7·4; nothing else.
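The list above can be reproduced with a short search; the following Python sketch (illustrative, not part of the paper) enumerates the integers below 10125 that are products of at least seven factors, each at least 3.

```python
# Enumerate n < 10125 that are products of at least 7 integers >= 3.
def max_factors(n, smallest=3):
    """Max number of factors (each >= smallest) in a factorisation of n; 0 if none."""
    best = 1 if n >= smallest else 0
    d = smallest
    while d * d <= n:
        if n % d == 0:
            rest = max_factors(n // d, d)   # keep factors non-decreasing
            if rest:
                best = max(best, 1 + rest)
        d += 1
    return best

hits = [n for n in range(3 ** 7, 10125) if max_factors(n) >= 7]
print(hits)   # the 25 numbers 2187, 2916, ..., 9720 listed above
```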
We can now prove the main theorem of the paper:
Theorem 4.29. A distributive interval [H, G] of index |G : H| < 9720,
is linearly primitive.
Proof. By Lemma 3.8, Corollary 2.14 and Theorem 4.12, we can assume
the interval to be boolean of rank n ≥ 7, and without edge of index
2 by Corollary 4.6 and Theorem 4.9. So by Theorem 4.13, it suffices
to check that for every index (except 9720) in the list of Remark 4.28,
any boolean interval as above with this index has a nonzero dual Euler
totient. We can assume the rank to be 7, because at rank 8, the indices
3^8 and 3^7·4 are checked by Lemma 4.17 and Remark 4.21, and there is
nothing else at rank > 8. Now, any maximal chain for such a boolean
interval of index 3^5·4·5 has type (3, . . . , 3, 4, 5), so it is checked by Corollary 4.26. Idem for index 3^6·10 with (3, . . . , 3, 10) or (3, . . . , 3, 5, 6). The
index 3^6·7 is checked by Proposition 4.19. For the index 3^6·12, if there is
a maximal chain of type (3, . . . , 3, 6, 6), then 6^2 > 32, but using Lemma 4.25
with a, b, c, d ∈ {3, 6} we can deduce that (a, b) = (c, d), so the proof
of Corollary 4.26 works; else 12 must appear in every maximal
chain, so that the proof of Proposition 4.19 works with q = 12. We can
do the same for every index, except 3^5·4·7, 3^5·4·8, 3^5·5·7, 3^4·4^2·7, 3^5·4·10.
For index 3^5·4·8, if there is a maximal chain of type (3, . . . , 3, 4, 4, 6),
then ok by Corollary 4.26, else (because there is no edge of index 2)
every maximal chain is of type (3, . . . , 3, 4, 8), so ok by Lemma 4.22.
We can do the same for every remaining index except 3^5·4·10 = 9720,
the expected upper bound.
Remark 4.30. The tools above don't check 3^5·4·10 because the possible maximal chain types are (3, . . . , 3, 4, 5, 6), (3, . . . , 3, 4, 10) and
(3, . . . , 3, 5, 8). The first is ok by Corollary 4.26, but not the last two,
because 4 · 10 = 5 · 8 = 40 > 32. So there is not necessarily a unique
maximal chain type, and Lemma 4.22 can't be applied. Nevertheless,
more intensive computer investigation can probably lead beyond 9720.
5. Acknowledgments
I would like to thank Derek Holt for showing me a theorem on representation theory used in this paper. This work is supported by the
Institute of Mathematical Sciences, Chennai.
References
[1] Mamta Balodi and Sebastien Palcoux, On boolean intervals of finite groups,
arXiv:1604.06765v5, 25pp. submitted to Trans. Amer. Math. Soc.
[2] I. Martin Isaacs, Character theory of finite groups, Dover Publications, Inc.,
New York, 1994. Corrected reprint of the 1976 original [Academic Press, New
York; MR0460423 (57 #417)]. MR1280461
[3] Oystein Ore, Structures and group theory. II, Duke Math. J. 4 (1938), no. 2,
247–269, DOI 10.1215/S0012-7094-38-00419-3. MR1546048
[4] Sebastien Palcoux, Ore’s theorem for cyclic subfactor planar algebras and applications, arXiv:1505.06649v10, 50pp. submitted to Pacific J. Math.
[5] Richard P. Stanley, Enumerative combinatorics. Volume 1, 2nd ed., Cambridge
Studies in Advanced Mathematics, vol. 49, Cambridge University Press, Cambridge, 2012. MR2868112
Institute of Mathematical Sciences, Chennai, India
E-mail address: [email protected]
| 4 |
arXiv:1303.5532v1 [] 22 Mar 2013
N6 PROPERTY FOR THIRD VERONESE EMBEDDINGS
THANH VU
Abstract. The rational homology groups of the matching complexes are
closely related to the syzygies of the Veronese embeddings. In this paper we
will prove the vanishing of certain rational homology groups of matching complexes, thus proving that the third Veronese embeddings satisfy the property
N6 . This settles the Ottaviani-Paoletti conjecture for third Veronese embeddings. This result is optimal since ν3 (Pn ) does not satisfy the property N7 for
n ≥ 2 as shown by Ottaviani-Paoletti in [OP].
1. Introduction
Let k be a field of characteristic 0. Let V be a finite dimensional vector space
over k of dimension n + 1. The projective space P(V ) has coordinate ring naturally
isomorphic to Sym V . For each natural number d, the d-th Veronese embedding
of P(V), which is naturally embedded into the projective space P(Sym^d V), has
coordinate ring Ver(V, d) = ⊕_{k=0}^∞ Sym^{kd} V. For each set of integers p, q, b, let
K^d_{p,q}(V, b) be the associated Koszul cohomology group defined as the homology of
the 3 term complex
⋀^{p+1} Sym^d V ⊗ Sym^{(q−1)d+b} V → ⋀^p Sym^d V ⊗ Sym^{qd+b} V → ⋀^{p−1} Sym^d V ⊗ Sym^{(q+1)d+b} V.
Then K^d_{p,q}(V, b) is the space of minimal p-th syzygies of degree p + q of the GL(V)
module ⊕_{k=0}^∞ Sym^{kd+b} V. We write K^d_{p,q}(b) : Vect → Vect for the functor on finite
dimensional k-vector spaces that assigns to a vector space V the corresponding
syzygy module K^d_{p,q}(V, b).
Conjecture 1 (Ottaviani-Paoletti).
K^d_{p,q}(V, 0) = 0 for q ≥ 2 and p ≤ 3d − 3.
The conjecture is known for d = 2 by the work of Josefiak-Pragacz-Weyman
[JPW] and also known for dim V = 2 and dim V = 3 by the work of Green [G]
and Birkenhake [B]. All other cases are open. Also, Ottaviani and Paoletti in [OP]
showed that K^d_{p,2}(V, 0) ≠ 0 for p = 3d − 2, when dim V ≥ 3, d ≥ 3. In other words,
the conjecture is sharp. Recently, Bruns, Conca, and Römer in [BCR] showed that
K^d_{d+1,q}(V, 0) = 0 for q ≥ 2. In this paper we will prove the conjecture in the case
d = 3. Thus the main theorem of the paper is:
Date: February 28, 2013.
2010 Mathematics Subject Classification. Primary 13D02, 14M12, 05E10.
Key words and phrases. Syzygies, Veronese varieties, matching complexes.
Theorem 3.6. The third Veronese embeddings of projective spaces satisfy property
N6 .
To prove Theorem 3.6 we prove that vanishing results hold for certain rational homology groups of matching complexes (defined below), and then use [KRW,
Theorem 5.3] to translate between the syzygy modules K^d_{p,q}(V, b) and the homology
groups of matching complexes.
Definition 2 (Matching Complexes). Let d > 1 be a positive integer and A a
finite set. The matching complex C^d_A is the simplicial complex whose vertices are
all the d-element subsets of A and whose faces are {A1 , ..., Ar } so that A1 , ..., Ar
are mutually disjoint.
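For concreteness, the following Python sketch (illustrative, not the paper's Macaulay2 package MatchingComplex.m2) builds the faces of C^d_A for a small example and computes the ranks of its reduced rational homology groups from boundary matrices; it uses sympy for exact ranks.

```python
from itertools import combinations
from sympy import Matrix

def matching_complex_faces(n, d):
    """Faces of C^d_A for A = {0, ..., n-1}: collections of pairwise disjoint
    d-element subsets, grouped by dimension (a face with r blocks has dim r-1)."""
    vertices = [frozenset(c) for c in combinations(range(n), d)]
    faces = {0: [frozenset([v]) for v in vertices]}
    r = 1
    while faces.get(r - 1):
        nxt = set()
        for face in faces[r - 1]:
            used = set().union(*face)
            for v in vertices:
                if used.isdisjoint(v):
                    nxt.add(face | {v})
        if nxt:
            faces[r] = list(nxt)
        r += 1
    return faces

def reduced_betti_numbers(faces):
    """Ranks of reduced homology over Q via ranks of simplicial boundary maps."""
    dims = sorted(faces)
    index = {dim: {f: i for i, f in enumerate(faces[dim])} for dim in dims}
    ranks = {}
    for dim in dims[1:]:
        M = [[0] * len(faces[dim]) for _ in range(len(faces[dim - 1]))]
        for j, face in enumerate(faces[dim]):
            for s, v in enumerate(sorted(face, key=sorted)):
                M[index[dim - 1][face - {v}]][j] = (-1) ** s
        ranks[dim] = Matrix(M).rank()
    betti = {}
    for dim in dims:
        cycles = len(faces[dim]) - ranks.get(dim, 0)
        betti[dim] = cycles - ranks.get(dim + 1, 0) - (1 if dim == 0 else 0)
    return betti

print(reduced_betti_numbers(matching_complex_faces(4, 3)))
# {0: 3} for C^3_4, matching dim V^(3,1) = 3 (cf. Proposition 2.1)
```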
The symmetric group S_A acts on C^d_A by permuting the elements of A, making
the homology groups of C^d_A representations of S_A. For each partition λ, we denote
by V λ the irreducible representation of S|λ| corresponding to the partition λ, and
Sλ the Schur functor corresponding to the partition λ. For each vector space V ,
Sλ (V ) is an irreducible representation of GL(V ). The relation between the syzygies
of the Veronese embeddings and the homologies of matching complexes is given by
the following theorem of Karaguezian, Reiner and Wachs.
Theorem 1.1. [KRW, Theorem 5.3] Let p, q be non-negative integers, let d be
a positive integer and let b be a non-negative integer. Write N = (p + q)d + b.
Consider a partition λ of N. Then the multiplicity of S_λ in K^d_{p,q}(b) coincides with
the multiplicity of the irreducible S_N representation V^λ in H̃_{p−1}(C^d_N).
This correspondence makes Conjecture 1 equivalent to the following conjecture.
Conjecture 3. The only non-zero homology group of C^d_{nd} for n = 1, ..., 3d − 1 is
H̃_{n−2}.
We will prove this conjecture for d = 3 by computing the homology groups of
Cn3 by induction. To compute the homology groups of the matching complexes
inductively, the following equivariant long-exact sequence introduced by Raicu in
[R] is useful. Let A be a finite set with |A| ≥ 2d. Let a ∈ A be an element of A.
Let α be a d-element subset of A such that a ∈ α. Let β = α \ a, and let C = A \ α,
B = A \ {a}. Then we have the following long-exact sequence of representations of
SB .
(1.1) · · · → Ind^{S_B}_{S_C × S_β}(H̃_r(C^d_C) ⊗ 1) → H̃_r(C^d_B) → Res^{S_A}_{S_B}(H̃_r(C^d_A)) → Ind^{S_B}_{S_C × S_β}(H̃_{r−1}(C^d_C) ⊗ 1) → · · ·
Moreover, for each b, 0 ≤ b ≤ d − 1, the GL(V) module ⊕_{k=0}^∞ Sym^{kd+b} V is a
Cohen-Macaulay module in the coordinate ring of the projective space P(Sym^d V).
The dual of the resolution of each of these modules is the resolution of another
such module, giving us the duality among the Koszul homology groups K^d_{p,q}(V, b).
(V, b).
For simplicity, since we deal with third Veronese embeddings, we will from now on
assume that d = 3. To compute the homology groups of the matching complexes
Cn3 for n ≤ 10 we use the duality in the case dim V = 2, and to compute the
homology groups of the matching complexes Cn3 for n ≥ 14, we use the duality in
the case dim V = 3, so we will make them explicit here. In the case dim V = 2, the
N6 PROPERTY FOR THIRD VERONESE EMBEDDINGS
3
canonical module of Ver(V, 3) as representation of GL(V ) is Sλ (V ) with λ = (5, 5).
From [W, Chapter 2],
∼ S(5−b,5−a)
Hom(S(a,b) , S(5,5) ) =
and the correspondence in Theorem 1.1, we have
Proposition 1.2. The multiplicity of V λ with λ = (λ1 , λ2 ) in the homology group
3
H̃p−1 (CN
) coincides with the multiplicity of V µ with µ = (5 − λ2 , 5 − λ1 ) in the
3
homology group H̃1−p (C10−N
).
Similarly, in the case dim V = 3, the canonical module of Ver(V, 3) as representation of GL(V ) is Sλ (V ) with λ = (9, 9, 9). From [W, Chapter 2],
Hom(S(a,b,c) , S(9,9,9) ) ∼
= S(9−c,9−b,9−a)
and the correspondence in Theorem 1.1, we have
Proposition 1.3. The multiplicity of V λ with λ = (λ1 , λ2 , λ3 ) in the homology
3
group H̃p−1 (CN
) coincides with the multiplicity of V µ with µ = (9−λ3 , 9−λ2 , 9−λ1 )
3
in the homology group H̃6−p (C27−N
).
3
Finally, to determine the homology groups of the matching complex CN
, we
apply the equivariant long exact sequence (1.1) to derive equalities and inequalities
for the multiplicities of the possible irreducible representations in the unknown
3
homology groups of CN
. This is carried out with the help of our Macaulay2 package
MatchingComplex.m2 that we will explain in the appendix. We then use Maple to
solve this system of equalities and inequalities.
The paper is organized as follows. In the second section, using the equivariant
long exact sequence (1.1) and the duality in Proposition 1.2 we compute homology
groups of matching complexes Cn3 for n ≤ 13. In the third section, using the results
in the second section and the duality in Proposition 1.3 we derive all irreducible
representations whose corresponding partitions have at most 3 rows in the homology
groups of Cn3 for n ≥ 14. The systems of equalities and inequalities from exact
sequences obtained by applying the equivariant long exact sequence (1.1) with |A| =
3
3
) uniquely. Thus we need
) and H̃5 (C23
20 and |A| = 23 will not determine H̃4 (C20
to compute the dimensions of Kp,1 (V, 2) with p = 5, 6 and dim V = 4. This is
done by using Macaulay2 to compute the dimensions of the spaces of minimal p-th
3k+2
syzygies of degree p + 1 for p = 6, 7 of the module ⊕∞
V with dim V = 4.
k=0 Sym
3
We then state the results for the homology groups of Cn with 14 ≤ n ≤ 24 and
finish the proof of our main theorem. The proof of Proposition 3.5 illustrates the
computation of the homology groups of the matching complexes Cn3 dealing with
3
the most complicated matching complex C23
in the series. In the appendix we
explain the ideas behind our package leading to the computation of the homology
groups of the matching complexes.
2. Homology of matching complexes
In this section, using the equivariant long exact sequence (1.1) we compute the
homology groups of the matching complexes Cn3 for n ≤ 13. In the following, we
denote the partition λ with row lengths λ1 ≥ λ2 ≥ ... ≥ λk ≥ 0 by the sequence
(λ1 , λ2 , ..., λk ) and we use the same notation for the representation V λ . To simplify
notation, we omit the subscript and superscript when we use the operators Ind and
4
THANH VU
Res. It is clear from the context and the equivariant long exact sequence what the
induction and restriction are. From the definition of the matching complexes, it is
not hard to see the following.
Proposition 2.1. The only non-vanishing homology groups of Cn3 with n = 4, 5, 6
are respectively
H̃0 C43 = (3, 1), H̃0 C53 = (4, 1) ⊕ (3, 2), H̃0 C63 = (4, 2).
Together with Proposition 1.2 we have
Proposition 2.2. The only irreducible representations whose corresponding partitions have at most 2 rows in the homology groups of the matching complexes Cn3 ,
7 ≤ n ≤ 10 are
(H̃_1 C^3_8)_2 = (5, 3), (H̃_1 C^3_9)_2 = (5, 4), (H̃_1 C^3_{10})_2 = (5, 5)
where (H̃i Cn3 )2 denote the subrepresentation of H̃i Cn3 consisting of all irreducible
representations of H̃i Cn3 whose corresponding partitions have at most 2 rows.
Proposition 2.3. The only non-vanishing homology group of C73 is
H̃1 C73 = (5, 1, 1) ⊕ (3, 3, 1).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 7 we have
exact sequences
0 → Res H̃i C73 → 0
for i 6= 0, 1, and an exact sequence
0 → Res H̃1 C73 → Ind H̃0 C43 → H̃0 C63 → Res H̃0 C73 → 0.
Therefore, H̃i C73 = 0 for i 6= 0, 1. To show that H̃0 C73 is zero, note that H̃0 C63 maps
surjectively onto Res H̃0 C73 . Since H̃0 C63 = (4, 2) and by Proposition 2.2, H̃0 C73 does
not contain any irreducible representations whose corresponding partitions have at
most 2 rows, it must be zero. Therefore, we know that Res H̃1 C73 as representation
of S6 is equal to
Ind H̃0 C43 − H̃0 C63 = (5, 1) ⊕ (4, 1, 1) ⊕ (3, 3) ⊕ (3, 2, 1).
Moreover, by Proposition 2.2, H̃1 C73 does not contain any irreducible representations whose corresponding partitions have at most 2 rows, thus H̃1 C73 must consist
(5, 1, 1) and (3, 3, 1) as its restriction contains (5, 1) and (3, 3). But the restriction
of (5, 1, 1) ⊕ (3, 3, 1) is equal to Res H̃1 C73 so we have the desired conclusion.
Proposition 2.4. The only non-vanishing homology group of C83 is
H̃1 C83 = (6, 1, 1) ⊕ (5, 2, 1) ⊕ (5, 3) ⊕ (4, 3, 1) ⊕ (3, 3, 2).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 8 we have
exact sequences
0 → Res H̃i C83 → 0
for i 6= 1, and an exact sequence
0 → H̃1 C73 → Res H̃1 C83 → Ind H̃0 C53 → 0.
Therefore, H̃i C83 = 0 for i 6= 1. To compute H̃1 C83 , note that from the exact
sequence
Res H̃1 C83 = H̃1 C73 + Ind H̃0 C53
which consists of irreducible representations whose corresponding partitions have
at most 3 rows, therefore H̃1 C83 contains only irreducible representations whose
corresponding partitions have at most 3 rows. Moreover, by Proposition 2.2, the
only irreducible representation whose corresponding partition has at most 2 rows
in H̃1 C83 is (5, 3). Therefore, the restrictions of irreducible representations whose
corresponding partitions have 3 rows in H̃1 C83 is equal to
(3, 2, 2) ⊕ 2 · (3, 3, 1) ⊕ 2 · (4, 2, 1) ⊕ (4, 3) ⊕ 2 · (5, 1, 1) ⊕ (5, 2) ⊕ (6, 1).
It is easy to see that the set of irreducible representations whose corresponding
partitions have 3 rows in H̃1 C83 has to be equal to (6, 1, 1) ⊕ (5, 2, 1) ⊕ (4, 3, 1) ⊕
(3, 3, 2).
Proposition 2.5. The only non-vanishing homology group of C93 is
H̃1 C93 = (6, 2, 1) ⊕ (5, 4) ⊕ (5, 3, 1) ⊕ (4, 3, 2).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 9 we have
exact sequences
0 → Res H̃i C93 → 0
for i 6= 1, and an exact sequence
0 → H̃1 C83 → Res H̃1 C93 → Ind H̃0 C63 → 0.
Therefore, H̃i C93 = 0 for i 6= 1, and
Res H̃1 C93 = H̃1 C83 + Ind H̃0 C63 .
Moreover, by Proposition 2.2, the only irreducible representation whose corresponding partition has at most 2 rows in H̃1 C93 is (5, 4). Therefore, the restrictions of
irreducible representations whose corresponding partitions have 3 rows in H̃1 C93 is
equal to
(3, 3, 2) ⊕ (4, 2, 2) ⊕ 2 · (4, 3, 1) ⊕ 2 · (5, 2, 1) ⊕ (5, 3) ⊕ (6, 1, 1) ⊕ (6, 2).
It is easy to see that the set of irreducible representations whose corresponding
partitions have 3 rows in H̃1 C93 has to be equal to (6, 2, 1) ⊕ (5, 3, 1) ⊕ (4, 3, 2).
Proposition 2.6. The only non-vanishing homology groups of C^3_{10} are
H̃_2 C^3_{10} = (7, 1, 1, 1) ⊕ (5, 3, 1, 1) ⊕ (3, 3, 3, 1), H̃_1 C^3_{10} = (5, 5).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 10 we have
exact sequences
0 → Res H̃_i C^3_{10} → 0
for i ≠ 1, 2, and an exact sequence
0 → Res H̃_2 C^3_{10} → Ind H̃_1 C^3_7 → H̃_1 C^3_9 → Res H̃_1 C^3_{10} → 0.
Therefore, H̃_i C^3_{10} = 0 for i ≠ 1, 2. Since H̃_1 C^3_9 maps surjectively onto Res H̃_1 C^3_{10},
and by Proposition 2.2, H̃_1 C^3_{10} contains the irreducible representation (5, 5), (6, 2, 1) ⊕
(5, 3, 1) ⊕ (4, 3, 2) maps surjectively onto the restrictions of the other irreducible representations of H̃_1 C^3_{10}. Moreover, there is no irreducible representation of S_{10} whose
restriction is a subrepresentation of (6, 2, 1) ⊕ (5, 3, 1) ⊕ (4, 3, 2), thus H̃_1 C^3_{10} = (5, 5).
Therefore, Res H̃_2 C^3_{10} is equal to
Ind H̃_1 C^3_7 + Res H̃_1 C^3_{10} − H̃_1 C^3_9 = (3, 3, 2, 1) ⊕ (3, 3, 3) ⊕ (4, 3, 1, 1)
⊕ (5, 2, 1, 1) ⊕ (5, 3, 1) ⊕ (6, 1, 1, 1) ⊕ (7, 1, 1).
Moreover, since H̃_1 C^3_{10} = (5, 5), by Theorem 1.1, K_{2,1}(V, 1) = S_λ(V) with λ =
(5, 5). Since ⊕_{k=0}^∞ Sym^{3k+1}(V) with dim V = 3 is a Cohen-Macaulay module of
codimension 7 with h-vector (3, 6),
dim K_{3,0}(V, 1) = dim K_{2,1}(V, 1) + 3·\binom{7}{3} − 6·\binom{7}{2} = 0.
By Theorem 1.1, H̃_2 C^3_{10} does not contain any irreducible representations whose
corresponding partitions have at most 3 rows, therefore it must contain irreducible
representations (3, 3, 3, 1), (5, 3, 1, 1), (7, 1, 1, 1) each with multiplicity 1. But the
sum of the restrictions of these irreducible representations is equal to the restriction
of H̃_2 C^3_{10}, thus we have the desired conclusion.
3
Proposition 2.7. The only non-vanishing homology group of C11
is
3
H̃2 C11
= (7, 3, 1) ⊕ (6, 4, 1) ⊕ (6, 3, 2) ⊕ (5, 4, 2) ⊕ (5, 3, 3)
⊕ (6, 3, 1, 1) ⊕ (8, 1, 1, 1) ⊕ (7, 2, 1, 1) ⊕ (5, 4, 1, 1)
⊕ (5, 3, 2, 1) ⊕ (4, 3, 3, 1) ⊕ (3, 3, 3, 2).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 11 we have
exact sequences
3
0 → Res H̃i C11
→0
for i 6= 1, 2 and an exact sequence
0 → H̃_2 C^3_{10} → Res H̃_2 C^3_{11} → Ind H̃_1 C^3_8 → H̃_1 C^3_{10} → Res H̃_1 C^3_{11} → 0.
Therefore, H̃_i C^3_{11} = 0 for i ≠ 1, 2. To show that H̃_1 C^3_{11} is zero, note that H̃_1 C^3_{10},
which consists of the partition (5, 5) only, maps surjectively onto Res H̃_1 C^3_{11}, so H̃_1 C^3_{11} = 0.
Thus
Res H̃_2 C^3_{11} = H̃_2 C^3_{10} + Ind H̃_1 C^3_8 − H̃_1 C^3_{10}.
Moreover, H̃_2 C^3_{11} does not contain any irreducible representations whose corresponding partitions have at most 2 rows, but Res H̃_2 C^3_{11} contains (6, 4) and (7, 3),
thus H̃_2 C^3_{11} must contain (6, 4, 1) and (7, 3, 1) each with multiplicity 1. The sum of the restrictions of the remaining irreducible representations is equal to
A = H̃_2 C^3_{10} + Ind H̃_1 C^3_8 − H̃_1 C^3_{10} − Res(6, 4, 1) − Res(7, 3, 1).
To find a set of irreducible representations whose sum of the restrictions is A, we
first induce A. We then eliminate all irreducible representations whose restrictions
are not contained in A. After that, we have a set of partitions B. Then we write
down the set of equations that make the restriction of B equal to A. Then we solve
for the non-negative integer solutions of that system. For this problem, it is easy
to see that we have a unique solution as in the statement of the proposition.
3
Proposition 2.8. The only non-vanishing homology group of C12
is
3
H̃2 C12
= (7, 4, 1) ⊕ (7, 3, 2) ⊕ (6, 5, 1) ⊕ (6, 4, 2) ⊕ (6, 3, 3) ⊕ (5, 5, 2)
⊕ (5, 4, 3) ⊕ (8, 2, 1, 1) ⊕ (7, 3, 1, 1) ⊕ (6, 4, 1, 1) ⊕ (6, 3, 2, 1)
⊕ (5, 4, 2, 1) ⊕ (5, 3, 3, 1) ⊕ (4, 3, 3, 2).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 12 we have
exact sequences
0 → Res H̃_i C^3_{12} → 0
for i ≠ 2 and an exact sequence
0 → H̃_2 C^3_{11} → Res H̃_2 C^3_{12} → Ind H̃_1 C^3_9 → 0.
Therefore, H̃_i C^3_{12} = 0 for i ≠ 2 and
Res H̃_2 C^3_{12} = H̃_2 C^3_{11} + Ind H̃_1 C^3_9.
Moreover, H̃_2 C^3_{12} does not contain any irreducible representations whose corresponding partitions have at most 2 rows, but Res H̃_2 C^3_{12} contains (6, 5) and (7, 4),
thus H̃_2 C^3_{12} must contain (6, 5, 1) and (7, 4, 1) each with multiplicity 1. Therefore,
the sum of the restrictions of other irreducible representations in H̃_2 C^3_{12} is equal to
A = H̃_2 C^3_{11} + Ind H̃_1 C^3_9 − Res(6, 5, 1) − Res(7, 4, 1).
Let B be the set containing all irreducible representations that appears in the
induction of A whose restrictions are contained in A. Write down the equations
that make the restriction of B equal to A. Then we solve for the non-negative
integer solutions of that system. For this problem, it is easy to see that we have a
unique solution as in the statement of the proposition.
3
are
Proposition 2.9. The only non-vanishing homology groups of C13
3
H̃3 C13
= (9, 1, 1, 1, 1) ⊕ (7, 3, 1, 1, 1) ⊕ (5, 5, 1, 1, 1) ⊕ (5, 3, 3, 1, 1)
⊕ (3, 3, 3, 3, 1)
3
H̃2 C13
= (7, 5, 1) ⊕ (7, 3, 3) ⊕ (6, 5, 2) ⊕ (5, 5, 3).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 13 we have
exact sequences
3
0 → Res H̃i C13
→0
for i 6= 2, 3 and an exact sequence
3
3
3
3
3
0 → Res H̃3 C13
→ Ind H̃2 C10
→ H̃2 C12
→ Res H̃2 C13
→ Ind H̃1 C10
→ 0.
3
3
Therefore, H̃i C13
= 0 for i 6= 2, 3. From the sequence, Res H̃3 C13
is contained in
3
3
3
Ind H̃2 C10 and contains Ind H̃2 C10 − H̃2 C12 . There is a unique solution as indicated
in the proposition. Therefore,
3
3
3
3
3
Res H̃2 C13
= Res H̃3 C13
− Ind H̃2 C10
+ H̃2 C12
+ Ind H̃1 C10
.
3
Moreover, Res H̃2 C13
does not contain any irreducible representations whose corresponding partitions have at most two rows, but its restriction contains (7, 5), thus
it must contain (7, 5, 1) with multiplicity 1. Therefore, the sum of the restrictions
3
of the other irreducible representations in H̃2 C13
is equal to
(5, 4, 3) ⊕ 2 · (5, 5, 2) ⊕ (6, 3, 3) ⊕ (6, 4, 2) ⊕ (6, 5, 1) ⊕ (7, 3, 2).
It is easy to see that there is a unique solution as stated in the proposition.
3. Proof of the main theorem
In this section we determine the homology groups of the matching complexes Cn3
for 14 ≤ n ≤ 24 using the equivariant long exact sequence (1.1) and the duality
as stated in Proposition 1.3. Note that to determine the homology groups of C^3_{20}
and C^3_{23} we need to compute the dimensions of K_{p,0}(V, 2) for p = 6, 7. In the
following, for a representation W of the symmetric group S_N and a positive number
r, we denote by W_r the direct sum of all irreducible representations of W whose
corresponding partitions have at most r rows, and let W^r = W − W_r.
Proposition 3.1. The only homology groups of Cn3 for 14 ≤ n ≤ 24 containing
irreducible representations whose corresponding partitions have at most 3 rows are
H̃_3 C^3_{14}, H̃_3 C^3_{15}, H̃_3 C^3_{16}, H̃_4 C^3_{17}, H̃_4 C^3_{18}, H̃_4 C^3_{19}, H̃_4 C^3_{20}, H̃_5 C^3_{21}, H̃_5 C^3_{22} and H̃_5 C^3_{23}.
Moreover,
(H̃_4 C^3_{20})_3 = (8, 8, 4) ⊕ (8, 6, 6), and (H̃_5 C^3_{23})_3 = (9, 8, 6).
Proof. This follows from the results of the homology of matching complexes Cn3 for
n ≤ 13 in section 2 and the duality in Proposition 1.3.
Proposition 3.2. The only non-vanishing homology groups of C^3_n for 14 ≤ n ≤ 19
are H̃_3 C^3_{14}, H̃_3 C^3_{15}, H̃_3 C^3_{16}, H̃_4 C^3_{16}, H̃_4 C^3_{17}, H̃_4 C^3_{18}, H̃_4 C^3_{19}, H̃_5 C^3_{19}.
Proof. The computational proof is given in our Macaulay2 package MatchingComplex.m2.
Proposition 3.3. The only non-vanishing homology groups of C^3_{20} are H̃_5 C^3_{20} and
H̃_4 C^3_{20} = (8, 8, 4) ⊕ (8, 6, 6).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 20 we have
exact sequences
0 → Res H̃_i C^3_{20} → 0
for i ≠ 4, 5 and an exact sequence
0 → H̃_5 C^3_{19} → Res H̃_5 C^3_{20} → Ind H̃_4 C^3_{17} → H̃_4 C^3_{19} → Res H̃_4 C^3_{20} → 0.
Therefore, H̃_i C^3_{20} = 0 for i ≠ 4, 5. By Theorem 1.1, K_{4,2}(2) = 0. To determine
H̃_4 C^3_{20}, note that by Proposition 3.1 and Theorem 1.1, K_{5,1}(V, 2) contains M =
S_λ(V) ⊕ S_µ(V) where λ = (8, 8, 4) and µ = (8, 6, 6). Moreover, using Macaulay2 to
compute the dimensions of minimal linear syzygies of the module ⊕_{k=0}^∞ Sym^{3k+2}(V)
with dim V = 4, we get dim K_{6,0}(V, 2) = 14003. Since ⊕_{k=0}^∞ Sym^{3k+2}(V) is a
Cohen-Macaulay module of codimension 16 with h-vector (10, 16, 1),
dim K_{5,1}(V, 2) = 14003 − 10·\binom{16}{6} + 16·\binom{16}{5} − \binom{16}{4} = 1991.
Since dim M = 1991, K_{5,1}(V, 2) ≅ M. By Theorem 1.1,
H̃_4 C^3_{20} = (8, 8, 4) ⊕ (8, 6, 6).
Finally,
Res H̃_5 C^3_{20} = H̃_5 C^3_{19} + Ind H̃_4 C^3_{17} − H̃_4 C^3_{19} + Res H̃_4 C^3_{20}.
This determines H̃_5 C^3_{20} as given in our package MatchingComplex.m2.
Proposition 3.4. The only non-vanishing homology groups of C^3_{21} and C^3_{22} are
H̃_5 C^3_{21}, H̃_5 C^3_{22}, H̃_6 C^3_{22}.
Proof. The computational proof is given in our Macaulay2 package MatchingComplex.m2.
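The numerical bookkeeping used in Proposition 3.3 above and Proposition 3.5 below can be spot-checked independently. The following Python sketch (illustrative only, not the paper's Macaulay2 code) recomputes the Schur-functor dimensions for dim V = 4 via the Weyl dimension formula and the h-vector identities quoted in the proofs, reproducing dim M = 1991 and dim K_{6,1}(V, 2) = 14760.

```python
from math import comb
from itertools import combinations

def schur_dim(lam, n=4):
    """Weyl dimension formula for S_lambda(C^n): product over i < j of
    (lam_i - lam_j + j - i)/(j - i), padding lam with zeros to length n."""
    lam = list(lam) + [0] * (n - len(lam))
    num = den = 1
    for i, j in combinations(range(n), 2):
        num *= lam[i] - lam[j] + j - i
        den *= j - i
    return num // den

# Proposition 3.3: dim M = dim S_(8,8,4) + dim S_(8,6,6) for dim V = 4
print(schur_dim((8, 8, 4)) + schur_dim((8, 6, 6)))                  # 1991
# h-vector identity with (h0, h1, h2) = (10, 16, 1) and codimension 16:
print(14003 - 10 * comb(16, 6) + 16 * comb(16, 5) - comb(16, 4))    # 1991
# Proposition 3.5: dim K_{6,1}(V, 2) from dim K_{7,0}(V, 2) = 5400
print(5400 - 10 * comb(16, 7) + 16 * comb(16, 6) - comb(16, 5))     # 14760
```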
3
3
Proposition 3.5. The only non-vanishing homology groups of C23
are H̃6 C23
and
3
H̃5 C23
= (9, 8, 6) ⊕ (8, 6, 6, 3) ⊕ (8, 7, 6, 2) ⊕ (8, 8, 4, 3) ⊕ (8, 8, 5, 2)
⊕ (8, 8, 6, 1) ⊕ (9, 6, 6, 2) ⊕ (9, 7, 6, 1) ⊕ (9, 8, 4, 2) ⊕ (9, 8, 5, 1)
⊕ (10, 6, 6, 1) ⊕ (10, 8, 4, 1).
Proof. Applying the equivariant long exact sequence (1.1) with |A| = 23 we have
exact sequences
3
0 → Res H̃i C23
→0
for i 6= 5, 6, and an exact sequence
3
3
3
3
0 → H̃6 C22
→ Res H̃6 C23
→ Ind H̃5 C20
→ H̃5 C22
→
(3.1)
3
3
→ 0.
→ Res H̃5 C23
→ Ind H̃4 C20
3
3
Therefore, H̃i C23
= 0 for i 6= 5, 6. Moreover, by Proposition 3.3, (Ind H̃5 C20
)3 =
0. Therefore, we have an exact sequence
3
3
3
)3 → 0.
0 → (H̃5 C22
)3 → (Res H̃5 C23
)3 → (Ind H̃4 C20
(3.2)
By Proposition 3.1, we know that
3
(H̃5 C23
)3 = (9, 8, 6).
Let Y be the direct sum of irreducible representations whose corresponding parti3
tions have 4 rows in H̃5 C23
. Then from exact sequence (3.2), we have
3
3
(Res Y )3 = (H̃5 C22
)3 + (Ind H̃5 C20
)3 − Res(9, 8, 6)
= (8, 8, 6) ⊕ (9, 7, 6) ⊕ (9, 8, 5) ⊕ (10, 6, 6) ⊕ (10, 8, 4).
Therefore,
Y1 = (8, 8, 6, 1) ⊕ (9, 7, 6, 1) ⊕ (9, 8, 5, 1) ⊕ (10, 6, 6, 1) ⊕ (10, 8, 4, 1)
3
= (8, 8, 4) ⊕ (8, 6, 6), thus
is a subrepresentation of Y . By Proposition 3.3, H̃4 C20
3
Res Y1 ⊕ Res(9, 8, 6) maps surjectively onto Ind H̃4 C20 . Let
3
3
D = H̃5 C22
+ Ind H̃4 C20
− Res Y1 − Res(9, 8, 6).
Then the exact sequence (3.1) becomes
(3.3)
3
3
3
0 → H̃6 C22
→ Res H̃6 C23
→ Ind H̃5 C20
→ D → Res Z → 0
3
where Z = H̃5 C23
− Y1 − (9, 8, 6).
3
contains only irreducible representations whose corBy Proposition 3.4, H̃6 C22
3
responding partitions have 8 rows while H̃5 C22
contains only irreducible representations whose corresponding partitions have at most 6 rows. Thus we have
3
3 6
3 6
0 → H̃6 C22
→ (Res H̃6 C23
) → (Ind H̃5 C20
) →0
3 6
3
3
3 6
3
where (Res H̃6 C23
) = Res H̃6 C23
− (Res H̃6 C23
)6 and (Ind H̃5 C20
) = Ind H̃5 C20
−
3
(Ind H̃5 C20 )6 . This exact sequence determines the irreducible representations of
3
H̃6 C23
whose corresponding partitions have 7 or 8 rows as given in our package
MatchingComplex.m2. Let
3
3
3 6
C = Ind H̃5 C20
+ H̃6 C22
− Res(H̃6 C23
) ,
3 6
3
where (H̃6 C23
) is the direct sum of irreducible representations in H̃6 C23
whose corresponding partitions have 7 and 8 rows described above. Then the exact sequence
(3.3) becomes
(3.4)
3
)6 → C → D → Res Z → 0.
0 → Res(H̃6 C23
Finally, using Macaulay2 to compute the dimensions of minimal linear syzygies of
3k+2
the module ⊕∞
(V ) with dim V = 4, we get dim K7,0 (V, 2) = 5400. Morek=0 Sym
3k+2
3
(V ) is
over, H̃4 C23 = 0, thus by Theorem 1.1, K5,2 (V, 2) = 0. Since ⊕∞
k=0 Sym
a Cohen-Macaulay module of codimension 16 with h-vector (10, 16, 1),
16
16
16
dim K6,1 (V, 2) = 10 ·
− 16 ·
+
− 5400 = 14760.
7
6
5
Since the sum of dimensions of irreducible representations Sλ (V ) corresponding to
partitions λ in Y1 ⊕ (9, 8, 6) is equal to 11520, the sum of dimensions of irreducible
representations Sλ (V ) corresponding to partitions λ in Z is 14760 − 11520 = 3240.
3
This fact and exact sequence (3.4) determine H̃5 C23
as stated in the proposition
3
and H̃6 C23 as given in our package MatchingComplex.m2.
Theorem 3.6. The third Veronese embeddings of projective spaces satisfy property
N6 .
Proof. It remains to prove that the only non-zero homology groups of the matching
3
3
complex C24
is H̃6 C24
. Applying the equivariant long exact sequence (1.1) with
|A| = 24 we have exact sequences
3
0 → Res H̃i C24
→0
for i 6= 5, 6, and an exact sequence
3
3
3
3
3
0 → H̃6 C23
→ Res H̃6 C24
→ Ind H̃5 C21
→ H̃5 C23
→ Res H̃5 C24
→ 0.
3
3
3
.
maps surjectively onto Res H̃5 C24
= 0 for i 6= 5, 6 and H̃5 C23
Therefore, H̃i C24
Moreover, by the result of Ottaviani-Paoletti [OP], the third Veronese embedding of
3
P3 satisfies property N6 . By Theorem 1.1, H̃5 C24
does not contain any irreducible
representations whose corresponding partitions have at most 4 rows. By Proposition
3
contains only irreducible representations whose corresponding partitions
3.5, H̃5 C23
3
is zero.
have at most 4 rows, thus H̃5 C24
Appendix
In this appendix we explain the ideas behind our Macaulay2 package MatchingComplex.m2. To determine the homology groups of C_N^3 inductively we use the equivariant long exact sequence (1.1) with |A| = N to determine two representations B and C of S_{N−1} satisfying the following property: B is a subrepresentation of Res H̃_i C_N^3 and C is a superrepresentation of Res H̃_i C_N^3. To determine H̃_i C_N^3, we need to find a representation X of S_N satisfying the property that Res X is a subrepresentation of C and a superrepresentation of B. The function findEquation in our package will first determine a list D of all possible partitions λ of N such that Res λ is a subrepresentation of C. Let X = Σ_{λ∈D} x_λ·λ, where x_λ ≥ 0 is the multiplicity of the partition λ ∈ D that we need to determine. Restricting X we have inequalities (obtained by calling findEquation B and findEquation C) expressing the fact that Res H̃_i C_N^3 is a subrepresentation of C and a superrepresentation of B. We then use Maple to solve for non-negative integer solutions of this system of inequalities.
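For illustration only, the final step can be sketched as a naive search. The following OCaml fragment is ours and is not part of the package (which delegates this step to Maple); it assumes the restriction multiplicities have been tabulated in a matrix a with one row per candidate partition in D and one column per constraint, and it enumerates the unknown multiplicities up to a given bound.

(* Brute-force search for the non-negative integer solutions of
   b.(j) <= sum_i a.(i).(j) * x.(i) <= c.(j) for every constraint j.
   Only an illustration of the shape of the problem solved with Maple;
   names and the enumeration bound are ours. *)
let solve (a : int array array) (b : int array) (c : int array) (bound : int)
    : int array list =
  let n = Array.length a and m = Array.length b in
  let satisfies x =
    let rec check j =
      j >= m
      || (let s = ref 0 in
          for i = 0 to n - 1 do s := !s + a.(i).(j) * x.(i) done;
          b.(j) <= !s && !s <= c.(j) && check (j + 1))
    in
    check 0
  in
  let solutions = ref [] in
  let rec enumerate i x =
    if i = n then (if satisfies x then solutions := Array.copy x :: !solutions)
    else
      for v = 0 to bound do
        x.(i) <- v;
        enumerate (i + 1) x
      done
  in
  enumerate 0 (Array.make n 0);
  !solutions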
Acknowledgements
I would like to thank Claudiu Raicu for introducing the problem and its relation
to the homology of matching complexes and my advisor David Eisenbud for useful
conversations and comments on earlier drafts of the paper.
References
[B] C. Birkenhake, Linear systems on projective spaces, Manuscripta Math. 88 (1995), no.2, 177
- 184.
[BCR] W. Bruns, A. Conca and T. Römer, Koszul homology and syzygies of Veronese subalgebras,
Math. Ann. 351 (2011), no.4, 761-779.
[G] M. Green, Koszul cohomology and the geometry of projective varieties I, II, J. Differ. Geometry, 20 (1984), 125-171, 279-289.
[GS] D. Grayson, M. Stillman, Macaulay2, a software system for research in algebraic geometry,
Available at http://www.math.uiuc.edu/Macaulay2/.
[JPW] T. Josefiak, P. Pragacz and J. Weyman, Resolutions of determinantal varieties and tensor
complexes associated with symmetric and antisymmetric matrices, Asterisque, 87-88 (1981),
109-189.
[KRW] D. Karaguezian, V. Reiner and M. Wachs, Matching complexes, bounded degree graph
complexes and weight spaces of GLn complexes, Journal of Algebra 239 (2001), no.1, 77-92.
[OP] G. Ottaviani, R. Paoletti, Syzygies of Veronese embeddings, Compositio. Math. 125 (2001),
no. 1, 31-37.
[R] C. Raicu, Representation stability for syzygies of line bundles on Segre-Veronese varieties,
Arxiv 1209.1183v1 (2012).
[W] J. Weyman, Cohomology of vector bundles and syzygies, Cambridge Tracts in Mathematics,
vol.149, Cambridge University Press, Cambridge, 2003.
Department of Mathematics, University of California at Berkeley, Berkeley, CA
94720
E-mail address: [email protected]
| 0 |
A Classical Realizability Model for a
Semantical Value Restriction
arXiv:1603.07484v2 [cs.LO] 7 Apr 2016
Rodolphe Lepigre
LAMA, UMR 5127 - CNRS
Université Savoie Mont Blanc, France
[email protected]
Abstract. We present a new type system with support for proofs of
programs in a call-by-value language with control operators. The proof
mechanism relies on observational equivalence of (untyped) programs. It
appears in two type constructors, which are used for specifying program
properties and for encoding dependent products. The main challenge
arises from the lack of expressiveness of dependent products due to the
value restriction. To circumvent this limitation we relax the syntactic
restriction and only require equivalence to a value. The consistency of the
system is obtained semantically by constructing a classical realizability
model in three layers (values, stacks and terms).
Introduction
In this work we consider a new type system for a call-by-value language, with
control operators, polymorphism and dependent products. It is intended to serve
as a theoretical basis for a proof assistant focusing on program proving, in a
language similar to OCaml or SML. The proof mechanism relies on dependent
products and equality types t ≡ u, where t and u are (possibly untyped) terms
of the language. Equality types are interpreted as ⊤ if the denoted equivalence
holds and as ⊥ otherwise.
In our system, proofs are written using the same language as programs. For
instance, a pattern-matching corresponds to a case analysis in a proof, and a
recursive call to the use of an induction hypothesis. A proof is first and foremost
a program, hence we may say that we follow the “program as proof” principle,
rather than the usual “proof as program” principle. In particular, proofs can be
composed as programs and with programs to form proof tactics.
Programming in our language is similar to programming in any dialect of
ML. For example, we can define the type of unary natural numbers, and the
corresponding addition function.
type nat = Z [] | S [ nat ]
let rec add n m = match n with
| Z []
→ m
| S [ nn ] → S [ add nn m ]
We can then prove properties of addition such as add Z[] n ≡ n for all n in
nat. This property can be expressed using a dependent product over nat and
an equality type.
let addZeroN n : nat : ( add Z [] n ≡ n ) = 8<
The term 8< (to be pronounced “scissors”) can be introduced whenever the
goal is derivable from the context with equational reasoning. Our first proof is
immediate since we have add Z[] n ≡ n by definition of add.
Let us now show that add n Z[] ≡ n for every n in nat. Although the
statement of this property is similar to the previous one, its proof is slightly
more complex and requires case analysis and induction.
let rec addNZero n : nat : ( add n Z [] ≡ n ) =
match n with
| Z []
→ 8<
| S [ nn ] → let r = addNZero nn in 8<
In the S[nn] case, the induction hypothesis (i.e. add nn Z[] ≡ nn) is obtained by
a recursive call. It is then used to conclude the proof using equational reasoning.
Note that in our system, programs that are considered as proofs need to go
through a termination checker. Indeed, a looping program could be used to
prove anything otherwise. The proofs addZeroN and addNZero are obviously
terminating, and hence valid.
Several difficulties arise when combining call-by-value evaluation, side-effects,
dependent products and equality over programs. Most notably, the expressiveness of dependent products is weakened by the value restriction: elimination of
dependent product can only happen on arguments that are syntactic values. In
other words, the typing rule
Γ ⊢ t : Πa:A B        Γ ⊢ u : A
─────────────────────────────────
Γ ⊢ t u : B[a := u]
cannot be proved safe if u is not a value. This means, for example, that we
cannot derive a proof of add (add Z[] Z[]) Z[] ≡ add Z[] Z[] by applying
addNZero (which has type Πn:nat (add n Z[] ≡ n)) to add Z[] Z[] since it
is not a value. The restriction affects regular programs in a similar way. For
instance, it is possible to define a list concatenation function append with the
following type.
Πn:nat Πm:nat List(n) ⇒ List(m) ⇒ List(add n m)
However, the append function cannot be used to implement a function concatenating three lists. Indeed, this would require being able to provide append with
a non-value natural number argument of the form add n m.
Surprisingly, the equality types and the underlying observational equivalence
relation provide a solution to the lack of expressiveness of dependent products.
The value restriction can be relaxed to obtain the rule
Γ, u ≡ v ⊢ t : Πa:A B        Γ, u ≡ v ⊢ u : A
───────────────────────────────────────────────
Γ, u ≡ v ⊢ t u : B[a := u]
which only requires u to be equivalent to some value v. The same idea can be
applied to every rule requiring value restriction. The obtained system is conservative over the one with the syntactic restriction. Indeed, finding a value equivalent
to a term that is already a value can always be done using the reflexivity of the
equivalence relation.
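As a worked instance (our rendering, not an excerpt), consider the application that was rejected above: take t = addNZero, u = add Z[] Z[] and v = Z[]. The hypothesis u ≡ v holds by definition of add, so the relaxed rule gives

Γ, u ≡ v ⊢ addNZero : Πn:nat (add n Z[] ≡ n)        Γ, u ≡ v ⊢ u : nat
────────────────────────────────────────────────────────────────────────
Γ, u ≡ v ⊢ addNZero u : add u Z[] ≡ u

that is, a proof of add (add Z[] Z[]) Z[] ≡ add Z[] Z[]. The remaining hypothesis u ≡ v can later be discharged using the equational machinery of Section 3, since add Z[] Z[] ≡ Z[] holds by computation.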
Although the idea seems simple, proving the soundness of the new typing
rules semantically is surprisingly subtle. A model is built using classical realizability techniques in which the interpretation of a type A is spread among two
sets: a set of values JAK and a set of terms JAK⊥⊥ . The former contains all values
that should have type A. For example, JnatK should contain the values of the
form S[S[...Z[]...]]. The set JAK⊥⊥ is the completion of JAK with all the
terms behaving like values of JAK (in the observational sense). To show that the
relaxation of the value restriction is sound, we need the values of JAK⊥⊥ to also
be in JAK. In other words, the completion operation should not introduce new
values. To obtain this property, we need to extend the language with a new,
non-computable instruction internalizing equivalence. This new instruction is
only used to build the model, and will not be available to the user (nor will it
appear in an implementation).
About effects and value restriction
A soundness issue related to side-effects and call-by-value evaluation arose in
the seventies with the advent of ML. The problem stems from a bad interaction
between side-effects and Hindley-Milner polymorphism. It was first formulated
in terms of references [30, section 2], and many alternative type systems were
designed (e.g. [4, 14, 15, 29]). However, they all introduced a complexity that
contrasted with the elegance and simplicity of ML’s type system (for a detailed
account, see [31, section 2] and [5, section 2]).
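To make the problem concrete, here is the textbook OCaml illustration (ours, not an example taken from the cited papers): the right-hand side ref [] is an application rather than a syntactic value, so its type is not generalized.

(* The classic interaction of references and polymorphism that motivates
   the value restriction (standard ML/OCaml folklore, not the paper's
   example).  Because [ref []] is not a syntactic value, OCaml gives [r]
   a weak, non-generalized type (printed '_weak1 list ref by recent
   versions) instead of the unsound polymorphic type 'a. 'a list ref. *)
let r = ref []

(* Legal: this fixes the weak variable to [int] once and for all. *)
let () = r := [1]

(* Had [r] been generalized, the line below would also type-check and
   would read the stored integer as a string at run time:
     let _ = String.length (List.hd !r)
   With the value restriction it is rejected, since [!r : int list]. *)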
A simple and elegant solution was finally found by Andrew Wright in the
nineties. He suggested restricting generalization in let-bindings1 to cases where
the bound term is a syntactic value [30, 31]. In slightly more expressive type
systems, this restriction appears in the typing rule for the introduction of the
universal quantifier. The usual rule
Γ ⊢ t : A        X 6∈ F V (Γ)
──────────────────────────────
Γ ⊢ t : ∀X A
cannot be proved safe (in a call-by-value system with side-effects) if t is not a
syntactic value. Similarly, the elimination rule for dependent product (shown
previously) requires value restriction. It is possible to exhibit a counter-example
breaking the type safety of our system if it is omitted [13].
1
In ML the polymorphism mechanism is strongly linked with let-bindings. In OCaml
syntax, they are expressions of the form let x = u in t.
In this paper, we consider control structures, which have been shown to give
a computational interpretation to classical logic by Timothy Griffin [6]. In 1991,
Robert Harper and Mark Lillibridge found a complex program breaking the type
safety of ML extended with Lisp’s call/cc [7]. As with references, value restriction
solves the inconsistency and yields a sound type system. Instead of using control
operators like call/cc, we adopt the syntax of Michel Parigot’s λµ-calculus [24].
Our language hence contains a new binder µα t capturing the continuation in the
µ-variable α. The continuation can then be restored in t using the syntax u ∗ α2 .
In the context of the λµ-calculus, the soundness issue arises when evaluating
t (µα u) when µα u has a polymorphic type. Such a situation cannot happen
with value restriction since µα u is not a value.
Main results
The main contribution of this paper is a new approach to value restriction. The
syntactic restriction on terms is replaced by a semantical restriction expressed
in terms of an observational equivalence relation denoted (≡). Although this approach seems simple, building a model to prove soundness semantically (theorem
6) is surprisingly subtle. Subject reduction is not required here, as our model
construction implies type safety (theorem 7). Furthermore our type system is
consistent as a logic (theorem 8).
In this paper, we restrict ourselves to a second order type system but it can
easily be extended to higher-order. Types are built from two basic sorts of objects: propositions (the types themselves) and individuals (untyped terms of the
language). Terms appear in a restriction operator A ↾ t ≡ u and a membership predicate t ∈ A. The former is used to define the equality types (by taking
A = ⊤) and the latter is used to encode dependent product.
Πa:A B
:=
∀a(a ∈ A ⇒ B)
Overall, the higher-order version of our system is similar to a Curry-style HOL
with ML programs as individuals. It does not allow the definition of a type whose structure depends on a term (e.g. functions with a variable number of
arguments). Our system can thus be placed between HOL (a.k.a. Fω ) and the
pure calculus of constructions (a.k.a. CoC) in (a Curry-style and classical version
of) Barendregt’s λ-cube.
Throughout this paper we build a realizability model à la Krivine [12] based
on a call-by-value abstract machine. As a consequence, formulas are interpreted
using three layers (values, stacks and terms) related via orthogonality (definition 9). The crucial property (theorem 4) for the soundness of semantical value
restriction is that
φ⊥⊥ ∩ Λv = φ
for every set of values φ (closed under (≡)). Λv denotes the set of all values
and φ⊥ (resp. φ⊥⊥ ) the set of all stacks (resp. terms) that are compatible with
2
This was originally denoted [α]u.
every value in φ (resp. stacks in φ⊥ ). To obtain a model satisfying this property,
we need to extend our programming language with a term δv,w which reduction
depends on the observational equivalence of two values v and w.
Related work
To our knowledge, combining call-by-value evaluation, side-effects and dependent
products has never been achieved before. At least not for a dependent product
fully compatible with effects and call-by-value. For example, the Aura language
[10] forbids dependency on terms that are not values in dependent applications.
Similarly, the F ⋆ language [28] relies on (partial) let-normal forms to enforce
values in argument position. Daniel Licata and Robert Harper have defined
a notion of positively dependent types [16] which only allow dependency over
strictly positive types. Finally, in language like ATS [32] and DML [33] dependent
types are limited to a specific index language.
The system that seems the most similar to ours is NuPrl [2], although it
is inconsistent with classical reasoning. NuPrl accommodates an observational
equivalence (∼) (Howe’s “squiggle” relation [8]) similar to our (≡) relation. It
is partially reflected in the syntax of the system. Being based on a Kleene style
realizability model, NuPrl can also be used to reason about untyped terms.
The central part of this paper consists in a classical realizability model construction in the style of Jean-Louis Krivine [12]. We rely on a call-by-value presentation which yields a model in three layers (values, terms and stacks). Such a
technique has already been used to account for classical ML-like polymorphism
in call-by-value in the work of Guillaume Munch-Maccagnoni [21]3 . It is here
extended to include dependent products.
The most actively developed proof assistants following the Curry-Howard
correspondence are Coq and Agda [18,22]. The former is based on Coquand and
Huet’s calculus of constructions and the latter on Martin-Löf’s dependent type
theory [3, 17]. These two constructive theories provide dependent types, which
allow the definition of very expressive specifications. Coq and Agda do not directly give a computational interpretation to classical logic. Classical reasoning
can only be done through the definition of axioms such as the law of the excluded
middle. Moreover, these two languages are logically consistent, and hence their
type-checkers only allow terminating programs. As termination checking is a
difficult (and undecidable) problem, many terminating programs are rejected.
Although this is not a problem for formalizing mathematics, this makes programming tedious.
The TRELLYS project [1] aims at providing a language in which a consistent
core can interact with type-safe dependently-typed programming with general
recursion. Although the language defined in [1] is call-by-value and allows effect,
it suffers from value restriction like Aura [10]. The value restriction does not
appear explicitly but is encoded into a well-formedness judgement appearing as
the premise of the typing rule for application. Apart from value restriction, the
3
Our theorem 4 seems unrelated to lemma 9 in Munch-Maccagnoni’s work [21].
main difference between the language of the TRELLYS project and ours resides
in the calculus itself. Their calculus is Church-style (or explicitly typed) while
ours is Curry-style (or implicitly typed). In particular, their terms and types
are defined simultaneously, while our type system is constructed on top of an
untyped calculus.
Another similar system can be found in the work of Alexandre Miquel [20],
where propositions can be classical and Curry-style. However the rest of the language remains Church style and does not embed a full ML-like language. The
PVS system [23] is similar to ours as it is based on classical higher-order logic.
However this tool does not seem to be a programming language, but rather a
specification language coupled with proof checking and model checking utilities.
It is nonetheless worth mentioning that the undecidability of PVS’s type system
is handled by generating proof obligations. Our system will take a different approach and use a non-backtracking type-checking and type-inference algorithm.
1
Syntax, Reduction and Equivalence
The language is expressed in terms of a Krivine Abstract Machine [11], which is
a stack-based machine. It is formed using four syntactic entities: values, terms,
stacks and processes. The distinction between terms and values is specific to the
call-by-value presentation, they would be collapsed in call-by-name. We require
three distinct countable sets of variables:
– Vλ = {x, y, z...} for λ-variables,
– Vµ = {α, β, γ...} for µ-variables (also called stack variables) and
– Vι = {a, b, c...} for term variables. Term variables will be bound in formulas,
but never in terms.
We also require a countable set L = {l, l1 , l2 ...} of labels to name record fields
and a countable set C = {C, C1 , C2 ...} of constructors.
Definition 1. Values, terms, stacks and processes are mutually inductively defined by the following grammars. The names of the corresponding sets are displayed on the right.
v, w ::= x | λx t | C[v] | {li = vi }i∈I                                    (Λv )
t, u ::= a | v | t u | µα t | p | v.l | casev [Ci [xi ] → ti ]i∈I | δv,w     (Λ)
π, ρ ::= α | v.π | [t]π                                                     (Π)
p, q ::= t ∗ π                                                              (Λ × Π)
Terms and values form a variation of the λµ-calculus [24] enriched with ML-like
constructs (i.e. records and variants). For technical purposes that will become
clear later on, we extend the language with a special kind of term δv,w . It will
only be used to build the model and is not intended to be accessed directly by
the user. One may note that values and processes are terms. In particular, a
process of the form t ∗ α corresponds exactly to a named term [α]t in the most
usual presentation of the λµ-calculus. A stack can be either a stack variable, a
value pushed on top of a stack, or a stack frame containing a term on top of a
stack. These two constructors are specific to the call-by-value presentation, only
one would be required in call-by-name.
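For concreteness, the four grammars can be transcribed directly into an ML datatype. The following OCaml sketch is ours (constructor names included); it is only meant to fix the structure and is reused by the sketches that follow.

(* The syntax of Definition 1 as mutually recursive OCaml types. *)
type value =
  | Var   of string                          (* λ-variable x                  *)
  | Lam   of string * term                   (* λx t                          *)
  | Cons  of string * value                  (* C[v]                          *)
  | Rec   of (string * value) list           (* {l_i = v_i}                   *)

and term =
  | TVar  of string                          (* term variable a               *)
  | Val   of value                           (* every value is a term         *)
  | App   of term * term                     (* t u                           *)
  | Mu    of string * term                   (* µα t                          *)
  | Proc  of proc                            (* every process is a term       *)
  | Proj  of value * string                  (* v.l                           *)
  | Case  of value * (string * string * term) list
                                             (* case_v [C_i[x_i] → t_i]       *)
  | Delta of value * value                   (* δ_{v,w}, used only in the model *)

and stack =
  | SVar  of string                          (* stack variable α              *)
  | Push  of value * stack                   (* v.π                           *)
  | Frame of term * stack                    (* [t]π                          *)

and proc = term * stack                      (* t ∗ π                         *)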
Remark 1. We enforce values in constructors, record fields, projection and case
analysis. This makes the calculus simpler because only β-reduction will manipulate the stack. We can define syntactic sugars such as the following to hide the
restriction from the programmer.
t.l := (λx x.l) t
C[t] := (λx C[x]) t
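With the datatype sketched above, the two sugars become ordinary functions (again our sketch, not the authors' code):

(* Sugared projection and constructor application from Remark 1. *)
let proj (t : term) (l : string) : term =
  App (Val (Lam ("x", Proj (Var "x", l))), t)         (* t.l  :=  (λx x.l) t *)

let cons (c : string) (t : term) : term =
  App (Val (Lam ("x", Val (Cons (c, Var "x")))), t)   (* C[t] :=  (λx C[x]) t *)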
Definition 2. Given a value, term, stack or process ψ we denote F Vλ (ψ) (resp.
F Vµ (ψ), T V (ψ)) the set of free λ-variables (resp. free µ-variables, term variables) contained in ψ. We say that ψ is closed if it does not contain any free
variable of any kind. The set of closed values and the set of closed terms are
denoted Λ∗v and Λ∗ respectively.
Remark 2. A stack, and hence a process, can never be closed as they always at
least contain a stack variable.
1.1
Call-by-value reduction relation
Processes form the internal state of our abstract machine. They are to be thought
of as a term put in some evaluation context represented using a stack. Intuitively,
the stack π in the process t ∗ π contains the arguments to be fed to t. Since we
are in call-by-value the stack also handles the storing of functions while their
arguments are being evaluated. This is why we need stack frames (i.e. stacks of
the form [t]π). The operational semantics of our language is given by a relation
(≻) over processes.
Definition 3. The relation (≻) ⊆ (Λ × Π)2 is defined as the smallest relation
satisfying the following reduction rules.
t u ∗ π                               ≻   u ∗ [t]π
v ∗ [t]π                              ≻   t ∗ v.π
λx t ∗ v.π                            ≻   t[x := v] ∗ π
µα t ∗ π                              ≻   t[α := π] ∗ π
p ∗ π                                 ≻   p
{li = vi }i∈I .lk ∗ π                 ≻   vk ∗ π              (k ∈ I)
caseCk [v] [Ci [xi ] → ti ]i∈I ∗ π    ≻   tk [xk := v] ∗ π    (k ∈ I)
We will denote (≻+ ) its transitive closure, (≻∗ ) its reflexive-transitive closure
and (≻k ) its k-fold application.
The first three rules are those that handle β-reduction. When the abstract machine encounters an application, the function is stored in a stack-frame in order
to evaluate its argument first. Once the argument has been completely computed,
a value faces the stack-frame containing the function. At this point the function
can be evaluated and the value is stored in the stack ready to be consumed by
the function as soon as it evaluates to a λ-abstraction. A capture-avoiding substitution can then be performed to effectively apply the argument to the function.
The fourth and fifth rules handle the classical part of computation. When
a µ-abstraction is reached, the current stack (i.e. the current evaluation context)
is captured and substituted for the corresponding µ-variable. Conversely, when
a process is reached, the current stack is thrown away and evaluation resumes
with the process. The last two rules perform projection and case analysis in the
expected way. Note that for now, states of the form δv,w ∗ π are unaffected by
the reduction relation.
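To make the rules concrete, here is a one-step evaluator over the datatype sketched after Definition 1. It is our illustration, not the authors' implementation: the capture-avoiding substitutions are taken as parameters rather than spelled out, and a None result means that the process does not reduce (it is final, stuck, δ-like, or blocked on a free variable).

(* One (≻)-step of Definition 3.  [subst_val x v t] substitutes the value v
   for the λ-variable x in t; [subst_stk a pi t] substitutes the stack pi
   for the µ-variable a in t.  Both are assumed and passed as arguments. *)
let step
    (subst_val : string -> value -> term -> term)
    (subst_stk : string -> stack -> term -> term)
    ((t, pi) : proc) : proc option =
  match t, pi with
  | App (t', u), _                     -> Some (u, Frame (t', pi))
  | Val v, Frame (t', pi')             -> Some (t', Push (v, pi'))
  | Val (Lam (x, body)), Push (v, pi') -> Some (subst_val x v body, pi')
  | Mu (alpha, body), _                -> Some (subst_stk alpha pi body, pi)
  | Proc p, _                          -> Some p
  | Proj (Rec fields, l), _            ->
      (match List.assoc_opt l fields with
       | Some v -> Some (Val v, pi)
       | None   -> None)                         (* stuck *)
  | Case (Cons (c, v), branches), _    ->
      (match List.find_opt (fun (c', _, _) -> c' = c) branches with
       | Some (_, x, body) -> Some (subst_val x v body, pi)
       | None              -> None)              (* stuck *)
  | _                                  -> None   (* final, stuck or blocked *)

Iterating step until it returns None, and checking whether the last process has the shape (Val v, SVar α), corresponds to the notions of blocked, final and converging processes introduced in Definitions 4 and 5 below.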
Remark 3. For the abstract machine to be simpler, we use right-to-left call-by-value evaluation, and not the more usual left-to-right call-by-value evaluation.
Lemma 1. The reduction relation (≻) is compatible with substitutions of variables of any kind. More formally, if p and q are processes such that p ≻ q then:
– for all x ∈ Vλ and v ∈ Λv , p[x := v] ≻ q[x := v],
– for all α ∈ Vµ and π ∈ Π, p[α := π] ≻ q[α := π],
– for all a ∈ Vι and t ∈ Λ, p[a := t] ≻ q[a := t].
Consequently, if σ is a substitution for variables of any kind and if p ≻ q (resp.
p ≻∗ q, p ≻+ q, p ≻k q) then pσ ≻ qσ (resp. pσ ≻∗ qσ, pσ ≻+ qσ, pσ ≻k qσ).
Proof. Immediate case analysis on the reduction rules.
We are now going to give the vocabulary that will be used to describe some
specific classes of processes. In particular we need to identify processes that are
to be considered as the evidence of a successful computation, and those that are
to be recognised as expressing failure.
Definition 4. A process p ∈ Λ × Π is said to be:
– final if there is a value v ∈ Λv and a stack variable α ∈ Vµ such that p = v ∗ α,
– δ-like if there are values v, w ∈ Λv and a stack π ∈ Π such that p = δv,w ∗ π,
– blocked if there is no q ∈ Λ × Π such that p ≻ q,
– stuck if it is not final nor δ-like, and if for every substitution σ, pσ is blocked,
– non-terminating if there is no blocked process q ∈ Λ × Π such that p ≻∗ q.
Lemma 2. Let p be a process and σ be a substitution for variables of any kind.
If p is δ-like (resp. stuck, non-terminating) then pσ is also δ-like (resp. stuck,
non-terminating).
Proof. Immediate by definition.
Lemma 3. A stuck state is of one of the following forms, where k 6∈ I.
C[v].l ∗ π
(λx t).l ∗ π
caseλx t [Ci [xi ] → ti ]i∈I ∗ π
C[v] ∗ w.π
{li = vi }i∈I ∗ v.π
case{li =vi }i∈I [Cj [xj ] → tj ]j∈J ∗ π
caseCk [v] [Ci [xi ] → ti ]i∈I ∗ π
{li = vi }i∈I .lk ∗ π
Proof. Simple case analysis.
Lemma 4. A blocked process p ∈ Λ × Π is either stuck, final, δ-like, or of one
of the following forms.
x.l ∗ π
x ∗ v.π
casex [Ci [xi ] → ti ]i∈I ∗ π
a∗π
Proof. Straight-forward case analysis using lemma 3.
1.2
Reduction of δv,w and equivalence
The idea now is to define a notion of observational equivalence over terms using
a relation (≡). We then extend the reduction relation with a rule reducing a
state of the form δv,w ∗ π to v ∗ π if v 6≡ w. If v ≡ w then δv,w is stuck. With
this rule reduction and equivalence will become interdependent as equivalence
will be defined using reduction.
Definition 5. Given a reduction relation R, we say that a process p ∈ Λ × Π
converges, and write p ⇓R , if there is a final state q ∈ Λ × Π such that pR∗ q
(where R∗ is the reflexive-transitive closure of R). If p does not converge we say
that it diverges and write p ⇑R . We will use the notations p ⇓i and p ⇑i when
working with indexed notation symbols like (։i ).
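In practice, convergence for the relation (≻) can only be tested up to a bound, since divergence is undecidable. The following fuel-bounded driver is our sketch (not part of the paper), reusing the one-step function shown earlier, already applied to its substitution parameters.

(* Fuel-bounded test of convergence (Definition 5) for the relation (≻).
   [false] only means "no final state reached within [fuel] steps". *)
let rec converges (step : proc -> proc option) (fuel : int) (p : proc) : bool =
  match p with
  | Val _, SVar _   -> true                    (* final state v ∗ α *)
  | _ when fuel = 0 -> false                   (* out of fuel: give up *)
  | _ ->
      (match step p with
       | Some q -> converges step (fuel - 1) q
       | None   -> false)                      (* blocked but not final *)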
Definition 6. For every natural number i we define a reduction relation (։i )
and an equivalence relation (≡i ) whose negation will be denoted (6≡i ).
(։i ) = (≻) ∪ {(δv,w ∗ π, v ∗ π) | ∃j < i, v 6≡j w}
(≡i ) = {(t, u) | ∀j ≤ i, ∀π, ∀σ, tσ ∗ π ⇓j ⇔ uσ ∗ π ⇓j }
It is easy to see that (։0 ) = (≻). For every natural number i, the relation (≡i ) is
indeed an equivalence relation as it can be seen as an intersection of equivalence
relations. Its negation can be expressed as follows.
(6≡i ) = {(t, u), (u, t) | ∃j ≤ i, ∃π, ∃σ, tσ ∗ π ⇓j ∧ uσ ∗ π ⇑j }
Definition 7. We define a reduction relation (։) and an equivalence relation (≡) whose negation will be denoted (6≡):
(։) = ∪i∈N (։i )            (≡) = ∩i∈N (≡i )
These relations can be expressed directly (i.e. without the need of a union or an
intersection) in the following way.
(≡) = {(t, u) | ∀i, ∀π, ∀σ, tσ ∗ π ⇓i ⇔ uσ ∗ π ⇓i }
(6≡) = {(t, u), (u, t) | ∃i, ∃π, ∃σ, tσ ∗ π ⇓i ∧ uσ ∗ π ⇑i }
(։) = (≻) ∪ {(δv,w ∗ π, v ∗ π) | v 6≡ w}
Remark 4. Obviously (։i ) ⊆ (։i+1 ) and (≡i+1 ) ⊆ (≡i ). As a consequence
the construction of (։i )i∈N and (≡i )i∈N converges. In fact (։) and (≡) form a
fixpoint at ordinal ω. Surprisingly, this property is not explicitly required.
Theorem 1. Let t and u be terms. If t ≡ u then for every stack π ∈ Π and
substitution σ we have tσ ∗ π ⇓։ ⇔ uσ ∗ π ⇓։ .
Proof. We suppose that t ≡ u and we take π0 ∈ Π and a substitution σ0 .
By symmetry we can assume that tσ0 ∗ π0 ⇓։ and show that uσ0 ∗ π0 ⇓։ . By
definition there is i0 ∈ N such that tσ0 ∗ π0 ⇓i0 . Since t ≡ u we know that for
every i ∈ N, π ∈ Π and substitution σ we have tσ ∗ π ⇓i ⇔ uσ ∗ π ⇓i . This is
true in particular for i = i0 , π = π0 and σ = σ0 . We hence obtain uσ0 ∗ π0 ⇓i0
which gives us uσ0 ∗ π0 ⇓։ .
Remark 5. The converse implication is not true in general: taking t = δλx x,{}
and u = λx x gives a counter-example. More generally p ⇓։ ⇔ q ⇓։ does not
necessarily imply p ⇓i ⇔ q ⇓i for all i ∈ N.
Corollary 1. Let t and u be terms and π be a stack. If t ≡ u and t ∗ π ⇓։ then
u ∗ π ⇓։ .
Proof. Direct consequence of theorem 1 using π and an empty substitution.
1.3
Extensionality of the language
In order to be able to work with the equivalence relation (≡), we need to check
that it is extensional. In other words, we need to be able to replace equals by
equals at any place in terms without changing their observed behaviour. This
property is summarized in the following two theorems.
Theorem 2. Let v and w be values, E be a term and x be a λ-variable. If v ≡ w
then E[x := v] ≡ E[x := w].
Proof. We are going to prove the contrapositive so we suppose E[x := v] 6≡
E[x := w] and show v 6≡ w. By definition there is i ∈ N, π ∈ Π and a substitution
σ such that (E[x := v])σ ∗ π ⇓i and (E[x := w])σ ∗ π ⇑i (up to symmetry). Since
we can rename x in such a way that it does not appear in dom(σ), we can
suppose Eσ[x := vσ] ∗ π ⇓i and Eσ[x := wσ] ∗ π ⇑i . In order to show v 6≡ w we
need to find i0 ∈ N, π0 ∈ Π and a substitution σ0 such that vσ0 ∗ π0 ⇓i0 and
wσ0 ∗ π0 ⇑i0 (up to symmetry). We take i0 = i, π0 = [λx Eσ]π and σ0 = σ.
These values are suitable since by definition vσ0 ∗ π0 ։i0 Eσ[x := vσ] ∗ π ⇓i0
and wσ0 ∗ π0 ։i0 Eσ[x := wσ] ∗ π ⇑i0 .
Lemma 5. Let s be a process, t be a term, a be a term variable and k be a
natural number. If s[a := t] ⇓k then there is a blocked state p such that s ≻∗ p
and either
– p = v ∗ α for some value v and a stack variable α,
– p = a ∗ π for some stack π,
– k > 0 and p = δ(v, w) ∗ π for some values v and w and stack π, and in this
case v[a := t] 6≡j w[a := t] for some j < k.
Proof. Let σ be the substitution [a := t]. If s is non-terminating, lemma 2 tells
us that sσ is also non-terminating, which contradicts sσ ⇓k . Consequently, there
is a blocked process p such that s ≻∗ p since (≻) ⊆ (։k ). Using lemma 1 we get
sσ ≻∗ pσ from which we obtain pσ ⇓k . The process p cannot be stuck, otherwise
pσ would also be stuck by lemma 2, which would contradict pσ ⇓k . Let us now
suppose that p = δv,w ∗ π for some values v and w and some stack π. Since
δvσ,wσ ∗ π ⇓k there must be j < k such that vσ 6≡j wσ, otherwise this would
contradict δvσ,wσ ∗ π ⇓k . In this case we necessarily have k > 0, otherwise there
would be no possible candidate for j. According to lemma 4 we need to rule out
four more forms of terms: x.l ∗ π, x ∗ v.π, casex B ∗ π and b ∗ π in the case
where b 6= a. If p was of one of these forms the substitution σ would not be able
to unblock the reduction of p, which would contradict again pσ ⇓k .
Lemma 6. Let t1 , t2 and E be terms and a be a term variable. For every k ∈ N,
if t1 ≡k t2 then E[a := t1 ] ≡k E[a := t2 ].
Proof. Let us take k ∈ N, suppose that t1 ≡k t2 and show that E[a := t1 ] ≡k
E[a := t2 ]. By symmetry we can assume that we have i ≤ k, π ∈ Π and a
substitution σ such that (E[a := t1 ])σ ∗ π ⇓i and show that (E[a := t2 ])σ ∗ π ⇓i .
As we are free to rename a, we can suppose that it does not appear in dom(σ),
T V (π), T V (t1 ) or T V (t2 ). In order to lighten the notations we define E ′ = Eσ,
σ1 = [a := t1 σ] and σ2 = [a := t2 σ]. We are hence assuming E ′ σ1 ∗ π ⇓i and trying
to show E ′ σ2 ∗ π ⇓i .
We will now build a sequence (Ei , πi , li )i∈I in such a way that E ′ σ1 ∗ π ։∗k
Ei σ1 ∗ πi σ1 in li steps for every i ∈ I. Furthermore, we require that (li )i∈I is
increasing and that it has a strictly increasing subsequence. Under this condition
our sequence will necessarily be finite. If it was infinite the number of reduction
steps that could be taken from the state E ′ σ1 ∗ π would not be bounded, which
would contradict E ′ σ1 ∗ π ⇓i . We now denote our finite sequence (Ei , πi , li )i≤n
with n ∈ N. In order to show that (li )i≤n has a strictly increasing subsequence,
we will ensure that it does not have three equal consecutive values. More formally,
we will require that if 0 < i < n and li−1 = li then li+1 > li .
To define (E0 , π0 , l0 ) we consider the reduction of E ′ ∗ π. Since we know
that (E ′ ∗ π)σ1 = E ′ σ1 ∗ π ⇓i we use lemma 5 to obtain a blocked state p such
that E ′ ∗ π ≻j p. We can now take E0 ∗ π0 = p and l0 = j. By lemma 1 we
have (E ′ ∗ π)σ1 ≻j E0 σ1 ∗ π0 σ1 from which we can deduce that (E ′ ∗ π)σ1 ։∗k
E0 σ1 ∗ π0 σ1 in l0 = j steps.
To define (Ei+1 , πi+1 , li+1 ) we consider the reduction of the process Ei σ1 ∗ πi .
By construction we know that E ′ σ1 ∗ π ։∗k Ei σ1 ∗ πi σ1 = (Ei σ1 ∗ πi )σ1 in li
steps. Using lemma 5 we know that Ei ∗ πi might be of three shapes.
– If Ei ∗ πi = v ∗ α for some value v and stack variable α then the end of the
sequence was reached with n = i.
– If Ei = a then we consider the reduction of Ei σ1 ∗πi . Since (Ei σ1 ∗πi )σ1 ⇓k we
know from lemma 5 that there is a blocked process p such that Ei σ1 ∗ πi ≻j p.
Using lemma 1 we obtain that Ei σ1 ∗ πi σ1 ≻j pσ1 from which we can deduce
that Ei σ1 ∗ πi σ1 ։k pσ1 in j steps. We then take Ei+1 ∗ πi+1 = p and
li+1 = li + j.
Is it possible to have j = 0? This can only happen when Ei σ1 ∗ πi is of one
of the three forms of lemma 5. It cannot be of the form a ∗ π as we assumed
that a does not appear in t1 or σ. If it is of the form v ∗ α, then we reached
the end of the sequence with i + 1 = n so there is no trouble. The process
Ei σ1 ∗ πi may be of the form δ(v, w) ∗ π, but we will have li+2 > li+1 .
– If Ei = δ(v, w) for some values v and w we have m < k such that vσ1 6≡m
wσ1 . Hence Ei σ1 ∗ πi = δ(vσ1 , wσ1 ) ∗ πi ։k vσ1 ∗ πi by definition. Moreover
Ei σ1 ∗ πi σ1 ։k vσ1 ∗ πi σ1 by lemma 1. Since E ′ σ1 ∗ π ։∗k Ei σ1 ∗ πi σ1 in li
steps we obtain that E ′ σ1 ∗ π ։∗k vσ1 ∗ πi σ1 in li + 1 steps. This also gives
us (vσ1 ∗ πi )σ1 = vσ1 ∗ πi σ1 ⇓k .
We now consider the reduction of the process vσ1 ∗ πi . By lemma 5 there
is a blocked process p such that vσ1 ∗ πi ≻j p. Using lemma 1 we obtain
vσ1 ∗ πi σ1 ≻j pσ1 from which we deduce that vσ1 ∗ πi σ1 ։∗k pσ1 in j steps.
We then take Ei+1 ∗ πi+1 = p and li+1 = li + j + 1. Note that in this case
we have li+1 > li .
Intuitively (Ei , πi , li )i≤n mimics the reduction of E ′ σ1 ∗ π while making explicit
every substitution of a and every reduction of a δ-like state.
To end the proof we show that for every i ≤ n we have Ei σ2 ∗ πi σ2 ⇓k .
For i = 0 this will give us E ′ σ2 ∗ π ⇓k which is the expected result. Since
En ∗ πn = v ∗ α we have En σ2 ∗ πn σ2 = vσ2 ∗ α from which we trivially obtain
En σ2 ∗ πn σ2 ⇓k . We now suppose that Ei+1 σ2 ∗ πi σ2 ⇓k for 0 ≤ i < n and show
that Ei σ2 ∗ πi σ2 ⇓k . By construction Ei ∗ πi can be of two shapes4 :
– If Ei = a then t1 σ ∗ πi ։∗k Ei+1 ∗ πi+1 . Using lemma 1 we obtain t1 σ ∗
πi σ2 ։k Ei+1 σ2 ∗ πi σ2 from which we deduce t1 σ ∗ πi σ2 ⇓k by induction
hypothesis. Since t1 ≡k t2 we obtain t2 σ ∗ πi σ2 = (Ei ∗ πi )σ2 ⇓k .
– If Ei = δ(v, w) then v ∗ πi ։k Ei+1 ∗ πi+1 and hence vσ2 ∗ πi σ2 ։k Ei+1 σ2 ∗
πi+1 σ2 by lemma 1. Using the induction hypothesis we obtain vσ2 ∗ πi σ2 ⇓k .
It remains to show that δ(vσ2 , wσ2 ) ∗ πi σ2 ։∗k vσ2 ∗ πi σ2 . We need to find
j < k such that vσ2 6≡j wσ2 . By construction there is m < k such that
vσ1 6≡m wσ1 . We are going to show that vσ2 6≡m wσ2 . By using the global
induction hypothesis twice we obtain vσ1 ≡m vσ2 and wσ1 ≡m wσ2 . Now
if vσ2 ≡m wσ2 then vσ1 ≡m vσ2 ≡m wσ2 ≡m wσ1 contradicts vσ1 6≡ wσ1 .
Hence we must have vσ2 6≡m wσ2 .
4
Only En ∗ πn can be of the form v ∗ α.
Theorem 3. Let t1 , t2 and E be three terms and a be a term variable. If t1 ≡ t2
then E[a := t1 ] ≡ E[a := t2 ].
Proof. We suppose that t1 ≡ t2 which means that t1 ≡i t2 for every i ∈ N.
We need to show that E[a := t1 ] ≡ E[a := t2 ] so we take i0 ∈ N and show
E[a := t1 ] ≡i0 E[a := t2 ]. By hypothesis we have t1 ≡i0 t2 and hence we can
conclude using lemma 6.
2
Formulas and Semantics
The syntax presented in the previous section is part of a realizability machinery
that will be built upon here. We aim at obtaining a semantical interpretation of
the second-order type system that will be defined shortly. Our abstract machine
slightly differs from the mainstream presentation of Krivine’s classical realizability which is usually call-by-name. Although call-by-value presentations have
rarely been published, such developments are well-known among classical realizability experts. The addition of the δ instruction and the related modifications
are however due to the author.
2.1
Pole and orthogonality
As always in classical realizability, the model is parametrized by a pole, which
serves as an exchange point between the world of programs and the world of
execution contexts (i.e. stacks).
Definition 8. A pole is a set of processes ⊥⊥ ⊆ Λ × Π which is saturated (i.e.
closed under backward reduction). More formally, if we have q ∈ ⊥⊥ and p ։ q
then p ∈ ⊥⊥.
Here, for the sake of simplicity and brevity, we are only going to use the pole
⊥⊥ = {p ∈ Λ × Π | p ⇓։ }
which is clearly saturated. Note that this particular pole is also closed under the
reduction relation (։), even though this is not a general property. In particular
⊥⊥ contains all final processes.
The notion of orthogonality is central in Krivine’s classical realizability. In
this framework a type is interpreted (or realized) by programs computing corresponding values. This interpretation is spread in a three-layered construction,
even though it is fully determined by the first layer (and the choice of the pole).
The first layer consists of a set of values that we will call the raw semantics.
It gathers all the syntactic values that should be considered as having the corresponding type. As an example, if we were to consider the type of natural
numbers, its raw semantics would be the set {n̄ | n ∈ N} where n̄ is some encoding of n. The second layer, called falsity value is a set containing every stack
that is a candidate for building a valid process using any value from the raw
semantics. The notion of validity depends on the choice of the pole. Here for
instance, a valid process is a normalizing one (i.e. one that reduces to a final
state). The third layer, called truth value is a set of terms that is built by iterating the process once more. The formalism for the two levels of orthogonality
is given in the following definition.
Definition 9. For every set φ ⊆ Λv we define a set φ⊥ ⊆ Π and a set φ⊥⊥ ⊆ Λ
as follows.
φ⊥ = {π ∈ Π | ∀v ∈ φ, v ∗ π ∈ ⊥⊥}
φ⊥⊥ = {t ∈ Λ | ∀π ∈ φ⊥ , t ∗ π ∈ ⊥⊥}
We now give two general properties of orthogonality that are true in every
classical realizability model. They will be useful when proving the soundness of
our type system.
Lemma 7. If φ ⊆ Λv is a set of values, then φ ⊆ φ⊥⊥ .
Proof. Immediate following the definition of φ⊥⊥ .
Lemma 8. Let φ ⊆ Λv and ψ ⊆ Λv be two sets of values. If φ ⊆ ψ then
φ⊥⊥ ⊆ ψ ⊥⊥ .
Proof. Immediate by definition of orthogonality.
The construction involving the terms of the form δv,x and (≡) in the previous
section is now going to gain meaning. The following theorem, which is our central
result, does not hold in every classical realizability model. Obtaining a proof
required us to internalize observational equivalence, which introduces a noncomputable reduction rule.
Theorem 4. If Φ ⊆ Λv is a set of values closed under (≡), then Φ⊥⊥ ∩ Λv = Φ.
Proof. The direction Φ ⊆ Φ⊥⊥ ∩ Λv is straight-forward using lemma 7. We are
going to show that Φ⊥⊥ ∩ Λv ⊆ Φ, which amounts to showing that for every
value v ∈ Φ⊥⊥ we have v ∈ Φ. We are going to show the contrapositive, so let
us assume v 6∈ Φ and show v 6∈ Φ⊥⊥ . We need to find a stack π0 such that
v ∗ π0 6∈ ⊥⊥ and for every value w ∈ Φ, w ∗ π0 ∈ ⊥⊥. We take π0 = [λx δx,v ] α and
show that it is suitable. By definition of the reduction relation v ∗ π0 reduces to
δv,v ∗ α which is not in ⊥⊥ (it is stuck as v ≡ v by reflexivity). Let us now take
w ∈ Φ. Again by definition, w ∗ π0 reduces to δw,v ∗ α, but this time we have
w 6≡ v since Φ was supposed to be closed under (≡) and v 6∈ Φ. Hence w ∗ π0
reduces to w ∗ α ∈ ⊥⊥.
It is important to check that the pole we chose does not yield a degenerate
model. In particular we check that no term is able to face every stack. If it were
the case, such a term could be used as a proof of ⊥.
Theorem 5. The pole ⊥⊥ is consistent, which means that for every closed term
t there is a stack π such that t ∗ π 6∈ ⊥⊥.
Proof. Let t be a closed term and α be a stack constant. If we do not have t∗α ⇓։
then we can directly take π = α. Otherwise we know that t ∗ α ։∗ v ∗ α for some
value v. Since t is closed α is the only available stack variable. We now show
that π = [λx {}]{}.β is suitable. We denote σ the substitution [α := π]. Using a
trivial extension of lemma 1 to the (։) relation we obtain t ∗ π = (t ∗ α)σ ։∗
(v ∗ α)σ = vσ ∗ π. We hence have t ∗ π ։∗ vσ ∗ [λx {}]{}.β ։2 {} ∗ {}.β 6∈ ⊥⊥.
2.2
Formulas and their semantics
In this paper we limit ourselves to second-order logic, even though the system
can easily be extended to higher-order. For every natural number n we require
a countable set Vn = {Xn , Yn , Zn ...} of n-ary predicate variables.
Definition 10. The syntax of formulas is given by the following grammar.
A, B ::= Xn (t1 , ..., tn ) | A ⇒ B | ∀a A | ∃a A | ∀Xn A | ∃Xn A
| {li : Ai }i∈I | [Ci : Ai ]i∈I | t ∈ A | A ↾ t ≡ u
Terms appear in several places in formulas, in particular, they form the individuals of the logic. They can be quantified over and are used as arguments
for predicate variables. Besides the ML-like formers for sums and products (i.e.
records and variants) we add a membership predicate and a restriction operation. The membership predicate t ∈ A is used to express the fact that the term t
has type A. It provides a way to encode the dependent product type using universal quantification and the arrow type. In this sense, it is inspired and related
to Krivine’s relativization of quantifiers.
Πa:A B
:=
∀a(a ∈ A ⇒ B)
The restriction operator can be thought of as a kind of conjunction with no
algorithmic content. The formula A ↾ t ≡ u is to be interpreted in the same way
as A if the equivalence t ≡ u holds, and as ⊥ otherwise5 . In particular, we will
define the following types:
A ↾ t 6≡ u := A ↾ t ≡ u ⇒ ⊥
t ≡ u := ⊤ ↾ t ≡ u
t 6≡ u := ⊤ ↾ t 6≡ u
To handle free variables in formulas we will need to generalize the notion of
substitution to allow the substitution of predicate variables.
Definition 11. A substitution is a finite map σ ranging over λ-variables, µ-variables, term and predicate variables such that:
– if x ∈ dom(σ) then σ(x) ∈ Λv ,
– if α ∈ dom(σ) then σ(α) ∈ Π,
– if a ∈ dom(σ) then σ(a) ∈ Λ,
– if Xn ∈ dom(σ) then σ(Xn ) ∈ Λn → P(Λv /≡).
5
We use the standard second-order encoding: ⊥ = ∀X0 X0 and ⊤ = ∃X0 X0 .
Remark 6. A predicate variable of arity n will be substituted by an n-ary predicate. Semantically, such a predicate will correspond to some total (set-theoretic)
function building a subset of Λv /≡ from n terms. In the syntax, the binding of
the arguments of a predicate variable will happen implicitly during its substitution.
Definition 12. Given a formula A we denote F V (A) the set of its free variables.
Given a substitution σ such that F V (A) ⊆ dom(σ) we write A[σ] the closed
formula built by applying σ to A.
In the semantics we will interpret closed formulas by sets of values closed
under the equivalence relation (≡).
Definition 13. Given a formula A and a substitution σ such that A[σ] is closed,
we define the raw semantics JAKσ ⊆ Λv / ≡ of A under the substitution σ as
follows.
JXn (t1 , ..., tn )Kσ = σ(Xn )(t1 σ, ..., tn σ)
JA ⇒ BKσ = {λx t | ∀v ∈ JAKσ , t[x := v] ∈ JBK⊥⊥
σ }
J∀a AKσ = ∩t∈Λ∗ JAKσ[a:=t]
J∃a AKσ = ∪t∈Λ∗ JAKσ[a:=t]
J∀Xn AKσ = ∩P ∈Λn →P(Λv /≡) JAKσ[Xn :=P ]
J∃Xn AKσ = ∪P ∈Λn →P(Λv /≡) JAKσ[Xn :=P ]
J{li : Ai }i∈I Kσ = {{li = vi }i∈I | ∀i ∈ I vi ∈ JAi Kσ }
J[Ci : Ai ]i∈I Kσ = ∪i∈I {Ci [v] | v ∈ JAi Kσ }
Jt ∈ AKσ = {v ∈ JAKσ | tσ ≡ v}
JA ↾ t ≡ uKσ = JAKσ    if tσ ≡ uσ
JA ↾ t ≡ uKσ = ∅       otherwise
In the model, programs will realize closed formulas in two different ways
according to their syntactic class. The interpretation of values will be given in
terms of raw semantics, and the interpretation of terms in general will be given
in terms of truth values.
Definition 14. Let A be a formula and σ a substitution such that A[σ] is closed.
We say that:
– v ∈ Λv realizes A[σ] if v ∈ JAKσ ,
– t ∈ Λ realizes A[σ] if t ∈ JAK⊥⊥
σ .
2.3
Contexts and typing rules
Before giving the typing rules of our system we need to define contexts and
judgements. As explained in the introduction, several typing rules require a
value restriction in our context. This is reflected in the typing rules by the presence
of two forms of judgements.
Definition 15. A context is an ordered list of hypotheses. In particular, it contains type declarations for λ-variables and µ-variables, and declaration of term
variables and predicate variables. In our case, a context also contains term equivalences and inequivalences. A context is built using the following grammar.
Γ, ∆ ::= • | Γ, x : A | Γ, α : ¬A | Γ, a : T erm
| Γ, Xn : P redn | Γ, t ≡ u | Γ, t 6≡ u
A context Γ is said to be valid if it is possible to derive Γ Valid using the rules
of figure 1. In the following, every context will be considered valid implicitly.
Γ Valid        x 6∈ dom(Γ)        F V (A) ⊆ dom(Γ) ∪ {x}
──────────────────────────────────────────────────────────
Γ, x : A Valid

Γ Valid        α 6∈ dom(Γ)        F V (A) ⊆ dom(Γ)
────────────────────────────────────────────────────
Γ, α : ¬A Valid

Γ Valid        a 6∈ dom(Γ)
───────────────────────────
Γ, a : Term Valid

Γ Valid        Xn 6∈ dom(Γ)
────────────────────────────
Γ, Xn : Predn Valid

Γ Valid        F V (t) ∪ F V (u) ⊆ dom(Γ)
──────────────────────────────────────────
Γ, t ≡ u Valid

Γ Valid        F V (t) ∪ F V (u) ⊆ dom(Γ)
──────────────────────────────────────────
Γ, t 6≡ u Valid

──────────
• Valid
Fig. 1. Rules allowing the construction of a valid context.
Definition 16. There are two forms of typing judgements:
– Γ ⊢val v : A meaning that the value v has type A in context Γ ,
– Γ ⊢ t : A meaning that the term t has type A in context Γ .
The typing rules of the system are given in figure 2. Although most of them
are fairly usual, our type system differs in several ways. For instance the last four
rules are related to the extensionality of the calculus. One can note the value
restriction in several places: both universal quantification introduction rules and
the introduction of the membership predicate. In fact, some value restriction is
also hidden in the rules for the elimination of the existential quantifiers and the
elimination rule for the restriction connective. These rules are presented in their
left-hand side variation, and only values can appear on the left of the sequent.
It is not surprising that elimination of an existential quantifier requires value
restriction as it is the dual of the introduction rule of a universal quantifier.
──────────────────────── (ax)
Γ, x : A ⊢val x : A

Γ ⊢val v : A
──────────────────────── (↑)
Γ ⊢ v : A

Γ ⊢ v : A
──────────────────────── (↓)
Γ ⊢val v : A

Γ, x : A ⊢ t : B
──────────────────────── (⇒i)
Γ ⊢val λx t : A ⇒ B

Γ ⊢ t : A ⇒ B        Γ ⊢ u : A
──────────────────────────────── (⇒e)
Γ ⊢ t u : B

Γ, α : ¬A ⊢ t : A
──────────────────────── (µ)
Γ ⊢ µα t : A

Γ, α : ¬A ⊢ t : A
──────────────────────── (∗)
Γ, α : ¬A ⊢ t ∗ α : B

Γ, x : A, x ≡ u ⊢ t : A
──────────────────────── (∈e)
Γ, x : u ∈ A ⊢ t : A

Γ ⊢val v : A
──────────────────────── (∈i)
Γ ⊢val v : v ∈ A

Γ, u1 ≡ u2 ⊢ t : A
──────────────────────────────── (↾i)
Γ, u1 ≡ u2 ⊢ t : A ↾ u1 ≡ u2

Γ, x : A, u1 ≡ u2 ⊢ t : B
──────────────────────────────── (↾e)
Γ, x : A ↾ u1 ≡ u2 ⊢ t : B

Γ ⊢val v : A        a 6∈ F V (Γ)
──────────────────────────────── (∀i)
Γ ⊢val v : ∀a A

Γ ⊢ t : ∀a A
──────────────────────── (∀e)
Γ ⊢ t : A[a := u]

Γ, y : A ⊢ t : B        a 6∈ F V (Γ, B) ∪ T V (t)
────────────────────────────────────────────────── (∃e)
Γ, y : ∃a A ⊢ t : B

Γ ⊢ t : A[a := u]
──────────────────────── (∃i)
Γ ⊢ t : ∃a A

Γ ⊢val v : A        Xn 6∈ F V (Γ)
────────────────────────────────── (∀I)
Γ ⊢val v : ∀Xn A

Γ ⊢ t : ∀Xn A
──────────────────────── (∀E)
Γ ⊢ t : A[Xn := P ]

Γ, x : A ⊢ t : B        Xn 6∈ F V (Γ, B)
───────────────────────────────────────── (∃E)
Γ, x : ∃Xn A ⊢ t : B

Γ ⊢ t : A[Xn := P ]
──────────────────────── (∃I)
Γ ⊢ t : ∃Xn A

[Γ ⊢val vi : Ai ]1≤i≤n
───────────────────────────────────────── (×i)
Γ ⊢val {li = vi }1≤i≤n : {li : Ai }1≤i≤n

Γ ⊢val v : {li : Ai }1≤i≤n
──────────────────────────── (×e)
Γ ⊢ v.li : Ai

Γ ⊢val v : Ai
───────────────────────────────── (+i)
Γ ⊢val Ci [v] : [Ci : Ai ]1≤i≤n

Γ ⊢val v : [Ci : Ai ]1≤i≤n        [Γ, x : Ai , Ci [x] ≡ v ⊢ ti : B]1≤i≤n
────────────────────────────────────────────────────────────────────────── (+e)
Γ ⊢ casev [Ci [x] → ti ]1≤i≤n : B

Γ, w1 ≡ w2 ⊢ t[x := w1 ] : A
────────────────────────────── (≡v,l)
Γ, w1 ≡ w2 ⊢ t[x := w2 ] : A

Γ, w1 ≡ w2 ⊢ t : A[x := w1 ]
────────────────────────────── (≡v,r)
Γ, w1 ≡ w2 ⊢ t : A[x := w2 ]

Γ, t1 ≡ t2 ⊢ t[a := t1 ] : A
────────────────────────────── (≡t,l)
Γ, t1 ≡ t2 ⊢ t[a := t2 ] : A

Γ, t1 ≡ t2 ⊢ t : A[a := t1 ]
────────────────────────────── (≡t,r)
Γ, t1 ≡ t2 ⊢ t : A[a := t2 ]
Fig. 2. Second-order type system.
An important and interesting difference with existing type systems is the
presence of ↑ and ↓. These two rules allow one to go from one kind of sequent
to the other when working on values. Going from Γ ⊢val v : A to Γ ⊢ v : A is
straight-forward. Going the other direction is the main motivation for our model.
This allows us to lift the value restriction expressed in the syntax to a restriction
expressed in terms of equivalence. For example, the two rules
Γ, t ≡ v ⊢ t : A        a 6∈ F V (Γ)
───────────────────────────────────── (∀i,≡)
Γ, t ≡ v ⊢ t : ∀a A

Γ, u ≡ v ⊢ t : Πa:A B        Γ, u ≡ v ⊢ u : A
─────────────────────────────────────────────── (Πe,≡)
Γ, u ≡ v ⊢ t u : B[a := u]
can be derived in the system (see figure 3). The value restriction can be removed
similarly on every other rule. Thus, judgements on values can be completely
ignored by the user of the system. Transition to value judgements will only
happen internally.
Derivation of the rule ∀i,≡ :

Γ, t ≡ v ⊢ t : A
───────────────────────── (≡t,l)
Γ, t ≡ v ⊢ v : A
───────────────────────── (↓)
Γ, t ≡ v ⊢val v : A
───────────────────────── (∀i)   [a 6∈ F V (Γ)]
Γ, t ≡ v ⊢val v : ∀a A
───────────────────────── (↑)
Γ, t ≡ v ⊢ v : ∀a A
───────────────────────── (≡t,l)
Γ, t ≡ v ⊢ t : ∀a A

Derivation of the rule Πe,≡ , left branch (using Πa:A B = ∀a(a ∈ A ⇒ B)):

Γ, u ≡ v ⊢ t : Πa:A B, i.e. Γ, u ≡ v ⊢ t : ∀a(a ∈ A ⇒ B)
──────────────────────────────────────────────────────────── (∀e)
Γ, u ≡ v ⊢ t : u ∈ A ⇒ B[a := u]

right branch:

Γ, u ≡ v ⊢ u : A
───────────────────────── (≡t,l)
Γ, u ≡ v ⊢ v : A
───────────────────────── (↓)
Γ, u ≡ v ⊢val v : A
───────────────────────── (∈i)
Γ, u ≡ v ⊢val v : v ∈ A
───────────────────────── (↑)
Γ, u ≡ v ⊢ v : v ∈ A
───────────────────────── (≡t,l)
Γ, u ≡ v ⊢ u : v ∈ A
───────────────────────── (≡t,r)
Γ, u ≡ v ⊢ u : u ∈ A

and combination of the two branches:

Γ, u ≡ v ⊢ t : u ∈ A ⇒ B[a := u]        Γ, u ≡ v ⊢ u : u ∈ A
─────────────────────────────────────────────────────────────── (⇒e)
Γ, u ≡ v ⊢ t u : B[a := u]
Fig. 3. Derivation of the rules ∀i,≡ and Πe,≡ .
2.4
Adequacy
We are now going to prove the soundness of our type system by showing that
it is compatible with our realizability model. This property is specified by the
following theorem which is traditionally called the adequacy lemma.
Definition 17. Let Γ be a (valid) context. We say that the substitution σ realizes Γ if:
– for every x : A in Γ we have σ(x) ∈ JAKσ ,
– for every α : ¬A in Γ we have σ(α) ∈ JAK⊥σ ,
– for every a : Term in Γ we have σ(a) ∈ Λ,
– for every Xn : Predn in Γ we have σ(Xn ) ∈ Λn → P(Λv /≡),
– for every t ≡ u in Γ we have tσ ≡ uσ and
– for every t 6≡ u in Γ we have tσ 6≡ uσ.
Theorem 6. (Adequacy.) Let Γ be a (valid) context, A be a formula such that
F V (A) ⊆ dom(Γ ) and σ be a substitution realizing Γ .
– If Γ ⊢val v : A then vσ ∈ JAKσ ,
– if Γ ⊢ t : A then tσ ∈ JAK⊥⊥
σ .
Proof. We proceed by induction on the derivation of the judgement Γ ⊢val v : A
(resp. Γ ⊢ t : A) and we reason by case on the last rule used.
(ax) By hypothesis σ realizes Γ, x : A from which we directly obtain xσ ∈ JAKσ .
(↑) and (↓) are direct consequences of lemma 7 and theorem 4 respectively.
(⇒e ) We need to prove that tσ uσ ∈ JBK⊥⊥σ, hence we take π ∈ JBK⊥σ and show
tσ uσ ∗ π ∈ ⊥⊥. Since ⊥⊥ is saturated, we can take a reduction step and show
uσ ∗ [tσ]π ∈ ⊥⊥. By induction hypothesis uσ ∈ JAK⊥⊥σ so we only have to show
[tσ]π ∈ JAK⊥σ. To do so we take v ∈ JAKσ and show v ∗ [tσ]π ∈ ⊥⊥. Here we can
again take a reduction step and show tσ ∗ v.π ∈ ⊥⊥. By induction hypothesis we
have tσ ∈ JA ⇒ BK⊥⊥σ, hence it is enough to show v.π ∈ JA ⇒ BK⊥σ. We now
take a value λx tx ∈ JA ⇒ BKσ and show that λx tx ∗ v.π ∈ ⊥⊥. We then apply
again a reduction step and show tx [x := v] ∗ π ∈ ⊥⊥. Since π ∈ JBK⊥σ we only
need to show tx [x := v] ∈ JBK⊥⊥σ, which is true by definition of JA ⇒ BKσ .
(⇒i ) We need to show λx tσ ∈ JA ⇒ BKσ so we take v ∈ JAKσ and show
tσ[x := v] ∈ JBK⊥⊥
σ . Since σ[x := v] realizes Γ, x : A we can conclude using the
induction hypothesis.
(µ) We need to show that µα tσ ∈ JAK⊥⊥σ, hence we take π ∈ JAK⊥σ and show
µα tσ ∗ π ∈ ⊥⊥. Since ⊥⊥ is saturated, it is enough to show tσ[α := π] ∗ π ∈ ⊥⊥.
As σ[α := π] realizes Γ, α : ¬A we conclude by induction hypothesis.
(∗) We need to show tσ ∗ ασ ∈ JBK⊥⊥σ, hence we take π ∈ JBK⊥σ and show that
(tσ ∗ ασ) ∗ π ∈ ⊥⊥. Since ⊥⊥ is saturated, we can take a reduction step and show
tσ ∗ ασ ∈ ⊥⊥. By induction hypothesis tσ ∈ JAK⊥⊥σ, hence it is enough to show
ασ ∈ JAK⊥σ, which is true by hypothesis.
(∈i ) We need to show vσ ∈ Jv ∈ AKσ . We have vσ ∈ JAKσ by induction hypothesis, and vσ ≡ vσ by reflexivity of (≡).
(∈e ) By hypothesis we know that σ realizes Γ, x : u ∈ A. To be able to conclude
using the induction hypothesis, we need to show that σ realizes Γ, x : A, x ≡ u.
Since we have σ(x) ∈ Ju ∈ AKσ , we obtain that xσ ∈ JAKσ and xσ ≡ uσ by
definition of Ju ∈ AKσ .
(↾i ) We need to show tσ ∈ JA ↾ u1 ≡ u2 K⊥⊥
σ . By hypothesis u1 σ ≡ u2 σ, hence
JA ↾ u1 ≡ u2 Kσ = JAKσ . Consequently, it is enough to show that tσ ∈ JAK⊥⊥
σ ,
which is exactly the induction hypothesis.
(↾e ) By hypothesis we know that σ realizes Γ, x : A ↾ u1 ≡ u2 . To be able to use
the induction hypothesis, we need to show that σ realizes Γ, x : A, u1 ≡ u2 . Since
we have σ(x) ∈ JA ↾ u1 ≡ u2 Kσ , we obtain that xσ ∈ JAKσ and that u1 σ ≡ u2 σ
by definition of JA ↾ u1 ≡ u2 Kσ .
(∀i ) We need to show that vσ ∈ J∀a AKσ = ∩t∈Λ JAKσ[a:=t] so we take t ∈ Λ and
show vσ ∈ JAKσ[a:=t] . This is true by induction hypothesis since a 6∈ F V (Γ) and
hence σ[a := t] realizes Γ .
(∀e ) We need to show tσ ∈ JA[a := u]K⊥⊥σ = JAK⊥⊥σ[a:=uσ] for some u ∈ Λ. By
induction hypothesis we know tσ ∈ J∀a AK⊥⊥σ, hence we only need to show that
J∀a AK⊥⊥σ ⊆ JAK⊥⊥σ[a:=uσ] . By definition we have J∀a AKσ ⊆ JAKσ[a:=uσ] so we can
conclude using lemma 8.
(∃e ) By hypothesis we know that σ realizes Γ, x : ∃a A. In particular, we know
that σ(x) ∈ J∃a AKσ , which means that there is a term u ∈ Λ∗ such that
σ(x) ∈ JAKσ[a:=u] . Since a 6∈ F V (Γ), we obtain that the substitution σ[a := u]
realizes the context Γ, x : A. Using the induction hypothesis, we finally get
tσ = tσ[a := u] ∈ JBK⊥⊥σ[a:=u] = JBK⊥⊥σ since a 6∈ T V (t) and a 6∈ F V (B).
(∃i ) The proof for this rule is similar to the one for (∀e ). We need to show that
JA[a := u]K⊥⊥σ = JAK⊥⊥σ[a:=uσ] ⊆ J∃a AK⊥⊥σ . This follows from lemma 8 since
JAKσ[a:=uσ] ⊆ J∃a AKσ by definition.
(∀I ), (∀E ), (∃E ) and (∃I ) are similar to (∀i ), (∀e ), (∃e ) and (∃i ).
(×i ) We need to show that {li = vi σ}i∈I ∈ J{li : Ai }i∈I Kσ . By definition we need
to show that for all i ∈ I we have vi σ ∈ JAi Kσ . This is immediate by induction
hypothesis.
(×e ) We need to show that vσ.li ∈ JAi K⊥⊥σ for some i ∈ I. By induction hypothesis we have vσ ∈ J{li : Ai }i∈I Kσ and hence v has the form {li = vi }i∈I with
vi σ ∈ JAi Kσ . Let us now take π ∈ JAi K⊥σ and show that {li = vi σ}i∈I .li ∗ π ∈ ⊥⊥.
Since ⊥⊥ is saturated, it is enough to show vi σ ∗ π ∈ ⊥⊥. This is true since
vi σ ∈ JAi Kσ and π ∈ JAi K⊥
σ.
(+i ) We need to show Ci [vσ] ∈ J[Ci : Ai ]i∈I Kσ for some i ∈ I. By induction
hypothesis vσ ∈ JAi Kσ and hence we can conclude by definition of J[Ci : Ai ]i∈I Kσ .
(+e ) We need to show casevσ [Ci [x] → ti σ]i∈I ∈ JBK⊥⊥
σ . By induction hypothesis
vσ ∈ J[Ci : Ai ]i∈I Kσ which means that there is i ∈ I and w ∈ JAi Kσ such that
vσ = Ci [w]. We take π ∈ JBK⊥σ and show caseCi [w] [Ci [x] → ti σ]i∈I ∗ π ∈ ⊥⊥.
Since ⊥⊥ is saturated, it is enough to show ti σ[x := w] ∗ π ∈ ⊥⊥. It remains to
show that ti σ[x := w] ∈ JBK⊥⊥
σ . To be able to conclude using the induction
hypothesis we need to show that σ[x := w] realizes Γ, x : Ai , Ci [x] ≡ v. This is
true since σ realizes Γ , w ∈ JAi Kσ and Ci [w] ≡ vσ by reflexivity.
(≡v,l ) We need to show t[x := w1 ]σ = tσ[x := w1 σ] ∈ JAKσ . By hypothesis we
know that w1 σ ≡ w2 σ from which we can deduce tσ[x := w1 σ] ≡ tσ[x := w2 σ]
by extensionality (theorem 2). Since JAKσ is closed under (≡) we can conclude
using the induction hypothesis.
(≡t,l ), (≡v,r ) and (≡t,r ) are similar to (≡v,l ), using extensionality (theorem 2
and theorem 3).
Remark 7. For the sake of simplicity we fixed a pole ⊥⊥ at the beginning of the
current section. However, many of the properties presented here (including the
adequacy lemma) remain valid with similar poles. We will make use of this fact
in the proof of the following theorem.
Theorem 7. (Safety.) Let Γ be a context, A be a formula such that F V (A) ⊆
dom(Γ ) and σ be a substitution realizing Γ . If t is a term such that Γ ⊢ t : A
and if A[σ] is pure (i.e. it does not contain any ⇒), then for every stack
π ∈ JAK⊥σ there is a value v ∈ JAKσ and α ∈ Vµ such that tσ ∗ π ։∗ v ∗ α.
Proof. We do a proof by realizability using the following pole.
⊥⊥A = {p ∈ Λ × Π | p ։∗ v ∗ α ∧ v ∈ JAKσ }
It is well-defined as A is pure and hence JAKσ does not depend on the pole.
Using the adequacy lemma (theorem 6) with ⊥⊥A we obtain tσ ∈ JAK⊥⊥σ . Hence
for every stack π ∈ JAK⊥σ we have tσ ∗ π ∈ ⊥⊥A . We can then conclude using the
definition of the pole ⊥⊥A .
Remark 8. It is easy to see that if A[σ] is closed and pure then v ∈ JAKσ implies
that • ⊢ v : A.
Theorem 8. (Consistency.) There is no t such that • ⊢ t : ⊥.
Proof. Let us suppose that • ⊢ t : ⊥. Using adequacy (theorem 6) we obtain
that t ∈ J⊥K⊥⊥σ . Since J⊥Kσ = ∅ we know that J⊥K⊥σ = Π by definition. Now
using theorem 5 we obtain J⊥K⊥⊥σ = ∅. This is a contradiction.
3
Deciding Program Equivalence
The type system given in figure 2 does not provide any way of discharging an
equivalence from the context. As a consequence the truth of an equivalence
cannot be used. Furthermore, an equational contradiction in the context cannot
be used to derive falsehood. To address these two problems, we will rely on a
partial decision procedure for the equivalence of terms. Such a procedure can be
easily implemented using an algorithm similar to Knuth-Bendix, provided that
we are able to extract a set of equational axioms from the definition of (≡). In
particular, we will use the following lemma to show that several reduction rules
are contained in (≡).
Lemma 9. Let t and u be terms. If for every stack π ∈ Π there is p ∈ Λ × Π
such that t ∗ π ≻∗ p and u ∗ π ≻∗ p then t ≡ u.
Proof. Since (≻) ⊆ (։i ) for every i ∈ N, we can deduce that t ∗ π ։∗i p
and u ∗ π ։∗i p for every i ∈ N. Using lemma 1 we can deduce that for every
substitution σ we have tσ ∗π ։∗i pσ and uσ ∗π ։∗i pσ for all i ∈ N. Consequently
we obtain t ≡ u.
The equivalence relation contains call-by-value β-reduction, projection on records
and case analysis on variants.
Theorem 9. For every x ∈ Vλ , t ∈ Λ and v ∈ Λv we have (λx t)v ≡ t[x := v].
Proof. Immediate using lemma 9.
Theorem 10. For all k such that 1 ≤ k ≤ n we have the following equivalences.
(λx t)v ≡ t[x := v]
caseCk [v] [Ci [xi ] → ti ]1≤i≤n ≡ tk [xk := v]
Proof. Immediate using lemma 9.
To observe contradictions, we also need to derive some inequivalences on
values. For instance, we would like to deduce a contradiction if two values with
a different head constructor are assumed to be equivalent.
Theorem 11. Let C, D ∈ C be constructors, and v, w ∈ Λv be values. If C 6= D
then C[v] 6≡ D[w].
Proof. We take π = [λx casex [C[y] → y | D[y] → Ω]]α where Ω is an arbitrary
diverging term. We then obtain C[v] ∗ π ⇓0 and D[w] ∗ π ⇑0 .
Theorem 12. Let {li = vi }i∈I and {lj = vj }j∈J be two records. If k is an index
such that k ∈ I and k 6∈ J then we have {li = vi }i∈I 6≡ {lj = vj }j∈J .
Proof. Immediate using the stack π = [λx x.lk ]α.
Theorem 13. For every x ∈ Vλ , v ∈ Λv , t ∈ Λ, C ∈ C and for every record
{li = vi }i∈I we have the following inequivalences.
λx t ≢ C[v]
λx t ≢ {li = vi}i∈I
C[v] ≢ {li = vi}i∈I
Proof. The proof is mostly similar to the proofs of the previous two theorems. However, there is a subtlety with the second inequivalence. If for every value v the term t[x := v] diverges, then we do not have λx t ≢ {}. Indeed, there is no evaluation context (or stack) that is able to distinguish the empty record {} and a diverging function. To solve this problem, we can extend the language with a new kind of term unit v and extend the relation (≻) with the following rule.

unit {} ∗ π   ≻   {} ∗ π

The process unit v ∗ π is stuck for every value v ≠ {}. The proof can then be completed using the stack π = [λx unit x]α.
The previous five theorems together with the extensionality of (≡) and its
properties as an equivalence relation can be used to implement a partial decision
procedure for equivalence. We will incorporate this procedure into the typing
rules by introducing a new form of judgment.
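To make the shape of such a procedure concrete, here is a minimal sketch in Python (the term representation and helper names are ours, not taken from any implementation mentioned in this paper): it normalises terms using the reductions of Theorems 9 and 10 and reports a contradiction when the normal forms clash as in Theorems 11 to 13. A real procedure would, as noted above, be closer to Knuth-Bendix completion and would also exploit the extensionality of (≡).

```python
# Toy sketch of a partial decision procedure for (in)equivalence.
# Terms are tuples: ('var', x), ('lam', x, t), ('app', t, v),
# ('cons', C, v), ('case', t, {C: (x, t)}), ('rec', {l: v}), ('proj', t, l).

def is_value(t):
    # Simplified value test (constructors/records are assumed applied to values).
    return t[0] in ('var', 'lam', 'cons', 'rec')

def subst(t, x, v):
    """Naive substitution t[x := v] (assumes no variable capture)."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    if tag == 'app':
        return ('app', subst(t[1], x, v), subst(t[2], x, v))
    if tag == 'cons':
        return ('cons', t[1], subst(t[2], x, v))
    if tag == 'rec':
        return ('rec', {l: subst(w, x, v) for l, w in t[1].items()})
    if tag == 'proj':
        return ('proj', subst(t[1], x, v), t[2])
    if tag == 'case':
        return ('case', subst(t[1], x, v),
                {c: (y, subst(u, x, v)) for c, (y, u) in t[2].items()})
    return t

def step(t):
    """One head reduction using the rules of Theorems 9 and 10, or None."""
    tag = t[0]
    if tag == 'app' and t[1][0] == 'lam' and is_value(t[2]):
        return subst(t[1][2], t[1][1], t[2])        # (lam x t) v  ->  t[x := v]
    if tag == 'proj' and t[1][0] == 'rec':
        return t[1][1][t[2]]                        # {l_i = v_i}.l_k  ->  v_k
    if tag == 'case' and t[1][0] == 'cons':
        y, body = t[2][t[1][1]]                     # case C_k[v] [...]  ->  t_k[x_k := v]
        return subst(body, y, t[1][2])
    return None

def normalise(t, fuel=100):
    while fuel > 0:
        s = step(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    return t

def contradictory(u, v):
    """True when u ≡ v is refutable by Theorems 11-13 (distinct head forms)."""
    u, v = normalise(u), normalise(v)
    heads = {u[0], v[0]}
    if heads == {'cons'} and u[1] != v[1]:
        return True                                 # C[v] versus D[w] with C != D
    if heads == {'rec'} and set(u[1]) != set(v[1]):
        return True                                 # records with different label sets
    if len(heads) == 2 and heads <= {'lam', 'cons', 'rec'}:
        return True                                 # lambda / constructor / record clash
    return False
```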
Definition 18. An equational context E is a list of hypothetical equivalences and
inequivalences. Equational contexts are built using the following grammar.
E := • | E, t ≡ u | E, t ≢ u
Given a context Γ , we denote EΓ its restriction to an equational context.
Definition 19. Let E be an equational context. The judgement E ⊢ ⊥ is valid if and only if the partial decision procedure is able to derive a contradiction in E. We will write E ⊢ t ≡ u for E, t ≢ u ⊢ ⊥ and E ⊢ t ≢ u for E, t ≡ u ⊢ ⊥.
To discharge equations from the context, the following two typing rules are
added to the system.
  Γ, u1 ≡ u2 ⊢ t : A        EΓ ⊢ u1 ≡ u2
  ─────────────────────────────────────── (≡)
                 Γ ⊢ t : A

  Γ, u1 ≢ u2 ⊢ t : A        EΓ ⊢ u1 ≢ u2
  ─────────────────────────────────────── (≢)
                 Γ ⊢ t : A
The soundness of these new rules follows easily since the decision procedure
agrees with the semantical notion of equivalence. The axioms that were given
at the beginning of this section are only used to partially reflect the semantical
equivalence relation in the syntax. This is required if we are to implement the
decision procedure.
Another way to use an equational context is to derive a contradiction directly.
For instance, if we have a context Γ such that EΓ yields a contradiction, one
should be able to finish the corresponding proof. This is particularly useful when
working with variants and case analysis, since some branches of the case analysis might not be reachable due to constraints on the matched term. For example, we know that in the term
case C[v] [C[x] → x | D[x] → t]
the branch corresponding to the D constructor will never be reached. Consequently, we can replace t by any term and the computation will still behave
correctly. For this purpose we introduce a special value 8< on which the abstract
machine fails. It can be introduced with the following typing rule.
        EΓ ⊢ ⊥
  ────────────────── (8<)
   Γ ⊢val 8< : ⊥
The soundness of this rule is again immediate.
4 Further Work
The model presented in the previous sections is intended to be used as the basis
for the design of a proof assistant based on a call-by-value ML language with
control operators. A first prototype (with a different theoretical foundation) was
implemented by Christophe Raffalli [27]. Based on this experience, the design of a
new version of the language with a clean theoretical basis can now be undertaken.
The core of the system will consist of three independent components: a typechecker, a termination checker and a decision procedure for equivalence.
Working with a Curry style language has the disadvantage of making typechecking undecidable. While most proof systems avoid this problem by switching
to Church style, it is possible to use heuristics making most Curry style programs
that arise in practice directly typable. Christophe Raffalli implemented such a
system [26] and from his experience it would seem that very little help from the
user is required in general. In particular, if a term is typable then it is possible
for the user to provide hints (e.g. the type of a variable) so that type-checking
may succeed. This can be seen as a kind of completeness.
Proof assistants like Coq [18] or Agda [22] both have decidable type-checking
algorithms. However, these systems provide mechanisms for handling implicit
arguments or meta-variables which introduce some incompleteness. This does
not make these systems any less usable in practice. We conjecture that going
even further (i.e. full Curry style) provides a similar user experience.
To obtain a practical programming language we will need support for recursive programs. For this purpose we plan on adapting Pierre Hyvernat’s termination checker [9]. It is based on size change termination and has already been
used in the first prototype implementation. We will also need to extend our type
system with inductive (and coinductive) types [19, 25]. They can be introduced
in the system using fixpoints µX A (and νX A).
Acknowledgments
I would like to particularly thank my research advisor, Christophe Raffalli, for his
guidance and input. I would also like to thank Alexandre Miquel for suggesting
the encoding of dependent products. Thank you also to Pierre Hyvernat, Tom
Hirschowitz, Robert Harper and the anonymous reviewers for their very helpful
comments.
References
1. Casinghino, C., Sjöberg, V., Weirich, S.: Combining proofs and programs in a
dependently typed language. In: Jagannathan, S., Sewell, P. (eds.) The 41st Annual
ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,
POPL ’14, San Diego, CA, USA. pp. 33–46. ACM (2014)
2. Constable, R.L., Allen, S.F., Bromley, M., Cleaveland, R., Cremer, J.F., Harper, R.W., Howe, D.J., Knoblock, T.B., Mendler, N.P., Panangaden, P., Sasaki, J.T., Smith, S.F.: Implementing mathematics with the Nuprl proof development system. Prentice Hall (1986)
3. Coquand, T., Huet, G.: The calculus of constructions. Inf. Comput. 76(2-3), 95–120 (Feb 1988)
4. Damas, L., Milner, R.: Principal type-schemes for functional programs. In: Proceedings of the 9th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. pp. 207–212. POPL '82, ACM, New York, NY, USA (1982)
5. Garrigue, J.: Relaxing the value restriction. In: Kameyama, Y., Stuckey, P. (eds.) Functional and Logic Programming, Lecture Notes in Computer Science, vol. 2998, pp. 196–213. Springer Berlin Heidelberg (2004)
6. Griffin, T.G.: A formulæ-as-types notion of control. In: Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages. pp. 47–58. ACM Press (1990)
7. Harper, R., Lillibridge, M.: ML with callcc is unsound (Jul 1991), http://www.seas.upenn.edu/~sweirich/types/archive/1991/msg00034.html
8. Howe, D.J.: Equality in lazy computation systems. In: Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS '89), Pacific Grove, California, USA, June 5-8, 1989. pp. 198–203 (1989)
9. Hyvernat, P.: The size-change termination principle for constructor based languages. Logical Methods in Computer Science 10(1) (2014)
10. Jia, L., Vaughan, J.A., Mazurak, K., Zhao, J., Zarko, L., Schorr, J., Zdancewic, S.: AURA: a programming language for authorization and audit. In: Hook, J., Thiemann, P. (eds.) Proceedings of the 13th ACM SIGPLAN international conference on Functional programming, ICFP 2008, Victoria, BC, Canada, September 20-28, 2008. pp. 27–38. ACM (2008)
11. Krivine, J.: A call-by-name lambda-calculus machine. Higher-Order and Symbolic Computation 20(3), 199–207 (2007)
12. Krivine, J.: Realizability in classical logic. In: Interactive models of computation and program behaviour, Panoramas et synthèses, vol. 27, pp. 197–229. Société Mathématique de France (2009)
13. Lepigre, R.: A realizability model for a semantical value restriction (2015), https://lama.univ-savoie.fr/~lepigre/files/docs/semvalrest2015.pdf, long version
14. Leroy, X.: Polymorphism by name for references and continuations. In: 20th Symposium on Principles of Programming Languages. pp. 220–231. ACM Press (1993)
15. Leroy, X., Weis, P.: Polymorphic type inference and assignment. In: Proceedings of the 18th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. pp. 291–302. POPL '91, ACM, New York, NY, USA (1991)
16. Licata, D.R., Harper, R.: Positively dependent types. In: Altenkirch, T., Millstein, T.D. (eds.) Proceedings of the 3rd ACM Workshop Programming Languages meets Program Verification, PLPV 2009, Savannah, GA, USA, January 20, 2009. pp. 3–14. ACM (2009)
17. Martin-Löf, P.: Constructive mathematics and computer programming. In: Cohen, L., Loś, J., Pfeiffer, H., Podewski, K.P. (eds.) Logic, Methodology and Philosophy of Science VI, Studies in Logic and the Foundations of Mathematics, vol. 104, pp. 153–175. North-Holland (1982)
18. The Coq development team: The Coq proof assistant reference manual. LogiCal Project (2004), http://coq.inria.fr, version 8.0
19. Mendler, N.P.: Recursive types and type constraints in second-order lambda calculus. In: Proceedings of the Symposium on Logic in Computer Science (LICS) 1987. pp. 30–36 (1987)
20. Miquel, A.: Le Calcul des Constructions Implicites : Syntaxe et Sémantique. Ph.D.
thesis, Université Paris VII (2001)
21. Munch-Maccagnoni, G.: Focalisation and classical realisability. In: Computer Science Logic, 23rd international Workshop, CSL 2009, 18th Annual Conference of
the EACSL. pp. 409–423 (2009)
22. Norell, U.: Dependently Typed Programming in Agda. In: Lecture Notes from the
Summer School in Advanced Functional Programming (2008)
23. Owre, S., Rajan, S., Rushby, J., Shankar, N., Srivas, M.: PVS: combining specification, proof checking, and model checking. In: Alur, R., Henzinger, T.A. (eds.)
Computer-Aided Verification, CAV ’96. pp. 411–414. No. 1102 in Lecture Notes in
Computer Science (1996)
24. Parigot, M.: λµ-calculus: An algorithmic interpretation of classical natural deduction. In: Lecture Notes in Computer Science, vol. 624, pp. 190–201. Springer (1992)
25. Raffalli, C.: L’Arithmétiques Fonctionnelle du Second Ordre avec Points Fixes.
Ph.D. thesis, Université Paris VII (1994)
26. Raffalli, C.: A normaliser for pure and typed λ-calculus (1996),
http://lama.univ-savoie.fr/~raffalli/normaliser.html
27. Raffalli, C.: The PML programming language. LAMA - Université Savoie MontBlanc (2012), http://lama.univ-savoie.fr/tracpml/
28. Swamy, N., Chen, J., Fournet, C., Strub, P., Bhargavan, K., Yang, J.: Secure
distributed programming with value-dependent types. In: Chakravarty, M.M.T.,
Hu, Z., Danvy, O. (eds.) Proceeding of the 16th ACM SIGPLAN international
conference on Functional Programming, ICFP 2011, Tokyo, Japan, September 19-21, 2011. pp. 266–278. ACM (2011)
29. Tofte, M.: Type inference for polymorphic references. Inf. Comput. 89(1), 1–34
(Sep 1990)
30. Wright, A.K.: Simple imperative polymorphism. In: LISP and Symbolic Computation. pp. 343–356 (1995)
31. Wright, A.K., Felleisen, M.: A syntactic approach to type soundness. Inf. Comput.
115(1), 38–94 (1994)
32. Xi, H.: Applied Type System (extended abstract). In: post-workshop Proceedings
of TYPES 2003. pp. 394–408. Springer-Verlag LNCS 3085 (2004)
33. Xi, H., Pfenning, F.: Dependent types in practical programming. In: Proceedings
of the 26th ACM SIGPLAN Symposium on Principles of Programming Languages.
pp. 214–227. San Antonio (January 1999)
Accelerating Deep Learning with Memcomputing
Haik Manukian1, Fabio L. Traversa2, Massimiliano Di Ventra1
1 Department of Physics, University of California, San Diego, La Jolla, CA 92093
2 MemComputing, Inc., San Diego, CA 92130
arXiv:1801.00512v2 [cs.LG] 24 Jan 2018
Abstract
Restricted Boltzmann machines (RBMs) and their extensions, called “deep-belief networks”, are powerful neural networks
that have found applications in the fields of machine learning and big data. The standard way to train these models
resorts to an iterative unsupervised procedure based on Gibbs sampling, called “contrastive divergence” (CD), and
additional supervised tuning via back-propagation. However, this procedure has been shown not to follow any gradient
and can lead to suboptimal solutions. In this paper, we show an efficient alternative to CD by means of simulations
of digital memcomputing machines (DMMs). We test our approach on pattern recognition using a modified version
of the MNIST data set. DMMs sample effectively the vast phase space given by the model distribution of the RBM,
and provide a very good approximation close to the optimum. This efficient search significantly reduces the number
of pretraining iterations necessary to achieve a given level of accuracy, as well as a total performance gain over CD.
In fact, the acceleration of pretraining achieved by simulating DMMs is comparable to, in number of iterations, the
recently reported hardware application of the quantum annealing method on the same network and data set. Notably,
however, DMMs perform far better than the reported quantum annealing results in terms of quality of the training. We
also compare our method to advances in supervised training, like batch-normalization and rectifiers, that work to reduce
the advantage of pretraining. We find that the memcomputing method still maintains a quality advantage (> 1% in
accuracy, and a 20% reduction in error rate) over these approaches. Furthermore, our method is agnostic about the
connectivity of the network. Therefore, it can be extended to train full Boltzmann machines, and even deep networks
at once.
Keywords: Deep Learning, Restricted Boltzmann Machines, Memcomputing
1. Introduction
The progress in machine learning and big data driven
by successes in deep learning is difficult to overstate. Deep
learning models (a subset of which are called “deep-belief
networks”) are artificial neural networks with a certain
number of layers, n, with n > 2 [1]. They have proven
themselves to be very useful in a variety of applications,
from computer vision [2] and speech recognition [3] to
super-human performance in complex games[4], to name
just a few. While some of these models have existed for
some time [5], the dramatic increases in computational
power combined with advances in effective training methods have pushed forward these fields considerably [6].
Successful training of deep-belief models relies heavily
on some variant of an iterative gradient-descent procedure, called back-propagation, through the layers of the
network [7]. Since this optimization method uses only gradient information, and the error landscapes of deep networks are highly non-convex [8], one would at best hope
to find an appropriate local minimum.
However, there is evidence that in these high-dimensional non-convex settings, the issue is not getting
stuck in some local minima but rather at saddle points,
where the gradient also vanishes [9], hence making the
gradient-descent procedure of limited use. A takeaway
from this is that a “good” initialization procedure for assigning the weights of the network, known as pretraining,
can then be highly advantageous.
One such deep-learning framework that can utilize this
pretraining procedure is the Restricted Boltzmann Machine (RBM) [5], and its extension, the Deep Belief Network (DBN) [10]. These machines are a class of neural network models capable of unsupervised learning of a
parametrized probability distribution over inputs. They
can also be easily extended to the supervised learning
case by training an output layer using back-propagation
or other standard methods [1].
Training RBMs usually distinguishes between an unsupervised pretraining, whose purpose is to initialize a good
set of weights, and the supervised procedure. The current most effective technique for pretraining RBMs utilizes an iterative sampling technique called contrastive divergence (CD) [11]. Computing the exact gradient of the
log-likelihood is exponentially hard in the size of the RBM,
and so CD approximates it with a computationally friendly
sampling procedure. While this procedure has brought
RBMs most of their success, CD suffers from the slow
mixing of Gibbs sampling, and is known not to follow the
gradient of any function [12].
Partly due to these shortcomings of pretraining with
CD, much research has gone into making the backpropagation procedure more robust and less sensitive to
the initialization of weights and biases in the network.
This includes research into different non-linear activation
functions (e.g., “rectifiers”) [13] to combat the vanishing
gradient problem and normalization techniques (such as
“batch-normalization”) [14] that make back-propagation
in deep networks more stable and less dependent on initial
conditions. In sum, these techniques make training deep
networks an easier (e.g., more convex) optimization problem for a gradient-based approach like back-propagation.
This, in turn, relegates the standard CD pretraining procedure’s usefulness to cases where the training set is sparse
[1], which is becoming an increasingly rare occurrence.
In parallel with this research into back-propagation, sizable effort has been expended toward improving the power
of the pretraining procedure, including extensions of CD
[15, 16], CD done on memristive hardware [17], and more
recently, approaches based on quantum annealing that try
to recover the exact gradient [18] involved in pretraining.
Some of these methods are classical algorithms simulating
quantum sampling [19], and still others attempt to use a
hardware quantum device in contact with an environment
to take independent samples from its Boltzmann distribution for a more accurate gradient computation. For instance, in a recent work, the state of the RBM has been
mapped onto a commercial quantum annealing processor
(a D-Wave machine), the latter used as a sampler of the
model distribution [20]. The results reported on a reduced
version of the well-known MNIST data set look promising
as compared to CD [20]. However, these approaches require expensive hardware, and cannot be scaled to larger
problems as of yet.
In the present paper, inspired by the theoretical underpinnings [21, 22] and recent empirical demonstrations [23, 24] of the advantages of a new computing
paradigm –memcomputing [25]– on a variety of combinatorial/optimization problems, we seek to test its power
toward the computationally demanding problems in deep
learning.
Memcomputing [25] is a novel computing paradigm that
solves complex computational problems using processing
embedded in memory. It has been formalized by two of
us (FLT and MD) by introducing the concept of universal memcomputing machines[21]. In short, to perform a
computation, the task at hand is mapped to a continuous
dynamical system that employs highly-correlated states
[26] (in both space and time) of the machine to navigate
the phase space efficiently and find the solution of a given
problem as mapped into the equilibrium states of the dynamical system.
In this paper, we employ a subset of these machines
called digital memcomputing machines (DMMs) and, more
specifically, their self-organizing circuit realizations [22].
The distinctive feature of DMMs is their ability to read and
write the initial and final states of the machine digitally,
namely requiring only finite precision. This feature makes
them easily scalable, just as our modern computers are.
From a practical point of view DMMs can be built with
standard circuit elements with and without memory [22].
These elements, however, are non-quantum. Therefore,
the ordinary differential equations of the corresponding circuits can be efficiently simulated on our present computers.
Here, we will indeed employ only simulations of DMMs on
a single Xeon processor to train RBMs. These simulations
already show substantial advantages with respect to CD and even quantum annealing, even though the latter is executed on hardware. Of course, the hardware implementation of
DMMs applied to these problems would offer even more
advantages since the simulation times will be replaced by
the actual physical time of the circuits to reach equilibrium. This would then offer a realistic path to real-time
pretraining of deep-belief networks.
In order to compare directly with quantum annealing
results recently reported[20], we demonstrate the advantage of our memcomputing approach by first training on
a reduced MNIST data set as that used in Ref. [20]. We show that our method requires far fewer pretraining iterations to achieve the same accuracy as CD, as well as an
overall accuracy gain over both CD and quantum annealing. We also train the RBMs on the reduced MNIST data
set without mini-batching, where the quantum annealing
results are not available. Also in this case, we find both
a substantial reduction in pretraining iterations needed as
well as a higher level of accuracy of the memcomputing approach over the traditional CD.
Our approach then seems to offer many of the advantages of quantum approaches. However, since it is based
on a completely classical system, it can be efficiently deployed in software (as we demonstrate in this paper) as
well as easily implemented in hardware, and can be scaled
to full-size problems.
Finally, we investigate the role of recent advances in supervised training by comparing accuracy obtained using
only back-propagation with batch-normalization and rectifiers starting from a random initial condition versus the
back-propagation procedure initiated from a network pretrained with memcomputing, with sigmoidal activations
and without batch-normalization. Even without these advantages, namely operating on a more non-convex landscape, we find the network pretrained with memcomputing maintains an accuracy gain over state-of-the-art backpropagation by more than 1% and a 20% reduction in
error rate. This gives further evidence to the fact that
memcomputing pretraining navigates to an advantageous
initial point in the non-convex loss surface of the deep network.
2. RBMs and Contrastive Divergence
An RBM consists of m visible units, vj , j = 1 . . . m, each
fully connected to a layer of n hidden units, hi , i = 1 . . . n,
both usually taken to be binary variables. In the restricted model, no intra-layer connections are allowed, see Fig. 1.

Figure 1: A sketch of an RBM with four visible nodes, three hidden nodes, and an output layer with three nodes. The value of each stochastic binary node is represented by vi, hi ∈ {0, 1}, which are sampled from the probabilities in Eqs. (7), (8). The connections between the layers represent the weights, wij ∈ R (biases not shown). Note the lack of connections between nodes in the same layer, which distinguishes the RBM from a Boltzmann machine. The RBM weights are trained separately from the output layer with generative pretraining, then tuned together via back-propagation (just as in a feed-forward neural network).

The connectivity structure of the RBM implies that, given the hidden variables, each input node is conditionally independent of all the others:

p(vi, vj |h) = p(vi |h) p(vj |h).                                   (1)

The joint probability is given by the Gibbs distribution,

p(v, h) = (1/Z) e^{−E(v,h)},                                        (2)

with an energy function

E(v, h) = − Σ_{i,j} wij hi vj − Σ_j bj vj − Σ_i ci hi,              (3)

where wij is the weight between the i-th hidden neuron and the j-th visible neuron, and bj, ci are real numbers indicating the "biases" of the neurons. The value, Z, is a normalization constant, and is known in statistical mechanics as the partition function. Training an RBM then amounts to finding a set of weights and biases that maximizes the likelihood (or equivalently minimizes the energy) of the observed data.

A common approach to training RBMs for a supervised task is to first perform generative unsupervised learning (pretraining) to initialize the weights and biases, then run back-propagation over input-label pairs to fine tune the parameters of the network. The pretraining is framed as a gradient ascent over the log-likelihood of the observed data, which gives a particularly tidy form for the weight updates from the n-th to the (n + 1)-th iteration:

wij^{n+1} = α wij^n + ε [⟨vi hj⟩_DATA − ⟨vi hj⟩_MODEL],             (4)

where α is called the "momentum" and ε is the "learning rate". A similar update procedure is applied to the biases:

bi^{n+1} = α bi^n + ε [⟨vi⟩_DATA − ⟨vi⟩_MODEL],                     (5)

cj^{n+1} = α cj^n + ε [⟨hj⟩_DATA − ⟨hj⟩_MODEL].                     (6)

This form of the weight updates is referred to as "stochastic gradient optimization with momentum". Here α is the momentum parameter and ε the learning rate. The first expectation value on the rhs of Eqs. (4), (5), and (6) is taken with respect to the conditional probability distribution with the data fixed at the visible layer. This is relatively easy to compute. Evaluation of the second expectation on the rhs of Eqs. (4), (5), and (6) is exponentially hard in the size of the network, since obtaining independent samples from a high-dimensional model distribution easily becomes prohibitive with increasing size [11]. This is the term that CD attempts to approximate.

The CD approach attempts to reconstruct the difficult expectation term with iterative Gibbs sampling. This works by sequentially sampling each layer given the sigmoidal conditional probabilities, namely

p(hi = 1|v) = σ( Σ_j wij vj + ci ),                                 (7)

for the hidden layer, and similarly for the visible layer

p(vj = 1|h) = σ( Σ_i wij hi + bj ),                                 (8)

with σ(x) = (1 + e^{−x})^{−1}. The required expectation values are calculated with the resulting samples. In the limit of infinite sampling iterations, the expectation value is recovered. However, this convergence is slow and in practice usually only one iteration, referred to as CD-1, is used [27].
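As a point of reference, the sketch below spells out one CD-1 update of Eqs. (4)-(6) for a binary RBM. The NumPy phrasing and variable names are ours, the default eps and alpha values are only illustrative, and the momentum is applied, as is standard, to the previous update (the velocity) rather than to the raw weight value.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v_data, dW, db, dc, eps=0.1, alpha=0.5, rng=None):
    """One CD-1 step approximating Eqs. (4)-(6).  W has shape (n_hidden, n_visible);
    b, c are the visible and hidden biases; v_data is a (batch, n_visible) 0/1 array;
    dW, db, dc hold the previous updates (the momentum terms)."""
    rng = np.random.default_rng() if rng is None else rng
    # Positive ("data") phase: p(h = 1 | v) as in Eq. (7).
    ph_data = sigmoid(v_data @ W.T + c)
    # One Gibbs step: sample h, reconstruct v with Eq. (8), recompute p(h | v).
    h_sample = (rng.random(ph_data.shape) < ph_data).astype(float)
    pv_model = sigmoid(h_sample @ W + b)
    ph_model = sigmoid(pv_model @ W.T + c)
    # <v h>_DATA - <v h>_MODEL and the analogous bias terms, batch-averaged.
    n = v_data.shape[0]
    grad_W = (ph_data.T @ v_data - ph_model.T @ pv_model) / n
    grad_b = (v_data - pv_model).mean(axis=0)
    grad_c = (ph_data - ph_model).mean(axis=0)
    # Momentum form of Eqs. (4)-(6).
    dW = alpha * dW + eps * grad_W
    db = alpha * db + eps * grad_b
    dc = alpha * dc + eps * grad_c
    return W + dW, b + db, c + dc, dW, db, dc
```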
3. Efficient Sampling with Memcomputing
We replace the CD reconstruction with our approach
that utilizes memcomputing to compute a much better approximation to the model expectation than CD.
More specifically, we map the RBM pretraining problem onto DMMs realized by self-organizing logic circuits
(SOLCs) [22]. These electrical circuits define a set of coupled differential equations which are set to random initial conditions and integrated forward toward the global
solution of the given problem[22, 23, 24]. The ordinary
differential equations we solve can be found in Ref. [22]
appropriately adapted to deal with the particular problem
discussed in this paper.
Figure 2: Plot of the total weight of a MAX-SAT clause as a function of simulation time (in arbitrary units) of a DMM. A lower weight variable assignment corresponds directly to a higher probability assignment of the nodes of an RBM. If the simulation has not changed assignments in some time, we restart with another random (independent) initial condition. The inset shows the full simulation, with all restarts. The main figure focuses on the last three restarts, signified by the black box in the inset.
Within this memcomputing context, we construct a
reinterpretation of the RBM pretraining that explicitly
shows how it corresponds to an NP-hard optimization
problem, which we then tackle using DMMs. We first observe that to obtain a sample near most of the probability
mass of the joint distribution, p(v, h) ∝ e−E(v,h) , one must
find the minimum of the energy of the form Eq. 3, which
constitutes a quadratic unconstrained binary optimization
(QUBO) problem [28].
We can see this directly by considering the visible and
hidden nodes as one vector x = (v, h) and re-writing the
energy of an RBM configuration as
E = −x^T Q x,                                      (9)

where Q is the matrix

Q = ( B   W
      0   C ),                                     (10)
with B and C being the diagonal matrices representing the
biases bj and ci , respectively, while the matrix W contains
the weights.
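A minimal sketch of this construction (our own NumPy phrasing, with the W block stored as a visible-by-hidden array so that x = (v, h) matches Eq. (10)) is given below; the final assertion checks that Eq. (9) reproduces the RBM energy of Eq. (3) on binary configurations.

```python
import numpy as np

def build_qubo(W_vh, b, c):
    """Assemble the matrix Q of Eq. (10) for x = (v, h).
    W_vh[j, i] is the weight between visible unit j and hidden unit i;
    b, c are the visible and hidden biases (diagonal blocks B and C)."""
    m, n = W_vh.shape
    Q = np.zeros((m + n, m + n))
    Q[:m, :m] = np.diag(b)      # block B
    Q[:m, m:] = W_vh            # block W
    Q[m:, m:] = np.diag(c)      # block C (the lower-left block stays 0)
    return Q

def rbm_energy(v, h, W_vh, b, c):
    """E(v, h) of Eq. (3), with W_vh indexed (visible, hidden)."""
    return -(v @ W_vh @ h) - b @ v - c @ h

# Sanity check: for binary v, h, Eq. (9) reproduces Eq. (3).
rng = np.random.default_rng(0)
W_vh, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
v, h = rng.integers(0, 2, 4), rng.integers(0, 2, 3)
x = np.concatenate([v, h]).astype(float)
assert np.isclose(-x @ build_qubo(W_vh, b, c) @ x, rbm_energy(v, h, W_vh, b, c))
```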
We then employ a mapping from a general QUBO problem to a weighted maximum satisfiability (weighted MAXSAT) problem, similar to [29], which is directly solved by
the DMM. The weighted MAX-SAT problem is to find an
assignment of boolean variables that minimizes the total
weight of a given boolean expression written in conjunctive
normal form [28].
This problem is a well-known problem in the NP-hard
complexity class [28]. However, it was recently shown in
Ref. [23], that simulations of DMMs show dramatic (exponential) speed-up over the state-of-the-art solvers [23],
when attempting to find better approximations to hard
MAX-SAT instances beyond the inapproximability gap
[30].
Figure 3: Memcomputing (Mem-QUBO) accuracy on the test set of the reduced MNIST problem versus contrastive divergence for n = 100 (a), 200 (b), 400 (c) iterations of back-propagation with mini-batches of 100. The plots show average accuracy with ±σ/√N error bars calculated across 10 DBNs trained on N = 10 different partitions of the training set. One can see a dramatic acceleration, with the memcomputing approach needing far fewer iterations to achieve the same accuracy, as well as an overall performance gap (indicated by a black arrow) that back-propagation cannot seem to overcome. Note that some of the error bars for both Mem-QUBO and CD-1 are very small on the reported scale for a number of pretraining iterations larger than about 20.
The approximation to the global optimum of the
weighted MAX-SAT problem given by memcomputing is
then mapped back to the original variables that represent
the states of the RBM nodes. Finally, we obtain an approximation to the “ground state” (lowest energy state) of
the RBM as a variable assignment, x∗ , close to the peak
of the probability distribution, where ∇P (x∗ ) = 0. This
assignment is obtained by integration of the ordinary differential equations that define the SOLCs dynamics. In
doing so we collect an entire trajectory, x(t), that begins
at a random initial condition in the phase space of the
problem, and ends at the lowest energy configuration of
the variables (see Fig. 2).
Since the problem we are tackling here is an optimization one, we do not have any guarantee of finding the
global optimum. (This is in contrast to a SAT problem
where we can guarantee DMMs do find the solutions of
the problem corresponding to equilibrium points, if these
exist [22, 31, 32].) Therefore, there is an ambiguity about
what exactly constitutes the stopping time of the simulation, since a priori, one cannot know that the simulation
has reached the global minimum.
We then perform a few “restarts” of the simulation (that
effectively correspond to a change of the initial conditions)
and stop the simulation when the machine has not found
any better configuration within that number of restarts.
The restarts are clearly seen in Fig. 2 as spikes in the
total weight of the boolean expression. In this work we
have employed 28 restarts, which is an over-kill since a
much smaller number would have given similar results.
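Schematically, this stopping rule reads as follows; `run_solver` is a stand-in for one integration of the SOLC equations of Ref. [22], which is not reproduced here.

```python
import itertools

def solve_with_restarts(run_solver, max_stale_restarts=28, seeds=None):
    """Schematic restart loop: keep restarting from fresh random initial
    conditions and stop once `max_stale_restarts` consecutive restarts fail
    to improve the best total weight found.  `run_solver(seed)` is a
    placeholder and must return (assignment, total_weight)."""
    seeds = itertools.count() if seeds is None else seeds
    best_x, best_w, stale = None, float('inf'), 0
    for seed in seeds:
        x, w = run_solver(seed)
        if w < best_w:
            best_x, best_w, stale = x, w, 0
        else:
            stale += 1
        if stale >= max_stale_restarts:
            break
    return best_x, best_w
```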
The full trajectory, x(t), together with the above
“restarts” is plotted in Fig. 2. It is seen that this trajectory, in between restarts, spends most of its time in
“low-energy regions,” or equivalently areas of high probability. A time average, hx(t)i, gives a good approximation
to the required expectations in the gradient calculation in
Eqs. (4), (5), and (6). In practice, even using the best assignment found, x∗ , shows a great improvement over CD
in our experience. This is what we report in this paper.
Note also that a full trajectory, as the one shown in Fig. 2,
takes about 0.5 seconds on a single Xeon processor.
As a testbed for the memcomputing advantage in deep
learning, and as a direct comparison to the quantum annealing hardware approaches, we first looked to the reduced MNIST data set as reported in [20] for quantum
annealing using a D-wave machine. Therefore, we have
first applied the same reduction to the full MNIST problem as given in that work, which consists of removing two
pixels around all 28 × 28 grayscale values in both the test
and training sets. Then each 4 × 4 block of pixels is replaced by their average values to give a 6×6 reduced image.
Finally, the four corner pixels are discarded resulting in a
total of 32 pixels representing each image.
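This reduction can be reproduced with a few lines of array manipulation; the code below is our reading of the description above (in particular, which four corner pixels of the 6 × 6 image are dropped is an assumption), not the exact preprocessing script of Ref. [20].

```python
import numpy as np

def reduce_mnist_image(img28):
    """Reduce a 28x28 grayscale digit to 32 pixels: crop a 2-pixel border,
    average 4x4 blocks to get a 6x6 image, then drop the four corner pixels."""
    img24 = img28[2:26, 2:26]                              # remove 2 pixels all around
    blocks = img24.reshape(6, 4, 6, 4).mean(axis=(1, 3))   # 6x6 block averages
    mask = np.ones((6, 6), dtype=bool)
    for r, s in [(0, 0), (0, 5), (5, 0), (5, 5)]:          # discard the four corners
        mask[r, s] = False
    return blocks[mask]                                    # 36 - 4 = 32 pixels

# Example with a random array, just to check shapes.
vec32 = reduce_mnist_image(np.random.default_rng(1).random((28, 28)))
assert vec32.shape == (32,)
```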
We also trained the same-size DBN consisting of two
stacked RBMs each with 32 visible and hidden nodes,
training each RBM one at a time. We put both the CD-1 and our Memcomputing-QUBO (Mem-QUBO) approach
through N = 1, · · · , 50 generative pretraining iterations
using no mini-batching.
For the memcomputing approach, we solve one QUBO
problem per pretraining iteration to compute the model expectation value in Eqs. (4), (5), and (6). We pick out the best variable assignment, x∗, which gives the ground state of Eq. (3) as an effective approximation of the required expectation. After generative training, an output classification layer with 10 nodes was added to the network (see Fig. 1) and 1000 back-propagation iterations were applied in both approaches using mini-batches of 100 samples to generate Fig. 3. For both pretraining and back-propagation, our learning rate was set to ε = 0.1 and the momentum parameters were α = 0.1 for the first 5 iterations, and α = 0.5 for the rest, the same as in [20].

Figure 4: Mem-QUBO accuracy on the reduced MNIST test set vs. CD-1 after n = 100 (a), 500 (b), 800 (c) iterations of back-propagation with no mini-batching. The resulting pretraining acceleration shown by the memcomputing approach is denoted by the horizontal arrow. A performance gap also appears, emphasized by the vertical arrow, with Mem-QUBO obtaining a higher level of accuracy than CD-1, even for the highest number of back-propagation iterations. No error bars appear here since we have trained the full test set.
Accuracy on the test set versus CD-1 as a function of
the number of pretraining iterations is seen in Fig. 3.
The memcomputing method reaches a far better solution
faster, and maintains an advantage over CD even after
hundreds of back-propagation iterations. Interestingly, our
software approach is even competitive with the quantum
annealing method done in hardware [20] (cf. Fig. 3 with
Figs. 7, 8, and 9 in Ref. [20]). This is quite a remarkable
result, since we integrate a set of differential equations
of a classical system, in a scalable way, with comparable
sampling power to a physically-realized system that takes
advantage of quantum effects to improve on CD.
Finally, we also trained the RBM on the reduced MNIST
data set without mini-batches. We are not aware of
quantum-annealing results for the full data set, but we
can still compare with the CD approach. We follow a similar procedure as discussed above. In this case, however,
no mini-batching was used for a more direct comparison
between the Gibbs sampling of CD and our memcomputing approach. The results are shown in Fig. 4 for different numbers of back-propagation iterations. Even on the
full modified MNIST set, our memcomputing approach requires a substantially lower number of pretraining iterations to achieve a high accuracy and, additionally, shows
a higher level of accuracy over the traditional CD, even
after 800 back-propagation iterations.
Figure 5: Accuracy on the reduced MNIST test set obtained on a
network pretrained with (blue curve) Mem-QUBO and sigmoidal activation functions (Sig) versus the same size network with (red curve)
no pretraining but with batch normalization (BN) and rectified linear
units (ReLU). Both networks were trained with stochastic gradient
descent with momentum and mini-batches of 100. The inset clearly
shows an accuracy advantage of Mem-QUBO greater than 1% and
an error rate reduction of 20% throughout the training.
4. The Role of Supervised Training

The computational difficulty of computing the exact gradient update in pretraining, combined with the inaccuracies of CD, has inspired research into methods which reduce (or outright eliminate) the role of pretraining deep models. These techniques include changes to the numerical gradient procedure itself, like adaptive gradient estimation [33], changes to the activation functions (e.g., the introduction of rectifiers) to reduce gradient decay and enforce sparsity [13], and techniques like batch normalization to make back-propagation less sensitive to initial conditions [14]. With these new updates, in many contexts, deep networks initialized from a random initial condition are found to compete with networks pretrained with CD [13].

To complete our analysis we have then compared a network pretrained with our memcomputing approach to these back-propagation methods with no pretraining. Both networks were trained with stochastic gradient descent with momentum and the same learning rates and momentum parameter we used in Section 3.

In Fig. 5, we see how these techniques fare against a network pretrained with the memcomputing approach on the reduced MNIST set. In the randomly initialized network, we employ the batch-normalization procedure [14] coupled with rectified linear units (ReLUs) [13]. As anticipated, batch-normalization smooths out the role of initial conditions, while rectifiers should render the energy landscape defined by Eq. (3) more convex. Therefore, combined they indeed seem to provide an advantage compared to the network trained with CD using sigmoidal functions [13].

However, they are not enough to overcome the advantages of our memcomputing approach. In fact, it is obvious from Fig. 5 that the network pretrained with memcomputing maintains an accuracy advantage (of more than 1% and a 20% reduction in error rate) on the test set out to more than a thousand back-propagation iterations. It is key to note that the network pretrained with memcomputing contains sigmoidal activations compared to the rectifiers in the network with no pretraining. Also, the pretrained network was trained without any batch normalization procedure.

Therefore, considering all this, the pretrained network should pose a more difficult optimization problem for stochastic gradient descent. Instead, we found an accuracy advantage of memcomputing throughout the course of training. This points to the fact that with memcomputing, the pretraining procedure is able to operate close to the "true gradient" (Eqs. (4), (5), and (6)) during training, and in doing so, initializes the weights and biases of the network in an advantageous way.
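For concreteness, the two architectures being compared can be summarised as in the sketch below. The 32-32-32-10 layer sizes follow the DBN described in Section 3, while the PyTorch phrasing is ours; loading the memcomputing-pretrained RBM weights into the first two linear layers is not shown.

```python
import torch
import torch.nn as nn

# Randomly initialized network with batch normalization and ReLUs (no pretraining).
baseline = nn.Sequential(
    nn.Linear(32, 32), nn.BatchNorm1d(32), nn.ReLU(),
    nn.Linear(32, 32), nn.BatchNorm1d(32), nn.ReLU(),
    nn.Linear(32, 10),
)

# Same-size network with sigmoidal activations; its first two linear layers
# are meant to be initialised from the memcomputing-pretrained RBM weights.
pretrained = nn.Sequential(
    nn.Linear(32, 32), nn.Sigmoid(),
    nn.Linear(32, 32), nn.Sigmoid(),
    nn.Linear(32, 10),
)

# Both are tuned with stochastic gradient descent with momentum and
# mini-batches of 100 (learning rate and momentum as in Section 3).
optimizer = torch.optim.SGD(pretrained.parameters(), lr=0.1, momentum=0.5)
loss_fn = nn.CrossEntropyLoss()
```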
5. Conclusions
In this paper we have demonstrated how the memcomputing paradigm (and, in particular, its digital realization [22]) can be applied toward the chief bottlenecks in
deep learning today. In particular, we directly assisted a popular algorithm to pretrain RBMs and DBNs, which consists of gradient ascent on the log-likelihood. We have shown that memcomputing can accelerate the pretraining of these networks considerably beyond what is currently done.
In fact, simulations of digital memcomputing machines
achieve accelerations of pretraining comparable, in number of iterations, to the hardware application of the quantum
annealing method, but with better quality. In addition,
unlike quantum computers, our approach can be easily
scaled on classical hardware to full size problems.
In addition, our memcomputing method also retains an advantage with respect to advances in supervised training, like batch normalization and rectifiers, that have been introduced to eliminate the need for pretraining. We find, indeed,
that despite our pretraining done with sigmoidal functions,
hence on a more non-convex landscape than that provided
by rectifiers, we maintain an accuracy advantage greater
than 1% (and a 20% reduction in error rate) throughout
the training.
Finally, the form of the energy in Eq. 3 is quite general
and encompasses full DBNs. In this way, our method can
also be applied to pretraining entire deep-learning models
at once, potentially exploring parameter spaces that are
inaccessible by any other classical or quantum methods.
We leave this interesting line of research for future studies.
Acknowledgments – We thank Forrest Sheldon for useful
discussions and Yoshua Bengio for pointing out the role of
supervised training techniques. H.M. acknowledges support from a DoD-SMART fellowship. M.D. acknowledges
partial support from the Center for Memory and Recording Research at UCSD.
References

[1] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (7553) (2015) 436–444.
[2] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[3] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups, IEEE Signal Processing Magazine 29 (6) (2012) 82–97.
[4] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529–533.
[5] P. Smolensky, Information processing in dynamical systems: foundations of harmony theory, in: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, MIT Press, 1986, pp. 194–281.
[6] Y. Bengio, et al., Learning deep architectures for AI, Foundations and Trends in Machine Learning 2 (1) (2009) 1–127.
[7] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Nature 323 (6088) (1986) 533–536.
[8] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, Y. LeCun, The loss surfaces of multilayer networks, in: Artificial Intelligence and Statistics, 2015, pp. 192–204.
[9] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, Y. Bengio, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, in: Advances in Neural Information Processing Systems, 2014, pp. 2933–2941.
[10] G. E. Hinton, S. Osindero, Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Computation 18 (7) (2006) 1527–1554.
[11] G. E. Hinton, Training products of experts by minimizing contrastive divergence, Training 14 (8).
[12] I. Sutskever, T. Tieleman, On the convergence properties of contrastive divergence, in: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 789–795.
[13] X. Glorot, A. Bordes, Y. Bengio, Deep sparse rectifier neural networks, in: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 315–323.
[14] S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: F. Bach, D. Blei (Eds.), Proceedings of the 32nd International Conference on Machine Learning, Vol. 37 of Proceedings of Machine Learning Research, PMLR, Lille, France, 2015, pp. 448–456. URL http://proceedings.mlr.press/v37/ioffe15.html
[15] T. Tieleman, G. Hinton, Using fast weights to improve persistent contrastive divergence, in: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009, pp. 1033–1040.
[16] E. Romero Merino, F. Mazzanti Castrillejo, J. Delgado Pin, D. Buchaca Prats, Weighted contrastive divergence, ArXiv e-prints, arXiv:1801.02567.
[17] A. M. Sheri, A. Rafique, W. Pedrycz, M. Jeon, Contrastive divergence for memristor-based restricted Boltzmann machine, Engineering Applications of Artificial Intelligence 37 (2015) 336–342.
[18] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, S. Lloyd, Quantum machine learning, Nature 549 (7671) (2017) 195–202.
[19] N. Wiebe, A. Kapoor, K. M. Svore, Quantum deep learning, arXiv preprint arXiv:1412.3489.
[20] S. H. Adachi, M. P. Henderson, Application of quantum annealing to training of deep neural networks, arXiv preprint arXiv:1510.06356.
[21] F. L. Traversa, M. Di Ventra, Universal memcomputing machines, IEEE Trans. on Neural Networks 26 (2015) 2702–2715.
[22] F. L. Traversa, M. Di Ventra, Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines, Chaos: An Interdisciplinary Journal of Nonlinear Science 27 (2) (2017) 023107. doi:10.1063/1.4975761.
[23] F. L. Traversa, P. Cicotti, F. Sheldon, M. Di Ventra, Evidence of an exponential speed-up in the solution of hard optimization problems, ArXiv e-prints, arXiv:1710.09278.
[24] H. Manukian, F. L. Traversa, M. Di Ventra, Memcomputing numerical inversion with self-organizing logic gates, IEEE Transactions on Neural Networks and Learning Systems PP (99) (2017) 1–6. doi:10.1109/TNNLS.2017.2697386.
[25] M. Di Ventra, Y. V. Pershin, The parallel approach, Nature Physics 9 (2013) 200–202.
[26] M. Di Ventra, F. L. Traversa, I. V. Ovchinnikov, Topological field theory and computing with instantons, Annalen der Physik (2017) 1700123. doi:10.1002/andp.201700123.
[27] G. Hinton, A practical guide to training restricted Boltzmann machines, Momentum 9 (1) (2010) 926.
[28] S. Arora, B. Barak, Computational Complexity: A Modern Approach, Cambridge University Press, 2009.
[29] Z. Bian, F. Chudak, W. G. Macready, G. Rose, The Ising model: teaching an old problem new tricks, D-Wave Systems (2010) 1–32.
[30] J. Håstad, Some optimal inapproximability results, Journal of the ACM (JACM) 48 (4) (2001) 798–859.
[31] M. Di Ventra, F. L. Traversa, Absence of periodic orbits in digital memcomputing machines with solutions, Chaos: An Interdisciplinary Journal of Nonlinear Science 27 (2017) 101101.
[32] M. Di Ventra, F. L. Traversa, Absence of chaos in digital memcomputing machines with solutions, Phys. Lett. A 381 (2017) 3255.
[33] D. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
COMPUTING UPPER CLUSTER ALGEBRAS
arXiv:1307.0579v1 [] 2 Jul 2013
JACOB MATHERNE AND GREG MULLER
Abstract. This paper develops techniques for producing presentations of upper cluster algebras. These techniques are suited to computer implementation, and will always
succeed when the upper cluster algebra is totally coprime and finitely generated. We
include several examples of presentations produced by these methods.
1. Introduction
1.1. Cluster algebras. Many notable varieties have a cluster structure, in the following
sense. They are equipped with distinguished regular functions called cluster variables,
which are grouped into clusters, each of which form a transcendence basis for the field
of rational functions. Each cluster is endowed with mutation rules for moving to other
clusters, and in this way, every cluster can be reconstructed from any other cluster. Examples
flag varieties [GLS08], moduli of local systems [FG06], and others.
The obvious algebra to consider in this situation is the cluster algebra A, the ring
generated by the cluster variables.1 However, from a geometric perspective, the more
natural algebra to consider is the upper cluster algebra U, defined by intersecting certain
Laurent rings (see Remark 3.2.2 for the explicit geometric interpretation).
The Laurent phenomenon guarantees that A ⊆ U. This can be strengthened to an
equality A = U in many of the geometric examples and simpler classes of cluster algebras
(such as acyclic and locally acyclic cluster algebras [BFZ05, Mul13]). In most cases where
A = U is known, the structures and properties of the algebra A = U are fairly wellunderstood; for example, [BFZ05, Corollary 1.21] presents an acyclic cluster algebra as a
finitely generated complete intersection.
However, there are examples where A ⊊ U; the standard counterexample is the Markov
cluster algebra (see Remark 6.2.1 for details). In these examples, both A and U are more
difficult to work with directly, and either can exhibit pathologies. For example, the Markov
cluster algebra is non-Noetherian [Mul13], and Speyer recently produced a non-Noetherian
upper cluster algebra [Spe13].
1.2. Presenting upper cluster algebras. Nevertheless, because of its geometric nature,
the authors expect that an upper cluster algebra U is generally better behaved than its
cluster algebra A. This is supported in the few concretely understood examples where
A ⊊ U; however, the scarcity of examples makes investigating U difficult.
2010 Mathematics Subject Classification. Primary 13F60, Secondary 14Q99.
Keywords: Cluster algebras, upper cluster algebras, presentations of algebras, computational algebra.
The second author was supported by the VIGRE program at LSU, National Science Foundation grant
DMS-0739382.
1Technically, the construction of the cluster algebra used in this note includes the inverses to a finite set.
1
The goal of this note is to alleviate this problem by developing techniques to produce
explicit presentations of U. The main tool is the following lemma, which gives several
computationally distinct criteria for when a Noetherian ring S is equal to U.
Lemma 1.2.1. If A is a cluster algebra with deep ideal D, and S is a Noetherian ring
such that A ⊆ S ⊆ U, then the following are equivalent.
(1) S = U.
(2) S is normal and codim(SD) ≥ 2.
(3) S is S2 and codim(SD) ≥ 2.
(4) Ext1S(S/SD, S) = 0.
(5) Sf = (Sf : (SD)∞), where f := x1x2 · · · xm for some cluster x = {x1, ..., xn}.
If Sf ≠ (Sf : (SD)∞), then (Sf : (SD)∞)f−1 contains elements of U not in S.
Here, the deep ideal D is the ideal in A defined by the products of the mutable cluster
variables (see Section 3.2).
This lemma is constructive, in that a negative answer to condition (5) explicitly provides
new elements of U. Even without a clever guess for a generating set of U, iteratively
checking this criterion and adding new elements can produce a presentation for U. Speyer’s
example demonstrates that this algorithm cannot always work; however, if U is finite, this
approach will always produce a generating set (Corollary 5.2.2).
Naturally, we include several examples of these explicit presentations. Sections 6 and 7
contain presentations for the upper cluster algebras of the seeds pictured in Figure 1.
Figure 1. Seeds of upper cluster algebras presented in this note.
Remark 1.2.2. To compute examples, we use a variation of Lemma 4.4.1 involving lower
bounds and upper bounds, which requires that our cluster algebras are totally coprime.
2. Cluster algebras
Cluster algebras are a class of commutative unital domains. Up to a finite localization, they are generated in their field of fractions by distinguished elements, called cluster
variables. The cluster variables (and hence the cluster algebra) are produced by an recursive procedure, called mutation. While cluster algebras are geometrically motivated, their
construction is combinatorial and determined by some simple data called a ’seed’.
2.1. Ordered seeds. A matrix M ∈ Matm,m (Z) is skew-symmetrizable if there is a nonnegative, diagonal matrix D ∈ Matm,m (Z) such that DM is skew-symmetric; that is, that
(DM)⊤ = −DM.
Let n ≥ m ≥ 0 be integers, and let B ∈ Matn,m (Z) be an integer valued n × m-matrix.
Let B0 ∈ Matm,m (Z) be the principal part, the submatrix of B obtained by deleting the
last n − m rows.
An ordered seed is a pair (x, B) such that...
• B ∈ Matn,m (Z),
• B0 is skew-symmetrizable, and
• x = (x1 , ..., xn ) is an n-tuple of elements in a field F of characteristic zero, which
is a free generating set for F as a field over Q.
The various parts of an ordered seed have their own names.
• The matrix B is the exchange matrix.
• The n-tuple x is the cluster.
• Elements xi ∈ x are cluster variables.2 These are further subdivided by index.
– If 0 < i ≤ m, xi is a mutable variable.
– If m < i ≤ n, xi is a frozen variable.
The ordering of the cluster variables in x is a matter of convenience. A permutation
of the cluster variables which preserves the flavor of the cluster variable (mutable/frozen)
acts on the ordered seed by reordering x and conjugating B.
A skew-symmetric seed (x, B) can be diagrammatically encoded as an ice quiver (Figure
2). Put each mutable variable xi in a circle, and put each frozen variable xi in a square.
For each pair of indices i < j with i ≤ m, add Bji arrows from i to j, where ‘negative
arrows’ go from j to i.
x = {x1, x2, x3},    B = (  0  −3
                            3   0
                           −2   1 )
Figure 2. The ice quiver associated to a seed
A seed (x, B) is called acyclic if Q does not contain a directed cycle of mutable vertices.
The seed in Figure 2 is acyclic.
2.2. Cluster algebras. Given an ordered seed (x, B) and some 1 ≤ k ≤ m, define the
mutation of (x, B) at k to be the ordered seed (x′ , B′ ), where
x′i := ( ∏_{Bjk>0} xj^{Bjk} + ∏_{Bjk<0} xj^{−Bjk} ) xi^{−1}    if i = k,
x′i := xi                                                       otherwise;

B′ij := −Bij                                                    if i = k or j = k,
B′ij := Bij + ( |Bik| Bkj + Bik |Bkj| ) / 2                     otherwise.
Since (x′ , B′ ) is again an ordered seed, mutation may be iterated at any sequence of indices
in 1, 2, ..., m. Mutation twice in a row at k returns to the original ordered seed.
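As a quick illustration, the following sketch (our own SymPy phrasing) implements the two formulas above and checks, on the seed of Figure 2, that mutating twice at the same index returns the original ordered seed.

```python
import sympy as sp

def mutate(x, B, k):
    """Mutate the ordered seed (x, B) at a mutable index k (1-indexed), using
    the formulas displayed above.  x is a list of sympy expressions; B is an
    n x m list of lists of integers."""
    n, m = len(B), len(B[0])
    K = k - 1
    pos, neg = sp.Integer(1), sp.Integer(1)
    for j in range(n):
        if B[j][K] > 0:
            pos *= x[j] ** B[j][K]
        elif B[j][K] < 0:
            neg *= x[j] ** (-B[j][K])
    new_x = list(x)
    new_x[K] = sp.cancel((pos + neg) / x[K])
    new_B = [[-B[i][j] if (i == K or j == K)
              else B[i][j] + (abs(B[i][K]) * B[K][j] + B[i][K] * abs(B[K][j])) // 2
              for j in range(m)] for i in range(n)]
    return new_x, new_B

# The seed of Figure 2 (n = 3, m = 2): mutating twice at k = 1 is the identity.
x1, x2, x3 = sp.symbols('x1 x2 x3')
x, B = [x1, x2, x3], [[0, -3], [3, 0], [-2, 1]]
y, C = mutate(*mutate(x, B, 1), 1)
assert C == B and [sp.simplify(a - b) for a, b in zip(y, x)] == [0, 0, 0]
```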
Two ordered seeds (x, B) and (y, C) are mutation-equivalent if (y, C) is obtained from
(x, B) by a sequence of mutations and permutations.
2Many authors do not consider frozen variables to be cluster variables, instead referring to them as
‘geometric coefficients’, following [FZ07].
Definition 2.2.1. Given an ordered seed (x, B), the associated cluster algebra A(x, B)
is the subring of the ambient field F generated by
{xi^{−1} | m < i ≤ n}  ∪  ⋃_{(y,C) ∼ (x,B)} y
A cluster variable in A(x, B) is a cluster variable in any ordered seed mutation-equivalent
to (x, B), and it is mutable or frozen based on its index in any seed. A cluster in A(x, B)
is a set of cluster variables appearing as the cluster in some ordered seed. Mutation-equivalent seeds define the same cluster algebra A. The seed will often be omitted from
the notation when clear.
A cluster algebra A is acyclic if there exists an acyclic seed of A; usually, an acyclic
cluster algebra will have many non-acyclic seeds as well. Acyclic cluster algebras have
proven to be the most easily studied class; for example, [BFZ05, Corollary 1.21] gives a
presentation of A with 2n generators and n relations.
2.3. Upper cluster algebras. A basic tool in the theory of cluster algebras is the following theorem, usually called the Laurent phenomenon.
Theorem 2.3.1 (Theorem 3.1, [FZ02]). Let A be a cluster algebra, and x = {x1 , x2 , ..., xn }
be a cluster in A. As subrings of F ,
A ⊂ Z[x1^{±1}, ..., xn^{±1}]
This is the localization of A at the mutable variables x1 , ..., xm .
The theorem says elements of A can be expressed as Laurent polynomials in many
different sets of variables (one such expression for each cluster). The set of all rational
functions in F with this property is an important algebra in its own right, and the central
object of study in this note.
Definition 2.3.2. Given a cluster algebra A, the upper cluster algebra U is defined
U :=  ⋂_{clusters x = {x1,...,xn} in A}  Z[x1^{±1}, ..., xn^{±1}]  ⊂ F
The Laurent phenomenon is equivalent to the containment A ⊆ U.
Proposition 2.3.3. [Mul13, Proposition 2.1] Upper cluster algebras are normal.
Remark 2.3.4. Any intersection of normal domains in a fraction field is normal.
2.4. Lower and upper bounds. The cluster algebras that have finitely many clusters
have an elegant classification by Dynkin diagrams [FZ03]. However, such finite-type cluster
algebras are quite rare; even the motivating examples are frequently infinite-type. Working
with infinite-type A or U can be daunting because the definitions involve infinite generating
sets or intersections (this is especially a problem for computer computations).
Following [BFZ05], to any seed x, we associate bounded analogs of A and U called
lower and upper bounds. The definitions are the same, except the only seeds considered
are x and those seeds a single mutation away from x.
As a standard abuse of notation, for a fixed seed (x = {x1 , x2 , ..., xn }, B), let x′i denote
the mutation of xi in (x, B).
Definition 2.4.1. Let (x, B) be a seed in F .
The lower bound Lx is the subring of F generated by {x1 , x2 , ..., xn }, the one-step
mutations {x′1 , x′2 , ..., x′m }, and the inverses of the invertible frozen variables {x_{m+1}^{−1} , ..., x_n^{−1} }.
The upper bound Ux is the intersection in F of the n + 1 Laurent rings corresponding
to x and its one-step mutations.
Ux := Z[x1^{±1} , ..., xn^{±1} ] ∩ ⋂_i Z[x1^{±1} , ..., x_{i−1}^{±1} , x_i′^{±1} , x_{i+1}^{±1} , ..., xn^{±1} ]
The names ‘lower bound’ and ‘upper bound’ are justified by the obvious inclusions
Lx ⊆ A ⊆ U ⊆ Ux
When does U = Ux ? A seed (x, B) is coprime if every pair of columns of B is linearly
independent. A cluster algebra is totally coprime if every seed is coprime.
Theorem 2.4.2 (Corollary 1.7, [BFZ05]). If A is totally coprime, then U = Ux for any
seed (x, B).
Mutating a seed can make coprime seeds non-coprime (and vice versa), so verifying a
cluster algebra is totally coprime may be hard in general. A stronger condition is that the
exchange matrix B has full rank (ie, kernel 0); this is preserved by mutation, so it implies
the cluster algebra A(B) is totally coprime.
Theorem 2.4.3 (Proposition 1.8, [BFZ05]). If the exchange matrix B of a seed of A is
full rank, then A is totally coprime.
Of course, there are many totally coprime cluster algebras which are not full rank.3
3. Regular functions on an open subscheme
This section collects some generalities about the ring we denote Γ(R, I) – the ring of
regular functions on the open subscheme of Spec(R) whose complement is V (I) – and
relates this idea to cluster algebras.
3.1. Definition. Let R be a domain with fraction field F (R).4 For any ideal I ⊂ R,
define the ring Γ(R, I) as the intersection (taken in F (R))
Γ(R, I) := ⋂_{r∈I} R[r^{−1}]
Remark 3.1.1. In geometric terms, Γ(R, I) is the ring of rational functions on Spec(R)
which are regular on the complement of V (I). As a consequence, Γ(R, I) only depends on
I up to radical. Neither of these facts is necessary for the rest of this note, however.
Proposition 3.1.2. If I is generated by a set π ⊂ R, then
Γ(R, I) = ⋂_{r∈π} R[r^{−1}]
Proof. The containment Γ(R, I) ⊆ ⋂_{r∈π} R[r^{−1}] is immediate, since π ⊆ I. For the reverse
containment, choose some f ∈ I and write f = ∑_{r∈π0} br r, where π0 is a finite subset of π. Let
g ∈ ⋂_{r∈π} R[r^{−1}]; then for each r ∈ π there are nr ∈ R and αr ∈ N such that g = nr / r^{αr}.
Define
β = 1 + ∑_{r∈π0} αr
and consider f^β g. Expanding f^β = (∑_{r∈π0} br r)^β, every monomial in the expansion
contains at least one r′ ∈ π0 with exponent greater or equal to αr′ . Since r′^{αr′} g = nr′ ∈ R,
it follows that f^β g ∈ R and hence g ∈ R[f^{−1}]. Since f ∈ I was arbitrary, ⋂_{r∈π} R[r^{−1}] ⊆ Γ(R, I).
3Proposition 6.1.2 provides a class of such examples.
4All rings in this note are commutative and unital, but need not be Noetherian.
Proposition 3.1.3. If R ⊆ S ⊆ Γ(R, I), then Γ(R, I) = Γ(S, SI).
Proof. For each r ∈ I, Γ(R, I) ⊆ R[r^{−1}], and so S ⊆ R[r^{−1}]. Then S[r^{−1}] = R[r^{−1}] for all
r ∈ I. If π generates I over R, then π generates SI over S. By Proposition 3.1.2,
Γ(R, I) = ⋂_{r∈π} R[r^{−1}] = ⋂_{r∈π} S[r^{−1}] = Γ(S, SI)
This completes the proof.
3.2. Upper cluster algebras. The relation between a cluster algebra A and its upper
cluster algebra U is an example of this construction. Define the deep ideal D of A by
D := ∑_{clusters {x1 ,x2 ,...,xn }} A x1 x2 · · · xm
That is, D is the A-ideal generated by the products of the mutable variables in the clusters.
Proposition 3.2.1. Γ(A, D) = U.
Proof. Since D is generated by the products of the mutable variables in the clusters,
Γ(A, D) = ⋂_{clusters {x1 ,x2 ,...,xn }} A[(x1 x2 · · · xm )^{−1}] = ⋂_{clusters {x1 ,x2 ,...,xn }} Z[x1^{±1} , x2^{±1} , ..., xn^{±1} ]
Thus, Γ(A, D) = U.
Remark 3.2.2. The proposition is equivalent to the following well-known geometric interpretation of U. If {x1 , ..., xn } is a cluster, then the isomorphism
A[(x1 x2 · · · xm )^{−1}] ≃ Z[x1^{±1} , x2^{±1} , ..., xn^{±1} ]
determines an open inclusion G_Z^n ↪ Spec(A).5 The union of all such open affine subschemes is a smooth open subscheme in Spec(A), whose complement is V (D).6 The
proposition states that U is the ring of regular functions on this union.
3.3. Upper bounds. Let (x, B) be a seed with x = {x1 , x2 , ..., xn }. As in Section 2.4,
let x′i denote the mutation of xi in x. The lower deep ideal Dx is the Lx -ideal
Dx := Lx (x1 x2 · · · xm ) + ∑_i Lx (x1 x2 · · · x_{i−1} x′_i x_{i+1} · · · xm )
Proposition 3.2.1 has an analog.
Proposition 3.3.1. Γ(Lx , Dx ) = Ux
Proof. Since Dx is generated by the products of the mutable variables in Lx ,
Γ(Lx , Dx ) = Lx [(x1 x2 · · · xm )^{−1}] ∩ ⋂_i Lx [(x1 x2 · · · x′_i · · · xm )^{−1}]
= Z[x1^{±1} , ..., xn^{±1} ] ∩ ⋂_i Z[x1^{±1} , ..., x′_i^{±1} , ..., xn^{±1} ]
Thus, Γ(Lx , Dx ) = Ux .
5These open algebraic tori are called toric charts in [Sco06] and cluster tori in [Mul13].
6This union is called the cluster manifold in [GSV03].
In practice, Γ(Lx , Dx ) is much easier to work with than Γ(A, D), because the objects
involved are defined by finite generating sets.
Remark 3.3.2. For any set of clusters S in A, one may define LS to be the subring generated by the variables
in the clusters of S, US to be the intersection of the Laurent rings of the clusters in S, and DS to be the LS -ideal
generated by the products of the clusters in S. Again, one has US = Γ(LS , DS ).
4. Criteria for Γ(R, I)
Given a ‘guess’ for Γ(R, I) – a ring S such that R ⊆ S ⊆ Γ(R, I) – there are several
criteria for verifying if S = Γ(R, I). This section develops these criteria.
4.1. Saturations. Given two ideals I, J in R, define the saturation
(J : I ∞ ) = {r ∈ R | ∀g ∈ I, ∃n ∈ N s.t. rg n ∈ J}
Computer algebra programs can compute saturations, at least when R is finitely generated.
Remark 4.1.1. When I is not finitely generated, this definition of saturation may differ
from the infinite union ⋃_n (J : I^n ), which amounts to reversing the order of the quantifiers.
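For concreteness, the sketch below shows how such a saturation might be computed in Sage; it is our own illustration, not part of the paper, and it assumes Sage's `saturation` method on multivariate polynomial ideals (which, in the versions we have used, returns the saturated ideal together with an exponent). The example ideal is the relation ideal of the (2, 2, 2) cluster algebra from Section 6.2.

# Illustrative Sage session (assumes MPolynomialIdeal.saturation is available;
# check your Sage version's documentation).  Not part of the paper.
from sage.all import PolynomialRing, QQ

R = PolynomialRing(QQ, ['x1', 'x2', 'x3', 'M'])
x1, x2, x3, M = R.gens()

I = R.ideal([x1*x2*x3*M - x1**2 - x2**2 - x3**2])   # relation ideal for a = 2
J = R.ideal([x1*x2*x3])                             # product of the mutable variables

I_sat, n = I.saturation(J)   # (I : J^infinity) together with a witness exponent
print(I_sat == I)            # True: this relation ideal is already saturated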
Saturations can be used to compute the sub-R-module of Γ(R, I) with denominator f .
Proposition 4.1.2. If f ∈ R, then
Rf −1 ∩ Γ(R, I) = (Rf : I ∞ )f −1
Proof. If g ∈ R ∩ f Γ(R, I), then for any r ∈ I, we may write gf −1 = hr−m for some h ∈ R
and m ∈ N. Then grm = hf ∈ Rf ; and so g ∈ (Rf : I ∞ ).
If g ∈ (Rf : I ∞ ), then for any r ∈ I, there is some m such that grm ∈ Rf . It follows
that gf −1 ∈ Rr−m ⊂ R[r−1 ]. Therefore, gf −1 ∈ Γ(R, I), and so g ∈ f Γ(R, I).
Saturations can also detect when R = Γ(R, I).
Proposition 4.1.3. Let f ∈ I. Then R = Γ(R, I) if and only if Rf = (Rf : I ∞ ).
Proof. If R = Γ(R, I), then (Rf : I ∞ ) = Rf ∩ Γ(R, I) = Rf .
Assume Rf = (Rf : I ∞ ). Let g ∈ Γ(R, I), and let n be the smallest integer such that
f^n g ∈ R. If n ≥ 1, then
f (f^{n−1} g) ∈ R ∩ (f Γ(R, I)) = (Rf : I ∞ ) = Rf
and so f^{n−1} g ∈ R, contradicting the minimality of n. So g ∈ R, and so Γ(R, I) = R.
4.2. The saturation criterion. Given a ring S with R ⊆ S ⊆ Γ(R, I), the following
lemma gives a necessary and computable criterion for when S = Γ(R, I). Perhaps more
importantly, if S ⊊ Γ(R, I), it explicitly gives new elements of Γ(R, I), which can be used
to generate a better guess S ′ ⊆ Γ(R, I).
Lemma 4.2.1. Let R ⊆ S ⊆ Γ(R, I). For any f ∈ I,
S ⊆ (Sf : (SI)∞ )f −1 ⊂ Γ(R, I)
Furthermore, either
• S = Γ(R, I), or
• S ⊊ (Sf : (SI)∞ )f −1 ⊆ Γ(R, I).
Proof. By Proposition 3.1.3, Γ(R, I) = Γ(S, SI). The containment (Sf : (SI)∞ )f −1 ⊂
Γ(R, I) follows from Proposition 4.1.2. The containment S ⊆ (Sf : (SI)∞ )f −1 is clear
from the definition of the saturation. If (Sf : (SI)∞ )f −1 = S, then Proposition 4.1.3
implies that S = Γ(R, I).
8
JACOB MATHERNE AND GREG MULLER
4.3. Noetherian algebraic criteria. When the ring S is Noetherian, there are several
alternative criteria to verify that S = Γ(R, I).7 When S is also normal, these criteria are
sharp, but none of them can give a constructive negative answer similar to Lemma 4.2.1.
The definitions of ‘codimension’, ‘S2’ and ‘depth’ used here are found in [Eis95].
Lemma 4.3.1. Let R ⊆ S ⊆ Γ(R, I). If S is Noetherian, then each of the following
statements implies the next.
(1) S is normal and codim(SI) ≥ 2.
(2) S is S2 and codim(SI) ≥ 2.
(3) depthS (SI) ≥ 2; that is, Ext1S (S/SI, S) = 0.
(4) S = Γ(R, I).
If S is normal and Noetherian, then the above statements are equivalent.
Proof. (1) ⇒ (2). By Serre’s criterion [Eis95, Theorem 11.5.i], a normal Noetherian
domain is S2.
(2) ⇒ (3). The S2 condition implies that every ideal of codimension ≥ 2 has depth ≥ 2;
see the proof of [Eis95, Theorem 18.15].8
(Not 4) ⇒ (Not 3). Assume that S ⊊ Γ(R, I), and let f ∈ I. By Lemma 4.2.1 and
Proposition 4.1.2,
S ⊊ (Sf : (SI)∞ )f −1 = Sf −1 ∩ Γ(S, SI)
Since S is Noetherian, SI is finitely generated, and so it is possible to find an element
g ∈ Sf −1 ∩ Γ(S, SI) such that g ∉ S but gI ⊆ S. The natural short exact sequence
0 → S ↪ Sg → Sg/S → 0
is an essential extension, and so Ext1S (Sg/S, S) ≠ 0.
The map S/SI → Sg/S which sends 1 to g is a surjection, and its kernel K is a torsion
S-module. Hence, there is a long exact sequence which contains
· · · → HomS (K, S) → Ext1S (Sg/S, S) → Ext1S (S/SI, S) → · · ·
Since K is torsion, HomS (K, S) = 0, and so Ext1S (S/SI, S) ≠ 0.
(S normal) + (Not 1) ⇒ (Not 4). Assume that S is normal, and that codim(SI) = 1.
Therefore, there is a prime S-ideal P containing SI with codim(P ) = 1. By Serre’s
criterion [Eis95, Theorem 11.5.ii], the localization SP is a discrete valuation ring. Let
ν : F (S)∗ → Z be the corresponding valuation.
Let a1 , a2 , ..., aj generate P over S. Then a1 , a2 , ..., aj generate SP P over SP . There
must be some ai with ν(ai ) = 1, and this element generates SP P . Reindexing as needed,
assume that ν(a1 ) = 1. For each ai , there exists fi , gi ∈ S − P such that
fi ν(a )
ai = a1 i
gi
Let d = gcd(ν(ai )). Then, for all 1 ≤ k ≤ j,
x := (1/a1^d) ∏_{1<i≤j} gi^{ν(ai)} = ( fk a1^{ν(ak)−d} gk^{ν(ak)−1} / ak ) ∏_{1<i≤j, i≠k} gi^{ν(ai)}  ∈ S[ak^{−1}]
(for k = 1 the membership x ∈ S[a1^{−1}] is immediate from the definition of x).
It follows that x ∈ Γ(S, P ) ⊆ Γ(S, SI) = Γ(R, I). However, since ν(x) = −d, it follows
that x ∉ S, and so S ≠ Γ(R, I).
7However, even when R is Noetherian, one cannot always expect that Γ(R, I) is Noetherian.
8Some sources take this as the definition of S2.
Remark 4.3.2. The implication (1) ⇒ (4) is one form of the ‘algebraic Hartog lemma’, in
analogy with Hartog’s lemma in complex analysis.
Remark 4.3.3. The assumption that S is Noetherian is essential. If
R = S = C[[x^t | t ∈ Q≥0 ]]
is the ring of Puiseux series without denominator, and I is generated by {x^t }t>0 , then R
is normal and Ext1 (R/I, R) = 0. Nevertheless,
Γ(R, I) = C[[x^t | t ∈ Q]] ≠ R
is the field of all Puiseux series.
4.4. Criteria for U. We restate the previous criteria for upper cluster algebras.
Lemma 4.4.1. If A is a cluster algebra with deep ideal D, and S is a Noetherian ring
such that A ⊆ S ⊆ U, then the following are equivalent.
(1) S = U.
(2) S is normal and codim(SD) ≥ 2.
(3) S is S2 and codim(SD) ≥ 2.
(4) Ext1S (S/SD, S) = 0.
(5) Sf = (Sf : (SD)∞ ), where f := x1 x2 ...xm for some cluster x = {x1 , ..., xn }.
If Sf ≠ (Sf : (SD)∞ ), then (Sf : (SD)∞ )f −1 contains elements of U not in S.
However, we are interested in infinite-type cluster algebras, where the containments
A ⊆ S ⊆ U cannot be naively verified by hand or computer. This is where lower and
upper bounds are helpful, since the analogous containments can be checked directly.
Lemma 4.4.2. If (x, B) is a seed in a totally coprime cluster algebra A and S is a Noetherian
ring such that Lx ⊆ S ⊆ Ux , then the following are equivalent.
(1) S = Ux = U.
(2) S is normal and codim(SDx ) ≥ 2.
(3) S is S2 and codim(SDx ) ≥ 2.
(4) Ext1S (S/SDx , S) = 0.
(5) Sf = (Sf : (SDx )∞ ), where f is the product of the mutable variables in x.
If Sf ≠ (Sf : (SDx )∞ ), then (Sf : (SDx )∞ )f −1 contains elements of U not in S.
Note that Ux is normal by Remark 2.3.4, and so the strong form of Lemma 4.3.1 applies.
Remark 4.4.3. Criterion (2) was used implicitly in the proofs of [BFZ05, Theorem 2.10]
and [Sco06, Proposition 7], and a form of it is stated in [FP13, Proposition 3.6].
5. Presenting U
This section outlines the steps for checking if a set of Laurent polynomials generates a
totally coprime upper cluster algebra U over the subring of frozen variables.
5.1. From conjectural generators to a presentation. Fix a seed (x = {x1 , ..., xm }, B)
in a totally coprime cluster algebra A. Let
ZP := Z[x_{m+1}^{±1} , x_{m+2}^{±1} , ..., x_n^{±1} ]
be the coefficient ring – the Laurent ring generated by the frozen variables and their
inverses.
Start with a finite set of Laurent polynomials in Z[x1^{±1} , ..., xn^{±1} ], which hopefully generates U over ZP. We assume that all the initial mutable variables x1 , ..., xm are in this set.
Write this set as
x1 , x2 , ..., xm , y1 , ..., yp
where
yi = Ni (x1 , ..., xn ) / (x1^{α1i} x2^{α2i} · · · xn^{αni} ) ∈ Z[x1^{±1} , ..., xn^{±1} ]
for some polynomial Ni (x1 , ..., xn ).
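For instance — our own illustration, using sympy rather than any particular system the authors had in mind — a Laurent polynomial can be split into its numerator Ni and its monomial denominator as follows; the element borrowed here is M of Section 6.2 with a = 2.

# Splitting a Laurent polynomial into numerator and monomial denominator (sketch).
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
y = (x1**2 + x2**2 + x3**2) / (x1 * x2 * x3)   # a candidate generator

N, monomial = sp.fraction(sp.together(y))
print(N)         # the polynomial N_i, here x1**2 + x2**2 + x3**2
print(monomial)  # the monomial denominator, here x1*x2*x3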
• Compute the ideal of relations. Let
S̃ := ZP[x1 , ..., xm , y1 , ..., yp ]
be a polynomial ring over ZP (here, the yi s are just symbols). Define Ĩ to be the S̃-ideal
generated by elements of the form
yi (x1^{α1i} x2^{α2i} · · · xn^{αni} ) − Ni (x1 , ..., xn )
as i runs from 1 to p. Let I := (Ĩ : S̃(x1 x2 · · · xm )∞ ) be the saturation of Ĩ by the
principal S̃-ideal generated by the product of the mutable variables x1 x2 · · · xm .
Lemma 5.1.1. The sub-ZP-algebra of Z[x1^{±1} , ..., xn^{±1} ] generated by
x1 , x2 , ..., xm , y1 , ..., yp
is naturally isomorphic to the quotient S := S̃/I.
Proof. The localization S̃[(x1 x2 · · · xm )^{−1}] is the ring
Z[x1^{±1} , ..., xn^{±1} , y1 , ..., yp ]
The induced ideal S̃[(x1 x2 · · · xm )^{−1}]Ĩ is generated by elements of the form
yi − (x1^{−α1i} x2^{−α2i} · · · xn^{−αni} )Ni (x1 , ..., xn )
and so the quotient S̃[(x1 x2 · · · xm )^{−1}]/S̃[(x1 x2 · · · xm )^{−1}]Ĩ eliminates the yi s and is
isomorphic to Z[x1^{±1} , ..., xn^{±1} ]. The kernel of the composition
S̃ → S̃[(x1 x2 · · · xm )^{−1}] → Z[x1^{±1} , ..., xn^{±1} ]
consists of elements r ∈ S̃ such that (x1 x2 · · · xm )^i r ∈ Ĩ for some i; this is the
saturation I.
• Verify that Lx ⊆ S ⊆ Ux . For the first containment, it suffices to check that
x′1 , x′2 , ..., x′m ∈ S, because the other generators of Lx are in S by construction.
For the second containment, it suffices to check that for each 1 ≤ i ≤ m and
1 ≤ k ≤ p,
yk ∈ Z[x1^{±1} , ..., x_i′^{±1} , ..., xn^{±1} ]
This is because x1 , ..., xm , x_{m+1}^{±1} , ..., x_n^{±1} are in Ux by the Laurent phenomenon.
• Check whether S = U using Lemma 4.4.2. Any of the four criteria (2) − (5)
in Lemma 4.4.2 can be used. They all may be implemented by a computer, and
each method potentially involves a different algorithm, so any of the four might
be the most efficient computationally.
• If S ⊊ U, find additional generators and return to the beginning. If
S ≠ U, then (Sf : (SDx )∞ )f −1 contains elements of U which are not in S (where
f = x1 x2 ...xm ). One or more of these elements may be added to the original list
of Laurent polynomials to get a larger guess S ′ for U. Note that any S ′ produced
this way satisfies Lx ⊆ S ′ ⊆ Ux .
5.2. An iterative algorithm. The preceding steps can be regarded as an iterative algorithm for producing successively larger subrings S ⊆ U, as follows. Start with an initial
guess Lx ⊆ S ⊆ Ux . In lieu of cleverness, the lower bound Lx = S makes a workable
initial guess; this amounts to starting with the generators x1 , ..., xm , x′1 , ..., x′m .
Denote S1 := S, and inductively define Si+1 to be the sub-ZP-algebra of Q(x1 , x2 , ..., xn )
generated by Si and (Si f : (Si I)∞ )f −1 . If Si is finitely generated over ZP (resp. Noetherian), then the saturation (Si f : (Si I)∞ ) is finitely generated over Si and so Si+1 is
finitely generated over ZP (resp. Noetherian).
This gives a nested sequence of subrings
Lx ⊆ S = S1 ⊆ S2 ⊆ S3 ⊆ ... ⊆ U = Ux
By Lemma 4.2.1, if Si = Si+1 , then Si = Si+1 = Si+2 = ... = U = Ux .
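Schematically — and only schematically, since the actual algebra lives in a computer algebra system — the iteration is a plain fixed-point loop. In the Python sketch below (ours, not the authors'), `enlarge` and `rings_equal` are hypothetical callbacks standing in for the saturation step and the equality test.

def iterate_upper_bound(S, enlarge, rings_equal, max_steps=1000):
    """Fixed-point loop for S_1, S_2, ... of Section 5.2.  `enlarge(S)` should
    return the ring generated by S and (S f : (S D_x)^infinity) f^{-1};
    `rings_equal` tests equality of subrings.  Both are placeholders for
    computations a CAS would perform."""
    for _ in range(max_steps):
        S_next = enlarge(S)
        if rings_equal(S, S_next):   # S_i = S_{i+1} forces S_i = U = U_x (Lemma 4.2.1)
            return S_next
        S = S_next
    raise RuntimeError("no stabilization; U may fail to be finitely generated")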
Proposition 5.2.1. If U is finitely generated over S, then for some i, Si = U.
Proof. Let f = x1 x2 ...xm . By Proposition 4.1.2,
(Si f : (Si Dx )∞ ) = Si f −1 ∩ U
Induction on i shows that Sf −i ∩ U ⊆ Si+1 . If U is finitely generated over S, then there
is some i such that Sf −i+1 contains a generating set, and so Si = U.
Corollary 5.2.2. Let A be a totally coprime cluster algebra, and S = Lx for some seed
in A. If U is finitely generated, then U = Si for some i.
In other words, this algorithm will always produce U in finitely many steps, even starting
with the ‘worst’ guess Lx .
Remark 5.2.3. This algorithm can be implemented by computational algebra software,
so long as the initial guess S is finitely presented. However, in the authors’ experience,
naively implementing this algorithm was computationally prohibitive after the first step.
A more effective approach was to pick a few simple elements of (Si f : (Si I)∞ ) and use
them to generate a bigger ring Si+1 .
6. Examples: m = n = 3
The smallest non-acyclic seed will have m = n = 3; that is, 3 mutable variables and no
frozen variables. We consider these examples.
6.1. Generalities. Consider an arbitrary skew-symmetric seed (x, Ba,b,c ) with m = n =
3, as in Figure 3. Let Aa,b,c and Ua,b,c be the corresponding cluster algebra and upper
cluster algebra, respectively.9
The seed (x, Ba,b,c ) is acyclic unless a, b, c > 0 or a, b, c < 0, and permuting the variables
can exchange these two inequalities. Even when a, b, c > 0, the cluster algebra Aa,b,c
may nevertheless be acyclic, since there may be an acyclic seed mutation-equivalent to (x, Ba,b,c ).
Thankfully, there is a simple inequality which detects when Aa,b,c is acyclic.
9The notation U
a,b,c is dangerous, in that it leaves no room to distinguish between the upper cluster
algebra and the upper bound of Ba,b,c . However, we will only consider non-acyclic examples, and so by
Theorem 2.4.2, these two algebras coincide. The reader is nevertheless warned.
Ba,b,c =
[  0  −a   c ]
[  a   0  −b ]
[ −c   b   0 ]
Qa,b,c is the associated quiver on the vertices x1 , x2 , x3 .
Figure 3. A general skew-symmetric seed with 3 mutable variables
Theorem 6.1.1. [BBH11, Theorem 1.1] Let a, b, c > 0. The seed (x, Ba,b,c ) is mutation-equivalent to an acyclic seed if and only if a < 2, b < 2, c < 2, or
abc − a² − b² − c² + 4 < 0
Acyclic Aa,b,c = Ua,b,c can be presented using [BFZ05, Corollary 1.21]; and so we focus
on the non-acyclic cases. As the next proposition shows, these cluster algebras are totally
coprime, and so it will suffice to present Ux .
Proposition 6.1.2. Let A be a cluster algebra with m = 3. If A is not acyclic, then A is
totally coprime.
Proof. Let (x, B) be a non-acyclic seed for A with quiver Q; that is, there is a directed
cycle of mutable cluster variables. There are no 2-cycles in Q by construction, and so the
directed cycle in Q passes through every vertex. It follows that Bij 6= 0 if i 6= j. Then the
ith and jth columns are linearly independent, because Bii = 0 and Bij 6= 0. Hence, (x, B)
is a coprime seed, and A is totally coprime.
Remark 6.1.3. This proof does not assume that B0 is skew-symmetric or that n = 3 (ie,
that there are no frozen variables).
6.2. The (a, a, a) cluster algebra. Consider a = b = c ≥ 0 as in Figure 4.
Ba,a,a =
[  0  −a   a ]
[  a   0  −a ]
[ −a   a   0 ]
Qa,a,a is the associated quiver on the vertices x1 , x2 , x3 .
Figure 4. The exchange matrix and quiver for the (a, a, a) cluster algebra
If a = 0 or 1, then Aa,a,a is acyclic.10 For a ≥ 2, Aa,a,a is not acyclic by Theorem 6.1.1.
Remark 6.2.1. The case a = 2 was specifically investigated in [BFZ05], as the first example
of a cluster algebra for which A ≠ U, and it has been subsequently connected to the
Teichmüller space of the once-punctured torus and to the theory of Markov triples
[FG07, Appendix B] (A2,2,2 is sometimes called the Markov cluster algebra). See Section
7.1 for the analog of U2,2,2 with a specific choice of frozen variables.
Proposition 6.2.2. For a ≥ 2, the upper cluster algebra Ua,a,a is generated over Z by
x1 , x2 , x3 , and M := (x1^a + x2^a + x3^a ) / (x1 x2 x3 )
10 In fact, finite-type of type A1 × A1 × A1 or A3 , respectively.
The ideal of relations among these generators is generated by
x1 x2 x3 M − xa1 − xa2 − xa3 = 0
Proof. Since a3 − 3a2 + 4 ≥ 0 for a ≥ 2, Theorem 6.1.1 implies that this cluster algebra is
not acyclic, and Proposition 6.1.2 implies that it is totally coprime.
The element x1 x2 x3 M − xa1 − xa2 − xa3 in Z[x1 , x2 , x3 , M ] is irreducible. The ideal it
generates is prime and therefore it is saturated with respect to x1 x2 x3 . By Lemma 5.1.1,
S = Z[x1 , x2 , x3 , M ]/ < x1 x2 x3 M − xa1 − xa2 − xa3 >
is the subring of Z[x1^{±1} , x2^{±1} , x3^{±1} ] generated by x1 , x2 , x3 and M .
The following identities imply that Lx ⊂ S.
x′1 = (x2^a + x3^a )/x1 = x2 x3 M − x1^{a−1} ,    x′2 = (x1^a + x3^a )/x2 = x1 x3 M − x2^{a−1} ,    x′3 = (x1^a + x2^a )/x3 = x1 x2 M − x3^{a−1}
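As a quick, informal sanity check on the first of these identities (not part of the proof), one can verify it symbolically for a fixed value of a, say a = 2, with sympy; the short script below is our own.

# Check x1' = (x2^a + x3^a)/x1 = x2*x3*M - x1^(a-1) for the (a,a,a) algebra, a = 2.
import sympy as sp

a = 2
x1, x2, x3 = sp.symbols("x1 x2 x3")
M = (x1**a + x2**a + x3**a) / (x1 * x2 * x3)

lhs = (x2**a + x3**a) / x1          # the exchange relation for x1'
rhs = x2 * x3 * M - x1**(a - 1)     # x1' written in terms of M
assert sp.simplify(lhs - rhs) == 0  # the two expressions agree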
The following identities imply that S ⊂ Ux .
M = (x′1^a + (x2^a + x3^a )^{a−1}) / (x′1^{a−1} x2 x3 ) = (x′2^a + (x1^a + x3^a )^{a−1}) / (x1 x′2^{a−1} x3 ) = (x′3^a + (x1^a + x2^a )^{a−1}) / (x1 x2 x′3^{a−1} )
Since S is a hypersurface, it is a complete intersection, and so it is Cohen-Macaulay [Eis95,
Prop. 18.13], and in particular it is S2.11
Let P be a prime ideal in S containing
Dx =< x1 x2 x3 , x′1 x2 x3 , x1 x′2 x3 , x1 x2 x′3 >
Since x1 x2 x3 ∈ P , at least one of x1 , x2 , x3 lies in P by primality. If any two xi , xj do, then
xk^a = xi xj xk M − xi^a − xj^a ∈ P ⇒ xk ∈ P.
If only one xi ∈ P , then x′i xj xk ∈ P implies that x′i ∈ P . Then xi^{a−1} + x′i = xj xk M ∈ P ,
which implies M ∈ P . Additionally, xj^a + xk^a = xi xj xk M − xi^a ∈ P .
Therefore, P contains at least one of the four prime ideals
(6.1)
< x1 , x2 , x3 >, < x1 , xa2 + xa3 , M >, < x2 , xa1 + xa3 , M >, < x3 , xa1 + xa2 , M >
Since {x1 , x2 }, {x1 , M }, {x2 , M }, and {x3 , M } are each regular sequences in S, it follows
that codim(Dx ) ≥ 2. By Lemma 4.4.2, S = U.
Remark 6.2.3. The final step of the proof has some interesting geometric content. In
this case, D = Dx , and the four prime ideals (6.1) are the minimal primes containing D.
Geometrically, they define the irreducible components of V (D); that is, the complement
of the cluster tori.
One of these components (x1 = x2 = x3 = 0) is an affine line on which every cluster
variable vanishes. The other 3 components (xi = xaj + xak = M = 0) are geometrically
reducible; over C they each decompose into a-many affine lines. Over C, V (D) consists of
3a + 1-many affine lines, which intersect at the point x1 = x2 = x3 = M = 0 and nowhere
else.
6.3. The (3, 3, 2) cluster algebra. Consider the initial seed in Figure 5. The cluster
algebra A3,3,2 is non-acyclic, by Theorem 6.1.1. Up to permuting the vertices, it is the
only non-acyclic Aa,b,c with 0 ≤ a, b, c ≤ 3 besides A2,2,2 and A3,3,3 .
11A ring is Cohen-Macaulay if and only if it satisfies the Sn property for every n.
B =
[  0  −3   2 ]
[  3   0  −3 ]
[ −2   3   0 ]
Q is the associated quiver on the vertices x1 , x2 , x3 .
Figure 5. The exchange matrix and quiver for the (3, 3, 2) cluster algebra.
Proposition 6.3.1. The upper cluster algebra U3,3,2 is generated over Z by
x1 , x2 , x3 ,
y0 = (x1^2 + x2^3 + x3^2 ) / (x1 x3 ),
y1 = (x1 x2^3 + x2^3 x3 + x1^3 + x3^3 ) / (x1 x2 x3 ),
y2 = (x2^6 + 2x1^2 x2^3 + x1 x2^3 x3 + 2x2^3 x3^2 + x1^4 + x1^3 x3 + x1 x3^3 + x3^4 ) / (x1^2 x2 x3^2 ),
y3 = (x2^9 + 3x1^2 x2^6 + 3x2^6 x3^2 + 3x1^4 x2^3 + 3x1^2 x2^3 x3^2 + 3x2^3 x3^4 + x1^6 + 2x1^3 x3^3 + x3^6 ) / (x1^3 x2^2 x3^3 ).
The ideal of relations is generated by the elements
y2^2 = y0 y3 + 2y3 ,    y0^2 = x2 y2 − y0 + 2,
y1 y2 = x1 y3 + x3 y3 ,    y0 y2 = x2 y3 + y2 ,
y0 y1 = x1 y2 + x3 y2 − 2y1 ,    x1 y0 + x3 y0 = x2 y1 + x1 + x3 ,
x2^2 y2 = x1 x3 y3 + 3x2 y0 − y1^2 − 3x2 ,    x2^2 y0 = x1 x3 y2 + x2^2 − x1 y1 − x3 y1 ,
x2^3 + x3^2 y0 = x2 x3 y1 − x1^2 + x1 x3 .
Proof. Since a = 3, b = 3, c = 2, and abc − a2 − b2 − c2 + 4 = 0, Theorem 6.1.1 implies
that A is not acyclic. Thus, Proposition 6.1.2 asserts that A is totally coprime. Let S be
the domain in F (A) generated by the seven listed elements. Using Lemma 5.1.1 and a
computer, we see that the ideal of relations in S is generated by the elements above.
The following identities imply that Lx ⊆ S.
x′1 = x3 y0 − x1 , x′2 = x1 x3 y1 − x1 x22 − x22 x3 , x′3 = −x3 y0 + x2 y1 + x1
The following identities imply that S ⊆ Ux .
y0 =
y1
3
′
(x32 + x23 )2 + x′2
(x1 + x3 )3 (x21 − x1 x3 + x23 )2 + x′3
2
1 (x2 + x3 x1 )
=
x2 x3 x′2
x1 x3 x′2
1
2
3
′
(x21 + x32 )2 + x′2
3 (x2 + x1 x3 )
x1 x2 x′2
3
=
=
y2
(x3 + x33 )3 + (x21 + x23 )x′3
x2 + x32 + x′2
x32 + x23 + x′2
2
3
1
= 1
= 1
′
′3
x3 x1
x1 x3 x2
x1 x′3
=
=
=
′3
′4
(x32 + x23 )2 + x3 (x32 + x23 )x′1 + 2x32 x′2
1 + x3 x1 + x1
2
′2
x2 x3 x1
3
3 5
2
′6
(x1 + x3 ) + (2x1 + x1 x3 + 2x23 )(x31 + x23 )2 x′3
2 + (x1 + x3 )x2
2
2
′5
x1 x3 x2
′2
2
′
3
2
(x1 + x2 − x1 x3 + x3 )(x1 + x32 + 2x1 x′3 + x′2
3 )
x21 x2 x′2
3
y3
=
=
=
2 3
2
′
′2
(x32 + x23 − x3 x′1 + x′2
1 ) (x2 + x3 + 2x3 x1 + x1 )
2
3
′3
x2 x3 x1
3 2
2
2
2 3
′3
((x1 + x3 ) (x1 − x1 x3 + x23 )2 + x′3
2 ) ((x1 + x3 )(x1 − x1 x3 + x3 ) + x2 )
3
3
′7
x1 x3 x2
2
3
′
′2 2 2
3
(x1 + x2 − x1 x3 + x3 ) (x1 + x2 + 2x1 x′3 + x′2
3)
x31 x22 x′3
3
A computer verifies that (Sx1 x2 x3 : (SDx )∞ ) = Sx1 x2 x3 . By Lemma 4.4.2, S = U.
Remark 6.3.2. This example serves as a ‘proof of concept’ for the algorithm of Section
5.2. The above generating set has no distinguishing properties known to the authors; it is
merely the generating set produced by an implementation of this algorithm.
7. Larger examples
We explicitly present a few other non-acyclic upper cluster algebras.
7.1. The Markov cluster algebra with principal coefficients. Consider the initial
seed in Figure 6. As in the previous section, this seed has 3 mutable variables, but it
has principal coefficients – a frozen variable for each mutable variable, and the exchange
matrix extended by an identity matrix. Results about principal coefficients and why they
are important can be found in [FZ07].
B =
[  0  −2   2 ]
[  2   0  −2 ]
[ −2   2   0 ]
[  1   0   0 ]
[  0   1   0 ]
[  0   0   1 ]
Q is the associated quiver on the mutable vertices x1 , x2 , x3 and the frozen vertices f1 , f2 , f3 .
Figure 6. The exchange matrix and quiver for the Markov cluster algebra with principal coefficients.
Proposition 7.1.1. The upper cluster algebra U is generated over Z[f1±1 , f2±1 , f3±1 ] by
x1 , x2 , x3 ,
L1 = (x2^2 + f2 f3 x3^2 + f3 x1^2 ) / (x2 x3 ),    L2 = (x3^2 + f3 f1 x1^2 + f1 x2^2 ) / (x3 x1 ),    L3 = (x1^2 + f1 f2 x2^2 + f2 x3^2 ) / (x1 x2 ),
y1 = (f1 L1^2 + (f1 f2 f3 − 1)^2 ) / x1 ,    y2 = (f2 L2^2 + (f1 f2 f3 − 1)^2 ) / x2 ,    y3 = (f3 L3^2 + (f1 f2 f3 − 1)^2 ) / x3 .
The ideal of relations is generated by the elements
x1 x2 L3 = x21 + f1 f2 x22 + f2 x23 , y1 y2 L3 = f1 f2 y12 + y22 + f1 y32
x2 x3 L1 = x22 + f2 f3 x23 + f3 x21 ,
x3 x1 L2 = x23 + f3 f1 x21 + f1 x22 ,
f3 x1 L3 − x3 L1 = αx2 ,
f1 x2 L1 − x1 L2 = αx3 ,
y2 y3 L1 = f2 f3 y22 + y32 + f2 y12
y3 y1 L2 = f3 f1 y32 + y12 + f3 y22
f1 L1 y3 − L3 y1 = αy2
f2 L2 y1 − L1 y2 = αy3
f2 x3 L2 − x2 L3 = αx1 , f3 L3 y2 − L2 y3 = αy1
x1 L2 L3 = f1 f2 x2 L2 + f1 x1 L1 + x3 L3 , y1 L2 L3 = y2 L2 + f1 y1 L1 + f1 f3 y3 L3
x2 L3 L1 = f2 f3 x3 L3 + f2 x2 L2 + x1 L1 , y2 L3 L1 = y3 L3 + f2 y2 L2 + f2 f1 y1 L1
x3 L1 L2 = f3 f1 x1 L1 + f3 x3 L3 + x2 L2 , y3 L1 L2 = y1 L1 + f3 y3 L3 + f3 f2 y2 L2
x2 y3 = f2 f3 L2 L3 − αL1 , x3 y1 = f3 f1 L3 L1 − αL2 , x1 y2 = f1 f2 L1 L2 − αL3
x1 y3 = L1 L3 + f2 αL2 , x2 y1 = L2 L1 + f3 αL3 , x3 y2 = L3 L2 + f1 αL1
x1 y1 = f1 L21 + α2 , x2 y2 = f2 L22 + α2 , x3 y3 = f3 L23 + α2
L1 L2 L3 − f1 L21 − f2 L22 − f3 L23 = α2
where α := f1 f2 f3 − 1.
Proof. The exchange matrix B for the initial seed above contains a submatrix that is a
scalar multiple of the identity, thus B is full rank. Theorem 2.4.3 asserts that A is totally
coprime. Let S be the domain in F (A) generated by the twelve listed elements. Using
Lemma 5.1.1 and a computer, we see that the ideal of relations in S is generated by the
elements above.
The following identities imply that Lx ⊆ S.
x′1 = x3 L2 − f3 f1 x1 , x′2 = x1 L3 − f1 f2 x2 , x′3 = x2 L1 − f2 f3 x3
The following identities imply that S ⊆ Ux .
L1 =
2
2
2
′2 2
2
x′2
x2 + x23 f2 + f3 x′2
x′2 + f2 f3 (x22 + x21 f3 )
1 x2 + x1 x3 f2 f3 + f3 (x3 + x2 f1 )
2
= 1
= 3
′2
′
x1 x2 x3
x2 x3
x2 x′3
2
′2 2
2
2
2
x2 + x21 f3 + f1 x′2
x′2 + f3 f1 (x23 + x22 f1 )
x′2
3
2 x3 + x2 x1 f3 f1 + f1 (x1 + x3 f2 )
= 2
= 1
′2
′
x2 x3 x1
x3 x1
x3 x′1
2
2
2
2
x2 + x22 f1 + f2 x′2
x′2 + f1 f2 (x21 + x23 f2 )
x′2 x2 + x′2
1
3 x2 f1 f2 + f2 (x2 + x1 f3 )
= 3
= 2
L3 = 3 1
′2
′
x3 x1 x2
x1 x2
x1 x′2
L2 =
y1
=
=
=
y2
=
=
=
y3
=
=
=
2
′4 2
2 2
′2
2
2
2
2
2
2
3
′2
2
2
2
2
x′4
1 x2 +x1 x3 f1 f2 f3 +2x1 (x3 +x2 f1 )x2 f1 f3 +f1 f3 (x3 +x2 f1 ) +2x1 (x3 +x2 f1 )x3 f1 f2 f3
2 2
x′2
1 x2 x3
2
2 ′2 2
f1 (x21 + x23 f2 + f3 x′2
2 ) + (f1 f2 f3 − 1) x2 x3
2
x1 x′2
2 x3
′3
′
2
f1 (x1 (x3 + x3 f2 f3 (x2 + x21 f3 )))2 + (f1 f2 f3 − 1)2 x21 x22 x′4
3
x31 x22 x′4
3
2
′4 2
2 2
′2
2
2
2
2
2
2
3
′2
2
2
2
2
x′4
2 x3 +x2 x1 f2 f3 f1 +2x2 (x1 +x3 f2 )x3 f2 f1 +f2 f1 (x1 +x3 f2 ) +2x2 (x1 +x3 f2 )x1 f2 f3 f1
2 x2
x′2
x
2 3 1
2
2 ′2 2
f2 (x22 + x21 f3 + f1 x′2
3 ) + (f2 f3 f1 − 1) x3 x1
′2
2
x2 x3 x1
′3
′
f2 (x2 (x1 + x1 f3 f1 (x23 + x22 f1 )))2 + (f2 f3 f1 − 1)2 x22 x23 x′4
1
x32 x23 x′4
1
2
′4 2
2 2
′2
2
2
2
2
2
2
3
′2
2
2
2
2
x′4
3 x1 +x3 x2 f3 f1 f2 +2x3 (x2 +x1 f3 )x1 f3 f2 +f3 f2 (x2 +x1 f3 ) +2x3 (x2 +x1 f3 )x2 f3 f1 f2
2 x2
x′2
x
3 1 2
2
2 ′2 2
f3 (x23 + x22 f1 + f2 x′2
1 ) + (f3 f1 f2 − 1) x1 x2
′2
2
x3 x1 x2
′3
′
f3 (x3 (x2 + x2 f1 f2 (x21 + x23 f2 )))2 + (f3 f1 f2 − 1)2 x23 x21 x′4
2
x33 x21 x′4
2
A computer verifies that (Sx1 x2 x3 : (SDx )∞ ) = Sx1 x2 x3 . By Lemma 4.4.2, S = U.
This presentation is enough to demonstrate an unfortunate pathology of upper cluster
algebras. If B is an exchange matrix, and B† is an exchange matrix obtained from B by
deleting some rows corresponding to frozen variables, then there are natural ring maps
s : A(B) → A(B† ),
s : U(B) → U(B† )
which send the deleted frozen variables to 1. It may be naively hoped that the map on
upper cluster algebras is a surjection, but this does not always happen.
Corollary 7.1.2. For B as in Figure 6 and B2,2,2 as in Figure 3, the map
s : U(B) → U(B2,2,2 )
for which s(xi ) = xi and s(fi ) = 1 is not a surjection.
7.2. The ‘dreaded torus’. Consider the initial seed in Figure 7.
Figure 7. The exchange matrix B and quiver Q for the dreaded torus cluster algebra, on the mutable vertices a, b, c, d and the frozen vertex f .
Proposition 7.2.1. The upper cluster algebra U is generated over Z[f ±1 ] by
a, b, c, d,
X = (b^2 + c^2 + ad) / (bc),    Y = (ad^2 + ac^2 + bcf + b^2 d) / (acd),    Z = (a^2 d + ac^2 + bcf + b^2 d) / (abd).
The ideal of relations is generated by the elements
bcX = b2 + c2 + ad,
cY − bZ = d − a
acX − adZ = ab − bd − cf, bdX − adY = cd − ac − bf
bXZ − aX − bY − cZ = f.
Proof. The exchange matrix B is full rank, and so Theorem 2.4.3 asserts that A is totally
coprime. Let S be the domain in F (A) generated by the eight listed elements. Using
Lemma 5.1.1 and a computer, we see that the ideal of relations in S is generated by the
elements above.
The following identities imply that Lx ⊆ S.
a′ = −cX + dZ + b, b′ = cX − b, c′ = bX − c, d′ = −bX + aY + c
The following identities imply that S ⊆ Ux .
X
=
=
(b2 + c2 )a′ + (bd + cf )d
c2 + ad + b′2
=
a′ bc
cb′
2
2 ′
2
(c + b )d + (ca + bf )a
b + da + c′2
=
′
d cb
bc′
Y
=
=
b′2 ad2 + b′2 ac2 + b′ (c2 + ad)cf + (c2 + ad)2 d
d2 + c2 + a′ b
=
cd
ab′2 cd
′2
2
′
c d + a(b + ad) + c bf
a(ac + bf ) + d′2 c + d′ b2
=
ac′ d
acd′
c′2 da2 + c′2 db2 + c′ (b2 + da)bf + (b2 + da)2 a
a2 + b2 + d′ c
=
ba
dc′2 ba
′2
2
′
d(db + cf ) + a′2 b + a′ c2
b a + d(c + da) + b cf
=
=
db′ a
dba′
A computer verifies that (Sabcd : (SDx )∞ ) = Sabcd. By Lemma 4.4.2, S = U.
This presentation makes it easy to explore the geometry of Spec(U). One interesting
result is the following, which can be proven by computer verification.
Proposition 7.2.2. The induced deep ideal UD is trivial.
As a consequence, Spec(U) is covered by the cluster tori {Spec(Z[x1^{±1} , ..., xn^{±1} ])} coming
from the clusters of U. Since affine schemes are always quasi-compact,12 this cover has a
finite subcover; that is, some finite collection of cluster tori covers Spec(U).
Remark 7.2.3. This cluster algebra comes from a marked surface with boundary (via
the construction of [FST08]); specifically, the torus with one boundary component and a
marked point on that boundary component. In this perspective, the additional generators
X, Y, Z correspond to loops.
The epithet ‘the dreaded torus’ was coined by Gregg Musiker in a moment of frustration
– among cluster algebras of surfaces, it lies in the grey area between having enough marked
points to be well-behaved [Mul13, MSW11] and having few enough marked points to be
provably badly-behaved (like the Markov cluster algebra). As a consequence, it is still not
clear whether A = U in this case (despite the presentation for U above).
8. Acknowledgements
This paper would not have been possible without helpful insight from S. Fomin, J.
Rajchgot, D. Speyer, and K. Smith. This paper owes its existence to the VIR seminars in
Cluster Algebras at LSU, and to the second author’s time at MSRI’s Thematic program
on Cluster Algebras.
References
[BBH11] Andre Beineke, Thomas Brüstle, and Lutz Hille, Cluster-cyclic quivers with three vertices and
the Markov equation, Algebr. Represent. Theory 14 (2011), no. 1, 97–112, With an appendix
by Otto Kerner. MR 2763295 (2012a:16028)
[BFZ05] Arkady Berenstein, Sergey Fomin, and Andrei Zelevinsky, Cluster algebras. III. Upper bounds
and double Bruhat cells, Duke Math. J. 126 (2005), no. 1, 1–52. MR 2110627 (2005i:16065)
[Eis95]
David Eisenbud, Commutative algebra, Graduate Texts in Mathematics, vol. 150, SpringerVerlag, New York, 1995, With a view toward algebraic geometry. MR MR1322960 (97a:13001)
[FG06]
Vladimir Fock and Alexander Goncharov, Moduli spaces of local systems and higher Teichmüller
theory, Publ. Math. Inst. Hautes Études Sci. (2006), no. 103, 1–211. MR 2233852 (2009k:32011)
[FG07]
Vladimir V. Fock and Alexander B. Goncharov, Dual Teichmüller and lamination spaces, Handbook of Teichmüller theory. Vol. I, IRMA Lect. Math. Theor. Phys., vol. 11, Eur. Math. Soc.,
Zürich, 2007, pp. 647–684. MR 2349682 (2008k:32033)
[FP13]
Sergey Fomin and Pavlo Pylyavskyy, Tensor diagrams and cluster algebras, preprint, arxiv:
1210.1888 (2013).
12[Har77, Exercise 2.13.b].
[FST08]
Sergey Fomin, Michael Shapiro, and Dylan Thurston, Cluster algebras and triangulated surfaces.
I. Cluster complexes, Acta Math. 201 (2008), no. 1, 83–146. MR 2448067 (2010b:57032)
[FZ02]
Sergey Fomin and Andrei Zelevinsky, Cluster algebras. I. Foundations, J. Amer. Math. Soc. 15
(2002), no. 2, 497–529 (electronic). MR 1887642 (2003f:16050)
[FZ03]
, Cluster algebras. II. Finite type classification, Invent. Math. 154 (2003), no. 1, 63–121.
MR 2004457 (2004m:17011)
, Cluster algebras. IV. Coefficients, Compos. Math. 143 (2007), no. 1, 112–164.
[FZ07]
MR 2295199 (2008d:16049)
[GLS08] Christof Geiss, Bernard Leclerc, and Jan Schröer, Partial flag varieties and preprojective algebras, Ann. Inst. Fourier (Grenoble) 58 (2008), no. 3, 825–876. MR 2427512 (2009f:14104)
[GSV03] Michael Gekhtman, Michael Shapiro, and Alek Vainshtein, Cluster algebras and Poisson geometry, Mosc. Math. J. 3 (2003), no. 3, 899–934, 1199, {Dedicated to Vladimir Igorevich Arnold
on the occasion of his 65th birthday}. MR 2078567 (2005i:53104)
[Har77] Robin Hartshorne, Algebraic geometry, Springer-Verlag, New York, 1977, Graduate Texts in
Mathematics, No. 52. MR MR0463157 (57 #3116)
[MSW11] Gregg Musiker, Ralf Schiffler, and Lauren Williams, Positivity for cluster algebras from surfaces,
Adv. Math. 227 (2011), no. 6, 2241–2308. MR 2807089
[Mul13] Greg Muller, Locally acyclic cluster algebras, Adv. Math. 233 (2013), 207–247. MR 2995670
[Sco06]
Joshua S. Scott, Grassmannians and cluster algebras, Proc. London Math. Soc. (3) 92 (2006),
no. 2, 345–380. MR 2205721 (2007e:14078)
[Spe13]
David Speyer, An infinitely generated upper cluster algebra., preprint, arxiv: 1305.6867 (2013).
Department of Mathematics, Louisiana State University, Baton Rouge, LA 70808, USA
E-mail address: [email protected]
Department of Mathematics, Louisiana State University, Baton Rouge, LA 70808, USA
E-mail address: [email protected]
A Simple Convex Layers Algorithm
Raimi A. Rufai1 and Dana S. Richards2
arXiv:1702.06829v2 [cs.CG] 16 Mar 2017
1
SAP Labs, 111 rue Duke, Suite 9000, Montreal QC H3C 2M1, Canada
[email protected]
2
Department of Computer Science, George Mason University,
4400 University Drive MSN 4A5, Fairfax, VA 22030, USA
[email protected]
Abstract. Given a set of n points P in the plane, the first layer L1 of
P is formed by the points that appear on P ’s convex hull. In general,
a point belongs to layer Li if it lies on the convex hull of the set P \ ⋃_{j<i} Lj .
The convex layers problem is to compute the convex layers Li .
Existing algorithms for this problem either do not achieve the optimal
O (n log n) runtime and linear space, or are overly complex and difficult
to apply in practice. We propose a new algorithm that is both optimal
and simple. The simplicity is achieved by independently computing four
sets of monotone convex chains in O (n log n) time and linear space.
These are then merged in O (n log n) time.
Keywords: convex hull, convex layers, computational geometry
1
Introduction
Algorithms for the convex layers problem that achieve optimal time and space
complexities are arguably complex and discourage implementation. We give a
simple O (n log n)-time and linear space algorithm for the problem, which is
optimal. Our algorithm computes four quarter convex layers using a plane-sweep
paradigm as the first step. The second step then merges these in O (n log n)-time.
Formally, the convex layers, L(P ) = {L1 , L2 , · · · , Lk }, of a set P of n ≥ 3
points is a partition of P into k ≤ ⌈n/3⌉ disjoint subsets Li , i = 1, 2, · · · , k, called
layers, such that each layer Li is the ordered3 set of vertices of the convex hull
of P \ ⋃_{j<i} Lj . Thus, the outermost layer L1 coincides exactly with the convex
hull of P , conv(P ). The convex layers problem is to compute L(P ).
Convex layers have several applications in various domains, including robust
statistics, computational geometry, and pattern recognition.
2
Related Work
A brute-force solution to the convex layers problem is obvious—construct each
layer Li as the convex hull of the set P \ ⋃_{j<i} Lj using some suitable convex hull
3 That is, the layers are polygons, not sets.
algorithm. The brute-force algorithm will take O(kn log n) time where k is the
number of the layers. We say this algorithm “peels off” a set of points one layer at
a time. This peeling approach is reminiscent of many convex layers algorithms.
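To make the peeling idea concrete, the following Python sketch (ours, not taken from any of the cited works) implements the brute-force baseline: each round computes the hull of the remaining points with Andrew's monotone chain algorithm and removes its vertices as the next layer.

# Brute-force convex layers by repeated peeling (illustrative sketch only).
def convex_hull(points):
    """Return the hull vertices in counterclockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(points):
    """Peel off convex hulls until no points remain; O(k n log n) overall."""
    remaining = list(set(points))
    layers = []
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        hull_set = set(hull)
        remaining = [p for p in remaining if p not in hull_set]
    return layers

if __name__ == "__main__":
    pts = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 1), (3, 1), (3, 3), (1, 3), (2, 2)]
    for i, layer in enumerate(convex_layers(pts), 1):
        print(f"L{i}:", layer)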
Another general approach to this problem is the plane-sweep paradigm.
One of the earliest works to take the peeling approach is Green and Silverman
[3]. Their algorithm repeatedly invokes quickhull to extract a convex layer at
each invocation. This algorithm runs in O(n3 ) worst-case time, and O(n2 log n)
expected time.
Overmars and van Leeuwen’s [5] algorithm for this problem runs in O(n log2 n).
It is based on a fully dynamic data structure for maintaining a convex hull under arbitrary insertions and deletions of points. Each of these update operations
takes O(log2 n) time. Constructing the convex layers reduces to inserting all the points into the data structure in O(n log2 n) time, marking points
on the current convex hull, deleting them, and then repeating the process for
the next layer. Since each point is marked exactly once and deleted exactly once,
these steps together take no more than O(n log2 n) time.
Chazelle’s [2] algorithm for this problem runs in O(n log n) time and O(n)
space, both of which are optimal. A new algorithm is discussed in Sect. 3.
(Chazelle [2] used a balanced tree approach as well as our new algorithm, but the
information stored in our tree corresponds to a very different set of polygonal
chains.)
The first algorithm on record that uses the plane-sweep approach is a modification of Jarvis march proposed by Shamos [6]. The algorithm works by doing
a radial sweep, changing the pivot along the way, just as in the Jarvis march,
but does not stop after processing all the points. It proceeds with another round
of Jarvis march that excludes points found to belong to the convex hull in the
previous iteration. This way, the algorithm runs in O(n2 ).
Nielsen [4] took advantage of Chan’s grouping trick [1] to obtain yet another
optimal algorithm for the convex layers problem. Nielsen’s algorithm is output-sensitive in that it can be parametrized by the number of layers l to be computed.
It runs in O(n log Hl ) time where Hl is the number of points appearing on the
first l layers.
3
New Algorithm
Our algorithm builds four sets of convex layers, each with a distinct direction
of curvature. The set of points P must be known ahead of time. For ease of
presentation, we will assume below that the points are in general position (no
three on a line and no two share a coordinate). Removing this assumption is easy
and our implementation does not make the assumption. Each point’s horizontal
ranking is precomputed by sorting the points using their x-coordinates.
A northwest monotone convex chain C = (p1 , p2 , · · · , pn ) has pi .x < pi+1 .x,
pi .y < pi+1 .y and no point in the chain is above the extended line through pi
and pi+1 , 1 ≤ i < n. The head (tail) of the chain C is defined as head(C) =
p1 (tail(C) = pn ). A full monotone convex chain is formed by augmenting
C = (p1 , p2 , · · · , pn ) with two fictional sentinel points, p0 = (p1 .x, −∞), and
pn+1 = (∞, pn .y). Note that given a full chain C, the calls head(C) and tail(C)
return p1 and pn and not the sentinels. The ∞’s are symbolic and the operations
are overloaded for them.
We call the chain northwest since it bows outward in that direction. Similarly
we will refer to a chain as southwest if it is northwest after rotating the point
set 90 degrees clockwise about the origin. Northeast and southeast are defined
analogously. Except in the section on merging below, all of our chains will be
northwest monotone convex chains, so we will simply call them chains.
We say a chain C1 precedes another chain C2 , if tail(C1 ).x < head(C2 ).x.
Additionally, we say a line is tangent to a chain, if it touches the chain and no
point in the chain is above the line.
Let chain C1 precede chain C2 . If tail(C1 ).y < tail(C2 ).y then a bridge
between the chains is a two-point chain (p1 , p2 ) where p1 is in chain C1 , p2
is in chain C2 and the line through p1 and p2 is tangent to both chains. If
tail(C1 ).y ≥ tail(C2 ).y then the chain (tail(C1 ), tail(C2 )) is a degenerate
bridge.
Let C = (p0 , p1 , p2 , · · · , pn , pn+1 ) be a full chain. C dominates a point p if p
is below the segment (pi , pi+1 ), for some 1 ≤ i ≤ n. A full chain C dominates a
full chain C ′ if C dominates every point of C ′ . The (northwest) hull of a point
set P , or just the hull chain of P , is the chain of points from P that dominates
every other point in P .
3.1
Hull Tree Data Structure
The hull tree T for the point set P , is a binary tree. T is either nil or has: a) A
root node that contains the hull chain for P , and b) A partition of the non-hull
points from P by some x coordinate into PL and PR . The left and right children
of the root are the hull trees T.L and T.R for PL and PR , respectively.
The root node contains the fields: a) T.hull is the full hull chain; b) T.l is a
cursor into the hull chain, that initially scans rightwards, and c) T.r is a cursor
into the hull chain, that initially goes leftwards.
The reason for the two cursors is so that we can be explicit about how the
hull chain is scanned. Our analysis depends on the claim that a point is only
scanned a constant number of times before being deleted from that chain. We
will maintain the invariant that T.l is never to the right of T.r. (For example,
if T.r coincides with T.l and T.r moves left then T.l will be pushed back to
maintain the invariant.)
As a preprocessing step, the points are sorted by x-coordinates and assigned
a 0-based integer rank, represented by a ⌈log n⌉ bit binary string. We will use
a perfectly balanced binary tree as the skeleton for our hull tree. It will have
n leaves with no gaps from left to right. The skeleton is only implicit in our
construction. The leaves will be associated with the points by rank from left to
right. We will use the conventional trick that the path from the root to a leaf
is determined by the binary representation of the rank of the leaf, going left on
0 and right on 1. We now specify PL as the set of points, not in the hull chain,
whose rank starts with 0 in binary and PR is analogous with a leading bit of 1.
We will say a point in, say, PL “belongs” to the subtree T.L.
As hull points are peeled off, the corresponding leaf nodes will become obsolete, but we never recalculate ranks. It follows that the skeleton of the tree
will always be of height ⌈log n⌉, even though, over time, more and more of the
bottom of the tree will become vacant. In addition to this invariant, we explicitly
mention these other invariants: (a) T.hull is a full northwest monotone convex
chain, (b) T.hull is the hull of P , (c) T.l.x ≤ T.r.x, and (d) T.L and T.R are the
hull trees for PL and PR .
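Purely as an illustration (this is our own sketch, not the authors' implementation), the node structure and the rank-guided descent described above can be written as follows; the field names mirror the text, but everything else is ours.

# Sketch of a hull-tree node; the fields follow the description in Section 3.1.
class HullTreeNode:
    def __init__(self, hull):
        self.hull = list(hull)        # full northwest monotone convex chain (with sentinels)
        self.l = 0                    # left-to-right cursor (an index into hull)
        self.r = len(self.hull) - 1   # right-to-left cursor
        self.L = None                 # hull tree of evicted points whose rank bit is 0
        self.R = None                 # hull tree of evicted points whose rank bit is 1

def child_direction(rank, depth, height):
    """0 = descend left, 1 = descend right: the bit of the point's ceil(log n)-bit
    rank at the given depth, read from the most significant bit."""
    return (rank >> (height - 1 - depth)) & 1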
Lemma 1. The space complexity of a hull-tree T for a set P of n points is
Θ (n).
Proof. The skeleton of the binary tree with n leaves clearly has O(n) nodes of
constant size. There is also the space for all the lists representing the various hull
chains. However, from the definitions of PL and PR , every point is on exactly
one hull chain. This completes the proof.
⊓
⊔
3.2
Tree Construction
The construction of the hull tree is done by repeated insertions into an initially
empty tree. The overall procedure is a plane-sweep algorithm, since the inserted
points will have increasing y-coordinates. We shall come back to the buildTree
routine after first looking into the insert algorithm.
3.2.1 insert. Algorithm insert is a recursive algorithm. Rather than insert
a single point at a time, we feel it is more natural to batch several such insertions
when we can first link them into a chain. It takes as input a chain C of vertices
and a hull tree T . Because insert will only be used in a plane-sweep manner,
we will be able to assume as a precondition that C is nonempty, no point in C
was previously inserted, and no point in C is dominated by the initial hull tree
of T .
We specify the procedure tangents(al , ar , T ), which assumes al .x < ar .x,
al .y < ar .y, and neither al nor ar is dominated by T.hull = (h0 , h1 , . . . , hk , hk+1 ).
It returns a pair of points (ql , qr ), each from T.hull. We require the line through
ql and al be a leftward tangent. If h1 .x < al .x and h1 .y < al .y, this is well-defined. Otherwise, we return a degenerate tangent with ql = h0 . Similarly, if
ar .x < hk .x and ar .y < hk .y, then qr defines a rightward tangent with ar .
Otherwise, we return qr = hk+1 .
We sketch the implementation of tangents. If the leftward tangent is well-defined, we compute ql by scanning from the current position of T.l. In constant
time we can determine if we should scan to the left or to the right. (As is standard
we keep track of the changing slopes of lines through al .) Similarly, if the tangent
is well-defined, we scan for qr using T.r.
Lemma 2. Algorithm insert correctly inserts C into T .
Algorithm 3.1: insert(C, T )
Input : C, a chain of points to be inserted into T ,
T , the hull tree for some point set P .
Output: T , the hull tree for P ∪ C.
if T = nil then
Create a root node with T.hull = C
T.l = head(C); T.r = tail(C)
else
(ql , qr ) = tangents(head(C), tail(C), T )
C ′ = the portion of T.hull strictly between ql and qr
Replace C ′ by C within T.hull
Scan and split C ′ to create these two chains
CL = {p ∈ C ′ | p belongs in T.L}; CR = {p ∈ C ′ | p belongs in T.R}
T.L = insert(CL , T.L) ; T.R = insert(CR , T.R)
return T
Proof. The proof is by induction on the number of points. The base case, where
T is empty, is clear. In general, we only need to establish that the new T.hull
is correct; this follows since all the points removed, in C ′ , are dominated by the
new hull. By the recursive definition of hull trees the points of C ′ now belong in
either T.L or T.R and are recursively inserted into those trees.
⊓
⊔
3.2.2 buildTree. Given a point set P , algorithm buildTree starts by
sorting these points by their x-coordinates. The 0-based index of a point p in
such a sorted order is called its rank. As discussed above, a point’s rank is used
to guide its descent down the hull tree during insertion.
Algorithm 3.2: buildTree(P )
Input : P , a set of points, {p1 , p2 , . . . , pn }.
Output: T , a hull tree built from P .
Compute the rank of each point in P by x-coordinate
Sort the points in P by increasing y-coordinate
Create an empty hull tree T
for each p in order do
insert(p, T )
return T
Recall that the insert procedure expects a hull chain as the first parameter,
so the call to insert in buildTree is understood to be a chain of one vertex.
Note that such singleton chains satisfy the preconditions of insert. Once all the
points have been inserted, the hull tree is returned.
Lemma 3. Right after a point p is inserted into a hull tree T , tail(T.hull) = p.
Proof. Since points are inserted into T by increasing y-coordinate value, the
most recently inserted point must have the largest y coordinate value so far. So
it must be in the root hull and cannot have any point after it.
⊓
⊔
Lemma 4. Algorithm buildTree constructs a hull tree of a set of n points in
O (n log n) time.
Proof. Clearly the initial steps are within the time bound. It remains only to
show that all the invocations of insert take no more than O (n log n) time.
Consider an arbitrary point p inserted into T by buildTree. Initially, it
goes into the T.hull by Lemma 3. In subsequent iterations, the point either stays
within its current hull chain or descends one level owing to an eviction from its
current hull chain. The cost of all evictions from a chain C is dominated by the
right-to-left tangent scan. We consider the number of points we scan past (i.e.
not counting the points at the beginning and end of scan). Consider the cursor
T.l. It scans left to right past points once; if we scan past a point a second time,
going right to left, then that point will be in C ′ and will be evicted from this
level. Symmetric observations hold for T.r. And a point will be scanned a final
time if it is in CL or CR . Hence we will scan past a point a constant number of
times before it is evicted.
A call to insert takes constant time every time it is invoked (and it is only
invoked when at least one point has been evicted from its parent). In addition
insert takes time bounded by the number of points scanned past. Note that any
particular point p starts at the root and only moves downward (when evicted)
and there are only O(log n) levels. Hence during the execution of buildTree
both the total number of points evicted and the total number of points scanned
past is bounded by O(n log n).
⊓
⊔
Lemma 5. Each point is handled by buildTree in O (log n) amortized time.
Proof. By Lemma 4, the cost of all invocations of insert by algorithm buildTree
is O (n log n), which amortizes to O (log n) per point.
⊓
⊔
3.3
Hull Peeling
We begin the discussion of hull peeling by examining algorithm extractHull,
which takes a valid hull tree T , extracts from it the root hull chain T.hull, and
then returns it.
Algorithm 3.3: extractHull(T )
Input : T , a hull tree for a non-empty point set P ; let H be the set of points
in T.hull.
Output: the hull h, and T , a hull tree for the point set P \ H
h = T.hull
delete(h, T )
return h
The correctness and cost of algorithm extractHull obviously depend on
delete. delete is called after a subchain has been cut out of the middle of
the root hull chain. This can be visualized if we imagine the root hull as a roof.
Further there is a left and a right “overhang” remaining after the middle of a roof
has caved in. The overhang might degenerate to just a sentinel point. Algorithm
delete itself also depends on other procedures which we discuss first.
3.3.1 below. We could just connect the two endpoints of the overhangs with
a straight line to repair the roof. However because of the curvature of the old
roof, some of the points in T.L or T.R might be above this new straight line. In
that case, these points need to move out of their subtrees and join in to form
the new root hull chain.
Therefore, we will need a Boolean function below(T, p, q) to that end. It
returns true if there exists a tangent of the root hull such that both p and q are
above it. A precondition of below(T, p, q) is that the root hull can rise above
the line through p and q only between p and q.
We also specify the Boolean function above(p, q, r) to be true if point r is
above the line passing through p and q. This is done with a standard constant
time test. Note that below is quite different than above. The functions pred
and succ operate on the corresponding hull chain in the obvious way.
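The constant-time test is the familiar cross-product orientation check; the small Python sketch below (ours, not the authors' code) shows one way to write it for non-vertical lines.

def above(p, q, r):
    """True if r = (x, y) lies strictly above the non-vertical line through p and q.
    Illustrative sketch of the standard constant-time orientation test."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    if px > qx:                      # orient the segment left-to-right
        px, py, qx, qy = qx, qy, px, py
    # positive cross product  <=>  r lies to the left of p -> q  <=>  above the line
    return (qx - px) * (ry - py) - (qy - py) * (rx - px) > 0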
Algorithm 3.4: below(T, pl , pr )
Input : T , a hull tree,
pl : the rightmost end in the left overhang,
pr : the leftmost point in the right overhang
Output: True iff every point of T.hull is below the line through pl and pr
if pl or pr is a sentinel then return false
if above(pred(T.r), T.r, pl ) then
while ¬ above(T.l, succ(T.l), pr ) and ¬ above(pl , pr , T.l) do
T.l = succ(T.l)
return ¬ above(pl , pr , T.l)
else
while ¬ above(pred(T.r), T.r, pl ) and ¬ above(pl , pr , T.r) do
T.r = pred(T.r)
return ¬ above(pl , pr , T.r)
Lemma 6. Algorithm below runs in linear time. Further, if it returns false
either T.l or T.r is above the line through pl and pr .
Proof. Recall that one cursor may push the other cursor as it moves. Only one
cursor moves. If T.r is too far left to help decide, then T.l moves until it is above
the line or we find a tangent.
⊓
⊔
3.3.2 getBridge. Given two hull trees, where one precedes the other, algorithm getBridge scans the hull chains of the hull trees to find the bridge that
connects them.
Algorithm 3.5: getBridge(T.L, T.R)
Input : T.L: a left hull tree of some hull tree T ,
T.R: a right hull tree of T
Output: pl , pr : the left and right bridge points for T.L and T.R
if T.L = nil then pl = (−∞, −∞)
else pl = T.L.l
if T.R = nil or tail(T.L.hull).y > tail(T.R.hull).y then
pr = (+∞, −∞)
else pr = T.R.r
if above(T.L.l, T.R.r, succ(T.L.l)) then
T.L.l = succ(T.L.l)
(pl , pr ) = getBridge(T.L, T.R)
if above(T.L.l, T.R.r, pred(T.L.l)) then
T.L.l = pred(T.L.l)
(pl , pr ) = getBridge(T.L, T.R)
if above(T.L.l, T.R.r, succ(T.R.r)) then
T.R.r = succ(T.R.r)
(pl , pr ) = getBridge(T.L, T.R)
if above(T.L.l, T.R.r, pred(T.R.r)) then
T.R.r = pred(T.R.r)
(pl , pr ) = getBridge(T.L, T.R)
return (pl , pr )
Lemma 7. Given two valid hull trees T.L and T.R, getBridge correctly computes the bridge connecting T.L.hull and T.R.hull in time linear in the lengths
of those hulls.
Proof. The scan for the left bridge point in T.L is done using its left-to-right
cursor T.L.l, while the scan for the right bridge point in T.R is done using T.R’s
right-to-left cursor T.R.r. On completion, the two cursors will be pointing to
the bridge points. Since each vertex is scanned past at most once, the runtime
is O (|T.L.hull| + |T.R.hull|). This completes the proof.
⊓
⊔
3.3.3 delete. The general idea of delete(C, T ) is that if C is a subchain
of T.hull then the procedure will return with the tree T being a valid hull tree
for the point set P \ H, where H is the set of points in C. The procedure will
be employed during the peeling process, and so C will initially be the entire
root hull. The root hull will have to be replaced by moving points up from the
subtrees. In fact the points moved up will be subchains of the root hulls of T.L
and T.R. Recursively, these subtrees will in turn need to repair their root hulls.
We do a case analysis below and find that the only procedure we will need is
delete.
Again we shall employ the analogy of a roof caving in, in the middle. The
rebuilding of the “roof” starts with identifying the endpoints of the remaining
left and right overhangs. These points will be the sentinels if the overhangs are
empty. The endpoints al and ar define a line segment, (al , ar ), which we shall
call the roof segment.
Before continuing, let us examine the dynamics of the points in the root
hull of a (sub)tree T during successive invocations of delete. Successive calls
might cause the roof segment (al , ar ) to get bigger. For successive roof segments,
the root hull of T is queried. There are two phases involved: before the root hull
intersects the roof segment, and thereafter. During the first phase, each new roof
segment is below the previous one (cf. Fig. 1 and Fig. 2). During the first phase,
the root hull is not changed but is queried by a series of roof segments (al , ar ).
In the second phase, it gives up its subchain from T.l to T.r to its parent (or
is extracted). Thereafter, for each new excision, T.l and T.r will move further
apart, until each becomes a sentinel. This is shown inductively on the depth of
the subtree.
In the first phase the cursors (T.l and T.r) start at the head and tail of
the list and move in response to below queries. Each cursor will move in one
direction at first and then, only once, change direction. This is because each
subsequent query has (al , ar ) moving apart on the parent’s convex chain. See
Fig. 1 and Fig. 2. Now we examine the algorithm more carefully.
Fig. 1. Before al and ar move up.
Fig. 2. After al and ar have moved up.
The rebuilding process breaks into four cases depending on whether any
points from TL and TR are above the roof segment and hence will be involved
in the rebuilding.
Case 1. Neither subtree is needed to rebuild the roof.
This case, depicted in Fig. 3, is when the deletion of subchain C from T.hull
leaves a hull that already dominates all other points.
Case 2. Only the right subtree is needed to rebuild the roof.
This case, depicted in Fig. 4, is when T.hull no longer dominates the hull chain
in the right subtree. A second subcase is when the left root hull does extend
above the (al , ar ) segment but is still below the left tangent from the right root
hull. To maintain the hull tree invariants, a subchain of T.R.hull will have to be
extracted and moved up to become part of T.hull. In Case 2, only the vertices
of T.R.hull that will be moved up are scanned past twice, since points scanned
past in phase two are removed from the current hull.
Algorithm 3.6: delete(C, T )
Input : C, a chain of points to be deleted from T.hull,
T : a hull tree for point set P
Output: T : The updated hull tree for the point set P \ H
al = pred(head(C)); ar = succ(tail(C))
Use L = ¬ below(T.L, al , ar ); Use R = ¬ below(T.R, al , ar )
if Use L then (Ll , Lr ) = tangents(al , ar , T.L)
if Use R then (Rl , Rr ) = tangents(al , ar , T.R)
Case 1: Neither subtree used to rebuild the roof
case ¬ Use L and ¬ Use R do nothing
Case 2: Only the right subtree used to rebuild the roof
case Use R and (¬ Use L or above(ar , Ll , Rl )) do
CR = chain in T.R.hull from Rl to Rr , inclusive
Update T.hull with CR inserted between al and ar
delete(CR , T.R)
Case 3: Only the left subtree used to rebuild the roof
case Use L and (¬ Use R or above(Rl , al , Lr )) do
CL = chain in T.L.hull from Ll to Lr , inclusive
Update T.hull with CL inserted between al and ar
delete(CL , T.L)
Case 4: Both subtrees used to rebuild the roof
case Use L and Use R and above(al , Rl , Ll ) and above(Lr , ar , Rr ) do
(ql , qr ) = getBridge(T.L, T.R)
CL = chain in T.L.hull from Ll to ql , inclusive
CR = chain in T.R.hull from qr to Rr , inclusive
D = chain from concatenating CL to CR
Update T.hull with D inserted between al and ar
delete(CL , T.L)
delete(CR , T.R)
return T
Fig. 3. Case 1: al can connect to ar.
Fig. 4. Case 2: Right subtree involved in rebuilding.
Case 3. Only the left subtree is needed to rebuild the roof.
This case, depicted in Fig. 5, is the converse of Case 2. Again, a second subcase
is when the right root hull does extend above the (al , ar ) segment but is still
below the right tangent from the left root hull. As in the previous case, only the
vertices of T.L.hull that will be moved up are scanned twice.
Case 4. Both subtrees are needed to rebuild the roof.
In this case, we need to compute two subchains, one from T.L.hull and the other
from T.R.hull, which are then connected by a bridge to fix the roof.
Fig. 5. Case 3: Only left subtree involved.
Fig. 6. Case 4: Both subtrees involved.
Lemma 8. In Case 4, only the vertices of T.L.hull and T.R.hull that will be
moved up to join the roof are scanned twice.
Proof. Recall that after the call to getBridge, the two cursors T.L.l and T.R.r
are already pointing to the left and right bridge points.
The scan for the left tangent point visible to al and above the segment al ar
is done by walking T.L.l forward or backward. The decision of which direction
to walk can be done in constant time. If the walk toward T.L.r is chosen, then
all the points encountered will be encountered for the first time. However, if the
scan is backward toward the head of T.L.hull, then any point encountered is one
that will be moved up. A symmetrical argument applies on the right.
⊓⊔
Theorem 1. Consider a sequence of calls to extractHull, starting with n
points, until all points have been extracted. The total time amortized over all
calls is O(n log n).
Proof. We assume that we start with a hull tree (built with buildTree). The
run time is dominated by cursor movement. Each procedure takes constant time
(and the number of calls is proportional to the number of chain movements) plus
the number of points a cursor has moved past. The above discussion shows that
each point is passed over a constant number of times before it is moved out as part of a chain.
As with buildTree, this leads to our result.
⊓⊔
3.4 Merge
Recall our discussion so far has only been for “northwest” hull trees, which we
now call TN W . We rotate the point set 90 degrees and recompute three times,
resulting in the four hull trees TN W , TN E , TSE , and TSW . We will use these to
construct the successive convex hulls by peeling. When the points are in general
position, the extreme points (topmost, bottommost, rightmost, and leftmost)
are on the convex hull. Note that some of these may coincide. Further, it is clear
that the chain that connects the leftmost point with the topmost is just the
northwest hull chain found at the root of TN W . The rest of the convex hull is
the root hull chains of the other trees.
Initially all the points are “unmarked”. When marked, a point is marked in all
four trees. We iteratively perform the following actions to construct each layer:
a) Retrieve and delete the root hull chain from each of the hull trees; b) Remove
the marked points from each chain; c) Mark the points remaining in each chain;
and d) Concatenate the four chains to form the convex hull for this layer.
This process stops when all vertices have been marked, which is when all the
points have been deleted from all the trees. Correctness follows from the discussion above.
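A compact sketch of this peeling loop is given below. The interface is hypothetical: each tree is assumed to expose an extract_root_hull() method that removes and returns its current root hull chain as a list of points (step a) above), points are assumed hashable, and the ordering of the concatenated chain around the layer is left implicit.

    def convex_layers(trees, n):
        # trees: the four hull trees (NW, NE, SE, SW); n: number of points.
        marked = set()          # points already emitted on some layer
        layers = []
        while len(marked) < n:
            layer = []
            for t in trees:                       # a) retrieve and delete each root hull
                chain = t.extract_root_hull()
                chain = [p for p in chain if p not in marked]  # b) drop marked points
                marked.update(chain)              # c) mark the survivors
                layer.extend(chain)               # d) concatenate the four chains
            layers.append(layer)
        return layers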
Lemma 9. Given a set P of n points and the four hull trees of P with the
four orientations NW, NE, SE, and SW, the merge procedure executes in
O(n log n) time.
Proof. Note that the sum of the lengths of all the chains is O(n). So marking
points and removing them later takes linear time in total. Recall that all the
calls to delete altogether take O(n log n) time.
⊓⊔
4 Conclusion
We have provided a new, simple, optimal algorithm for the convex layers problem. Detailed pseudocode and space and time complexity results are also given. The
pseudocode might appear detailed, but that is only because the approach is simple enough that we can deal with all cases explicitly. Moreover, by using four sets
of hulls, we only need to work with monotone chains, which simplifies our case
analysis and makes the correctness argument straightforward. The extension to
dynamic point sets remains an open problem.
References
1. Chan, T.: Optimal output-sensitive convex hull algorithms in two and three dimensions. Discrete & Computational Geometry 16(4), 361–368 (1996)
2. Chazelle, B.: On the convex layers of a planar set. IEEE Trans. Inf. Theory 31(4), 509–517 (1985)
3. Green, P., Silverman, B.: Constructing the convex hull of a set of points in the plane. Computer Journal 22, 262–266 (1979)
4. Nielsen, F.: Output-sensitive peeling of convex and maximal layers. Inf. Process. Lett. 59(5), 255–259 (1996)
5. Overmars, M.H., van Leeuwen, J.: Maintenance of configurations in the plane. Journal of Computer and System Sciences 23(2), 166–204 (1981)
6. Preparata, F.P., Shamos, M.I.: Computational Geometry: An Introduction. Springer-Verlag, 3rd edn. (1990)
| 8 |
arXiv:1705.01838v1 [math.AG] 4 May 2017
ON PERMUTATIONS INDUCED BY TAME AUTOMORPHISMS
OVER FINITE FIELDS
KEISUKE HAKUTA
Abstract. The present paper deals with permutations induced by tame automorphisms over finite fields. The first main result is a formula for determining
the sign of the permutation induced by a given elementary automorphism over
a finite field. The second main result is a formula for determining the sign of
the permutation induced by a given affine automorphism over a finite field.
We also give a combining method of the above two formulae to determine the
sign of the permutation induced by a given triangular automorphism over a
finite field. As a result, for a given tame automorphism over a finite field, if we
know a decomposition of the tame automorphism into a finite number of affine
automorphisms and elementary automorphisms, then one can easily determine
the sign of the permutation induced by the tame automorphism.
1. Introduction
Let k be a field and let k[X1 , . . . , Xn ] be a polynomial ring in n indeterminates
over k. An n-tuple of polynomials F = (f1 , . . . , fn ) is called a polynomial map,
when fi belongs to k[X1 , . . . , Xn ] for all i, 1 ≤ i ≤ n. A polynomial map F =
(f1 , . . . , fn ) can be viewed as a map from k n to itself by defining
F(a_1, . . . , a_n) = (f_1(a_1, . . . , a_n), . . . , f_n(a_1, . . . , a_n))   (1.1)
for (a_1, . . . , a_n) ∈ k^n. The set of polynomial maps over k and the set of maps from
k^n to itself are denoted by MA_n(k) and Maps(k^n, k^n), respectively. Then by (1.1),
we see that there exists a natural map
π : MA_n(k) → Maps(k^n, k^n).
MAn (k) (resp. Maps(k n , k n )) is a semi-group with respect to the composition of
polynomial maps (resp. maps). Moreover, these two semi-groups are monoids with
the identity maps as neutral elements, and the map π is a monoid homomorphism.
Let GAn (k) be the subset of invertible elements in MAn (k). An element in
GAn (k) is called a polynomial automorphism. A polynomial automorphism F =
(f1 , . . . , fn ) is said to be affine when deg fi = 1 for i = 1, . . . , n. A polynomial map
E_{a_i} of the form
E_{a_i} = (X_1, . . . , X_{i−1}, X_i + a_i, X_{i+1}, . . . , X_n),
a_i ∈ k[X_1, . . . , X̂_i, . . . , X_n] = k[X_1, . . . , X_{i−1}, X_{i+1}, . . . , X_n],   (1.2)
is also a polynomial automorphism since E_{a_i}^{−1} = E_{−a_i}. A polynomial automorphism E_{a_i} of form (1.2) is said to be elementary. Let Aff_n(k) denote the set of
Date: May 4, 2017.
2010 Mathematics Subject Classification. Primary: 14R10; Secondary: 12E20, 20B25.
Key words and phrases. Affine algebraic geometry, polynomial automorphism, tame automorphism, finite field, permutation.
all affine automorphisms, and let EAn (k) denote the set of all elementary automorphisms. Aff n (k) and EAn (k) are subgroups of GAn (k). We put TAn (k) :=
⟨Aff_n(k), EA_n(k)⟩, where ⟨H_1, H_2⟩ is a subgroup of a group G generated by two
subgroups H1 , H2 ⊂ G. Then TAn (k) is also a subgroup of GAn (k) and is called
the tame subgroup. If τ ∈ TA_n(k) then τ is called a tame automorphism, and otherwise (i.e., τ ∈ GA_n(k) \ TA_n(k)) τ is called a wild automorphism. A polynomial
automorphism Ja,f of the form
Ja,f = (a1 X1 + f1 (X2 , . . . , Xn ), a2 X2 + f2 (X3 , . . . , Xn ), . . . , an Xn + fn ) ,
ai ∈ k (i = 1, . . . , n) , fi ∈ k[Xi+1 , . . . , Xn ] (i = 1, . . . , n − 1) , fn ∈ k,
is called a de Jonquières automorphism (or triangular automorphism). By
definition, a triangular automorphism is trivially a polynomial automorphism. The set
of all elements of the form Ja,f is denoted by BAn (k):
BAn (k) := {Ja,f ∈ GAn (k) | ai ∈ k (i = 1, . . . , n) ,
fi ∈ k[Xi+1 , . . . , Xn ] (i = 1, . . . , n − 1) , fn ∈ k}.
BAn (k) is a subgroup of GAn (k). BAn (k) is also a subgroup of TAn (k), and it is
known that TA_n(k) = ⟨Aff_n(k), BA_n(k)⟩ (cf. [2, Exercises for § 5.1 – 1, 2]). The
Tame Generators Problem asks whether GAn (k) = TAn (k), and is related to the
famous Jacobian conjecture.
Tame Generators Problem. GAn (k) = TAn (k)?
For any field k, we denote the characteristic of the field k by p = char(k). If k
is a finite field Fq with q elements (p = char(Fq ), q = pm , and m ≥ 1), we use the
symbol πq instead of π:
πq : MAn (Fq ) → Maps(Fnq , Fnq ).
The map πq induces a group homomorphism
πq : GAn (Fq ) → Sym(Fnq ),
where Sym(S) is a symmetric group on a finite set S. Let
sgn : Sym(S) → {±1}
be the sign function. The sign function sgn is a group homomorphism, and Ker(sgn) =
Alt(S), where Alt(S) is the alternating group on S. Recall that for any subgroup
G ⊆ GAn (Fq ), πq (G) is a subgroup of Sym(Fnq ).
In the case where k = Fq , we can consider a slightly different problem from the
Tame Generators Problem, namely, it is natural to investigate the subgroup πq (G)
of Sym(F_q^n). This problem was first investigated in the case G = TA_n(F_q) by
Maubach [3]. Indeed, Maubach proved the following theorem ([3, Theorem 2.3])
and proposed the following problem ([4, page 3, Problem]):
Theorem 1. ([3, Theorem 2.3]) If n ≥ 2, then πq (TAn (Fq )) = Sym(Fnq ) if q is
odd or q = 2. If q = 2m where m ≥ 2 then πq (TAn (Fq )) = Alt(Fnq ).
Question 1. ([4, page 3, Problem]) For q = 2m and m ≥ 2, do there exist
polynomial automorphisms such that the permutations induced by the polynomial
automorphisms belong to Sym(Fnq ) \ Alt(Fnq )?
If there exists F ∈ GAn (F2m ) such that π2m (F ) ∈ Sym(Fn2m ) \ Alt(Fn2m ), then we
must have F ∈ GAn (F2m ) \ TAn (F2m ), namely, F is a wild automorphism. Thus,
Question 1 is a quite important problem for the Tame Generators Problem in positive characteristic. Furthermore, we refer the reader to [5, Section 1.2] for several
questions related to [3, Theorem 2.3]. The present paper deals with permutations
induced by tame automorphisms over finite fields. We address the following questions related to [3, Theorem 2.3] which is a little different from the questions in [5,
Section 1.2].
Question 2. For a given tame automorphism φ ∈ TAn (Fq ), how to determine the
sign of the permutation induced by φ? (how to determine sgn(πq (φ))?)
Question 3. Suppose that G is a subgroup of GAn (Fq ). What are sufficient
conditions on G such that the inclusion relation πq (G) ⊂ Alt(Fnq ) holds?
Question 2 seems to be natural since if q 6= 2m (m ≥ 2), one can not determine
the sign of the permutation induced by a given tame automorphism over a finite
field from [3, Theorem 2.3]. The information about sign of the permutations might
be useful for studying the Tame Generators Problem, Question 1, or other related
questions such as [4, page 5, Conjecture 4.1], [4, page 5, Conjecture 4.2] and so on.
Question 3 also seems to be natural since if q 6= 2m (m ≥ 2) and G 6= TAn (Fq ), one
can not obtain any sufficient condition for the inclusion relation πq (G) ⊂ Alt(Fnq )
from [3, Theorem 2.3].
The contributions of the present paper are as follows. The first main result is a
formula for determining the sign of the permutation induced by a given elementary
automorphism over a finite field (Section 3, Main Theorem 1). Our method to determine the sign of the permutation induced by a given elementary automorphism over
a finite field, is based on group theory. As a consequence of Main Theorem 1, one
can derive a sufficient condition for the inclusion relation πq (EAn (Fq )) ⊂ Alt(Fnq )
(Section 3, Corollary 1). The second main result is a formula for determining the
sign of the permutation induced by a given affine automorphism over a finite field
(Section 4, Main Theorem 2). Our method to determine the sign of the permutation
induced by a given affine automorphism over a finite field, is based on linear algebra.
As a consequence of Main Theorem 2, one can also derive a sufficient condition for
the inclusion relation πq (Aff n (Fq )) ⊂ Alt(Fnq ) (Section 4, Corollary 2). Section 5
gives a combining method of Main Theorem 1 and Main Theorem 2 to determine
the sign of the permutation induced by a given triangular automorphism over a
finite field (Section 5, Corollary 3). As a result, for a given tame automorphism
over a finite field, if we know a decomposition of the tame automorphism into a finite number of affine automorphisms and elementary automorphisms, then one can
easily determine the sign of the permutation induced by the tame automorphism
(Section 5, Corollary 5).
The rest of this paper is organized as follows. In Section 2, we fix our notation.
In Section 3, we give a method to determine the sign of the permutation induced by
a given elementary automorphism over a finite field. In Section 4, we give a method
to determine the sign of the permutation induced by a given affine automorphism
over a finite field. Section 5 deals with how to determine the sign of the permutation
induced by a given triangular automorphism and a given tame automorphism over
a finite field.
2. Notation
Throughout this paper, we use the following notation. For any field k, we denote
the characteristic of the field k by p = char(k). We denote the multiplicative
group of a field k by k ∗ = k \ {0}. We use the symbols Z, C, Fq to represent the
rational integer ring, the field of complex numbers, and a finite field with q elements
(p = char(Fq ), q = pm , m ≥ 1). We denote the set of non-negative integers by Z≥0 .
For a group G and g ∈ G, ord_G(g) is the order of g, namely, ord_G(g) is the smallest
positive integer x such that g^x = 1_G, where 1_G is the identity element in the
group G. We denote by GLn (k) the set of invertible matrices with entries in k.
The polynomial ring in n indeterminates over k is denoted by k[X_1, . . . , X_n]. We
denote the polynomial ring k[X1 , . . . , Xi−1 , Xi+1 , . . . , Xn ] (omit the indeterminate
Xi ) by k[X1 , . . . , X̂i , . . . , Xn ]. Let MAn (k) be the set of polynomial maps over
k. MAn (k) is a monoid with respect to the composition of polynomial maps, and
the neutral elements of the monoid MAn (k) is the identity map. We denote the
subset of invertible elements in MAn (k), the set of all affine automorphisms, the
set of all elementary automorphisms, and the set of all triangular automorphisms
by GAn (k), Aff n (k), EAn (k), and BAn (k), respectively. GAn (k) is a group with
respect to the composition of polynomial automorphisms, and Aff n (k), EAn (k) are
subgroups of GAn (k). Recall that
  Aff_n(k) ≅ GL_n(k) ⋉ k^n.   (2.1)
For a finite set S, we denote the cardinality of S by ♯S, and denote the symmetric
group on S and the alternating group on S by Sym(S) and Alt(S), respectively.
Let S = {s_1, . . . , s_n}, ♯S = n, and σ ∈ Sym(S). For σ ∈ Sym(S),

  σ = ( s_1        s_2        · · ·   s_n
        s_{σ(1)}   s_{σ(2)}   · · ·   s_{σ(n)} )

means that σ(s_i) = s_{σ(i)} for 1 ≤ i ≤ n. For σ, τ ∈ Sym(S), we denote by τ ∘ σ the composition

  τ ∘ σ = ( s_1           s_2           · · ·   s_n
            s_{τ(σ(1))}   s_{τ(σ(2))}   · · ·   s_{τ(σ(n))} ).

The permutation on S defined by

  s_{i_1} ↦ s_{i_2},  s_{i_2} ↦ s_{i_3},  . . . ,  s_{i_{r−1}} ↦ s_{i_r},  s_{i_r} ↦ s_{i_1},
  s_i ↦ s_i  if i ∉ {i_1, . . . , i_r},

is called the cycle of length r and is denoted by (i_1 i_2 . . . i_r). Let δ : R → R be
a function satisfying that δ(x) = 0 when x = 0, and δ(x) = 1 when x ≠ 0. Let
χ : F_q^* → C^* be a non-trivial multiplicative character of order ℓ. We extend χ to
F_q by defining χ(0) = 0.
3. Sign of permutations induced by elementary automorphisms
In this section, we consider the sign of permutations induced by elementary
automorphisms over finite fields. The main result of this section is as follows.
Main Theorem 1. (Sign of elementary automorphisms) Suppose that E^{(q)}_{a_i} is an elementary automorphism over a finite field, namely,

  E^{(q)}_{a_i} = (X_1, . . . , X_{i−1}, X_i + a_i, X_{i+1}, . . . , X_n) ∈ EA_n(F_q),   (3.1)

and a_i ∈ F_q[X_1, . . . , X̂_i, . . . , X_n]. If q is odd or q = 2^m, m ≥ 2, then we have π_q(E^{(q)}_{a_i}) ∈ Alt(F_q^n). Namely, if q is odd or q = 2^m, m ≥ 2, then

  sgn(π_q(E^{(q)}_{a_i})) = 1.   (3.2)

If q = 2 then sgn(π_q(E^{(q)}_{a_i})) depends only on the number of monomials of the form cX_1^{e_1} · · · X_{i−1}^{e_{i−1}} X_{i+1}^{e_{i+1}} · · · X_n^{e_n} with c ∈ F_q^* and e_1, . . . , e_n ≥ 1 appearing in the polynomial a_i. More precisely, if q = 2 then sgn(π_q(E^{(q)}_{a_i})) = (−1)^{M_{a_i}}, where M_{a_i} is the number of monomials of the form cX_1^{e_1} · · · X_{i−1}^{e_{i−1}} X_{i+1}^{e_{i+1}} · · · X_n^{e_n} with c ∈ F_q^* and e_1, . . . , e_n ≥ 1 appearing in the polynomial a_i.
Proof. It suffices to assume that a_i ∈ F_q[X_1, . . . , X̂_i, . . . , X_n] is a monomial (a similar discussion can be found in [3, page 96, Proof of Theorem 5.2.1]). Let c_i ∈ F_q, N_î := {1, . . . , i−1, i+1, . . . , n}, and e := (e_1, . . . , ê_i, . . . , e_n) ∈ Z_{≥0}^{n−1} (e_1, . . . , ê_i, . . . , e_n ≥ 0). We put a_i = c_i ∏_{1≤j≤n, j≠i} X_j^{e_j} and

  E^{(q)}_{c_i,e} := (X_1, . . . , X_{i−1}, X_i + c_i ∏_{1≤j≤n, j≠i} X_j^{e_j}, X_{i+1}, . . . , X_n)
                  = (X_1, . . . , X_{i−1}, X_i + c_i ∏_{j∈N_î} X_j^{e_j}, X_{i+1}, . . . , X_n) ∈ EA_n(F_q).

In the following, we determine the value of sgn(π_q(E^{(q)}_{c_i,e})) by decomposing the permutation π_q(E^{(q)}_{c_i,e}) as a product of transpositions. Let y_1, . . . , y_{i−1}, y_{i+1}, . . . , y_n be elements of F_q. We put y = (y_1, . . . , y_{i−1}, y_{i+1}, . . . , y_n) ∈ F_q^{n−1}. For y ∈ F_q^{n−1}, we define the map λ^{(q)}_{c_i,e,y} : F_q^n → F_q^n as follows:

  λ^{(q)}_{c_i,e,y}(x_1, . . . , x_n) = (x_1, . . . , x_n)                    if ∃ j ∈ N_î s.t. x_j ≠ y_j,
  λ^{(q)}_{c_i,e,y}(x_1, . . . , x_n) = E^{(q)}_{c_i,e}(x_1, . . . , x_n)     if x_j = y_j for all j ∈ N_î.

It follows from the definition of the map λ^{(q)}_{c_i,e,y} that λ^{(q)}_{c_i,e,y}(x_1, . . . , x_n) = (x_1, . . . , x_n) for each (x_1, . . . , x_n) ∈ F_q^n \ {(y_1, . . . , y_{i−1}, y, y_{i+1}, . . . , y_n) | y ∈ F_q} and λ^{(q)}_{c_i,e,y}(x_1, . . . , x_n) = E^{(q)}_{c_i,e}(x_1, . . . , x_n) for each (x_1, . . . , x_n) ∈ {(y_1, . . . , y_{i−1}, y, y_{i+1}, . . . , y_n) | y ∈ F_q}. Thus the map λ^{(q)}_{c_i,e,y} is bijective and the inverse of λ^{(q)}_{c_i,e,y} is λ^{(q)}_{−c_i,e,y}. Remark that λ^{(q)}_{c_i,e,y} is the identity map if and only if e = (0, . . . , 0) and c_i = 0, or there exists j ∈ N_î such that e_j ≠ 0 and y_j = 0. Put

  B(λ^{(q)}_{c_i,e,y}) := {(x_1, . . . , x_n) ∈ F_q^n | λ^{(q)}_{c_i,e,y}(x_1, . . . , x_n) ≠ (x_1, . . . , x_n)}.

Since

  B(λ^{(q)}_{c_i,e,y}) ∩ B(λ^{(q)}_{c_i,e,y′}) = ∅

for any y′ = (y′_1, . . . , y′_{i−1}, y′_{i+1}, . . . , y′_n) ∈ F_q^{n−1} satisfying y′ ≠ y, the permutation π_q(E^{(q)}_{c_i,e}) can be written as

  π_q(E^{(q)}_{c_i,e}) = π_q( ∏_{y_1,...,y_{i−1},y_{i+1},...,y_n ∈ F_q} λ^{(q)}_{c_i,e,y} ) = ∏_{y_1,...,y_{i−1},y_{i+1},...,y_n ∈ F_q} π_q(λ^{(q)}_{c_i,e,y}),   (3.3)

which is a composition of disjoint permutations on F_q^n. We denote the number of distinct permutations other than the identity map on the right-hand side of Equation (3.3) by M_1. It is straightforward to check that M_1 satisfies the equation

  M_1 = χ(c_i)^ℓ × ∏_{j∈N_î} (q − δ(e_j)).   (3.4)

Next, we decompose each permutation π_q(λ^{(q)}_{c_i,e,y}) as a composition of disjoint cycles on F_q^n. In order to find such a decomposition, we define an equivalence relation ∼ on F_q: y ∈ F_q and y′ ∈ F_q are equivalent if and only if there exists l ∈ {0, 1, . . . , p−1} such that y′ = y + l·c_i ∏_{j∈N_î} y_j^{e_j}. Note that the equivalence relation ∼ depends on the choice of y_1, . . . , y_{i−1}, y_{i+1}, . . . , y_n. Put

  C_y := {y′ ∈ F_q | y ∼ y′}.

We now choose a complete system of representatives R for the above equivalence relation ∼. Since R is a complete system of representatives, it follows that

  ♯R = q/p = p^m/p = p^{m−1}.

We write R := {w_1, . . . , w_{p^{m−1}}} ⊂ F_q. For any w_s ∈ R (1 ≤ s ≤ p^{m−1}), we set y_{w_s} := (y_1, . . . , y_{i−1}, w_s, y_{i+1}, . . . , y_n). We define the bijective map λ^{(q)}_{c_i,e,y_{w_s}} : F_q^n → F_q^n as follows:

  λ^{(q)}_{c_i,e,y_{w_s}}(x_1, . . . , x_n) = (x_1, . . . , x_n)                    if ∃ j ∈ N_î s.t. x_j ≠ y_j, or x_i ∉ C_{w_s},
  λ^{(q)}_{c_i,e,y_{w_s}}(x_1, . . . , x_n) = E^{(q)}_{c_i,e}(x_1, . . . , x_n)     if x_j = y_j for all j ∈ N_î and x_i ∈ C_{w_s}.

Note that the map λ^{(q)}_{c_i,e,y_{w_s}} is a cycle of length p, and the standard result from elementary group theory yields that its sign is (−1)^{p−1}, namely,

  sgn(π_q(λ^{(q)}_{c_i,e,y_{w_s}})) = (−1)^{p−1}.   (3.5)

We also remark that the map λ^{(q)}_{c_i,e,y_{w_s}} is the identity map if and only if e = (0, . . . , 0) and c_i = 0, or there exists j ∈ N_î such that e_j ≠ 0 and y_j = 0. If 1 ≤ s_1, s_2 ≤ p^{m−1} and s_1 ≠ s_2, then we have C_{w_{s_1}} ∩ C_{w_{s_2}} = ∅. This yields a decomposition of the permutation π_q(λ^{(q)}_{c_i,e,y}) into a composition of disjoint cycles on F_q^n:

  π_q(λ^{(q)}_{c_i,e,y}) = π_q( ∏_{s=1}^{p^{m−1}} λ^{(q)}_{c_i,e,y_{w_s}} ) = ∏_{s=1}^{p^{m−1}} π_q(λ^{(q)}_{c_i,e,y_{w_s}}).   (3.6)

We denote the number of disjoint cycles other than the identity map in Equation (3.6) by M_2. By counting the number of disjoint cycles appearing in Equation (3.6), we have

  M_2 = χ(c_i)^ℓ × p^{m−1}.   (3.7)

Now we determine the sign of the permutation π_q(E^{(q)}_{c_i,e}). By Equation (3.3) through Equation (3.7), we obtain

  sgn(π_q(E^{(q)}_{c_i,e})) = (−1)^M,

where

  M = M_1 × M_2 × (p − 1) = χ(c_i)^ℓ × p^{m−1} × (p − 1) × ∏_{j∈N_î} (q − δ(e_j)).   (3.8)

If q is odd (and thus p is odd) or q = 2^m, m ≥ 2, we have M ≡ 0 mod 2, and if q = 2 (namely, p = 2 and m = 1), we have M ≡ χ(c_i)^ℓ × ∏_{j∈N_î} δ(e_j) mod 2. Therefore if q is odd or q = 2^m, m ≥ 2, then π_q(E^{(q)}_{c_i,e}) ∈ Alt(F_q^n). Hence, we have π_q(E^{(q)}_{a_i}) ∈ Alt(F_q^n), namely, sgn(π_q(E^{(q)}_{a_i})) = 1. On the other hand, if q = 2 then sgn(π_q(E^{(q)}_{c_i,e})) = (−1)^{χ(c_i)^ℓ × ∏_{j∈N_î} δ(e_j)}. Thus, sgn(π_q(E^{(q)}_{c_i,e})) = −1 if and only if c_i ≠ 0 and ∏_{j∈N_î} δ(e_j) = 1, or more generally, sgn(π_q(E^{(q)}_{a_i})) = −1 if and only if the number of monomials of the form cX_1^{e_1} · · · X_{i−1}^{e_{i−1}} X_{i+1}^{e_{i+1}} · · · X_n^{e_n} with c ∈ F_q^* and e_1, . . . , e_n ≥ 1 appearing in the polynomial a_i is odd. This completes
the proof.
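For small n, the q = 2 case of Main Theorem 1 is easy to check by brute force. The following sketch is not part of the paper; the function and variable names are ours. It represents a_i over F_2 as a list of exponent tuples (one per monomial, coefficient 1, exponent 0 in the 0-based position i), builds the induced permutation of F_2^n, and compares its sign with (−1)^{M_{a_i}}.

    from itertools import product

    def perm_sign(perm):
        # Sign of a permutation given as a dict {point: image}.
        seen, sign = set(), 1
        for start in perm:
            if start in seen:
                continue
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            sign *= (-1) ** (length - 1)
        return sign

    def check_q2(n, i, monomials):
        # i is 0-based here; monomials is a list of exponent tuples with e[i] = 0.
        def a_i(x):  # value of a_i at the point x, computed in F_2
            return sum(all(x[j] == 1 or e[j] == 0 for j in range(n))
                       for e in monomials) % 2
        perm = {x: tuple((v + a_i(x)) % 2 if j == i else v
                         for j, v in enumerate(x))
                for x in product(range(2), repeat=n)}
        # M_{a_i}: monomials in which every variable X_j (j != i) actually appears
        M = sum(all(e[j] >= 1 for j in range(n) if j != i) for e in monomials)
        return perm_sign(perm) == (-1) ** M

    # a_1 = X_2 X_3 + X_2 over F_2 (n = 3, i = 0): M = 1, so the predicted sign is -1.
    assert check_q2(3, 0, [(0, 1, 1), (0, 1, 0)])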
Corollary 1. If q is odd or q = 2m , m ≥ 2 then we have πq (EAn (Fq )) ⊂ Alt(Fnq ).
Remark 1. Let θ : Fq → C be a map satisfying that θ (c) = 0 when c = 0, and
θ(c) = 1 when c ∈ F_q^*. Then one can also prove Main Theorem 1 by replacing
χ(c_i)^ℓ with θ(c_i) in the proof of Main Theorem 1.
Remark 2. Note that Main Theorem 1 is similar to [5, Lemma 6.4], but Main
Theorem 1 is a more general result than [5, Lemma 6.4].
Example 1. Let a_i := αX_1 · · · X_{i−1}X_{i+1} · · · X_n, b_i := βX_1^2X_2 · · · X_{i−1}X_{i+1} · · · X_n ∈ F_q[X_1, . . . , X̂_i, . . . , X_n], and α, β ∈ F_q^*. We consider the sign of the permutations induced by the following elementary automorphisms:

  E^{(q)}_{a_i} = (X_1, . . . , X_{i−1}, X_i + a_i, X_{i+1}, . . . , X_n) ∈ EA_n(F_q),
  E^{(q)}_{b_i} = (X_1, . . . , X_{i−1}, X_i + b_i, X_{i+1}, . . . , X_n) ∈ EA_n(F_q),

and

  E^{(q)}_{a_i+b_i} = (X_1, . . . , X_{i−1}, X_i + a_i + b_i, X_{i+1}, . . . , X_n) ∈ EA_n(F_q).

We first assume that q is odd or q = 2^m, m ≥ 2. Then by Main Theorem 1, we have sgn(π_q(E^{(q)}_{a_i})) = sgn(π_q(E^{(q)}_{b_i})) = 1. By the fact that π_q and sgn are group homomorphisms, we also have sgn(π_q(E^{(q)}_{a_i+b_i})) = sgn(π_q(E^{(q)}_{a_i})) × sgn(π_q(E^{(q)}_{b_i})) = 1. We next suppose that q = 2. Since α, β ∈ F_q^*, we remark that α = β = 1. From Main Theorem 1, we have sgn(π_q(E^{(q)}_{a_i})) = sgn(π_q(E^{(q)}_{b_i})) = −1. Again, by the fact that π_q and sgn are group homomorphisms, we obtain sgn(π_q(E^{(q)}_{a_i+b_i})) = sgn(π_q(E^{(q)}_{a_i})) × sgn(π_q(E^{(q)}_{b_i})) = 1. We can directly prove that sgn(π_q(E^{(q)}_{a_i+b_i})) = 1 by using the fact that π_q(E^{(q)}_{a_i+b_i}) = π_q((X_1, . . . , X_n)).
4. Sign of permutations induced by affine automorphisms
In this section, we consider the sign of permutations induced by affine automorphisms over finite fields. Suppose that Ã^{(q)}_b is an affine automorphism over a finite field, where

  Ã^{(q)}_b := ( ∑_{i=1}^{n} a_{1,i}X_i + b_1, . . . , ∑_{i=1}^{n} a_{n,i}X_i + b_n ) ∈ Aff_n(F_q),   (4.1)

and a_{i,j}, b_i ∈ F_q for 1 ≤ i, j ≤ n. We also assume that A^{(q)} is the homogeneous part (linear automorphism) of the affine automorphism (4.1), namely,

  A^{(q)} = ( ∑_{i=1}^{n} a_{1,i}X_i, . . . , ∑_{i=1}^{n} a_{n,i}X_i ) ∈ Aff_n(F_q).   (4.2)

By Equation (2.1) and by Main Theorem 1, we obtain

  sgn(π_q(Ã^{(q)}_b)) = sgn(π_q(A^{(q)})).   (4.3)

Thus, it is sufficient to consider the sign of affine automorphisms over a finite field of the form (4.2). We put

  T_{i,j} := (X_1, . . . , X_{i−1}, X_j, X_{i+1}, . . . , X_{j−1}, X_i, X_{j+1}, . . . , X_n) ∈ Aff_n(F_q),   (4.4)
  D_i(c) := (X_1, . . . , X_{i−1}, cX_i, X_{i+1}, . . . , X_n) ∈ Aff_n(F_q),   (4.5)

and

  R_{i,j}(c) := (X_1, . . . , X_{i−1}, X_i + cX_j, X_{i+1}, . . . , X_n) ∈ Aff_n(F_q),   (4.6)

where c ∈ F_q^*. It is easy to see that

  T_{i,j}^2 := T_{i,j} ∘ T_{i,j} = (X_1, . . . , X_n),
  D_i(c)^{q−1} := D_i(c) ∘ · · · ∘ D_i(c)  (q − 1 times) = (X_1, . . . , X_n),

and

  R_{i,j}(c)^p := R_{i,j}(c) ∘ · · · ∘ R_{i,j}(c)  (p times) = (X_1, . . . , X_n).

Since each invertible matrix is a product of elementary matrices ([1, Proposition 2.18]), a linear automorphism can be written as a finite composition of linear automorphisms of form (4.4), (4.5), and (4.6). Namely, there exist ℓ_A ∈ Z_{≥0} and linear automorphisms M^{(A)}_1, . . . , M^{(A)}_{ℓ_A} such that

  A^{(q)} = M^{(A)}_1 ∘ · · · ∘ M^{(A)}_{ℓ_A},   (4.7)

where M^{(A)}_i is a linear automorphism of form (4.4), (4.5), or (4.6) for each i (1 ≤ i ≤ ℓ_A). We remark that the representation (4.7) is not unique in general. Since π_q and sgn are group homomorphisms, we obtain

  sgn(π_q(A^{(q)})) = ∏_{i=1}^{ℓ_A} sgn(π_q(M^{(A)}_i)).   (4.8)

Therefore, it is sufficient to consider the sign of the permutation induced by a linear automorphism of form (4.4), (4.5), and (4.6). We use the symbols N_T(A), N_D(A), and N_R(A) to represent the number of linear automorphisms of form (4.4), (4.5), and (4.6) appearing in (4.7), respectively. Then we have

  N_T(A) + N_D(A) + N_R(A) = ℓ_A.

Furthermore, we suppose that {i^{(A)}_1, . . . , i^{(A)}_{N_D(A)}} is the subset of {1, . . . , ℓ_A} satisfying the following two conditions:

  (i): There exist u^{(A)}_j ∈ {1, . . . , n} and c^{(A)}_j ∈ F_q^* such that M^{(A)}_{i^{(A)}_j} = D_{u^{(A)}_j}(c^{(A)}_j) for 1 ≤ j ≤ N_D(A).
  (ii): For i ∈ {1, . . . , ℓ_A} \ {i^{(A)}_1, . . . , i^{(A)}_{N_D(A)}}, M^{(A)}_i is of form (4.4) or (4.6).

In the following, we determine the sign of permutations induced by the above
three types of linear automorphisms of form (4.4), (4.5), and (4.6).
Lemma 1. (Sign of π_q(T_{i,j})) Suppose that T_{i,j} ∈ Aff_n(F_q) is a linear automorphism of form (4.4). If n ≥ 2, then

  sgn(π_q(T_{i,j})) = 1                 if q = 2^m, m ≥ 2, or q = 2, n ≥ 3,
                    = −1                if q = 2 and n = 2,
                    = (−1)^{(q−1)/2}    if q is odd.   (4.9)

Proof. Put

  B(T_{i,j}) := {(x_1, . . . , x_n) ∈ F_q^n | T_{i,j}((x_1, . . . , x_n)) ≠ (x_1, . . . , x_n)} = {(x_1, . . . , x_n) ∈ F_q^n | x_i ≠ x_j}.

Since B(T_{i,j}) = F_q^n \ {(x_1, . . . , x_n) ∈ F_q^n | x_i = x_j}, we have ♯B(T_{i,j}) = q^n − q^{n−2} × q = q^{n−1}(q − 1). Hence one can see that the permutation π_q(T_{i,j}) is a composition of q^{n−1}(q − 1)/2 transpositions on F_q^n. We write N_{T_{i,j}} = q^{n−1}(q − 1)/2.

Case 1. q = 2^m, m ≥ 2. Since q^{n−1}/2 = 2^{m(n−1)−1} ≥ 2, we have N_{T_{i,j}} ≡ 0 mod 2. Therefore, π_q(T_{i,j}) is an even permutation.

Case 2. q = 2. In this case, we see that N_{T_{i,j}} = 2^{n−2}. If n ≥ 3, we obtain N_{T_{i,j}} ≡ 0 mod 2. Otherwise N_{T_{i,j}} = 1.

Case 3. q is odd. If q is odd then q ≡ 1 mod 4 or q ≡ 3 mod 4. From this fact, we obtain

  N_{T_{i,j}} ≡ (q − 1)/2 ≡ 0 mod 2  if q ≡ 1 mod 4,
  N_{T_{i,j}} ≡ (q − 1)/2 ≡ 1 mod 2  if q ≡ 3 mod 4.

Thus, Equation (4.9) holds.
Lemma 2. (Sign of π_q(D_i(c))) Suppose that D_i(c) ∈ Aff_n(F_q) is a linear automorphism of form (4.5). If n ≥ 2, then

  sgn(π_q(D_i(c))) = 1                         if q is even,
                   = (−1)^{ord_{F_q^*}(c)}     if q is odd.   (4.10)

Proof. Let us suppose that q is even. We assume that sgn(π_q(D_i(c))) = −1. Since sgn and π_q are group homomorphisms, we have

  sgn(π_q(D_i(c)^{q−1})) = sgn(π_q(D_i(c)))^{q−1} = (−1)^{q−1} = −1.

On the other hand, from D_i(c)^{q−1} = (X_1, . . . , X_n), we must have

  sgn(π_q(D_i(c)^{q−1})) = sgn(π_q((X_1, . . . , X_n))) = 1.

This is a contradiction. Therefore, sgn(π_q(D_i(c))) = 1.

Next suppose that q is odd. We define the map H_c : F_q → F_q by H_c(x) = cx. The map H_c is bijective, and the inverse map is H_{c^{−1}}. Since we can regard the map H_c as a permutation on F_q, it is obvious that

  sgn(π_q(D_i(c))) = sgn(H_c).

Let g be a generator of the multiplicative group F_q^*. We put c = g^h, 0 ≤ h = ord_{F_q^*}(c) ≤ q − 2. If h = 0 then sgn(H_c) = 1. We assume that h ≠ 0. If h = 1 then the map H_c is the length q−1 cycle (g^0 g^1 · · · g^{q−2}) as a permutation on F_q, and hence H_c = (g^{q−3} g^{q−2}) ∘ (g^{q−4} g^{q−3}) ∘ · · · ∘ (g^1 g^2) ∘ (g^0 g^1), the product of q−2 transpositions as a permutation on F_q. This yields that for 1 ≤ h ≤ q−2, the map H_c is the product of h copies of the length q−1 cycle (g^0 g^1 · · · g^{q−2}), namely, the product of h × (q−2) transpositions as a permutation on F_q. Therefore, we obtain sgn(H_c) = (−1)^{h(q−2)}. Since q is odd, it satisfies that (−1)^{h(q−2)} = ((−1)^{q−2})^h = (−1)^h. Hence

  sgn(H_c) = (−1)^h = (−1)^{ord_{F_q^*}(c)}   (4.11)

for c ∈ F_q^*, c ≠ 1. Equation (4.11) is obviously true for c = 1. Thus the assertion holds.
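The cycle decomposition used in this proof can be checked numerically on a small prime field. The sketch below is not part of the paper; it fixes q = 7, takes g = 3 as a generator of F_7^*, writes c = g^h as in the proof, and verifies by brute force that the permutation H_c : x ↦ cx of F_7 has sign (−1)^h.

    def perm_sign(images):
        # Sign of the permutation i -> images[i] of {0, ..., len(images) - 1}.
        seen, sign = [False] * len(images), 1
        for start in range(len(images)):
            if seen[start]:
                continue
            length, x = 0, start
            while not seen[x]:
                seen[x] = True
                x = images[x]
                length += 1
            sign *= (-1) ** (length - 1)
        return sign

    q, g = 7, 3                               # F_7, with g = 3 a generator of F_7^*
    for h in range(q - 1):
        c = pow(g, h, q)                      # c = g^h
        H_c = [(c * x) % q for x in range(q)] # the permutation x -> c x of F_q
        assert perm_sign(H_c) == (-1) ** h    # sgn(H_c) = (-1)^h, as in the proof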
Lemma 3. (Sign of π_q(R_{i,j}(c))) Suppose that R_{i,j}(c) ∈ Aff_n(F_q) is a linear automorphism of form (4.6). If n ≥ 2, then

  sgn(π_q(R_{i,j}(c))) = 1.   (4.12)

Proof. By R_{i,j}(c) ∈ Aff_n(F_q) ∩ EA_n(F_q) ⊂ EA_n(F_q), it follows immediately from Main Theorem 1.
We are now in a position to prove the main result of this section.

Main Theorem 2. (Sign of affine automorphisms) With notation as above, the following assertions hold:

(1) If q = 2^m, m ≥ 2, then
  sgn(π_q(Ã^{(q)}_b)) = 1.   (4.13)
(2) If q = 2 and n = 2, then
  sgn(π_q(Ã^{(q)}_b)) = (−1)^{N_T(A)}.   (4.14)
(3) If q = 2 and n ≥ 3, then
  sgn(π_q(Ã^{(q)}_b)) = 1.   (4.15)
(4) If q is odd, then
  sgn(π_q(Ã^{(q)}_b)) = (−1)^{ ∑_{j=1}^{N_D(A)} ord_{F_q^*}(c^{(A)}_j) + ((q−1)/2)·N_T(A) }.   (4.16)
In particular, if q ≡ 1 mod 4, then
  sgn(π_q(Ã^{(q)}_b)) = (−1)^{ ∑_{j=1}^{N_D(A)} ord_{F_q^*}(c^{(A)}_j) }.   (4.17)

Proof. The assertions (1) through (4) follow immediately from Equation (4.3),
Equation (4.7), and Lemma 1 through Lemma 3. This completes the proof.
Corollary 2. If q = 2m and m ≥ 2, or q = 2 and n ≥ 3 then we have πq (Aff n (Fq )) ⊂
Alt(Fnq ).
Remark 3. One can see that Equations (4.14), (4.15), (4.16), and (4.17) depend on
the representation (4.7), which is not unique. However, since π_q(Ã^{(q)}_b) is uniquely
determined as a permutation on F_q^n, sgn(π_q(Ã^{(q)}_b)) does not depend on the
representation (4.7).
Example 2. Let α, β ∈ F_q^*. We consider the sign of the permutation induced by the affine automorphism A^{(q)} := (X_3, X_2, αX_1 + βX_3) ∈ Aff_3(F_q). It is easy to see that A^{(q)} = (X_3, X_2, X_1) ∘ (X_1 + βX_3, X_2, X_3) ∘ (αX_1, X_2, X_3) = T_{1,3} ∘ R_{1,3}(β) ∘ D_1(α). We remark that N_T(A) = N_D(A) = N_R(A) = 1, ℓ_A = 3, and sgn(π_q(A^{(q)})) = sgn(π_q(T_{1,3})) × sgn(π_q(R_{1,3}(β))) × sgn(π_q(D_1(α))). If p = 2 then by Equation (4.13) and by Equation (4.15), one can easily see that sgn(π_q(A^{(q)})) = 1. If q is odd then by Equation (4.16),

  sgn(π_q(A^{(q)})) = (−1)^{ord_{F_q^*}(α)} × (−1)^{(q−1)/2}.

In particular, if q ≡ 1 mod 4 then sgn(π_q(A^{(q)})) = (−1)^{ord_{F_q^*}(α)}, and if q ≡ 3 mod 4
then sgn(π_q(A^{(q)})) = −1 × (−1)^{ord_{F_q^*}(α)} = (−1)^{ord_{F_q^*}(α)+1}.
5. Sign of permutations induced by triangular automorphisms and
tame automorphisms
In this section, we consider the sign of permutations induced by triangular automorphisms and tame automorphisms over finite fields. By Main Theorem 1 and
Main Theorem 2, we obtain the following corollary (Corollary 3).
Corollary 3. (Sign of triangular automorphisms) Suppose that J^{(q)}_{a,f} is a triangular automorphism over a finite field, namely,

  J^{(q)}_{a,f} = (a_1X_1 + f_1(X_2, . . . , X_n), a_2X_2 + f_2(X_3, . . . , X_n), . . . , a_nX_n + f_n) ∈ BA_n(F_q),
  a_i ∈ F_q (i = 1, . . . , n), f_i ∈ F_q[X_{i+1}, . . . , X_n] (i = 1, . . . , n−1), f_n ∈ F_q.   (5.1)

If q = 2^m, m ≥ 2, then π_q(J^{(q)}_{a,f}) ∈ Alt(F_q^n). Namely, if q = 2^m, m ≥ 2, then

  sgn(π_q(J^{(q)}_{a,f})) = 1.   (5.2)

If q is odd then

  sgn(π_q(J^{(q)}_{a,f})) = (−1)^{ ∑_{i=1}^{n} ord_{F_q^*}(a_i) }.   (5.3)

In other words, if q is odd then sgn(π_q(J^{(q)}_{a,f})) depends only on the coefficients a_1, . . . , a_n. If q = 2 then

  sgn(π_q(J^{(q)}_{a,f})) = (−1)^{M_{f_1}},   (5.4)

where M_{f_1} is the number of monomials of the form cX_2^{e_2} · · · X_n^{e_n} with c ∈ F_q^* and e_2, . . . , e_n ≥ 1 appearing in the polynomial f_1 ∈ F_q[X_2, . . . , X_n].

Proof. By using the notation of Equation (3.1) and Equation (4.5), we have

  J^{(q)}_{a,f} = D_n(a_n) ∘ · · · ∘ D_1(a_1) ∘ E^{(q)}_{a_1^{−1}f_1} ∘ · · · ∘ E^{(q)}_{a_n^{−1}f_n}.

Hence we obtain

  sgn(π_q(J^{(q)}_{a,f})) = ∏_{i=1}^{n} sgn(π_q(D_i(a_i))) × ∏_{i=1}^{n} sgn(π_q(E^{(q)}_{a_i^{−1}f_i})).   (5.5)

By Equation (5.5), Main Theorem 1, and Main Theorem 2, we obtain the desired
results.
Corollary 4. (Sign of strictly triangular automorphisms) Suppose that J^{(q)}_f is a strictly triangular automorphism over a finite field, namely,

  J^{(q)}_f = (X_1 + f_1(X_2, . . . , X_n), X_2 + f_2(X_3, . . . , X_n), . . . , X_n + f_n) ∈ BA_n(F_q),
  f_i ∈ F_q[X_{i+1}, . . . , X_n] (i = 1, . . . , n−1), f_n ∈ F_q.   (5.6)

If q is odd or q = 2^m, m ≥ 2, then π_q(J^{(q)}_f) ∈ Alt(F_q^n). Namely, if q is odd or q = 2^m, m ≥ 2, then

  sgn(π_q(J^{(q)}_f)) = 1.   (5.7)

If q = 2 then

  sgn(π_q(J^{(q)}_f)) = (−1)^{M_{f_1}},   (5.8)

where M_{f_1} is the number of monomials of the form cX_2^{e_2} · · · X_n^{e_n} with c ∈ F_q^* and e_2, . . . , e_n ≥ 1 appearing in the polynomial f_1 ∈ F_q[X_2, . . . , X_n].

Proof. It follows immediately from Corollary 3.
We recall that for any φ^{(q)} ∈ TA_n(F_q), there exist l ∈ Z_{≥0}, ε_1, ε_2 ∈ {0, 1} ⊂ Z, Ã^{(q)}_{s,b^{(s)}} ∈ Aff_n(F_q) (1 ≤ s ≤ l+1) of the form

  Ã^{(q)}_{s,b^{(s)}} = ( ∑_{i=1}^{n} a^{(s)}_{1,i}X_i + b^{(s)}_1, . . . , ∑_{i=1}^{n} a^{(s)}_{n,i}X_i + b^{(s)}_n ),

and J^{(q)}_{s,t^{(s)},f^{(s)}} ∈ BA_n(F_q) (1 ≤ s ≤ l) of the form

  J^{(q)}_{s,t^{(s)},f^{(s)}} = ( t^{(s)}_1X_1 + f^{(s)}_1(X_2, . . . , X_n), . . . , t^{(s)}_nX_n + f^{(s)}_n ),

such that

  φ^{(q)} = (Ã^{(q)}_{1,b^{(1)}})^{ε_1} ∘ J^{(q)}_{1,t^{(1)},f^{(1)}} ∘ · · · ∘ Ã^{(q)}_{l,b^{(l)}} ∘ J^{(q)}_{l,t^{(l)},f^{(l)}} ∘ (Ã^{(q)}_{l+1,b^{(l+1)}})^{ε_2},   (5.9)

Ã^{(q)}_{s,b^{(s)}} ∉ BA_n(F_q) for 2 ≤ s ≤ l+1, and J^{(q)}_{s,t^{(s)},f^{(s)}} ∉ Aff_n(F_q) for 1 ≤ s ≤ l (for example, [2, Lemma 5.1.1]).

We use the symbol A^{(q)}_s to denote the homogeneous part (linear automorphism) of the affine automorphism Ã^{(q)}_{s,b^{(s)}} (as in Equation (4.1) and Equation (4.2)) for 1 ≤ s ≤ l+1. Furthermore, for each s (1 ≤ s ≤ l), we denote by M_{f^{(s)}_1} the number of monomials of the form cX_2^{e_2} · · · X_n^{e_n} with c ∈ F_q^* and e_2, . . . , e_n ≥ 1 appearing in the polynomial f^{(s)}_1 ∈ F_q[X_2, . . . , X_n].
The following corollary (Corollary 5) states that if we know Equation (5.9) for
a given φ(q) ∈ TAn (Fq ), then one can easily compute the sign of the permutation
induced by φ(q) ∈ TAn (Fq ).
Corollary 5. (Sign of tame automorphisms) With notation as above, the following assertions hold:

(1) If q = 2^m, m ≥ 2, then
  sgn(π_q(φ^{(q)})) = 1.   (5.10)
(2) If q = 2 and n = 2, then
  sgn(π_q(φ^{(q)})) = (−1)^{ ε_1N_T(A_1) + (∑_{s=2}^{l} N_T(A_s)) + ε_2N_T(A_{l+1}) + ∑_{s=1}^{l} M_{f^{(s)}_1} }.   (5.11)
(3) If q = 2 and n ≥ 3, then
  sgn(π_q(φ^{(q)})) = (−1)^{ ∑_{s=1}^{l} M_{f^{(s)}_1} }.   (5.12)
(4) If q is odd, then
  sgn(π_q(φ^{(q)})) = (−1)^{ ∑_{s=2}^{l} ∑_{j=1}^{N_D(A_s)} ord_{F_q^*}(c^{(A_s)}_j) }
                     × (−1)^{ ε_1 ∑_{j=1}^{N_D(A_1)} ord_{F_q^*}(c^{(A_1)}_j) }
                     × (−1)^{ ε_2 ∑_{j=1}^{N_D(A_{l+1})} ord_{F_q^*}(c^{(A_{l+1})}_j) }
                     × (−1)^{ ((q−1)/2)·( ε_1N_T(A_1) + ε_2N_T(A_{l+1}) + ∑_{s=2}^{l} N_T(A_s) ) }
                     × (−1)^{ ∑_{1≤i≤n, 1≤s≤l} ord_{F_q^*}(t^{(s)}_i) }.   (5.13)
In particular, if q ≡ 1 mod 4, then
  sgn(π_q(φ^{(q)})) = (−1)^{ ∑_{s=2}^{l} ∑_{j=1}^{N_D(A_s)} ord_{F_q^*}(c^{(A_s)}_j) }
                     × (−1)^{ ε_1 ∑_{j=1}^{N_D(A_1)} ord_{F_q^*}(c^{(A_1)}_j) }
                     × (−1)^{ ε_2 ∑_{j=1}^{N_D(A_{l+1})} ord_{F_q^*}(c^{(A_{l+1})}_j) }
                     × (−1)^{ ∑_{1≤i≤n, 1≤s≤l} ord_{F_q^*}(t^{(s)}_i) }.   (5.14)
Proof. By Equation (5.9), we obtain

  sgn(π_q(φ^{(q)})) = ∏_{i=2}^{l} sgn(π_q(A^{(q)}_i)) × sgn(π_q(A^{(q)}_1))^{ε_1} × sgn(π_q(A^{(q)}_{l+1}))^{ε_2} × ∏_{i=1}^{l} sgn(π_q(J^{(q)}_{i,t^{(i)},f^{(i)}})).   (5.15)

By Equation (5.15), Main Theorem 2, and Corollary 3, we obtain the desired
results.
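Because sgn and π_q are group homomorphisms, formulas such as (5.10) through (5.14) can always be cross-checked on small examples by composing the factors of a decomposition explicitly and computing the sign of the induced permutation by brute force. The sketch below is not part of the paper; it works only for prime q (where arithmetic modulo q realizes F_q), and obtaining a decomposition of the form (5.9) is outside its scope.

    from itertools import product

    def induced_permutation(F, q, n):
        # F is a tuple of n coordinate functions F_q^n -> F_q (prime q only).
        return {x: tuple(f(x) % q for f in F) for x in product(range(q), repeat=n)}

    def perm_sign(perm):
        # Sign of a permutation given as a dict {point: image}.
        seen, sign = set(), 1
        for start in perm:
            if start in seen:
                continue
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            sign *= (-1) ** (length - 1)
        return sign

    # The strictly triangular automorphism (X_1 + X_2 X_3, X_2 + X_3, X_3) over F_2:
    # Corollary 4 predicts sign (-1)^{M_{f_1}} = -1, since f_1 = X_2 X_3.
    J = (lambda x: x[0] + x[1] * x[2], lambda x: x[1] + x[2], lambda x: x[2])
    assert perm_sign(induced_permutation(J, q=2, n=3)) == -1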
Remark 4. As in Remark 3, one can see that Equations (5.11), (5.12), (5.13), and
(5.14) depend on the representations (4.7) and (5.9), which are not unique. However,
since π_q(φ^{(q)}) is uniquely determined as a permutation on F_q^n, sgn(π_q(φ^{(q)})) does
not depend on the representations (4.7) and (5.9).
Remark 5. Suppose that n is greater than or equal to two. Corollary 5 tells us that π_q(TA_n(F_q)) ⊂ Sym(F_q^n) (this is a trivial inclusion relation) if q is odd or q = 2, and π_q(TA_n(F_q)) ⊂ Alt(F_q^n) if q = 2^m and m ≥ 2. This indicates that Corollary 5 is strictly weaker than [3, Theorem 2.3]. However, Main Theorem 1, Main Theorem 2, Corollary 3, and Corollary 5 are useful for determining the sign of permutations induced by tame automorphisms over finite fields. The reasons are as follows. Firstly, for q = 2^m and m ≥ 2, we prove that π_q(TA_n(F_q)) ⊂ Alt(F_q^n) by directly showing that π_q(EA_n(F_q)) ⊂ Alt(F_q^n) and π_q(Aff_n(F_q)) ⊂ Alt(F_q^n). Secondly, if q is odd then one cannot determine the sign of the permutation induced by an elementary automorphism over a finite field by using [3, Theorem 2.3]. In contrast to [3, Theorem 2.3], Main Theorem 1 tells us that if q is odd then each permutation induced by an elementary automorphism over a finite field is even. In other words, if q is odd then π_q(EA_n(F_q)) ⊂ Alt(F_q^n) (Corollary 1). Similarly, in contrast to [3, Theorem 2.3], Main Theorem 2 tells us that if q = 2 and n ≥ 3 then each permutation induced by an affine automorphism over a finite field is even. Namely, if q = 2 and n ≥ 3 then π_q(Aff_n(F_q)) ⊂ Alt(F_q^n) (Corollary 2). Thus, our results (Main Theorem 1, Main Theorem 2, Corollary 3, and Corollary 5) and [3,
Theorem 2.3] are complementary to each other.
Acknowledgements
This work was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists
(B) 16K16066.
References
[1] M. Artin, Algebra, Prentice-Hall, 1991.
[2] A. van den Essen, Polynomial Automorphisms and the Jacobian Conjecture, Progress in
Mathematics, Vol.190, Birkhäuser Verlag, Basel-Boston-Berlin, 2000.
[3] S. Maubach, Polynomial automorphisms over finite fields, Serdica Math. J., 27 (2001), no.4,
343–350.
[4] S. Maubach, A problem on polynomial maps over finite fields, arXiv preprint (2008),
arXiv:0802.0630.
[5] S. Maubach and A. Rauf, The profinite polynomial automorphism group, J. Pure Appl.
Algebra, 219 (2015), no.10, 4708–4727.
Interdisciplinary Graduate School of Science and Engineering, Shimane University,
1060 Nishikawatsu-cho, Matsue, Shimane 690-8504, Japan
E-mail address: hakuta(at)cis.shimane-u.ac.jp
| 0 |
Under consideration for publication in J. Functional Programming
arXiv:1406.5106v1 [] 19 Jun 2014
Pushdown flow analysis with abstract garbage
collection
J. IAN JOHNSON
Northeastern University
ILYA SERGEY
IMDEA Software Institute
CHRISTOPHER EARL
University of Utah
MATTHEW MIGHT
University of Utah
DAVID VAN HORN
University of Maryland
Abstract
In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection
push the boundaries of what we can learn about programs statically. This work illuminates and poses
solutions to theoretical and practical challenges that stand in the way of combining the power of these
techniques. Pushdown flow analysis grants unbounded yet computable polyvariance to the analysis
of return-flow in higher-order programs. Abstract garbage collection grants unbounded polyvariance
to abstract addresses which become unreachable between invocations of the abstract contexts in
which they were created. Pushdown analysis solves the problem of precisely analyzing recursion in
higher-order languages; abstract garbage collection is essential in solving the “stickiness” problem.
Alone, our benchmarks demonstrate that each method can reduce analysis times and boost precision
by orders of magnitude.
We combine these methods. The challenge in marrying these techniques is not subtle: computing
the reachable control states of a pushdown system relies on limiting access during transition to the
top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to
compute a root set, just as concrete collection does. Conditional pushdown systems were developed
for just such a conundrum, but existing methods are ill-suited for the dynamic nature of garbage
collection.
We show fully precise and approximate solutions to the feasible paths problem for pushdown
garbage-collecting control-flow analysis. Experiments reveal synergistic interplay between garbage
collection and pushdown techniques, and the fusion demonstrates “better-than-both-worlds” precision.
(define (id x) x)

(define (f n)
  (cond [(<= n 1) 1]
        [else     (* n (f (- n 1)))]))

(define (g n)
  (cond [(<= n 1) 1]
        [else     (+ (* n n) (g (- n 1)))]))

(print (+ ((id f) 3) ((id g) 4)))
Fig. 1: A small example to illuminate the strengths and weaknesses of both pushdown
analysis and abstract garbage collection.
1 Introduction
The development of a context-free1 approach to control-flow analysis (CFA2) by Vardoulakis and Shivers (2010) provoked a shift in the static analysis of higher-order programs. Prior to CFA2, a precise analysis of recursive behavior had been a challenge—even
though flow analyses have an important role to play in optimization for functional languages, such as flow-driven inlining (Might and Shivers 2006a), interprocedural constant
propagation (Shivers 1991) and type-check elimination (Wright and Jagannathan 1998).
While it had been possible to statically analyze recursion soundly, CFA2 made it possible
to analyze recursion precisely by matching calls and returns without approximating the
stack as k-CFA does. The approximation is only in the binding structure, and not the control
structure of the program. In its pursuit of recursion, clever engineering steered CFA2 to
a theoretically intractable complexity, though in practice it performs well. Its payoff is
significant reductions in analysis time as a result of corresponding increases in precision.
For a visual measure of the impact, Figure 2 renders the abstract transition graph (a
model of all possible traces through the program) for the toy program in Figure 1. For this
example, pushdown analysis eliminates spurious return-flow from the use of recursion. But,
recursion is just one problem of many for flow analysis. For instance, pushdown analysis
still gets tripped up by the spurious cross-flow problem; at calls to (id f) and (id g) in
the previous example, it thinks (id g) could be f or g. CFA2 is not confused in this due
to its precise stack frames, but can be confused by unreachable heap-allocated bindings.
Powerful techniques such as abstract garbage collection (Might and Shivers 2006b) were
developed to address the cross-flow problem (here in a way complementary to CFA2’s
stack frames). The cross-flow problem arises because monotonicity prevents revoking a
judgment like “procedure f flows to x,” or “procedure g flows to x,” once it’s been made.
1
As in context-free language, not a context-insensitive analysis.
(1) without pushdown analysis or abstract GC: 653 states
(2) with pushdown only: 139 states
(3) with GC only: 105 states
(4) with pushdown analysis and abstract GC: 77 states
Fig. 2: We generated an abstract transition graph for the same program from Figure 1 four
times: (1) without pushdown analysis or abstract garbage collection; (2) with only abstract
garbage collection; (3) with only pushdown analysis; (4) with both pushdown analysis and
abstract garbage collection. With only pushdown or abstract GC, the abstract transition
graph shrinks by an order of magnitude, but in different ways. The pushdown-only analysis
is confused by variables that are bound to several different higher-order functions, but
for short durations. The abstract-GC-only is confused by non-tail-recursive loop structure.
With both techniques enabled, the graph shrinks by nearly half yet again and fully recovers
the control structure of the original program.
In fact, abstract garbage collection, by itself, also delivers significant improvements to
analytic speed and precision in many benchmarks. (See Figure 2 again for a visualization
of that impact.)
It is natural to ask: can abstract garbage collection and pushdown analysis work together? Can their strengths be multiplied? At first, the answer appears to be a disheartening “No.”
1.1 The problem: The whole stack versus just the top
Abstract garbage collection seems to require more than pushdown analysis can decidably
provide: access to the full stack. Abstract garbage collection, like its name implies, discards unreachable values from an abstract store during the analysis. Like concrete garbage
collection, abstract garbage collection also begins its sweep with a root set, and like concrete garbage collection, it must traverse the abstract stack to compute that root set. But,
pushdown systems are restricted to viewing the top of the stack (or a bounded depth)—a
condition violated by this traversal.
Fortunately, abstract garbage collection does not need to arbitrarily modify the stack. It
only needs to know the root set of addresses in the stack. This kind of system has been
studied before in the context of compilers that build a symbol table (a so-called “one-way stack automaton” (Ginsburg et al. 1967)), in the context of first-order model-checking
(pushdown systems with checkpoints (Esparza et al. 2003)), and also in the context of
points-to analysis for Java (conditional weighted pushdown systems (CWPDS) (Li and
Ogawa 2010)). We borrow the definition of (unweighted) conditional pushdown system
(CPDS) in this work, though our analysis does not take CPDSs as inputs.
Higher-order flow analyses typically do not take a control-flow graph, or similar preabstracted object, as input and produce an annotated graph as output. Instead, they take a
program as input and “run it on all possible inputs” (abstractly) to build an approximation
of the language’s reduction relation (semantics), specialized to the given program. This
semantics may be non-standard in such a way that extra-semantic information might be
accumulated for later analyses’ consumption. The important distinction between higher-order and first-order analyses is that the model to analyze is built during the analysis,
which involves interpreting the program (abstractly).
When a language’s semantics treats the control stack as an actual stack, i.e., it does not
have features such as first-class continuations, an interpreter can be split into two parts: a
function that takes the current state and returns all next states along with a pushed activation
frame or a marker that the stack is unchanged; and a function that takes the current state, a
possible “top frame” of the stack, and returns the next states after popping this frame. This
separation is crucial for an effective algorithm, since pushed frames are understood from
program text, and popped frames need only be enumerated from a (usually small) set that
we compute along the way.
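As an illustration only, the split can be pictured on a toy control-flow model; the state and frame representations below are hypothetical and are not the semantics used later in the paper (whose reference implementation is in Haskell).

    # Toy model: control states are node names; call edges push a return frame
    # known from the program text, and return edges pop a frame.
    PUSH_EDGES = {"main": {("f_entry", "ret_to_main")}}      # state -> {(next, frame)}
    EPS_EDGES  = {"f_entry": {"f_exit"}}                     # state -> {next}
    POP_EDGES  = {("f_exit", "ret_to_main"): {"main_done"}}  # (state, frame) -> {next}

    def step_no_pop(state):
        # Successors that push a frame (calls) or leave the stack unchanged.
        pushes = {(s, ("push", f)) for (s, f) in PUSH_EDGES.get(state, set())}
        epsilons = {(s, ("nop", None)) for s in EPS_EDGES.get(state, set())}
        return pushes | epsilons

    def step_pop(state, top_frame):
        # Successors reached by popping top_frame; only consulted with frames
        # already seen pushed on some path to this state.
        return POP_EDGES.get((state, top_frame), set())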
Control-state reachability for the straightforward formulation of stack introspection ends
up being uncomputable. Conditional pushdown systems introduce a relatively weak regularity constraint on transitions’ introspection: a CPDS may match the current stack against
a choice of finitely many regular languages of stacks in order to transition from one state to
the next along with the stack action. The general solutions to feasible paths in conditional
pushdown systems enumerate all languages of stacks that a transition may be conditioned
on. This strategy is a non-starter for garbage collection, since we delineate stacks by the
addresses they keep live; this is exponential in the number of addresses. The abstraction
step that finitizes the address space is what makes the problem fall within the realm of
CPDSs, even if the target is so big it barely fits. But abstract garbage collection is special
— we can compute which languages of stacks we need to check against, given the current
state of the analysis. It is therefore possible to fuse the full benefits of abstract garbage
collection with pushdown analysis. The dramatic reduction in abstract transition graph
size from the top to the bottom in Figure 2 (and echoed by later benchmarks) conveys the
impact of this fusion.
Secondary motivations There are four secondary motivations for this work:
1. bringing context-sensitivity to pushdown analysis;
2. exposing the context-freedom of the analysis;
3. enabling pushdown analysis without continuation-passing style; and
4. defining an alternative algorithm for computing pushdown analysis, introspectively or otherwise.
In CFA2, monovariant (0CFA-like) context-sensitivity is etched directly into the abstract
“local” semantics, which is in turn phrased in terms of an explicit (imperative) summarization algorithm for a partitioned continuation-passing style. Our development exposes
the classical parameters (exposed as allocation functions in a semantics) that allow one to
tune the context-sensitivity and polyvariance (accomplishing (1)), thanks to the semantics
of the analysis being formulated in the form of an “abstracted abstract machine” (Van Horn and
Might 2012).
In addition, the context-freedom of CFA2 is buried implicitly inside an imperative summarization algorithm. No pushdown system or context-free grammar is explicitly identified. Thus, a motivating factor for our work was to make the pushdown system in CFA2
explicit, and to make the control-state reachability algorithm purely functional (accomplishing (2)).
A third motivation was to show that a transformation to continuation-passing style is
unnecessary for pushdown analysis. In fact, pushdown analysis is arguably more natural
over direct-style programs. By abstracting all machine components except for the program
stack, it converts naturally and readily into a pushdown system (accomplishing (3)). In
his dissertation, Vardoulakis showed a direct-style version of CFA2 that exploits the metalanguage’s runtime stack to get precise call-return matching. The approach is promising,
but its correctness remains unproven, and it does not apply to generic pushdown systems.
Finally, to bring much-needed clarity to algorithmic formulation of pushdown analysis,
we have included an appendix containing a reference implementation in Haskell (accomplishing (4)). We have kept the code as close in form to the mathematics as possible, so
that where concessions are made to the implementation, they are obvious.
1.2 Overview
We first review preliminaries to set a consistent feel for terminology and notation, particularly with respect to pushdown systems. The derivation of the analysis begins with a concrete CESK-machine-style semantics for A-Normal Form λ-calculus. The next step is an
infinite-state abstract interpretation, constructed by bounding the C(ontrol), E(nvironment)
and S(tore) portions of the machine. Uncharacteristically, we leave the stack component—
the K(ontinuation)—unbounded.
A shift in perspective reveals that this abstract interpretation is a pushdown system.
We encode it as a pushdown automaton explicitly, and pose control state reachability as
a decidable language intersection problem. We then extract a rooted pushdown system
from the pushdown automaton. For completeness, we fully develop pushdown analysis for
higher-order programs, including an efficient algorithm for computing reachable control
states. We go further by characterizing complexity and demonstrating the approximations
necessary to get to a polynomial-time algorithm.
We then introduce abstract garbage collection and quickly find that it violates the pushdown model with its traversals of the stack. To prove the decidability of control-state
reachability, we formulate introspective pushdown systems, and recast abstract garbage
collection within this framework. We then review that control-state reachability is decidable for introspective pushdown systems as well when subjected to a straightforward
regularity constraint.
We conclude with an implementation and empirical evaluation that shows strong synergies between pushdown analysis and abstract garbage collection, including significant
reductions in the size of the abstract state transition graph.
1.3 Contributions
We make the following contributions:
1. Our primary contribution is an online decision procedure for reachability in introspective pushdown systems, with a more efficient specialization to abstract garbage
collection.
2. We show that classical notions of context-sensitivity, such as k-CFA and poly/CFA,
have direct generalizations in a pushdown setting. CFA2 was presented as a monovariant analysis,² whereas we show polyvariance is a natural extension.
3. We make the context-free aspect of CFA2 explicit: we clearly define and identify
the pushdown system. We do so by starting with a classical CESK machine and
systematically abstracting until a pushdown system emerges. We also remove the orthogonal frame-local-bindings aspect of CFA2, so as to focus solely on the pushdown
nature of the analysis.
4. (*) We remove the requirement for a global CPS-conversion by synthesizing the
analysis directly for direct-style (in the form of A-normal form lambda-calculus —
a local transformation).
5. We empirically validate claims of improved precision on a suite of benchmarks.
We find synergies between pushdown analysis and abstract garbage collection that
make the whole greater than the sum of its parts.
² Monovariance refers to an abstraction that groups all bindings to the same variable together: there is one abstract variant for all bindings to each variable.
6. We provide a mirror of the major formal development as working Haskell code in the
appendix. This code illuminates dark corners of pushdown analysis and it provides a
concise formal reference implementation.
(*) The CPS requirement distracts from the connection between continuations and stacks.
We do not discuss call/cc in detail, since we believe there are no significant barriers to
adapting the techniques of Vardoulakis and Shivers (2011) to the direct-style setting, given
related work in Johnson and Van Horn (2013). Languages with exceptions fit within the
pushdown model since a throw can be modeled as “pop until first catch.”
2 Pushdown Preliminaries
The literature contains many equivalent definitions of pushdown machines, so we adapt
our own definitions from Sipser (2005). Readers familiar with pushdown theory may wish
to skip ahead.
2.1 Syntactic sugar
When a triple (x, ℓ, x′) is an edge in a labeled graph:

  x ─ℓ→ x′ ≡ (x, ℓ, x′).

Similarly, when a pair (x, x′) is a graph edge:

  x ─→ x′ ≡ (x, x′).

We use both string and vector notation for sequences:

  a₁a₂ . . . aₙ ≡ ⟨a₁, a₂, . . . , aₙ⟩ ≡ ~a.
2.2 Stack actions, stack change and stack manipulation
Stacks are sequences over a stack alphabet Γ. To reason about stack manipulation concisely, we first turn stack alphabets into “stack-action” sets; each character represents a
change to the stack: push, pop or no change.
For each character γ in a stack alphabet Γ, the stack-action set Γ± contains a push
character γ+ ; a pop character γ− ; and a no-stack-change indicator, ε:
  g ∈ Γ± ::= ε                      [stack unchanged]
           | γ+  for each γ ∈ Γ     [pushed γ]
           | γ−  for each γ ∈ Γ     [popped γ].
In this paper, the symbol g represents some stack action.
When we develop introspective pushdown systems, we are going to need formalisms
for easily manipulating stack-action strings and stacks. Given a string of stack actions, we
can compact it into a minimal string describing net stack change. We do so through the
operator ⌊·⌋ : Γ±* → Γ±*, which cancels out opposing adjacent push–pop stack actions:

  ⌊~g γ+ γ− ~g′⌋ = ⌊~g ~g′⌋
  ⌊~g ε ~g′⌋ = ⌊~g ~g′⌋,
so that ⌊~g⌋ = ~g, if there are no cancellations to be made in the string ~g.
We can convert a net string back into a stack by stripping off the push symbols with the stackify operator, ⌈·⌉ : Γ±* ⇀ Γ*:

  ⌈γ+ γ′+ . . . γ⁽ⁿ⁾+⌉ = ⟨γ⁽ⁿ⁾, . . . , γ′, γ⟩,

and for convenience, [~g] = ⌈⌊~g⌋⌉. Notice the stackify operator is defined for strings containing only push actions.
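For concreteness, the following is a small Haskell sketch of stack actions, the compaction operator ⌊·⌋ and the stackify operator ⌈·⌉. It is our own illustration rather than the paper's appendix code; the names StackAction, compact, stackify and netStack are ours.

-- Stack actions over a stack alphabet g, with compaction and stackify.
data StackAction g = NoChange        -- ε: stack unchanged
                   | Push g          -- γ+
                   | Pop  g          -- γ−
                   deriving (Eq, Show)

-- ⌊·⌋ : cancel opposing adjacent push/pop actions (and drop ε's),
-- yielding a minimal string describing the net stack change.
compact :: Eq g => [StackAction g] -> [StackAction g]
compact = foldr step []
  where
    step NoChange rest                      = rest
    step (Push a) (Pop b : rest) | a == b   = rest
    step act      rest                      = act : rest

-- ⌈·⌉ : turn a net string of pushes into a stack (topmost first).
-- Partial: defined only when the input consists of pushes alone.
stackify :: [StackAction g] -> Maybe [g]
stackify = fmap reverse . traverse push
  where
    push (Push a) = Just a
    push _        = Nothing

-- [~g] = ⌈⌊~g⌋⌉
netStack :: Eq g => [StackAction g] -> Maybe [g]
netStack = stackify . compact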
2.3 Pushdown systems
A pushdown system is a triple M = (Q, Γ, δ ) where:
1. Q is a finite set of control states;
2. Γ is a stack alphabet; and
3. δ ⊆ Q × Γ± × Q is a transition relation.
The set Q × Γ∗ is called the configuration-space of this pushdown system. We use PDS to
denote the class of all pushdown systems.
For the following definitions, let M = (Q, Γ, δ ).
• The labeled transition relation (⟼_M) ⊆ (Q × Γ*) × Γ± × (Q × Γ*) determines whether one configuration may transition to another while performing the given stack action:

    (q, ~γ) ⟼^ε_M (q′, ~γ)         iff  q ─ε→ q′ ∈ δ     [no change]
    (q, γ : ~γ) ⟼^γ−_M (q′, ~γ)    iff  q ─γ−→ q′ ∈ δ    [pop]
    (q, ~γ) ⟼^γ+_M (q′, γ : ~γ)    iff  q ─γ+→ q′ ∈ δ    [push].

• If unlabelled, the transition relation (⟼_M) checks whether any stack action can enable the transition:

    c ⟼_M c′  iff  c ⟼^g_M c′ for some stack action g.

• For a string of stack actions g₁ . . . gₙ:

    c₀ ⟼^{g₁...gₙ}_M cₙ  iff  c₀ ⟼^{g₁}_M c₁ ⟼^{g₂}_M · · · ⟼^{gₙ₋₁}_M cₙ₋₁ ⟼^{gₙ}_M cₙ,

  for some configurations c₀, . . . , cₙ.

• For the transitive closure:

    c ⟼*_M c′  iff  c ⟼^{~g}_M c′ for some action string ~g.
Note   Some texts define the transition relation δ so that δ ⊆ Q × Γ × Q × Γ*. In these texts, (q, γ, q′, ~γ) ∈ δ means, “if in control state q while the character γ is on top, pop the stack, transition to control state q′ and push ~γ.” Clearly, we can convert between these two
representations by introducing extra control states to our representation when it needs to
push multiple characters.
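A minimal Haskell rendering of a pushdown system and its labeled transition relation may help fix intuitions. This is our own sketch (the names PDS, Config and step are not the paper's), and it reuses the StackAction type from the earlier sketch so it stands alone.

-- A pushdown system (Q, Γ, δ) with its configuration transition relation.
import qualified Data.Set as Set

data StackAction g = NoChange | Push g | Pop g
  deriving (Eq, Ord, Show)

-- δ as a set of (source state, stack action, target state) triples.
data PDS q g = PDS
  { states :: Set.Set q
  , delta  :: Set.Set (q, StackAction g, q)
  }

type Config q g = (q, [g])   -- a control state paired with a stack (top first)

-- One step of (⟼_M): every transition whose stack action the stack enables.
step :: (Ord q, Ord g) => PDS q g -> Config q g -> [Config q g]
step m (q, stack) =
  [ c' | (q0, act, q') <- Set.toList (delta m)
       , q0 == q
       , c' <- apply act q' ]
  where
    apply NoChange q' = [(q', stack)]                        -- [no change]
    apply (Push g) q' = [(q', g : stack)]                    -- [push]
    apply (Pop g)  q' = case stack of                        -- [pop]
                          (g' : rest) | g' == g -> [(q', rest)]
                          _                     -> []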
2.4 Rooted pushdown systems
A rooted pushdown system is a quadruple (Q, Γ, δ, q₀) in which (Q, Γ, δ) is a pushdown system and q₀ ∈ Q is an initial (root) state. RPDS is the class of all rooted pushdown systems. For a rooted pushdown system M = (Q, Γ, δ, q₀), we define the reachable-from-root transition relation:

  c ⟾^g_M c′  iff  (q₀, ⟨⟩) ⟼*_M c and c ⟼^g_M c′.

In other words, the root-reachable transition relation also makes sure that the root control state can actually reach the transition.
We overload the root-reachable transition relation to operate on control states:

  q ⟾^g_M q′  iff  (q, ~γ) ⟾^g_M (q′, ~γ′) for some stacks ~γ, ~γ′.

For both root-reachable relations, if we elide the stack-action label, then, as in the un-rooted case, the transition holds if there exists some stack action that enables the transition:

  q ⟾_M q′  iff  q ⟾^g_M q′ for some action g.
2.5 Computing reachability in pushdown systems
A pushdown flow analysis can be construed as computing the root-reachable subset of control states in a rooted pushdown system, M = (Q, Γ, δ, q₀):

  { q : q₀ ⟾_M q }.
Reps et al. and many others provide a straightforward “summarization” algorithm to compute this set (Bouajjani et al. 1997; Kodumal and Aiken 2004; Reps 1998; Reps et al.
2005). We will develop a complete alternative to summarization, and then instrument this
development for introspective pushdown systems. Summarization builds two large tables:
• One maps “calling contexts” to “return sites” (AKA “local continuations”) so that a
returning function steps to all the places it must return to.
• The other maps “calling contexts” to “return states,” so that any place performing a
call with an already analyzed calling context can jump straight to the returns.
This setup requires intimate knowledge of the language in question for where continuations
should be segmented to be “local” and is strongly tied to function call and return. Our
algorithm is based on graph traversals of the transition relation for a generic pushdown
system. It requires no specialized knowledge of the analyzed language, and it avoids the
memory footprint of summary tables.
2.6 Pushdown automata
A pushdown automaton is an input-accepting generalization of a rooted pushdown system, a 7-tuple (Q, Σ, Γ, δ , q0 , F,~γ) in which:
1. Σ is an input alphabet;
2. δ ⊆ Q × Γ± × (Σ ∪ {ε}) × Q is a transition relation;
ZU064-05-FPR
paper-jfp
10
6 February 2018
3:38
J.I. Johnson, I. Sergey, C. Earl, M. Might, and D. Van Horn
3. F ⊆ Q is a set of accepting states; and
4. ~γ ∈ Γ∗ is the initial stack.
We use PDA to denote the class of all pushdown automata.
Pushdown automata recognize languages over their input alphabet. To do so, their transition relation may optionally consume an input character upon transition. Formally, a PDA
M = (Q, Σ, Γ, δ, q₀, F, ~γ) recognizes the language L(M) ⊆ Σ*:

  ε ∈ L(M)   if q₀ ∈ F
  aw ∈ L(M)  if (q₀, γ+, a, q′) ∈ δ and w ∈ L(Q, Σ, Γ, δ, q′, F, γ : ~γ)
  aw ∈ L(M)  if (q₀, ε, a, q′) ∈ δ and w ∈ L(Q, Σ, Γ, δ, q′, F, ~γ)
  aw ∈ L(M)  if (q₀, γ−, a, q′) ∈ δ and w ∈ L(Q, Σ, Γ, δ, q′, F, ~γ′),
             where ~γ = ⟨γ, γ₂, . . . , γₙ⟩ and ~γ′ = ⟨γ₂, . . . , γₙ⟩,

where a is either the empty string ε or a single character.
2.7 Nondeterministic finite automata
In this work, we will need a finite description of all possible stacks at a given control state
within a rooted pushdown system. We will exploit the fact that the set of stacks at a given
control point is a regular language. Specifically, we will extract a nondeterministic finite
automaton accepting that language from the structure of a rooted pushdown system. A
nondeterministic finite automaton (NFA) is a quintuple M = (Q, Σ, δ , q0 , F):
• Q is a finite set of control states;
• Σ is an input alphabet;
• δ ⊆ Q × (Σ ∪ {ε}) × Q is a transition relation.
• q0 is a distinguished start state.
• F ⊆ Q is a set of accepting states.
We denote the class of all NFAs as NFA.
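Since NFAs later serve as finite descriptions of the (regular) sets of stacks at a control state, a tiny Haskell sketch of an NFA and its acceptance check is given below. It is our own illustration; the names Nfa, epsClosure and accepts are ours.

-- An NFA with ε-transitions; acceptance tracks the set of reachable states.
import qualified Data.Set as Set

data Nfa q a = Nfa
  { trans :: Set.Set (q, Maybe a, q)   -- Nothing encodes an ε-transition
  , start :: q
  , final :: Set.Set q
  }

epsClosure :: Ord q => Nfa q a -> Set.Set q -> Set.Set q
epsClosure m qs
  | qs' == qs = qs
  | otherwise = epsClosure m qs'
  where
    qs' = qs `Set.union`
          Set.fromList [ q' | (q, Nothing, q') <- Set.toList (trans m)
                            , q `Set.member` qs ]

accepts :: (Ord q, Eq a) => Nfa q a -> [a] -> Bool
accepts m input = not (Set.null (end `Set.intersection` final m))
  where
    end = foldl consume (epsClosure m (Set.singleton (start m))) input
    consume qs a = epsClosure m (Set.fromList
      [ q' | (q, Just b, q') <- Set.toList (trans m), b == a, q `Set.member` qs ])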
3 Setting: A-Normal Form λ -Calculus
Since our goal is analysis of higher-order languages, we operate on the λ -calculus. To
simplify presentation of the concrete and abstract semantics, we choose A-Normal Form
λ -calculus. (This is a strictly cosmetic choice: all of our results can be replayed mutatis
mutandis in the standard direct-style setting as well. This differs from CFA2’s requirement
of CPS, since ANF can be applied locally whereas CPS requires a global transformation.)
ANF enforces an order of evaluation and it requires that all arguments to a function be atomic:

  e ∈ Exp ::= (let ((v call)) e)           [non-tail call]
            | call                         [tail call]
            | æ                            [return]

  f, æ ∈ Atom ::= v | lam                  [atomic expressions]
  lam ∈ Lam ::= (λ (v) e)                  [lambda terms]
  call ∈ Call ::= (f æ)                    [applications]
  v ∈ Var is a set of identifiers          [variables].

  c ∈ Conf = Exp × Env × Store × Kont      [configurations]
  ρ ∈ Env = Var ⇀ Addr                     [environments]
  σ ∈ Store = Addr → Clo                   [stores]
  clo ∈ Clo = Lam × Env                    [closures]
  κ ∈ Kont = Frame*                        [continuations]
  φ ∈ Frame = Var × Exp × Env              [stack frames]
  a ∈ Addr is an infinite set of addresses [addresses].

  Fig. 3: The concrete configuration-space.
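The ANF grammar translates directly into a Haskell datatype. The sketch below is ours (constructor names are not the paper's) and is used by later sketches in this rewrite.

-- A direct rendering of the ANF grammar above.
type Var = String

data Lam  = Lam Var Exp                 -- (λ (v) e)
  deriving (Eq, Show)

data Atom = Ref Var | AtomLam Lam       -- atomic expressions: v | lam
  deriving (Eq, Show)

data Call = Call Atom Atom              -- applications: (f æ)
  deriving (Eq, Show)

data Exp  = Let Var Call Exp            -- (let ((v call)) e)   [non-tail call]
          | TailCall Call               -- call                 [tail call]
          | Return Atom                 -- æ                    [return]
  deriving (Eq, Show)

-- Example: (let ((x (id id))) x), where id = (λ (y) y).
example :: Exp
example = Let "x" (Call (AtomLam idLam) (AtomLam idLam)) (Return (Ref "x"))
  where idLam = Lam "y" (Return (Ref "y"))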
We use the CESK machine of Felleisen and Friedman (1987) to specify a small-step
semantics for ANF. The CESK machine has an explicit stack, and under a structural abstraction, the stack component of this machine directly becomes the stack component of a
pushdown system. The set of configurations (Conf ) for this machine has the four expected
components (Figure 3).
3.1 Semantics
To define the semantics, we need five items:
1. I : Exp → Conf injects an expression into a configuration:

     c₀ = I(e) = (e, [], [], ⟨⟩).

2. A : Atom × Env × Store ⇀ Clo evaluates atomic expressions:

     A(lam, ρ, σ) = (lam, ρ)     [closure creation]
     A(v, ρ, σ) = σ(ρ(v))        [variable look-up].

3. (⇒) ⊆ Conf × Conf transitions between configurations. (Defined below.)
4. E : Exp → P(Conf) computes the set of reachable machine configurations for a given program:

     E(e) = {c : I(e) ⇒* c}.
5. alloc : Var × Conf → Addr chooses fresh store addresses for newly bound variables.
The address-allocation function is an opaque parameter in this semantics, so that the
forthcoming abstract semantics may also parameterize allocation. The nondeterministic nature of the semantics makes any choice of alloc sound (Might and Manolios
2009). This parameterization provides the knob to tune the polyvariance and context-sensitivity of the resulting analysis. For the sake of defining the concrete semantics,
letting addresses be natural numbers suffices. The allocator can then choose the
lowest unused address:
Addr = N
alloc(v, (e, ρ, σ , κ)) = 1 + max(dom(σ )).
Transition relation   To define the transition c ⇒ c′, we need three rules. The first rule handles tail calls by evaluating the function into a closure, evaluating the argument into a value and then moving to the body of the closure's λ-term:

  c = ([[(f æ)]], ρ, σ, κ)  ⇒  (e, ρ″, σ′, κ) = c′, where
    ([[(λ (v) e)]], ρ′) = A(f, ρ, σ)
    a = alloc(v, c)
    ρ″ = ρ′[v ↦ a]
    σ′ = σ[a ↦ A(æ, ρ, σ)].

Non-tail calls push a frame onto the stack and evaluate the call:

  c = ([[(let ((v call)) e)]], ρ, σ, κ)  ⇒  (call, ρ, σ, (v, e, ρ) : κ) = c′.

Function return pops a stack frame:

  c = (æ, ρ, σ, (v, e, ρ′) : κ)  ⇒  (e, ρ″, σ′, κ) = c′, where
    a = alloc(v, c)
    ρ″ = ρ′[v ↦ a]
    σ′ = σ[a ↦ A(æ, ρ, σ)].
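The three rules above fit in a short Haskell sketch of the concrete CESK step function. This is our own illustration, not the paper's appendix code; it inlines the ANF types from the earlier sketch so it stands alone, and it simplifies the allocator to natural-number addresses as in the text.

import qualified Data.Map as Map

type Var  = String
data Lam  = Lam Var Exp                    deriving (Eq, Show)
data Atom = Ref Var | AtomLam Lam          deriving (Eq, Show)
data Call = Call Atom Atom                 deriving (Eq, Show)
data Exp  = Let Var Call Exp
          | TailCall Call
          | Return Atom                    deriving (Eq, Show)

type Addr  = Int
type Env   = Map.Map Var Addr
type Clo   = (Lam, Env)
type Store = Map.Map Addr Clo
type Frame = (Var, Exp, Env)
type Kont  = [Frame]
type Conf  = (Exp, Env, Store, Kont)

-- I : inject a program into its initial configuration.
inject :: Exp -> Conf
inject e = (e, Map.empty, Map.empty, [])

-- A : evaluate an atomic expression to a closure.
atomic :: Atom -> Env -> Store -> Clo
atomic (AtomLam lam) rho _     = (lam, rho)
atomic (Ref v)       rho sigma = sigma Map.! (rho Map.! v)

-- alloc : pick the lowest unused address.
alloc :: Var -> Conf -> Addr
alloc _ (_, _, sigma, _) = 1 + maximum (0 : Map.keys sigma)

-- (⇒) : tail call, non-tail call, and return.
step :: Conf -> Maybe Conf
step c@(TailCall (Call f ae), rho, sigma, kont) =
  Just (body, rho'', sigma', kont)
  where
    (Lam v body, rho') = atomic f rho sigma
    a      = alloc v c
    rho''  = Map.insert v a rho'
    sigma' = Map.insert a (atomic ae rho sigma) sigma
step (Let v call e, rho, sigma, kont) =
  Just (TailCall call, rho, sigma, (v, e, rho) : kont)
step c@(Return ae, rho, sigma, (v, e, rho') : kont) =
  Just (e, rho'', sigma', kont)
  where
    a      = alloc v c
    rho''  = Map.insert v a rho'
    sigma' = Map.insert a (atomic ae rho sigma) sigma
step _ = Nothing   -- a return with an empty stack is a final state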
4 An Infinite-State Abstract Interpretation
Our first step toward a static analysis is an abstract interpretation into an infinite state-space. To achieve a pushdown analysis, we simply abstract away less than we normally
would. Specifically, we leave the stack height unbounded.
Figure 4 details the abstract configuration-space. To synthesize it, we force addresses to
be a finite set, but crucially, we leave the stack untouched. When we compact the set of
addresses into a finite set, the machine may run out of addresses to allocate, and when it
does, the pigeon-hole principle will force multiple closures to reside at the same address.
As a result, to remain sound we change the range of the store to become a power set in the
abstract configuration-space. The abstract transition relation has components analogous to
those from the concrete semantics:
  ĉ ∈ Ĉonf = Exp × Ênv × Ŝtore × K̂ont       [configurations]
  ρ̂ ∈ Ênv = Var ⇀ Âddr                      [environments]
  σ̂ ∈ Ŝtore = Âddr → P(Ĉlo)                 [stores]
  ĉlo ∈ Ĉlo = Lam × Ênv                     [closures]
  κ̂ ∈ K̂ont = F̂rame*                         [continuations]
  φ̂ ∈ F̂rame = Var × Exp × Ênv               [stack frames]
  â ∈ Âddr is a finite set of addresses     [addresses].

  Fig. 4: The abstract configuration-space.
Program injection   The abstract injection function Î : Exp → Ĉonf pairs an expression with an empty environment, an empty store and an empty stack to create the initial abstract configuration:

  ĉ₀ = Î(e) = (e, [], [], ⟨⟩).

Atomic expression evaluation   The abstract atomic expression evaluator, Â : Atom × Ênv × Ŝtore → P(Ĉlo), returns the value of an atomic expression in the context of an environment and a store; it returns a set of abstract closures:

  Â(lam, ρ̂, σ̂) = {(lam, ρ̂)}     [closure creation]
  Â(v, ρ̂, σ̂) = σ̂(ρ̂(v))          [variable look-up].

Reachable configurations   The abstract program evaluator Ê : Exp → P(Ĉonf) returns all of the configurations reachable from the initial configuration:

  Ê(e) = { ĉ : Î(e) ⇒̂* ĉ }.

Because there are an infinite number of abstract configurations, a naïve implementation of this function may not terminate. Pushdown analysis provides a way of precisely computing this set and both finitely and compactly representing the result.

Transition relation   The abstract transition relation (⇒̂) ⊆ Ĉonf × Ĉonf has three rules, one of which has become nondeterministic. A tail call may fork because there could be multiple abstract closures that it is invoking:

  ĉ = ([[(f æ)]], ρ̂, σ̂, κ̂)  ⇒̂  (e, ρ̂″, σ̂′, κ̂) = ĉ′, where
    ([[(λ (v) e)]], ρ̂′) ∈ Â(f, ρ̂, σ̂)
    â = âlloc(v, ĉ)
    ρ̂″ = ρ̂′[v ↦ â]
    σ̂′ = σ̂ ⊔ [â ↦ Â(æ, ρ̂, σ̂)].
We define all of the partial orders shortly, but for stores:

  (σ̂ ⊔ σ̂′)(â) = σ̂(â) ∪ σ̂′(â).

A non-tail call pushes a frame onto the stack and evaluates the call:

  ĉ = ([[(let ((v call)) e)]], ρ̂, σ̂, κ̂)  ⇒̂  (call, ρ̂, σ̂, (v, e, ρ̂) : κ̂) = ĉ′.

A function return pops a stack frame:

  ĉ = (æ, ρ̂, σ̂, (v, e, ρ̂′) : κ̂)  ⇒̂  (e, ρ̂″, σ̂′, κ̂) = ĉ′, where
    â = âlloc(v, ĉ)
    ρ̂″ = ρ̂′[v ↦ â]
    σ̂′ = σ̂ ⊔ [â ↦ Â(æ, ρ̂, σ̂)].
Allocation: Polyvariance and context-sensitivity   In the abstract semantics, the abstract allocation function âlloc : Var × Ĉonf → Âddr determines the polyvariance of the analysis. In a control-flow analysis, polyvariance literally refers to the number of abstract addresses (variants) there are for each variable. An advantage of this framework over CFA2 is that varying this abstract allocation function instantiates pushdown versions of classical flow analyses. All of the following allocation approaches can be used with the abstract semantics. Note, though only a technical detail, that the concrete address space and allocation would change as well for the abstraction function to still work. The abstract allocation function is a parameter to the analysis.

Monovariance: Pushdown 0CFA   Pushdown 0CFA uses variables themselves for abstract addresses:

  Âddr = Var
  âlloc(v, ĉ) = v.

For better precision, a program would be transformed to have unique binders.

Context-sensitive: Pushdown 1CFA   Pushdown 1CFA pairs the variable with the current expression to get an abstract address:

  Âddr = Var × Exp
  âlloc(v, (e, ρ̂, σ̂, κ̂)) = (v, e).

For better precision, expressions are often uniquely labeled so that textually equal expressions at different points in the program are distinguished.

Polymorphic splitting: Pushdown poly/CFA   Assuming we compiled the program from a programming language with let expressions and we marked which identifiers were let-bound, we can enable polymorphic splitting:

  Âddr = Var + Var × Exp
  âlloc(v, ([[(f æ)]], ρ̂, σ̂, κ̂)) = (v, [[(f æ)]])   if f is let-bound
                                   v                otherwise.

Pushdown k-CFA   For pushdown k-CFA, we need to look beyond the current state and at the last k states, necessarily changing the signature of âlloc to Var × Ĉonf^k → Âddr. By concatenating the expressions in the last k states together, and pairing this sequence with a variable, we get pushdown k-CFA:

  Âddr = Var × Exp^k
  âlloc(v, ⟨(e₁, ρ̂₁, σ̂₁, κ̂₁), . . .⟩) = (v, ⟨e₁, . . . , e_k⟩).
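To make the role of the allocation parameter concrete, here is a small Haskell sketch of monovariant, 1CFA-style and k-CFA-style allocators. The types are simplified stand-ins of our own devising (an expression is reduced to its label), not the paper's definitions.

type Var = String
type ExpLabel = String                  -- stand-in: an expression's label

-- A pruned view of an abstract configuration: only what the allocators need.
newtype AbsConf = AbsConf { confExp :: ExpLabel }

-- Pushdown 0CFA: the address is the variable itself (monovariant).
newtype Addr0 = Addr0 Var
  deriving (Eq, Ord, Show)

alloc0 :: Var -> AbsConf -> Addr0
alloc0 v _ = Addr0 v

-- Pushdown 1CFA: pair the variable with the current expression.
data Addr1 = Addr1 Var ExpLabel
  deriving (Eq, Ord, Show)

alloc1 :: Var -> AbsConf -> Addr1
alloc1 v c = Addr1 v (confExp c)

-- Pushdown k-CFA: pair the variable with the last k expressions
-- (the allocator now receives a history of configurations).
newtype AddrK = AddrK (Var, [ExpLabel])
  deriving (Eq, Ord, Show)

allocK :: Int -> Var -> [AbsConf] -> AddrK
allocK k v history = AddrK (v, take k (map confExp history))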
4.1 Partial orders
For each set X̂ inside the abstract configuration-space, we use the natural partial order, (⊑_X̂) ⊆ X̂ × X̂. Abstract addresses and syntactic sets have flat partial orders. For the other sets, the partial order lifts:

• point-wise over environments:
    ρ̂ ⊑ ρ̂′ iff ρ̂(v) = ρ̂′(v) for all v ∈ dom(ρ̂);
• component-wise over closures:
    (lam, ρ̂) ⊑ (lam, ρ̂′) iff ρ̂ ⊑ ρ̂′;
• point-wise over stores:
    σ̂ ⊑ σ̂′ iff σ̂(â) ⊑ σ̂′(â) for all â ∈ dom(σ̂);
• component-wise over frames:
    (v, e, ρ̂) ⊑ (v, e, ρ̂′) iff ρ̂ ⊑ ρ̂′;
• element-wise over continuations:
    ⟨φ̂₁, . . . , φ̂ₙ⟩ ⊑ ⟨φ̂′₁, . . . , φ̂′ₙ⟩ iff φ̂ᵢ ⊑ φ̂′ᵢ; and
• component-wise across configurations:
    (e, ρ̂, σ̂, κ̂) ⊑ (e, ρ̂′, σ̂′, κ̂′) iff ρ̂ ⊑ ρ̂′ and σ̂ ⊑ σ̂′ and κ̂ ⊑ κ̂′.
4.2 Soundness
To prove soundness, an abstraction map α connects the concrete and abstract configuration-spaces:

  α(e, ρ, σ, κ) = (e, α(ρ), α(σ), α(κ))
  α(ρ) = λv. α(ρ(v))
  α(σ) = λâ. ⨆_{α(a)=â} {α(σ(a))}
  α⟨φ₁, . . . , φₙ⟩ = ⟨α(φ₁), . . . , α(φₙ)⟩
  α(v, e, ρ) = (v, e, α(ρ))
  α(a) is determined by the allocation functions.

It is then easy to prove that the abstract transition relation simulates the concrete transition relation:
Theorem 4.1
If α(c) ⊑ ĉ and c ⇒ c′, then there exists ĉ′ ∈ Ĉonf such that α(c′) ⊑ ĉ′ and ĉ ⇒̂ ĉ′.
Proof
The proof follows by case analysis on the expression in the configuration. It is a straightforward adaptation of similar proofs, such as that of Might (2007) for k-CFA.
5 From the Abstracted CESK Machine to a PDA
In the previous section, we constructed an infinite-state abstract interpretation of the CESK
machine. The infinite-state nature of the abstraction makes it difficult to see how to answer
static analysis questions. Consider, for instance, a control-flow question:

At the call site (f æ), may a closure over lam be called?

If the abstracted CESK machine were a finite-state machine, an algorithm could answer this question by enumerating all reachable configurations and looking for an abstract configuration ([[(f æ)]], ρ̂, σ̂, κ̂) in which (lam, _) ∈ Â(f, ρ̂, σ̂). However, because the abstracted
CESK machine may contain an infinite number of reachable configurations, an algorithm
cannot enumerate them.
Fortunately, we can recast the abstracted CESK as a special kind of infinite-state system:
a pushdown automaton (PDA). Pushdown automata occupy a sweet spot in the theory of
computation: they have an infinite configuration-space, yet many useful properties (e.g.,
word membership, non-emptiness, control-state reachability) remain decidable. Once the
abstracted CESK machine becomes a PDA, we can answer the control-flow question by
checking whether a specific regular language, accounting for the states of interest, when
intersected with the language of the PDA, is nonempty.
The recasting as a PDA is a shift in perspective. A configuration has an expression, an
environment and a store. A stack character is a frame. We choose to make the alphabet the
set of control states, so that the language accepted by the PDA will be sequences of control-states visited by the abstracted CESK machine. Thus, every transition will consume the control-state to which it transitioned as an input character. Figure 5 defines the program-to-PDA conversion function P̂DA : Exp → PDA. (Note the implicit use of the isomorphism Q × K̂ont ≅ Ĉonf.)

  P̂DA(e) = (Q, Σ, Γ, δ, q₀, F, ⟨⟩), where
    Q = Exp × Ênv × Ŝtore
    Σ = Q
    Γ = F̂rame
    (q₀, ⟨⟩) = Î(e)
    F = Q
    (q, ε, q′, q′) ∈ δ      iff  (q, κ̂) ⇒̂ (q′, κ̂) for all κ̂
    (q, φ̂−, q′, q′) ∈ δ     iff  (q, φ̂ : κ̂) ⇒̂ (q′, κ̂) for all κ̂
    (q, φ̂′+, q′, q′) ∈ δ    iff  (q, κ̂) ⇒̂ (q′, φ̂′ : κ̂) for all κ̂.

  Fig. 5: P̂DA : Exp → PDA.
At this point, we can answer questions about whether a specified control state is reachable by formulating a question about the intersection of a regular language with a context-free language described by the PDA. That is, if we want to know whether the control state (e′, ρ̂, σ̂) is reachable in a program e, we can reduce the problem to determining:

  Σ* · (e′, ρ̂, σ̂) · Σ* ∩ L(P̂DA(e)) ≠ ∅,

where L₁ · L₂ is the concatenation of formal languages L₁ and L₂.
Theorem 5.1
Control-state reachability is decidable.
Proof
The intersection of a regular language and a context-free language is context-free (simple
machine product of PDA with DFA). The emptiness of a context-free language is decidable. The decision procedure is easiest for CFGs: mark terminals, mark non-terminals that
reduce to marked (non)terminals until we reach a fixed point. If the start symbol is marked,
then the language is nonempty. The PDA to CFG translation is a standard construction.
Now, consider how to use control-state reachability to answer the control-flow question from earlier. There are a finite number of possible control states in which the λ-term lam may flow to the function f in call site (f æ); let's call this set of states Ŝ:

  Ŝ = { ([[(f æ)]], ρ̂, σ̂) : (lam, ρ̂′) ∈ Â(f, ρ̂, σ̂) for some ρ̂′ }.

What we want to know is whether any state in the set Ŝ is reachable in the PDA. In effect what we are asking is whether there exists a control state q ∈ Ŝ such that:

  Σ* · {q} · Σ* ∩ L(P̂DA(e)) ≠ ∅.

If this is true, then lam may flow to f; if false, then it does not.
Problem: Doubly exponential complexity The non-emptiness-of-intersection approach
establishes decidability of pushdown control-flow analysis. But, two exponential complexity barriers make this technique impractical.
First, there are an exponential number of both environments (|Âddr|^|Var|) and stores ((2^|Ĉlo|)^|Âddr| = 2^(|Ĉlo|×|Âddr|)) to consider for the set Ŝ. On top of that, computing the intersection of a regular language with a context-free language will require enumeration of the (exponential) control-state-space of the PDA. The size of the control-state-space of the PDA is clearly doubly exponential:

  |Q| = |Exp × Ênv × Ŝtore|
      = |Exp| × |Ênv| × |Ŝtore|
      = |Exp| × |Âddr|^|Var| × 2^(|Ĉlo|×|Âddr|)
      = |Exp| × |Âddr|^|Var| × 2^(|Lam×Ênv|×|Âddr|)
      = |Exp| × |Âddr|^|Var| × 2^(|Lam|×|Âddr|^|Var|×|Âddr|).
As a result, this approach is doubly exponential. For the next few sections, our goal will be
to lower the complexity of pushdown control-flow analysis.
6 Focusing on Reachability
In the previous section, we saw that control-flow analysis reduces to the reachability of
certain control states within a pushdown system. We also determined reachability by converting the abstracted CESK machine into a PDA, and using emptiness-testing on a language derived from that PDA. Unfortunately, we also found that this approach is deeply
exponential.
Since control-flow analysis reduced to the reachability of control-states in the PDA,
we skip the language problems and go directly to reachability algorithms of Bouajjani
et al. (1997); Kodumal and Aiken (2004); Reps (1998) and Reps et al. (2005) that determine the reachable configurations within a pushdown system. These algorithms are
even polynomial-time. Unfortunately, some of them are polynomial-time in the number
of control states, and in the abstracted CESK machine, there are an exponential number of
control states. We don’t want to enumerate the entire control state-space, or else the search
becomes exponential in even the best case.
To avoid this worst-case behavior, we present a straightforward pushdown-reachability
algorithm that considers only the reachable control states. We cast our reachability algorithm as a fixed-point iteration, in which we incrementally construct the reachable subset
of a pushdown system. A rooted pushdown system M = (Q, Γ, δ, q₀) is compact if for any (q, g, q′) ∈ δ, it is the case that:

  (q₀, ⟨⟩) ⟼*_M (q, ~γ) for some stack ~γ,

and the domain of states and stack characters are exactly those that appear in δ:

  Q = ⋃ { {q, q′} : (q, g, q′) ∈ δ }
  Γ = { γ : (q, γ+, q′) ∈ δ or (q, γ−, q′) ∈ δ }.
In other words, a rooted pushdown system is compact when its states, transitions and stack
characters appear on legal paths from the initial control state. We will refer to the class of
compact rooted pushdown systems as CRPDS.
We can compact a rooted pushdown system with a map C : RPDS → CRPDS:

  C(M) = C(Q, Γ, δ, q₀) = (Q′, Γ′, δ′, q₀), where
    Q′ = { q : (q₀, ⟨⟩) ⟼*_M (q, ~γ) }
    Γ′ = { γᵢ : (q₀, ⟨⟩) ⟼*_M (q, ~γ) }
    δ′ = { (q, g, q′) : (q₀, ⟨⟩) ⟼^{~g}_M (q, [~g]) ⟼^g_M (q′, [~g g]) }.
In practice, the real difference between a rooted pushdown system and its compact form
is that our original system will be defined intensionally (having come from the components
of an abstracted CESK machine), whereas the compact system will be defined extensionally, with the contents of each component explicitly enumerated during its construction.
Our near-term goals are (1) to convert our abstracted CESK machine into a rooted
pushdown system and (2) to find an efficient method to compact it.
To convert the abstracted CESK machine into a rooted pushdown system, we use the function R̂PDS : Exp → RPDS:

  R̂PDS(e) = (Q, Γ, δ, q₀), where
    Q = Exp × Ênv × Ŝtore
    Γ = F̂rame
    (q₀, ⟨⟩) = Î(e)
    q ─ε→ q′ ∈ δ     iff  (q, κ̂) ⇒̂ (q′, κ̂) for all κ̂
    q ─φ̂−→ q′ ∈ δ    iff  (q, φ̂ : κ̂) ⇒̂ (q′, κ̂) for all κ̂
    q ─φ̂+→ q′ ∈ δ    iff  (q, κ̂) ⇒̂ (q′, φ̂ : κ̂) for all κ̂.
7 Compacting a Rooted Pushdown System
We now turn our attention to compacting a rooted pushdown system (defined intensionally)
into its compact form (defined extensionally). That is, we want to find an implementation
of the function C . To do so, we first phrase the construction as the least fixed point of a
monotonic function. This will provide a method (albeit an inefficient one) for computing
the function C . In the next section, we look at an optimized work-set driven algorithm that
avoids the inefficiencies of this section’s algorithm.
The function F : RPDS → (CRPDS → CRPDS) generates the monotonic iteration function we need:

  F(M) = f, where
    M = (Q, Γ, δ, q₀)
    f(S, Γ, E, s₀) = (S′, Γ, E′, s₀), where
      S′ = S ∪ { s′ : s ∈ S and s ⟾_M s′ } ∪ {s₀}
      E′ = E ∪ { s ─g→ s′ : s ∈ S and s ⟾^g_M s′ }.
Given a rooted pushdown system M, each application of the function F (M) accretes new
edges at the frontier of the system. Once the algorithm reaches a fixed point, the system is
complete:
Theorem 7.1
C (M) = lfp(F (M)).
Proof
Let M = (Q, Γ, δ, q₀). Let f = F(M). Observe that lfp(f) = fⁿ(∅, Γ, ∅, q₀) for some n. When N ⊆ C(M), then it is easy to show that f(N) ⊆ C(M). Hence, C(M) ⊇ lfp(F(M)).
To show C (M) ⊆ lfp(F (M)), suppose this is not the case. Then, there must be at least
one edge in C (M) that is not in lfp(F (M)). Since these edges must be root reachable, let
(s, g, s0 ) be the first such edge in some path from the root. This means that the state s is
in lfp(F (M)). Let m be the lowest natural number such that s appears in f m (M). By the
definition of f , this edge must appear in f m+1 (M), which means it must also appear in
lfp(F (M)), which is a contradiction. Hence, C (M) ⊆ lfp(F (M)).
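The least fixed point used here is just Kleene iteration from a bottom element. A minimal Haskell helper of our own (the names lfp and iterateRPDS are hypothetical, not the paper's) makes the shape explicit; termination relies on the iteration living in a finite space, as argued in the proof.

-- Apply a monotone function repeatedly until the result stops changing.
lfp :: Eq a => (a -> a) -> a -> a
lfp f x
  | x' == x   = x
  | otherwise = lfp f x'
  where x' = f x

-- Usage sketch: iterate an edge-accreting function from the empty system,
--   lfp (iterateRPDS m) (Set.empty, gamma, Set.empty, q0)
-- where iterateRPDS m plays the role of F(M).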
7.1 Complexity: Polynomial and exponential
To determine the complexity of this algorithm, we ask two questions: how many times
would the algorithm invoke the iteration function in the worst case, and how much does
each invocation cost in the worst case? The size of the final system bounds the run-time
of the algorithm. Suppose the final system has m states. In the worst case, the iteration
function adds only a single edge each time. Since there are at most 2|Γ|m2 + m2 edges in
the final graph, the maximum number of iterations is 2|Γ|m2 + m2 .
The cost of computing each iteration is harder to bound. The cost of determining whether
to add a push edge is proportional to the size of the stack alphabet, while the cost of
determining whether to add an ε-edge is constant, so the cost of determining all new
push and ε edges to add is proportional to |Γ|m + m. Determining whether or not to add a pop edge is expensive. To add the pop edge s ─γ−→ s′, we must prove that there exists a
configuration-path to the control state s, in which the character γ is on the top of the stack.
This reduces to a CFL-reachability query (Melski and Reps 2000) at each node, the cost of
which is O(|Γ± |3 m3 ) (Kodumal and Aiken 2004).
To summarize, in terms of the number of reachable control states, the complexity of the
most recent algorithm is:
O((2|Γ|m² + m²) × (|Γ|m + m + |Γ±|³m³)) = O(|Γ|⁴m⁵).
While this approach is polynomial in the number of reachable control states, it is far from
efficient. In the next section, we provide an optimized version of this fixed-point algorithm
that maintains a work-set and an ε-closure graph to avoid spurious recomputation.
Moreover, we have carefully phrased the complexity in terms of “reachable” control
states because, in practice, compact rooted pushdown systems will be extremely sparse,
and because the maximum number of control states is exponential in the size of the input
program. After the subsequent refinement, we will be able to develop a hierarchy of pushdown control-flow analyses that employs widening to achieve a polynomial-time algorithm
at its foundation.
8 An Efficient Algorithm: Work-sets and ε-Closure Graphs
We have developed a fixed-point formulation of the rooted pushdown system compaction
algorithm, but found that, in each iteration, it wasted effort by passing over all discovered
states and edges, even though most will not contribute new states or edges. Taking a cue
from graph search, we can adapt the fixed-point algorithm with a work-set. That is, our next
algorithm will keep a work-set of new states and edges to consider, instead of reconsidering
all of them. We will refer to the compact rooted pushdown system we are constructing as
a graph, since that is how we represent it (Q is the set of nodes, and δ is a set of labeled
edges).
In each iteration, it will pull new states and edges from the work list, insert them into the
graph and then populate the work-set with new states and edges that have to be added as a
consequence of the recent additions.
8.1 ε-closure graphs
Figuring out what edges to add as a consequence of another edge requires care, for adding
an edge can have ramifications on distant control states. Consider, for example, adding the
ε-edge q ─ε→ q′ into the following graph:

  q₀ ─γ+→ q        q′ ─γ−→ q₁

As soon as this edge drops in, an ε-edge “implicitly” appears between q₀ and q₁ because the net stack change between them is empty; the resulting graph looks like:

  q₀ ─γ+→ q ─ε→ q′ ─γ−→ q₁    (implicit: q₀ ⇢ q₁)

where we have drawn the implicit ε-edge as a dotted arrow (⇢).
To keep track of these implicit edges, we will construct a second graph in conjunction
with the graph: an ε-closure graph. In the ε-closure graph, every edge indicates the existence of a no-net-stack-change path between control states. The ε-closure graph simplifies
the task of figuring out which states and edges are impacted by the addition of a new edge.
Formally, an ε-closure graph, H ⊆ N × N, is a set of edges. Of course, all ε-closure
graphs are reflexive: every node has a self loop. We use the symbol ECG to denote the
class of all ε-closure graphs.
We have two notations for finding ancestors and descendants of a state in an ε-closure
graph:
  ←Gε[s] = { s′ : (s′, s) ∈ H } ∪ {s}     [ancestors]
  →Gε[s] = { s′ : (s, s′) ∈ H } ∪ {s}     [descendants].
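A small Haskell sketch of an ε-closure graph and the two lookups (our own illustration; the names ECG, ancestors and descendants are ours):

import qualified Data.Set as Set

-- An ε-closure graph: a set of (source, target) summary edges.
type ECG q = Set.Set (q, q)

-- ←Gε[s]: every s' with an ε-summary edge into s, plus s itself.
ancestors :: Ord q => ECG q -> q -> Set.Set q
ancestors h s =
  Set.insert s (Set.fromList [ s' | (s', t) <- Set.toList h, t == s ])

-- →Gε[s]: every s' with an ε-summary edge out of s, plus s itself.
descendants :: Ord q => ECG q -> q -> Set.Set q
descendants h s =
  Set.insert s (Set.fromList [ s' | (t, s') <- Set.toList h, t == s ])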
8.2 Integrating a work-set
Since we only want to consider new states and edges in each iteration, we need a work-set,
or in this case, three work-sets:
• ∆S contains states to add,
• ∆E contains edges to add,
• ∆H contains new ε-edges.
Let WS ::= (∆S, ∆E, ∆H) be the space of work-sets.
8.3 A new fixed-point iteration-space
Instead of consuming a graph and producing a graph, the new fixed-point iteration function
will consume and produce a graph, an ε-closure graph, and the work-sets. Hence, the
iteration space of the new algorithm is:
ICRPDS = (℘(Q) ×℘(Q × Γ± × Q)) × ECG × WS.
The I in ICRPDS stands for intermediate.
8.4 The ε-closure graph work-list algorithm
The function F′ : RPDS → (ICRPDS → ICRPDS) generates the required iteration function (Figure 6). Please note that we implicitly distribute union across tuples:

  (X, Y) ∪ (X′, Y′) = (X ∪ X′, Y ∪ Y′).
The functions sprout, addPush, addPop, addEmpty (defined shortly) calculate the additional graph edges and ε-closure graph edges (potentially) introduced by a new state or
edge.
  F′(M) = f, where
    M = (Q, Γ, δ, q₀)
    f(G, H, (∆S, ∆E, ∆H)) = (G′, H′, (∆S′ − S′, ∆E′ − E′, ∆H′ − H)), where
      (S, Γ, E, s₀) = G
      (∆E₀, ∆H₀) = ⋃_{s ∈ ∆S} sprout_M(s)
      (∆E₁, ∆H₁) = ⋃_{(s, γ+, s′) ∈ ∆E} addPush_M(G, H)(s, γ+, s′)
      (∆E₂, ∆H₂) = ⋃_{(s, γ−, s′) ∈ ∆E} addPop_M(G, H)(s, γ−, s′)
      (∆E₃, ∆H₃) = ⋃_{(s, ε, s′) ∈ ∆E} addEmpty_M(G, H)(s, s′)
      (∆E₄, ∆H₄) = ⋃_{(s, s′) ∈ ∆H} addEmpty_M(G, H)(s, s′)
      S′ = S ∪ ∆S
      E′ = E ∪ ∆E
      H′ = H ∪ ∆H
      ∆E′ = ∆E₀ ∪ ∆E₁ ∪ ∆E₂ ∪ ∆E₃ ∪ ∆E₄
      ∆S′ = { s′ : (s, g, s′) ∈ ∆E′ } ∪ {s₀}
      ∆H′ = ∆H₀ ∪ ∆H₁ ∪ ∆H₂ ∪ ∆H₃ ∪ ∆H₄
      ∆Γ = { γ : (s, γ+, s′) ∈ ∆E′ }
      G′ = (S ∪ ∆S, Γ ∪ ∆Γ, E′, q₀).

  Fig. 6: The fixed point of the function F′(M) contains the compact form of the rooted pushdown system M.

Sprouting   Whenever a new state gets added to the graph, the algorithm must check whether that state has any new edges to contribute. Both push edges and ε-edges do not depend on the current stack, so any such edges for a state in the pushdown system's transition function belong in the graph. The sprout function:

  sprout_{(Q,Γ,δ,s₀)} : Q → (P(δ) × P(Q × Q)),

checks whether a new state could produce any new push edges or no-change edges; the additions to the work-graph and ε-closure work-graph for a new control state s are:

  add edge s ─ε→ q′   if s ─ε→ q′ is in δ, and
  add edge s ─γ+→ q″  if s ─γ+→ q″ is in δ.

Formally:

  sprout_{(Q,Γ,δ,s₀)}(s) = (∆E, ∆H), where
    ∆E = { s ─ε→ q : s ─ε→ q ∈ δ } ∪ { s ─γ+→ q : s ─γ+→ q ∈ δ }
    ∆H = { (s, q) : s ─ε→ q ∈ δ }.
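A Haskell sketch of sprout, written against the edge representation used in our earlier pushdown-system sketch (this is our own rendering, not the paper's appendix code):

import qualified Data.Set as Set

data StackAction g = NoChange | Push g | Pop g
  deriving (Eq, Ord, Show)

type Edge q g = (q, StackAction g, q)

-- A new control state contributes exactly its ε- and push-edges from δ;
-- pop edges must wait until stack knowledge is available.
sprout :: (Ord q, Ord g)
       => Set.Set (Edge q g)                      -- δ of the pushdown system
       -> q                                       -- the new control state s
       -> (Set.Set (Edge q g), Set.Set (q, q))    -- (∆E, ∆H)
sprout delta s = (deltaE, deltaH)
  where
    outgoing = [ e | e@(q, _, _) <- Set.toList delta, q == s ]
    deltaE   = Set.fromList [ e | e@(_, act, _) <- outgoing, pushOrEps act ]
    deltaH   = Set.fromList [ (s, q') | (_, NoChange, q') <- outgoing ]

    pushOrEps NoChange = True
    pushOrEps (Push _) = True
    pushOrEps (Pop _)  = False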
Considering the consequences of a new push edge   Once our algorithm adds a new push edge to a graph, there is a chance that it will enable new pop edges for the same stack frame somewhere downstream. If and when it does enable pops, it will also add new edges to the ε-closure graph. The addPush function:

  addPush_{(Q,Γ,δ,s₀)} : RPDS × ECG → δ → (P(δ) × P(Q × Q)),

checks for ε-reachable states that could produce a pop: if we add the push-edge s ─γ+→ q, then for every state q′ ∈ →Gε[q] with a pop-edge q′ ─γ−→ q″ in δ, we:

  add edge q′ ─γ−→ q″, and
  add ε-edge (s, q″).

Formally:

  addPush_{(Q,Γ,δ,s₀)}(G, H)(s ─γ+→ q) = (∆E, ∆H), where
    ∆E = { q′ ─γ−→ q″ : q′ ∈ →Gε[q] and q′ ─γ−→ q″ ∈ δ }
    ∆H = { (s, q″) : q′ ∈ →Gε[q] and q′ ─γ−→ q″ ∈ δ }.
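A matching Haskell sketch of addPush (again our own illustration; it repeats the small helper types so the snippet stands alone):

import qualified Data.Set as Set

data StackAction g = NoChange | Push g | Pop g
  deriving (Eq, Ord, Show)

type Edge q g = (q, StackAction g, q)
type ECG  q   = Set.Set (q, q)

descendants :: Ord q => ECG q -> q -> Set.Set q
descendants h s = Set.insert s (Set.fromList [ t | (u, t) <- Set.toList h, u == s ])

-- A new push edge s ─γ+→ q may enable pop edges at ε-descendants of q,
-- each of which also yields an ε-summary edge from s.
addPush :: (Ord q, Ord g)
        => Set.Set (Edge q g)                      -- δ of the pushdown system
        -> ECG q                                   -- current ε-closure graph H
        -> Edge q g                                -- the new push edge (s, γ+, q)
        -> (Set.Set (Edge q g), Set.Set (q, q))    -- (∆E, ∆H)
addPush delta h (s, Push gamma, q) = (deltaE, deltaH)
  where
    pops   = [ (q', q'') | q' <- Set.toList (descendants h q)
                         , (q0, Pop gamma', q'') <- Set.toList delta
                         , q0 == q', gamma' == gamma ]
    deltaE = Set.fromList [ (q', Pop gamma, q'') | (q', q'') <- pops ]
    deltaH = Set.fromList [ (s, q'') | (_, q'') <- pops ]
addPush _ _ _ = (Set.empty, Set.empty)   -- only push edges enable new pops here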
Considering the consequences of a new pop edge   Once the algorithm adds a new pop edge to a graph, it will create at least one new ε-closure graph edge and possibly more by matching up with upstream pushes. The addPop function:

  addPop_{(Q,Γ,δ,s₀)} : RPDS × ECG → δ → (P(δ) × P(Q × Q)),

checks for ε-reachable push-edges that could match this pop-edge: if we add the pop-edge s″ ─γ−→ q, then for every s′ ∈ ←Gε[s″] such that the push-edge s ─γ+→ s′ is already in the graph, we:

  add ε-edge (s, q).

Formally:

  addPop_{(Q,Γ,δ,s₀)}(G, H)(s″ ─γ−→ q) = (∆E, ∆H), where
    ∆E = ∅ and ∆H = { (s, q) : s′ ∈ ←Gε[s″] and s ─γ+→ s′ ∈ G }.
Considering the consequences of a new ε-edge   Once the algorithm adds a new ε-closure graph edge, it may transitively have to add more ε-closure graph edges, and it may connect an old push to (perhaps newly enabled) pop edges. The addEmpty function:

  addEmpty_{(Q,Γ,δ,s₀)} : RPDS × ECG → (Q × Q) → (P(δ) × P(Q × Q)),

checks for newly enabled pops and ε-closure graph edges: consider a push-edge s ─γ+→ s′ in the graph with s′ ∈ ←Gε[s″], and a pop-edge s⁗ ─γ−→ q in δ with s⁗ ∈ →Gε[s‴]. If we add the ε-edge (s″, s‴), we:

  add ε-edge (s, q);
  add edge s⁗ ─γ−→ q; and
  add ε-edges (s′, s‴), (s″, s⁗), and (s′, s⁗).

Formally:

  addEmpty_{(Q,Γ,δ,s₀)}(G, H)(s″, s‴) = (∆E, ∆H), where
    ∆E = { s⁗ ─γ−→ q : s′ ∈ ←Gε[s″] and s⁗ ∈ →Gε[s‴] and
                        s ─γ+→ s′ ∈ G and s⁗ ─γ−→ q ∈ δ }
    ∆H = { (s, q) : s′ ∈ ←Gε[s″] and s⁗ ∈ →Gε[s‴] and
                    s ─γ+→ s′ ∈ G and s⁗ ─γ−→ q ∈ δ }
        ∪ { (s′, s‴) : s′ ∈ ←Gε[s″] }
        ∪ { (s″, s⁗) : s⁗ ∈ →Gε[s‴] }
        ∪ { (s′, s⁗) : s′ ∈ ←Gε[s″] and s⁗ ∈ →Gε[s‴] }.
8.5 Termination and correctness
To prove that a fixed point exists, we show the iteration function is monotonic. The key
observation is that ∆G and ∆H drive all additions to, and are disjoint from, G and H. Since
G and H monotonically increase in a finite space, ∆G and ∆H run out of room (full details
in 18.1). Once the graph reaches a fixed point, all work-sets will be empty, and the ε-closure graph will also be saturated. We can also show that this algorithm is correct by first defining ECG : RPDS → ECG as

  ECG(M) = { (s, s′) : s ⟾^{~g}_M s′ and [~g] = ε }
and stating the following theorem:
Theorem 8.1
For all M ∈ RPDS, C(M) = G and ECG(M) = H, where (G, H, (∅, ∅, ∅)) = lfp(F′(M)).
In the proof of Theorem 8.1, the ⊆ case comes from an invariant lemma we have on F′:

Lemma 8.1
  inv((S, E), H, (∆S, ∆E, ∆H)) = ∀ s ─g→ s′ ∈ E ∪ ∆E. s ⟾^g_M s′
                               ∧ ∀ (s, s′) ∈ H ∪ ∆H. ∃~g. [~g] = ε ∧ ∀κ̂. (s, κ̂) ⟼*_M (s′, κ̂)

The ⊇ case follows from

Lemma 8.2
  For all traces π ≡ s₀ ⟾^{~g}_M s, there is both a corresponding path s₀ ⟾^{~g}_G s and, for all non-empty subtraces s_b ⟾^{~g′}_M s_a of π, if [~g′] = ε then (s_b, s_a) ∈ H.
Since all edges in a compact rooted pushdown system must be in a path from the initial
state, we can extract the edges from said paths using this lemma.
8.6 Complexity: Still exponential, but more efficient
As in the previous case (Section 7.1), to determine the complexity of this algorithm, we
ask two questions: how many times would the algorithm invoke the iteration function in
the worst case, and how much does each invocation cost in the worst case? The run-time of
the algorithm is bounded by the size of the final graph plus the size of the ε-closure graph.
Suppose the final graph has m states. In the worst case, the iteration function adds only a
single edge each time. There are at most 2|Γ|m² + m² edges in the graph (|Γ|m² push edges, just as many pop edges, and m² no-change edges) and at most m² edges in the ε-closure
graph, which bounds the number of iterations. Recall that m can be exponential in the size
of the program, since m ≤ |Q| (and Section 5 derived the exponential size of |Q|).
Next, we must reason about the worst-case cost of adding an edge: how many edges
might an individual iteration consider? In the worst case, the algorithm will consider every
edge in every iteration, leading to an asymptotic time-complexity of:
O((2|Γ|m² + 2m²)²) = O(|Γ|²m⁴).

While still high, this is an improvement upon the previous algorithm. For sparse graphs,
this is a reasonable algorithm.
9 Polynomial-Time Complexity from Widening
In the previous section, we developed a more efficient fixed-point algorithm for computing a compact rooted pushdown system. Even with the core improvements we made,
the algorithm remained exponential in the worst case, owing to the fact that there could
be an exponential number of reachable control states. When an abstract interpretation
is intolerably complex, the standard approach for reducing complexity and accelerating
convergence is widening (Cousot and Cousot 1977). Of course, widening techniques trade
away some precision to gain this speed. It turns out that the small-step variants of finite-state CFAs are exponential without some sort of widening as well (Van Horn and Mairson
2008).
To achieve polynomial time complexity for pushdown control-flow analysis requires the
same two steps as the classical case: (1) widening the abstract interpretation to use a global,
“single-threaded” store and (2) selecting a monovariant allocation function to collapse the
abstract configuration-space. Widening eliminates a source of exponentiality in the size
of the store; monovariance eliminates a source of exponentiality from environments. In
this section, we redevelop the pushdown control-flow analysis framework with a single-threaded store and calculate its complexity.
9.1 Step 1: Refactor the concrete semantics
First, consider defining the reachable states of the concrete semantics using fixed points.
That is, let the system-space of the evaluation function be sets of configurations:
  C ∈ System = P(Conf) = P(Exp × Env × Store × Kont).

We can redefine the concrete evaluation function:

  E(e) = lfp(f_e), where f_e : System → System and
  f_e(C) = {I(e)} ∪ { c′ : c ∈ C and c ⇒ c′ }.
9.2 Step 2: Refactor the abstract semantics
We can take the same approach with the abstract evaluation function, first redefining the
abstract system-space:
  Ĉ ∈ Ŝystem = P(Ĉonf) = P(Exp × Ênv × Ŝtore × K̂ont),

and then the abstract evaluation function:

  Ê(e) = lfp(f̂_e), where f̂_e : Ŝystem → Ŝystem and
  f̂_e(Ĉ) = {Î(e)} ∪ { ĉ′ : ĉ ∈ Ĉ and ĉ ⇒̂ ĉ′ }.
What we’d like to do is shrink the abstract system-space with a refactoring that corresponds
to a widening.
9.3 Step 3: Single-thread the abstract store
We can approximate a set of abstract stores {σ̂₁, . . . , σ̂ₙ} with the least-upper-bound of those stores: σ̂₁ ⊔ · · · ⊔ σ̂ₙ. We can exploit this by creating a new abstract system space in which the store is factored out of every configuration. Thus, the system-space contains a set of partial configurations and a single global store:

  Ŝystem′ = P(P̂Conf) × Ŝtore
  π̂ ∈ P̂Conf = Exp × Ênv × K̂ont.

We can factor the store out of the abstract transition relation as well, so that (⇁_σ̂) ⊆ P̂Conf × (P̂Conf × Ŝtore):

  (e, ρ̂, κ̂) ⇁_σ̂ ((e′, ρ̂′, κ̂′), σ̂′)  iff  (e, ρ̂, σ̂, κ̂) ⇒̂ (e′, ρ̂′, σ̂′, κ̂′),

which gives us a new iteration function, f̂′_e : Ŝystem′ → Ŝystem′:

  f̂′_e(P̂, σ̂) = (P̂′, σ̂′), where
    P̂′ = { π̂′ : π̂ ⇁_σ̂ (π̂′, σ̂″) } ∪ {π̂₀}
    σ̂′ = ⨆ { σ̂″ : π̂ ⇁_σ̂ (π̂′, σ̂″) }
    (π̂₀, ⟨⟩) = Î(e).
9.4 Step 4: Pushdown control-flow graphs
Following the earlier graph reformulation of the compact rooted pushdown system, we
can reformulate the set of partial configurations as a pushdown control-flow graph. A
pushdown control-flow graph is a frame-action-labeled graph over partial control states,
and a partial control state is an expression paired with an environment:
  Ŝystem″ = P̂DCFG × Ŝtore
  P̂DCFG = P(P̂State) × P(P̂State × F̂rame± × P̂State)
  ψ̂ ∈ P̂State = Exp × Ênv.
In a pushdown control-flow graph, the partial control states are partial configurations which
have dropped the continuation component; the continuations are encoded as paths through
the graph.
A preliminary analysis of complexity Even without defining the system-space iteration
function, we can ask, How many iterations will it take to reach a fixed point in the worst
case? This question is really asking, How many edges can we add? And, How many entries
are there in the store? Summing these together, we arrive at the worst-case number of
iterations:
  |P̂State| × |F̂rame±| × |P̂State|    (PDCFG edges)
  + |Âddr| × |Ĉlo|                   (store entries).

With a monovariant allocation scheme that eliminates abstract environments, the number of iterations ultimately reduces to:

  |Exp| × (2|Var| + 1) × |Exp| + |Var| × |Lam|,
which means that, in the worst case, the algorithm makes a cubic number of iterations with respect to the size of the input program.³
The worst-case cost of each iteration would be dominated by a CFL-reachability calculation, which, in the worst case, must consider every state and every edge:

  O(|Var|³ × |Exp|³).

Thus, each iteration takes O(n⁶) and there are a maximum of O(n³) iterations, where n is the size of the program. So, total complexity would be O(n⁹) for a monovariant pushdown control-flow analysis with this scheme, where n is again the size of the program. Although this algorithm is polynomial-time, we can do better.
9.5 Step 5: Reintroduce ε-closure graphs
Replicating the evolution from Section 8 for this store-widened analysis, we arrive at a
more efficient polynomial-time analysis. An ε-closure graph in this setting is a set of pairs
of store-less, continuation-less partial states:
  ÊCG = P(P̂State × P̂State).

Then, we can set the system space to include ε-closure graphs:

  Ŝystem‴ = ĈRPDS × ÊCG × Ŝtore.
Before we redefine the iteration function, we need another factored transition relation. The stack- and action-factored transition relation (⇁_σ̂^g) ⊆ P̂State × P̂State × Ŝtore determines if a transition is possible under the specified store and stack-action:

  (e, ρ̂) ⇁_σ̂^φ̂+ ((e′, ρ̂′), σ̂′)  iff  (e, ρ̂, σ̂, κ̂) ⇒̂ (e′, ρ̂′, σ̂′, φ̂ : κ̂)
  (e, ρ̂) ⇁_σ̂^φ̂− ((e′, ρ̂′), σ̂′)  iff  (e, ρ̂, σ̂, φ̂ : κ̂) ⇒̂ (e′, ρ̂′, σ̂′, κ̂)
  (e, ρ̂) ⇁_σ̂^ε  ((e′, ρ̂′), σ̂′)  iff  (e, ρ̂, σ̂, κ̂) ⇒̂ (e′, ρ̂′, σ̂′, κ̂).
Now, we can redefine the iteration function (Figure 7).
Theorem 9.1
Pushdown 0CFA with single-threaded store (PDCFA) can be computed in O(n⁶)-time,
where n is the size of the program.
Proof
As before, the maximum number of iterations is cubic in the size of the program for a
monovariant analysis. Fortunately, the cost of each iteration is also now bounded by the
number of edges in the graph, which is also cubic.
³ In computing the number of frames, we note that in every continuation, the variable and the expression uniquely determine each other based on the let-expression from which they both came. As a result, the number of abstract frames available in a monovariant analysis is bounded by both the number of variables and the number of expressions, i.e., |F̂rame| = |Var|.
  f̂_e((P̂, Ê), Ĥ, σ̂) = ((P̂′, Ê′), Ĥ′, σ̂″), where

    T̂+ = { (ψ̂ ─φ̂+→ ψ̂′, σ̂′) : ψ̂ ⇁_σ̂^φ̂+ (ψ̂′, σ̂′) }
    T̂ε = { (ψ̂ ─ε→ ψ̂′, σ̂′) : ψ̂ ⇁_σ̂^ε (ψ̂′, σ̂′) }
    T̂− = { (ψ̂″ ─φ̂−→ ψ̂‴, σ̂′) : ψ̂″ ⇁_σ̂^φ̂− (ψ̂‴, σ̂′) and
                               ψ̂ ─φ̂+→ ψ̂′ ∈ Ê and (ψ̂′, ψ̂″) ∈ Ĥ }
    T̂′ = T̂+ ∪ T̂ε ∪ T̂−
    Ê′ = { ê : (ê, _) ∈ T̂′ }
    σ̂″ = ⨆ { σ̂′ : (_, σ̂′) ∈ T̂′ }
    Ĥε = { (ψ̂, ψ̂″) : (ψ̂, ψ̂′) ∈ Ĥ and (ψ̂′, ψ̂″) ∈ Ĥ }
    Ĥ+− = { (ψ̂, ψ̂‴) : ψ̂ ─φ̂+→ ψ̂′ ∈ Ê and (ψ̂′, ψ̂″) ∈ Ĥ and ψ̂″ ─φ̂−→ ψ̂‴ ∈ Ê }
    Ĥ′ = Ĥε ∪ Ĥ+−
    P̂′ = P̂ ∪ { ψ̂′ : ψ̂ ─g→ ψ̂′ ∈ Ê′ } ∪ {(e, ⊥)}.

  Fig. 7: An ε-closure graph-powered iteration function for pushdown control-flow analysis with a single-threaded store.
10 Introspection for Abstract Garbage Collection
Abstract garbage collection (Might and Shivers 2006b) yields large improvements in precision by using the abstract interpretation of garbage collection to make more efficient
use of the finite address space available during analysis. Because of the way abstract
garbage collection operates, it grants exact precision to the flow analysis of variables whose
bindings die between invocations of the same abstract context. Because pushdown analysis
grants exact precision in tracking return-flow, it is clearly advantageous to combine these
techniques. Unfortunately, as we shall demonstrate, abstract garbage collection breaks the
pushdown model by requiring a full traversal of the stack to discover the root set.
Abstract garbage collection modifies the transition relation to conduct a “stop-and-copy”
garbage collection before each transition. To do this, we define a garbage collection function Ĝ : Ĉonf → Ĉonf on configurations:

  Ĝ(ĉ) = Ĝ(e, ρ̂, σ̂, κ̂) = (e, ρ̂, σ̂|Reachable(ĉ), κ̂),
where the pipe operation f|S yields the function f, but with inputs not in the set S mapped to bottom—the empty set. The reachability function Reachable : Ĉonf → P(Âddr) first computes the root set, and then the transitive closure of an address-to-address adjacency relation:

  Reachable(ĉ) = Reachable(e, ρ̂, σ̂, κ̂) = { â : â′ ∈ Root(ĉ) and â′ ⇝*_σ̂ â },

where the function Root : Ĉonf → P(Âddr) finds the root addresses:

  Root(e, ρ̂, σ̂, κ̂) = range(ρ̂) ∪ StackRoot(κ̂),

and the StackRoot : K̂ont → P(Âddr) function finds roots down the stack:

  StackRoot⟨φ̂₁, . . . , φ̂ₙ⟩ = ⋃ᵢ T(φ̂ᵢ),

using a “touches” function, T : F̂rame → P(Âddr):

  T(v, e, ρ̂) = range(ρ̂),

and the relation (⇝_σ̂) ⊆ Âddr × Âddr × Ŝtore connects adjacent addresses:

  â ⇝_σ̂ â′ iff there exists (lam, ρ̂) ∈ σ̂(â) such that â′ ∈ range(ρ̂).

The new abstract transition relation is thus the composition of abstract garbage collection with the old transition relation:

  (⇒̂_GC) = (⇒̂) ∘ Ĝ.
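The root-and-closure computation above has a direct Haskell reading. The sketch below is our own illustration with simplified stand-in types (a closure is a λ-label paired with an environment); the names touches, roots, reachable and gc are ours.

import qualified Data.Map as Map
import qualified Data.Set as Set

type Var   = String
type Addr  = String                      -- e.g. monovariant: an address is a variable
type Env   = Map.Map Var Addr
data Clo   = Clo String Env              -- a λ-term label and its environment
  deriving (Eq, Ord, Show)
type Store = Map.Map Addr (Set.Set Clo)
type Frame = (Var, String, Env)          -- (v, e, ρ̂), with e as a label
type Kont  = [Frame]

-- T : a frame touches the addresses in the range of its environment.
touches :: Frame -> Set.Set Addr
touches (_, _, rho) = Set.fromList (Map.elems rho)

-- Root : environment range plus everything the stack touches.
roots :: Env -> Kont -> Set.Set Addr
roots rho kont =
  Set.fromList (Map.elems rho) `Set.union` Set.unions (map touches kont)

-- One hop of the address adjacency relation ⇝_σ̂.
adjacent :: Store -> Addr -> Set.Set Addr
adjacent sigma a =
  Set.unions [ Set.fromList (Map.elems rho)
             | Clo _ rho <- Set.toList (Map.findWithDefault Set.empty a sigma) ]

-- Reachable : transitive closure of adjacency from the root set.
reachable :: Store -> Set.Set Addr -> Set.Set Addr
reachable sigma seen
  | next == seen = seen
  | otherwise    = reachable sigma next
  where next = seen `Set.union` Set.unions (map (adjacent sigma) (Set.toList seen))

-- Ĝ on the store component: restrict the store to the live addresses.
gc :: Env -> Store -> Kont -> Store
gc rho sigma kont = Map.restrictKeys sigma live
  where live = reachable sigma (roots rho kont)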
Problem: Stack traversal violates pushdown constraint In the formulation of pushdown systems, the transition relation is restricted to looking at the top frame, and in less
restricted formulations that may read the stack, the reachability decision procedures need
the entire system up-front. Thus, the relation (⇒̂_GC) cannot be computed as a straightforward pushdown analysis using summarization.

Solution: Introspective pushdown systems   To accommodate the richer structure of the relation (⇒̂_GC), we now define introspective pushdown systems. Once defined, we can
embed the garbage-collecting abstract interpretation within this framework, and then focus
on developing a control-state reachability algorithm for these systems.
An introspective pushdown system is a quadruple M = (Q, Γ, δ , q0 ):
1. Q is a finite set of control states;
2. Γ is a stack alphabet;
3. δ ⊆ Q × Γ* × Γ± × Q is a transition relation (where (q, κ, φ−, q′) ∈ δ implies κ ≡ φ : κ′); and
4. q0 is a distinguished root control state.
The second component in the transition relation is a realizable stack at the given controlstate. This realizable stack distinguishes an introspective pushdown system from a general
pushdown system. IPDS denotes the class of all introspective pushdown systems.
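At the type level, the difference from an ordinary pushdown system is small but consequential. The Haskell sketch below is our own; it only contrasts the shapes of the two transition relations.

import qualified Data.Set as Set

data StackAction g = NoChange | Push g | Pop g
  deriving (Eq, Ord, Show)

-- Ordinary pushdown system: δ ⊆ Q × Γ± × Q.
type PdsDelta q g  = Set.Set (q, StackAction g, q)

-- Introspective pushdown system: δ ⊆ Q × Γ* × Γ± × Q, where the Γ*
-- component is the realizable stack the transition is conditioned on.
-- Enumerating concrete stacks like this is what makes naive reachability
-- intractable; the refinement below restricts the condition to regular
-- languages of stacks.
type IpdsDelta q g = Set.Set (q, [g], StackAction g, q)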
Determining how (or if) a control state q transitions to a control state q′ requires knowing a path taken to the state q. We concern ourselves with root-reachable states. When M = (Q, Γ, δ, q₀), if there is a κ̂ such that (q₀, ⟨⟩) ⟼*_M (q, κ̂), we say q is reachable via κ̂, where:

  ─────────────────────
  (q, κ̂) ⟼*_M (q, κ̂)

  (q, κ̂) ⟼*_M (q′, κ̂′)    (q′, κ̂′, g′, q″) ∈ δ
  ────────────────────────────────────────────
  (q, κ̂) ⟼*_M (q″, [κ̂′₊ g′])
10.1 Garbage collection in introspective pushdown systems
To convert the garbage-collecting, abstracted CESK machine into an introspective pushdown system, we use the function ÎPDS : Exp → IPDS:

  ÎPDS(e) = (Q, Γ, δ, q0), where
  Q = Exp × Env̂ × Storê
  Γ = Framê
  (q0, ⟨⟩) = Î(e)
  (q, κ̂, ε, q′) ∈ δ iff Ĝ(q, κ̂) ⇒ (q′, κ̂)
  (q, φ̂ : κ̂, φ̂−, q′) ∈ δ iff Ĝ(q, φ̂ : κ̂) ⇒ (q′, κ̂)
  (q, κ̂, φ̂+, q′) ∈ δ iff Ĝ(q, κ̂) ⇒ (q′, φ̂ : κ̂).
11 Problem: Reachability for Introspective Pushdown Systems is Uncomputable
As currently formulated, computing control-state reachability for introspective pushdown
systems is uncomputable. The problem is that the transition relation expects to enumerate
every possible stack for every control point at every transition, without restriction.
Theorem 11.1
Reachability in introspective pushdown systems is uncomputable.
Proof
Consider an IPDS with two states, searching (the start state) and valid, and a singleton stack alphabet of unit (⊤). For any first-order logic proposition φ, we can define a reduction relation that interprets the length of the stack as an encoding of a proof of φ. If the length encodes an ill-formed proof object, or a proof of something other than φ, searching pushes ⊤ on the stack and transitions to itself. If the length encodes a proof of φ, searching transitions to valid. By the completeness of first-order logic, if φ is valid, there is a finite proof, making the pushdown system terminate in valid. If φ is not valid, then there is no proof and valid is unreachable. By the undecidability of first-order validity, there can be no decision procedure for reachability of IPDSs.
To make introspective pushdown systems computable, we must first refine our definition
of introspective pushdown systems to operate on sets of stacks and insist these sets be
regular.
A conditional pushdown system (CPDS) is a quadruple M = (Q, Γ, δ , q0 ):
1. Q is a finite set of control states;
2. Γ is a stack alphabet;
3. δ ⊆fin Q × REG(Γ*) × Γ± × Q is a transition relation (with the same restriction on stacks); and
4. q0 is a distinguished root control state,
where REG(S) is the set of all regular languages formable with strings in S.
The regularity constraint on the transition relations guarantees that we can decide applicability of transition rules at each state, since (as we will see) the set of all stacks that reach
a state in a CPDS has decidable overlap with regular languages. Let CPDS denote the set
of all conditional pushdown systems.
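For illustration only, such a regular condition could be carried around as a DFA over frames, along the lines of the sketch below; the StackDFA name and representation are assumptions made for this example and are not the representation used in our implementation.

```scala
object RegularStackCondition {
  // A regular condition on stacks, given as a total DFA over the frame alphabet.
  // In a CPDS, a rule (q, K̂, g, q′) carries such a condition in place of the
  // explicit Γ* component of an introspective rule.
  final case class StackDFA[F, S](start: S, accepting: Set[S], step: (S, F) => S) {
    def accepts(stack: List[F]): Boolean = accepting(stack.foldLeft(start)(step))
  }
}
```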
The rules for reachability with respect to sets of stacks are similar to those for IPDSs: (q, κ̂) ↦*M (q, κ̂), and if (q, κ̂) ↦*M (q′, κ̂′), κ̂′ ∈ K̂′, and (q′, K̂′, g′, q″) ∈ δ, then (q, κ̂) ↦*M (q″, [κ̂′ + g′]).
We will write q ↠M q′ via (K̂, g) to mean that there are κ̂ and K̂ such that q is reachable via κ̂, κ̂ ∈ K̂, and (q, K̂, g, q′) ∈ δ; we write q ↠M q′ and omit the labels when they merely exist.
11.1 Garbage collection in conditional pushdown systems
Of course, we must adapt abstract garbage collection to this refined framework. To convert the garbage-collecting, abstracted CESK machine into a conditional pushdown system, we use the function ÎPDS′ : Exp → CPDS:

  ÎPDS′(e) = (Q, Γ, δ, q0), where
  Q = Exp × Env̂ × Storê
  Γ = Framê
  and, for all sets of addresses A ⊆ Addr̂, letting K̂ = {κ̂ : StackRoot(κ̂) = A}:
  (q, K̂, ε, q′) ∈ δ iff Ĝ(q, κ̂) ⇒ (q′, κ̂) for any κ̂ ∈ K̂
  (q, K̂, φ̂−, q′) ∈ δ iff Ĝ(q, φ̂ : κ̂) ⇒ (q′, κ̂) for any φ̂ : κ̂ ∈ K̂
  (q, K̂, φ̂+, q′) ∈ δ iff Ĝ(q, κ̂) ⇒ (q′, φ̂ : κ̂) for any κ̂ ∈ K̂
  (q0, ⟨⟩) = Î(e).
Assuming we can overcome the difficulty of computing with some representation of a set of stacks, the intuition for the decidability of control-state reachability with garbage collection stems from two observations: garbage collection operates on sets of addresses, and for any given control point there are only finitely many such sets of addresses. This finiteness makes the definition of δ fit the finiteness restriction of CPDSs. The regularity of K̂ (for any given A, which we recall is a finite set) is apparent from a simple construction: let the DFA control states represent the subsets of A, with ∅ the start state and A the accepting state. Transition from A′ ⊆ A to A′ ∪ T(φ̂) for each φ̂ (no transition if the result
is not a subset of A). Thus any string of frames that has a stack root of A (and only A) gets
accepted.
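The construction can be phrased as a short Scala sketch that simply runs the DFA just described over a stack; the Frame type here carries its touched addresses directly, which is an assumption made for the example.

```scala
object RootSetDFA {
  type Addr = Int
  final case class Frame(touched: Set[Addr]) // T(φ̂) for this frame

  // Decide whether a stack has root set exactly A: the DFA states are the subsets
  // of A, the start state is ∅, the accepting state is A, and reading φ̂ moves a
  // state A′ to A′ ∪ T(φ̂), getting stuck if some touched address escapes A.
  def hasRootSet(stack: List[Frame], a: Set[Addr]): Boolean = {
    def step(state: Option[Set[Addr]], f: Frame): Option[Set[Addr]] =
      state.flatMap { s =>
        val next = s ++ f.touched
        if (next.subsetOf(a)) Some(next) else None
      }
    stack.foldLeft(Option(Set.empty[Addr]))(step).contains(a)
  }
}
```

For instance, hasRootSet(List(Frame(Set(1)), Frame(Set(2))), Set(1, 2)) holds, while the same stack is rejected for the root set Set(1).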
The last challenge to consider before we can delve into the mechanics of computing
reachable control states is how to represent the sets of stacks that may be paired with
each control state. Fortunately, a regular language can describe the stacks that share the
same root addresses, the set of stacks at a control point is recognized by a one-way
non-deterministic stack automaton (1NSA), and, fortuitously, non-empty overlap of these
two is decidable (but NP-hard (Rounds 1973)). The 1NSA describing the set of stacks
at a control point is already encoded in the structure of the (augmented) CRPDS that we
will accumulate while computing reachable control states. As we develop an algorithm for
control-state reachability, we will exploit this insight (Section 13).
12 Reachability in Conditional Pushdown Systems
We will show a progression of constructions that take us along the following line:

  CPDS ──(§12.2)──→ CCPDS ──specialize (§12.3)──→ PDCFA with GC ──→ approx. PDCFA with GC
In the first construction, we show that a CCPDS is finitely constructible in a similar fashion to Section 7. The key is to take the current introspective CRPDS and “read off” an automaton that describes the stacks accepted at each state. For traditional pushdown systems, this is always an NFA, but introspection adds another feature: transition if the string accepted so far is accepted by a given NFA. Such power falls outside of standard NFAs and into one-way non-deterministic stack automata (1NSAs). (The set of reachable states of a 1NSA is known to be regular, but the set of paths is not.) These automata enjoy closure under finite intersection with regular languages and decidable emptiness checking (Ginsburg et al. 1967), which we use to decide applicability of transition rules. If the stacks realizable at q have a non-empty intersection with a set of stacks K̂ in a rule (q, K̂, g, q′) ∈ δ, then there are paths from the start state to q that further reach q′.
The structure of the GC problem allows us to sidestep the 1NSA constructions and more directly compute state reachability. We specialize to garbage collection in subsection 12.3. We finally show a space-saving approximation that our implementation uses.
12.1 One-way non-deterministic stack automata
The machinery we use for describing the realizable stacks at a state is a generalized
pushdown automaton itself. A stack automaton is permitted to move a cursor up and down
the stack and read frames (left and right on the input if two-way, only right if one-way),
but only push and pop when the stack cursor is at the top. Formally, a one-way stack
automaton is a 6-tuple A = (Q, Σ, Γ, δ , q0 , F) where
1. Q is a finite nonempty set of states,
2. Σ is a finite nonempty input alphabet,
3. Γ is a finite nonempty stack alphabet,
4. δ ⊆ Q × (Γ ∪ {ε}) × (Σ ∪ {ε}) × {↑, ·, ↓} × Γ± × Q is the transition relation,
5. q0 ∈ Q is the start state, and
6. F ⊆ Q is the set of final states.
An element of the transition relation, (q, φε, a, d, φ±, q′), should be read as, “if at q the right of the stack cursor is prefixed by φε and the input is prefixed by a, then consume a of the input, transition to state q′, move the stack cursor in direction d, and if at the top of the stack, perform stack action φ±.” This reading translates into a run relation on instantaneous descriptions, Q × (Γ* × Γ*) × Σ*. These descriptions are essentially machine states that hold the current control state, the stack split around the cursor, and the rest of the input. If (q, φε, a, d, φ±, q′) ∈ δ, φε ⊑ ΓT, w ≡ aw′, and [Γ′B, Γ′T] = P(φ±, D(d, [ΓB, ΓT])), then

  (q, [ΓB, ΓT], w) ↦ (q′, [Γ′B, Γ′T], w′),

where

  P(φ+, [ΓB, φ′ε]) = [ΓB φ′ε, φ]            D(↑, [ΓB, φ ΓT]) = [ΓB φ, ΓT]
  P(φ−, [ΓB φ′, φ]) = [ΓB, φ′]              D(↓, [ΓB φ, ΓT]) = [ΓB, φ ΓT]
  P(φ−, [ε, φ]) = [ε, ε]
  P(φ±, ΓB,T) = ΓB,T otherwise              D(d, ΓB,T) = ΓB,T otherwise

The meta-functions P and D perform the stack actions and direct the stack cursor, respectively. A string w is thus accepted by a 1NSA A iff there are q ∈ F and ΓB, ΓT ∈ Γ* such that

  (q0, [ε, ε], w) ↦* (q, [ΓB, ΓT], ε).
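A hedged Scala sketch of the meta-functions P and D follows, representing the stack split around the cursor explicitly; the Frame alias and the action and direction types are simplifications introduced for this example only.

```scala
object StackAutomatonStep {
  type Frame = Char

  sealed trait Dir
  case object Up   extends Dir
  case object Down extends Dir
  case object Stay extends Dir

  sealed trait Act
  final case class Push(f: Frame) extends Act
  final case class Pop(f: Frame)  extends Act
  case object NoAct extends Act

  // The stack split around the cursor: `below` is ΓB (bottom of the stack up to
  // just below the cursor) and `above` is ΓT (the frame at the cursor up to the top).
  final case class Split(below: Vector[Frame], above: Vector[Frame])

  // D: move the cursor; identity when the move is impossible.
  def move(d: Dir, s: Split): Split = (d, s) match {
    case (Up,   Split(b, f +: t)) => Split(b :+ f, t)
    case (Down, Split(b :+ f, t)) => Split(b, f +: t)
    case _                        => s
  }

  // P: push or pop, but only when the cursor is at the top of the stack
  // (i.e., `above` holds at most one frame); identity otherwise.
  def act(a: Act, s: Split): Split = (a, s) match {
    case (Push(f), Split(b, t)) if t.size <= 1 => Split(b ++ t, Vector(f))
    case (Pop(_),  Split(b :+ f2, Vector(_)))  => Split(b, Vector(f2))
    case (Pop(_),  Split(Vector(), Vector(_))) => Split(Vector(), Vector())
    case _                                     => s
  }
}
```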
Next we develop an introspective form of compact rooted pushdown systems that use
1NSAs for realizable stacks, and prove a correspondence with conditional pushdown systems.
12.2 Compact conditional pushdown systems
Similar to rooted pushdown systems, we say a conditional pushdown system G = (S, Γ, E, s0 )
is compact if all states, frames and edges are on some path from the root. We will refer to
this class of conditional pushdown systems as CCPDS. Assuming we have a way to decide
overlap between the set of realizable stacks at a state and a regular language of stacks, we
can compute the CCPDS in much the same way as in Section 7.

  F(M) = f, where M = (Q, Γ, δ, q0) and, writing G for the argument (S, Γ, E, s0),

  f(S, Γ, E, s0) = (S′, Γ, E′, s0), where
  S′ = S ∪ { s′ : s ∈ S and s ↠M s′ } ∪ { s0 }
  E′ = E ∪ { s ─g→ s′ : s ∈ S, s ↠M s′ via (K̂, g), and Stacks(G)(s) ∩ K̂ ≠ ∅ }.
ZU064-05-FPR
paper-jfp
36
6 February 2018
3:38
J.I. Johnson, I. Sergey, C. Earl, M. Might, and D. Van Horn
The function Stacks : CCPDS → S → 1NSA performs the stack extraction with a construction that inserts the stack-checking NFA for each reduction rule after it has run the
cursor to the bottom of the stack, and continues from the final states to the state dictated
by the rule (added by meta-function gadget). All the stack manipulations from s0 to s are
ε-transitions in terms of reading input; only once control reaches s do we check if the stack
is the same as the input, which captures the notion of a stack realizable at s. Once control
reaches s, we run down to the bottom of the stack again, and then match the stack against
the input; complete matches are accepted. To determine the bottom and top of the stack,
we add distinct sentinel symbols to the stack alphabet, ¢ and $.
  Stacks(G)(s) = Stacks(S, Γ, E, s0)(s) = (S ∪ S′, Γ, Γ ∪ {¢, $}, δ, sstart, {sfinal}), where
  sstart, sdown, scheck, sfinal are fresh, and S′, δ are the smallest sets such that
  {sstart, sdown, scheck, sfinal} ⊆ S′
  (sstart, ε, ε, ·, ¢+, s0) ∈ δ
  gadget(s′, K̂, γ±, s″) ⊑ (δ, S′) if (s′, K̂, γ±, s″) ∈ E
  (s, ε, ε, ·, $+, sdown) ∈ δ
  (sdown, ε, ε, ↓, ε, sdown) ∈ δ
  (sdown, ¢, ε, ↑, ε, scheck) ∈ δ
  (scheck, a, a, ↑, ε, scheck) ∈ δ, for a ∈ Γ ∪ {ε}
  (scheck, $, ε, ↑, ε, sfinal) ∈ δ
The first rule changes the initial state to initialize the stack with the “bottom” sentinel.
Every reduction of the CPDS is given the gadget discussed above and explained below.
The last five rules are what implement the final checking of stack against input. When at
the state we are recognizing realizable stacks for, the machine will have the cursor at the
top of the stack, so we push the “top” sentinel before moving the cursor all the way down
to the bottom. When sdown finds the bottom sentinel at the cursor, it moves the cursor past
it to start the exact matching in scheck . If the cursor matches the input exactly, we consume
the input and move the cursor past the matched character to start again. When scheck finds
the top sentinel, it transitions to the final state; if the input is not completely exhausted, the
machine will get stuck and not accept.
ZU064-05-FPR
paper-jfp
6 February 2018
3:38
PUSHDOWN FLOW ANALYSIS WITH ABSTRACT GARBAGE COLLECTION
37
  gadget(s, K̂, γ±, s′) = (δ′, Q ∪ {qdown, qout}), where
  N = (Q, Σ, δ, q0, F) is a fresh NFA recognizing K̂, and qdown, qout are fresh states:
  (q, a, ε, ↑, ε, q′) ∈ δ′ if (q, a, q′) ∈ δ, a ∈ Σ
  (q, ε, ε, ·, ε, q′) ∈ δ′ if (q, ε, q′) ∈ δ
  (q, $, ε, ·, $−, qout) ∈ δ′ if q ∈ F
  (qout, ε, ε, ·, γ±, s′) ∈ δ′
  (s, ε, ε, ·, $+, qdown) ∈ δ′
  (qdown, ε, ε, ↓, ε, qdown) ∈ δ′
  (qdown, ¢, ε, ↑, ε, q0) ∈ δ′
We explain each rule in order. When the NFA that recognizes K̂ consumes a character,
the stack automaton should similarly read the character on the stack and move the cursor
along. If the NFA makes an ε-transition, the stack automaton should make one as well, without moving
the stack cursor. When this sub-machine N is in a final state, the cursor should be at the
top of the stack (if indeed it matched), so we pop off the top sentinel and proceed to do
the stack action the IPDS does when transitioning to the next state. The last three rules
implement the same “run down to the bottom” gadget used before, when matching the
stack against the input.
Finally, we can show that states are reachable in a conditional pushdown system iff they are reached in their corresponding CCPDS. Consider a map CC : CPDS → CCPDS such that, given a conditional pushdown system M = (Q, Γ, δ, q0), its equivalent CCPDS is CC(M) = (S, Γ, E, q0), where S contains the reachable nodes:

  S = { q : (q0, ⟨⟩) ↦*M (q, κ̂) }

and the set E contains the reachable edges:

  E = { q ─g→ q′ : q ↠M q′ via (K̂, g) }.
Theorem 12.1 (Computable reachability)
For all M ∈ CPDS, CC(M) = lfp(F(M)).
Proof in Appendix 18.2.
Corollary 12.1 (Realizable stacks of CPDSs are recognized by 1NSAs)
For all M = (Q, Γ, δ, q0) ∈ CPDS, with G = (S, Γ, E, q0) = lfp(F(M)), we have (q0, ⟨⟩) ↦*M (q, κ̂) iff q ∈ S and Stacks(G)(q) accepts κ̂.
12.3 Simplifying garbage collection in conditional pushdown systems
The decision problems on 1NSAs are computationally intractable in general, but luckily GC is a special problem where we do not need the full power of 1NSAs.
  f̂e((P̂, Ê), Ĥ) = ((P̂′, Ê′), Ĥ′), where

  Ê+ = { (ψ̂, A) ─φ̂+→ (ψ̂′, A ∪ T(φ̂)) : ψ̂ ↪[A, φ̂+] ψ̂′ }
  Êε = { (ψ̂, A) ─ε→ (ψ̂′, A) : ψ̂ ↪[A, ε] ψ̂′ }
  Ê− = { (ψ̂″, A) ─φ̂−→ (ψ̂‴, A′) : ψ̂″ ↪[A, φ̂−] ψ̂‴ and (ψ̂, A′) ─φ̂+→ (ψ̂′, A) ∈ Ê and (ψ̂′, A) ⇢ (ψ̂″, A) ∈ Ĥ }
  Ê′ = Ê+ ∪ Êε ∪ Ê−
  Ĥε = { Ω̂ ⇢ Ω̂″ : Ω̂ ⇢ Ω̂′ ∈ Ĥ and Ω̂′ ⇢ Ω̂″ ∈ Ĥ }
  Ĥ+− = { Ω̂ ⇢ Ω̂‴ : Ω̂ ─φ̂+→ Ω̂′ ∈ Ê and Ω̂′ ⇢ Ω̂″ ∈ Ĥ and Ω̂″ ─φ̂−→ Ω̂‴ ∈ Ê }
  Ĥ′ = Ĥε ∪ Ĥ+−
  P̂′ = P̂ ∪ { Ω̂′ : Ω̂ ─g→ Ω̂′ } ∪ { ((e, ⊥, ⊥), ∅) }

Fig. 8: An ε-closure graph-powered iteration function for pushdown garbage-collecting control-flow analysis. Here ψ̂ ↪[A, g] ψ̂′ is the garbage-collected transition relation on program states (defined in the text below), annotated with the stack root set A and the stack action g; Ω̂ ─g→ Ω̂′ denotes a labelled edge between ornamented states and Ω̂ ⇢ Ω̂′ an ε-summary edge.
There are equally precise techniques at much lower cost, and less precise techniques that can shrink the explored state space. (The added precision of GC with tighter working sets makes the state-space comparison between the two approaches non-binary; neither approach is clearly better in terms of performance.) The transition relation we build does not enumerate all sets of addresses, but instead queries the graph for the sets of addresses it should consider in order to apply GC. A fully precise method to manage the stack root addresses is to add the root addresses to the representation of each state and update them incrementally. The root addresses can be seen as the representation of K̂ in edge labels, but to maintain precision the set must also distinguish control states. This addition to the state space is an effective reification of the stack filtering that the conditional pushdown system performs.
An approximative method is not to distinguish control states, but rather to traverse the graph backward through ε-closure edges and push edges, and collect the root addresses through the pushed frames. As more paths are discovered to control states, more stacks will be realizable there, which adds more stack root addresses to consider as the relation steps forward. For soundness, edges must still be labeled with the language of stacks they are valid for, since they can become invalid as more stacks reach control states. Notice that root sets of addresses are isomorphic to the languages of stacks that have the given root set, so we can use sets of addresses as the language representation.
We consider both methods in turn, augmenting the compaction algorithm from subsection 9.5. Each has program states that consist of the expression, environment, and store: ψ̂ ∈ PStatê = Exp × Env̂ × Storê. Since GC is a non-monotonic operation, stores cannot
be shared globally without sacrificing the precision benefits of GC. For the first method, program states additionally include the stack root set of addresses; we will call these ornamented program states, Ω̂ ∈ OPStatê = PStatê × P(Addr̂). We show the non-worklist solution to computing reachability by employing the function f̂e defined in Figure 8.
In order to define the iteration function, we need a refactored transition relation ψ̂ ↪[A, g] ψ̂′ ⊆ PStatê × PStatê, annotated with a root set A and a stack action g and defined as follows:

  (e, ρ̂, σ̂) ↪[A, φ̂+] (e′, ρ̂′, σ̂′) iff StackRoot(κ̂) = A and Ĝ(e, ρ̂, σ̂, κ̂) ⇒ (e′, ρ̂′, σ̂′, φ̂ : κ̂)
  (e, ρ̂, σ̂) ↪[A, φ̂−] (e′, ρ̂′, σ̂′) iff StackRoot(φ̂ : κ̂) = A and Ĝ(e, ρ̂, σ̂, φ̂ : κ̂) ⇒ (e′, ρ̂′, σ̂′, κ̂)
  (e, ρ̂, σ̂) ↪[A, ε] (e′, ρ̂′, σ̂′) iff StackRoot(κ̂) = A and Ĝ(e, ρ̂, σ̂, κ̂) ⇒ (e′, ρ̂′, σ̂′, κ̂)
Theorem 12.2 (Correctness of GC specialization)
lfp(f̂e) completely abstracts CC(ÎPDS′(e)).
The approximative method changes the representation of edges in the graph to contain sets of addresses, Ê ∈ Edgê = P(PStatê × Framê± × P(Addr̂) × PStatê). We also add in a sub-fixed-point computation for t̂ : (PStatê → P(Addr̂)) → (PStatê → P(Addr̂)), to traverse the graph and collect the union of all stack roots for stacks realizable at a state. Although we show a non-worklist solution here (in Figure 9) so as not to be distracting, this solution will not compute the same reachable states as a worklist solution, due to the ever-growing stack roots at each state. Only states in the worklist would need to be analyzed at the larger stack root sets. In other words, the non-worklist solution potentially throws in more live addresses at states that would otherwise not need to be re-examined.
This approximation is not an easily described introspective pushdown system, since the root sets it uses depend on the iteration state, in particular on what frames have reached a state so far, regardless of the stack filtering the original CPDS performs. The regular sets of stacks acceptable at some state can be extracted a posteriori from the fixed point of the function f̂e′ defined in Figure 9, if so desired. The next theorem follows from the fact that R(ψ̂) ⊇ A for any represented (ψ̂, A).
Theorem 12.3 (Approximate GC is sound)
lfp(f̂e′) approximates lfp(f̂e).
The last thing to notice is that by disregarding the filtering, the stack root set can grow larger and render previous GCs unsound, since more addresses can end up live than were previously considered. Thus we label edges with the root set for which the GC was considered, in order not to make false predictions.
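To illustrate the approximative computation, here is a minimal Scala sketch of t̂ as a naive fixed-point iteration over a graph of push and ε-summary edges. The PState, Frame and Graph types are stand-ins introduced for this sketch, and the real analysis interleaves this computation with the construction of the graph rather than running it to a fixed point separately.

```scala
object StackRootFixpoint {
  type Addr   = Int
  type PState = String                        // stand-in for a program state (e, ρ̂, σ̂)
  final case class Frame(touched: Set[Addr])  // T(φ̂) for the pushed frame

  // Push edges ψ' ─φ+→ ψ and ε-summary edges ψ' ⇢ ψ of the current graph.
  final case class Graph(pushEdges: Set[(PState, Frame, PState)],
                         epsEdges:  Set[(PState, PState)])

  type Roots = Map[PState, Set[Addr]]

  // One application of t̂: a state's stack roots are what its push-predecessors
  // contribute (their roots plus the pushed frame's touched addresses), together
  // with the roots of its ε-predecessors.
  def step(g: Graph, r: Roots): Roots = {
    val states: Set[PState] =
      g.pushEdges.flatMap(e => Set(e._1, e._3)) ++ g.epsEdges.flatMap(e => Set(e._1, e._2))
    states.map { q =>
      val fromPush = g.pushEdges.collect { case (p, f, `q`) => f.touched ++ r.getOrElse(p, Set.empty[Addr]) }
      val fromEps  = g.epsEdges.collect  { case (p, `q`)    => r.getOrElse(p, Set.empty[Addr]) }
      q -> (fromPush.flatten ++ fromEps.flatten)
    }.toMap
  }

  // R = lfp(t̂), by naive iteration; the lattice is finite, so iteration terminates.
  def stackRoots(g: Graph): Roots = {
    @annotation.tailrec
    def loop(r: Roots): Roots = { val r2 = step(g, r); if (r2 == r) r else loop(r2) }
    loop(Map.empty[PState, Set[Addr]])
  }
}
```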
13 Implementing Introspective Pushdown Analysis with Garbage Collection
The reachability-based analysis for a pushdown system described in the previous section
requires two mutually-dependent pieces of information in order to add another edge:
  f̂e′((P̂, Ê), Ĥ) = ((P̂′, Ê′), Ĥ′), where

  Ê+ = { ψ̂ ─[A, φ̂+]→ ψ̂′ : A = R(ψ̂) and ψ̂ ↪[A, φ̂+] ψ̂′ }
  Êε = { ψ̂ ─[A, ε]→ ψ̂′ : A = R(ψ̂) and ψ̂ ↪[A, ε] ψ̂′ }
  Ê− = { ψ̂″ ─[A, φ̂−]→ ψ̂‴ : A = R(ψ̂″), ψ̂″ ↪[A, φ̂−] ψ̂‴, ψ̂ ─[A′, φ̂+]→ ψ̂′ ∈ Ê, and ψ̂′ ⇢ ψ̂″ ∈ Ĥ }
  Ê′ = Ê+ ∪ Êε ∪ Ê−
  Ĥε = { ψ̂ ⇢ ψ̂″ : ψ̂ ⇢ ψ̂′ ∈ Ĥ and ψ̂′ ⇢ ψ̂″ ∈ Ĥ }
  Ĥ+− = { ψ̂ ⇢ ψ̂‴ : ψ̂ ─[A, φ̂+]→ ψ̂′ ∈ Ê, ψ̂′ ⇢ ψ̂″ ∈ Ĥ, and ψ̂″ ─[A′, φ̂−]→ ψ̂‴ ∈ Ê }
  Ĥ′ = Ĥε ∪ Ĥ+−
  P̂′ = P̂ ∪ { ψ̂′ : ψ̂ ─[A, g]→ ψ̂′ } ∪ { (e, ⊥, ⊥) }

  t̂(R) = λψ̂. ⋃{ T(φ̂) ∪ R(ψ̂′) : ψ̂′ ─[A, φ̂+]→ ψ̂ ∈ Ê } ∪ ⋃{ R(ψ̂′) : ψ̂′ ⇢ ψ̂ ∈ Ĥ }
  R = lfp(t̂)

Fig. 9: Approximate pushdown garbage-collecting control-flow analysis. Edges ψ̂ ─[A, g]→ ψ̂′ carry both the stack action g and the root set A for which they were computed.
1. The topmost frame on a stack for a given control state q. This is essential for return transitions, as this frame should be popped from the stack, and the store and the environment of the caller should be updated accordingly.
2. Whether a given control state q is reachable or not from the initial state q0 along
realizable sequences of stack actions. For example, a path from q0 to q along edges
labeled “push, pop, pop, push” is not realizable: the stack is empty after the first pop,
so the second pop cannot happen—let alone the subsequent push.
Knowing about a possible topmost frame on a stack and initial-state reachability is
enough for a classic pushdown reachability summarization to proceed one step further,
and we presented an efficient algorithm to compute those in Section 8. However, to deal
with the presence of an abstract GC in a conditional PDS, we add:
3. For a given control state q, what are the touched addresses of all possible frames that
could happen to be on the stack at the moment the CPDS is in the state q?
The crucial addition to the algorithm is maintaining, for each node q′ in the CRPDS, a set of ε-predecessors, i.e., nodes q such that q ↠M q′ via ~g with [~g] = ε. In fact, only two of the three kinds of transitions can cause a change to the set of ε-predecessors of a particular node: the addition of an ε-edge or of a pop edge to the CRPDS.
One can notice a subtle mutual dependency between the computation of ε-predecessors and of top frames during the construction of a CRPDS. Informally:
• A top frame for a state q can be pushed by a direct predecessor (e.g., q follows a nested let-binding), or by a direct predecessor of an ε-predecessor (e.g., q is in tail position and will return to a waiting let-binding).
• When a new ε-edge q ─ε→ q′ is added, all ε-predecessors of q also become ε-predecessors of q′. That is, ε-summary edges are transitive.
• When a pop edge q ─γ−→ q′ is added, new ε-predecessors of a state q1 can be obtained by checking whether q′ is an ε-predecessor of q1 and examining all existing ε-predecessors of q for which γ is a possible top frame: this situation is similar to the one depicted in the example above.
The third component, the touched addresses of all possible frames on the stack for a state q, is straightforward to compute with ε-predecessors: starting from q, trace out only the edges that are labeled ε (summary or otherwise) or γ+. The frame of any γ+ action in this trace is a possible stack frame. Since these sets grow monotonically, it is easy to cache the results of the trace and, in fact, to propagate incremental changes to these caches when new ε-summary or γ+ edges are introduced. This implementation strategy captures the approximative approach to performing GC, as discussed in the previous section. Our implementation directly reflects the optimizations discussed above.
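A simplified Scala sketch of the backward trace is shown below; unlike our implementation, it recomputes the touched set from scratch rather than maintaining incrementally updated caches, and the Node and Frame types are placeholders for this example.

```scala
object TouchedAddressTrace {
  type Addr = Int
  type Node = String                          // a CRPDS control state (placeholder)
  final case class Frame(touched: Set[Addr])

  // Incoming edges of the CRPDS, indexed by target node: ε-edges (direct or
  // summary) and push edges carrying the pushed frame.
  final case class Incoming(eps:  Map[Node, Set[Node]],
                            push: Map[Node, Set[(Node, Frame)]])

  // Touched addresses of all frames that may be on the stack at `q`: walk
  // backwards from q over ε- and push-edges, collecting T(γ) for every push
  // edge crossed along the way.
  def stackTouched(in: Incoming, q: Node): Set[Addr] = {
    @annotation.tailrec
    def loop(todo: List[Node], seen: Set[Node], acc: Set[Addr]): Set[Addr] = todo match {
      case Nil                  => acc
      case n :: rest if seen(n) => loop(rest, seen, acc)
      case n :: rest =>
        val epsPreds  = in.eps.getOrElse(n, Set.empty[Node])
        val pushPreds = in.push.getOrElse(n, Set.empty[(Node, Frame)])
        loop(rest ++ epsPreds ++ pushPreds.map(_._1),
             seen + n,
             acc ++ pushPreds.flatMap(_._2.touched))
    }
    loop(List(q), Set.empty, Set.empty)
  }
}
```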
14 Experimental Evaluation
A fair comparison between different families of analyses should compare both precision and speed. We have implemented a version of k-CFA for a subset of R5RS Scheme and instrumented it so that pushdown analysis, abstract garbage collection, or both can be optionally enabled. Our implementation source (in Scala) and benchmarks are available:
http://github.com/ilyasergey/reachability
In the experiments, we have focused on the version of k-CFA with a per-state store (i.e., without widening), since in the presence of a single-threaded store the effect of abstract GC is neutralized by merging. For non-widened versions of k-CFA, as expected, the fused analysis does at least as well as the best of either analysis alone in terms of singleton flow sets (a good metric for program optimizability) and better than both in some cases. Also worthy of note is the dramatic reduction in the size of the abstract transition graph for the fused analysis, even on top of the already large reductions achieved by abstract garbage collection and pushdown flow analysis individually. The size of the abstract transition graph is a good heuristic measure of the temporal reasoning ability of the analysis, e.g., its ability to support model-checking of safety and liveness properties (Might et al. 2007).
Table 1: Benchmark results for toy programs (mj09: 19 expressions, 8 variables; eta: 21, 13; kcfa2: 20, 10; kcfa3: 25, 13; blur: 40, 20; loop2: 41, 14; sat: 51, 23). The first three columns give the name of a benchmark and the number of expressions and variables in the ANF of the program, respectively. For each of the eight combinations of pushdown analysis on or off, k ∈ {0, 1}, and garbage collection on or off, the first two columns in a group show the number of control states and of transitions/CRPDS edges computed during the analysis (for both, less is better). The third column gives the number of singleton variables, i.e., how many variables have a single lambda flowing to them (more is better). Inequalities mark runs of plain k-CFA that explored more than 10^5 configurations (i.e., control states coupled with continuations) or did not finish within 30 minutes; for such runs we do not report singleton variables.
14.1 Plain k-CFA vs. pushdown k-CFA
In order to exercise both well-known and newly-presented instances of CESK-based CFAs,
we took a series of small benchmarks exhibiting archetypal control-flow patterns (see
Table 1). Most benchmarks are taken from the CFA literature: mj09 is a running example from the work of Midtgaard and Jensen, designed to exhibit non-trivial return-flow behavior (Midtgaard 2007); eta and blur test common functional idioms, mixing closures and eta-expansion; kcfa2 and kcfa3 are two worst-case examples extracted from the proof of k-CFA’s EXPTIME-hardness (Van Horn and Mairson 2008); loop2 is an example from Might’s dissertation that was used to demonstrate the impact of abstract GC (Might 2007, Section 13.3); and sat is a brute-force SAT solver with backtracking.
14.1.1 Comparing precision
In terms of precision, the fusion of pushdown analysis and abstract garbage collection
substantially cuts abstract transition graph sizes over one technique alone.
We also measure singleton flow sets as a heuristic metric for precision. Singleton flow
sets are a necessary precursor to optimizations such as flow-driven inlining, type-check
elimination and constant propagation. It is essential to notice that for the experiments in
Table 1 our implementation computed the sets of values (i.e., closures) assigned to each variable (as opposed to mere syntactic lambdas). This is why in some cases the results computed by 0CFA appear to be better than those computed by 1CFA: the latter may examine more states with different environments, which results in exploring more distinct values, whereas the former simply collapses all these values to a single lambda (Might et al. 2010). What is important is that, for a fixed k, the fused analysis prevails as the best of, or better than, both worlds.
Running the benchmarks, we re-validated hypotheses about the improvements to precision granted by both pushdown analysis (Vardoulakis and Shivers 2010) and abstract garbage collection (Might 2007). Table 1 contains our detailed results on the precision of the analysis. In order to make the comparison fair, the table reports the numbers of control states, which do not contain a stack component and are the nodes of the constructed CRPDS. In the case of plain k-CFA, control states are coupled with stack pointers to obtain configurations, and the resulting number of configurations is significantly larger.
The SAT-solving benchmark showed a dramatic improvement with the addition of context-sensitivity. Evaluation of the results showed that context-sensitivity provided enough fuel
to eliminate most of the non-determinism from the analysis.
14.1.2 Comparing speed
In the original work on CFA2, Vardoulakis and Shivers present experimental results with a remark that the running time of the analysis is proportional to the number of reachable states (Vardoulakis and Shivers 2010, Section 6). There is a similar correlation in our fused analysis, but with higher variance due to the live-address-set computation that GC performs. Since most of the programs from our toy suite run for less than a second, we do not report absolute times. Instead, the histogram in Figure 10 presents normalized relative analysis times. In our observation, the plain machine-style k-CFA is always significantly worse in terms of execution time than either the GC-enabled or the pushdown variant, so we excluded the plain, non-optimized k-CFA from the comparison.
Our earlier implementation of a garbage-collecting pushdown analysis (Earl et al. 2012) did not fully exploit the opportunities for caching ε-predecessors, as described in Section 13. This led to significant inefficiencies of the garbage-collecting analyzer with respect to the regular k-CFA, even though the former observed fewer states and in some cases found more singleton variables. After this issue was fixed, it became clear that in all cases the GC-optimized analyzer is strictly faster than its non-optimized pushdown counterpart.
Although caching of ε-predecessors and ε-summary edges is relatively cheap, it is not
free, since maintaining the caches requires some routine machinery at each iteration of
the analyzer. This explains the loss in performance of the garbage-collecting pushdown
analysis with respect to the GC-optimized k-CFA.
As the plot shows, the fused analysis is always faster than the non-garbage-collecting pushdown analysis, and about a fifth of the time it beats k-CFA with garbage collection in terms of performance. When the fused analysis is slower than the GC-optimized one alone, it is generally not much worse than twice as slow as the next slowest analysis. Given the already
Fig. 10: Analysis times relative to worst (= 1) in class; smaller is better. At the top is the monovariant 0CFA class of analyses, at the bottom is the polyvariant 1CFA class of analyses. (Non-GC k-CFA omitted.)
substantial reductions in analysis times provided by abstract garbage collection and pushdown analysis, the amortized penalty is a small and acceptable price to pay for the improvements to precision.
14.2 Analyzing real-world programs with garbage-collecting pushdown CFA
Even though our prototype implementation is just a proof of concept, we evaluated it not only on a suite of toy programs tailored to particular functional programming patterns, but also on a set of real-world programs.
Table 2: Benchmark results of PDCFA on real-world programs (primtest: 155 expressions, 44 variables, 16 singletons; rsa: 211, 93, 36; regex: 344, 150, 44; scm2java: 1135, 460, 63). The first four columns give the name of a program, the number of expressions and variables in the ANF of the program, and the number of singleton variables revealed by the analysis (the same in all cases). For each of the four combinations of k ∈ {0, 1} and garbage collection on or off, the first two columns in a group show the number of visited control states and edges, respectively, and the third shows the absolute running time of the analysis (for all, less is better). Times are given in minutes (′) or seconds (″); ∞ marks an analysis that was interrupted because its execution time exceeded 30 minutes.
To set up this experiment, we chose four programs dealing with numeric and symbolic computations:
• primtest – an implementation of probabilistic Fermat and Solovay-Strassen primality testing in Scheme, for the purpose of large-prime generation;
• rsa – an implementation of the RSA public-key cryptosystem;
• regex – an implementation of a regular-expression matcher in Scheme using Brzozowski derivatives (Might et al. 2011; Owens et al. 2009);
• scm2java – a Scheme-to-Java compiler.
We ran our benchmark suite on a 2.7 GHz Intel Core i7 OS X machine with 8 GB of RAM. Unfortunately, k-CFA without global stores timed out on most of these examples (i.e., did not finish within 30 minutes), so we had to exclude it from the comparison and focus on the effect of the pushdown analyzer only. Table 2 presents the results of running the benchmarks for k ∈ {0, 1} with the garbage collector on and off. Surprisingly, for each of the programs, the configurations that terminated within 30 minutes found the same number of global singleton variables (of course, the numbers of states explored for each program by the different analyses were different, and there were variations in the cardinalities of function parameters, which we do not report on here). However, the numbers of observed states and the runtimes do differ in most cases, except for scm2java, for which all four versions of the analysis were precise enough to actually evaluate the program: happily, there was no reuse of abstract addresses, which resulted in the absence of forking during the CRPDS construction. In other words, the program scm2java, which used no scalar data other than strings being concatenated and was given a simple input, was evaluated precisely, as confirmed by the number of visited control states and edges.
Time-wise, the results of the experiment demonstrate the general positive effect of abstract garbage collection in a pushdown setting, which can improve analysis performance by more than two orders of magnitude.
15 Discussion: Applications
Pushdown control-flow analysis offers more precise control-flow analysis results than the
classical finite-state CFAs. Consequently, introspective pushdown control-flow analysis
improves flow-driven optimizations (e.g., constant propagation, global register allocation,
inlining (Shivers 1991)) by eliminating more of the false positives that block their application.
The more compelling applications of pushdown control-flow analysis are those which
are difficult to drive with classical control-flow analysis. Perhaps not surprisingly, the best
examples of such analyses are escape analysis and interprocedural dependence analysis.
Both of these analyses are limited by a static analyzer’s ability to reason about the stack,
the core competency of introspective pushdown control-flow analysis. (We leave an in-depth formulation and study of these analyses to future work.)
15.1 Escape analysis
In escape analysis, the objective is to determine whether a heap-allocated object is safely
convertible into a stack-allocated object. In other words, the compiler is trying to figure out
whether the frame in which an object is allocated outlasts the object itself. In higher-order
languages, closures are candidates for escape analysis.
Determining whether all closures over a particular λ-term lam may be stack-allocated is
straightforward: find the control states in the compact rooted pushdown system in which
closures over lam are being created, then find all control states reachable from these states
over only ε-edge and push-edge transitions. Call this set of control states the “safe” set.
Now find all control states which are invoking a closure over lam. If any of these control
states lies outside of the safe set, then stack-allocation may not be safe; if, however, all
invocations lie within the safe set, then stack-allocation of the closure is safe.
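Assuming the compact rooted pushdown system is available as a graph of forward ε- and push-successors, this procedure is a plain reachability check, sketched in Scala below; the Node type and the shape of Graph are assumptions made for the example, not our implementation's data structures.

```scala
object EscapeAnalysisSketch {
  type Node = String  // a control state of the compact rooted pushdown system (placeholder)

  // Forward ε- and push-successors of each control state.
  final case class Graph(epsSucc: Map[Node, Set[Node]], pushSucc: Map[Node, Set[Node]])

  // The "safe" set: control states reachable from the creation sites over
  // ε- and push-edges only.
  def safeSet(g: Graph, creationSites: Set[Node]): Set[Node] = {
    @annotation.tailrec
    def loop(seen: Set[Node], frontier: Set[Node]): Set[Node] =
      if (frontier.isEmpty) seen
      else {
        val next = frontier.flatMap(n =>
          g.epsSucc.getOrElse(n, Set.empty[Node]) ++ g.pushSucc.getOrElse(n, Set.empty[Node])) -- seen
        loop(seen ++ next, next)
      }
    loop(creationSites, creationSites)
  }

  // Stack allocation of closures over lam is safe iff every state invoking such a
  // closure lies inside the safe set.
  def stackAllocatable(g: Graph, creationSites: Set[Node], callSites: Set[Node]): Boolean =
    callSites.subsetOf(safeSet(g, creationSites))
}
```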
15.2 Interprocedural dependence analysis
In interprocedural dependence analysis, the goal is to determine, for each λ -term, the set
of resources which it may read or write when it is called. Might and Prabhu (2009) showed
that if one has knowledge of the program stack, then one can uncover interprocedural
dependencies. We can adapt that technique to work with compact rooted pushdown systems. For each control state, find the set of reachable control states along only ε-edges and
pop-edges. The frames on the pop-edges determine the frames which could have been on
the stack when in the control state. The frames that are live on the stack determine the
procedures that are live on the stack. Every procedure that is live on the stack has a read-dependence on any resource being read in the control state, while every procedure that is
live on the stack also has a write-dependence on any resource being written in the control
state. In control-flow terms, this translates to “if f calls g and g accesses a, then f also
accesses a.”
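A sketch of this adaptation in Scala follows, under the simplifying assumptions that pop edges are labelled with the λ-term whose frame they pop and that resource accesses per control state are given; all names here are illustrative.

```scala
object DependenceSketch {
  type Node = String  // a control state of the compact rooted pushdown system (placeholder)
  type Lam  = String  // the λ-term associated with a popped frame (placeholder)
  type Res  = String  // an abstract resource (placeholder)

  // Forward ε-successors and pop-successors (labelled with the popped frame's λ-term).
  final case class Graph(epsSucc: Map[Node, Set[Node]],
                         popSucc: Map[Node, Set[(Lam, Node)]])

  // λ-terms live on the stack at `q`: every frame popped on some ε/pop path from q.
  def liveOnStack(g: Graph, q: Node): Set[Lam] = {
    @annotation.tailrec
    def loop(todo: List[Node], seen: Set[Node], acc: Set[Lam]): Set[Lam] = todo match {
      case Nil                  => acc
      case n :: rest if seen(n) => loop(rest, seen, acc)
      case n :: rest =>
        val pops = g.popSucc.getOrElse(n, Set.empty[(Lam, Node)])
        loop(rest ++ g.epsSucc.getOrElse(n, Set.empty[Node]) ++ pops.map(_._2),
             seen + n, acc ++ pops.map(_._1))
    }
    loop(List(q), Set.empty, Set.empty)
  }

  // "If f calls g and g accesses a, then f also accesses a": every procedure live
  // on the stack at q inherits a dependence on the resources accessed at q.
  def dependences(g: Graph, accessesAt: Map[Node, Set[Res]]): Map[Lam, Set[Res]] =
    accessesAt.foldLeft(Map.empty[Lam, Set[Res]]) { case (acc, (q, rs)) =>
      liveOnStack(g, q).foldLeft(acc) { (m, f) =>
        m.updated(f, m.getOrElse(f, Set.empty[Res]) ++ rs)
      }
    }
}
```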
16 Related Work
The Scheme workshop presentation of PDCFA (Earl et al. 2010) is not archival, nor were
there rigorous proofs of correctness. The complete development of pushdown analysis from
first principles stands as a new contribution, and it constitutes an alternative to CFA2. It
goes beyond work on CFA2 by specifying concrete mechanisms for reducing the complexity to polynomial time (O(n⁶)) as well. Vardoulakis (2012) sketches an approach to regain
polynomial time in his dissertation, but does not give a precise bound. An immediate advantage of the complete development is its exposure of parameters for controlling polyvariance and context-sensitivity. An earlier version of this work appeared in ICFP 2012 (Earl
et al. 2012). We also provide a reference implementation of control-state reachability in
Haskell. We felt this was necessary to shine a light on the “dark corners” in the formalism,
and in fact, it helped expose both bugs and implicit design decisions that were reflected
in the revamped text of this work. The development of introspective pushdown systems is
also more complete and more rigorous. We expose the critical regularity constraint absent
from the ICFP 2012 work, and we specify the implementation of control-state reachability
and feasible paths for conditional pushdown systems in greater detail. More importantly,
this work uses additional techniques to improve the performance of the implementation
and discusses those changes.
Garbage-collecting pushdown control-flow analysis draws on work in higher-order control-flow analysis (Shivers 1991), abstract machines (Felleisen and Friedman 1987) and abstract
interpretation (Cousot and Cousot 1977).
Context-free analysis of higher-order programs   The motivating work for our own is Vardoulakis and Shivers’s recent discovery of CFA2. CFA2 is a table-driven summarization algorithm that exploits the balanced nature of calls and returns to improve return-flow precision in a control-flow analysis. Though CFA2 exploits context-free languages, context-free languages are not explicit in its formulation in the same way that pushdown systems are explicit in our presentation of pushdown flow analysis. With respect to CFA2, our pushdown flow analysis is also polyvariant/context-sensitive (whereas CFA2 is monovariant/context-insensitive), and it covers direct style.
On the other hand, CFA2 distinguishes stack-allocated and store-allocated variable bindings, whereas our formulation of pushdown control-flow analysis does not: it allocates all
bindings in the store. If CFA2 determines a binding can be allocated on the stack, that
binding will enjoy added precision during the analysis and is not subject to merging like
store-allocated bindings. While we could incorporate such a feature in our formulation, it
is not necessary for achieving “pushdownness,” and in fact, it could be added to classical
finite-state CFAs as well.
CFA2 has a follow-up that sacrifices its complete abstraction with respect to the machine that abstracts only bindings, in order to handle first-class control (Vardoulakis and Shivers 2011). We do not have an analogous construction, since loss of complete abstraction was an anti-goal of this work. We leave an in-depth study of generalizations of CFA2’s method to introspection, polyvariance and other control operators to future work.
Calculation approach to abstract interpretation   Midtgaard and Jensen (2009) systematically calculate 0CFA using the Cousot-style calculational approach to abstract interpretation (Cousot 1999) applied to an ANF λ-calculus. Like the present work, Midtgaard and Jensen start with the CESK machine of Flanagan et al. (1993) and employ a reachable-states model.
The analysis is then constructed by composing well-known Galois connections to reveal a 0CFA incorporating reachability. The abstract semantics approximate the control-stack component of the machine by its top element. The authors remark that monomorphism materializes in two mappings: one “mapping all bindings to the same variable,” the other “merging all calling contexts of the same function.” Essentially, the pushdown 0CFA of Section 4 corresponds to Midtgaard and Jensen’s analysis when the latter mapping is omitted and the stack component of the machine is not abstracted. However, not abstracting the stack requires non-trivial mechanisms to compute the compaction of the pushdown system.
CFL- and pushdown-reachability techniques   This work also draws on CFL- and pushdown-reachability analysis (Bouajjani et al. 1997; Kodumal and Aiken 2004; Reps 1998; Reps et al. 2005). For instance, ε-closure graphs, or equivalent variants thereof, appear in many context-free-language and pushdown reachability algorithms. For our analysis, we implicitly invoked these methods as subroutines. When we found these algorithms lacking (as with their enumeration of control states), we developed rooted pushdown system compaction.
CFL-reachability techniques have also been used to compute classical finite-state abstraction CFAs (Melski and Reps 2000) and type-based polymorphic control-flow analysis (Rehof and Fähndrich 2001). These analyses should not be confused with pushdown control-flow analysis, which computes a fundamentally more precise kind of CFA.
Moreover, Rehof and Fähndrich’s method is cubic in the size of the typed program, but the
types may be exponential in the size of the program. Finally, our technique is not restricted
to typed programs.
Model-checking pushdown systems with checkpoints   A pushdown system with checkpoints has designated finite automata for state/frame pairs. If the system is in a given state/frame configuration and the associated automaton accepts the current stack, then execution continues. This model was first introduced by Esparza et al. (2003), who describe its applications to model-checking programs that use Java’s AccessController class and to performing better data-flow analysis of Lisp programs with dynamic scope, though the specific applications are not fully explored. The algorithm described in that paper is similar to ours, but it is not “on-the-fly,” so such applications would be difficult to realize with their methods. The algorithm has multiple loops that enumerate all transitions within the pushdown system under consideration. Again, this is a non-starter for higher-order languages, since up-front enumeration would conservatively suggest that any binding called would resolve to any possible function. This strategy is a sure-fire way to destroy precision and performance.
Meet-over-all-paths for conditional weighted pushdown systems A conditional pushdown system is essentially a pushdown system in which every state/frame pair is a check-
point. The two are easily interchangeable, but weighted conditional pushdown systems assign weights to reduction rules from a bounded idempotent semiring in the same manner as Reps et al. (2005). The work that introduces CWPDSs (Li and Ogawa 2010) uses them for points-to analysis for Java. They solve the meet-over-all-paths problem by incrementally translating a skeleton CFG into a WPDS and using WPDS++ (Lal and Reps 2006) to discover more points-to information to fill in call/return edges. The translation involves a heavy encoding and is not obviously correct. The killer for its use for GC is that it involves building the product automaton of all the (minimized) condition automata for the system and interleaving the system states with the automaton’s states; there are exponentially many such machines in our case, and even though the overall solution is incremental, this large automaton is pre-built. It is not obvious how to incrementalize the whole construction, nor is it obvious that the precision and performance are not negatively impacted by the repeated invocation of the WPDS solver (as opposed to a work-set solution that only considers recently changed states).
The approach to incremental solving using first-order tools is an interesting one that we had not considered. Perhaps first-order and higher-order methods are not too far removed. It is possible that these frameworks could be extended to request transitions, or even checkpoint machines, on demand in order to better support higher-order languages. As we saw in this article, however, we needed access to internal data structures to compute root sets of addresses, and the ability to update a cache of such sets in these structures. The marriage could be rocky, but it is worth exploring in order to unite the two communities and share technologies.
Model-checking higher-order recursion schemes There is terminology overlap with
work by Kobayashi (2009) on model-checking higher-order programs with higher-order
recursion schemes, which are a generalization of context-free grammars in which productions can take higher-order arguments, so that an order-0 scheme is a context-free grammar.
Kobayashi exploits a result of Ong (2006), which shows that model-checking these recursion schemes is decidable (but ELEMENTARY-complete), by transforming higher-order programs into higher-order recursion schemes.
Given the generality of model-checking, Kobayashi’s technique may be considered an
alternate paradigm for the analysis of higher-order programs. For the case of order-0, both
Kobayashi’s technique and our own involve context-free languages, though ours is for
control-flow analysis and his is for model-checking with respect to a temporal logic. After
these surface similarities, the techniques diverge. In particular, higher-order recursion schemes are limited to model-checking programs in the simply-typed lambda-calculus with
recursion.
17 Conclusion
Our motivation was to further probe the limits of decidability for pushdown flow analysis
of higher-order programs by enriching it with abstract garbage collection. We found that
abstract garbage collection broke the pushdown model, but not irreparably so. By casting
abstract garbage collection in terms of an introspective pushdown system and synthesizing
a new control-state reachability algorithm, we have demonstrated the decidability of fusing
two powerful analytic techniques.
As a byproduct of our formulation, it was also easy to demonstrate how polyvariant/context-sensitive flow analyses generalize to a pushdown formulation, and we lifted the need to transform to continuation-passing style in order to perform pushdown analysis.
Our empirical evaluation is highly encouraging: it shows that the fused analysis provides further large reductions in the size of the abstract transition graph, a key metric for interprocedural control-flow precision. And, in terms of singleton flow sets, a heuristic metric for optimizability, the fused analysis proves to be a “better-than-both-worlds”
combination.
Thus, we provide a sound, precise and polyvariant introspective pushdown analysis for
higher-order programs.
Acknowledgments
We thank the anonymous reviewers of ICFP 2012 and JFP for their detailed reviews, which
helped to improve the presentation and technical content of the paper. Tim Smith was
especially helpful with his knowledge of stack automata. This material is based on research
sponsored by DARPA under the programs Automated Program Analysis for Cybersecurity (FA8750-12-2-0106) and Clean-Slate Resilient Adaptive Hosts (CRASH). The U.S.
Government is authorized to reproduce and distribute reprints for Governmental purposes
notwithstanding any copyright notation thereon.
References
Bouajjani, A., J. Esparza, and O. Maler (1997). Reachability analysis of pushdown
automata: Application to Model-Checking. In Proceedings of the 8th International
Conference on Concurrency Theory, CONCUR ’97, pp. 135–150. Springer-Verlag.
Cousot, P. (1999). The calculational design of a generic abstract interpreter. In M. Broy and
R. Steinbrüggen (Eds.), Calculational System Design. NATO ASI Series F. IOS Press,
Amsterdam.
Cousot, P. and R. Cousot (1977). Abstract interpretation: A unified lattice model for static
analysis of programs by construction or approximation of fixpoints. In Conference
Record of the Fourth ACM Symposium on Principles of Programming Languages, pp.
238–252. ACM Press.
Earl, C., M. Might, and D. Van Horn (2010). Pushdown Control-Flow analysis of Higher-Order programs. In Workshop on Scheme and Functional Programming.
Earl, C., I. Sergey, M. Might, and D. Van Horn (2012). Introspective pushdown analysis
of higher-order programs. In Proceedings of the 17th ACM SIGPLAN International
Conference on Functional Programming (ICFP 2012), ICFP ’12, pp. 177–188. ACM.
Esparza, J., A. Kucera, and S. Schwoon (2003). Model checking LTL with regular
valuations for pushdown systems. Inf. Comput. 186(2), 355–376.
Felleisen, M. and D. P. Friedman (1987). A calculus for assignments in higher-order
languages. In POPL ’87: Proceedings of the 14th ACM SIGACT-SIGPLAN Symposium
on Principles of Programming Languages, pp. 314+. ACM.
Flanagan, C., A. Sabry, B. F. Duba, and M. Felleisen (1993, June). The essence of
compiling with continuations. In PLDI ’93: Proceedings of the ACM SIGPLAN 1993
Conference on Programming Language Design and Implementation, pp. 237–247.
ACM.
Ginsburg, S., S. A. Greibach, and M. A. Harrison (1967). One-way stack automata. Journal
of the ACM 14(2), 389–418.
Johnson, J. I. and D. Van Horn (2013). Concrete semantics for pushdown analysis: The
essence of summarization. In HOPA 2013: Workshop on higher-order program analysis.
Kobayashi, N. (2009, January). Types and higher-order recursion schemes for verification
of higher-order programs. SIGPLAN Not. 44(1), 416–428.
Kodumal, J. and A. Aiken (2004, June). The set constraint/CFL reachability connection
in practice. In PLDI ’04: Proceedings of the ACM SIGPLAN 2004 Conference on
Programming Language Design and Implementation, pp. 207–218. ACM.
Lal, A. and T. W. Reps (2006). Improving pushdown system model checking. In T. Ball
and R. B. Jones (Eds.), CAV, Volume 4144 of Lecture Notes in Computer Science, pp.
343–357. Springer.
Li, X. and M. Ogawa (2010). Conditional weighted pushdown systems and applications.
In J. P. Gallagher and J. Voigtländer (Eds.), PEPM, pp. 141–150. ACM.
Melski, D. and T. W. Reps (2000, October). Interconvertibility of a class of set constraints
and context-free-language reachability. Theoretical Computer Science 248(1-2), 29–98.
Midtgaard, J. (2007). Transformation, Analysis, and Interpretation of Higher-Order
Procedural Programs. Ph. D. thesis, University of Aarhus.
Midtgaard, J. and T. P. Jensen (2009). Control-flow analysis of function calls and returns
by abstract interpretation. In ICFP ’09: Proceedings of the 14th ACM SIGPLAN
International Conference on Functional Programming, pp. 287–298. ACM.
Might, M. (2007, June). Environment Analysis of Higher-Order Languages. Ph. D. thesis,
Georgia Institute of Technology.
Might, M., B. Chambers, and O. Shivers (2007, January). Model checking via Gamma-CFA. In Verification, Model Checking, and Abstract Interpretation, pp. 59–73.
Might, M., D. Darais, and D. Spiewak (2011). Parsing with derivatives: a functional
pearl. In ICFP ’11: Proceeding of the 16th ACM SIGPLAN international conference
on Functional Programming, pp. 189–195. ACM.
Might, M. and P. Manolios (2009). A posteriori soundness for non-deterministic abstract
interpretations. In Proceedings of the 10th International Conference on Verification,
Model Checking, and Abstract Interpretation, VMCAI ’09, pp. 260–274. Springer-Verlag.
Might, M. and T. Prabhu (2009). Interprocedural dependence analysis of higher-order
programs via stack reachability. In Proceedings of the 2009 Workshop on Scheme and
Functional Programming.
Might, M. and O. Shivers (2006a). Environment analysis via Delta-CFA. In Conference
Record of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming
Languages (POPL 2006), pp. 127–140. ACM.
Might, M. and O. Shivers (2006b). Improving flow analyses via Gamma-CFA: Abstract
garbage collection and counting.
In Proceedings of the 11th ACM SIGPLAN
International Conference on Functional Programming (ICFP 2006), pp. 13–25. ACM.
Might, M., Y. Smaragdakis, and D. Van Horn (2010). Resolving and exploiting the k-CFA paradox: Illuminating functional vs. object-oriented program analysis. In PLDI
’10: Proceedings of the 2010 ACM SIGPLAN Conference on Programming Language
Design and Implementation, PLDI ’10, pp. 305–315. ACM Press.
Ong, C. H. L. (2006). On Model-Checking trees generated by Higher-Order recursion
schemes. In 21st Annual IEEE Symposium on Logic in Computer Science (LICS’06),
LICS, pp. 81–90. IEEE.
Owens, S., J. Reppy, and A. Turon (2009). Regular-expression derivatives re-examined.
Journal of Functional Programming 19(02), 173–190.
Rehof, J. and M. Fähndrich (2001). Type-based flow analysis: From polymorphic
subtyping to CFL-reachability. In POPL ’01: Proceedings of the 28th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 54–66. ACM.
Reps, T. (1998, December). Program analysis via graph reachability. Information and
Software Technology 40(11-12), 701–726.
Reps, T., S. Schwoon, S. Jha, and D. Melski (2005, October). Weighted pushdown
systems and their application to interprocedural dataflow analysis. Science of Computer
Programming 58(1-2), 206–263.
Rounds, W. C. (1973). Complexity of recognition in intermediate level languages. In
Switching and Automata Theory, 1973. SWAT ’08. IEEE Conference Record of 14th
Annual Symposium on, pp. 145–158.
Shivers, O. G. (1991). Control-Flow Analysis of Higher-Order Languages. Ph. D. thesis,
Carnegie Mellon University.
Sipser, M. (2005, February). Introduction to the Theory of Computation (2 ed.). Cengage
Learning.
Van Horn, D. and H. G. Mairson (2008). Deciding kCFA is complete for EXPTIME.
In ICFP ’08: Proceeding of the 13th ACM SIGPLAN International Conference on
Functional Programming, pp. 275–282. ACM.
Van Horn, D. and M. Might (2012). Systematic abstraction of abstract machines. Journal
of Functional Programming 22(Special Issue 4-5), 705–746.
Vardoulakis, D. (2012). CFA2: Pushdown Flow Analysis for Higher-Order Languages. Ph.
D. thesis, Northeastern University.
Vardoulakis, D. and O. Shivers (2010). CFA2: A Context-Free approach to ControlFlow analysis. In A. D. Gordon (Ed.), Programming Languages and Systems, Volume
6012 of Lecture Notes in Computer Science, Chapter 30, pp. 570–589. Springer Berlin
Heidelberg.
Vardoulakis, D. and O. Shivers (2011). Pushdown flow analysis of first-class control.
In Proceedings of the 16th ACM SIGPLAN International Conference on Functional
Programming, ICFP ’11, pp. 69–80. ACM.
Wright, A. K. and S. Jagannathan (1998, January). Polymorphic splitting: An effective
polyvariant flow analysis. ACM Transactions on Programming Languages and
Systems 20(1), 166–207.
18 Full Proofs
18.1 Pushdown reachability
Proof of Theorem 8.1. The space ICRPDS is further constrained than stated in the main article:
ICRPDS = { ((S, E), H, (∆S, ∆E, ∆H)) : ⋃{ {s, s′} : s ⇝ s′ ∈ H } ⊆ S, ∆S ∩ S = ∅, ∆E ∩ E = ∅, and ∆H ∩ H = ∅ }
For this section we assume
M = (Q, Γ, δ, q₀) ∈ RPDS
G = ((S, E), H, (∆S, ∆E, ∆H)) ∈ CRPDS, where (S, E) ⊆ (Q, δ) and q₀ = s₀.
Let ORD be the class of ordinal numbers. We define a termination measure d : CRPDS → ORD on the fixed-point computation of F′(M):
d((S, E), H, (∆S, ∆E, ∆H)) = (2^{|Q|²·|Γ|} − |E|)·ω + (2^{|Q|²} − |H|)
Lemma 18.1 (Termination)
Either G = F′(M)(G) or d(F′(M)(G)) ≺ d(G)
Proof
If both ∆E and ∆H are empty, there are no additions made to S, E or H, meaning G is a
fixed point. Otherwise, due to the non-overlap condition, one or both of E and H grow,
meaning the ordinal is smaller.
A corollary is that the fixed-point has empty ∆E and ∆H.
Lemma 18.2 (Key lemma for PDS reachability)
If inv(G) then inv(F′(M)(G)).
Proof
All additional states and edges come from ∆Eᵢ and ∆Hᵢ for i ∈ [0..4], so by cases on the sources of edges:
Case s ⟶_g s′ ∈ ∆E₀, s″ ⇝ s‴ ∈ ∆H₀.
By definition of sprout and path extension.
Case s ⟶_g s′ ∈ ∆E₁, s″ ⇝ s‴ ∈ ∆H₁.
If g ≡ φ̂₋, then by definition of addPush there are q ⟶_{φ̂₊} q′ ∈ ∆E and q′ ⇝ s ∈ H such that (s, φ̂₋, s′) ∈ δ.
Let ~g be the witness of the invariant on q′ ⇝ s given by the definition of inv. Let κ̂ be arbitrary. We have [φ̂₊~gφ̂₋] = ε. We also have (q, κ̂) ↦→*_M^{φ̂₊~gφ̂₋} (s′, κ̂). Root reachability follows from path concatenation with the root path to (q, κ̂) and the step (q, κ̂) ↦→_M^{φ̂₊} (q′, φ̂ κ̂) from inv.
The balanced path for s″ ⇝ s‴ comes from a similar push edge from ∆E and concatenation with the path from the invariant on H.
Case s″ ⇝ s‴ ∈ ∆H₂.
By definition of addPop, ∆E₂ = ∅ and there are q ⟶_{φ̂₋} s‴ ∈ ∆E and q′ ⇝ q ∈ H such that s″ ⟶_{φ̂₊} q′ ∈ E. Let ~g be the witness of the invariant on q′ ⇝ q. Let κ̂ be arbitrary. We know by the invariant on E, (s″, κ̂) ↦→*_M^{φ̂₊~gφ̂₋} (s‴, κ̂) and [φ̂₊~gφ̂₋] = ε.
Case s ⟶_g s′ ∈ ∆E₃ ∪ ∆E₄, s″ ⇝ s‴ ∈ ∆H₃ ∪ ∆H₄.
Follows from the definition of inv and path concatenation, following similar reasoning as the above cases.
We define “π is a subtrace of π′,” written π ⊑ π′, as the least relation closed under the following rules:
(1) (s′, κ̂′) ↦→*_M^{⟨⟩} (s′, κ̂′) ⊑ (s, κ̂) ↦→*_M^{~g} (s′, κ̂′)
(2) if π ⊑ (s, κ̂) ↦→*_M^{~g} (s′, κ̂′) and (s′, g′, s″) ∈ δ, then π ⊑ (s, κ̂) ↦→*_M^{~g} (s′, κ̂′) ↦→_M^{g′} (s″, [κ̂′+ g′])
(3) if (s, κ̂) ↦→*_M^{~g} (s′, κ̂′) ⊑ (s‴, κ̂″) ↦→*_M^{~g″} (s′, κ̂′) and (s′, g′, s″) ∈ δ, then (s, κ̂) ↦→*_M^{~g} (s′, κ̂′) ↦→_M^{g′} (s″, [κ̂′+ g′]) ⊑ (s‴, κ̂″) ↦→*_M^{~g″} (s′, κ̂′) ↦→_M^{g′} (s″, [κ̂′+ g′])
That is, the empty trace at the endpoint of a trace is a subtrace of it; a subtrace is preserved when the enclosing trace is extended by a step of δ; and a subtrace ending at the endpoint of the enclosing trace may be extended along the same step.
Theorem 8.1 is a corollary of the following theorem.
Theorem 18.1
lfp(F′(M)) = (C(M), ECG(M), (∅, ∅, ∅))
Proof
(⊆): Directly from lemma 18.2.
(⊇): Let π ≡ (s₀, ⟨⟩) ↦→*_M^{~g} (s, κ̂) be an arbitrary path in C(M) (the inclusion of the root is not a restriction due to the definition of CRPDSs). Let n ∈ Nats be such that lfp(F′(M)) = F′(M)ⁿ. We show
• the same path goes through G,
• for each s ∈ S, s ⟶_g s′ ∈ E, s ⇝ s′ ∈ H, there is an m < n such that s ∈ ∆S_m, s ⟶_g s′ ∈ ∆E_m, s ⇝ s′ ∈ ∆H_m respectively, where F′(M)^m = (G_m, H_m, (∆S_m, ∆E_m, ∆H_m)), and
• all non-empty balanced subtraces have edges in H: for all (s_b, κ̂) ↦→*_M^{~gε} (s_a, κ̂) ⊑ π, ~gε ≠ ⟨⟩ ∧ [~gε] = ε ⟹ s_b ⇝ s_a ∈ H.
By induction on π,
Case Base: s₀.
Follows by definition of F′. No non-empty balanced subtrace.
Case Induction step: (s₀, ⟨⟩) ↦→*_M^{~g′} (s′, κ̂) ↦→_M^{g″} (s, [κ̂+ g″]).
By the IH, (s₀, ⟨⟩) ↦→*_G^{~g′} (s′, κ̂). By cases on g″:
Case γ₊.
Let m be the witness for s′ given by the IH. By definition of F′, (s′, κ̂) ↦→_M^{g″} (s, [κ̂+ g″]) is in ∆E_{m+1} and E_{m+2} (and thus s ∈ ∆S_{m+1} and S_{m+2}). Thus the path is constructible through G_n. All balanced subtraces carry over from the IH, since the last push edge cannot end a balanced path.
Case ε.
The path is constructible the same as for γ₊. Let m be the witness used in the path construction. Let π′ ≡ (s_b, κ̂) ↦→*_M^{~gε} (s_e, κ̂) be an arbitrary non-empty balanced subtrace. If s_e ≠ s, then the IH handles it. Otherwise, ~gε = ~g′ε ε. If s_b = s′, then the ε-edge is added by sprout (so the witness number is m + 1). If not, then there is a balanced subtrace (s_b, κ̂) ↦→*_M^{~g′ε} (s′, κ̂), thus s_b ⇝ s′ ∈ H. Let m′ be the witness for s_b ⇝ s′ ∈ ∆H_{m′}. Then s_b ⇝ s ∈ ∆H_{max{m,m′}+1} by definition of addEmpty.
Case γ₋.
Since [~g] is defined, there is a push edge in the trace (call it s_u ↦→_M^{γ₊} s_v) with a (possibly empty) balanced subtrace following it to s′. Thus by the IH, there are some m, m′ such that s_u ⟶_{γ₊} s_v ∈ E_m and (if the subtrace is non-empty) s_v ⇝ s′ ∈ H_{m′}. If m ≥ m′, by definition of addPush, s′ ⟶_{γ₋} s ∈ ∆E_{m+1}. Otherwise, the edge is in E_{m′} and by definition of addEmpty, s′ ⟶_{γ₋} s ∈ ∆E_{m′+1}.
Let π′ ≡ (s_b, κ̂) ↦→*_M^{~gε} (s_e, κ̂) be an arbitrary non-empty balanced subtrace. If s_e ≠ s, the IH handles it. Otherwise, ~gε = ~g′ε γ₊ ~g″ε γ₋ and π′ ≡ (s_b, κ̂) ↦→*_M^{~g′ε} (s_u, κ̂) ↦→_M^{γ₊} (s_v, γ κ̂) ↦→*_M^{~g″ε} (s′, γ κ̂) ↦→_M^{γ₋} (s, κ̂). The edge s_u ⇝ s is added to ∆H_{max{m,m′}+1} and thus s_b ⇝ s is in H_{max{m,m′}+3}.
18.2 RIPDS reachability
We use the metafunction •++• : Cont × Cont → Cont to aid the proofs:
ε ++ κ̂ = κ̂
(φ : κ̂) ++ κ̂′ = φ : (κ̂ ++ κ̂′)
split(ε) = [¢, ε]
split(φ : κ̂) = [¢κ̂, φ]
Lemma 18.3 (Down spin)
For (q, ε, ε, ↓, ε, q) ∈ δ, (q, [κ̂_B++κ̂′_B, κ̂_T], w) ↦→* (q, [κ̂_B, κ̂′_B++κ̂_T], w)
Proof
By induction on κ̂′_B.
Case Base: κ̂′_B = ε.
Reflexivity.
Case Induction step: κ̂′_B = κ̂φ.
By δ, (q, [κ̂_B++κ̂′_B, κ̂_T], w) ↦→ (q, [κ̂_B++κ̂, φ κ̂_T], w). By the IH, (q, [κ̂_B++κ̂, φ κ̂_T], w) ↦→* (q, [κ̂_B, κ̂++φ κ̂_T], w). This final configuration is the same as (q, [κ̂_B, κ̂′_B++κ̂_T], w).
Lemma 18.4 (gadget correctness)
For (δ, S) = gadget(s, K̂, g, s′), (s, split(κ̂), w) ↦→*_δ (s′, split([κ̂+ g]), w) iff κ̂ ∈ K̂ and [κ̂+ g] is defined.
Proof
(⇒): By inversion on the rules for δ, the path must go through three stages: the down-spin, the middle path, and the pop-off. By 18.3, (s, split(κ̂), w) ↦→ (q_down, [¢κ̂, $], w) ↦→* (q_down, [ε, ¢κ̂$], w). Then the (q_down, ¢, ε, ↑, ε, q₀) rule must apply. We can construct an accepting path in the machine recognizing K̂ from the middle path via the following lemma: (q₀, [¢, κ̂$], w) ↦→*_δ (q, [¢κ̂′, κ̂″$], w) implies (q₀, κ̂) ↦→*_N^{κ̂′} (q, κ̂″). Proof by induction. Then (q, $, ε, ·, $₋, q_out) must apply, and then (q_out, ε, ε, ·, g, s′) must apply, meaning that [κ̂+ g] is defined.
(⇐): Since K̂ is regular, there must be a path in the chosen NFA N = (Q, Σ, δ_N, q₀, F) from q₀ to a final state q ∈ F, (q₀, κ̂) ↦→*_N (q, ε).
In the first stage, (s, split(κ̂), w) ↦→* (q₀, [¢, κ̂$], w). This follows first by the (s, ε, ε, ·, $₊, q_down) transition, then by 18.3, (q_down, split(κ̂$), w) ↦→* (q_down, [ε, ¢κ̂$], w), and finally by the (q_down, ¢, ε, ↑, ε, q₀) rule.
In the second stage we construct a path (q₀, [¢, κ̂$], w) ↦→* (q, [¢κ̂, $], w) from an accepting path in N: (q₀, κ̂) ↦→*_N (q, ε) where q ∈ F. The statement we can induct on to get this is: (q₀, κ̂) ↦→*_N^{κ̂′} (q, κ̂″) implies (q₀, [¢, κ̂$], w) ↦→*_δ (q, [¢κ̂′, κ̂″$], w).
Case Base: κ̂′ = ε, q = q₀, κ̂″ = κ̂.
Reflexivity.
Case Induction step: κ̂′ = κ̂‴φ_ε, (q₀, κ̂) ↦→*_N^{κ̂‴} (q′, κ̂⁗) ↦→_N^{φ_ε} (q, κ̂″).
By the IH, (q₀, [¢, κ̂$], w) ↦→* (q′, [¢κ̂‴, κ̂⁗$], w). If φ_ε = ε, then κ̂‴ = κ̂′, κ̂⁗ = κ̂″ and we apply the (q′, ε, ε, ·, ε, q) rule to get to (q, [¢κ̂′, κ̂″$], w). Otherwise, κ̂′ = κ̂‴φ and we apply the (q′, φ, ε, ↑, ε, q) rule to get to (q, [¢κ̂′, κ̂″$], w).
In the third and final stage, (q_out, [¢κ̂, $], w) ↦→ (q_out, split(κ̂), w) and since [κ̂+ g] is defined, we reach the final state by (q_out, ε, ε, ·, g, s′).
Lemma 18.5 (Checking lemma)
If (q, a, a, ↑, ε, q) ∈ δ and (q, [κ̂_B, κ̂′_T++κ̂_T$], w) ↦→* (q, [κ̂_B++κ̂′_T, κ̂_T$], w′) (through the one rule) then w = κ̂′_T w′.
Proof
Simple induction.
Lemma 18.6 (Stack machine correctness)
For all M ∈ CPDS, G ∈ CCPDS, q ∈ G, if G ⊑ CC(M) then
L(Stacks(G)(q)) = { κ̂ : (q₀, ⟨⟩) ↦→*_G^{K̂,g} (q, κ̂) }.
Proof
(⊆): Let (sstart , [ε, ε], κ̂) 7−→∗ (sfinal , split(κ̂), ε) be an accepting path for κ̂ ∈ L (Stacks(G)(q)).
We inductively construct a corresponding path in G that realizes κ̂. We first see that the
given path is split into three phases: setup, gadgetry, checking. The first step must be
(sstart , ε, ε, ·, ¢+ , s0 ), which we call setup. The only final state must be preceded by scheck ,
sdown , and the final occurrence of s, which we call checking. Thus the middle phase is a
trace from s0 to s. This must be through gadgets, which are disjoint for each rule of the
IPDS, and thus each edge in G.
(sstart , [ε, ε], κ̂) 7−→ (s0 , [ε, ¢], κ̂) 7−→∗ (s, split(κ̂), κ̂) 7−→
(sdown , [¢κ̂, $], κ̂) 7−→∗ (sdown , [ε, ¢κ̂$], κ̂) 7−→
(scheck , [¢, κ̂$], κ̂) 7−→∗ (scheck , [¢κ̂, $], ε) 7−→ (sfinal , split(κ̂), ε)
We induct on the path through gadgets, s0 to s in the above path, invoking 18.4 at each step.
(⊇): Simple induction between setup and teardown, applying 18.4.
Proof of Theorem 12.1
Proof
The finiteness of the state space and the monotonicity of F ensure that the least fixed point exists. lfp(F(M)) ⊆ CC(M) follows from lemma 18.6 and the definition of F.
To prove CC(M) ⊆ lfp(F(M)), suppose not. Then there must be a path (s₀, ⟨⟩) ↦→*_M^{K̂,g} (s, κ̂) ↦→_M^{K̂′,g′} (s′, [κ̂+ g′]) where the final edge is the first edge not in lfp(F(M)). By definition of CC, κ̂ ∈ K̂′ and (s, K̂′, g′, s′) ∈ δ. Since κ̂ is realizable at s in G, by definition of F and lemma 18.6, (s, κ̂) ↦→_G^{K̂′,g′} (s′, [κ̂+ g′]), contra the assumption. Thus CC(M) ⊆ lfp(F(M)) holds by contradiction.
We first prove an invariant of f̂ : Exp → System̂_Γ →^{mon} System̂_Γ, where System̂_Γ = ICRPDS × P(ÔPState × ÔPState):
I_Γ(e) = ((e, ⊥, ⊥), ⟨⟩)
I′_Γ(e) = (((e, ⊥, ⊥), ∅), ⟨⟩)
⟨φ̂₁, . . . , φ̂ₙ⟩₊ = φ̂₁₊ . . . φ̂ₙ₊
inv_Γ : Exp → System̂_Γ → Prop
inv_Γ(e)((P̂, Ê), Ĥ) =
( P̂ = ⋃{ Ω̂, Ω̂′ : Ω̂ ⟶_g Ω̂′ ∈ Ê }
∧ ∀(ψ̂, A) ⟶_g (ψ̂′, A′) ∈ Ê. let K̂ = {κ̂ : StackRoot(κ̂) = A} in
∀κ̂ ∈ {κ̂ ∈ K̂ : [κ̂+ g] defined}. StackRoot([κ̂+ g]) = A′ ∧ (ψ̂, κ̂) ↦→_M^{K̂,g} (ψ̂′, [κ̂+ g])
∧ ∀(ψ̂, _) ⇝ (ψ̂′, _) ∈ Ĥ. ∃ K̂, ~g. [~g] = ε ∧ (ψ̂, ⟨⟩) ↦→*_M^{K̂,~g} (ψ̂′, ⟨⟩) )
where M = Î_PDS(e)
Lemma 18.7 ( fˆ invariant)
For all e, if invΓ (e)(G) then invΓ (e)( fˆe (G))
Proof
Same structure as in Lemma 18.2 without reasoning about worklists.
Proof of Theorem 12.2
Proof
Let M = Î_PDS(e), G = CC(M) and G′ = ((P̂, Ê), Ĥ) = lfp(f̂_e).
(G′ approximates G):
We strengthen the statement to: π ≡ I_Γ(e) ↦→*_G^{K̂,g} (ψ̂, κ̂) implies
• I′_Γ(e) ↦→*_{G′}^{~g} ((ψ̂, StackRoot(κ̂)), κ̂), and
• for all (ψ̂, κ̂) ↦→*_G^{K̂,g} (ψ̂′, [κ̂+ ~g]) ↦→_G^{K̂′,g′} (ψ̂″, [κ̂+ ~g g′]) ⊑ π, if [~g g′] = ε, then ∃κ̂ ∈ K̂ and (ψ̂, StackRoot(κ̂)) ⇝ (ψ̂″, StackRoot([κ̂+ ~g g′])) ∈ Ĥ.
By induction on π,
Case Base: I_Γ(e).
By definition of f̂_e, I′_Γ(e) = (Ω̂₀, ⟨⟩), Ω̂₀ ∈ P̂. The first goal holds by definition of StackRoot(⟨⟩) and reflexivity. The second goal is vacuously true.
Case Induction step: ((e, ⊥, ⊥), ⟨⟩) ↦→*_G^{K̂′,g′} (ψ̂′, [~g′]) ↦→_G^{K̂″,g″} (ψ̂, κ̂).
Let A = StackRoot([~g′]). By the IH, I′_Γ(e) ↦→*_{G′}^{~g′} ((ψ̂′, A), [~g′]).
Let K̂_root = {κ̂ : StackRoot(κ̂) = A}. By definition of Î_PDS and the case assumption, (ψ̂′, K̂_root, g″, ψ̂) ∈ δ̂. By cases on (ψ̂′, [~g′]) ↦→_G^{K̂″,g″} (ψ̂, κ̂):
Case (ψ̂′, [~g′]) ↦→_G^{K̂″,φ̂₊} (ψ̂, κ̂).
By definition of f̂, (ψ̂′, A) ⟶_{φ̂₊} (ψ̂, A ∪ T(φ̂₊)) ∈ G′. By definition of StackRoot and A, StackRoot([~g′ g″]) = StackRoot([~g]) = StackRoot(κ̂).
Case (ψ̂′, [~g′]) ↦→_G^{K̂″,ε} (ψ̂, κ̂).
Similar to the previous case.
Case (ψ̂′, [~g′]) ↦→_G^{K̂″,φ̂₋} (ψ̂, κ̂).
Since [~g′ φ̂₋] is defined, there is an i such that gᵢ = φ̂₊, which is witness to an edge in the trace with that action, ψ̂_b ↦→_G^{K̂‴,φ̂₊} ψ̂_e. By definition of [·], the actions from ψ̂_e to ψ̂′ cancel to ε, meaning by the IH (ψ̂_e, A) ⇝ (ψ̂′, A) ∈ Ĥ, and (ψ̂_b, A′) ⟶_{φ̂₊} (ψ̂_e, A) ∈ Ê. Thus the pop edge is added by definition of f̂′. The new balanced path (ψ̂_b, A′) ⇝ (ψ̂, A′) is also added, and extended paths get added with propagation.
Approximation follows by composition with Theorem 12.1.
(G approximates G′): Directly from lemma 18.7.
\ Γ → Prop
invΓ̂ : Exp → System
G
z }| {
[
A,g
0
0
invΓ̂ (e)((P̂, Ê), Ĥ) = (P̂ =
ψ̂, ψ̂ : ψ̂ ψ̂ ∈ Ê )
A,g
g
∧ ∀ψ̂0 ψ̂1 ∈ Ê.∃(ψ̂0Γ , AΓ ) (ψ̂1Γ , A0Γ ) ∈ Ê.(∀i.ψ̂iΓ v ψ̂i ) ∧ AΓ ⊆ A
∧ ∀ψ̂0 ψ̂1 ∈ Ĥ.∃(ψ̂0Γ , AΓ ) (ψ̂1Γ , AΓ ) ∈ Ĥ 0 .∀i.ψ̂iΓ v ψ̂i
∧ ∀ψ̂ ∈ P̂.∃(ψ̂ Γ , A) ∈ P̂0 .ψ̂ Γ v ψ̂ ∧ lfp(tˆ)(ψ̂) ⊆ A
where ((P̂0 , Ê 0 ), Ĥ 0 ) = lfp( fˆe )
Lemma 18.8 (Approx GC invariant)
For all e, if invΓ̂ (e)(G) then invΓ̂ (e)( fˆe0 (G))
Proof
Straightforward case analysis.
Proof of Theorem 12.3
Proof
Induct on path in lfp( fˆe ) and apply 18.8.
19 Haskell implementation of CRPDSs
Where it is critical to understanding the details of the analysis, we have transliterated the
formalism into Haskell. We make use of two extensions in GHC:
-XTypeOperators -XTypeSynonymInstances
All code is in the context of the following header, and we’ll assume the standard instances
of type classes like Ord and Eq.
import Prelude hiding ((!!))
import Data.Map as Map hiding (map,foldr)
import Data.Set as Set hiding (map,foldr)
import Data.List as List hiding ((!!))
type P s = Set.Set s
type k :-> v = Map k v
(==>) :: a -> b -> (a,b)
(==>) x y = (x,y)
(//) :: Ord a => (a :-> b) -> [(a,b)] -> (a :-> b)
(//) f [(x,y)] = Map.insert x y f
set x = Set.singleton x
19.1 Transliteration of NFA formalism
We represent an NFA as a set of labeled forward edges, the inverse of those edges (for
convenience), a start state and an end state:
type NFA state char =
(NFAEdges state char,NFAEdges state char,state,state)
type NFAEdges state char = state :-> P(Maybe char,state)
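For concreteness, here is a small example value of this representation. It is an illustration of ours, not from the paper: an NFA over characters with states 0, 1, 2 that accepts the single string "ab", with the inverse edge map maintained alongside the forward one.
-- Illustrative only: forward edges, their inverses, a start state 0 and an end state 2.
exampleNFA :: NFA Int Char
exampleNFA = (fwd, bwd, 0, 2) where
  fwd = Map.fromList [ (0, set (Just 'a', 1)), (1, set (Just 'b', 2)) ]
  bwd = Map.fromList [ (1, set (Just 'a', 0)), (2, set (Just 'b', 1)) ]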
19.2 ANF
data Exp    = Ret AExp
            | App Call
            | Let1 Var Call Exp
data AExp   = Ref Var
            | Lam Lambda
data Lambda = Var :=> Exp
data Call   = AExp :@ AExp
type Var    = String
-- Abstract state-space:
type AConf  = (Exp,AEnv,AStore,AKont)
type AEnv   = Var :-> AAddr
type AStore = AAddr :-> AD
type AD     = P(AVal)
data AVal   = AClo (Lambda, AEnv)
type AKont  = [AFrame]
type AFrame = (Var,Exp,AEnv)
data AAddr = ABind Var AContext
type AContext = [Call]
Abstract configuration space transliterated into Haskell. In the code, we defined abstract
addresses to be able to support k-CFA-style polyvariance.
Atomic expression evaluation implementation:
aeval :: (AExp,AEnv,AStore) -> AD
aeval (Ref v, ρ, σ ) = σ !!(ρ!v)
aeval (Lam l, ρ, σ ) = set $ AClo (l, ρ)
We encode the transition relation as a function that returns lists of states:
astep :: AConf -> [AConf]
astep (App (f :@ ae), ρ, σ, κ) = [ (e, ρ’’, σ’, κ) |
    AClo (v :=> e, ρ’) <- Set.toList $ aeval(f, ρ, σ),
    let a = aalloc(v, App (f :@ ae)),
    let ρ’’ = ρ’ // [v ==> a],
    let σ’ = σ t [a ==> aeval(ae, ρ, σ)] ]
astep (Let1 v call e, ρ, σ, κ) =
    [ (App call, ρ, σ, (v, e, ρ) : κ) ]
astep (Ret ae, ρ, σ, (v, e, ρ’) : κ) = [ (e, ρ’’, σ’, κ) ]
  where a = aalloc(v, Ret ae)
        ρ’’ = ρ’ // [v ==> a]
        σ’ = σ t [a ==> aeval(ae, ρ, σ)]
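The allocator aalloc used in astep is not shown in this appendix. The following is a minimal, monovariant sketch — an assumption on our part, not the paper's definition; a k-CFA-style allocator would instead record up to k enclosing calls in the AContext.
-- Sketch (assumption): context-insensitive allocation, i.e. 0-CFA-style polyvariance.
aalloc :: (Var, Exp) -> AAddr
aalloc (v, _) = ABind v []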
19.3 Partial orders
We define a typeclass for lattices:
class Lattice a where
bot :: a
top :: a
(v) :: a -> a -> Bool
(t) :: a -> a -> a
(u) :: a -> a -> a
ZU064-05-FPR
paper-jfp
62
6 February 2018
3:38
J.I. Johnson, I. Sergey, C. Earl, M. Might, and D. Van Horn
And, we can lift instances to sets and maps:
instance (Ord s, Eq s) => Lattice (P s) where
bot = Set.empty
top = error "no representation of universal set"
x t y = x ‘Set.union‘ y
x u y = x ‘Set.intersection‘ y
x v y = x ‘Set.isSubsetOf‘ y
instance (Ord k, Lattice v) => Lattice (k :-> v) where
bot = Map.empty
top = error "no representation of top map"
f v g = Map.isSubmapOfBy (v) f g
f t g = Map.unionWith (t) f g
f u g = Map.intersectionWith (u) f g
(t) :: (Ord k, Lattice v) => (k :-> v) -> [(k,v)] -> (k :-> v)
f t [(k,v)] = Map.insertWith (t) k v f
(!!) :: (Ord k, Lattice v) => (k :-> v) -> k -> v
f !! k = Map.findWithDefault bot k f
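As a small illustration of ours (not from the paper), the lifted operations give weak updates and bottom-defaulting lookups on abstract stores; the address and closure below are made up for the example.
-- Illustration only: a singleton weak update via (t) and a defaulting lookup via (!!).
exampleStore :: AStore
exampleStore = bot t [ ABind "x" [] ==> set (AClo ("y" :=> Ret (Ref "y"), Map.empty)) ]

exampleLookup :: AD
exampleLookup = exampleStore !! ABind "z" []   -- an unbound address yields bot, the empty set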
19.4 Reachability
We can turn any data type to a stack-action alphabet:
data StackAct frame = Push { frame :: frame }
| Pop { frame :: frame }
| Unch
type CRPDS control frame = (Edges control frame, control)
type Edges control frame = control :-> (StackAct frame,control)
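The stack-action predicates used by sprout, addPush and addEmpty below do not appear in the listing; a minimal sketch of what they presumably look like (an assumption of ours):
-- Sketch (assumption): discriminators for stack actions.
isPush, isPop, isUnch :: StackAct frame -> Bool
isPush (Push _) = True
isPush _        = False
isPop  (Pop _)  = True
isPop  _        = False
isUnch Unch     = True
isUnch _        = False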
We split the encoding of δ into two functions for efficiency purposes:
type Delta control frame =
(TopDelta control frame, NopDelta control frame)
type TopDelta control frame =
control -> frame -> [(control,StackAct frame)]
type NopDelta control frame =
control -> [(control,StackAct frame)]
If we only want to know push and no-change transitions, we can find these with a NopDelta
function without providing the frame that is currently on top of the stack. If we want pop
transitions as well, we can find these with a TopDelta function, but of course, it must have
ZU064-05-FPR
paper-jfp
6 February 2018
3:38
PUSHDOWN FLOW ANALYSIS WITH ABSTRACT GARBAGE COLLECTION
63
access to the top of the stack. In practice, a TopDelta function would suffice, but there
are situations where only push and no-change transitions are needed, and having access to
NopDelta avoids extra computation.
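To make the division of labour concrete, here is a small sketch of ours (not from the paper) showing how the two halves of a Delta can be recombined when the caller does know whether there is a top-of-stack frame:
-- Illustration (assumption): dispatch to NopDelta when the stack is empty,
-- and to TopDelta when the top frame is available.
allSteps :: Delta control frame -> control -> Maybe frame -> [(control, StackAct frame)]
allSteps (δ, δ’) q Nothing  = δ’ q
allSteps (δ, δ’) q (Just γ) = δ q γ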
At this point, we must clarify how to embed the abstract transition relation into a pushdown transition relation:
adelta :: TopDelta AControl AFrame
adelta (e, ρ, σ) γ = [ ((e’, ρ’, σ’), g) |
    (e’, ρ’, σ’, κ) <- astep (e, ρ, σ, [γ]),
    let g = case κ of
              []      -> Pop γ
              [γ1, _] -> Push γ1
              [_]     -> Unch ]

adelta’ :: NopDelta AControl AFrame
adelta’ (e, ρ, σ) = [ ((e’, ρ’, σ’), g) |
    (e’, ρ’, σ’, κ) <- astep (e, ρ, σ, []),
    let g = case κ of
              [γ1] -> Push γ1
              []   -> Unch ]
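The type AControl in the signatures above is not defined in this appendix; presumably (an assumption of ours) it is the stack-less part of an abstract configuration:
-- Assumption: the control component of a configuration omits the continuation.
type AControl = (Exp, AEnv, AStore)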
The function crpds will invoke the fixed point solver:
crpds :: (Ord control, Ord frame) =>
         (Delta control frame) ->
         control ->
         frame ->
         CRPDS control frame
crpds (δ,δ’) q0 γ0 =
    (summarize (δ,δ’) etg1 ecg1 [] dE dH, q0) where
  etg1 = (Map.empty // [q0 ==> Set.empty],
          Map.empty // [q0 ==> Set.empty])
  ecg1 = (Map.empty // [q0 ==> set q0],
          Map.empty // [q0 ==> set q0])
  (dE,dH) = sprout (δ,δ’) q0
Figure 11 provides the code for summarize, which conducts the fixed point calculation,
the executable equivalent of Figure 6:
summarize :: (Ord control, Ord frame) =>
(Delta control frame) ->
(ETG control frame) ->
(ECG control) ->
[control] ->
[Edge control frame] ->
[EpsEdge control] ->
(Edges control frame)
To expose the structure of the computation, we’ve added a few types:
-- A set of edges, encoded as a map:
type Edges control frame =
control :-> P (StackAct frame,control)
-- Epsilon edges:
type EpsEdge control = (control,control)
-- Explicit transition graph:
type ETG control frame =
(Edges control frame, Edges control frame)
-- Epsilon closure graph:
type ECG control =
(control :-> P(control), control :-> P(control))
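One further type used in the signatures above, Edge, is not defined in the listing; presumably (an assumption of ours) it is the triple form of a labelled transition edge, matching its use in the work-lists and in sprout and addPush below:
-- Assumption: an explicit transition edge as a (source, action, target) triple.
type Edge control frame = (control, StackAct frame, control)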
An explicit transition graph is an explicit encoding of the reachable subset of the transition relation. The function summarize takes six parameters:
1. the pushdown transition function;
2. the current explicit transition graph;
3. the current ε-closure graph;
4. a work-list of states to add;
5. a work-list of explicit transition edges to add; and
6. a work-list of ε-closure transition edges to add.
The function summarize processes ε-closure edges first, then explicit transition edges and
then individual states. It must process ε-closure edges first to ensure that the ε-closure
graph is closed when considering the implications of other edges.
Sprouting
summarize (δ,δ’) (fw,bw) (fe,be) [] [] [] = fw
summarize (δ,δ’) (fw,bw) (fe,be) (q:dS) [] []
  | fe ‘contains‘ q = summarize (δ,δ’) (fw,bw) (fe,be) dS [] []
summarize (δ,δ’) (fw,bw) (fe,be) (q:dS) [] [] =
  summarize (δ,δ’) (fw’,bw’) (fe’,be’) dS dE’ dH’ where
    (dE’,dH’) = sprout (δ,δ’) q
    fw’ = fw t [q ==> Set.empty]
    bw’ = bw t [q ==> Set.empty]
    fe’ = fe t [q ==> set q]
    be’ = be t [q ==> set q]
summarize (δ,δ’) (fw,bw) (fe,be) dS ((q,g,q’):dE) []
  | (q,g,q’) ‘isin‘ fw = summarize (δ,δ’) (fw,bw) (fe,be) dS dE []
summarize (δ,δ’) (fw,bw) (fe,be) dS ((q,Push γ,q’):dE) [] =
  summarize (δ,δ’) (fw’,bw’) (fe’,be’) dS’ dE’’ dH’ where
    (dE’,dH’) = addPush (fw,bw) (fe,be) (δ,δ’) (q,Push γ,q’)
    dE’’ = dE ++ dE’
    dS’ = q’:dS
    fw’ = fw t [q ==> set (Push γ,q’)]
    bw’ = bw t [q’ ==> set (Push γ,q)]
    fe’ = fe t [q ==> set q]
    be’ = be t [q’ ==> set q’]
summarize (δ,δ’) (fw,bw) (fe,be) dS ((q,Pop γ,q’):dE) [] =
  summarize (δ,δ’) (fw’,bw’) (fe’,be’) dS’ dE’’ dH’ where
    (dE’,dH’) = addPop (fw,bw) (fe,be) (δ,δ’) (q,Pop γ,q’)
    dE’’ = dE ++ dE’
    dS’ = q’:dS
    fw’ = fw t [q ==> set (Pop γ,q’)]
    bw’ = bw t [q’ ==> set (Pop γ,q)]
    fe’ = fe t [q ==> set q]
    be’ = be t [q’ ==> set q’]
summarize (δ,δ’) (fw,bw) (fe,be) dS ((q,Unch,q’):dE) [] =
  summarize (δ,δ’) (fw’,bw’) (fe’,be’) dS’ dE [(q,q’)] where
    dS’ = q’:dS
    fw’ = fw t [q ==> set (Unch,q’)]
    bw’ = bw t [q’ ==> set (Unch,q)]
    fe’ = fe t [q ==> set q]
    be’ = be t [q’ ==> set q’]
summarize (δ,δ’) (fw,bw) (fe,be) dS dE ((q,q’):dH)
  | (q,q’) ‘isin‘ fe = summarize (δ,δ’) (fw,bw) (fe,be) dS dE dH
summarize (δ,δ’) (fw,bw) (fe,be) dS dE ((q,q’):dH) =
  summarize (δ,δ’) (fw,bw) (fe’,be’) dS (dE ++ dE’) (dH ++ dH’) where
    (dE’,dH’) = addEmpty (fw,bw) (fe,be) (δ,δ’) (q,q’)
    fe’ = fe t [q ==> set q’]
    be’ = be t [q’ ==> set q]
Fig. 11: An implementation of pushdown control-state reachability.
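The membership helpers contains and isin used in Figure 11 are not included in the listing. A simple reading of ours follows; the figure overloads isin for the two edge shapes, which we split into two separately named helpers here to keep the sketch self-contained.
-- Sketch (assumption): membership tests against the graphs threaded through summarize.
contains :: (Ord control) => (control :-> P control) -> control -> Bool
contains fe q = Map.member q fe

edgeIsIn :: (Ord control, Ord frame) =>
            (control, StackAct frame, control) -> Edges control frame -> Bool
edgeIsIn (q, g, q’) fw = (g, q’) `Set.member` (fw !! q)

epsIsIn :: (Ord control) => (control, control) -> (control :-> P control) -> Bool
epsIsIn (q, q’) fe = q’ `Set.member` (fe !! q)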
sprout :: (Ord control) =>
Delta control frame ->
control ->
([Edge control frame], [EpsEdge control])
sprout (δ ,δ ’) q = (dE, dH) where
edges = δ ’ q
dE = [ (q,g,q’) | (q’,g) <- edges, isPush g ]
dH = [ (q,q’)
| (q’,g) <- edges, isUnch g ]
Pushing
addPush :: (Ord control) =>
ETG control frame ->
ECG control ->
Delta control frame ->
Edge control frame ->
([Edge control frame], [EpsEdge control])
addPush (fw,bw) (fe,be) (δ ,δ ’) (s,Push γ,q) = (dE,dH) where
qset’ = Set.toList $ fe!q
dE = [ (q’,g,q’’) | q’ <- qset’, (q’’,g) <- δ q’ γ, isPop g ]
dH = [ (s,q’’)
| (q’,Pop ,q’’) <- dE ]
Popping
addPop :: (Ord control) =>
ETG control frame ->
ECG control ->
Delta control frame ->
Edge control frame ->
([Edge control frame], [EpsEdge control])
addPop (fw,bw) (fe,be) (δ ,δ ’) (s’’,Pop γ,q) = (dE,dH) where
sset’ = Set.toList $ be!s’’
dH = [ (s,q) | s’ <- sset’,
(g,s) <- Set.toList $ bw!s’, isPush g ]
dE = []
Clearly, we could eliminate the new edges parameter dE for the function addPop, but we
have retained it for stylistic symmetry.
Adding empty edges The function addEmpty has many cases to consider:
addEmpty :: (Ord control) =>
            ETG control frame ->
            ECG control ->
            Delta control frame ->
            EpsEdge control ->
            ([Edge control frame], [EpsEdge control])
addEmpty (fw,bw) (fe,be) (δ,δ’) (s’’,s’’’) = (dE,dH) where
  sset’    = Set.toList $ be!s’’
  sset’’’’ = Set.toList $ fe!s’’’
  dH’   = [ (s’,s’’’’)  | s’ <- sset’, s’’’’ <- sset’’’’ ]
  dH’’  = [ (s’,s’’’)   | s’ <- sset’ ]
  dH’’’ = [ (s’’,s’’’’) | s’’’’ <- sset’’’’ ]
  sEdges = [ (g,s) | s’ <- sset’, (g,s) <- Set.toList $ bw!s’ ]
  dE = [ (s’’’’,g’,q) | s’’’’ <- sset’’’’,
                        (g,s) <- sEdges,
                        isPush g, let Push γ = g,
                        (q,g’) <- δ s’’’’ γ,
                        isPop g’ ]
  dH’’’’ = [ (s,q) | (_,s) <- sEdges, (_,_,q) <- dE ]
  dH = dH’ ++ dH’’ ++ dH’’’ ++ dH’’’’
On the frequentist validity of Bayesian limits
B. J. K. Kleijn∗
Korteweg-de Vries Institute for Mathematics, University of Amsterdam
arXiv:1611.08444v3 [] 27 Nov 2017
November 2017
Abstract
To the frequentist who computes posteriors, not all priors are useful asymptotically: in
this paper Schwartz’s 1965 Kullback-Leibler condition is generalised to enable frequentist
interpretation of convergence of posterior distributions with the complex models and often
dependent datasets in present-day statistical applications. We prove four simple and fully
general frequentist theorems, for posterior consistency; for posterior rates of convergence;
for consistency of the Bayes factor in hypothesis testing or model selection; and a theorem
to obtain confidence sets from credible sets. The latter has a significant methodological
consequence in frequentist uncertainty quantification: use of a suitable prior allows one to
convert credible sets of a calculated, simulated or approximated posterior into asymptotically consistent confidence sets, in full generality. This extends the main inferential implication of the Bernstein-von Mises theorem to non-parametric models without smoothness
conditions. Proofs require the existence of a Bayesian type of test sequence and priors
giving rise to local prior predictive distributions that satisfy a weakened form of Le Cam’s
contiguity with respect to the data distribution. Results are applied in a wide range of
examples and counterexamples.
1
Introduction
In this paper (following [5]:“Statisticians should readily use both Bayesian and frequentist
ideas.”) we examine for which priors Bayesian asymptotic conclusions extend to conclusions
valid in the frequentist sense: how Doob’s prior-almost-sure consistency is strengthened to
reach Schwartz’s frequentist conclusion that the posterior is consistent, or how a test that is
consistent prior-almost-surely becomes a test that is consistent in all points of the model, or
how a Bayesian credible set can serve as a frequentist confidence set asymptotically.
The central property to enable frequentist interpretation of posterior asymptotics is defined as
remote contiguity in section 3. It expresses a weakened form of Le Cam’s contiguity, relating
the true distribution of the data to localized prior predictive distributions. Where Schwartz’s
Kullback-Leibler neighbourhoods represent a choice for the localization appropriate when the
∗ email: [email protected], web: https://staff.fnwi.uva.nl/b.j.k.kleijn
sample is i.i.d., remote contiguity generalises the notion to include non-i.i.d. samples, priors
that change with the sample size, weak consistency with the Dirichlet process prior, etcetera.
Although firstly aimed at enhancing insight into asymptotic relations by simplification and
generalisation, this paper also has a significant methodological consequence: theorem 4.12
demonstrates that if the prior is such that remote contiguity applies, credible sets can be
converted to asymptotically consistent confidence sets in full generality. So the asymptotic
validity of credible sets as confidence sets in smooth parametric models [53] extends much
further: in practice, the frequentist can simulate the posterior in any model, construct his
preferred type of credible sets and ‘enlarge’ them to obtain asymptotic confidence sets, provided
his prior induces remote contiguity. This extends the main inferential implication of the
Bernstein-von Mises theorem to non-parametric models.
In the remainder of this section we discuss posterior consistency. In section 2 we concentrate
on an inequality that relates testing to posterior concentration and indicates the relation with
Le Cam’s inequality. Section 3 introduces remote contiguity and the analogue of Le Cam’s
First Lemma. In section 4, frequentist theorems on the asymptotic behaviour of posterior
distributions are proved, on posterior consistency, on posterior rates of convergence, on consistent testing and model selection with Bayes factors and on the conversion of credible sets
to confidence sets. Section 5 formulates the conclusions.
Definitions, notation, conventions roughly follow those of [51] and are collected in appendix A
with some other preliminaries. All applications, illustrations, examples and counterexamples
have been collected in appendix B. Proofs are found in appendix C.
1.1
Posterior consistency and inconsistency
For a statistical procedure to be consistent, it must infer the truth with arbitrarily large
accuracy and probability, if we gather enough data. For example, when using sequential data
X n ∼ Pθ0 ,n to estimate the value θ0 , a consistent estimator sequence θn converges to θ0 in
Pθ0 ,n -probability. For a posterior Π(·|X n ) to be consistent, it must concentrate mass arbitrarily
close to one in any neighbourhood of θ0 as n → ∞ (see definition 4.1).
Consider a model P for i.i.d. data with single-observation distribution P0 . Give P a Polish
topology with Borel prior Π so that the posterior is well-defined (see definition A.3). The first
general consistency theorem for posteriors is due to Doob.
Theorem 1.1 (Doob (1949))
For all n ≥ 1, let (X1 , X2 , . . . , Xn ) ∈ X n be i .i .d . − P0 , where P0 lies in a model P. Suppose
X and P are Polish spaces. Assume that P 7→ P (A) is Borel measurable for every Borel set
A ⊂ X . Then for any Borel prior Π on P the posterior is consistent, for Π-almost-all P .
In parametric applications Doob’s Π-null-set of potential inconsistency can be considered small
(for example, when the prior dominates Lebesgue measure). But in non-parametric context
these null-sets can become very large (or not, see [54]): the first examples of unexpected
posterior inconsistency are due to Schwartz [61], but it was Freedman [29] who made the point
famous with a simple non-parametric counterexample (discussed in detail as example B.1). In
[30] it was even shown that inconsistency is generic in a topological sense: the set of pairs
(P0 , Π) for which the posterior is consistent is meagre: posteriors that only wander around,
placing and re-placing mass aimlessly, are the rule rather than the exception. (For a discussion,
see example B.2.)
These and subsequent examples of posterior inconsistency established a widespread conviction
that Bayesian methods were wholly unfit for frequentist purposes, at least in non-parametric
context. The only justifiable conclusion from Freedman’s meagreness, however, is that a
condition is missing: Doob's assertion may be all that a Bayesian requires; a frequentist
demands strictly more, thus restricting the class of possible choices for his prior. Strangely,
a condition representing this restriction had already been found when Freedman’s meagreness
result was published.
Theorem 1.2 (Schwartz (1965))
For all n ≥ 1, let (X1 , X2 , . . . , Xn ) ∈ X n be i .i .d . − P0 , where P0 lies in a model P. Let U
denote an open neighbourhood of P0 in P. If,
(i) there exist measurable φn : X^n → [0, 1], such that,
P0^n φn = o(1),   sup_{Q ∈ U^c} Q^n (1 − φn) = o(1),    (1)
(ii) and Π is a Kullback-Leibler prior, i.e. for all δ > 0,
Π( P ∈ P : −P0 log(dP/dP0) < δ ) > 0,    (2)
then Π(U|X^n) → 1, P0-almost-surely.
Over the decades, examples of problematic posterior behaviour in non-parametric setting continued to captivate [20, 21, 17, 22, 23, 31, 32], while Schwartz’s theorem received initially limited but steadily growing amounts of attention: subsequent frequentist theorems (e.g. by Barron [3], Barron-Schervish-Wasserman [4], Ghosal-Ghosh-van der Vaart [34], Shen-Wasserman
[63], Walker [70] and Walker-Lijoi-Prünster [72], Kleijn-Zhao [46] and many others) have extended the applicability of theorem 1.2 but not its essence, condition (2) for the prior. The
following example illustrates that Schwartz’s condition cannot be the whole truth, though.
Example 1.3 Consider X1 , X2 , . . . that are i.i.d.-P0 with Lebesgue density p0 : R → R supported on an interval of known width (say, 1) but unknown location. Parametrize in terms
of a continuous density η on [0, 1] with η(x) > 0 for all x ∈ [0, 1] and a location θ ∈ R:
pθ,η(x) = η(x − θ) 1_{[θ,θ+1]}(x). A moment's thought makes clear that if θ ≠ θ0,
−Pθ,η log( pθ0,η′ / pθ,η ) = ∞,
for all η, η′. Therefore Kullback-Leibler neighbourhoods do not have any extent in the θ-direction and no prior is a Kullback-Leibler prior in this model. Nonetheless the posterior is
consistent (see examples B.14 and B.15).
Similar counterexamples exist [46] for the type of prior that is proposed in the analyses of
posterior rates of convergence in (Hellinger) metric setting [34, 63]. Although methods in
[46] avoid this type of problem, the essential nature of condition (2) in i.i.d. setting becomes
apparent there as well.
This raises the central question of this paper: is Schwartz’s Kullback-Leibler condition perhaps a manifestation of a more general notion? The argument leads to other questions for
which insightful answers have been elusive: why is Doob’s theorem completely different from
Schwartz’s? The accepted explanation views the lack of congruence as an indistinct symptom of differing philosophies, but is this justified? Why does weak consistency in the full
non-parametric model (e.g. with the Dirichlet process prior [28], or more modern variations
[19]) reside in a corner of its own (with tailfreeness [30] as sufficient property of the prior),
apparently unrelated to posterior consistency in either Doob’s or Schwartz’s views? Indeed,
what would Schwartz’s theorem look like without the assumption that the sample is i.i.d. (e.g.
with data that form a Markov chain or realize some other stochastic process) or with growing
parameter spaces and changing priors? And to extend the scope further, what can be said
about hypothesis testing, classification, model selection, etcetera? Given that the Bernstein-von Mises theorem cannot be expected to hold in any generality outside parametric setting
[17, 32], what relationship exists between credible sets and confidence sets? This paper aims
to shed more light on these questions in a general sense, by providing a prior condition that
enables strengthening Bayesian asymptotic conclusions to frequentist ones, illustrated with a
variety of examples and counterexamples.
2
Posterior concentration and asymptotic tests
In this section, we consider a lemma that relates concentration of posterior mass in certain
model subsets to the existence of test sequences that distinguish between those subsets. More
precisely, it is shown that the expected posterior mass outside a model subset V with respect
to the local prior predictive distribution over a model subset B, is upper bounded (roughly)
by the testing power of any statistical test for the hypotheses B versus V : if a test sequence
exists, the posterior will concentrate its mass appropriately.
2.1
Bayesian test sequences
Since the work of Schwartz [62], test sequences and posterior convergence have been linked
intimately. Here we follow Schwartz and consider asymptotic testing; however, we define test
sequences immediately in Bayesian context by involving priors from the outset.
Definition 2.1 Given priors (Πn ), measurable model subsets (Bn ), (Vn ) ⊂ G and an ↓ 0, a
sequence of Bn -measurable maps φn : Xn → [0, 1] is called a Bayesian test sequence for Bn
versus Vn (under Πn) of power an, if,
∫_{Bn} Pθ,n φn dΠn(θ) + ∫_{Vn} Pθ,n (1 − φn) dΠn(θ) = o(an).    (3)
We say that (φn ) is a Bayesian test sequence for Bn versus Vn (under Πn ) if (3) holds for
some an ↓ 0.
Note that if we have sequences (Cn ) and (Wn ) such that Cn ⊂ Bn and Wn ⊂ Vn for all n ≥ 1,
then a Bayesian test sequence for (Bn ) versus (Vn ) of power an is a Bayesian test sequence for
(Cn ) versus (Wn ) of power (at least) an .
Lemma 2.2 For any B, V ∈ G with Π(B) > 0 and any measurable φ : X → [0, 1],
∫ Pθ Π(V|X) dΠ(θ|B) ≤ ∫ Pθ φ dΠ(θ|B) + (1/Π(B)) ∫_V Pθ (1 − φ) dΠ(θ).    (4)
So the mere existence of a test sequence is enough to guarantee posterior concentration, a fact
expressed in n-dependent form through the following proposition.
Proposition 2.3 Assume that for given priors Πn , sequences (Bn ), (Vn ) ⊂ G and an , bn ↓ 0
such that an = o(bn ) with Πn (Bn ) ≥ bn > 0, there exists a Bayesian test sequence for Bn
versus Vn of power an . Then,
Pn^{Πn|Bn} Π(Vn|X^n) = o(an bn^{-1}),    (5)
for all n ≥ 1.
To see how this leads to posterior consistency, consider the following: if the model subsets
Vn = V are all equal to the complement of a neighbourhood U of P0 , and the Bn are chosen
such that the expectations of the random variables X^n ↦ Π(V|X^n) under Pn^{Πn|Bn} ‘dominate’
their expectations under P0,n in a suitable way, sufficiency of prior mass bn given testing power
an ↓ 0, is enough to assert that P0,n Π(V |X n ) → 0, so an arbitrarily large fraction of posterior
mass is found in U with high probability for n large enough.
2.2
Existence of Bayesian test sequences
Lemma 2.2 and proposition 2.3 require the existence of test sequences of the Bayesian type.
That question is unfamiliar, frequentists are used to test sequences for pointwise or uniform
That question is unfamiliar; frequentists are used to test sequences for pointwise or uniform
testing. For example, an application of Hoeffding's inequality demonstrates that weak neighbourhoods are uniformly testable (see proposition A.6). Another well-known example concerns
are constructed using convex building blocks B and V separated in Hellinger distance (see
proposition B.7 and subsequent remarks).
Requiring the existence of a Bayesian test sequence c.f. (3) is quite different. We shall illustrate
this point in various ways below. First of all the existence of a Bayesian test sequence is linked
directly to behaviour of the posterior itself.
Theorem 2.4 Let (Θ, G , Π) be given. For any B, V ∈ G with Π(B) > 0, Π(V ) > 0, the
following are equivalent,
(i) there are Bn-measurable φn : Xn → [0, 1] such that for Π-almost-all θ ∈ B, θ′ ∈ V,
Pθ,n φn → 0,   Pθ′,n (1 − φn) → 0,
(ii) there are Bn-measurable φn : Xn → [0, 1] such that,
∫_B Pθ,n φn dΠ(θ) + ∫_V Pθ,n (1 − φn) dΠ(θ) → 0,
(iii) for Π-almost-all θ ∈ B, θ′ ∈ V,
Π(V|X^n) → 0 in Pθ,n-probability,   Π(B|X^n) → 0 in Pθ′,n-probability.
The interpretation of this theorem is gratifying to supporters of the likelihood principle and
pure Bayesians: distinctions between model subsets are Bayesian testable, if and only if, they
are picked up by the posterior asymptotically, if and only if, there exists a pointwise test for
B versus V that is Π-almost-surely consistent.
For a second, more frequentist way to illustrate how basic the existence of a Bayesian test
sequence is, consider a parameter space (Θ, d) which is a metric space with fixed Borel prior
Π and d-consistent estimators θ̂n : Xn → Θ for θ. Then for every θ0 ∈ Θ and ε > 0, there
exists a pointwise test sequence (and hence, by dominated convergence, also a Bayesian test
sequence) for B = {θ ∈ Θ : d(θ, θ0) < ½ε} versus V = {θ ∈ Θ : d(θ, θ0) > ε}. This approach is
followed in example B.19 on random walks, see the definition of the test following inequality
(B.36).
A third perspective on the existence of Bayesian tests arises from Doob’s argument. From
our present perspective, we note that theorem 2.4 implies an alternative proof of Doob’s
consistency theorem through the following existence result on Bayesian test sequences. (Note:
here and elsewhere in i.i.d. setting, the parameter space Θ is P, θ is the single-observation
distribution P and θ 7→ Pθ,n is P 7→ P n .)
Proposition 2.5 Consider a model P of single-observation distributions P for i.i.d. data
(X1 , X2 , . . . , Xn ) ∼ P n , (n ≥ 1). Assume that P is a Polish space with Borel prior Π. For
any Borel set V there is a Bayesian test sequence for V versus P \ V under Π.
Doob’s theorem is recovered when we let V be the complement of any open neighbourhood
U of P0 . Comparing with conditions for the existence of uniform tests, Bayesian tests are
quite abundant: whereas uniform testing relies on the minimax theorem (forcing convexity,
compactness and continuity requirements into the picture), Bayesian tests exist quite generally
(at least, for Polish parameters with i.i.d. data).
The fourth perspective on the existence of Bayesian tests concerns a direct way to construct
a Bayesian test sequence of optimal power, based on the fact that we are really only testing
barycentres against each other: let priors (Πn ) and G -measurable model subsets Bn , Vn be
given. For given tests (φn ) and power sequence an , write (3) as follows:
Πn(Bn) Pn^{Πn|Bn} φn(X^n) + Πn(Vn) Pn^{Πn|Vn} (1 − φn)(X^n) = o(an),
and note that what is required here, is a (weighted) test of (Pn^{Πn|Bn}) versus (Pn^{Πn|Vn}). The likelihood-ratio test (denote the density for Pn^{Πn|Bn} with respect to µn = Pn^{Πn|Bn} + Pn^{Πn|Vn} by p_{Bn,n}, and similarly for Pn^{Πn|Vn}),
φn(X^n) = 1{ Πn(Vn) p_{Vn,n}(X^n) > Πn(Bn) p_{Bn,n}(X^n) },
is optimal and has power ‖Πn(Bn) Pn^{Πn|Bn} ∧ Πn(Vn) Pn^{Πn|Vn}‖. This proves the following useful
proposition that re-expresses power in terms of the relevant Hellinger transform (see, e.g.
section 16.4 in [51], particularly, Remark 1).
Proposition 2.6 Let priors (Πn ) and measurable model subsets Bn , Vn be given. There exists
a test sequence φn : Xn → [0, 1] such that,
∫_{Bn} Pθ,n φn dΠn(θ) + ∫_{Vn} Pθ,n (1 − φn) dΠn(θ) ≤ ∫ (Πn(Bn) p_{Bn,n}(x))^α (Πn(Vn) p_{Vn,n}(x))^{1−α} dµn(x),    (6)
for every n ≥ 1 and any 0 ≤ α ≤ 1.
Proposition 2.6 generalises proposition 2.5 and makes Bayesian tests available with a (close-to)sharp bound on the power under fully general conditions. For the connection with minimax
tests, we note the following. If {Pθ,n : θ ∈ Bn } and {Pθ,n : θ ∈ Vn } are convex sets (and the
Πn are Radon measures, e.g. in Polish parameter spaces), then,
H( Pn^{Πn|Bn}, Pn^{Πn|Vn} ) ≥ inf{ H(Pθ,n, Pθ′,n) : θ ∈ Bn, θ′ ∈ Vn }.
Combination with (6) for α = 1/2 implies that the minimax upper bound in i.i.d. cases, c.f. proposition B.7, remains valid:
∫_{Bn} P^n φn dΠn(P) + ∫_{Vn} Q^n (1 − φn) dΠn(Q) ≤ √(Πn(Bn) Πn(Vn)) e^{−n εn²},    (7)
where εn = inf{ H(P, Q) : P ∈ Bn, Q ∈ Vn }. Given an ↓ 0, any Bayesian test φn that satisfies
(3) for all probability measures Πn on Θ, is a (weighted) minimax test for Bn versus Vn of
power an .
Note that the above enhances the role that the prior plays in the frequentist discussion of the
asymptotic behaviour of the posterior: the prior is not only important in requirements like
(2), but can also be of influence in the testing condition: where testing power is relatively
weak, prior mass should be scarce to compensate and where testing power is strong, prior
mass should be plentiful. To make use of this, one typically imposes upper bounds on prior
mass in certain hard-to-test subsets of the model (as opposed to lower bounds like (2)). See
example B.19 on random-walk data. In the Hellinger-geometric view, the prior determines
whether the local prior predictive distributions Pn^{Πn|Bn} and Pn^{Πn|Vn} lie close together or not in
Hellinger distance, and thus to the r.h.s. of (6) for α = 1/2. This phenomenon plays a role in
example B.17 on the estimation of a sparse vector of normal means, where it explains why the
slab-component of a spike-and-slab prior must have a tail that is heavy enough.
2.3
Le Cam’s inequality
Referring to the argument following proposition 2.3, one way of guaranteeing that the expectations of X^n ↦ Π(V|X^n) under Pn^{Π|Bn} approximate those under P0,n, is to choose Bn = {θ ∈ Θ : ‖Pθ,n − Pθ0,n‖ ≤ δn}, for some sequence δn → 0, because in that case, |P0,n ψ − Pn^{Π|Bn} ψ| ≤ ‖P0,n − Pn^{Π|Bn}‖ ≤ δn, for any random variable ψ : Xn → [0, 1]. Without fixing the definition of the sets Bn, one may use this step to specify inequality (4) further:
P0,n Π(Vn|X^n) ≤ ‖P0,n − Pn^{Π|Bn}‖ + ∫ Pθ,n φn dΠn(θ|Bn) + (Πn(Vn)/Πn(Bn)) ∫ Pθ,n (1 − φn) dΠn(θ|Vn),    (8)
for Bn and Vn such that Πn (Bn ) > 0 and Πn (Vn ) > 0. Le Cam’s inequality (8) is used, for
example, in the proof of the Bernstein-von Mises theorem, see lemma 2 in section 8.4 of [53]. A
less successful application pertains to non-parametric posterior rates of convergence for i.i.d.
data, in an unpublished paper [50]. Rates of convergence obtained in this way are suboptimal:
Le Cam qualifies the first term on the right-hand side of (8) as a “considerable nuisance”
and concludes that “it is unclear at the time of this writing what general features, besides the
metric structure, could be used to refine the results”, (see [51], end of section 16.6). In [74],
Le Cam relates the posterior question to dimensionality restrictions [49, 63, 34] and reiterates,
“And for Bayes risk, I know that just the metric structure does not catch everything, but I
don’t know what else to look at, except calculations.”
3
Remote contiguity
Le Cam’s notion of contiguity describes an asymptotic version of absolute continuity, applicable
to sequences of probability measures in a limiting sense [48]. In this section we weaken the
property of contiguity in a way that is suitable to promote Π-almost-everywhere Bayesian
limits to frequentist limits that hold everywhere.
3.1
Definition and criteria for remote contiguity
The notion of ‘domination’ left undefined in the argument following proposition 2.3 is made
rigorous here.
Definition 3.1 Given measurable spaces (Xn , Bn ), n ≥ 1 with two sequences (Pn ) and (Qn )
of probability measures and a sequence ρn ↓ 0, we say that Qn is ρn -remotely contiguous with
respect to Pn, notation Qn C ρn^{-1} Pn, if,
Pn φn(X^n) = o(ρn)  ⇒  Qn φn(X^n) = o(1),    (9)
for every sequence of Bn -measurable φn : Xn → [0, 1].
Note that for a sequence (Qn ) that is an -remotely contiguous with respect to (Pn ), there
exists no test sequence that distinguishes between Pn and Qn with power an . Note also that
given two sequences (Pn ) and (Qn ), contiguity Pn C Qn is equivalent to remote contiguity
Pn C an^{-1} Qn for all an ↓ 0. Given sequences an, bn ↓ 0 with an = O(bn), bn-remote contiguity
implies an -remote contiguity of (Pn ) with respect to (Qn ).
Example 3.2 Let P be a model for the distribution of a single observation in i.i.d. samples
X^n = (X1, . . . , Xn). Let P0, P and ε > 0 be such that −P0 log(dP/dP0) < ½ε². The law of large numbers implies that for large enough n,
(dP^n/dP0^n)(X^n) ≥ e^{−½nε²},    (10)
with P0^n-probability one. Consequently, for large enough n and for any Bn-measurable sequence ψn : Xn → [0, 1],
P^n ψn ≥ e^{−½nε²} P0^n ψn.    (11)
Therefore, if P^n φn = o(exp(−½nε²)) then P0^n φn = o(1). Conclude that for every ε > 0, the Kullback-Leibler neighbourhood {P : −P0 log(dP/dP0) < ½ε²} consists of model distributions for which the sequence (P0^n) of product distributions is exp(−½nε²)-remotely contiguous
with respect to (P n ).
Criteria for remote contiguity are given in the lemma below; note that, here, we give sufficient conditions, rather than necessary and sufficient, as in Le Cam’s First Lemma. (For the
definition of (dPn /dQn )−1 , see appendix A, notation and conventions.)
Lemma 3.3 Given (Pn), (Qn), an ↓ 0, Qn C an^{-1} Pn, if any of the following hold:
(i) for any Bn-measurable φn : Xn → [0, 1], an^{-1} φn → 0 in Pn-probability implies φn → 0 in Qn-probability,
(ii) given ε > 0, there is a δ > 0 such that Qn(dPn/dQn < δ an) < ε, for large enough n,
(iii) there is a b > 0 such that lim inf_n b an^{-1} Pn(dQn/dPn > b an^{-1}) = 1,
(iv) for any ε > 0, there is a constant c > 0 such that ‖Qn − Qn ∧ c an^{-1} Pn‖ < ε, for large enough n,
(v) under Qn every subsequence of (an (dPn/dQn)^{-1}) has a weakly convergent subsequence.
Proof The proof of this lemma can be found in appendix C. It actually proves that ((i) or
(iv)) implies remote contiguity; that ((ii) or (iii)) implies (iv) and that (v) is equivalent to
(ii).
Contiguity and its remote variation are compared in the context of (parametric and nonparametric) regression in examples B.11 and B.12. We may specify the definition of remote
contiguity slightly further.
Definition 3.4 Given measurable spaces (Xn , Bn ), (n ≥ 1) with two sequences (Pn ) and
(Qn ) of probability measures and sequences ρn , σn > 0, ρn , σn → 0, we say that Qn is ρn -to-σn
remotely contiguous with respect to Pn, notation σn^{-1} Qn C ρn^{-1} Pn, if,
Pn φn(X^n) = o(ρn)  ⇒  Qn φn(X^n) = o(σn),
for every sequence of Bn -measurable φn : Xn → [0, 1].
Like definition 3.1, definition 3.4 allows for reformulation similar to lemma 3.3, e.g. if for some
sequences ρn , σn like in definition 3.4,
‖Qn − Qn ∧ σn ρn^{-1} Pn‖ = o(σn),
then σn^{-1} Qn C ρn^{-1} Pn. We leave the formulation of other sufficient conditions to the reader.
Note that inequality (11) in example 3.2 implies that bn^{-1} P0^n C an^{-1} P^n, for any an ≤ exp(−nα²) with α² > ½ε² and bn = exp(−n(α² − ½ε²)). It is noted that this implies that φn(X^n) → 0 Qn-almost-surely for any φn : Xn → [0, 1] such that Pn φn(X^n) = o(ρn) (more generally, this holds whenever Σ_n σn < ∞, as a consequence of the first Borel-Cantelli lemma).
3.2
Remote contiguity for Bayesian limits
The relevant applications in the context of Bayesian limit theorems concern remote contiguity
of the sequence of true distributions Pθ0 ,n with respect to local prior predictive distributions
Π |Bn
Pn n
, where the sets Bn ⊂ Θ are such that,
Πn |Bn
Pθ0 ,n C a−1
,
n Pn
(12)
for some rate an ↓ 0.
In the case of i.i.d. data, Barron [3] introduces strong and weak notions of merging of Pθ0 ,n
with (non-local) prior predictive distributions PnΠ . The weak version imposes condition (ii) of
lemma 3.3 for all exponential rates simultaneously. Strong merging (or matching [2]) coincides
with Schwartz’s almost-sure limit, while weak matching is viewed as a limit in probability.
By contrast, if we have a specific rate an in mind, the relevant mode of convergence is Prohorov’s weak convergence: according to lemma 3.3-(v), (12) holds if inverse likelihood ratios
Zn have a weak limit Z when re-scaled by an ,
Zn = (dPn^{Πn|Bn}/dPθ0,n)^{-1}(X^n),    an Zn → Z weakly under Pθ0,n.
To better understand the counterexamples of section B, notice the high sensitivity of this
criterion to the existence of subsets of the sample spaces assigned probability zero under
some model distributions, while the true probability is non-zero. More generally, remote
contiguity is sensitive to subsets En assigned fast decreasing probabilities under local prior
predictive distributions Pn^{Πn|Bn}(En), while the probabilities Pθ0,n(En) remain high, which is
what definition 3.1 expresses. The rate an ↓ 0 helps to control the likelihood ratio (compare
to the unscaled limits of likelihood ratios that play a central role in the theory of convergence
of experiments [51]), conceivably enough to force uniform tightness in many non-parametric
situations.
But condition (12) can also be written out, for example to the requirement that for some
constant δ > 0,
Pθ0,n( ∫ (dPθ,n/dPθ0,n)(X^n) dΠn(θ|Bn) < δ an ) → 0,
with the help of lemma 3.3-(ii).
Example 3.5 Consider again the model of example 1.3. In example B.14, it is shown that if
the prior Π for θ ∈ R has a continuous and strictly positive Lebesgue density and we choose
Bn = [θ0 , θ0 + 1/n], then for every δ > 0 and all an ↓ 0,
Pθ0^n( ∫ (dPθ,n/dPθ0,n)(X^n) dΠ(θ|Bn) < δ an ) ≤ Pθ0^n( n(X_{(1)} − θ0) < 2δ an ),
for large enough n ≥ 1, and the r.h.s. goes to zero for any an because the random variables
n(X(1) − θ0 ) have a non-degenerate, positive weak limit under Pθn0 as n → ∞. Conclude that
with these choices for Π and Bn , (12) holds, for any an .
The following proposition should be viewed in light of [52], which considers properties like
contiguity, convergence of experiments and local asymptotic normality in situations of statistical information loss. In this case, we are interested in (remote) contiguity of the probability
measures that arise as marginals for the data X n when information concerning the (Bayesian
random) parameter θ is unavailable.
Proposition 3.6 Let θ0 ∈ Θ and a prior Π : G → [0, 1] be given. Let B be a measurable
subset of Θ such that Π(B) > 0. Assume that for some an ↓ 0, the family,
{ an (dPθ,n/dPθ0,n)^{-1}(X^n) : θ ∈ B, n ≥ 1 },
is uniformly tight under Pθ0,n. Then Pθ0,n C an^{-1} Pn^{Π|B}.
Other sufficient conditions from lemma 3.3 may replace the uniform tightness condition. When
the prior Π and subset B are n-dependent, application of lemma 3.3 requires more. (See, for
instance, example B.12 and lemma B.13, where local asymptotic normality is used to prove
(12).)
To re-establish contact with the notion of merging, note the following. If remote contiguity
of the type (12) can be achieved for a sequence of subsets (Bn ), then it also holds for any
sequence of sets (e.g. all equal to Θ, in Barron’s case) that contain the Bn but at a rate that
differs proportionally to the fraction of prior masses.
Lemma 3.7 For all n ≥ 1, let Bn ⊂ Θ be such that Πn (Bn ) > 0 and Cn such that Bn ⊂ Cn
with cn = Πn (Bn )/Πn (Cn ) ↓ 0, then,
Pn^{Πn|Bn} C cn^{-1} Pn^{Πn|Cn}.
Also, if for some sequence (Pn), Pn C an^{-1} Pn^{Πn|Bn}, then Pn C an^{-1} cn^{-1} Pn^{Πn|Cn}.
So when considering possible choices for the sequence (Bn ), smaller choices lead to slower
rates an , rendering (9) applicable to more sequences of test functions. This advantage is to be
balanced against later requirements that Πn (Bn ) may not decrease too fast.
4
Posterior concentration
In this section new frequentist theorems are formulated involving the convergence of posterior
distributions. First we give a basic proof for posterior consistency assuming existence of
suitable test sequences and remote contiguity of true distributions (Pθ0 ,n ) with respect to local
prior predictive distributions. Then it is not difficult to extend the proof to the case of posterior
rates of convergence in metric topologies. With the same methodology it is possible to address
questions in Bayesian hypothesis testing and model selection: if a Bayesian test to distinguish
between two hypotheses exists and remote contiguity applies, frequentist consistency of the
Bayes Factor can be guaranteed. We conclude with a theorem that uses remote contiguity to
describe a general relation that exists between credible sets and confidence sets, provided the
prior induces remotely-contiguous local prior predictive distributions.
4.1
Consistent posteriors
First, we consider posterior consistency generalising Schwartz’s theorem to sequentially observed (non-i.i.d.) data, non-dominated models and priors or parameter spaces that may
depend on the sample size. For an early but very complete overview of literature and developments in posterior consistency, see [33].
Definition 4.1 The posteriors Π( · |X n ) are consistent at θ ∈ Θ if for every neighbourhood U
of θ,
Π(U|X^n) → 1 in Pθ,n-probability.    (13)
The posteriors are said to be consistent if this holds for all θ ∈ Θ. We say that the posterior
is almost-surely consistent if convergence occurs almost-surely with respect to some coupling
for the sequence (Pθ0 ,n ).
Equivalently, posterior consistency can be characterized in terms of posterior expectations of
bounded and continuous functions (see proposition B.5).
Theorem 4.2 Assume that for all n ≥ 1, the data X n ∼ Pθ0 ,n for some θ0 ∈ Θ. Fix a prior
Π : G → [0, 1] and assume that for given B, V ∈ G with Π(B) > 0 and an ↓ 0,
(i) there exist Bayesian tests φn for B versus V such that,

    ∫_B P_{θ,n} φ_n dΠ(θ) + ∫_V P_{θ,n} (1 − φ_n) dΠ(θ) = o(a_n),    (14)

(ii) the sequence P_{θ0,n} satisfies P_{θ0,n} ◁ a_n^{-1} P_n^{Π|B}.

Then Π(V|X^n) → 0 in P_{θ0,n}-probability.
These conditions are to be interpreted as follows: theorem 2.4 lends condition (i) a distinctly
Bayesian interpretation: it requires a Bayesian test to set V apart from B with testing power
a_n. Lemma 2.2 translates this into the (still Bayesian) statement that the posteriors for V
go to zero in P_n^{Π|B}-expectation. Condition (ii) is there to promote this Bayesian point to a
frequentist one through (9). To present this from another perspective: condition (ii) ensures
that the P_n^{Π|B} cannot be tested versus P_{θ0,n} at power a_n, so the posterior for V goes to zero in
P_{θ0,n}-expectation as well (otherwise the sequence φ_n(X^n) ∝ Π(V|X^n) would constitute such a
test).
To illustrate theorem 4.2 and its conditions Freedman’s counterexamples are considered in
detail in example B.4.
A proof of a theorem very close to Schwartz's theorem is now possible. Consider condition
(i) of theorem 1.2: a well-known argument based on Hoeffding's inequality guarantees the
existence of a uniform test sequence of exponential power whenever a uniform test sequence
exists, so Schwartz equivalently assumes that there exists a D > 0 such that,

    P_0^n φ_n + sup_{Q ∈ P\U} Q^n (1 − φ_n) = o(e^{−nD}).
We vary slightly and assume the existence of a Bayesian test sequence of exponential power.
In the following theorem, let P denote a Hausdorff space of single-observation distributions
on (X , B) with Borel prior Π.
Corollary 4.3 For all n ≥ 1, let (X1 , X2 , . . . , Xn ) ∼ P0n for some P0 ∈ P. Let U denote an
open neighbourhood of P0 and define K(ε) = {P ∈ P : −P0 log(dP/dP0) < ε²}. If,

(i) there exist ε > 0, D > 0 and a sequence of measurable ψn : X^n → [0, 1], such that,

    ∫_{K(ε)} P^n ψ_n dΠ(P) + ∫_{P\U} Q^n (1 − ψ_n) dΠ(Q) = o(e^{−nD}),

(ii) and Π(K(ε)) > 0 for all ε > 0,

then Π(U|X^n) → 1, P_0-almost-surely.
An instance of the application of corollary 4.3 is given in example B.10. Example B.23 demonstrates posterior consistency in total variation for i.i.d. data from a finite sample space, for
priors of full support. Extending this, example B.24 concerns consistency of posteriors for
priors that have Freedman’s tailfreeness property [30], like the Dirichlet process prior. Also
interesting in this respect is the Neyman-Scott paradox, a classic example of inconsistency for
the ML estimator, discussed in Bayesian context in [5]: whether the posterior is (in)consistent
depends on the prior. The Jeffreys prior follows the ML estimate while the reference prior
avoids the Neyman-Scott inconsistency. Another question in a sequence model arises when we
analyse FDR-like posterior consistency for a sequence vector that is assumed to be sparse (see
example B.17).
4.2  Rates of posterior concentration
A significant extension to the theory on posterior convergence is formed by results concerning
posterior convergence in metric spaces at a rate. Minimax rates of convergence for (estimators
based on) posterior distributions were considered more or less simultaneously in Ghosal-Ghosh-van der Vaart [34] and Shen-Wasserman [63]. Both propose an extension of Schwartz's theorem
to posterior rates of convergence [34, 63] and apply Barron's sieve idea with a well-known
entropy argument [7, 8] to a shrinking sequence of Hellinger neighbourhoods and employ a
more specific, rate-related version of the Kullback-Leibler condition (2) for the prior. Both
appear to be inspired by contemporary results regarding Hellinger rates of convergence for sieve
MLE's, as well as by Barron-Schervish-Wasserman [4], which concerns posterior consistency
based on controlled bracketing entropy for a sieve, up to subsets of negligible prior mass,
following ideas that were first laid down in [3]. It is remarked already in [4] that their main
theorem is easily re-formulated as a rate-of-convergence theorem, with reference to [63]. More
recently, Walker, Lijoi and Prünster [72] have added to these considerations with a theorem
for Hellinger rates of posterior concentration in models that are separable for the Hellinger
metric, with a central condition that calls for summability of square-roots of prior masses of
covers of the model by Hellinger balls, based on analogous consistency results in Walker [70].
More recent is [46], which shows that alternatives for the priors of [34, 63] exist.
Theorem 4.4 Assume that for all n ≥ 1, the data X n ∼ Pθ0 ,n for some θ0 ∈ Θ. Fix priors
Πn : G → [0, 1] and assume that for given Bn , Vn ∈ G with Πn (Bn ) > 0 and an , bn ↓ 0 such
that an = o(bn ),
(i) there are Bayesian tests φn : Xn → [0, 1] such that,

    ∫_{Bn} P_{θ,n} φ_n dΠn(θ) + ∫_{Vn} P_{θ,n} (1 − φ_n) dΠn(θ) = o(a_n),    (15)

(ii) the prior mass of Bn is lower-bounded, Πn(Bn) ≥ b_n,

(iii) the sequence P_{θ0,n} satisfies P_{θ0,n} ◁ b_n a_n^{-1} P_n^{Πn|Bn}.

Then Π(Vn|X^n) → 0 in P_{θ0,n}-probability.
Example 4.5 To apply theorem 4.4, consider again the situation of a uniform distribution
with an unknown location, as in examples 1.3 and 3.5. Taking Vn equal to {θ : θ − θ0 > ε_n} or
{θ : θ0 − θ > ε_n} respectively, with ε_n = M_n/n for some M_n → ∞, suitable test sequences
are constructed in example B.15; in combination with example 3.5, these lead to the conclusion
that with a prior Π for θ that has a continuous and strictly positive Lebesgue density, the
posterior is consistent at (any ε_n slower than) rate 1/n.
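The statement of example 4.5 is easy to visualise numerically. The following sketch is not part of the paper: it assumes θ0 = 0.3 and a standard normal prior (any continuous, strictly positive Lebesgue density would do), computes the posterior on a grid and reports the posterior mass outside an M_n/n-neighbourhood of θ0 for growing n.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 0.3                                   # illustrative choice of the true location

def posterior_mass_outside(n, eps, prior_pdf, grid):
    x = rng.uniform(theta0, theta0 + 1.0, size=n)
    lo, hi = x.max() - 1.0, x.min()            # likelihood equals 1 on [X_(n)-1, X_(1)], 0 elsewhere
    dens = np.where((grid >= lo) & (grid <= hi), prior_pdf(grid), 0.0)
    dens /= np.trapz(dens, grid)               # posterior density on the grid
    outside = np.abs(grid - theta0) > eps
    return np.trapz(np.where(outside, dens, 0.0), grid)

grid = np.linspace(-2.0, 3.0, 200001)
prior = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)    # N(0,1) prior density

for n in [50, 200, 1000, 5000]:
    Mn = np.log(n)                             # any M_n -> infinity
    print(n, posterior_mass_outside(n, Mn / n, prior, grid))      # posterior mass of V_n, tends to 0
```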
Example 4.6 Let us briefly review the conditions of [4, 34, 63] in light of theorem 4.4: let
ε_n ↓ 0 denote the Hellinger rate of convergence we have in mind, let M > 1 be some constant
and define,

    Vn = {P ∈ P : H(P, P0) ≥ Mε_n},
    Bn = {P ∈ P : −P0 log dP/dP0 < ε_n², P0 log² dP/dP0 < ε_n²}.

Theorems for posterior convergence at a rate propose a sieve of submodels satisfying entropy
conditions like those of [7, 8, 51] and a negligibility condition for prior mass outside the sieve
[3], based on the minimax Hellinger rate of convergence ε_n ↓ 0. Together, they guarantee the
existence of Bayesian tests for Hellinger balls of radius ε_n versus complements of Hellinger
balls of radius Mε_n of power exp(−DM²nε_n²) for some D > 0 (see example B.8). Note that
Bn is contained in the Hellinger ball of radius ε_n around P0, so (15) holds. New in [34, 63] is
the condition for the priors Πn,

    Πn(Bn) ≥ e^{−Cnε_n²},    (16)

for some C > 0. With the help of lemmas B.16 and 3.3-(ii), we conclude that,

    P_0^n ◁ e^{cnε_n²} P_n^{Π|Bn},    (17)

for any c > 1. If we choose M such that DM² − C > 1, theorem 4.4 proves that Π(Vn|X^n) → 0
in P_0^n-probability, i.e. the posterior is Hellinger consistent at rate ε_n.
Certain (simple, parametric) models do not allow the definition of priors that satisfy (16), and
alternative less restrictive choices for the sets Bn are possible under mild conditions on the
model [46].
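To make the rate bookkeeping in example 4.6 explicit (a brief check in the notation of theorem 4.4, not an excerpt from the paper): the Bayesian tests have power a_n = exp(−DM²nε_n²) and the prior-mass bound reads Πn(Bn) ≥ b_n = exp(−Cnε_n²), so that

    b_n a_n^{-1} = exp( (DM² − C) nε_n² ).

Condition (iii) of theorem 4.4 asks for P_0^n ◁ b_n a_n^{-1} P_n^{Π|Bn}; since remote contiguity at a given rate implies it at every slower rate, (17) yields this condition as soon as exp(cnε_n²) ≤ exp((DM² − C)nε_n²) for some c > 1, i.e. as soon as DM² − C > 1, which also ensures a_n = o(b_n).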
4.3  Consistent hypothesis testing with Bayes factors
The Neyman-Pearson paradigm notwithstanding, hypothesis testing and classification concern
the same fundamental statistical question: to find a procedure that chooses, from a given
partition of the parameter space, the subset most likely to contain the parameter value of
the distribution that generated the observed data. Asymptotically one wonders whether
choices following such a procedure focus on the correct subset with probability growing to one.
From a somewhat shifted perspective, we argue as follows: no statistician can be certain of
the validity of specifics in his model choice and therefore always runs the risk of biasing his
analysis from the outset. Non-parametric approaches alleviate his concern but imply greater
uncertainty within the model, leaving the statistician with the desire to select the correct
(sub)model on the basis of the data before embarking upon the statistical analysis proper (for
a recent overview, see [69]). The issue also makes an appearance in asymptotic context, where
over-parametrized models leave room for inconsistency of estimators, requiring regularization
[9, 10, 12].
Model selection describes all statistical methods that attempt to determine from the data which
model to use. (Take for example sparse variable selection, where one projects out the majority
of covariates prior to actual estimation, and the model-selection question is which projection
is optimal.) Methods for model selection range from simple rules-of-thumb, to cross-validation
and penalization of the likelihood function. Here we propose to conduct the frequentist analysis
with the help of a posterior: when faced with a (dichotomous) model choice, we let the so-called
Bayes factor formulate our preference. For an analysis of hypothesis testing that compares
Bayesian and frequentist views, see [5]. An objective Bayesian perspective on model selection
is provided in [73].
Definition 4.7 For all n ≥ 1, let the model be parametrized by maps θ 7→ Pθ,n on a parameter
space (Θ, G ) with priors Πn : G → [0, 1]. Consider disjoint, measurable B, V ⊂ Θ. For given
n ≥ 1, we say that the Bayes factor for testing B versus V ,
    F_n = ( Π(B|X^n) / Π(V|X^n) ) · ( Πn(V) / Πn(B) ),

is consistent for testing B versus V, if for all θ ∈ V, F_n → 0 in P_{θ,n}-probability, and for all θ ∈ B, F_n^{-1} → 0 in P_{θ,n}-probability.
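For a concrete feel for definition 4.7, the following sketch (not from the paper; the normal location model, the N(0,1) prior and the hypotheses B = {θ ≤ 0}, V = {θ > 0} are illustrative choices) computes the Bayes factor in a conjugate setting and illustrates its consistency numerically.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def bayes_factor(x):
    # X_i ~ N(theta, 1) i.i.d., prior theta ~ N(0,1): posterior is N(n*mean/(n+1), 1/(n+1))
    n = len(x)
    post_mean, post_sd = n * x.mean() / (n + 1), 1.0 / np.sqrt(n + 1)
    pB, pV = norm.cdf(0.0, post_mean, post_sd), norm.sf(0.0, post_mean, post_sd)
    priorB = priorV = 0.5                      # Pi(B) = Pi(V) = 1/2 for B = {theta <= 0}, V = {theta > 0}
    return (pB / pV) * (priorV / priorB)       # F_n as in definition 4.7

theta_true = 0.3                               # theta lies in V, so F_n should tend to 0
for n in [10, 100, 1000]:
    print(n, bayes_factor(rng.normal(theta_true, 1.0, n)))
```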
Let us first consider this from a purely Bayesian perspective: for fixed prior Π and i.i.d. data,
theorem 2.4 says that the posterior gives rise to consistent Bayes factors for B versus V in a
Bayesian (that is, Π-almost-sure) way, iff a Bayesian test sequence for B versus V exists. If
the parameter space Θ is Polish and the maps θ 7→ Pθ (A) are Borel measurable for all A ∈ B,
proposition 2.5 says that any Borel set V is Bayesian testable versus Θ \ V , so in Polish models
for i.i.d. data, model selection with Bayes factors is Π-almost-surely consistent for all Borel
measurable V ⊂ Θ.
The frequentist requires strictly more, however, so we employ remote contiguity again to bridge
the gap with the Bayesian formulation.
Theorem 4.8 For all n ≥ 1, let the model be parametrized by maps θ ↦ P_{θ,n} on a parameter
space (Θ, G) with priors Πn : G → [0, 1]. Consider disjoint, measurable B, V ⊂ Θ with
Πn(B), Πn(V) > 0 such that,

(i) there exist Bayesian tests for B versus V of power a_n ↓ 0,

    ∫_B P_n φ_n dΠn(P) + ∫_V Q_n (1 − φ_n) dΠn(Q) = o(a_n),

(ii) for every θ ∈ B, P_{θ,n} ◁ a_n^{-1} P_n^{Πn|B}, and for every θ ∈ V, P_{θ,n} ◁ a_n^{-1} P_n^{Πn|V}.

Then the Bayes factor for B versus V is consistent.
Note that the second condition of theorem 4.8 can be replaced by a local condition: if, for every
θ ∈ B, there exists a sequence Bn(θ) ⊂ B such that Πn(Bn(θ)) ≥ b_n and P_{θ,n} ◁ a_n^{-1} b_n P_n^{Πn|Bn(θ)},
then P_{θ,n} ◁ a_n^{-1} P_n^{Πn|B} (as a consequence of lemma 3.7 with Cn = B).
In example B.19, we use theorem 4.8 to prove the consistency of the Bayes factor for a goodness-of-fit test for the equilibrium distribution of a stationary ergodic Markov chain, based on large-length random-walk data, with prior and posterior defined on the space of Markov transition
matrices.
4.4  Confidence sets from credible sets
The Bernstein-von Mises theorem [53] asserts that the posterior for a smooth, finite-dimensional
parameter converges in total variation to a normal distribution centred on an efficient estimate with the inverse Fisher information as its covariance, if the prior has full support. The
methodological implication is that Bayesian credible sets derived from such a posterior can
be reinterpreted as asymptotically efficient confidence sets. This parametric fact begs for
the exploration of possible non-parametric extensions but Freedman discourages us [32] with
counterexamples (see also [17]) and concludes that: “The sad lesson for inference is this. If frequentist coverage probabilities are wanted in an infinite-dimensional problem, then frequentist
coverage probabilities must be computed.”
In recent years, much effort has gone into calculations that address the question whether non-parametric credible sets can play the role of confidence sets nonetheless. The focus lies on
well-controlled examples in which both model and prior are Gaussian, so that the posterior
is conjugate and its expectation and variance can be analysed to determine whether credible
metric balls have asymptotic frequentist coverage (for examples, see Szabó, van der Vaart and
van Zanten [68] and references therein). Below, we change the question slightly and do not seek
to justify the use of credible sets as confidence sets; from the present perspective it appears
more natural to ask in which particular fashion a credible set is to be transformed in order to
guarantee the transform is a confidence set, at least in the large-sample limit.
In previous subsections, we have applied remote contiguity after the concentration inequality
to control the P_{θ0,n}-expectation of the posterior probability for the alternative V through its
P_n^{Π|Bn}-expectation. In the discussion of the coverage of credible sets that follows, remote
contiguity is applied to control the P_{θ0,n}-probability that θ0 falls outside the prospective confidence
set through its P_n^{Π|Bn}-probability. The theorem below then follows from an application of
Bayes's rule (A.22). Credible levels provide the sequence a_n.
Definition 4.9 Let (Θ, G) with prior Π be given and denote the sequence of posteriors by Π( · | · ) : G × Xn →
[0, 1]. Let D denote a collection of measurable subsets of Θ. A sequence of credible sets (Dn) of
credible levels 1 − a_n (where 0 ≤ a_n ≤ 1, a_n ↓ 0) is a sequence of set-valued maps Dn : Xn → D
such that Π(Θ \ Dn(x)|x) = o(a_n) for P_n^{Πn}-almost-all x ∈ Xn.
Definition 4.10 For 0 ≤ a ≤ 1, a set-valued map x 7→ C(x) defined on X such that, for all
θ ∈ Θ, P_θ(θ ∉ C(X)) ≤ a, is called a confidence set of level 1 − a. If the levels 1 − a_n of a
sequence of confidence sets Cn (X n ) go to 1 as n → ∞, the Cn (X n ) are said to be asymptotically
consistent.
Definition 4.11 Let D be a (credible) set in Θ and let B = {B(θ) : θ ∈ Θ} denote a collection
of model subsets such that θ ∈ B(θ) for all θ ∈ Θ. A model subset C′ is said to be (a confidence
set) associated with D under B, if for all θ ∈ Θ \ C′, B(θ) ∩ D = ∅. The intersection C of
all C′ like above equals {θ ∈ Θ : B(θ) ∩ D ≠ ∅} and is called the minimal (confidence) set
associated with D under B (see Fig 1).
Example B.25 makes this construction explicit in uniform spaces and specializes to metric
context.
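In the simplest metric setting, the minimal associated confidence set of definition 4.11 is just the enlargement of the credible set by the radius of the balls B(θ). The following sketch (not from the paper; the grid, the toy credible interval and the radius are illustrative choices) constructs C = {θ : B(θ) ∩ D ≠ ∅} for a one-dimensional parameter on a grid.

```python
import numpy as np

def associated_confidence_set(grid, credible_mask, eps):
    """Minimal confidence set C = {theta : B(theta) meets D}, with B(theta) the eps-ball."""
    d_points = grid[credible_mask]
    # theta belongs to C iff some point of D lies within distance eps of theta
    dist_to_D = np.min(np.abs(grid[:, None] - d_points[None, :]), axis=1)
    return dist_to_D <= eps

grid = np.linspace(-3.0, 3.0, 601)
credible_mask = np.abs(grid - 0.4) <= 0.25        # a toy credible interval D = [0.15, 0.65]
C = associated_confidence_set(grid, credible_mask, eps=0.1)
print(grid[C].min(), grid[C].max())               # roughly [0.05, 0.75]: D enlarged by eps
```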
Theorem 4.12 Let θ0 ∈ Θ and 0 ≤ an ≤ 1, bn > 0 such that an = o(bn ) be given. Choose
priors Πn and let Dn denote level-(1 − an ) credible sets. Furthermore, for all θ ∈ Θ, let
Bn = {Bn (θ) ∈ G : θ ∈ Θ} denote a sequence such that,
(i) Πn(Bn(θ0)) ≥ b_n,

(ii) P_{θ0,n} ◁ b_n a_n^{-1} P_n^{Πn|Bn(θ0)}.

Then any confidence sets Cn associated with the credible sets Dn under Bn are asymptotically
consistent, i.e. for all θ0 ∈ Θ,

    P_{θ0,n}( θ0 ∈ Cn(X^n) ) → 1.    (18)
This refutes Freedman's lesson, showing that the asymptotic identification of credible sets and
confidence sets in smooth parametric models (the main inferential implication of the Bernstein-von Mises theorem) generalises to the above form of asymptotic congruence in non-parametric
models.

[Figure 1: The relation between a credible set D and its associated (minimal) confidence set C under B in Venn diagrams: the extra points θ in the associated confidence set C not included in the credible set D are characterized by non-empty intersection B(θ) ∩ D ≠ ∅.]

The fact that this statement holds in full generality implies very practical ways
to obtain confidence sets from posteriors, calculated, simulated or approximated. A second
remark concerns the confidence levels of associated confidence sets. In order for the assertion
of theorem 4.12 to be specific regarding the confidence level (rather than just resulting in
asymptotic coverage), we re-write the last condition of theorem 4.12 as follows,
(ii') c_n^{-1} P_{θ0,n} ◁ b_n a_n^{-1} P_n^{Πn|Bn(θ0)},

so that the last step in the proof of theorem 4.12 is more specific; particularly, assertion (18)
becomes,

    P_{θ0,n}( θ0 ∉ Cn(X^n) ) = o(c_n),

i.e. the confidence level of the sets Cn(X^n) is 1 − Kc_n asymptotically (for some constant K > 0
and large enough n).
The following corollary, which specializes to the i.i.d. situation, is immediate (see example B.26).
Let P denote a model of single-observation distributions, endowed with the Hellinger or total-variational topology.
Corollary 4.13 For n ≥ 1 assume that (X1, X2, . . . , Xn) ∈ X^n ∼ P_0^n for some P0 ∈ P.
Let Πn denote Borel priors on P, with constant C > 0 and rate sequence ε_n ↓ 0 such that
(16) is satisfied. Denote by Dn credible sets of level 1 − exp(−C′nε_n²), for some C′ > C.
Then the confidence sets Cn associated with Dn under radius-ε_n Hellinger-enlargement are
asymptotically consistent.
Note that in the above corollary,
    diam_H(Cn(X^n)) = diam_H(Dn(X^n)) + 2ε_n,

P_0^n-almost surely. If, in addition to the conditions in the above corollary, tests satisfying
(15) with a_n = exp(−C′nε_n²) exist, the posterior is consistent at rate ε_n and the sets Dn(X^n)
have diameters decreasing as ε_n, c.f. theorem 4.4. In the case ε_n is the minimax rate of
convergence for the problem, the confidence sets Cn(X^n) attain rate-optimality [55]. Rate-adaptivity [42, 13, 68] is not possible like this because a definite, non-data-dependent choice
for the Bn is required.
5  Conclusions
We list and discuss the main conclusions of this paper below.
Frequentist validity of Bayesian limits
There exists a systematic way of taking Bayesian limits into frequentist ones, if priors
satisfy an extra condition relating true data distributions to localized prior predictive
distributions. This extra condition generalises Schwartz’s Kullback-Leibler condition and
amounts to a weakened form of contiguity, termed remote contiguity.
For example regarding consistency with i.i.d. data, Doob shows that a Bayesian form of posterior consistency holds without any real conditions on the model. To the frequentist, ‘holes’ of
potential inconsistency remain, in null-sets of the prior. Remote contiguity ‘fills the holes’ and
elevates the Bayesian form of consistency to the frequentist one. Similarly, prior-almost-surely
consistent tests are promoted to frequentist consistent tests and Bayesian credible sets are
converted to frequentist confidence sets.
The nature of Bayesian test sequences
The existence of a Bayesian test sequence is equivalent to consistent posterior convergence
in the Bayesian, prior-almost-sure sense. In theorems above, a Bayesian test sequence
thus represents the Bayesian limit for which we seek frequentist validity through remote
contiguity. Bayesian test sequences are more abundant than the more familiar uniform
test sequences. Aside from prior mass requirements arising from remote contiguity, the
prior should assign little weight where testing power is weak and much where testing
power is strong, ideally.
Example B.19 illustrates the influence of the prior when constructing a test sequence. Aside
from the familiar lower bounds for prior mass that arise from remote contiguity, existence of
Bayesian tests also poses upper bounds for prior mass.
Systematic analysis of complex models and datasets
Although many examples have been studied on a case-by-case basis in the literature,
the systematic analysis of limiting properties of posteriors in cases where the data is
dependent, or where the model, the parameter space and/or the prior are sample-size
dependent, requires generalisation of Schwartz’s theorem and its variations, which the
formalism presented here provides.
To elaborate, given the growing interest in the analysis of dependent datasets gathered from
networks (e.g. by webcrawlers that random walk linked webpages), or from time-series/stochastic
processes (e.g. financial data of the high-frequency type), or in the form of high-dimensional
or even functional data (biological, financial, medical and meteorological fields provide many
examples), the development of new Bayesian methods involving such aspects benefits from a
simple, insightful, systematic perspective to guide the search for suitable priors in concrete
examples.
To illustrate the last point, let us consider consistent community detection in stochastic block
models [64, 6]. Bayesian methods have been developed for consistent selection of the number
of communities [41], for community detection with a controlled error-rate with a growing
number of communities [16] and for consistent community detection using empirical priors
[67]. A moment’s thought on the discrete nature of the community assignment vector suggests
a sequence of uniform priors, for which remote contiguity (of Bn = {P0,n }) is guaranteed
(at any rate) and prior mass lower bounded by b_n = K_n! K_n^{-n} (where K_n is the number of
communities at ‘sample size’ n). It would be interesting to see under which conditions a
Bayesian test sequence of power an = o(bn ) can be devised that tests the true assignment
vector versus all alternatives (in the sparse regime [18, 1, 58]). Rather than apply a Chernoff
bound like in [16], one would probably have to start from the probabilistic [58] or information-theoretic [1] analyses of respective algorithmic solutions in the (very closely related) planted bisection model. If a suitably powerful test can be shown to exist, theorem 4.4 proves frequentist
consistency of the posterior.
Methodology for uncertainty quantification
Use of a prior that induces remote contiguity allows one to convert credible sets of a calculated, simulated or approximated posterior into asymptotically consistent confidence
sets, in full generality. This extends the main inferential implication of the Bernstein-von Mises theorem to non-parametric models without smoothness conditions.
The latter conclusion forms the most important and practically useful aspect of this paper.
A  Definitions and conventions
Because we take the perspective of a frequentist using Bayesian methods, we are obliged to
demonstrate that Bayesian definitions continue to make sense under the assumption that the
data X is distributed according to a true, underlying P0 .
Remark A.1 We assume given for every n ≥ 1, a measurable (sample) space (Xn , Bn ) and
random sample X n ∈ Xn , with a model Pn of probability distributions Pn : Bn → [0, 1].
It is also assumed that there exists an n-independent parameter space Θ with a Hausdorff,
completely regular topology T and associated Borel σ-algebra G , and, for every n ≥ 1, a
bijective model parametrization Θ → Pn : θ 7→ Pθ,n such that for every n ≥ 1 and every
A ∈ Bn , the map Θ → [0, 1] : θ 7→ Pθ,n (A) is measurable. Any prior Π on Θ is assumed to be
a Borel probability measure Π : G → [0, 1] and can vary with the sample-size n. (Note: in i.i.d.
setting, the parameter space Θ is P1 , θ is the single-observation distribution P and θ 7→ Pθ,n
is P ↦ P^n.) As frequentists, we assume that there exists a ‘true, underlying’ distribution for
the data; in this case, that means that for every n ≥ 1, there exists a distribution P0,n from
which the n-th sample X n is drawn.
Often one assumes that the model is well-specified: that there exists a θ0 ∈ Θ such that
P0,n = P_{θ0,n} for all n ≥ 1. We think of Θ as a topological space because we want to discuss
estimation as a procedure of sequential, stochastic approximation of and convergence to such
a ‘true’ parameter value θ0. In theorem 2.4 and definition 4.1 we assume, in addition, that
the observations X n are coupled, i.e. there exists a probability space (Ω, F , P0 ) and random
variables X^n : Ω → Xn such that P0((X^n)^{-1}(A)) = P0,n(A) for all n ≥ 1 and A ∈ Bn.
Definition A.2 Given n, m ≥ 1 and a prior probability measure Πn : G → [0, 1], define the
n-th prior predictive distribution on Xm as follows:
    P_m^{Πn}(A) = ∫_Θ P_{θ,m}(A) dΠn(θ),    (A.19)

for all A ∈ Bm. If the prior is replaced by the posterior, the above defines the n-th posterior
predictive distribution on Xm,

    P_m^{Πn|X^n}(A) = ∫_Θ P_{θ,m}(A) dΠ(θ|X^n),    (A.20)

for all A ∈ Bm. For any Bn ∈ G with Πn(Bn) > 0, define also the n-th local prior predictive
distribution on Xm,

    P_m^{Πn|Bn}(A) = (1/Πn(Bn)) ∫_{Bn} P_{θ,m}(A) dΠn(θ),    (A.21)
as the predictive distribution on Xm that results from the prior Πn when conditioned on Bn .
If m is not mentioned explicitly, it is assumed equal to n.
The prior predictive distribution P_n^{Πn} is the marginal distribution for X^n in the Bayesian
perspective that considers parameter and sample jointly (θ, X n ) ∈ Θ × Xn as the random
quantity of interest.
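For a simple numerical illustration of (A.19) and (A.21), consider i.i.d. Bernoulli(θ) data with a Beta(a, b) prior; the sketch below (not from the paper, with hyperparameters and the localisation set B = {θ : |θ − θ0| < r} chosen for illustration) evaluates the prior predictive and the local prior predictive probability of an observed sample on a grid.

```python
import numpy as np
from scipy.stats import beta

a, b, theta0, r = 2.0, 2.0, 0.6, 0.1           # illustrative hyperparameters and localisation

def prior_predictive(x, grid):
    # P_n^{Pi}(x^n) = integral of prod_i theta^{x_i}(1-theta)^{1-x_i} against the prior
    lik = grid ** x.sum() * (1 - grid) ** (len(x) - x.sum())
    w = beta.pdf(grid, a, b)
    return np.trapz(lik * w, grid)

def local_prior_predictive(x, grid):
    # same integral, restricted to B = {|theta - theta0| < r} and renormalised by Pi(B), cf. (A.21)
    mask = np.abs(grid - theta0) < r
    lik = grid ** x.sum() * (1 - grid) ** (len(x) - x.sum())
    w = beta.pdf(grid, a, b) * mask
    return np.trapz(lik * w, grid) / np.trapz(beta.pdf(grid, a, b) * mask, grid)

grid = np.linspace(0.0, 1.0, 10001)
x = (np.random.default_rng(0).uniform(size=20) < theta0).astype(int)
print(prior_predictive(x, grid), local_prior_predictive(x, grid))
```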
Definition A.3 Given n ≥ 1, a (version of) the posterior is any map Π( · |X n = · ) : G ×
Xn → [0, 1] such that,
(i) for B ∈ G , the map Xn → [0, 1] : xn 7→ Π(B|X n = xn ) is Bn -measurable,
(ii) for all A ∈ Bn and V ∈ G,

    ∫_A Π(V|X^n) dP_n^{Πn} = ∫_V P_{θ,n}(A) dΠn(θ).    (A.22)
Bayes’s Rule is expressed through equality (A.22) and is sometimes referred to as a ‘disintegration’ (of the joint distribution of (θ, X n )). If the posterior is a Markov kernel, it is a
PnΠn -almost-surely well-defined probability measure on (Θ, G ). But it does not follow from the
definition above that a version of the posterior actually exists as a regular conditional probability measure. Under mild extra conditions, regularity of the posterior can be guaranteed: for
example, if sample space and parameter space are Polish, the posterior is regular; if the model
Pn is dominated (denote the density of Pθ,n by pθ,n ), the fraction of integrated likelihoods,
    Π(V|X^n) = ∫_V p_{θ,n}(X^n) dΠn(θ) / ∫_Θ p_{θ,n}(X^n) dΠn(θ),    (A.23)
for V ∈ G , n ≥ 1 defines a regular version of the posterior distribution. (Note also that there
is no room in definition (A.22) for X n -dependence of the prior, so ‘empirical Bayes’ methods
must be based on data Y n independent of X n , i.e. sample-splitting.)
Remark A.4 As a consequence of the frequentist assumption that X n ∼ P0,n for all n ≥ 1, the
PnΠn -almost-sure definition (A.22) of the posterior Π(V |X n ) does not make sense automatically
[29, 46]: null-sets of PnΠn on which the definition of Π( · |X n ) is ill-determined, may not be
null-sets of P0,n . To prevent this, we impose the domination condition,
    P_{0,n} ≪ P_n^{Πn},    (A.24)
for every n ≥ 1.
To understand the reason for (A.24) in a perhaps more familiar way, consider a dominated
model and assume that for certain n, (A.24) is not satisfied. Then, using (A.19), we find,
    P_{0,n}( ∫_Θ p_{θ,n}(X^n) dΠn(θ) = 0 ) > 0,
so the denominator in (A.23) evaluates to zero with non-zero P0,n -probability.
To get an idea of sufficient conditions for (A.24), it is noted in [46] that in the case of i.i.d.
data where P0,n = P_0^n for some marginal distribution P0, P_0^n ≪ P_n^{Π} for all n ≥ 1, if P0 lies in
the Hellinger- or Kullback-Leibler-support of the prior Π. For the generalisation to the present
setting we are more precise and weaken the topology appropriately.
Definition A.5 For all n ≥ 1, let Fn denote the class of all bounded, Bn -measurable f :
Xn → R. The topology Tn is the initial topology on Pn for the functions {P 7→ P f : f ∈ Fn }.
Finite intersections of sets U_{f,ε} = {(P, Q) ∈ Pn² : |(P − Q)f| < ε} (f ∈ Fn, 0 ≤ f ≤ 1 and
ε > 0), form a fundamental system of entourages for a uniformity Un on Pn. A fundamental
system of neighbourhoods for the associated topology Tn on P is formed by finite intersections
of sets of the form,

    W_{P,f,ε} = {Q ∈ Pn : |(P − Q)f| < ε},

with P ∈ Pn, f ∈ Fn, 0 ≤ f ≤ 1 and ε > 0.
If we model single-observation distributions P ∈ P for an i.i.d. sample, the topology Tn on
Pn = P n induces a topology on P (which we also denote by Tn ) for each n ≥ 1. The
union T∞ = ∪n Tn is an inverse-limit topology that allows formulation of conditions for the
existence of consistent estimates that are not only sufficient, but also necessary [47], offering
a precise perspective on what is estimable and what is not in i.i.d. context. The associated
strong topology is that generated by total variation (or, equivalently, the Hellinger metric).
For more on these topologies, the reader is referred to Strasser (1985) [65] and to Le Cam
(1986) [51]. We note explicitly the following fact, which is a direct consequence of Hoeffding’s
inequality.
Proposition A.6 (Uniform Tn -tests)
Consider a model P of single-observation distributions P for i.i.d. samples (X1 , X2 , . . . , Xn ) ∼
P^n, (n ≥ 1). Let m ≥ 1, ε > 0, P0 ∈ P and a measurable f : X^m → [0, 1] be given. Define
B = {P ∈ P : |(P^m − P_0^m)f| < ε} and V = {P ∈ P : |(P^m − P_0^m)f| ≥ 2ε}. There exists a
uniform test sequence (φn) such that,

    sup_{P∈B} P^n φ_n ≤ e^{−nD},    sup_{Q∈V} Q^n (1 − φ_n) ≤ e^{−nD},

for some D > 0.

Proof The proof is an application of Hoeffding's inequality for the sum Σ_{i=1}^n f(X_i) and is
left to the reader.
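To make proposition A.6 concrete in the simplest case m = 1, the following sketch (not from the paper; the threshold 3ε/2 and the specific choices of f, P0, P and Q are illustrative) implements the test φ_n = 1{|n^{-1}Σ_i f(X_i) − P_0 f| ≥ 3ε/2} and estimates its error probabilities by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
f = lambda x: (x > 0).astype(float)           # a measurable f with values in [0,1]
P0f = 0.5                                     # P_0 f under P_0 = N(0,1)

def phi(x):                                   # the test of proposition A.6 (case m = 1)
    return float(abs(f(x).mean() - P0f) >= 1.5 * eps)

def rejection_rate(n, shift, reps=2000):
    # shifted normals give examples of P in B and Q in V
    return np.mean([phi(rng.normal(shift, 1.0, n)) for _ in range(reps)])

for n in [50, 200, 800]:
    # type-I error under a P in B (shift chosen so |Pf - P0f| is about 0.5*eps < eps)
    # type-II error under a Q in V (shift chosen so |Qf - P0f| exceeds 2*eps)
    print(n, rejection_rate(n, 0.12), 1 - rejection_rate(n, 0.8))
```

Both error rates decay with n, in line with the exponential bounds that Hoeffding's inequality provides.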
The topologies Tn also play a role for condition (A.24).
Proposition A.7 Let (Πn) be Borel priors on the Hausdorff uniform spaces (Pn, Tn). For
any n ≥ 1, if P0,n lies in the Tn-support of Πn, then P0,n ≪ P_n^{Πn}.

Proof Let n ≥ 1 be given. For any A ∈ Bn and any U′ ⊂ Θ such that Πn(U′) > 0,

    P_{0,n}(A) ≤ ∫ P_{θ,n}(A) dΠn(θ|U′) + sup_{θ∈U′} |P_{θ,n}(A) − P_{0,n}(A)|.

Let A ∈ Bn be a null-set of P_n^{Πn}; since Πn(U′) > 0, ∫ P_{θ,n}(A) dΠn(θ|U′) = 0. For some ε > 0,
take U′ equal to the Tn-basis element {θ ∈ Θ : |P_{θ,n}(A) − P_{0,n}(A)| < ε} to conclude that
P_{0,n}(A) < ε for all ε > 0.
In many situations, priors are Borel for the Hellinger topology, so it is useful to observe that
the Hellinger support of Πn in Pn is always contained in the Tn -support.
Notation and conventions
l.h.s. and r.h.s. refer to left- and right-hand sides respectively. For given probability measures
P, Q on a measurable space (Ω, F ), we define the Radon-Nikodym derivative dP/dQ : Ω →
[0, ∞), P -almost-surely, referring only to the Q-dominated component of P , following [51].
We also define (dP/dQ)−1 : Ω → (0, ∞] : ω 7→ 1/(dP/dQ(ω)), Q-almost-surely. Given a
σ-finite measure µ that dominates both P and Q (e.g. µ = P + Q), denote dP/dµ = p and
dQ/dµ = q. Then the measurable map p/q 1{q > 0} : Ω → [0, ∞) is a µ-almost-everywhere
version of dP/dQ, and q/p 1{q > 0} : Ω → [0, ∞] of (dP/dQ)^{-1}. Define total-variational and
Hellinger distances by ||P − Q|| = sup_A |P(A) − Q(A)| and H(P,Q)² = ½ ∫ (p^{1/2} − q^{1/2})² dµ,
respectively. Given random variables Z_n ∼ P_n, weak convergence to a random variable Z is
denoted by Z_n → Z weakly under P_n, convergence in probability by Z_n → Z in P_n-probability,
and almost-sure convergence (with coupling P^∞) by Z_n → Z, P^∞-almost-surely. The integral of a real-valued, integrable random variable
X with respect to a probability measure P is denoted P X, while integrals over the model with
respect to priors and posteriors are always written out in Leibniz’s notation. For any subset
B of a topological space, B̄ denotes the closure, B̊ the interior and ∂B the boundary. Given
ε > 0 and a metric space (Θ, d), the covering number N(ε, Θ, d) ∈ N ∪ {∞} is the minimal
cardinal of a cover of Θ by d-balls of radius ε. Given real-valued random variables X1, . . . , Xn,
the first order statistic is X_(1) = min_{1≤i≤n} Xi. The Hellinger diameter of a model subset C is
denoted diam_H(C) and the Euclidean norm of a vector θ ∈ R^n is denoted ||θ||_{2,n}.
B  Applications and examples
In this section of the appendix examples and applications are collected.
B.1  Inconsistent posteriors
Calculations that demonstrate instances of posterior inconsistency are many (for (a far-from-exhaustive list of) examples, see [20, 21, 17, 22, 23, 31, 32]). In this subsection, we discuss
early examples of posterior inconsistency that illustrate the potential for problems clearly and
without distracting technicalities.
Example B.1 (Freedman (1963) [29])
Consider a sample X1 , X2 , . . . of random positive integers. Denote the space of all probability
distributions on N by Λ and assume that the sample is i.i.d.-P0 , for some P0 ∈ Λ. For any
P ∈ Λ, write p(i) = P ({X = i}) for all i ≥ 1. The total-variational and weak topologies on
Λ are equivalent (P → Q if and only if p(i) → q(i) for all i ≥ 1). Let Q ∈ Λ \ {P0} be given.
To arrive at a prior with P0 in its support, leading to a posterior that concentrates on Q, we
consider sequences (Pm) and (Qm) such that Qm → Q and Pm → P0 as m → ∞. The prior Π
places masses αm > 0 at Pm and βm > 0 at Qm (m ≥ 1), so that P0 lies in the support of Π.
A careful construction of the distributions Qm that involves P0 guarantees that the posterior
satisfies,

    Π({Q_m}|X^n) / Π({Q_{m+1}}|X^n) → 0,   P_0-almost-surely,
that is, posterior mass is shifted further out into the tail as n grows to infinity, forcing all
posterior mass that resides in {Qm : m ≥ 1} into arbitrarily small neighbourhoods of Q. In
a second step, the distributions Pm and prior weights αm are chosen such that the likelihood
at Pm grows large for high values of m and small for lower values as n increases, so that the
posterior mass in {Pm : m ≥ 1} also accumulates in the tail. However, the prior weights αm
may be chosen to decrease very fast with m, in such a way that,
    Π({P_m : m ≥ 1}|X^n) / Π({Q_m : m ≥ 1}|X^n) → 0,   P_0-almost-surely,
thus forcing all posterior mass into {Qm : m ≥ 1} as n grows. Combination of the previous
two displays leads to the conclusion that for every neighbourhood UQ of Q,
    Π(U_Q|X^n) → 1,   P_0-almost-surely,
so the posterior is inconsistent. Other choices of the weights αm that place more prior mass
in the tail do lead to consistent posterior distributions.
Some objected to Freedman’s counterexample, because knowledge of P0 is required to construct
the prior that causes inconsistency. So it was possible to argue that Freedman’s counterexample
amounted to nothing more than a demonstration that unfortunate circumstances could be
created, probably not a fact of great concern in any generic sense. To strengthen Freedman’s
point one would need to construct a prior of full support without explicit knowledge of P0 .
Example B.2 (Freedman (1965) [30])
In the setting of example B.1, denote the space of all distributions on Λ by π(Λ). Note that
since Λ is Polish, so is π(Λ) and so is the product Λ × π(Λ).
Theorem B.3 (Freedman (1965) [30])
Let X1, X2, . . . form a sample of i.i.d.-P0 random integers, let Λ denote the space of all
distributions on N and let π(Λ) denote the space of all Borel probability measures on Λ, both
in Prohorov's weak topology. The set of pairs (P0, Π) ∈ Λ × π(Λ) such that for all open U ⊂ Λ,

    limsup_{n→∞} P_0^n Π(U|X^n) = 1,

is residual.
And so, the set of pairs (P0 , Π) ∈ Λ × π(Λ) for which the limiting behaviour of the posterior
is acceptable to the frequentist, is meagre in Λ × π(Λ). The proof relies on the following
construction: for k ≥ 1, define Λk to be the subset of all probability distributions P on N such
that P (X = k) = 0. Also define Λ0 as the union of all Λk , (k ≥ 1). Pick Q ∈ Λ \ Λ0 . We
assume that P0 ∈ Λ \ Λ0 and P0 ≠ Q. Place a prior Π0 on Λ0 and choose Π = ½Π0 + ½δ_Q.
Because Λ0 is dense in Λ, priors of this type have full support in Λ. But P0 has full support in
N so for every k ∈ N, P0∞ (∃m≥1 : Xm = k) = 1: note that if we observe Xm = k, the likelihood
equals zero on Λk so that Π(Λk |X n ) = 0 for all n ≥ m, P0∞ -almost-surely. Freedman shows
this eliminates all of Λ0 asymptotically, if Π0 is chosen in a suitable way, forcing all posterior
mass onto the point {Q}. (See also, Le Cam (1986) [51], section 17.7).
The question remains how Freedman’s inconsistent posteriors relate to the work presented
here. Since test sequences of exponential power exist to separate complements of weak neighbourhoods, c.f. proposition A.6, Freedman’s inconsistencies must violate the requirement of
remote contiguity in theorem 4.2.
Example B.4 As noted already, Λ is a Polish space; in particular Λ is metric and second
countable, so the subspace Λ \ Λ0 contains a countable dense subset D. For Q ∈ D, let V be
the set of all prior probability measures on Λ with finite support, of which one point is Q and
the remaining points lie in Λ0 . The proof of the theorem in [30] that asserts that the set of
consistent pairs (P0 , Π) is of the first category in Λ × π(Λ) departs from the observation that
if P0 lies in Λ \ Λ0 and we use a prior from V , then,
    Π({Q}|X^n) → 1,   P_0-almost-surely,
(in fact, as is shown below, with P0∞ -probability one there exists an N ≥ 1 such that
Π({Q}|X n ) = 1 for all n ≥ N ). The proof continues to assert that V lies dense in π(Λ),
and, through sequences of continuous extensions involving D, that posterior inconsistency for
elements of V implies posterior inconsistency for all Π in π(Λ) with the possible exception of
a set of the first category.
From the present perspective it is interesting to view the inconsistency of elements of V in
light of the conditions of theorem 4.2. Define, for some bounded f : N → R and ε > 0, two
subsets of Λ,

    B = {P : |Pf − P0f| < ½ε},    V = {P : |Pf − P0f| ≥ ε}.
Proposition A.6 asserts the existence of a uniform test sequence for B versus V of exponential
power. With regard to remote contiguity, for an element Π of V with support of order M + 1,
write,

    Π = β δ_Q + Σ_{m=1}^{M} α_m δ_{P_m},

where β + Σ_m α_m = 1 and P_m ∈ Λ0 (1 ≤ m ≤ M). Without loss of generality, assume that
ε and f are such that Q does not lie in B. Consider,

    dP_n^{Π|B}/dP_0^n(X^n) = (1/Π(B)) ∫_B dP^n/dP_0^n(X^n) dΠ(P) ≤ (1/Π(B)) Σ_{m=1}^{M} α_m dP_m^n/dP_0^n(X^n).
For every 1 ≤ m ≤ M, there exists a k(m) such that P_m(X = k(m)) = 0, and the probability
of the event E_n that none of the X1, . . . , Xn equal k(m) is (1 − P0(X = k(m)))^n. Note that
E_n is also the event that dP_m^n/dP_0^n(X^n) > 0.

Hence for every 1 ≤ m ≤ M and all X in an event of P_0^∞-probability one, there exists an
N_m ≥ 1 such that dP_m^n/dP_0^n(X^n) = 0 for all n ≥ N_m. Consequently, for all X in an event
of P_0^∞-probability one, there exists an N ≥ 1 such that dP_n^{Π|B}/dP_0^n(X^n) = 0 for all n ≥ N.
Therefore, condition (ii) of lemma 3.3 is not satisfied for any sequence a_n ↓ 0. A direct proof
that (9) does not hold for any a_n is also possible: given the prior Π ∈ V, define,

    φ_n(X^n) = Π_{m=1}^{M} 1{∃ 1 ≤ i ≤ n : X_i = k(m)}.

Then the expectation of φ_n with respect to the local prior predictive distribution equals zero,
so P_n^{Π|B} φ_n = o(a_n) for any a_n ↓ 0. However, P_0^n φ_n(X^n) → 1, so the prior Π does not give rise
to a sequence of prior predictive distributions (P_n^{Π|B}) with respect to which (P_0^n) is remotely
contiguous, for any a_n ↓ 0.
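The behaviour of the test φ_n above is easy to simulate. The following sketch is not from the paper: it assumes P0 is the geometric distribution with parameter 1/2 (which has full support on N), takes M = 3 hypothetical support points P_m with k(m) = m, and checks that P_0^n φ_n → 1, while under each P_m the m-th factor of φ_n is identically zero, so the local prior predictive expectation of φ_n vanishes for every n.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3                                  # hypothetical support points P_1,...,P_M with P_m(X = m) = 0

def phi(x):                            # the test from example B.4 with k(m) = m
    return all(np.any(x == m) for m in range(1, M + 1))

for n in [10, 50, 200]:
    # under P_0 = Geometric(1/2) on {1,2,...}, which has full support
    reps = [phi(rng.geometric(0.5, size=n)) for _ in range(2000)]
    print(n, np.mean(reps))            # tends to 1, so P_0^n(phi_n) -> 1

# Under any P_m (e.g. P_0 conditioned on X != m), the m-th factor of phi_n is 0 almost surely,
# so (since Q does not lie in B) the local prior predictive expectation of phi_n is 0 for every n.
```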
B.2  Consistency, Bayesian tests and the Hellinger metric
Let us first consider characterization of posterior consistency in terms of the family of real-valued functions on the parameter space that are bounded and continuous.
Proposition B.5 Assume that Θ is a Hausdorff, completely regular space. The posterior is
consistent at θ0 ∈ Θ, if and only if,
    ∫ f(θ) dΠ(θ|X^n) → f(θ0)   in P_{θ0,n}-probability,    (B.25)
for every bounded, continuous f : Θ → R.
Proof Assume (13). Let f : Θ → R be bounded and continuous (with M > 0 such that |f | ≤
M ). Let η > 0 be given and let U ⊂ Θ be a neighbourhood of θ0 such that |f (θ) − f (θ0 )| < η
for all θ ∈ U . Integrate f with respect to the (Pθ0 ,n -almost-surely well-defined) posterior and
to δθ0 :
    | ∫ f(θ) dΠ(θ|X^n) − f(θ0) | ≤ ∫_{Θ\U} |f(θ) − f(θ0)| dΠ(θ|X^n) + ∫_U |f(θ) − f(θ0)| dΠ(θ|X^n)
        ≤ 2M Π(Θ\U|X^n) + sup_{θ∈U} |f(θ) − f(θ0)| Π(U|X^n) ≤ η + o_{P_{θ0,n}}(1),

as n → ∞, so that (B.25) holds. Conversely, assume (B.25). Let U be an open neighbourhood
of θ0. Because Θ is completely regular, there exists a continuous f : Θ → [0, 1] such that f = 1
at {θ0} and f = 0 on Θ\U. Then,

    Π(U|X^n) ≥ ∫ f(θ) dΠ(θ|X^n) → ∫ f(θ) dδ_{θ0}(θ) = 1   in P_{θ0,n}-probability.

Consequently, (13) holds.
Proposition B.5 is used to prove consistency of frequentist point-estimators derived from the
posterior.
Example B.6 Consider a model P of single-observation distributions P on (X , B) for i.i.d.
data (X1 , X2 , . . . , Xn ) ∼ P n , (n ≥ 1). Assume that the true distribution of the data is P0 ∈ P
and that the model topology is Prohorov’s weak topology or stronger. Then for any bounded,
continuous g : X → R, the map,
    f : P → R : P ↦ |(P − P0)g(X)|,

is continuous. Assuming that the posterior is weakly consistent at P0,

    | P_1^{Πn|X^n} g − P_0 g | ≤ ∫ |(P − P0)g| dΠ(P|X^n) → 0   in P_0-probability,    (B.26)
so posterior predictive distributions are consistent point estimators in Prohorov’s weak topology. Replacing the maps g by bounded, measurable maps X → R and assuming posterior
consistency in T1 , one proves consistency of posterior predictive distributions in T1 in exactly the same way. Taking the supremum over measurable g : X → [0, 1] in (B.26) and
assuming that the posterior is consistent in the total variational topology, posterior predictive
distributions are consistent in total variation as frequentist point estimators.
The vast majority of non-parametric applications of Bayesian methods in the literature is
based on the intimate relation that exists between testing and the Hellinger metric (see [51],
section 16.4). Proofs concerning posterior consistency or posterior convergence at a rate rely
on the existence of tests for small parameter subsets Bn surrounding a point θ0 ∈ Θ, versus the
complements Vn of neighbourhoods of the point θ0 . The building block in such constructions
is the following application of the minimax theorem.
Proposition B.7 (Minimax Hellinger tests)
Consider a model P of single-observation distributions P for i.i.d. data. Let B, V ⊂ P be
convex with H(B, V ) > 0. There exists a test sequence (φn ) such that,
    sup_{P∈B} P^n φ_n ≤ e^{−nH²(B,V)},    sup_{Q∈V} Q^n (1 − φ_n) ≤ e^{−nH²(B,V)}.
Proof This is an application of the minimax theorem. See Le Cam (1986) [51], section 16.4
for details.
Questions concerning consistency require the existence of tests in which at least one of the two
hypotheses is a non-convex set, typically the complement of a neighbourhood. Imposing the
model P to be of bounded entropy with respect to the Hellinger metric allows construction
of such tests, based on the uniform tests of proposition B.7. Below, we apply well-known
constructions for the uniform tests in Schwartz’s theorem from the frequentist literature [49,
7, 8, 34] to the construction of Bayesian tests. Due to relations that exist between metrics for
model parameters and the Hellinger metric in many examples and applications, the material
covered here is widely applicable in (non-parametric) models for i.i.d. data.
Example B.8 Consider a model P of distributions P for i.i.d. data X n ∼ P n , (n ≥ 1) and,
in addition, suppose that P is totally bounded with respect to the Hellinger distance. Let
P0 ∈ P and ε > 0 be given, denote V(ε) = {P ∈ P : H(P0, P) ≥ 4ε}, B_H(ε) = {P ∈ P :
H(P0, P) < ε}. There exists an N(ε) ≥ 1 and a cover of V(ε) by H-balls V1, . . . , V_{N(ε)} of
radius ε and for any point Q in any Vi and any P ∈ B_H(ε), H(Q, P) > 2ε. According to
proposition 2.6 with α = 1/2 and (7), for each 1 ≤ i ≤ N(ε) there exists a Bayesian test
sequence (φ_{i,n}) for B_H(ε) versus Vi of power (upper bounded by) exp(−2nε²). Then, for any
subset B′ ⊂ B_H(ε),

    P_n^{Π|B′} Π(V|X^n) ≤ Σ_{i=1}^{N(ε)} P_n^{Π|B′} Π(Vi|X^n)
        ≤ (1/Π(B′)) Σ_{i=1}^{N(ε)} [ ∫_{B′} P^n φ_{i,n} dΠ(P) + ∫_{Vi} P^n (1 − φ_{i,n}) dΠ(P) ]
        ≤ Σ_{i=1}^{N(ε)} √( Π(Vi)/Π(B′) ) exp(−2nε²),    (B.27)

which is smaller than or equal to e^{−nε²} for large enough n. If ε = ε_n with ε_n ↓ 0 and nε_n² → ∞,
and the model's Hellinger entropy is upper-bounded by log N(ε_n, P, H) ≤ Knε_n² for some
K > 0, the construction extends to tests that separate Vn = {P ∈ P : H(P0, P) ≥ 4ε_n} from
Bn = {P ∈ P : H(P0, P) < ε_n} asymptotically, with power exp(−Lnε_n²) for some L > 0. (See
also the so-called Le Cam dimension of a model [49] and Birgé’s rate-oriented work [7, 8].) It
is worth pointing out at this stage that posterior inconsistency due to the phenomenon of ‘data
tracking’ [4, 71], whereby weak posterior consistency holds but Hellinger consistency fails, can
only be due to failure of the testing condition in the Hellinger case.
Note that the argument also extends to models that are Hellinger separable: in that case (B.27)
remains valid, but with N(ε) = ∞. The mass fractions Π(Vi)/Π(B′) become important (we
point to strong connections with Walker’s theorem [70, 72]). Here we see the balance between
prior mass and testing power for Bayesian tests, as intended by the remark that closes the
subsection on the existence of Bayesian test sequences in section 2.
To balance entropy and prior mass differently in Hellinger separable models, Barron (1988)
[3] and Barron et al. (1999) [4] formulate an alternative condition that is based on the Radon
property that any prior on a Polish space has.
Example B.9 Consider a model P of distributions P for i.i.d. data X n ∼ P n , (n ≥ 1), with
priors (Πn ). Assume that the model P is Polish in the Hellinger topology. Let P0 ∈ P and
ε > 0 be given; for a fixed M > 1, define V(ε) = {P ∈ P : H(P0, P) ≥ Mε}, B_H(ε) = {P ∈
P : H(P0, P) < ε}. For any sequence δ_m ↓ 0, there exist compacta K_m ⊂ P such that
Π(K_m) ≥ 1 − δ_m for all m ≥ 1. For each m ≥ 1, K_m is Hellinger totally bounded so there
exists a Bayesian test sequence φ_{m,n} for B_H(ε) ∩ K_m versus V(ε) ∩ K_m. Since,

    ∫_{B_H} P^n φ_n dΠ(P) + ∫_{V} Q^n (1 − φ_n) dΠ(Q)
        ≤ ∫_{B_H ∩ K_m} P^n φ_{m,n} dΠ(P) + ∫_{V ∩ K_m} Q^n (1 − φ_{m,n}) dΠ(Q) + δ_m,

and all three terms go to zero, a diagonalization argument confirms the existence of a Bayesian
test for B_H versus V. To control the power of this test and to generalise to the case where
ε = ε_n is n-dependent, more is required: as we increase m with n, the prior mass δ_{m(n)} outside
of K_n = K_{m(n)} must decrease fast enough, while the order of the cover must be bounded: if
Πn(K_n) ≥ 1 − exp(−L1 nε_n²) and the Hellinger entropy of K_n satisfies log N(ε_n, K_n, H) ≤ L2 nε_n²
for some L1, L2 > 0, there exist M > 1, L > 0, and a sequence of tests (φ_n) such that,

    ∫_{B_H(ε_n)} P^n φ_n dΠ(P) + ∫_{V(ε_n)} Q^n (1 − φ_n) dΠ(Q) ≤ e^{−Lnε_n²},
for large enough n. (For related constructions, see Barron (1988) [3], Barron et al. (1999) [4]
and Ghosal, Ghosh and van der Vaart (2000) [34].)
To apply corollary 4.3 consider the following steps.
Example B.10 As an example of the tests required under condition (i) of corollary 4.3, consider P in the Hellinger topology, assuming total boundedness. Let U be the Hellinger ball
of radius 4ε around P0 of example B.8 and let V be its complement. The Hellinger ball B_H(ε)
in equation (B.27) contains the set K(ε). Alternatively we may consider the model in any of
the weak topologies Tn: let ε > 0 be given and let U denote a weak neighbourhood of the
form {P ∈ P : |(P^n − P_0^n)f| ≥ 2ε}, for some bounded measurable f : Xn → [0, 1], as in
proposition A.6. The set B of proposition A.6 contains a set K(δ), for some δ > 0. Both these
applications were noted by Schwartz in [62].
B.3  Some examples of remotely contiguous sequences
The following two examples illustrate the difference between contiguity and remote contiguity
in the context of parametric and non-parametric regression.
Example B.11 Let F denote a class of functions R → R. We consider samples X n =
((X1 , Y1 ), . . . , (Xn , Yn )), (n ≥ 1) of points in R2 , assumed to be related through Yi = f0 (Xi )+ei
for some unknown f0 ∈ F , where the errors are i.i.d. standard normal e1 , . . . , en ∼ N (0, 1)n
and independent of the i.i.d. covariates X1, . . . , Xn ∼ P^n, for some (ancillary) distribution P on
R. It is assumed that F ⊂ L2(P) and we use the L2-norm ||f||²_{P,2} = ∫ f² dP to define a metric d
on F, d(f, g) = ||f − g||_{P,2}. Given a parameter f ∈ F, denote the sample distributions as P_{f,n}.
We distinguish two cases: (a) the case of linear regression, where F = {fθ : R → R : θ ∈ Θ},
where θ = (a, b) ∈ Θ = R2 and fθ (x) = ax + b; and (b) the case of non-parametric regression,
where we do not restrict F beforehand.
Let Π be a Borel prior on F and place remote contiguity in context by assuming, for the
moment, that for some ρ > 0, there exist 0 < r < ρ and τ > 0, as well as Bayesian tests φ_n
for B = {f ∈ F : ||f − f0||_{P,2} < r} versus V = {f ∈ F : ||f − f0||_{P,2} ≥ ρ} under Π of power
a_n = exp(−½nτ²). If this is the case, we may assume that r < ½τ without loss of generality.
Suppose also that Π has a support in L2(P) that contains all of F.
Let us concentrate on case (b) first: a bit of manipulation casts the an -rescaled likelihood ratio
for f ∈ F in the following form,
    a_n^{-1} dP_{f,n}/dP_{f0,n}(X^n) = e^{ −½ Σ_{i=1}^n ( e_i (f−f0)(X_i) + (f−f0)²(X_i) − τ² ) },    (B.28)

under X^n ∼ P_{f0,n}. The exponent is controlled by the law of large numbers,

    (1/n) Σ_{i=1}^n ( e_i (f−f0)(X_i) + (f−f0)²(X_i) − τ² ) → ||f − f0||²_{P,2} − τ²,   P_{f0}^∞-almost-surely.

Hence, for every ε > 0 there exists an N(f, ε) ≥ 1 such that the exponent in (B.28) satisfies
the upper bound,

    Σ_{i=1}^n ( e_i (f−f0)(X_i) + (f−f0)²(X_i) − τ² ) ≤ n( ||f − f0||²_{P,2} − τ² + 2ε ),

for all n ≥ N(f, ε). Since Π(B) > 0, we may condition Π on B, choose ε = ⅛τ² and use Fatou's
inequality to find that,

    liminf_{n→∞} e^{½nτ²} dP_n^{Π|B}/dP_{f0,n}(X^n) ≥ liminf_{n→∞} e^{¼nτ²} = ∞,

P_{f0}^∞-almost-surely. Consequently, for any choice of δ,

    P_{f0,n}( dP_n^{Π|B}/dP_{f0,n}(X^n) < δ e^{−½nτ²} ) → 0,

and we conclude that P_{f0,n} ◁ e^{½nτ²} P_n^{Π|B}. Based on theorem 4.2, we conclude that,

    Π( ||f − f0||_{P,2} < ρ | X^n ) → 1   in P_{f0,n}-probability,
i.e. posterior consistency for the regression function in L2 (P )-norm obtains.
Example B.12 As for case (a), one has the choice of using a prior like above, but also to
proceed differently: expression (B.28) can be written in terms of a local parameter h ∈ Rk
which, for given θ0 and n ≥ 1, is related to θ by θ = θ0 + n−1/2 h. For h ∈ R2 , we write
P_{h,n} = P_{θ0+n^{-1/2}h,n}, P_{0,n} = P_{θ0,n} and rewrite the likelihood ratio (B.28) as follows,

    dP_{h,n}/dP_{0,n}(X^n) = e^{ (1/√n) Σ_{i=1}^n h·ℓ_{θ0}(X_i, Y_i) − ½ h·I_{θ0}·h + R_n },    (B.29)
where ℓ_{θ0} : R² → R² : (x, y) ↦ (y − a0x − b0)(x, 1) is the score function for θ, I_{θ0} = P_{θ0,1} ℓ_{θ0} ℓ_{θ0}^T
is the Fisher information matrix and R_n → 0 in P_{θ0,n}-probability. Assume that I_{θ0} is non-singular and note
the central limit,

    (1/√n) Σ_{i=1}^n ℓ_{θ0}(X_i, Y_i) → N_2(0, I_{θ0})   weakly under P_{θ0,n},

which expresses local asymptotic normality of the model [48] and implies that for any fixed
h ∈ R², P_{h,n} ◁ P_{0,n}.
Lemma B.13 Assume that the model satisfies LAN condition (B.29) with non-singular Iθ0 and
that the prior Π for θ has a Lebesgue-density π : Rd → R that is continuous and strictly positive
in all of Θ. For given H > 0, define the subsets Bn = {θ ∈ Θ : θ = θ0 + n^{-1/2}h, ||h|| ≤ H}.
Then,

    P_{0,n} ◁ c_n^{-1} P_n^{Π|Bn},    (B.30)

for any c_n ↓ 0.

Proof According to lemma 3 in section 8.4 of Le Cam and Yang (1990) [53], P_{θ0,n} is contiguous with respect to P_n^{Π|Bn}. That implies the assertion.
Note that for some K > 0, Π(Bn) ≥ b_n := K(H/√n)^d. Assume again the existence of
Bayesian tests for V = {θ ∈ Θ : ||θ − θ0|| > ρ} (for some ρ > 0) versus Bn (or some B
such that Bn ⊂ B), of power a_n = exp(−½nτ²). Then a_n b_n^{-1} = o(1),
and, assuming (B.30), theorem 4.4 implies that Π(||θ − θ0|| > ρ | X^n) → 0 in P_{θ0,n}-probability, so consistency is
straightforwardly demonstrated.

The case becomes somewhat more complicated if we are interested in optimality of parametric
rates: following the above, a logarithmic correction arises from the lower bound Π(Bn) ≥
K(H/√n)^d when combined in the application of theorem 4.4. To alleviate this, we adapt the
construction somewhat: define Vn = {θ ∈ Θ : ||θ − θ0|| > M_n n^{-1/2}} for some M_n → ∞ and
Bn like above. Under the condition that there exists a uniform test sequence for any fixed
V = {θ ∈ Θ : ||θ − θ0|| > ρ} versus Bn (see, for example, [45]), uniform test sequences for Vn
versus Bn of power e^{−K′M_n²} exist, for some K′ > 0. Alternatively, assume that the Hellinger
distance and the norm on Θ are related through inequalities of the form,

    K1 ||θ − θ0|| ≤ H(Pθ, Pθ0) ≤ K2 ||θ − θ0||,

for some constants K1, K2 > 0. Then cover Vn with rings,

    Vn,k = { θ ∈ Vn : (M_n + k − 1)/√n ≤ ||θ − θ0|| ≤ (M_n + k)/√n },

for k ≥ 1 and cover each ring with balls Vn,k,l of radius n^{-1/2}, where 1 ≤ l ≤ L_{n,k} and L_{n,k} is the
minimal number of radius-n^{-1/2} balls needed to cover Vn,k, related to the Le Cam dimension
[49]. With the Bn defined like above, and the inequality,

    ∫ P_{θ,n} Π(Vn,k,l|X^n) dΠn(θ|Bn) ≤ sup_{θ∈Bn} P_{θ,n} φ_{n,k,l} + (Πn(Vn,k,l)/Πn(Bn)) sup_{θ∈Vn,k,l} P_{θ,n}(1 − φ_{n,k,l}),

where the φ_{n,k,l} are the uniform minimax tests for Bn versus Vn,k,l of proposition B.7, of power
exp(−K′(M_n + k − 1)²) for some K′ > 0. We define φ_{n,k} = max{φ_{n,k,l} : 1 ≤ l ≤ L_{n,k}} for Vn,k
versus Bn and note,

    ∫ P_{θ,n} Π(Vn,k|X^n) dΠn(θ|Bn) ≤ ( L_{n,k} + Πn(Vn,k)/Πn(Bn) ) e^{−K(M_n+k−1)²},
where the numbers Ln,k are upper bounded by a multiple of (Mn +k)d and the fraction of prior
masses Πn (Vn,k )/Πn (Bn ) can be controlled without logarithmic corrections when summing over
k next.
But remote contiguity also applies in more irregular situations: example 1.3 does not admit
KL priors, but satisfies the requirement of remote contiguity. (Choose η equal to the uniform
density for simplicity.)
Example B.14 Consider X1 , X2 , . . . that form an i.i.d. sample from the uniform distribution
on [θ, θ + 1], for unknown θ ∈ R. The model is parametrized in terms of distributions Pθ
with Lebesgue densities of the form pθ (x) = 1[θ,θ+1] (x), for θ ∈ Θ = R. Pick a prior Π on Θ
with a continuous and strictly positive Lebesgue density π : R → R and, for some rate δn ↓ 0,
choose Bn = (θ0 , θ0 + δn ). Note that for any α > 0, there exists an N ≥ 1 such that for all
n ≥ N , (1 − α)π(θ0 )δn ≤ Π(Bn ) ≤ (1 + α)π(θ0 )δn . Note that for any θ ∈ Bn and X n ∼ Pθn0 ,
dP_θ^n/dP_{θ0}^n(X^n) = 1{X_(1) > θ}, and correspondingly,

    dP_n^{Π|Bn}/dP_{θ0}^n(X^n) = Π(Bn)^{-1} ∫_{θ0}^{θ0+δ_n} 1{X_(1) > θ} dΠ(θ) ≥ ((1−α)/(1+α)) · (δ_n ∧ (X_(1) − θ0))/δ_n,

for large enough n. As a consequence, for every δ > 0 and all a_n ↓ 0,

    P_{θ0}^n( dP_n^{Π|Bn}/dP_{θ0}^n(X^n) < δ a_n ) ≤ P_{θ0}^n( δ_n^{-1}(X_(1) − θ0) < (1+α) δ a_n ),

for large enough n ≥ 1. Since n(X_(1) − θ0) has an exponential weak limit under P_{θ0}^n, we choose
δ_n = n^{-1}, so that the r.h.s. in the above display goes to zero. So P_{θ0,n} ◁ a_n^{-1} P_n^{Π|Bn}, for any
a_n ↓ 0.
To show consistency and derive the posterior rate of convergence in example 1.3, we use
theorem 4.4.
Example B.15 Continuing with example B.14, we define Vn = {θ : θ − θ0 > ε_n}. It is noted
that, for every 0 < c < 1, the likelihood ratio test,

    φ_n(X^n) = 1{ dP_{θ0+ε_n,n}/dP_{θ0,n}(X^n) > c } = 1{ X_(1) > θ0 + ε_n },

satisfies P_θ^n (1 − φ_n)(X^n) = 0 for all θ ∈ Vn, and if we choose δ_n = 1/n and ε_n = M_n/n for
some M_n → ∞, P_θ^n φ_n ≤ e^{−M_n+1} for all θ ∈ Bn, so that,

    ∫_{Bn} P_θ^n φ_n dΠ(θ) + ∫_{Vn} P_θ^n (1 − φ_n) dΠ(θ) ≤ Π(Bn) e^{−M_n+1}.

Using lemma 2.2, we see that P_n^{Π|Bn} Π(Vn|X^n) ≤ e^{−M_n+1}. Based on the conclusion of example B.14 above, remote contiguity implies that P_{θ0}^n Π(Vn|X^n) → 0. Treating the case θ < θ0 − ε_n
similarly, we conclude that the posterior is consistent at (any ε_n slower than) rate 1/n.
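The weak limit invoked in example B.14 is elementary to check numerically; the following sketch (not part of the paper, with θ0 = 0 and n = 1000 chosen for illustration) compares the empirical distribution of n(X_(1) − θ0) with the standard exponential distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta0, n, reps = 0.0, 1000, 5000

# n(X_(1) - theta0) for i.i.d. Uniform[theta0, theta0 + 1] samples of size n
z = np.array([n * (rng.uniform(theta0, theta0 + 1.0, n).min() - theta0) for _ in range(reps)])

# compare with the Exp(1) limit via a Kolmogorov-Smirnov statistic
print(stats.kstest(z, 'expon'))
```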
To conclude, we demonstrate the relevance of priors satisfying the lower bound (16). Let us
repeat lemma 8.1 in [34], to demonstrate that the sequence (P0n ) is remotely contiguous with
respect to the local prior predictive distributions based on the Bn of example 4.6.
Lemma B.16 For all n ≥ 1, assume that (X1, X2, . . . , Xn) ∈ X^n ∼ P_0^n for some P0 ∈ P
and let ε_n ↓ 0 be given. Let Bn be as in example 4.6. Then, for any priors Πn such that
Πn(Bn) > 0,

    P_{θ0,n}( ∫ dP_θ^n/dP_{θ0}^n(X^n) dΠn(θ|Bn) < e^{−cnε_n²} ) → 0,
for any constant c > 1.
B.4  The sparse normal means problem
For an example of consistency in the false-detection-rate (FDR) sense, we turn to the most
prototypical instance of sparsity, the so-called sparse normal means problem: in recent years
various types of priors have been proposed for the Bayesian recovery of a nearly-black vector
in the Gaussian sequence model. Most intuitive in this context is the class of spike-and-slab
priors [57], which first select a sparse subset of non-zero components and then draw those
from a product distribution. But other proposals have also been made, e.g. the horseshoe prior
[14], a scale-mixture of normals. Below, we consider FDR-type consistency with spike-and-slab
priors.
Example B.17 Estimation of a nearly-black vector of locations in the Gaussian sequence
model is based on n-point samples X n = (X1 , . . . , Xn ) assumed distributed according to,
    Xi = θi + εi,    (B.31)

(for all 1 ≤ i ≤ n), where ε1, . . . , εn form an i.i.d. sample of standard-normally distributed
errors. The parameter θ is a sequence (θi : i ≥ 1) in R, with n-dimensional projection
θ^n = (θ1, . . . , θn), for every n ≥ 1. The corresponding distributions for X^n are denoted P_{θ,n}
for all n ≥ 1.
Denoting by pn the number of non-zero components of the vector θn = (θ1 , . . . , θn ) ∈ Rn ,
sparsity is imposed through the assumption that θ is nearly black, that is, pn → ∞, but
pn = o(n) as n → ∞. For any integer 0 ≤ p ≤ n, denote the space of n-dimensional vectors
θ^n with exactly p non-zero components by ℓ_{0,n}(p). For later reference, we introduce, for every
subset S of I_n := {1, . . . , n}, the space R^n_S := {θ^n ∈ R^n : θi = 1{i ∈ S}θi, 1 ≤ i ≤ n}.
Popular sub-problems concern selection of the non-zero components [10] and (subsequent)
minimax-optimal estimation of the non-zero components [25] (especially with the LASSO in
related regression problems, see, for example, [75]). Many authors have followed Bayesian
approaches; for empirical priors, see [44], and for hierarchical priors, see [15] (and references
therein).
As n grows, the minimax rate for the L2-error in the estimation of θ^n is bounded in the following, sparsity-induced way [24],
$$\inf_{\hat\theta^n}\;\sup_{\theta^n\in \ell_{0,n}(p_n)} P_{\theta,n}\bigl\|\hat\theta^n - \theta^n\bigr\|_{2,n}^2 \;\le\; 2\,p_n\log\frac{n}{p_n}\,(1+o(1)),$$
as n → ∞, where θ̂^n runs over all estimators for θ^n.
A natural proposal for a prior Π for θ [57] (or rather, priors Πn for all θn (n ≥ 1)), is to draw
a sparse θn hierarchically [15]: given n ≥ 1, first draw p ∼ πn (for some distribution πn on
{0, 1, . . . , n}), then draw a subset S of order p from {1, . . . , n} uniformly at random, and draw
θn by setting θi = 0 if i 6∈ S and (θi : i ∈ S) ∼ Gp , for some distribution G on (all of) R.
The components of θn can therefore be thought of as having been drawn from a mixture of a
distribution degenerate at zero (the spike) and a full-support distribution G (the slab).
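The hierarchical draw described above is easy to make concrete; the sketch below is only an illustration. The particular choices of π_n (a geometric-type distribution favouring small p) and of the slab G (standard Cauchy) are assumptions made here for definiteness, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_spike_and_slab(n, rng):
    """One draw of theta^n: p ~ pi_n, then a uniformly random subset S of size p,
    then (theta_i : i in S) i.i.d. from the slab G; theta_i = 0 off S (the spike)."""
    p = min(n, rng.geometric(0.9) - 1)          # illustrative pi_n, concentrated near 0
    S = rng.choice(n, size=p, replace=False)
    theta = np.zeros(n)
    theta[S] = rng.standard_cauchy(p)           # illustrative slab G with full support
    return theta

def draw_data(theta, rng):
    """Gaussian sequence model (B.31): X_i = theta_i + eps_i with eps_i i.i.d. N(0,1)."""
    return theta + rng.standard_normal(theta.shape)

n = 500
theta_true = np.zeros(n)
theta_true[:10] = 5.0                            # a nearly black truth with p_n = 10
X = draw_data(theta_true, rng)
theta_from_prior = draw_spike_and_slab(n, rng)
```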
To show that methods presented in this paper also apply in complicated problems like this, we
give a proof of posterior convergence in the FDR sense. We appeal freely to useful results that appeared elsewhere, in particular in [15]: we adopt some of Castillo and van der Vaart's more technical steps and reconstitute the FDR-consistency proof in terms of Bayesian testing and remote contiguity. By way of comparison, the testing condition and prior-mass lower bound of theorem 4.4 are dealt with simultaneously, while the remote contiguity statement is treated separately. (We stress that only the way of organising the proof is new, not the result. In fact, we prove only part of what [15] achieves.)
Assume that the data follows (B.31) and denote by θ0 the true vector of normal means. For
each n ≥ 1, let p_n (respectively p) denote the number of non-zero components of θ_0^n (respectively
θn ). We do not assume that the true degree of sparsity pn is fully known, but for simplicity
and brevity we assume that there is a known sequence of upper bounds qn , such that for some
constant A > 1, pn ≤ qn ≤ A pn , for all n ≥ 1. (Indeed, theorem 2.1 in [15] very cleverly shows
that if G has a second moment and the prior density for the sparsity level has a tail that is
slim enough, then the posterior concentrates on sets of the form, {θn ∈ Rn : p ≤ Apn } under
P0 , for some A > 1.)
Set r_n² = p_n log(n/p_n) and define two subsets of R^n,
$$V_n = \bigl\{\theta^n : p \le A p_n,\ \|\theta^n - \theta_0^n\|_{2,n} > M r_n\bigr\},\qquad B_n = \bigl\{\theta^n : \|\theta^n - \theta_0^n\|_{2,n} \le d\,r_n\bigr\},$$
assuming for future reference that Π(B_n) > 0. As for V_n, we split further: define, for all j ≥ 1,
$$V_{n,j} = \bigl\{\theta^n\in V_n : jM r_n < \|\theta^n - \theta_0^n\|_{2,n} \le (j+1)M r_n\bigr\}.$$
Next, we subdivide V_{n,j} into intersections with the spaces R^n_S for S ⊂ I_n: we write V_{n,j} = ∪{V_{n,S,j} : S ⊂ I_n} with V_{n,S,j} = V_{n,j} ∩ R^n_S. For every n ≥ 1, j ≥ 1 and S ⊂ I_n, we cover V_{n,S,j} by N_{n,S,j} L2-balls V_{n,j,S,i} of radius ½ jM r_n and centre points θ_{j,S,i}. Comparing the problem of covering V_{n,j} with that of covering V_{n,1}, one realizes that N_{n,S,j} ≤ N_{n,S} := N_{n,S,1}.
Fix n ≥ 1. Due to lemma 2.2, for any test sequences φn,j,S,i ,
$$P_n^{\Pi|B_n}\Pi(V_n|X^n) \;\le\; \sum_{j\ge 1}\sum_{S\subset I_n}\sum_{i=1}^{N_{n,S,j}} P_n^{\Pi|B_n}\Pi(V_{n,j,S,i}|X^n) \;\le\; \frac{1}{\Pi(B_n)}\sum_{j\ge 1}\sum_{S\subset I_n}\sum_{i=1}^{N_{n,S,j}}\Bigl(\int_{B_n} P_{\theta,n}\phi_{n,j,S,i}\,d\Pi(\theta) + \int_{V_{n,j,S,i}} P_{\theta,n}(1-\phi_{n,j,S,i})\,d\Pi(\theta)\Bigr) \;\le\; \sum_{p=0}^{Ap_n}\binom{n}{p}\sum_{j\ge 1} N_{n,S}\,\frac{a_n(j)}{b_n},$$
where b_n := Π(B_n), a_n(j) := max_{S⊂I_n, 1≤i≤N_{n,S,j}} a_n(j,S,i) and,
$$\frac{a_n(j,S,i)}{b_n} \;=\; \int P_{\theta,n}\phi_{n,j,S,i}\,d\Pi(\theta|B_n) + \frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}\int P_{\theta,n}(1-\phi_{n,j,S,i})\,d\Pi(\theta|V_{n,j,S,i}) \;\le\; \sup_{\theta^n\in B_n} P_{\theta,n}\phi_{n,j,S,i} + \frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}\sup_{\theta^n\in V_{n,j,S,i}} P_{\theta,n}(1-\phi_{n,j,S,i}).$$
A standard argument (see lemma 5.1 in [15]) shows that there exists a test φn,j,S,i such that,
$$P_{0,n}\phi_{n,j,S,i} + \frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}\sup_{\theta^n\in V_{n,j,S,i}} P_{\theta,n}(1-\phi_{n,j,S,i}) \;\le\; 2\sqrt{\frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}}\;e^{-\frac{1}{128} j^2 M^2 p_n\log(n/p_n)}.$$
Note that for every measurable 0 ≤ φ ≤ 1, Cauchy's inequality implies that, for all θ, θ′ ∈ R^n,
$$P_{\theta,n}\phi \;\le\; \bigl(P_{\theta',n}\phi^2\bigr)^{1/2}\bigl(P_{\theta',n}(dP_{\theta,n}/dP_{\theta',n})^2\bigr)^{1/2} \;\le\; \bigl(P_{\theta',n}\phi\bigr)^{1/2}\,e^{\frac{1}{2}\|\theta-\theta'\|_{2,n}^2}. \qquad (B.32)$$
We use this to generalise the first term in the above display to the test uniform over Bn at the
expense of an extra factor, that is,
$$\sup_{\theta^n\in B_n} P_{\theta,n}\phi_{n,j,S,i} + \frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}\sup_{\theta^n\in V_{n,j,S,i}} P_{\theta,n}(1-\phi_{n,j,S,i}) \;\le\; 2\sqrt{\frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}}\;e^{-\frac{1}{256} j^2 M^2 r_n^2 + \frac{1}{2} d^2 r_n^2}.$$
In what appears to be one of the essential (and technically very demanding) points of [15], the proofs of lemma 5.4 (only after the first line) and of proposition 5.1 show that there exists a constant K > 0 such that,
$$\sqrt{\frac{\Pi(V_{n,j,S,i})}{\Pi(B_n)}} \;\le\; e^{K r_n^2},$$
if G has a Lebesgue density g : R → R for which there exists a constant c > 0 such that |log g(θ) − log g(θ′)| ≤ c(1 + |θ − θ′|) for all θ, θ′ ∈ R. This allows us to show (see the final argument in the proof of proposition 5.1 in [15]) that if we choose M > 0 large enough, there exists a constant K′ > 0 such that, for large enough n,
$$P_n^{\Pi|B_n}\Pi(V_n|X^n) \;\le\; e^{-K' r_n^2}.$$
Remote contiguity follows from (B.32): fix some n ≥ 1 and note that for any θ^n ∈ B_n,
$$(P_{0,n}\phi)^2 \;\le\; e^{d^2 r_n^2}\,P_{\theta,n}\phi.$$
Integrating with respect to Π(·|B_n) on both sides shows that,
$$P_{0,n}\phi \;\le\; e^{\frac{1}{2} d^2 r_n^2}\bigl(P_n^{\Pi|B_n}\phi\bigr)^{1/2},$$
so that P_{0,n} C e^{d² r_n²} P_n^{Π|B_n}. So if we choose d² < K′, remote contiguity guarantees that P_{0,n} Π(V_n|X^n) → 0.
B.5 Goodness-of-fit Bayes factors for random walks
Consider the asymptotic consistency of goodness-of-fit tests for the transition kernel of a
Markov chain with posterior odds or Bayes factors. Bayesian analyses of Markov chains on
a finite state space are found in [66] and references therein. Consistency results c.f. [70] for
random walk data are found in [36]. Large-deviation results for posterior distributions are
derived in [59, 27]. The examples below are based on ergodicity for remote contiguity and
Hoeffding’s inequality for uniformly ergodic Markov chains [56, 37] to construct suitable tests.
We first prove the analogue of Schwartz’s construction in the case of an ergodic random walk.
Let (S, S ) denote a measurable state space for a discrete-time, stationary Markov process P
describing a random walk X n = {Xi ∈ S : 0 ≤ i ≤ n} of length n ≥ 1 (conditional on a
starting position X0 ). The chain has a Markov transition kernel P (·|·) : S × S → [0, 1] that
describes Xi |Xi−1 for all i ≥ 1.
Led by Pearson’s approach to goodness-of-fit testing, we choose a finite partition α = {A1 , . . . , AN }
of S and ‘bin the data’ in the sense that we switch to a new process Z n taking values in the
finite state space Sα = {ej : 1 ≤ j ≤ N } (where ej denotes the j-th standard basis vector
in Rn ), defined by Z n = {Zi ∈ Sα : 0 ≤ i ≤ n}, with Zi = (1{Xi ∈ A1 }, . . . , 1{Xi ∈ AN }).
The process Z n forms a stationary Markov chain on Sα with distribution Pα,n . The model is
parametrized in terms of the convex set Θ of N × N Markov transition matrices pα on the
finite state space Sα ,
pα (k|l) = Pα,n (Zi = ek |Zi−1 = el ) = P (Xi ∈ Ak |Xi−1 ∈ Al ),
(B.33)
for all 0 ≤ i ≤ n and 1 ≤ k, l ≤ N . We assume that Pα,n is ergodic with equilibrium
distribution that we denote by πα , and πα (k) := πα (Z = k). We are interested in Bayes
factors for goodness-of-fit type questions, given a parameter space consisting of transition
matrices.
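To make the binning step concrete, the following sketch projects a real-valued random walk onto a finite partition α and tabulates the empirical transition frequencies of the resulting chain Z^n, in the spirit of (B.33). The AR(1) chain and the partition used here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

def binned_chain(X, edges):
    """Map each X_i to the index of its partition cell; edges define alpha on R."""
    return np.searchsorted(edges, X)

def empirical_kernel(Z, N):
    """Empirical version of p_alpha(k|l): column l estimates the law of Z_i given Z_{i-1} = l."""
    counts = np.zeros((N, N))
    for prev, cur in zip(Z[:-1], Z[1:]):
        counts[cur, prev] += 1.0
    return counts / np.maximum(counts.sum(axis=0, keepdims=True), 1.0)

# Illustrative ergodic random walk on R: an AR(1) chain.
n = 10_000
X = np.zeros(n + 1)
for i in range(1, n + 1):
    X[i] = 0.5 * X[i - 1] + rng.standard_normal()

edges = np.array([-1.0, 0.0, 1.0])       # partition alpha with N = 4 cells
Z = binned_chain(X, edges)
p_hat = empirical_kernel(Z, N=len(edges) + 1)
```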
Example B.18 Assume that the true transition kernel P0 gives rise to a matrix p0 ∈ Θ that
generates an ergodic Markov chain Z n . Denote the true distribution of Z n by P0,n and the
equilibrium distribution by π_0 (with π_0(k) := π_0(Z = k)). For given ε > 0, define,
$$B' = \Bigl\{p_\alpha\in\Theta :\ \sum_{k,l=1}^{N} -p_0(l|k)\,\pi_0(k)\,\log\frac{p_\alpha(l|k)}{p_0(l|k)} \;<\; \epsilon^2\Bigr\}.$$
Assume that Π(B′) > 0. According to the ergodic theorem, for every p_α ∈ B′,
$$\frac{1}{n}\sum_{i=1}^n \log\frac{p_\alpha(Z_i|Z_{i-1})}{p_0(Z_i|Z_{i-1})} \;\xrightarrow{\;P_{0,n}\text{-a.s.}\;}\; \sum_{k,l=1}^N p_0(l|k)\,\pi_0(k)\,\log\frac{p_\alpha(l|k)}{p_0(l|k)},$$
(compare with the rate-function in the large-deviation results in [59, 27]) so that, for large enough n,
$$\frac{dP_{\alpha,n}}{dP_{0,n}}(Z^n) \;=\; \prod_{i=1}^n \frac{p_\alpha(Z_i|Z_{i-1})}{p_0(Z_i|Z_{i-1})} \;\ge\; e^{-\frac{n\epsilon^2}{2}},$$
P_{0,n}-almost-surely. Just like in Schwartz's proof [62], in proposition B.22 and in example B.11, the assumption Π(B′) > 0 and Fatou's lemma imply remote contiguity because,
$$P_{0,n}\Bigl(\int \frac{dP_{\alpha,n}}{dP_{0,n}}(Z^n)\,d\Pi(p_\alpha|B') < e^{-\frac{n\epsilon^2}{2}}\Bigr) \;\to\; 0.$$
So lemma 3.3 says that P_{0,n} C exp(nε²/2) P_n^{Π|B′}.
However, exponential remote contiguity will turn out not to be enough for goodness-of-fit
tests below, unless we impose stringent model conditions. Instead, we shall resort to local
asymptotic normality for a sharper result.
Example B.19 We formulate goodness-of-fit hypotheses in terms of the joint distribution for
two consecutive steps in the random walk. Like Pearson, we fix some such distribution P0 and
consider hypotheses based on differences of ‘bin probabilities’ pα (k, l) = pα (k|l)πα (l),
$$H_0:\ \max_{1\le k,l\le N}\bigl|p_\alpha(k,l) - p_0(k,l)\bigr| < \epsilon,\qquad H_1:\ \max_{1\le k,l\le N}\bigl|p_\alpha(k,l) - p_0(k,l)\bigr| \ge \epsilon, \qquad (B.34)$$
for some fixed ε > 0. The sets B and V are defined as the sets of transition matrices p_α ∈ Θ
that satisfy hypotheses H0 and H1 respectively. We assume that the prior is chosen such that
Π(B) > 0 and Π(V ) > 0.
Endowed with some matrix norm, Θ is compact and a Borel prior on Θ can be defined in
various ways. For example, we may assign the vector (pα (·|1), . . . , pα (·|N )) a product of
Dirichlet distributions. Conjugacy applies and the posterior for pα is again a product of
Dirichlet distributions [66]. For an alternative family of priors, consider the set E of the N^N many N × N-matrices E that have standard basis vectors e_k in R^N as columns. Each E ∈ E is a deterministic Markov transition matrix on S_α and E is the extremal set of the polyhedral set Θ. According to Choquet's theorem, every transition matrix p_α can then be written in the form,
$$p_\alpha \;=\; \sum_{E\in\mathcal{E}} \lambda_E\,E, \qquad (B.35)$$
for a (non-unique) combination λ = {λ_E : E ∈ E} such that λ_E ≥ 0, Σ_E λ_E = 1. If
λE > 0 for all E ∈ E , the resulting Markov chain is ergodic and we denote the corresponding
distributions for Z^n by P_{α,n}. Any Borel prior Π′ (e.g. a Dirichlet distribution) on the simplex S_{N^N} in R^{N^N} is a prior for λ and induces a Borel prior Π on Θ. Note that all non-ergodic transition matrices lie in the boundary ∂Θ, so if we choose Π′ such that Π(Θ̊) = 1, ergodicity may be assumed in all prior-almost-sure arguments. This is true for any Π′ that is absolutely continuous with respect to the (N^N − 1)-dimensional Lebesgue measure on S_{N^N} (for example when we choose Π′ equal to a Dirichlet distribution). Note that if the associated density is
continuous and strictly positive, Π(B) > 0 and Π(V ) > 0.
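The Choquet representation (B.35) suggests a simple way to draw a random transition matrix: draw Dirichlet weights on the simplex S_{N^N} and mix the deterministic extremal matrices. The sketch below does exactly that for a small N; the Dirichlet concentration parameter is an illustrative choice.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N = 3

# The extremal set E: one deterministic kernel for every map {1,...,N} -> {1,...,N}.
extremals = []
for targets in itertools.product(range(N), repeat=N):
    E = np.zeros((N, N))
    for col, row in enumerate(targets):
        E[row, col] = 1.0               # column col equals the standard basis vector e_row
    extremals.append(E)
extremals = np.array(extremals)         # shape (N**N, N, N)

def draw_transition_matrix(rng, concentration=1.0):
    """lambda ~ Dirichlet on S_{N^N}; p_alpha = sum_E lambda_E E as in (B.35).
    With probability one all lambda_E > 0, so the resulting chain is ergodic."""
    lam = rng.dirichlet(np.full(len(extremals), concentration))
    return np.tensordot(lam, extremals, axes=1)

p_alpha = draw_transition_matrix(rng)
assert np.allclose(p_alpha.sum(axis=0), 1.0)   # columns are probability vectors
```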
We intend to use theorem 4.8 with B and V defined by H0 and H1 , so we first demonstrate that
a Bayesian test sequence for B versus V exists, based on a version of Hoeffding’s inequality
valid for random walks [37]. First, define, for given 0 < λ_n ≤ N^{−N} such that λ_n ↓ 0,
$$S'_n := \bigl\{\lambda \in S_{N^N} : \lambda_E \ge \lambda_n/N^{N-1}\ \text{for all } E\in\mathcal{E}\bigr\},$$
and denote the image of S′_n under (B.35) by S_n. Note that if Π(∂Θ) = 0, then π_{S,n} := Π(Θ \ S_n) → 0.
Now fix n ≥ 1 for the moment. Recalling the nature of the matrices E, we see that for every
1 ≤ k, l ≤ N , pα (k|l) as in equation (B.35) is greater than or equal to λn . Consequently,
the corresponding Markov chain satisfies condition (A.1) of Glynn and Ormoneit [37] (closely
related to the notion of uniform ergodicity [56]): starting in any point X0 under a transition
from Sn , the probability that X1 lies in A ⊂ Sα is greater than or equal to λn φ(A), where φ is
the uniform probability measure on Sα . This mixing condition enables a version of Hoeffding’s
inequality (see theorem 2 in [37]): for any λ ∈ S′_n and 1 ≤ k, l ≤ N, the transition matrix of equation (B.35) is such that, with p̂_n(k,l) = n^{-1} Σ_i 1{Z_i = k, Z_{i−1} = l},
$$P_{\alpha,n}\bigl(|\hat p_n(k,l) - p_\alpha(k,l)| \ge \delta\bigr) \;\le\; \exp\Bigl(-\frac{\lambda_n^2\,(n\delta - 2\lambda_n^{-1})^2}{2n}\Bigr). \qquad (B.36)$$
Now define, for a given sequence δ_n > 0 with δ_n ↓ 0 and all n ≥ 1, 1 ≤ k, l ≤ N,
$$B_n = \{p_\alpha\in\Theta : \max_{k,l}|p_\alpha(k,l)-p_0(k,l)| < \epsilon - \delta_n\},\qquad V_{k,l} = \{p_\alpha\in\Theta : |p_\alpha(k,l)-p_0(k,l)| \ge \epsilon\},$$
$$V_{+,k,l,n} = \{p_\alpha\in\Theta : p_\alpha(k,l)-p_0(k,l) \ge \epsilon + \delta_n\},\qquad V_{-,k,l,n} = \{p_\alpha\in\Theta : p_\alpha(k,l)-p_0(k,l) \le -\epsilon - \delta_n\}.$$
Note that if Π′ is absolutely continuous with respect to the Lebesgue measure on S_{N^N}, then π_{B,n} := Π(B \ B_n) → 0 and π_{n,k,l} := Π(V_{k,l} \ (V_{+,k,l,n} ∪ V_{−,k,l,n})) → 0.
If we define the test φ_{+,k,l,n}(Z^n) = 1{p̂_n(k,l) − p_0(k,l) ≥ ε}, then for any p_α ∈ B_n ∩ S_n,
$$P_{\alpha,n}\phi_{+,k,l,n}(Z^n) \;\le\; P_{\alpha,n}\bigl(\hat p_n(k,l) - p_\alpha(k,l) \ge \delta_n\bigr) \;\le\; \exp\Bigl(-\frac{\lambda_n^2\,(n\delta_n - 2\lambda_n^{-1})^2}{2n}\Bigr).$$
If, on the other hand, p_α lies in the intersection of V_{+,k,l,n} with S_n, we find,
$$P_{\alpha,n}\bigl(1-\phi_{+,k,l,n}(Z^n)\bigr) \;=\; P_{\alpha,n}\bigl(\hat p_n(k,l) - p_\alpha(k,l) < -\delta_n\bigr) \;\le\; \exp\Bigl(-\frac{\lambda_n^2\,(n\delta_n - 2\lambda_n^{-1})^2}{2n}\Bigr).$$
Choosing the sequences δ_n and λ_n such that nδ_n²λ_n² → ∞, we also have λ_n^{-1} = o(nδ_n), so the exponent on the right is smaller than or equal to −(1/8) nλ_n²δ_n².
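Spelled out for a binned chain, the tests above take the following form; p_0 and ε are inputs and p̂_n is the empirical joint bin frequency. This is only a sketch of the test statistics, not of the full consistency argument.

```python
import numpy as np

def empirical_joint_probs(Z, N):
    """hat p_n(k,l): (number of observed transitions l -> k) divided by the number of transitions."""
    counts = np.zeros((N, N))
    for prev, cur in zip(Z[:-1], Z[1:]):
        counts[cur, prev] += 1.0
    return counts / max(len(Z) - 1, 1)

def phi_plus(Z, p0, eps, k, l):
    """phi_{+,k,l,n}: flags hat p_n(k,l) - p_0(k,l) >= eps."""
    return float(empirical_joint_probs(Z, p0.shape[0])[k, l] - p0[k, l] >= eps)

def phi_n(Z, p0, eps):
    """phi_n = max over (k,l) of phi_{-,k,l,n} and phi_{+,k,l,n}, i.e. it flags
    max_{k,l} |hat p_n(k,l) - p_0(k,l)| >= eps."""
    p_hat = empirical_joint_probs(Z, p0.shape[0])
    return float(np.max(np.abs(p_hat - p0)) >= eps)
```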
So if we define φ_n(Z^n) = max_{k,l}{φ_{−,k,l,n}(Z^n), φ_{+,k,l,n}(Z^n)},
$$\int_B P_{\alpha,n}\phi_n\,d\Pi(p_\alpha) + \int_V Q_{\alpha,n}(1-\phi_n)\,d\Pi(q_\alpha) \;\le\; \int_{B\cap S_n} P_{\alpha,n}\phi_n\,d\Pi(p_\alpha) + \int_{V\cap S_n} Q_{\alpha,n}(1-\phi_n)\,d\Pi(q_\alpha) + \Pi(\Theta\setminus S_n)$$
$$\le\; \sum_{k,l=1}^N\int_B P_{\alpha,n}(\phi_{-,k,l,n}+\phi_{+,k,l,n})\,d\Pi(p_\alpha) + \sum_{k,l=1}^N\Bigl(\int_{V_{-,k,l,n}} Q_{\alpha,n}(1-\phi_{-,k,l,n})\,d\Pi(q_\alpha) + \int_{V_{+,k,l,n}} Q_{\alpha,n}(1-\phi_{+,k,l,n})\,d\Pi(q_\alpha)\Bigr) + \sum_{k,l=1}^N \Pi\bigl(V_{k,l}\setminus(V_{+,k,l,n}\cup V_{-,k,l,n})\bigr) + \Pi(\Theta\setminus S_n) + \Pi(B\setminus B_n)$$
$$\le\; 2N^2\,e^{-\frac{1}{8} n\lambda_n^2\delta_n^2} + \pi_{B,n} + \pi_{S,n} + \sum_{k,l=1}^N \pi_{n,k,l}.$$
So if we choose a prior Π′ on S_{N^N} that is absolutely continuous with respect to Lebesgue measure, then (φ_n) defines a Bayesian test sequence for B versus V.
Because we have not imposed control over the rates at which the terms on the r.h.s. go to zero, remote contiguity at exponential rates is not good enough. Even if we would restrict supports of a sequence of priors such that π_{B,n} = π_{S,n} = π_{n,k,l} = 0, the first term on the r.h.s. is sub-exponential. To obtain a rate that is sharp enough, we note that the chain Z^n is positive recurrent, which guarantees that the map p_α ↦ dP_{α,n}/dP_{0,n} is locally asymptotically normal [43, 38]. According to lemma B.13, this implies that local prior predictive distributions based on n^{−1/2}-neighbourhoods of p_0 in Θ are c_n-remotely contiguous to P_{0,n} for any rate c_n, if the prior has full support. If we require that the prior density π′ with respect to Lebesgue measure on S_{N^N} is continuous and strictly positive, then we see that there exists a constant π > 0 such that π′(λ) ≥ π for all λ ∈ S_{N^N}, so that for every n^{−1/2}-neighbourhood B_n of p_0, there exists a K > 0 such that Π(B_n) ≥ b_n := K n^{−N^N/2}. Although local asymptotic normality guarantees remote contiguity at arbitrary rates, we still have to make sure that c_n → 0 in lemma B.13, i.e. that a_n = o(b_n). Then the remark directly after theorem 4.8 shows that condition (ii) of said theorem is satisfied.
The above leads to the following conclusion concerning goodness-of-fit testing c.f. (B.34).
Proposition B.20 Let X n be a stationary, discrete time Markov chain on a measurable state
space (S, S ). Choose a finite, measurable partition α of S such that the Markov chain Z n is
ergodic. Choose a prior Π′ on S_{N^N} absolutely continuous with respect to Lebesgue measure with a continuous density that is everywhere strictly positive. Assume that,
(i) nλ_n²δ_n²/log(n) → ∞,
(ii) Π(B \ B_n), Π(Θ \ S_n) = o(n^{−N^N/2}),
(iii) max_{k,l} Π(V_{k,l} \ (V_{+,k,l,n} ∪ V_{−,k,l,n})) = o(n^{−N^N/2}).
Then for any choice of ε > 0, the Bayes factors F_n are consistent for H_0 versus H_1.
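As an illustration of how such a Bayes factor can be evaluated in practice, the sketch below uses the conjugate product-Dirichlet prior on the columns of p_α mentioned earlier and estimates the posterior and prior odds of H_0 by Monte Carlo; the Bayes factor is taken in the usual form, the ratio of posterior odds to prior odds. All concrete choices (concentration, number of draws, the crude guards against division by zero) are illustrative and not part of the proposition.

```python
import numpy as np

def stationary(p):
    """Equilibrium distribution of a column-stochastic matrix p (Perron eigenvector)."""
    vals, vecs = np.linalg.eig(p)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def joint_probs(p):
    """p(k,l) = p(k|l) * pi(l)."""
    return p * stationary(p)[None, :]

def bayes_factor(Z, N, p0_joint, eps, conc=1.0, draws=4000, rng=None):
    rng = rng or np.random.default_rng(4)
    counts = np.zeros((N, N))
    for prev, cur in zip(Z[:-1], Z[1:]):
        counts[cur, prev] += 1.0
    in_H0_post = in_H0_prior = 0
    for _ in range(draws):
        # Conjugacy: column l of the posterior is Dirichlet(conc + transition counts into states, from l).
        post = np.column_stack([rng.dirichlet(conc + counts[:, l]) for l in range(N)])
        prior = np.column_stack([rng.dirichlet(np.full(N, conc)) for _ in range(N)])
        in_H0_post += np.max(np.abs(joint_probs(post) - p0_joint)) < eps
        in_H0_prior += np.max(np.abs(joint_probs(prior) - p0_joint)) < eps
    post_odds = in_H0_post / max(draws - in_H0_post, 1)
    prior_odds = max(in_H0_prior, 1) / max(draws - in_H0_prior, 1)
    return post_odds / prior_odds
```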
To guarantee ergodicity of Z n one may use an empirical device, i.e. we may use an independent, finite-length realization of the random walk X n to find a partition α such that for all
1 ≤ k, l ≤ N , we observe some m-step transition from l to k. An interesting generalisation
concerns a hypothesized Markov transition kernel P0 for the process X n and partitions αn
(with projections p0,αn as in (B.33)), chosen such that αn+1 refines αn for all n ≥ 1. Bayes
factors then test a sequence of pairs of hypotheses (B.34) centred on the p0,αn . The arguments
leading to proposition B.20 do not require modification, and the rate of growth of N_n enters the conditions of proposition B.20.
Example B.19 demonstrates the enhancement of the role of the prior as intended by the remark
that closes the subsection on the existence of Bayesian test sequences in section 2: where testing
power is relatively weak, prior mass should be scarce to compensate and where testing power
is strong, prior mass should be plentiful. A random walk for which mixing does not occur
quickly enough does not give rise to (B.36) and alternatives for which separation decreases too
fast lose testing power, so the difference sets of proposition B.20 are the hard-to-test parts of
the parameter space and conditions (ii)–(iii) formulate how scarce prior mass in these parts
has to be.
B.6 Finite sample spaces and the tailfree case
Example B.21 Consider the situation where we observe an i.i.d. sample of random variables
X1 , X2 , . . . taking values in a space XN of finite order N . Writing XN as the set of integers
{1, ..., N}, we note that the space M of all probability measures P on (X_N, 2^{X_N}) with the total-variational metric (P, Q) ↦ ‖P − Q‖ is in isometric correspondence with the simplex,
$$S_N = \bigl\{p = (p(1),\ldots,p(N)) : \min_k p(k) \ge 0,\ \textstyle\sum_i p(i) = 1\bigr\},$$
with the metric (p, q) ↦ ‖p − q‖ = Σ_k |p(k) − q(k)| it inherits from R^N with the L1-norm, when k ↦ p(k) is the density of P ∈ M with respect to the counting measure. We also define M′ = {P ∈ M : P({k}) > 0, 1 ≤ k ≤ N} ⊂ M (and R_N = {p ∈ S_N : p(k) > 0, 1 ≤ k ≤ N} ⊂ S_N).
Proposition B.22 If the data is an i.i.d. sample of XN -valued random variables, then for
any n ≥ 1, any Borel prior Π : G → [0, 1] of full support on M , any P0 ∈ M and any ball B
around P_0, there exists an ε_0 > 0 such that,
$$P_0^n \,C\, e^{\frac{1}{2}n\epsilon^2}\, P_n^{\Pi|B}, \qquad (B.37)$$
for all 0 < ε < ε_0.
Proof By the inequality ‖P − Q‖ ≤ −P log(dQ/dP), the ball B around P_0 contains all sets of the form K(ε) = {P ∈ M′ : −P_0 log(dP/dP_0) < ε}, for some ε_0 > 0 and all 0 < ε < ε_0. Fix such an ε. Because the mapping P ↦ −P_0 log(dP/dP_0) is continuous on M′, there exists an open neighbourhood U of P_0 in M such that U ∩ M′ ⊂ K(ε). Since both M′ and U are open and Π has full support, Π(K(ε)) ≥ Π(U ∩ M′) > 0. With the help of example 3.2, we see that for every P ∈ K(ε),
$$e^{\frac{1}{2}n\epsilon^2}\,\frac{dP^n}{dP_0^n}(X^n) \;\ge\; 1,$$
for large enough n, P_0-almost-surely. Fatou's lemma again confirms that condition (ii) of lemma 3.3 is satisfied. Conclude that assertion (B.37) holds.
Example B.23 We continue with the situation where we observe an i.i.d. sample of random
variables X1 , X2 , . . . taking values in a space XN of finite order N . For given δ > 0, consider
the hypotheses,
$$B = \{P\in M : \|P - P_0\| < \delta\},\qquad V = \{Q\in M : \|Q - P_0\| > 2\delta\}.$$
Noting that M is compact (or with the help of the simplex representation S_N), one sees that entropy numbers of M are bounded, so the construction of example B.8 shows that uniform tests of exponential power e^{−nD} (for some D > 0) exist for B versus V. Application of proposition B.22 shows that the choice of an 0 < ε < ε_0 small enough guarantees that Π(V|X^n) goes to zero in P_0^n-probability. Conclude that the posterior resulting from a prior Π
of full support on M is consistent in total variation.
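Example B.23 is also easy to check by simulation: on a finite sample space a Dirichlet prior has full support and is conjugate, so the posterior mass of V = {Q : ‖Q − P_0‖ > 2δ} can be estimated directly from posterior draws. The particular P_0, δ and prior below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5
P0 = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
delta = 0.1
alpha = np.ones(N)                      # Dirichlet(1,...,1): full support on the simplex

def posterior_mass_V(n, draws=4000):
    data = rng.choice(N, size=n, p=P0)
    counts = np.bincount(data, minlength=N)
    Q = rng.dirichlet(alpha + counts, size=draws)     # conjugate posterior draws
    tv = np.abs(Q - P0).sum(axis=1)                   # ||Q - P0|| = sum_k |q(k) - p0(k)|
    return float(np.mean(tv > 2 * delta))

for n in [10, 50, 200, 1000]:
    print(n, posterior_mass_V(n))
```

The printed posterior mass of V decreases to zero as n grows, as the example asserts.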
Example B.24 With general reference to Ferguson (1973) [28], one way to construct nonparametric priors concerns a refining sequence of finite, Borel measurable partitions of a Polish
sample space, say X = R: to define a ‘random distribution’ P on X , we specify for each such
partition α = {A1 , . . . , AN }, a Borel prior Πα on SN , identifying (p1 , . . . , pN ) with the ‘random
variables’ (P (A1 ), . . . , P (AN )). Kolmogorov existence of the stochastic process describing all
P (A) in a coupled way subjects these Πα to consistency requirements expressing that if A1 , A2
partition A, then P (A1 ) + P (A2 ) must have the same distribution as P (A). If the partitions
refine appropriately, the resulting process describes a probability measure Π on the space of
Borel probability measures on X , i.e. a ‘random distribution’ on X . Well-known examples of
priors that can be constructed in this way are the Dirichlet process prior (for which a so-called
base-measure µ supplies appropriate parameters for Dirichlet distributions Πα , see [28]) and
Polya Tree prior (for detailed explanations, see, for example, [35]).
A special class of priors constructed in this way are the so-called tailfree priors. The process
prior associated with a family of Πα like above is said to be tailfree, if for all α, β such
that β = {B_1, ..., B_M} refines α = {A_1, ..., A_N}, the following holds: for all 1 ≤ k ≤ N, the vector (P(B_{l_1}|A_k), ..., P(B_{l_{L(k)}}|A_k)) (where the sets B_{l_1}, ..., B_{l_{L(k)}} ∈ β partition A_k) is independent of (P(A_1), ..., P(A_N)). Although somewhat technical, explicit control of the choice of the Π_α renders the property quite feasible in examples.
Fix a finite, measurable partition α = {A1 , . . . , AN }. For every n ≥ 1, denote by σα,n the
σ-algebra σ(αn ) ⊂ B n , generated by products of the form Ai1 × · · · × Ain ⊂ X n , with
1 ≤ i_1, ..., i_n ≤ N. Identify X_N with the collection {e_1, ..., e_N} ⊂ R^N and define the projection ϕ_α : X → X_N by,
$$\varphi_\alpha(x) = \bigl(1\{x\in A_1\},\ldots,1\{x\in A_N\}\bigr).$$
We view XN (respectively XNn ) as a probability space, with σ-algebra σN equal to the power
set (respectively σN,n , the power set of XNn ) and probability measures denoted Pα : σN → [0, 1]
that we identify with elements of SN . Denoting the space of all Borel probability measures on
X by M¹(X), we also define ϕ_{∗α} : M¹(X) → S_N,
$$\varphi_{*\alpha}(P) = \bigl(P(A_1),\ldots,P(A_N)\bigr),$$
which maps P to its restriction to σα,1 , a probability measure on XN . Under the projection
φα , any Borel-measurable random variable X taking values in X distributed P ∈ M 1 (X ) is
mapped to a random variable Zα = ϕα (X) that takes values in XN (distributed Pα = ϕ∗α (P )).
We also define Zαn = (ϕα (X1 ), . . . , ϕα (Xn )), for all n ≥ 1.
Let Πα denote a Borel prior on SN . The posterior on SN is then a Borel measure denoted
Πα (·|Zαn ), which satisfies, for all A ∈ σN,n and any Borel set V in SN ,
$$\int_A \Pi_\alpha(V|Z_\alpha^n)\,dP_n^{\Pi_\alpha} \;=\; \int_V P_\alpha^n(A)\,d\Pi_\alpha(P_\alpha),$$
by definition of the posterior. In the model for the original i.i.d. sample X^n, Bayes's rule takes the form, for all A′ ∈ B^n and all Borel sets V′ in M¹(X),
$$\int_{A'} \Pi(V'|X^n)\,dP_n^{\Pi} \;=\; \int_{V'} P^n(A')\,d\Pi(P),$$
defining the posterior for P. Now specify that V′ is the pre-image ϕ_{∗α}^{-1}(V) of a Borel measurable V in S_N: as a consequence of tailfreeness, the data-dependence of the posterior for such a V′, X^n ↦ Π(V′|X^n), is measurable with respect to σ_{α,n} (see Freedman (1965) [30] or Ghosh (2003) [35]). So there exists a function g_n : X_N^n → [0,1] such that,
$$\Pi(V'|X^n = x^n) = g_n\bigl(\varphi_\alpha(x_1),\ldots,\varphi_\alpha(x_n)\bigr),$$
for P_n^Π-almost-all x^n ∈ X^n. Then, for given A′ ∈ σ_{α,n} (with corresponding A ∈ σ_{N,n}),
$$\int_{A'} \Pi(V'|X^n)\,dP_n^{\Pi} \;=\; \int P^n\bigl(1_{A'}(X^n)\,\Pi(V'|X^n)\bigr)\,d\Pi(P) \;=\; \int P_\alpha^n\bigl(1_A(Z_\alpha^n)\,g_n(Z_\alpha^n)\bigr)\,d\Pi_\alpha(P_\alpha) \;=\; \int_A g_n(Z_\alpha^n)\,dP_n^{\Pi_\alpha},$$
while also,
$$\int_{V'} P^n(A')\,d\Pi(P) \;=\; \int_V P_\alpha^n(A)\,d\Pi_\alpha(P_\alpha).$$
This shows that Z_α^n ↦ g_n(Z_α^n) is a version of the posterior Π_α(·|Z_α^n). In other words, we can write Π(V′|X^n) = Π_α(V|ϕ_α(X^n)) = Π_α(V|Z_α^n), P_n^Π-almost-surely.
Denote the true distribution of a single observation from X^n by P_0. For any V′ of the form ϕ_{∗α}^{-1}(V) for some α and a neighbourhood V of P_{0,α} = ϕ_{∗α}(P_0) in S_N, the question whether Π(V′|X^n) converges to one in P_0-probability reduces to the question whether Π(V|Z_α^n) converges to one in P_{0,α}-probability. Remote contiguity then only has to hold as in example B.21. Another way of saying this is to note directly that, because X^n ↦ Π(V′|X^n) is σ_{α,n}-measurable, remote contiguity (as in definition 3.1) is to be imposed only for φ_n : X^n → [0,1] that are measurable with respect to σ_{α,n} (rather than B^n) for every n ≥ 1. That conclusion again reduces the remote contiguity requirement necessary for the consistency of the posterior for the parameter (P(A_1), ..., P(A_N)) to that of a finite sample space, as in example B.21. Full support of the prior Π_α then guarantees remote contiguity for exponential rates as required in condition (ii) of theorem 4.2. In the case of the Dirichlet process prior, full support of the base measure µ implies full support for all Π_α, if we restrict attention to partitions α = (A_1, ..., A_N) such that µ(A_i) > 0 for all 1 ≤ i ≤ N. (Particularly, we require P_0 ≪ µ for consistent estimation.)
Uniform tests of exponential power for weak neighbourhoods complete the proof that tailfree
priors lead to weakly consistent posterior distributions: (norm) consistency of Πα ( · |Zαn ) for
all α guarantees (weak T1 -)consistency of Π( · |X n ), in this proof based on remote contiguity
and theorem 4.2.
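For the Dirichlet process prior the projection argument above is completely explicit: the induced prior of (P(A_1), ..., P(A_N)) is Dirichlet(µ(A_1), ..., µ(A_N)) and, by conjugacy, the posterior depends on the sample only through the bin counts, i.e. through Z_α^n. The base measure, partition and sample size in the sketch below are illustrative assumptions (and scipy is assumed available).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
edges = np.array([-1.0, 0.0, 1.0])            # partition alpha of R into N = 4 cells

def phi_alpha(x):
    """Projection of observations onto X_N: the index of the cell A_k containing x."""
    return np.searchsorted(edges, x)

# Base measure mu = c * N(0,1); mu(A_k) obtained from the normal CDF over each cell.
c = 2.0
cdf = stats.norm.cdf(np.concatenate(([-np.inf], edges, [np.inf])))
mu_A = c * np.diff(cdf)

X = rng.standard_normal(200)                  # i.i.d. sample with P0 = N(0,1)
counts = np.bincount(phi_alpha(X), minlength=len(mu_A))
posterior = stats.dirichlet(mu_A + counts)    # posterior of (P(A_1),...,P(A_N)) given Z_alpha^n
print(posterior.mean())
```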
B.7 Credible/confidence sets in metric spaces
When enlarging credible sets to confidence sets using a collection of subsets B as in definition 4.11, measurability of confidence sets is guaranteed if B(θ) is open in Θ for all θ ∈ Θ.
Example B.25 Let G be the Borel σ-algebra for a uniform topology on Θ, like the weak and
metric topologies of appendix A. Let W denote a symmetric entourage and, for every θ ∈ Θ,
define B(θ) = {θ0 ∈ Θ : (θ, θ0 ) ∈ W }, a neighbourhood of θ. Let D denote any credible set.
A confidence set associated with D under B is any set C 0 such that the complement of D
contains the W -enlargement of the complement of C 0 . Equivalently (by the symmetry of W ),
the W -enlargement of D does not meet the complement of C 0 . Then the minimal confidence
set C associated with D is the W -enlargement of D. If the B(θ) are all open neighbourhoods
(e.g. whenever W is a symmetric entourage from a fundamental system for the uniformity on
Θ), the minimal confidence set associated with D is open. The most common examples include
the Hellinger or total-variational metric uniformities, but weak topologies (like Prohorov’s or
Tn -topologies) and polar topologies are uniform too.
Example B.26 To illustrate example B.25 with a customary situation, consider a parameter
space Θ with parametrization θ 7→ Pθn , to define a model for i.i.d. data X n = (X1 , . . . , Xn ) ∼
P_{θ_0}^n, for some θ_0 ∈ Θ. Let D be the class of all pre-images of Hellinger balls, i.e. sets D(θ, ε) ⊂ Θ of the form,
$$D(\theta,\epsilon) = \bigl\{\theta'\in\Theta : H(P_\theta, P_{\theta'}) < \epsilon\bigr\},$$
for any θ ∈ Θ and ε > 0. After choice of a Kullback-Leibler prior Π for θ and calculation of the posteriors, choose D_n equal to the pre-image D(θ̂_n, ε̂_n) of a Hellinger ball with credible level 1 − o(a_n) (e.g. the one with the smallest radius, if that exists), where a_n = exp(−nα²) for some α > 0. Assume, now, that for some 0 < ε < α, the W of example B.25 is the Hellinger entourage W = {(θ, θ′) : H(P_θ, P_{θ'}) < ε}. Since Kullback-Leibler neighbourhoods are contained in Hellinger balls, the sets D(θ̂_n, ε̂_n + ε) (associated with D_n under the entourage W) form a sequence of asymptotic confidence sets, provided the prior satisfies (2). If we make ε vary with n, neighbourhoods of the form B_n in example 4.6 are contained in Hellinger balls of radius ε_n, and in that case,
$$C_n(X^n) = D(\hat\theta_n, \hat\epsilon_n + \epsilon_n),$$
is a sequence of asymptotic confidence sets, provided that the prior satisfies (16).
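A sketch of the enlargement step on a finite sample space (where Hellinger distances are directly computable) may help fix ideas: take the smallest Hellinger ball around a centre with the required posterior mass and widen its radius by ε. The centre, prior, data and the values of α and ε below are illustrative choices, not those of the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

N, n = 4, 400
P0 = np.array([0.4, 0.3, 0.2, 0.1])
data = rng.choice(N, size=n, p=P0)
post = rng.dirichlet(1.0 + np.bincount(data, minlength=N), size=5000)

centre = post.mean(axis=0)                       # a convenient centre for the credible ball
alpha, eps = 0.2, 0.05                           # requires 0 < eps < alpha
a_n = np.exp(-n * alpha ** 2)                    # credible level 1 - a_n
radii = np.array([hellinger(q, centre) for q in post])
credible_radius = np.quantile(radii, 1.0 - a_n)  # smallest radius with posterior mass 1 - a_n
print("credible radius:", credible_radius, "confidence radius:", credible_radius + eps)
```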
C Proofs
In this section of the appendix, proofs from the main text are collected.
Proof (theorem 1.1)
The argument (see, e.g., Doob (1949) [26] or Ghosh and Ramamoorthi (2003) [35]) relies on
martingale convergence and a demonstration of the existence of a measurable f : X ∞ → P
such that f (X1 , X2 , . . .) = P , P ∞ -almost-surely for all P ∈ P (see also propositions 1 and 2
of section 17.7 in [51]).
Proof (proposition 2.2)
Due to Bayes’s Rule (A.22) and monotone convergence,
$$\int_B P_\theta\bigl((1-\phi)\,\Pi(V|X)\bigr)\,d\Pi(\theta) \;\le\; \int (1-\phi)\,\Pi(V|X)\,dP^{\Pi} \;=\; \int_V P_\theta(1-\phi)\,d\Pi(\theta).$$
Inequality (4) follows from the fact that Π(V |X) ≤ 1.
Proof (theorem 2.4)
Condition (i) implies (ii) by dominated convergence. Assume (ii) and note that by lemma 2.2,
$$\int P_{\theta,n}\Pi(V|X^n)\,d\Pi(\theta|B) \;\to\; 0.$$
Assuming that the observations X^n are coupled and can be thought of as projections of a random variable X ∈ X^∞ with distribution P_θ, martingale convergence in L¹(X^∞ × Θ) (relative to the probability measure Π* defined by Π*(A × B) = ∫_B P_θ(A) dΠ(θ) for measurable A ⊂ X^∞ and B ⊂ Θ), shows there is a measurable g : X^∞ → [0,1] such that,
$$\int P_\theta\bigl|\Pi(V|X^n) - g(X)\bigr|\,d\Pi(\theta|B) \;\to\; 0.$$
So ∫ P_θ g(X) dΠ(θ|B) = 0, implying that g = 0, P_θ-almost-surely for Π-almost-all θ ∈ B. Using martingale convergence again (now in L^∞(X^∞ × Θ)), conclude Π(V|X^n) → 0, P_θ-almost-surely for Π-almost-all θ ∈ B, from which (iii) follows. Choose φ(X^n) = Π(V|X^n) to conclude that (i) follows from (iii).
Proof (proposition 2.5)
Apply [51], section 17.1, proposition 1 with the indicator for V . See also [11].
Proof (lemma 3.3)
Assume (i). Let φ_n : X_n → [0,1] be given and assume that P_nφ_n = o(a_n). By Markov's inequality, for every ε > 0, P_n(a_n^{-1}φ_n > ε) = o(1). By assumption, it now follows that φ_n → 0 in Q_n-probability. Because 0 ≤ φ_n ≤ 1 the latter conclusion is equivalent to Q_nφ_n = o(1).
Assume (iv). Let ε > 0 and φ_n : X_n → [0,1] be given. There exist c > 0 and N ≥ 1 such that for all n ≥ N,
$$Q_n\phi_n \;<\; c\,a_n^{-1}P_n\phi_n + \tfrac{\epsilon}{2}.$$
If we assume that P_nφ_n = o(a_n) then there is an N′ ≥ N such that c a_n^{-1}P_nφ_n < ε/2 for all n ≥ N′. Consequently, for every ε > 0, there exists an N′ ≥ 1 such that Q_nφ_n < ε for all n ≥ N′.
To show that (ii) ⇒ (iv), let µ_n = P_n + Q_n and denote µ_n-densities for P_n, Q_n by p_n, q_n : X_n → R. Then, for any n ≥ 1, c > 0,
$$\bigl\|Q_n - Q_n\wedge c\,a_n^{-1}P_n\bigr\| \;=\; \sup_{A\in B_n}\Bigl(\int_A q_n\,d\mu_n - \int_A q_n\,d\mu_n \wedge \int_A c\,a_n^{-1}p_n\,d\mu_n\Bigr) \;\le\; \sup_{A\in B_n}\int_A \bigl(q_n - q_n\wedge c\,a_n^{-1}p_n\bigr)\,d\mu_n \;=\; \int 1\{q_n > c\,a_n^{-1}p_n\}\,\bigl(q_n - c\,a_n^{-1}p_n\bigr)\,d\mu_n. \qquad (C.38)$$
Note that the right-hand side of (C.38) is bounded above by Q_n(dP_n/dQ_n < c^{-1}a_n).
To show that (iii) ⇒ (iv), it is noted that, for all c > 0 and n ≥ 1,
$$0 \;\le\; c\,a_n^{-1}P_n\bigl(q_n > c\,a_n^{-1}p_n\bigr) \;\le\; Q_n\bigl(q_n > c\,a_n^{-1}p_n\bigr) \;\le\; 1,$$
so (C.38) goes to zero if lim inf_{n→∞} c a_n^{-1} P_n(dQ_n/dP_n > c a_n^{-1}) = 1.
To prove that (v) ⇔ (ii), note that Prohorov's theorem says that weak convergence of a subsequence within any subsequence of a_n(dP_n/dQ_n)^{-1} under Q_n (see appendix A, notation and conventions) is equivalent to the asymptotic tightness of (a_n(dP_n/dQ_n)^{-1} : n ≥ 1) under Q_n, i.e. for every ε > 0 there exists an M > 0 such that Q_n(a_n(dP_n/dQ_n)^{-1} > M) < ε for all n ≥ 1. This is equivalent to (ii).
Proof (proposition 3.6)
For every ε > 0, there exists a constant δ > 0 such that,
$$P_{\theta_0,n}\Bigl(a_n^{-1}\frac{dP_{\theta,n}}{dP_{\theta_0,n}}(X^n) > \frac{1}{\delta}\Bigr) \;<\; \epsilon,$$
for all θ ∈ B, n ≥ 1. For this choice of δ, condition (ii) of lemma 3.3 is satisfied for all θ ∈ B simultaneously, and c.f. the proof of said lemma, for given ε > 0, there exists a c > 0 such that,
$$\bigl\|P_{\theta_0,n} - P_{\theta_0,n}\wedge c\,a_n^{-1}P_{\theta,n}\bigr\| \;<\; \epsilon, \qquad (C.39)$$
for all θ ∈ B, n ≥ 1. Now note that for any A ∈ B_n,
$$0 \;\le\; P_{\theta_0,n}(A) - P_{\theta_0,n}(A)\wedge c\,a_n^{-1}P_n^{\Pi|B}(A) \;\le\; \int \bigl(P_{\theta_0,n}(A) - P_{\theta_0,n}(A)\wedge c\,a_n^{-1}P_{\theta,n}(A)\bigr)\,d\Pi(\theta|B).$$
Taking the supremum with respect to A, we find the following inequality in terms of total-variational norms,
$$\bigl\|P_{\theta_0,n} - P_{\theta_0,n}\wedge c\,a_n^{-1}P_n^{\Pi|B}\bigr\| \;\le\; \int \bigl\|P_{\theta_0,n} - P_{\theta_0,n}\wedge c\,a_n^{-1}P_{\theta,n}\bigr\|\,d\Pi(\theta|B).$$
Since the total-variational norm is bounded and Π(·|B) is a probability measure, Fatou's lemma says that,
$$\limsup_{n\to\infty}\bigl\|P_{\theta_0,n} - P_{\theta_0,n}\wedge c\,a_n^{-1}P_n^{\Pi|B}\bigr\| \;\le\; \limsup_{n\to\infty}\int \bigl\|P_{\theta_0,n} - P_{\theta_0,n}\wedge c\,a_n^{-1}P_{\theta,n}\bigr\|\,d\Pi(\theta|B),$$
and the r.h.s. equals zero c.f. (C.39). According to condition (iv) of lemma 3.3 this implies the assertion.
Proof (lemma 3.7)
Fix n ≥ 1. Because B_n ⊂ C_n, for every A ∈ B_n, we have,
$$\int_{B_n} P_{\theta,n}(A)\,d\Pi(\theta) \;\le\; \int_{C_n} P_{\theta,n}(A)\,d\Pi(\theta),$$
so P_n^{Π_n|B_n}(A) ≤ (Π_n(C_n)/Π_n(B_n)) P_n^{Π_n|C_n}(A). So if for some sequence φ_n : X_n → [0,1], we have P_n^{Π_n|C_n}φ_n(X^n) = o(Π_n(B_n)/Π_n(C_n)), then the P_n^{Π_n|B_n}-expectations of φ_n(X^n) are o(1), proving the first claim. If P_n^{Π_n|C_n}φ_n(X^n) = o(a_n Π_n(B_n)/Π_n(C_n)), then P_n^{Π_n|B_n}φ_n(X^n) = o(a_n) and, hence, P_nφ_n(X^n) = o(1).
Proof (theorem 4.2)
Choose B_n = B, V_n = V and use proposition 2.3 to see that P_n^{Π|B}Π(V|X^n) is upper bounded by Π(B)^{-1} times the l.h.s. of (14) and, hence, is of order o(a_n). Condition (ii) then implies that P_{θ_0,n}Π(V|X^n) = o(1), which is equivalent to Π(V|X^n) → 0 in P_{θ_0,n}-probability, since 0 ≤ Π(V|X^n) ≤ 1, P_{θ_0,n}-almost-surely, for all n ≥ 1.
Proof (corollary 4.3)
A prior Π satisfying condition (ii) guarantees that P_0^n ≪ P_n^Π for all n ≥ 1, c.f. the remark preceding proposition A.7. Choose ε such that ε² < D. Recall that for every P ∈ B(ε), the exponential lower bound (10) for likelihood ratios dP^n/dP_0^n exists. Hence the limes inferior of exp(½nε²)(dP^n/dP_0^n)(X^n) is greater than or equal to one with P_0^∞-probability one. Then, with the use of Fatou's lemma and the assumption that Π(B(ε)) > 0,
$$\liminf_{n\to\infty}\;\frac{e^{nD}}{\Pi(B)}\int_B \frac{dP_\theta^n}{dP_{\theta_0}^n}(X^n)\,d\Pi(\theta) \;\ge\; 1,$$
with P_{θ_0}^∞-probability one, showing that sufficient condition (ii) of lemma 3.3 holds. Conclude that,
$$P_0^n \,C\, e^{nD}\,P_n^{\Pi|B},$$
and use theorem 4.2 to see that Π(U|X^n) → 1 in P_{θ_0,n}-probability.
Proof (theorem 4.4)
Proposition 2.3 says that P_n^{Π_n|B_n}Π(V_n|X^n) is of order o(b_n^{-1}a_n). Condition (iii) then implies that P_{θ_0,n}Π(V_n|X^n) = o(1), which is equivalent to Π(V_n|X^n) → 0 in P_{θ_0,n}-probability, since 0 ≤ Π(V_n|X^n) ≤ 1, P_{θ_0,n}-almost-surely for all n ≥ 1.
Proof (theorem 4.12)
Fix n ≥ 1 and let Dn denote a credible set of level 1 − o(an ), defined for all x ∈ Fn ⊂ Xn
such that PnΠn (Fn ) = 1. For any x ∈ Fn , let Cn (x) denote a confidence set associated with
Dn (x) under B. Due to definition 4.11, θ0 ∈ Θ \ Cn (x) implies that Bn (θ0 ) ∩ Dn (x) = ∅.
Hence the posterior mass of B(θ0 ) satisfies Π(Bn (θ0 )|x) = o(an ). Consequently, the function
x 7→ 1{θ0 ∈ Θ \ Cn (x)} Π(B(θ0 )|x) is o(an ) for all x ∈ Fn . Integrating with respect to the n-th
prior predictive distribution and dividing by the prior mass of Bn (θ0 ), one obtains,
Z
1
an
1{θ0 ∈ Θ \ Cn } Π(Bn (θ0 )|X n ) dPnΠn ≤
.
Πn (Bn (θ0 ))
bn
Applying Bayes’s rule in the form (A.22), we see that,
Z
an
Πn |Bn (θ0 )
n
Pn
.
θ0 ∈ Θ \ Cn (X ) = Pθ,n θ0 ∈ Θ \ Cn (X n ) dΠn (θ|Bn ) ≤
bn
By the definition of remote contiguity, this implies asymptotic coverage c.f. (18).
Proof (corollary 4.13)
Define a_n = exp(−C′nε_n²), b_n = exp(−Cnε_n²), so that the D_n are credible sets of level 1 − o(a_n), the sets B_n of example 4.6 satisfy condition (i) of theorem 4.12 and b_n a_n^{-1} = exp(cnε_n²) for some c > 0. By (17), we see that condition (ii) of theorem 4.12 is satisfied. The assertion now follows.
References
[1] E. Abbe, A. Bandeira and G. Hall, Exact recovery in the stochastic block model,
arXiv:1405.3267 [cs.SI]
[2] A. Barron, Discussion on Diaconis and Freedman: the consistency of Bayes estimates.
Ann. Statist. 14 (1986), 26–30.
[3] A. Barron, The exponential convergence of posterior probabilities with implications for
Bayes estimators of density functions, Technical Report 7 (1988), Dept. Statistics, Univ.
Illinois.
[4] A. Barron, M. Schervish and L. Wasserman, The consistency of distributions in
nonparametric problems, Ann. Statist. 27 (1999), 536-561.
[5] M. Bayarri and J. Berger, The Interplay of Bayesian and Frequentist Analysis, Statist.
Sci. 19 (2004), 58–80.
[6] P. Bickel and P. Chen, A nonparametric view of network models and Newman-Girvan
and other modularities, Proc. Natl. Acad. Sci. USA 106 (2009), 21068-21073.
[7] L. Birgé, Approximation dans les espaces métriques et théorie de l’estimation, Zeitschrift
für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65 (1983), 181–238.
[8] L. Birgé, Sur un théorème de minimax et son application aux tests, Probability and
Mathematical Statistics 3 (1984), 259–282.
[9] L. Birgé, and P. Massart, From model selection to adaptive estimation, Festschrift
for Lucien Le Cam (eds. D. Pollard, E. Torgesen, G. Yang), Springer, New York (1997),
55–87.
[10] L. Birgé, and P. Massart, Gaussian model selection, J. Eur. Math. Soc. 3 (2001),
203–268.
[11] L. Breiman, L. Le Cam and L. Schwartz, Consistent Estimates and Zero-One Sets,
Ann. Math. Statist. 35 (1964), 157–161.
[12] P. Bühlman and S. van de Geer, Statistics for High-Dimensional Data, Springer
Verlag, New York (2011).
[13] T. Cai, M. Low and Y. Xia, Adaptive confidence interval for regression functions under
shape constraints, Ann. Statist. 41 (2013), 722–750.
[14] C. Carvalho, N. Polson, and J. Scott, The horseshoe estimator for sparse signals,
Biometrika 97 (2010), 465–480.
[15] I. Castillo and A. van der Vaart, Needles and straw in a haystack: posterior concentration for possibly sparse sequences, Ann. Statist. 40 (2012), 2069–2101.
[16] D. Choi, P. Wolfe and E. Airoldi, Stochastic block models with a growing number
of classes, Biometrika 99 (2012), 273–284.
[17] D. Cox, An analysis of Bayesian inference for non-parametric regression, Ann. Statist.
21 (1993), 903–924.
[18] A. Decelle, F. Krzakala, C. Moore and L. Zdeborová, Inference and Phase
Transitions in the Detection of Modules in Sparse Networks, Phys. Rev. Lett. 107 (2011),
065701
[19] P. De Blasi, A. Lijoi, and I. Prünster, An asymptotic analysis of a class of discrete
nonparametric priors, Statist. Sinica 23 (2013), 1299–1322.
[20] P. Diaconis and D. Freedman, On the Consistency of Bayes Estimates, Ann. Statist.
14 (1986), 1–26.
[21] P. Diaconis and D. Freedman, On Inconsistent Bayes Estimates of Location, Ann.
Statist. 14 (1986), 68–87.
[22] P. Diaconis and D. Freedman, Nonparametric Binary Regression: A Bayesian Approach, Ann. Statist. 21 (1993), 2108–2137.
[23] P. Diaconis and D. Freedman, Consistency of Bayes estimates for nonparameteric
regression: normal theory, Bernoulli 4 (1998), 411–444.
[24] D. Donoho, I. Johnstone, J. Hoch and A. Stern, Maximum entropy and the nearly
black object, J. Roy. Statist. Soc. B54 (1992), 41–81.
[25] D. Donoho, and I. Johnstone, Minimax risk over `p -balls for `q -error, Probab. Theory
Related Fields, 99 (1994), 277-303.
[26] J. Doob, Application of the theory of martingales, Colloque international Centre nat.
Rech. Sci., Paris (1949), 22–28.
[27] P. Eichelsbacher and A. Ganesh, Bayesian inference for Markov chains, J. Appl.
Probab. 39 (2002), 91–99.
[28] T. Ferguson, A Bayesian Analysis of Some Nonparametric Problems, Ann. Statist. 1
(1973), 209–230.
[29] D. Freedman, On the asymptotic behavior of Bayes estimates in the discrete case I,
Ann. Math. Statist. 34 (1963), 1386–1403.
[30] D. Freedman, On the asymptotic behavior of Bayes estimates in the discrete case II,
Ann. Math. Statist. 36 (1965), 454–456.
[31] D. Freedman, and P. Diaconis, On Inconsistent Bayes Estimates in the Discrete Case,
Ann. Statist. 11 (1983), 1109–1118.
[32] D. Freedman, On the Bernstein-von Mises theorem with infinite dimensional parameters, Ann. Statist. 27 (1999), 1119–1140.
[33] S. Ghosal, J. Ghosh, and R. Ramamoorthi, Consistency issues in Bayesian nonparametrics, In Asymptotics, Nonparametrics and Time Series: A Tribute to Madan Lal
Puri (Subir Ghosh, ed.) Dekker, New York (1999), 639-667.
[34] S. Ghosal, J. Ghosh and A. van der Vaart, Convergence rates of posterior distributions, Ann. Statist. 28 (2000), 500–531.
[35] J. Ghosh and R. Ramamoorthi, Bayesian nonparametrics, Springer Verlag, New York
(2003).
[36] S. Ghosal and Y.-Q. Tang, Bayesian Consistency for Markov Processes, Sankhya 68
(2006), 227–239.
[37] P. Glynn and D. Ormoneit, Hoeffding’s inequality for uniformly ergodic Markov chains,
Statist. Probab. Lett. 56 (2002), 143–146.
[38] E. Gobet, LAN property for ergodic diffusions with discrete observations, Ann. I. H.
Poincaré PR 38 (200), 711–737.
[39] P. Greenwood and A. Shiryaev, Contiguity and the statistical invariance principle,
Gordon and Breach, New York (1985).
[40] J. Hajék and Z. S̆idák, Theory of rank tests, Academic Press, New York (1967).
[41] K. Hayashi, T. Konishi and T. Kawamoto, A tractable fully Bayesian method for the
stochastic block model, arxiv:1602.02256 [cs.LG]
[42] N. Hengartner, P. Stark, Finite-sample confidence envelopes for shape-restricted densities, Ann. Statist. 23 (1995), 525–550.
[43] R. Höpfner, J. Jacod and L. Ladelli, Local asymptotic normality and mixed normality for Markov statistical models, Probab. Th. Rel. Fields (1990) 86: 105.
[44] I. Johnstone and B. Silverman, Needles and straw in haystacks: empirical Bayes
estimates of possibly sparse sequences, Ann. Statist. 32 (2004), 1594–1649.
[45] B. Kleijn and A. van der Vaart, The Bernstein-Von-Mises theorem under misspecification, Electron. J. Statist. 6 (2012), 354–381.
[46] B. Kleijn and Y.-Y. Zhao, Criteria for posterior consistency, arxiv:1308.1263
[MATH.ST].
[47] L. Le Cam and L. Schwartz, A necessary and sufficient condition for the existence of
consistent estimates, Ann. Math. Statist. 31 (1960), 140–150.
[48] L. Le Cam, Locally asymptotically normal families of distributions, University of California Publications in Statistics 3 (1960), 37-98.
[49] L. Le Cam, Convergence of estimates under dimensionality restrictions, Ann. Statist. 1
(1973), 38–55.
[50] L. Le Cam, An inequality concerning Bayes estimates, University of California, Berkeley
(197X), unpublished.
[51] L. Le Cam, Asymptotic methods in statistical decision theory, Springer, New York
(1986).
[52] L. Le Cam and G. Yang, On the preservation of local asymptotic normality under
information loss, Ann. Statist. 16 (1988), 483–520.
[53] L. Le Cam and G. Yang, Asymptotics in Statistics: some basic concepts, Springer, New
York (1990).
[54] A. Lijoi, I. Prünster and S. Walker, Extending Doob’s consistency theorem to nonparametric densities, Bernoulli 10 (2004), 651–663.
[55] M. Low, On nonparametric confidence intervals, Ann. Statist. 25 (1997), 2547–2554.
[56] S. Meyn and R. Tweedie, Markov Chains and Stochastic Stability, Cambridge University Press, New York (2009).
[57] T. Mitchell, and J. Beauchamp, Bayesian Variable Selection in Linear Regression, J.
Amer. Statist. Assoc. 83 (1988), 1023–1032.
[58] E. Mossel, J. Neeman and A. Sly, Consistency thresholds for the planted bisection
model, Electron. J. Probab. 21 (2016), 1–24.
[59] F. Papangelou, Large-deviations and the Bayesian estimation of higher-order Markov
chains, J. Appl. Probab. 33 (1996), 18–27.
[60] G. Roussas, Contiguity of probability measures: some applications in statistics, Cambridge Tracts in Mathematics and Mathematical Physics 63 (1972), Cambridge University
Press, London-New York.
[61] L. Schwartz, Consistency of Bayes procedures, PhD. thesis, Dept. of Statistics, University of California, Berkeley (1961).
[62] L. Schwartz, On Bayes procedures, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 4 (1965), 10–26.
[63] X. Shen and L. Wasserman, Rates of convergence of posterior distributions, Ann.
Statist. 29 (2001), 687–714.
[64] K. Nowicki and T. Snijders, Estimation and Prediction for Stochastic Blockstructures,
J. Amer. Statist. Assoc. 96 (2001), 1077–1087.
[65] H. Strasser, Mathematical theory of statistics, de Gruyter, Berlin, 1985.
[66] C. Strelioff, J. Crutchfield and A. Hubler, Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling, arXiv:math/0703715
[] (2007).
[67] S. Suwan, D. Lee, R. Tang, D. Sussman, M. Tang and C. Priebe Empirical Bayes
estimation for the stochastic blockmodel, Electron. J. Statist. 10 (2016), 761–782.
[68] B. Szabó, A. van der Vaart and J. van Zanten, Frequentist coverage of adaptive
nonparametric Bayesian credible sets, Ann. Statist. 43 (2015), 1391–1428.
[69] J. Taylor and R. Tibshirani, Statistical learning and selective inference, Proc. Natl.
Acad. Sc. 112 (2016), 7629–7634.
[70] S. Walker, New approaches to Bayesian consistency, Ann. Statist. 32 (2004), 2028–2043.
[71] S. Walker, A. Lijoi and I. Prünster, Data tracking and the understanding of Bayesian
consistency, Biometrika 92 (2005), 765–778.
[72] S. Walker, A. Lijoi and I. Prünster, On rates of convergence for posterior distributions in infinite-dimensional models, Ann. Statist. 35 (2007), 738–746.
[73] L. Wasserman, Bayesian Model Selection and Model Averaging, J. Math. Psychology
44 (2000), 92-107.
[74] G. Yang, A conversation with Lucien Le Cam, Statist. Sc. 14 (1999), 223–241.
[75] C.-H. Zhang and J. Huang, The sparsity and bias of the LASSO selection in highdimensional linear regression, Ann. Statist. 36 (2008), 1567–1594.
Stochastic Runtime Analysis of a Cross-Entropy
Algorithm for Traveling Salesman Problems
Zijun Wua,1,∗, Rolf H. Möhringb,2,∗∗, Jianhui Laic,
arXiv:1612.06962v2 [] 11 Oct 2017
a
Beijing Institute for Scientific and Engineering Computing (BISEC), Pingle Yuan 100,
Beijing, P. R. China
b
Beijing Institute for Scientific and Engineering Computing (BISEC), Pingle Yuan 100,
Beijing, P. R. China
c
College of Metropolitan Transportation, Beijing University of Technology, Pingle Yuan
100, Beijing, P. R. China
Abstract
This article analyzes the stochastic runtime of a Cross-Entropy Algorithm
mimicking a Max-Min Ant System with iteration-best reinforcement. It
investigates the impact of the magnitude of the sample size on the runtime to
find optimal solutions for TSP instances.
For simple TSP instances that have a {1, n}-valued distance function and
a unique optimal solution, we show that sample size N ∈ ω(ln n) results in a
stochastically polynomial runtime, and N ∈ O(ln n) results in a stochastically
exponential runtime, where “stochastically” means with a probability of 1 −
n^{−ω(1)}, and n represents the number of cities. In particular, for N ∈ ω(ln n),
we prove a stochastic runtime of O(N · n6 ) with the vertex-based random
solution generation, and a stochastic runtime of O(N · n³ ln n) with the edge-based random solution generation. These runtimes are very close to the best
known expected runtime for variants of Max-Min Ant System with best-so-far reinforcement by choosing a small N ∈ ω(ln n). They are obtained for
∗
Principal corresponding author
Corresponding author
Corresponding author
Email addresses: [email protected] (Zijun Wu), [email protected]
(Rolf H. Möhring), [email protected] (Jianhui Lai)
1
The author is also affiliated with the School of Applied Mathematics and Physics at
Beijing University of Technology
2
The author is a professor emeritus of mathematics at Berlin University of Technology
∗∗
Preprint submitted to Theoretical Computer Science
October 12, 2017
the stronger notion of stochastic runtime, and analyze the runtime in most
cases.
We also inspect more complex instances with n vertices positioned on
an m × m grid. When the n vertices span a convex polygon, we obtain
a stochastic runtime of O(n4 m5+ ) with the vertex-based random solution
generation, and a stochastic runtime of O(n3 m5+ ) for the edge-based random
solution generation. When there are k ∈ O(1) many vertices inside a convex
polygon spanned by the other n − k vertices, we obtain a stochastic runtime
of O(n4 m5+ + n6k−1 m ) with the vertex-based random solution generation,
and a stochastic runtime of O(n3 m5+ + n3k m ) with the edge-based random
solution generation. These runtimes are better than the expected runtime
for the so-called (µ+λ) EA reported in a recent article, and again obtained
for the stronger notion of stochastic runtime.
Keywords: probabilistic analysis of algorithms, stochastic runtime analysis
of evolutionary algorithms, Cross Entropy algorithm, Max-Min Ant
System, (µ+λ) EA.
1. Introduction
The Cross Entropy (CE) algorithm is a general-purpose evolutionary algorithm (EA) that has been applied successfully to many N P-hard combinatorial optimization problems, see e.g. the book [1] for an overview. It
was initially designed for rare event simulation by Rubinstein [2] in 1997,
and thereafter formulated as an optimization tool for both continuous and
discrete optimization (see [3]).
CE has much in common with the famous ant colony optimization (ACO,
see [4]) and the estimation of distribution algorithms (EDAs, see [5]). They
all belong to the so-called model-based search paradigm (MBS), see [6]. Instead of only manipulating solutions, which is very typical in traditional
heuristics like Genetic Algorithms [7] and Local Search [8] and others, MBS
algorithms attempt to optimize the solution reproducing mechanism. In each
iteration, they produce new solutions by sampling from a probabilistic distribution on the search space. The distribution is often called a model in the
literature (see e.g. [6] and [9]). This model evolves iteratively by incorporating information from some elite solutions occurring in the search history,
so as to asymptotically model the spread of optimal solutions in the search
2
space. See the recent Thesis [9] for more details on MBS algorithms and
their mathematical properties.
An important issue for MBS algorithms is to determine a suitable size
for the sampling in each iteration. A large sample size makes each iteration
unwieldy, however a small sample size may mislead the underlying search
due to the randomness in the sampling. Sample size reflects the iterative
complexity (computational complexity in each iteration). Whether a large
sample size is harmful depends on the required optimization time (i.e., the
total number of iterations required to reach an optimal solution). This article
aims to shed a light on this issue by theoretically analyzing the relation
between sample size and optimization time for a CE variant that includes
also some essential features of the famous Max-Min Ant System (MMAS
[10]). To this end, a thorough runtime analysis is needed.
The theoretical runtime analysis of EAs has gained rapidly growing interest in recent years, see e.g. [11], [12], [13], [14], [15], [16], [17], [18], [19],
[20], [21], [22], [23], [24], and [25]. In the analysis, an oracle-based view of
computation is adopted, i.e., the runtime of an algorithm is expressed as
the total number of solutions evaluated before reaching an optimal solution.
Since the presence of randomness, the runtime of an EA is often conveyed
in expectation or with high probability. Due to the famous No Free Lunch
Theorem [26], the analysis must be problem-specific. The first steps towards
this type of analysis were made for the so-called (1+1) EA [11] on some test
problems that use pseudo boolean functions as cost functions, e.g., OneMax
[15], LeadingOnes [27] and BinVar [11]. Recent research addresses problems of practical importance, such as the computing a minimum spanning
trees (MST) [28], matroid optimization [29], traveling salesman problem [30],
the shortest path problem [23], the maximum satisfiability problem [31] and
the max-cut problem [32].
Runtime analysis generally considers two cases: expected runtime analysis
and stochastic runtime analysis. Expected runtime is the average runtime of
an algorithm on a particular problem, see, e.g., the runtime results of (1 + 1)
EA reported in [11]. Expected runtime reflects the oracle-based average performance of an algorithm. A mature technique for expected runtime analysis
is the so-called drift analysis [12]. However, this technique requires that the
algorithm has a finite expected runtime for the underlying problem. By [33],
drift analysis is not applicable to the traditional CE [3].
An algorithm with a smaller expected runtime need not be more efficient,
see [34] for details. In contrast, stochastic runtime provides a better under3
standing of the performance of a (randomized) EA. Stochastic runtime is a
runtime result conveyed with an overwhelming probability guarantee (see,
e.g., the classic runtime result of 1-ANT in [15]), where an overwhelming
probability means a probability tending to 1 superpolynomially fast in the
problem size. It therefore reflects the efficiency of an algorithm for most cases
in the sense of uncertainty. This article is concerned with stochastic runtime
analysis, aiming to figure out the relation between stochastic runtime and
magnitude of the sample size.
Runtime analysis of CE algorithms has been initiated in [33], where Wu
and Kolonko proved a pioneering stochastic runtime result for the traditional
CE on the standard test problem LeadingOnes. As a continuation of the
study of [33], Wu et al [34] further investigated the stochastic runtime of
the traditional CE on another test problem OneMax. The runtime results
reported in [33] and [34] showed that sample size plays a crucial role in
efficiently finding an optimal solution. In particular, Wu et al [34] showed
that if the problem size n is moderately adapted to the sample size N , then
4
the stochastic runtime of the traditional CE on OneMax is O(n1.5+ 3 ) for
arbitrarily small > 0 and a constant smoothing parameter ρ > 0, which
beats the best-known stochastic runtime O(n2 ) reported in [13] for the classic
1-ANT algorithm, although 1-ANT employs a much smaller sample size (i.e.,
sample size equals one). Moreover, by imposing upper and lower bounds on
the sampling probabilities as was done in MMAS [10], Wu et al [34] showed
further that the stochastic runtime of the resulting CE can be significantly
improved even in a very rugged search space.
The present article continues the stochastic runtime analysis of [34], but
now in combinatorial optimization with a study of CE on the traveling salesman problem (TSP). We emphasize the impact of the magnitude of N on
the stochastic runtime, put ρ = 1, and consider a CE variant resembling an
MMAS with iteration-best reinforcement under two different random solution generation mechanisms, namely, a vertex-based random solution generation and an edge-based random solution generation.
Stochastic runtime for MMAS with iteration-best reinforcement on simple problems like OneMax has been studied in [20] and [25]. In particular,
Neumann et al [20] showed that to obtain a stochastically polynomial runtime for OneMax, N/ρ ∈ Ω(ln n) is necessary. We shall not only extend
this to TSP for the case of ρ = 1, but also prove that N ∈ ω(ln n) is already
sufficient to guarantee a stochastically polynomial runtime for simple TSP
instances.
4
TSP is a famous N P-complete combinatorial optimization problem. It
concerns finding a shortest Hamiltonian cycle on a weighted complete graph.
Existing algorithms exactly solving TSP generally have a prohibitive complexity. For instance, the Held-Karp algorithm [35] solves the problem with
a complexity of O(n2 2n ). A well-known polynomial time approximation algorithm for metric TSP is the so-called Christofides algorithm [36], which
finds a solution with a cost at most 3/2 times the cost of optimal solutions.
As mentioned in [37], this is still the best known approximation algorithm
for the general metric TSP so far. For Euclidean TSP there exists a famous
polynomial-time approximation scheme (PTAS) by Arora, see [38]. To design a superior approximation algorithm, researchers in recent years tend to
study TSP instances with particular structures, see, e.g., [39].
Due to the prohibitive running time of exact algorithms, heuristics are
frequently employed in practice so as to efficiently compute an acceptable
solution for a TSP problem, e.g., the Lin-Kernighan (LK) algorithm [40].
As a popular heuristic, CE has also been applied in practice to solve TSP
instances, see [41] and [3]. The implementation there shows that CE can also
efficiently compute an acceptable solution.
In view of the high complexity of general TSP, we consider in our analysis
two classes of TSP instances with a particular structure. The first kind of
instances has been used in [19] and [42] for analyzing the expected runtime of
some MMAS variants with best-so-far reinforcement. These TSP instances
have polynomially many objective function values and a unique optimal solution. Moreover, on these TSP instances, solutions containing more edges
from the optimal solution have a smaller cost than those with fewer such
edges. For more details on these instances, see Section 5.
For these simple instances, we prove in Theorem 2 that with a probability
1 − e−Ω(n ) , the runtime is O(n6+ ) with the vertex-based random solution
generation, and O(n3+ ln n) with the edge-based random solution generation,
for any constant ∈ (0, 1) and N ∈ Ω(n ). For the case of N ∈ ω(ln n), we
show that the runtimes (resp., O(n6 N and n3 (ln n)N ) are even smaller with
probability 1 − n−ω(1) , see Corollary 1. These results are very close to the
known expected runtime O(n6 + n lnρ n ) for (1 + 1) MMAA reported in [19],
and the expected runtime O(n3 ln n + nρln ) for MMAS∗Arb reported in [42]
(where ρ ∈ (0, 1) is an evaporation rate), if N ∈ ω(ln n) is suitably small.
But they give the stronger guarantee of achieving the optimal solution in the
respective runtime with an overwhelming probability. Moreover, we show
5
a stochastically exponential runtime for a suitable choice of N ∈ O(ln n),
see Theorem 3. This generalizes the finding in [20] for OneMax to TSP
instances. Therefore, N ∈ Ω(ln n) is necessary, and N ∈ ω(ln n) is sufficient
for a stochastically polynomial runtime for simple TSP instances.
We also inspect more complex instances with n vertices positioned on an
m×m grid, and the Euclidean distance as distance function. These instances
have been employed in [43] and [30] for analyzing the expected runtime of
(µ+λ) EA and randomized local search (RLS). When the n vertices span a
convex polygon without vertices in the interior of the polygon (so they are
the corners of that polygon), we prove a stochastic runtime of O(n^4 m^{5+ε}) for the vertex-based random solution generation, and a stochastic runtime of O(n^3 m^{5+ε}) for the edge-based random solution generation, see Theorem 4 for details. Similarly, the ε in the stochastic runtimes can be removed by slightly decreasing the probability guarantee, see Corollary 2. When the vertices span a convex polygon with k ∈ O(1) vertices in the interior, we show a stochastic runtime of O(n^4 m^{5+ε} + n^{6k−1} m^ε) with the vertex-based random solution generation, and a stochastic runtime of O(n^3 m^{5+ε} + n^{3k} m^ε) with the edge-based random solution generation, see Theorem 5 for details. These runtimes are better than the expected runtimes for the so-called (µ+λ) EA and RLS reported in the recent paper [30].
The remainder of this paper is arranged as follows. Section 2 defines the
traditional CE and related algorithms, Section 3 defines the traveling salesman problem and provides more details of the CE variants used, Section 4
shows some important facts on the two random solution generation methods,
and Section 5 reports the stochastic runtime results on the TSP instances.
A short conclusion and suggestions for future work are given in Section 6.
Notations for runtime
Our analysis employs some commonly used notations from complexity
theory. We use O(f (n)) to denote the class of functions which are bounded
from above by the function f (n), i.e., those functions g(n) with g(n) ≤ c·f (n)
for large enough n and some constant c ≥ 0 not depending on n. Similarly,
Ω(f (n)) is the class of functions that are bounded from below by f (n), i.e.,
for any g(n) ∈ Ω(f (n)) there exists a constant c > 0 not depending on n
such that g(n) ≥ c · f (n) for large enough n. Class Θ(f (n)) is the intersection of Ω(f (n)) and O(f (n)). Class o(f (n)) is the class of functions g(n)
with g(n)/f (n) → 0 as n → ∞, and class ω(f (n)) is the class of functions
g(n) with g(n)/f (n) → +∞ as n → ∞. Obviously, o(f (n)) ⊂ O(f (n)) and
ω(f (n)) ⊂ Ω(f (n)).
2. The general cross entropy algorithm and related algorithms
We now introduce the traditional CE algorithm. The CE variant we will
analyze inherits the framework of this traditional version. To compare our
results with those in the literature, we shall also give details about some
related algorithms.
2.1. The traditional cross entropy algorithm
Algorithm 1 lists the traditional CE that was proposed in [3], adapted to
an abstract notion of combinatorial optimization problems. The algorithm
assumes a combinatorial minimization problem (S, f ), where S is a finite
search space of “feasible” solutions and f is the cost function. Every feasible
solution s ∈ S is composed of elements from a fixed finite set A, the ground
set of the problem, i.e., we assume S ⊆ A^n for some integer n ∈ N. Furthermore, there is a product distribution on the product space A^n that induces a
distribution on S ⊆ A^n. The distribution on A^n can usually be represented as
a vector (or matrix) of real-valued probabilities. The convex combination of
the two distributions in Step 6 of Algorithm 1 then corresponds to a convex
combination of the two vectors (or matrices).
Specific to the TSP, the ground set A can be the set of nodes or edges, n
is the number of nodes, and a feasible solution is a sequence of elements from
A that forms a Hamiltonian cycle. The product distribution for the TSP is
represented as an n×n matrix.
When we consider the set of nodes as our ground set A, each row i
of the matrix is a marginal distribution that specifies choice probabilities
for all nodes following the current node i. A random Hamiltonian cycle
is sequentially constructed from the product distribution by allowing only
nodes not yet visited as continuations in each step, see Algorithm 2 for more
details.
When we consider the set of edges as A, marginals of the product distribution will be represented by the same n×n matrix where the sum of the
(i, j)-th and (j, i)-th entries reflects the probability that the edge {i, j} occurs in a random solution. A random Hamiltonian cycle is still constructed
sequentially and only edges leading to a feasible solution are taken in each
step, see Algorithm 3 for details.
Algorithm 1 The general Cross-Entropy algorithm
Input: an initial distribution Π_0 on the solution space, a fixed smoothing parameter ρ ∈ (0, 1], a sample size N ∈ N+, an elite size M ∈ N+ with M ≤ N
1: t = 0;
2: loop
3:   independently generate N random solutions X_t^(1), . . . , X_t^(N) with the current distribution Π_t;
4:   sort these N solutions in non-decreasing order as f(X_t^[1]) ≤ · · · ≤ f(X_t^[N]) according to the cost function f;
5:   learn an empirical distribution W_t from the M best solutions X_t^[1], . . . , X_t^[M];
6:   set Π_{t+1} = (1 − ρ)Π_t + ρW_t;
7:   t = t + 1;
8: end loop
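For concreteness, the following minimal Python sketch mirrors the loop of Algorithm 1 for a generic minimization problem. The helper names sample_solution, learn_empirical and cost are placeholders for the problem-specific Steps 3 and 5 (assumptions made for this illustration, not part of the algorithm above), and distributions are assumed to be represented as matrices (lists of lists), as in the TSP case.

```python
def cross_entropy(sample_solution, learn_empirical, cost, pi0, rho, N, M, iterations):
    """Minimal sketch of the general CE loop (Algorithm 1).

    sample_solution(pi)     -- draws one random solution from distribution pi (Step 3)
    learn_empirical(elites) -- builds the empirical distribution W_t from the M best solutions (Step 5)
    cost(s)                 -- objective value f(s) to be minimized
    pi0                     -- initial distribution Pi_0, given as an n x n list of lists
    """
    pi = pi0
    for t in range(iterations):
        # Step 3: independently generate N random solutions with the current distribution
        samples = [sample_solution(pi) for _ in range(N)]
        # Step 4: sort the samples by cost in non-decreasing order
        samples.sort(key=cost)
        # Step 5: learn an empirical distribution from the M best solutions
        w = learn_empirical(samples[:M])
        # Step 6: smoothed update Pi_{t+1} = (1 - rho) * Pi_t + rho * W_t
        pi = [[(1.0 - rho) * p + rho * q for p, q in zip(prow, wrow)]
              for prow, wrow in zip(pi, w)]
    return pi
```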
Traditionally, CE sets a small elite ratio α ∈ (0, 1) and uses the best
bα · N c solutions in Step 5 to build the empirical distribution Wt . Here, we
use the elite size M instead. This does not intrinsically change the original
algorithm. Steps 3 and 5 depend on the detailed definition of the underlying
problem. We shall give details to them in Subsection 3.2.
Step 6 of Algorithm 1 plays a crucial role in the different theoretical analyses of the algorithm, see, e.g., [44], [33], [45], [9], [34]. The occurrence of good solutions is probabilistically enforced by incorporating the new information W_t into Π_{t+1}. This idea coincides to some extent with the reinforcement learning in [46]. The smoothing parameter ρ reflects the relative importance of the new information W_t in the next sampling. It balances global exploration and local exploitation to a certain degree. A larger ρ makes the algorithm concentrate more on the particular area spanned by the elite solutions X_t^[1], . . . , X_t^[M], while a smaller ρ gives more opportunities to solutions outside that area.
However, balancing global exploration and local exploitation through tuning ρ is ultimately limited. Wu and Kolonko [33] proved that the famous
“genetic drift” [47] phenomenon also happens in this algorithmic scheme,
i.e., the sampling (Step 3) eventually freezes at a single solution and that
solution need not be optimal. This means that the algorithm gradually
loses the power of global exploration.
As a compensation for global exploration, Wu et al [34] proved that a
moderately large sample size N might be helpful. The results there showed
that a moderately large N configured with a large ρ (e.g., ρ = 1) can make the
algorithm very efficient. Although a large N introduces a high computational
burden in each iteration, the total number of iterations required for getting
an optimal solution is considerably reduced.
Wu et al [34] also indicated another way to compensate for the loss of global exploration, i.e., imposing a lower bound π_min ∈ (0, 1) and an upper bound π_max ∈ (0, 1) on the sampling distributions in each iteration. This idea originates from MMAS [10]. In each iteration t, after applying Step 6, the
entries of distribution Πt+1 that are out of the range [πmin , πmax ] are reset to
that range by assigning to them the nearest bounds, see (6) in Section 3 for
more details. Wu et al [34] have proved that this can make CE more efficient
even in the case of a rugged search space.
To follow these theoretical suggestions made in [34], we shall in our
stochastic runtime analysis use a CE that modifies the traditional CE (Algorithm 1) accordingly. We shall see that these modifications make the CE
very efficient for the considered TSP instances.
2.2. Related evolutionary algorithms
Related evolutionary algorithms for TSP whose runtime has been extensively studied are RLS [28], (µ + λ) EA [30], and theoretical abstractions of MMAS [10] such as MMAS∗bs [17] and (1+1) MMAA [19]. We now
give algorithmic details of them. In order to facilitate the comparison, their
runtimes for TSP instances will be discussed in Section 5.
(µ + λ) EA is an extension of the famous (1 + 1) EA [11]. (µ + λ) EA
randomly chooses µ solutions as the initial population. In each iteration, (µ+λ) EA randomly chooses λ parents from the current population, then produces λ children by applying randomized mutation to each of the selected parents, and forms the next population by taking the best µ solutions from these µ + λ solutions at the end of the current iteration. The expected runtime of (µ + λ) EA on TSP instances is studied in [30], where Sutton et al use a Poisson distribution to determine the number of randomized mutations (2-opt moves or jump operations) a selected parent undergoes in each iteration.
RLS is a local search technique [48]. It employs a randomized neighborhood. In each iteration, it randomly chooses a number of components of the
best solution found so far and then changes these components. The expected
runtime of RLS for TSP instances is also studied in [30], where the neighborhood is taken to be a k-exchange neighborhood with k randomly determined
by a Poisson distribution.
(1+1) MMAA is a simplified version of the famous MMAS [10], where
the sample size is set to 1 and pheromones are updated only with the best
solution found so far (best-so-far reinforcement) in each iteration. In each
iteration of (1+1) MMAA, the ant which constructed the best solution found
so far deposits an amount πmax of pheromones on the traversed edges, and an
amount πmin of pheromones on the non-traversed edges, and the pheromones
are updated by linearly combining the old and these newly added pheromones
as in Algorithm 1. The expected runtime of (1+1) MMAA on simple TSP
instances is studied in [19]. The expected runtime of its variant MMAS∗Arb
on simple TSP instances is studied in [42].
3. The traveling salesman problem and details of the CE variant
Now, we formally define TSP, and give more details of the CE variant we
will analyze.
3.1. The traveling salesman problem
We consider an undirected graph G = (V, E) with vertex set V = {1, . . . , n} and edge set E = {{i, j} | i ∈ V, j ∈ V, i ≠ j}. A Hamiltonian cycle is a sequence {{i_l, i_{l+1}} | l = 1, . . . , n} of edges such that
a) i_1 = i_{n+1};
b) (i_1, . . . , i_n) is a permutation of {1, 2, . . . , n}.
This definition actually considers E as the ground set A. As mentioned above,
we can also put A = V and represent Hamiltonian cycles in a more compact way as permutations of V. Note that a Hamiltonian cycle corresponds
to n different permutations, whereas a permutation corresponds to a unique
Hamiltonian cycle. However, the two representations are intrinsically the
same. We shall use them interchangeably in the sequel. To facilitate our
discussion, we shall refer to a Hamiltonian cycle by just referring to one of
the n corresponding permutations, and denote by S the set of all possible
permutations. We employ the convention that two permutations are said to be the same iff they form the same underlying Hamiltonian cycle. The notation {k, l} ∈ s shall mean that the edge {k, l} belongs to the underlying
Hamiltonian cycle of the solution (permutation) s.
Once a distance function d : E → R+ is given, the (total traveling) cost f(s) of a feasible solution s = (i_1, i_2, . . . , i_n) ∈ S is then calculated by

f(s) := Σ_{j=1}^{n−1} d(i_j, i_{j+1}) + d(i_n, i_1).        (1)
We denote by S ∗ ⊆ S the set of feasible solutions (Hamiltonian cycles) that
minimize the cost (1).
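For illustration, the cost (1) can be transcribed directly into Python as follows; the distance function d is assumed to be any symmetric function on vertex pairs (an assumption for the example, not part of the definition above).

```python
def tour_cost(s, d):
    """Total traveling cost f(s) of a permutation s = [i_1, ..., i_n], cf. (1)."""
    n = len(s)
    cost = sum(d(s[j], s[j + 1]) for j in range(n - 1))  # consecutive edges of the tour
    cost += d(s[n - 1], s[0])                            # closing edge back to i_1
    return cost
```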
3.2. Details of the CE variant
The CE variant we consider in the analysis completely inherits the structure of Algorithm 1, and additionally employs a component from MMAS.
We now formalize the sampling distribution, and define Steps 3 and 5 in
more detail. As mentioned, we represent a sampling distribution (a product distribution on A^n) for the TSP by a matrix Π = (π_{i,j})_{n×n} such that
a) Σ_{j=1}^{n} π_{i,j} = 1 for all i = 1, . . . , n,
b) π_{i,i} = 0 for all i = 1, . . . , n,
c) π_{i,j} = π_{j,i} for each edge {i, j} ∈ E.
For each edge {i, j} ∈ E, π_{i,j} reflects the probability that a Hamiltonian cycle continues with vertex j when it is in vertex i. In the sequel, we write the sampling distribution Π_t in iteration t as (π^t_{i,j})_{n×n}, where the superscript t of π^t_{i,j} indicates the iteration. The initial distribution Π_0 = (π^0_{i,j})_{n×n} is, without loss of generality, set to be the uniform distribution, i.e., π^0_{i,j} = π^0_{j,i} = 1/(n−1) for all edges {i, j} ∈ E.
We shall consider two random solution generation methods, a vertex-based
random solution generation and an edge-based random solution generation.
Algorithm 2 lists the vertex-based random solution generation method. This
method uses V as the ground set A. A product distribution on A^n is therefore represented as a matrix Π = (π_{i,j})_{n×n} satisfying a)-c) above, i.e., each row of Π represents a sampling distribution on A = V. Directly sampling from Π may produce infeasible solutions from A^n − S. To avoid that, Algorithm 2 starts with a randomly fixed initial node, and then sequentially extends a partial solution with an unvisited vertex until a complete permutation is obtained. This method is efficient and rather popular in practice, see, e.g., [41] and [4]. Here, “s + (v)” means appending a vertex v to the end of a partial solution s.
Algorithm 2 Vertex-based random solution generation
Input: a distribution Π = (π_{i,j})_{n×n}
Output: a permutation of 1, 2, . . . , n
1: s = ∅, and V_unvisited = V;
2: randomly select v from V, s = s + (v), and V_unvisited = V_unvisited − {v};
3: while (V_unvisited ≠ ∅) do
4:   select a random vertex v′ from V_unvisited with a probability
       P[v′ | s] = π_{v,v′} / Σ_{k∈V_unvisited} π_{v,k};        (2)
5:   set s = s + (v′), V_unvisited = V_unvisited − {v′};
6:   v = v′;
7: end while
8: return s;
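A minimal Python sketch of Algorithm 2 might look as follows. It assumes Π is given as an n×n list of lists satisfying a)-c) above, and vertices are indexed 0, . . . , n−1 instead of 1, . . . , n.

```python
import random

def vertex_based_solution(pi):
    """Sketch of Algorithm 2: sequentially extend a partial tour with unvisited vertices."""
    n = len(pi)
    unvisited = set(range(n))
    v = random.randrange(n)            # Step 2: random start vertex
    s = [v]
    unvisited.remove(v)
    while unvisited:                   # Step 3
        candidates = list(unvisited)
        # Step 4: probabilities proportional to pi[v][k], restricted to unvisited vertices, cf. (2)
        weights = [pi[v][k] for k in candidates]
        v_next = random.choices(candidates, weights=weights, k=1)[0]
        s.append(v_next)               # Step 5
        unvisited.remove(v_next)
        v = v_next                     # Step 6
    return s
```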
The edge-based random solution generation is listed in Algorithm 3. The
idea is from [42]. This method considers edge set E as the ground set A. A
feasible solution is then a sequence of edges that form a Hamiltonian cycle, i.e.
S ⊆ E^n. To unify the notation of feasible solutions, Algorithm 3 translates its outcomes into permutations. As the actual ground set is E, a product distribution is an n × n(n−1)/2 matrix such that each row is a marginal specifying a sampling distribution on E. Algorithm 3 only considers those with identical marginals; a product distribution can therefore be fully characterized by one of its marginals and is thus again represented by an n × n matrix Π = (π_{i,j})_{n×n} as above. An edge {i, j} ∈ E is then sampled from Π with probability (π_{i,j} + π_{j,i}) / Σ_{k=1}^{n} Σ_{l=1}^{n} π_{k,l} = 2π_{i,j}/n, since each row of Π sums up to 1. A random sequence in E^n is generated by independently sampling from Π n times. To avoid infeasible solutions, Algorithm 3 considers in every sampling only edges that are admissible by the edges selected before. Given a set B of edges such that the subgraph (V, B) contains neither a cycle nor a vertex of degree ≥ 3, an edge e′ ∈ E is said to be admissible by B if and only if the subgraph (V, B ∪ {e′}) still contains neither a cycle nor a vertex of degree ≥ 3. We denote by B_admissible the set of edges ∉ B that are admissible by B.
Algorithm 3 Edge-based random solution generation
Input: a distribution Π = (π_{i,j})_{n×n}
Output: a permutation of 1, 2, . . . , n
1: B = ∅, B_admissible = E;
2: while (|B| ≤ n − 1) do
3:   select an edge {i, j} from B_admissible with a probability
       P[{i, j} | B] = (π_{i,j} + π_{j,i}) / Σ_{{k,l}∈B_admissible} (π_{k,l} + π_{l,k});        (3)
4:   set B = B ∪ {{i, j}};
5:   update B_admissible;
6: end while
7: let s = (1, i_2, i_3, . . . , i_n) with {1, i_2}, {i_j, i_{j+1}} ∈ B for j = 2, . . . , n−1;
8: return s;
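A possible Python sketch of Algorithm 3 is given below. It maintains vertex degrees and connected-component labels to decide admissibility, and assumes Π is an n×n list of lists with positive off-diagonal entries; this is one illustrative way to realize the admissibility test, not the only one.

```python
import random

def edge_based_solution(pi):
    """Sketch of Algorithm 3: build a Hamiltonian cycle edge by edge, cf. (3)."""
    n = len(pi)
    degree = [0] * n
    comp = list(range(n))                 # component label of each vertex in (V, B)
    adj = [[] for _ in range(n)]          # adjacency lists of the chosen edge set B

    def admissible(i, j, num_chosen):
        # no vertex of degree >= 3, and no cycle before the closing (n-th) edge
        if degree[i] >= 2 or degree[j] >= 2:
            return False
        return comp[i] != comp[j] or num_chosen == n - 1

    for num_chosen in range(n):
        candidates = [(i, j) for i in range(n) for j in range(i + 1, n)
                      if admissible(i, j, num_chosen)]
        weights = [pi[i][j] + pi[j][i] for (i, j) in candidates]
        i, j = random.choices(candidates, weights=weights, k=1)[0]
        adj[i].append(j)
        adj[j].append(i)
        degree[i] += 1
        degree[j] += 1
        old, new = comp[j], comp[i]       # merge the two connected components
        comp = [new if c == old else c for c in comp]

    # translate the edge set into a permutation starting at vertex 0 ("1" in the paper)
    s, prev, cur = [0], None, 0
    for _ in range(n - 1):
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        s.append(nxt)
        prev, cur = cur, nxt
    return s
```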
The N random solutions X_t^(1), . . . , X_t^(N) in iteration t are then generated by N runs of Algorithm 2 or Algorithm 3 with the current distribution Π_t = (π^t_{i,j})_{n×n}. The empirical distribution W_t = (w^t_{i,j})_{n×n} is then calculated from the M elite solutions by setting

w^t_{i,j} = (1/M) Σ_{k=1}^{M} 1_{{e′∈E | e′∈X_t^[k]}}({i, j}),        (4)

where 1_A(·) is the indicator function of the set A = {e′ ∈ E | e′ ∈ X_t^[k]}, for each {i, j} ∈ E. The next distribution Π_{t+1} = (π^{t+1}_{i,j})_{n×n} is therefore obtained as

π^{t+1}_{i,j} = (1 − ρ)π^t_{i,j} + ρ w^t_{i,j}        (5)

for each {i, j} ∈ E.
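The following short Python sketch transcribes (4) and (5) directly, assuming that solutions are given as permutations and that distributions are n×n lists of lists.

```python
def edges_of(s):
    """Set of undirected edges of the Hamiltonian cycle underlying permutation s."""
    n = len(s)
    return {frozenset((s[j], s[(j + 1) % n])) for j in range(n)}

def empirical_distribution(elites, n):
    """W_t as in (4): relative frequency of each edge among the M elite solutions."""
    M = len(elites)
    w = [[0.0] * n for _ in range(n)]
    for s in elites:
        for e in edges_of(s):
            i, j = tuple(e)
            w[i][j] += 1.0 / M
            w[j][i] += 1.0 / M
    return w

def smooth_update(pi, w, rho):
    """Pi_{t+1} = (1 - rho) * Pi_t + rho * W_t, cf. (5)."""
    n = len(pi)
    return [[(1.0 - rho) * pi[i][j] + rho * w[i][j] for j in range(n)] for i in range(n)]
```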
We follow the suggestions made in [34]. In the CE variant, we shall use a moderately large N and a large ρ = 1. To fully use the best elite solution, we take M = 1. To prevent premature convergence (i.e., a possible stagnation at a non-optimal solution), we employ a feature from MMAS [10], called max-min calibration, in the construction of Π_{t+1}. We choose a lower bound π_min ∈ (0, 1) and an upper bound π_max ∈ (0, 1), and, after applying (5), adjust Π_{t+1} by

π^{t+1}_{i,j} =   π_min           if π^{t+1}_{i,j} < π_min,
                  π^{t+1}_{i,j}   if π^{t+1}_{i,j} ∈ [π_min, π_max],        (6)
                  π_max           if π^{t+1}_{i,j} > π_max,

for any edge {i, j} ∈ E. Note that the max-min calibration is the only step that does not occur in the general CE (i.e., Algorithm 1).
This setting turns CE into an MMAS with iteration-best reinforcement, i.e., only the iteration-best solution X_t^[1] is allowed to change the ‘pheromones’ Π_t. Stützle and Hoos [10] indicated in an empirical study that the practical performance of iteration-best reinforcement is comparable to best-so-far
reinforcement for TSP instances. Thus, it should also be worthwhile to compare the theoretical runtime of iteration-best reinforcement with the known
expected runtimes of best-so-far reinforcement for TSP instances presented
in, e.g., [19] and [42].
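A one-function Python sketch of the max-min calibration (6) could read as follows; the diagonal entries are kept at zero, since (6) is only applied to edges.

```python
def maxmin_calibrate(pi, pi_min, pi_max):
    """Clamp pi_{i,j} into [pi_min, pi_max] for every edge {i, j}, cf. (6)."""
    n = len(pi)
    return [[0.0 if i == j else min(max(pi[i][j], pi_min), pi_max) for j in range(n)]
            for i in range(n)]
```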
4. Properties of the random solution generation methods
Before we start with our runtime analysis, we shall discuss some relevant
properties of the two random solution generation methods, which concern the
probability of producing a k-exchange move of the iteration-best solution in
the next sampling.
Formally, a k-exchange move on a Hamiltonian cycle is an operation that
removes k edges from the cycle and adds k new edges to obtain again a cycle.
A k-opt move is a k-exchange move reducing the total travel cost. Figure
1a shows an example of a 2-exchange move, in which edges {i, j}, {k, l} are
removed, and edges {i, l}, {k, j} are added. Figure 1b shows an example of a
3-exchange move.
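For illustration, a 2-exchange move on a permutation can be realized by reversing the segment between the two removed edges; the sketch below uses this standard reversal form, which is an implementation choice for the example and not prescribed by the definition above.

```python
def two_exchange(s, a, b):
    """2-exchange move on tour s (a list): remove the edges {s[a], s[a+1]} and
    {s[b], s[b+1]} (0 <= a < b <= len(s)-1) and reconnect the tour by reversing
    the segment s[a+1..b]."""
    return s[:a + 1] + list(reversed(s[a + 1:b + 1])) + s[b + 1:]

# Example: two_exchange([0, 1, 2, 3, 4], 0, 2) == [0, 2, 1, 3, 4],
# i.e., edges {0,1}, {2,3} are removed and edges {0,2}, {1,3} are added.
```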
In our analysis, we shall consider only iteration-best reinforcement with ρ = 1 and the max-min calibration (6). The empirical distribution W_t = (w^t_{i,j})_{n×n} for each iteration t ∈ N in this particular case therefore satisfies

π^{t+1}_{i,j} = π^{t+1}_{j,i} =   min{1, π_max} = π_max   if edge {i, j} ∈ X_t^[1],        (7)
                                  max{0, π_min} = π_min   otherwise,
Figure 1: Examples for edge exchange moves. (a) A 2-exchange move; (b) A 3-exchange move.
for every edge {i, j} ∈ E and iteration t ∈ N. Furthermore, Π_{t+1} = W_t.

Since Π_{t+1} is biased towards the iteration-best solution X_t^[1], k-exchanges of X_t^[1] with a large k are unlikely to happen among the N draws from Π_{t+1} by either of the two generation methods. Thus, an optimal solution is more likely to be reached by a sequence of repeated k-exchange moves with small k from iteration-best solutions. Therefore, it is necessary to estimate the probabilities of producing a k-exchange of X_t^[1] in the two generation methods, especially for the case of small k.
4.1. Probabilities of producing k-exchanges in the vertex-based random solution generation
The probability of producing k-exchanges with k = 2, 3 in the vertex-based random solution generation has been studied in Zhou [19]. With π_min = 1/n^2 and π_max = 1 − 1/n, Zhou [19] proved for (1 + 1) MMAA that with a probability of Ω(1/n^5), Algorithm 2 produces a random solution having more edges from s∗ than x∗_t (the best solution found so far), provided that x∗_t is not optimal. Zhou [19] actually showed that if x∗_t ≠ s∗, then there exists either a 2-opt move or a 3-opt move for x∗_t, and Algorithm 2 produces an arbitrary 2-exchange of x∗_t with a probability of Ω(1/n^3), and an arbitrary 3-exchange of x∗_t with a probability of Ω(1/n^5).

Although we use π_min = 1/(n(n−2)) and consider iteration-best reinforcement, a similar result holds in our case. Claim 1 below gives a lower bound on the probability of producing a k-exchange move of the iteration-best solution in the next round with the vertex-based random solution generation.
Claim 1. Let M = 1, ρ = 1, and consider a k-exchange move of X_t^[1] for some integer k = 2, 3, . . . , n. Then, Algorithm 2 produces the given k-exchange move with a probability Ω(1/n^{2k−1}) in each of the N draws in iteration t + 1.

Proof. Recall that in Algorithm 2, the probability (2) to select a continuing edge {i, j} is always bounded from below by π^t_{i,j} (or, equivalently, π^t_{j,i}) for each iteration t ∈ N, since each row of Π_t sums up to 1. Given a k-exchange move of X_t^[1], one possibility to generate it from Π_{t+1} = W_t by Algorithm 2 is that one of the new edges is added in the last step. This happens with a probability at least (1/n) · (1/(n(n−2)))^{k−1} · (1 − 1/n)^{n−k} ≥ 1/(e · n^{2k−1}), where e ≈ 2.71828 is Euler's number, 1/n represents the probability to select the starting vertex, 1/(n(n−2)) is the common lower bound of the probability to select the remaining k − 1 new edges, and 1 − 1/n is the common lower bound of the probability to select one of the remaining n − k edges from X_t^[1].
Because of Claim 1, every 2-exchange of X_t^[1] is produced from Π_{t+1} by Algorithm 2 with a probability Ω(1/n^3), and every 3-exchange is produced by Algorithm 2 with a probability Ω(1/n^5). Note that for any k = 2, 3, . . . , if a k-opt move of X_t^[1] occurs among the N draws in the next sampling, then f(X_{t+1}^[1]) < f(X_t^[1]) must hold. Thus, if we take a moderately large sample size, say N = Θ(n^{5+ε}) for some ε > 0, with a probability 1 − (1 − Ω(n^{−5}))^{Ω(n^{5+ε})} = 1 − e^{−Ω(n^ε)}, f(X_{t+1}^[1]) < f(X_t^[1]) will hold, provided that there still exists a 2-opt or 3-opt move from X_t^[1].
Claim 2. Suppose that M = 1, ρ = 1. Then, for iteration t + 1, the probability that Algorithm 2 produces a solution with a cost not larger than that of X_t^[1] in one application is in Ω(1).

Proof. Observe that the probability that X_t^[1] is reproduced in one application of Algorithm 2 is larger than (1 − 1/n)^{n−1} ∈ Ω(1), which implies that the cost of the generated random solution is not larger than f(X_t^[1]).

Note that if X_t^[1] is reproduced at least once among the N draws in the next sampling, then f(X_{t+1}^[1]) ≤ f(X_t^[1]). Thus, if the sample size N ∈ Ω(ln n), then f(X_{t+1}^[1]) ≤ f(X_t^[1]) with a probability 1 − (1 − Ω(1))^N = 1 − O(1/n). Particularly, when N ∈ Ω(n^ε) for some ε > 0, f(X_{t+1}^[1]) ≤ f(X_t^[1]) with an overwhelming probability 1 − e^{−Ω(n^ε)}.
4.2. Probabilities of producing k-exchanges in the edge-based random solution generation

The behavior of the edge-based random solution generation is comprehensively studied in [42]. Kötzing et al [42] proved for MMAS∗Arb and a constant k ∈ O(1) that, with a probability of Ω(1), Algorithm 3 produces a random solution that is obtained by a k-exchange move from the best solution found so far.

Recall that in each iteration t, either π^t_{i,j} = π^t_{j,i} = π_min or π^t_{i,j} = π^t_{j,i} = π_max for any edge {i, j} ∈ E. For convenience, we will call an edge {i, j} ∈ E with π^t_{i,j} = π^t_{j,i} = π_max a high edge, and otherwise a low edge. Kötzing et al [42] showed that the probability of the event that Algorithm 3 chooses a high edge in an arbitrary fixed step, conditioned on the event that l ≤ √n low edges have been chosen in some l steps before this step, is 1 − O(1/n). Our setting is only slightly different from theirs, i.e., we use π_min = 1/(n(n−2)) whereas they put π_min = 1/(n(n−1)). Thus, the result should also hold here. Claim 3 below formally asserts this; readers may also refer to [42] for a similar proof.
Claim 3. Assume M = 1, ρ = 1. Then, the probability of choosing a high edge at any fixed step in Algorithm 3 is at least 1 − 12/n if at most √n low edges have been chosen before that step and there exists at least one high admissible edge to be added.
Proof. We now fix a step n−m for some m = 0, 1, . . . , n−1, and assume that l ≤ √n low edges have been chosen before this step. Obviously, we still need to add m + 1 ≥ 1 edges to obtain a complete solution. We now estimate the numbers of admissible high and low edges in this step. Note that every of the l low edges blocks at most 3 of the m+l remaining high edges (at most two which are incident to the end points of the low edge, and at most one that may introduce a cycle). So at least m+l − 3l = m−2l ≥ m−3l high edges are available for adding in this step. Of course, it may happen that there is no admissible high edge in this step. However, we are not interested in such a case. We consider only the case that there exists at least one admissible high edge in this step, i.e., the number of admissible high edges in this step is at least max{1, m − 3l}. Note also that the n−m edges added before partition the subgraph of G = (V, E) with vertices V and edges from the partial solution constructed so far into exactly m connected components (here, we see an isolated vertex also as a connected component). For any two of the components, there are at most 4 admissible edges connecting them. Therefore, there are at most min{4·C(m, 2), C(n, 2)} admissible low edges. Observing l ≤ √n, the probability of choosing a high edge in this step is bounded from below by

1 − [min{4·C(m, 2), C(n, 2)}·π_min] / [max{1, m − 3l}·π_max]
      ≥   1 − 2m²/((m−3l)·n(n−2)) ≥ 1 − 3/(n−2)   if m > 3√n,        (8)
          1 − 12/(n−2)                             if m ≤ 3√n,

where the first inequality is obtained by observing that min{4·C(m, 2), C(n, 2)} ≤ min{2m², C(n, 2)} ≤ 2m², π_max = 1 − 1/n, π_min = 1/(n(n−2)), and 2m²/(m−3l) ≤ 2/(1/m − 3√n/m²) ≤ 3n.
With Claim 3, we can show that, for any t ∈ N and any fixed k ∈ O(1), the probability of the event that a k-exchange of X_t^[1] is produced by one application of Algorithm 3 is Ω(1), see Claim 4. Here, we shall use a different proof from the one presented by Kötzing et al [42], which appears problematic to us.
Claim 4. Let M = 1, ρ = 1. For any k ∈ O(1), with probability Ω(1), the random solution produced by Algorithm 3 is a k-exchange of X_t^[1].

Proof. Let k ∈ O(1) be arbitrarily fixed, and let M be the set of all k-element subsets of {1, 2, . . . , n/2} (where we assume without loss of generality that n is even). Obviously, |M| ∈ Θ(n^k) since k ∈ O(1). Let M ∈ M be an arbitrarily fixed k-element subset. The probability of the event that Algorithm 3 selects k new edges (low edges) at steps i ∈ M and n − k edges (high edges) from X_t^[1] at the other steps is bounded from below by

(1 − O(1/n))^{n−k} · Π_{i∈M} [(C(n−i+1, 2) − (n−i+k+1))·π_min] / [n(n−1)·π_min + (n−i+k+1)·π_max]  ≥  Θ(1/n^k),        (9)

where 1 − O(1/n) is a lower bound for the probability of selecting an edge from X_t^[1]. In each step i ∈ M, the edges chosen before partition the graph into n−i+1 connected components, and for any two of the components there exist at least 2 edges connecting them without introducing a cycle. Hence, there are at least C(n−i+1, 2) admissible edges in each step i ∈ M. Notice also that the number of admissible high edges in this case is at most n − i + k + 1 (n − i + k + 1 is the maximal number of high edges that have not been chosen before). Therefore, each factor (C(n−i+1, 2) − (n−i+k+1))·π_min / (n(n−1)·π_min + (n−i+k+1)·π_max) of (9) is just the lower bound of the probability for choosing an admissible edge not belonging to X_t^[1] in a step i ∈ M.

As a result, the probability of the random event that Algorithm 3 produces a k-exchange of X_t^[1] with k ∈ O(1) in any of the N independent draws in iteration t+1 is bounded from below by |M|·Θ(1/n^k) = Θ(n^k)·Θ(1/n^k) ∈ Ω(1), since new edges can also be added in steps l ≥ n/2.
Notice that in the edge-based random solution generation, for any k = 2, 3, . . . , n, any two k-exchanges of X_t^[1] are generated with the same probability, since the generation does not require adding the edges in a particular order. Therefore, by Claim 4, for any k ∈ O(1), any specified k-exchange of X_t^[1] will be produced with a probability Θ(1/n^k). Since reproducing X_t^[1] can be seen as a 0-exchange of X_t^[1], we can thus derive the following conclusion.

Claim 5. Let M = 1, ρ = 1. With probability Ω(1), the random solution generated by Algorithm 3 has a cost not larger than that of X_t^[1].
Claim 6 shows that it is unlikely that the random solution generated by Algorithm 3 is “very” different from the last iteration-best solution X_t^[1]. This will be fundamental for deriving the runtime lower bound.

Claim 6. Let M = 1, ρ = 1. For any δ ∈ (0, 1], with an overwhelming probability 1 − e^{−ω(n^{min{δ,1/4}/2})}, the random solution generated by Algorithm 3 is a k-exchange move from X_t^[1] for some k < n^δ.
Proof. Let δ ∈ (0, 1] be arbitrarily fixed, and put γ = min{δ, 1/4}. To prove the claim, we just need to show that with an overwhelming probability, the random solution generated by Algorithm 3 is a k-exchange of X_t^[1] for some k ≤ n^γ ≤ n^{1/4}. This is again implied by the fact that with an overwhelming probability, at most n^{γ/2} low edges are chosen within the first T := n − (3n^γ)/4 steps in Algorithm 3, since even the worst case n^{γ/2} + (3n^γ)/4 is still smaller than n^γ.

By Claim 3, for any k ≤ n^{γ/2} and any m ≤ T, Algorithm 3 chooses high edges with a probability at least 1 − 12/n at step m if at most k low edges have been chosen before step m, since there exist at least n − m − 3k ≥ (3n^γ)/4 − 3n^{γ/2} ≥ 3 admissible high edges at step m.

Let P denote the probability of the random event that at most n^{γ/2} low edges are chosen within T steps, and Q the probability of the random event that at least n^{γ/2} + 1 low edges are chosen within the same T steps. Then P = 1 − Q. We shall bound Q from above, which will give a lower bound for P.

Let E be the random event that at least n^{γ/2} + 1 low edges are chosen within T steps. Then Q = P[E]. For each l = 1, . . . , n^{γ/2} + 1, we define a random variable v_l denoting the first step m ≤ T such that l low edges are chosen within m steps. Obviously, E implies the random event E_1 that v_1 < v_2 < · · · < v_{n^{γ/2}+1} ≤ T. Thus, Q ≤ P[E_1], and P ≥ 1 − P[E_1].

Observe that

P[E_1] = Σ_{a_1 < a_2 < ··· < a_{n^{γ/2}+1} ≤ T} P[v_1 = a_1, . . . , v_{n^{γ/2}+1} = a_{n^{γ/2}+1}],

and v_1 = a_1, . . . , v_{n^{γ/2}+1} = a_{n^{γ/2}+1} is equivalent to the random event that before step a_1 only high edges are chosen, that at any step between a_l and a_{l+1} only high edges are chosen for any l with 1 ≤ l ≤ n^{γ/2}, and that at steps a_1, . . . , a_{n^{γ/2}+1} only low edges are chosen. Thus, we have by Claim 3 that

P[v_1 = a_1, . . . , v_{n^{γ/2}+1} = a_{n^{γ/2}+1}] ≤ (12/n)^{n^{γ/2}+1},

since at each step a_l there exists at least one admissible high edge, and we do not care about what happens after step v_{n^{γ/2}+1}.

There are at most C(T, n^{γ/2}+1) different combinations for a_1 < a_2 < · · · < a_{n^{γ/2}+1}. Therefore, P ≥ 1 − P[E_1] ≥ 1 − C(T, n^{γ/2}+1)·(12/n)^{n^{γ/2}+1}.

By Stirling's formula, and observing that n^{γ/2} + 1 ∈ o(T) and T ∈ Θ(n), we have C(T, n^{γ/2}+1)·(12/n)^{n^{γ/2}+1} = e^{−ω(n^{γ/2})}. Hence, P ≥ 1 − e^{−ω(n^{γ/2})} is overwhelmingly large.
5. Main results
We shall now analyze the stochastic runtime of our two different random
solution generation methods for two classes of TSP instances that have been
well studied in the literature.
5.1. Stochastic runtime analysis for simple instances

We first consider a class of simple TSP instances that is defined by the following distance function d : E → R on a graph with n vertices:

d({i, j}) =   1   if {i, j} = {i, i + 1} for some i = 1, 2, . . . , n − 1,
              1   if {i, j} = {n, 1},        (10)
              n   otherwise.

Obviously, TSP instances with this distance function have a unique optimal solution s∗ = (1, 2, . . . , n) (in the sense of the underlying Hamiltonian cycle), and s∗ has a cost of n. The cost of an arbitrary feasible solution s equals k + (n − k) · n, where k is the number of edges in s that are also in s∗. We shall refer to these instances as G1 in the sequel.
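A direct Python transcription of the distance function (10) might look as follows; vertices are numbered 1, . . . , n as in the text.

```python
def g1_distance(n):
    """Distance function (10) of the instance class G1 on the vertices 1, ..., n."""
    def d(i, j):
        if abs(i - j) == 1 or {i, j} == {1, n}:
            return 1          # edges of the optimal cycle (1, 2, ..., n)
        return n              # every other edge
    return d

# A tour sharing k edges with the optimal solution s* = (1, 2, ..., n)
# therefore has cost k + (n - k) * n, as stated above.
```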
The class G1 has been used in [19] and [42] for analyzing the expected runtime of variants of MMAS. Zhou [19] proved that the (1 + 1) MMAA algorithm has an expected runtime of O(n^6 + (n ln n)/ρ) on G1 in the case of non-visibility (i.e., without the greedy distance information in the sampling), and has an expected runtime of O(n^5 + (n ln n)/ρ) in the case of visibility (i.e., with considering the greedy distance information in the sampling). Kötzing et al [42] continued the study in [19]. They investigated the expected runtime of (1 + 1) MMAA and its variant MMAS∗Arb on G1 and other TSP instances on which both (1 + 1) MMAA and MMAS∗Arb have exponential expected runtime. MMAS∗Arb differs from (1 + 1) MMAA only in the random solution generation: MMAS∗Arb uses Algorithm 3 as its random solution generation method, while (1 + 1) MMAA uses Algorithm 2. Kötzing et al [42] proved that MMAS∗Arb has an expected runtime of O(n^3 ln n + (n ln n)/ρ) on G1.
Theorem 1 shows a stochastic runtime of O(n^{6+ε}) for the CE variant with the add-on, i.e., Algorithm 1 with max-min calibration (6) and the vertex-based random solution generation, and a stochastic runtime of O(n^{4+ε}) for the edge-based random solution generation. These results are comparable with the above known expected runtimes. Although we are not able to get strictly superior runtimes, our results are actually stronger and more informative.
Theorem 1 (Stochastic runtime of Algorithm 1 with max-min calibration on G1). Assume that we set M = 1, ρ = 1, and use Algorithm 1 with the max-min calibration (6) for the values π_min = 1/(n(n−2)), π_max = 1 − 1/n. Then
a) if we use the vertex-based random solution generation method (Algorithm 2), and take a sample size N ∈ Ω(n^{5+ε}) for any constant ε ∈ (0, 1), then with a probability at least 1 − e^{−Ω(N/n^5)} the optimal solution s∗ can be found within n iterations;
b) if we use the edge-based random solution generation method (Algorithm 3), and take a sample size N ∈ Ω(n^{3+ε}) for a constant ε ∈ (0, 1), then with a probability at least 1 − e^{−Ω(N/n^3)}, the optimal solution can be found within n iterations.
Proof. We prove the Theorem by showing that the probability of the random event that, before the optimal solution is met, the number of edges shared by the iteration-best and optimal solution strictly increases is overwhelmingly large. This implies that the optimal solution is found within n iterations, since the optimal solution has only n edges. Furthermore, the runtimes presented in the Theorem hold. We only discuss the case of a); b) follows with an almost identical argument.

By [19] (see also the proof of Theorem 2), if X_t^[1] is not optimal, it has at least either a 2-opt move or a 3-opt move. Note that for G1, any k-opt move of the iteration-best solution increases the number of its edges shared with the optimal solution. By Claim 1, any 2-opt move is generated by Algorithm 2 with probability Ω(n^{−3}), and any 3-opt move is generated with probability Ω(n^{−5}). Thus, if X_t^[1] is not optimal, X_{t+1}^[1] shares more edges with the optimal solution than X_t^[1] with a probability at least 1 − (1 − n^{−5})^N = 1 − e^{−Ω(N/n^5)} ∈ 1 − e^{−Ω(n^ε)} if N ∈ Ω(n^{5+ε}) for any ε > 0. Thus, this repeatedly happens within polynomially many iterations with overwhelming probability 1 − e^{−Ω(N/n^5)}. This completes the proof.
The stochastic runtimes of Theorem 1 are derived for a relatively large sample size, namely N = Ω(n^{5+ε}) and N = Ω(n^{3+ε}). Actually, Theorem 1 may still hold for a smaller sample size. Theorem 2 partially asserts this. It states that the total number of iterations required to reach the optimal solution for both generation schemes may increase considerably if a smaller sample size is used. However, the stochastic runtime does not increase. Interestingly, one can obtain a smaller stochastic runtime with a small sample size for the edge-based random solution generation.

Theorem 2 (Stochastic runtime of Algorithm 1 on G1 for a small sample size). Assume the conditions in Theorem 1, but set N ∈ Ω(n^ε) for any ε ∈ (0, 1). Then:
a) For the vertex-based random solution generation, Algorithm 1 finds the optimal solution s∗ within n^6 iterations with a probability of 1 − e^{−Ω(N)}.
b) For the edge-based random solution generation, Algorithm 1 finds the optimal solution s∗ within n^3 ln n iterations with a probability of 1 − e^{−Ω(N)}.
Proof of Theorem 2. The proof shares a similar idea with that of Theorem 1. However, we consider here the random event that the number of edges shared by the iteration-best and optimal solution does not decrease and strictly increases enough times within a specified polynomial number of iterations.

For a), we shall consider the first n^6 iterations. By Claim 2, the number of edges shared by the iteration-best and optimal solution does not decrease with a probability 1 − (1 − Ω(1))^N = 1 − e^{−Ω(N)} (recall N ∈ Ω(n^ε)). Therefore, the number does not decrease within the first n^6 iterations with probability Π_{t=0}^{n^6} (1 − e^{−Ω(N)}) = 1 − e^{−Ω(N)}. By Claim 1, for every consecutive n^5 iterations, if the starting iteration-best solution is not optimal, then with probability 1 − ((1 − n^{−5})^N)^{n^5} = 1 − e^{−Ω(N)}, the number will strictly increase at least once within these n^5 iterations. Therefore, with overwhelming probability 1 − e^{−Ω(N)}, the optimal solution will be reached within the period of the first n^6 iterations, since there are n many consecutive n^5 iterations within that period.

b) can be proved in a similar way to a). We shall consider the first n^3 ln n iterations. By Claim 4, with probability 1 − (1 − Ω(1))^N = 1 − e^{−Ω(N)}, the number of shared edges does not decrease in two consecutive iterations. To complete the proof, we need an extra fact on 2- and 3-exchanges.

Kötzing et al [42] showed for MMAS∗Arb that if the best solution s∗_t found so far has n−k edges from the optimal solution s∗, then the probability of the event that s∗_{t+1} has at least n−k+1 edges from s∗ is in Ω(k/n^3). We shall use a different but simpler proof to show that this also holds in our case of iteration-best reinforcement. With this fact, if |X_t^[1] ∩ s∗| = n−k for some 0 < k ≤ n, then with probability 1 − ((1 − k·n^{−3})^N)^{n^3/k} = 1 − e^{−Ω(N)}, the number of edges shared by the iteration-best solution and s∗ will strictly increase at least once within the period [t, t + n^3/k]. This implies that s∗ is sampled within the first n^3 ln n iterations with overwhelming probability 1 − e^{−Ω(N)}, since n^3 ln n iterations can be partitioned into n many consecutive phases [0, n^2), [n^2, n^2 + n^3/(n−1)), [n^2 + n^3/(n−1), n^2 + n^3/(n−1) + n^3/(n−2)), . . . . We now prove that fact.
We first show that if |X_t^[1] ∩ s∗| = n − k with k > 0, then there exists a 2-opt move or a 3-opt move for X_t^[1] (see also [19] for a similar proof). Assume that X_t^[1] contains exactly n − k edges from s∗ for some integer k > 0. Let e∗ = {i, i + 1} be an edge in s∗ but not in X_t^[1]. Note that each node of the graph is incident to exactly two edges of s∗ and of X_t^[1], respectively. Therefore there exists an edge e′ ∈ X_t^[1] incident to i, an edge e″ ∈ X_t^[1] incident to i+1, and e′, e″ are not in s∗. Figure 2 shows an example, where e′ is either {i, u} or {i, v}, and e″ is either {i+1, w} or {i+1, y}. If e′ = {i, u} and e″ = {i+1, w} or
Figure 2: Demonstration of adding a new edge. The solid edges represent the cycle X_t^[1].
[1]
if e0 = {i, v} and e00 = {i+1, y}, then there exists a 2-opt move of Xt which
removes e0 , e00 of distance n and adds e∗ and another edge (either {u, w} or
{v, y}) of distance at most n + 1 together. If e0 = {i, u}, e00 = {i+1, y} or
[1]
e0 = {i, v}, e00 = {i+1, w}, there is a 3-opt move of Xt which removes e0 , e00 ,
and an edge e1 ∈
/ s∗ , and adds edge e∗ and another two edges, this replacing
3 edges of distance n by 3 edges of distance at most 2n+1 together. Here,
[1]
[1]
observe the fact that adding e∗ to Xt and removing e0 , e00 from Xt results
[1]
in graph containing a cycle, and there must be an edge e1 ∈ Xt on that cycle
that does not belong to s∗ . We choose this edge as the edge e1 . Therefore,
[1]
for each e∗ of the k remaining edges in s∗ that are not in Xt , there exists a
[1]
2-opt or 3-opt move of Xt that adds e∗ .
By Claim 4, for any l ∈ O(1), the probability of producing an l-exchange
[1]
of the iteration-best solution Xt by Algorithm 3 in iteration t + 1 is Ω(1).
Since any two l-exchanges are produced with the same probability, the probability of producing a particular l-exchange in iteration t + 1 is Ω(1/nl ). As
[1]
a result, Algorithm 3 produces for each edge e∗ ∈ s∗ − Xt a 2-opt or 3-opt
[1]
move of Xt that adds edge e∗ with probability at least Ω(1/n3 ).
Note that the generation of a 2-exchange (or a 3-exchange) with two newly added edges e_2, e_3 by Algorithm 3 includes two mutually exclusive cases (3! cases for a 3-exchange): e_2 is chosen before e_3, or e_3 is chosen before e_2. It is not difficult to see that these two cases (3! cases for a 3-exchange) have the same probability. Therefore, the probability of the event that Algorithm 3 generates a 2-opt or 3-opt move of X_t^[1] that has e∗ as one of the newly added edges and selects e∗ before the other newly added edges is bounded from below by Ω(1/(3!·n^3)) = Ω(1/n^3). Since X_t^[1] has k such e∗ and the corresponding k events are also mutually exclusive, we obtain that the probability that X_{t+1}^[1] has more edges from s∗ than X_t^[1], if X_t^[1] has exactly n − k edges from s∗ for a constant k > 0, is Ω(k/n^3).
Corollary 1 further improves the stochastic runtime for an even smaller sample size. It can be proved by an argument similar to the proof of Theorem 1, where we observe that (1 − (1 − p(n))^{ω(ln n)})^{n^l} = 1 − n^{−ω(1)} for any constant l > 0 and probability p(n) ∈ Ω(1), and that 1 − e^{−ω(ln n)} = 1 − n^{−ω(1)}.
Corollary 1. Assume the conditions in Theorem 1, but let N ∈ ω(ln n). Then:
a) For the vertex-based random solution generation, Algorithm 1 finds the optimal solution s∗ within n^6 iterations with a probability of 1 − n^{−ω(1)}. Particularly, if N = (ln n)^2, the runtime is n^6 (ln n)^2 with probability 1 − n^{−ω(1)}.
b) For the edge-based random solution generation, Algorithm 1 finds the optimal solution s∗ within n^3 ln n iterations with a probability of 1 − n^{−ω(1)}. Particularly, if N = (ln n)^2, the runtime is n^3 (ln n)^3 with probability 1 − n^{−ω(1)}.
Theorem 2 tells us that, for any ε ∈ (0, 1), a sample size of N ∈ Θ(n^ε) is already sufficient for iteration-best reinforcement to efficiently find an optimal solution of simple TSP instances with an overwhelming probability. Corollary 1 further shows that N ∈ ω(ln n) even leads to a better runtime with a slightly smaller but still overwhelming probability. Theorem 3 below shows that with an overwhelming probability, the runtime of iteration-best reinforcement will be exponential if N ∈ O(ln n), even if the instances are as simple as those in G1.
Theorem 3. Assume the conditions of Theorem 1, but set N < (1/220) ln n. Then, with probability 1 − e^{−Ω(n^{1/200})}, Algorithm 1 with edge-based solution generation does not find the optimal solution s∗ within e^{Θ(n^{1/300})} iterations.
Proof. We prove the Theorem by inspecting the probability of the random event that, before the optimal solution is found, the cost of the iteration-best solution X_t^[1] will oscillate for exponentially many iterations with an overwhelming probability. We shall consider this in the last stages of the optimization process.

Let T_0 be the first iteration which samples a solution containing at least n − n^{1/4} + n^{1/5} edges from the optimal solution. We show that with an overwhelming probability, the number of common edges in the iteration-best and optimal solution will drop below n − n^{1/4} + n^{1/5} and the optimal solution is not sampled before that. This will imply the conclusion of Theorem 3, since, with an overwhelming probability, this phenomenon can repeatedly occur exponentially many times before the optimal solution is found.

To that end, we need to show the following:

1) For any 1/4 > δ > 0, if X_t^[1] contains at least n − n^δ edges from the optimal solution, then with a probability O(1/√n), the random solution generated by Algorithm 3 will contain more edges from the optimal solution than X_t^[1] in iteration t + 1;

2) For any 1/4 > δ > 0, if X_t^[1] contains at least n − n^δ edges from the optimal solution, then with a probability Ω(1) (at least e^{−5}), the random solution generated by Algorithm 3 will contain fewer edges from the optimal solution than X_t^[1] in iteration t + 1.

However, we first use these two facts and show them afterwards.

By Claim 6, with probability 1 − e^{−ω(n^{1/20})}, X_{T_0}^[1] contains at most n − n^{1/4} + n^{1/5} + n^{1/10} edges from the optimal solution, since the random event that the number of common edges of the iteration-best and optimal solution increases by more than n^{1/10} in one iteration implies an occurrence of an Ω(n^{1/10})-exchange. Similarly, by Claim 6 again, with probability 1 − e^{−ω(n^{1/200})}, the iteration-best solution contains k ∈ [n − n^{1/4} + n^{1/5} − n^{1/6+1/100}, n − n^{1/4} + n^{1/5} + n^{1/10} + n^{1/6+1/100}] edges from the optimal solution in each iteration t ∈ [T_0, T_0 + n^{1/6}]. This means that the optimal solution is not found in the period [T_0, T_0 + n^{1/6}] with an overwhelming probability. With the help of 1) and 2), we now show that within this period, the number of edges shared by the iteration-best and optimal solution is significantly reduced with an overwhelming probability. This will complete the proof.

To facilitate our discussion, we call an iteration a successful iteration if its iteration-best solution contains more edges from the optimal solution than the last iteration-best solution, and an iteration a failure iteration if its iteration-best solution contains fewer edges from the optimal solution than the last iteration-best solution.

By 1) and the subsequent discussion, the expected number of successful iterations within [T_0, T_0 + n^{1/6}] is O(ln n/n^{1/3}), since N < (1/220) ln n. Thus, by the Chernoff bound, with probability 1 − e^{−Ω(n^{1/6})}, at most n^{1/100} successful iterations can occur within [T_0, T_0 + n^{1/6}]. By 2) and the subsequent discussion, the expected number of failure iterations in [T_0, T_0 + n^{1/6}] is Ω(n^{1/6−1/44}), since N < (1/220) ln n. By the Chernoff bound, with probability 1 − e^{−Ω(n^{1/6})}, at least n^{1/7} failure iterations will occur in [T_0, T_0 + n^{1/6}]. Since a successful iteration can add at most n^{1/100} edges from the optimal solution with probability 1 − e^{−ω(n^{1/200})}, in total at most n^{1/100} × n^{1/100} = n^{1/50} edges from the optimal solution are added to the iteration-best solution within [T_0, T_0 + n^{1/6}] with probability 1 − e^{−ω(n^{1/200})}. Note that within [T_0, T_0 + n^{1/6}], with probability 1 − e^{−Ω(n^{1/6})}, at least n^{1/7} × 1 = n^{1/7} “good” edges are removed from the iteration-best solution. Therefore, with overwhelming probability 1 − e^{−Ω(n^{1/200})}, X_{T_0+n^{1/6}}^[1] will contain at most

n − n^{1/4} + n^{1/5} + n^{1/10} − n^{1/7} + n^{1/50} < n − n^{1/4} + n^{1/5}

edges from the optimal solution, since X_{T_0}^[1] contains at most n − n^{1/4} + n^{1/5} + n^{1/10} edges from the optimal solution with probability 1 − e^{−Ω(n^{1/20})}. As a result, with probability 1 − e^{−Ω(n^{1/200})}, the number of common edges in the iteration-best and optimal solution will again be smaller than n − n^{1/4} + n^{1/5} in some iteration after T_0, and the optimal solution is not found before that. And this will repeatedly happen e^{Θ(n^{1/300})} times with probability 1 − e^{−Ω(n^{1/200})}.

To finish the proof, we now formally prove 1) and 2). We first consider 2). By taking k = 2 and considering the C(n, 2) 2-exchanges that happen in the first n − 3√n steps in the proof of Claim 4, one can show a tighter probability lower bound e^{−5} for producing 2-exchanges of X_t^[1] by Algorithm 3. Here, we observe that the probability of choosing a high edge at a step before n − 3√n is at least 1 − 3/(n − 2), see the proof of Claim 3.
Note that if 2-exchanges deleting 2 edges from the optimal solution happen N times in an iteration, then the iteration will be a failure iteration. By the above and the fact that any two k-exchanges happen with the same probability, a failure iteration then occurs with a probability at least

(e^{−5} · C(n−n^δ, 2)/C(n, 2))^N ≥ (e^{−5} · C(n−n^δ, 2)/C(n, 2))^{(1/220) ln n} ∈ Ω(n^{−1/44}),

where δ ∈ (0, 1/4) and N < (1/220) ln n. This asserts 2).

1) follows with a similar discussion. Since X_t^[1] is assumed to contain at least n − n^δ edges from the optimal solution for some δ ∈ (0, 1/4), and since Ω(n^δ)-exchanges happen with an overwhelmingly small probability, we need to consider only O(n^δ)-exchanges when we estimate the probability of a successful iteration. For each k ∈ O(n^δ), the proportion of failure k-exchanges is bounded from below by

C(n−n^δ, k)/C(n, k) = e^{−2kn^δ/n} + o(1) ≥ e^{−2n^{−1/2}} + o(1),

since 0 < δ < 1/4, and k-exchanges removing k edges shared by the iteration-best and optimal solution are not “successful” k-exchanges. Since for any k ∈ O(n^δ), any two k-exchanges happen with the same probability, and since the sum of the probabilities of successful and failure k-exchanges is smaller than 1, we conclude that successful O(n^δ)-exchanges happen with a probability smaller than 1 − e^{−2n^{−1/2}} ∈ O(1/√n). Therefore, a successful iteration happens with a probability 1 − (1 − O(1/√n))^N ∈ O(ln n/√n) since N < (1/220) ln n.
n
Theorem 3 generalizes the finding of [20] to simple TSP instances. It
formally states that for ρ = 1, N ∈ Ω(ln n) is necessary to efficiently find
an optimal solution to TSP. By Theorem 3, Theorem 1, Theorem 2 and its
Corollary 1, we have clearly analyzed the impact of the size of N on the
resulting stochastic runtime for the simple TSP instances in the case of that
ρ = 1. N ∈ ω(ln n) is sufficient to find the optimal solution in a stochastically
polynomial runtime, and the degree of the polynomial may increase with N ,
but the probability guaranteeing the runtime is also increasing with N .
5.2. Stochastic runtime analysis for grid instances
Now, we consider more general TSP instances. Herein, the n vertices are
positioned on an m × m grid for some integer m ∈ N+ . The vertices are
positioned in a way that no three of them are collinear. Figure 3 gives an example of such an instance where m = 5 and n = 8.

Figure 3: A grid instance

The weight of an edge
{l, k} ∈ E in this case is defined as the usual Euclidean distance d(l, k)
between vertex l and vertex k for every l, k = 1, . . . , n. In this section, we
shall refer to these TSP instances as grid instances.
Grid instances have been studied in [43] and [30]. Sutton and Neumann
[43] investigated the expected runtime of (1+1) EA and RLS for these instances. As a continuation of [43], Sutton et al [30] further proved that
the more extensive algorithm (µ + λ) EA finds an optimal solution for the instances in expectation in

O((µ/λ)·n^3 m^5 + n·m^5 + (µ/λ)·n^{4k}·(2k−1)!)

iterations if each of the λ selected parents is mutated by taking a random number of consecutive 2-exchange moves, and in expectation in

O((µ/λ)·n^3 m^5 + n·m^5 + (µ/λ)·n^{2k}·(k−1)!)
iterations with a mixed mutation operator, where k denotes the number of
vertices that are not on the boundary of the convex hull of V. Sutton et
al [30] also studied general Euclidean TSP instances (without collinearity)
and showed similar results in terms of the maximum distance value dmax ,
the minimum distance value dmin , k and the minimum angle in the triangles
formed by the vertices.
Before we present our stochastic runtime, we summarize some structural
properties of grid instances (some just follow from properties of general Euclidean instances). We say that two different edges {i, j} and {k, l} intersect with each other if there exists a point p with p ∉ {i, j, k, l} that lies on both of the two edges, see, e.g., Figure 4a. We say that a solution is intersection-free if the corresponding Hamiltonian cycle does not contain intersections, see, e.g., Figure 4b.
Figure 4: Examples for intersections. (a) intersection; (b) intersection free
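For illustration, whether two edges of a grid instance intersect in an interior point can be tested with a standard orientation (cross-product) check. The sketch below assumes that vertices are given by their integer grid coordinates and relies on the general-position assumption (no three collinear vertices) made for grid instances.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): > 0 left turn, < 0 right turn, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def edges_intersect(a, b, c, d):
    """True iff the segments a-b and c-d cross in a point that is not an endpoint.
    Assumes general position (no three collinear points), as for grid instances."""
    if len({a, b, c, d}) < 4:
        return False                     # edges sharing an endpoint do not count as intersecting
    return (orientation(a, b, c) * orientation(a, b, d) < 0 and
            orientation(c, d, a) * orientation(c, d, b) < 0)
```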
Obviously, the triangle inequality [49] holds for grid instances. Therefore,
removing an intersection by a (unique) 2-exchange move in a solution strictly
reduces the total traveling cost, see Figure 4a. Lemma 1 states the well known
fact that an optimal solution of grid instances is intersection-free.
Lemma 1. Optimal solutions of grid instances are intersection-free.
Figure 5: Example for a 2-opt move
We now restrict 2-opt moves to 2-exchange moves that remove an intersection. For example, removing edges {i, j}, {k, l} in Figure 5 and adding
new edges {i, l}, {k, j} form such a 2-opt move. Lemma 2 below says that
for grid instances, removing one intersection may reduce the total traveling cost by Ω(m^{−4}) if it is applicable. We omit the simple proof here. Interested readers may refer to [30] for a proof.

Lemma 2. If a feasible solution to a grid instance contains intersections, then removing an intersection can reduce the total traveling cost by Ω(m^{−4}).
The convex hull Y(V) of the vertex set V is the smallest convex set in R² that contains V. Its boundary is a convex polygon spanned by some vertices, with possibly other vertices in the interior of that polygon. Let V^b denote the set of vertices on the boundary of Y(V). Figure 6 illustrates this.
Figure 6: Example of a convex hull
Quintas and Supnick [50] proved that if a solution s is intersection-free,
then the solution respects the hull-order, i.e., any two vertices in the subsequence of s induced by the boundary (the outer polygon) of Y(V ) are
consecutive in s if and only if they are consecutive on the boundary of Y(V ).
Therefore, if V^b = V, i.e., all of the vertices are on the convex hull, then
every intersection-free solution is optimal.
Theorem 4 below analyzes the stochastic runtime of Algorithm 1 for grid instances for the case that V = V^b. It states that the stochastic runtime is O(n^4·m^{5+ε}) for the vertex-based random solution generation, and O(n^3·m^{5+ε}) for the edge-based random solution generation. Corollary 2 further improves the runtime by sacrificing the probability guarantee. These stochastic runtimes are close to the expected runtime O(n^3·m^5) for RLS reported by Sutton et al [43] and [30].

Theorem 4. Consider a TSP instance with n vertices located on an m × m grid such that no three of them are collinear. Assume that V^b = V, i.e., every vertex in V is on the convex hull V^b, that we apply the max-min calibration (6) with π_max = 1 − 1/n, π_min = 1/(n(n−2)), ρ = 1, M = 1 and N ∈ Ω(m^ε) for some constant ε > 0. Then:
a) With an overwhelming probability of 1 − e^{−Ω(N)}, Algorithm 1 finds the optimal solution within at most n^4·m^5 iterations with the vertex-based random solution generation.
b) With an overwhelming probability of 1 − e^{−Ω(N)}, Algorithm 1 finds an optimal solution within at most n^3·m^5 iterations with the edge-based random solution generation.
Proof of Theorem 4. Note that under the conditions of Theorem 4, every intersection-free solution is optimal. By Lemma 2, we know that a 2-opt move reduces the total traveling cost by Ω(m^{−4}). Therefore, n·m^5 consecutive 2-opt moves turn a feasible solution into an optimal one, since the worst solution in this case has a total traveling cost smaller than n·m and the optimal solution has a total traveling cost larger than n. Notice also that m ≥ n/2, since the n vertices are positioned on the m × m grid and no three of them are collinear. With these facts, we prove the Theorem by a similar argument to the one used in the proof of Theorem 2.

Again, we consider the random event that the cost of the iteration-best solution does not increase within a specified period of polynomially many iterations and strictly decreases sufficiently many times within that period. For a), we consider the first n^4 m^5 iterations. For b), we consider the first n^3 m^5 iterations.

For a): By Claim 2, with probability (1 − (1 − Ω(1))^N)^{n^4 m^5} = 1 − e^{−Ω(N)}, the cost of the iteration-best solution does not increase within n^4 m^5 iterations. By Claim 1, for a phase consisting of n^3 consecutive iterations, with probability 1 − (1 − n^{−3})^{N·n^3} = 1 − e^{−Ω(N)}, in at least one iteration of that phase an intersection is removed from the iteration-best solution, provided the phase starts with an iteration-best solution containing at least one intersection. Since the first n^4 m^5 iterations can have n·m^5 such phases, a) follows.

b) follows with an almost identical discussion. We therefore omit the proof.
Corollary 2. Consider a TSP instance with n vertices located on an m × m grid such that no three of them are collinear. Assume that V^b = V, i.e., every vertex in V is on the convex hull V^b, that we apply the max-min calibration (6) with π_max = 1 − 1/n, π_min = 1/(n(n−2)), ρ = 1, M = 1 and N ∈ ω(ln m). Then:
a) With probability 1 − m^{−ω(1)}, Algorithm 1 finds the optimal solution within at most n^4·m^5 iterations with the vertex-based random solution generation.
b) With probability 1 − m^{−ω(1)}, Algorithm 1 finds an optimal solution within at most n^3·m^5 iterations with the edge-based random solution generation.
Now, we consider the more interesting case that |V | − |V b | = k ∈ O(1),
i.e., k vertices are not on the convex hull. Note that we can turn an arbitrary
intersection-free solution to an optimal solution only by rearranging the positions of those k interior points in that solution, and this requires at most k
consecutive jump moves (see [30] for a proof). A jump move δi,j transforms a
solution into another solution by shifting positions i, j as follows. Solution s
is transformed into solution δi,j (s) by moving the vertex at position i into position j while vertices at positions between i and j are shifted appropriately,
e.g.,
δ2,5 (i1 , i2 , i3 , i4 , i5 , i6 , i7 ) = (i1 , i3 , i4 , i5 , i2 , i6 , i7 ) and
δ5,2 (i1 , i2 , i3 , i4 , i5 , i6 , i7 ) = (i1 , i5 , i2 , i3 , i4 , i6 , i7 ).
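A minimal Python sketch of the jump move δ_{i,j} (our own code, using 1-based positions as in the example above):

    def jump(solution, i, j):
        # delta_{i,j}: move the vertex at position i to position j (1-based),
        # shifting the vertices in between accordingly.
        s = list(solution)
        v = s.pop(i - 1)
        s.insert(j - 1, v)
        return tuple(s)

    s = ('i1', 'i2', 'i3', 'i4', 'i5', 'i6', 'i7')
    assert jump(s, 2, 5) == ('i1', 'i3', 'i4', 'i5', 'i2', 'i6', 'i7')
    assert jump(s, 5, 2) == ('i1', 'i5', 'i2', 'i3', 'i4', 'i6', 'i7')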
It is not difficult to see that a jump move δi,j can be simulated by either a
2-exchange move (in the case that |i−j| = 1) or a 3-exchange move (in all
other cases). Therefore, we can actually turn an intersection-free solution
into an optimal one by a sequence of at most k consecutive 2-exchange or
3-exchange moves. Furthermore, a sequence of k consecutive 2-exchange or
3-exchange moves can be simulated by a κ-exchange move with an integer
κ ≤ 3k. This means that any intersection-free solution can be turned into an
optimal solution by a κ-exchange move with κ ≤ 3k. We shall call such a κexchange move in the sequel a 3k-opt move, although κ may be smaller than
3k. Recall that a 3k-opt move is produced with a probability of Ω(1/n^{6k−1}) by Algorithm 2 (see Claim 1), and with a probability of Ω(1/n^{3k}) by Algorithm 3 (see Lemma 6 of [42], or Claim 4) in any of the N independent draws in iteration t, if X_{t−1}^{[1]} is intersection-free and not optimal. As a result, we obtain Theorem 5 below by a similar proof as above.
Theorem 5. Consider a TSP instance with n vertices located on an m × m grid such that no three of them are collinear. Assume that |V| − |V^b| = k ∈ O(1) (k vertices are not on the convex hull V^b), and that we apply the max-min calibration (6) with π_max = 1 − 1/n, π_min = 1/(n(n−2)), and set ρ = 1, M = 1, for some constant ε > 0. Then:
a) If we set N ∈ Ω(n^3 · m^ε), then with an overwhelming probability of 1 − e^{−Ω(N/n^3)}, Algorithm 1 finds an optimal solution within at most n · m^5 + n^{6k−4} iterations with the vertex-based random solution generation;
b) If we set N ∈ Ω(n^2 · m^ε), then with an overwhelming probability of 1 − e^{−Ω(N/n^2)}, Algorithm 1 finds an optimal solution within at most n · m^5 + n^{3k−2} iterations with the edge-based random solution generation.
Proof of Theorem 5. We only prove a); b) can be derived by a very similar argument. We define two random events as follows:
E1: for each t ≤ n · m^5 + n^{6k−4}, f(X_{t−1}^{[1]}) ≥ f(X_t^{[1]});
E2: for each t ≤ n · m^5 + n^{6k−4}, if X_{t−1}^{[1]} is not intersection-free, then a 2-opt move happens in iteration t.
By a similar argument as the one for Theorem 4, we obtain that P[E1 ∩ E2] ≥ 1 − e^{−Ω(N/n^3)}. Let η be a random variable denoting the number of iterations for which X_{t−1}^{[1]} is intersection-free. Notice that, conditioned on E1 ∩ E2, η ≥ n · m^5 implies that an optimal solution occurs within n · m^5 + n^{6k−4} iterations.
Conditioned on E1 ∩ E2 and η < n · m^5, there are at least Ω(n^{6k−4}) iterations in which X_{t−1}^{[1]} is intersection-free, since each X_{t−1}^{[1]} is either intersection-free or not intersection-free. Note also that in each iteration in which X_{t−1}^{[1]} is intersection-free and not optimal, a 3k-opt move that turns X_{t−1}^{[1]} into an optimal solution happens with probability at least 1 − (1 − Ω(1/n^{6k−1}))^N. This means that for any fixed t ∈ N, if X_{t−1}^{[1]} is intersection-free, then the probability of the event that X_t^{[1]} is optimal is bounded from below by 1 − (1 − Ω(1/n^{6k−1}))^N. Therefore, for any fixed Ω(n^{6k−4}) iterations in which the iteration-best solution X_{t−1}^{[1]} is intersection-free and not optimal, the probability of the event that the corresponding Ω(n^{6k−4}) X_t^{[1]}'s are still not optimal is bounded from above by (1 − Ω(1/n^{6k−1}))^{N·n^{6k−4}} = e^{−Ω(N/n^3)}. This means that, conditioned on E1 ∩ E2 and η < n · m^5, an optimal solution occurs within n · m^5 + n^{6k−4} iterations with a probability of 1 − e^{−Ω(N/n^3)}.
As a result, an optimal solution occurs within the first n · m^5 + n^{6k−4} iterations with a probability of 1 − e^{−Ω(N/n^3)}.
Theorem 5 shows a stochastic runtime of n^3 m^{5+ε} + n^{6k−1} m^ε for Algorithm 1 equipped with the vertex-based solution generation, and a stochastic runtime of n^3 m^{5+ε} + n^{3k} m^ε for Algorithm 1 equipped with the edge-based solution generation, in the case that |V| − |V^b| = k ∈ O(1). This is much better than the expected runtime
O(µ · n^3 m^5 + n m^5 + µ · n^{4k} (2k−1)!)
for the (µ+λ) EA with sequential 2-opt mutations reported by Sutton et al. [30]. However, we are not able to analyze the stochastic runtime in the case that k ∈ ω(1), since k ∈ ω(1) interior points may require super-polynomially many iterations to turn an intersection-free solution into an optimal solution when a polynomial sample size is used.
6. Conclusion
We have analyzed the stochastic runtime of a CE algorithm on two classes
of TSP instances under two different random solution generation methods.
The stochastic runtimes are comparable with corresponding expected runtimes reported in the literature.
Our results show that the edge-based random solution generation method
makes the algorithm more efficient for TSP instances in most cases. Moreover, N ∈ Ω(ln n) is necessary for efficiently finding an optimal solution with
iteration-best reinforcement. For simple instances, N ∈ ω(ln n) is sufficient
to efficiently find an optimal solution with an overwhelming probability, and
N ∈ O(ln n) results in an exponential runtime with an overwhelming probability. However, for more difficult instances, one may need to use a relatively
large sample size.
Our stochastic runtimes are better than the expected runtimes of the
(µ + λ) EA on the grid instances. The EA randomly changes local structures
of some of its current solutions by a Poisson distributed number of consecutive 2-exchange moves in every iteration, while our algorithm refrains from
local operations on current solutions and only refreshes solutions by sampling from an evolving distribution. The solution reproducing mechanism in
the EA stays the same throughout the optimization; only the current solutions in every iteration vary. However, the solution reproducing mechanism (sampling distribution) of our algorithm also evolves. This is the essential difference between MBS and traditional EAs. The comparison of our results with the expected runtimes in [30] therefore shows that using a self-adaptive, dynamic solution reproducing mechanism is helpful (in efficiently finding an optimal solution) when the search space becomes rugged. The stochastic runtimes in Theorem 5 are only valid for instances with a bounded number of interior points. In the future, it should be interesting to analyze the case that |V| − |V^b| ∈ ω(1). This might also give more insight into the problem of RP vs. P [51].
Our analysis is actually a kind of worst-case analysis, which is rather
pessimistic. We analyze the optimization progress by only checking some very
particular random events. This may not only underestimate the probability
of finding an optimal solution with our algorithm, but also overestimate the
required number of iterations. In the future, it should be of great interest to
consider a smoothed runtime analysis over an ε-neighborhood of the n nodes
in the real plane as has been done for the Simplex method by Spielman and
Teng in their famous paper [52].
Acknowledgment
We thank the anonymous reviewers for their numerous useful suggestions
on improving the scientific quality and English presentation of this article.
References
[1] R. Y. Rubinstein, D. P. Kroese, The cross-entropy method: a unified
approach to combinatorial optimization, Monte-Carlo simulation and
machine learning, Springer Science & Business Media, 2004.
[2] R. Y. Rubinstein, Optimization of computer simulation models with rare
events, European Journal of Operational Research 99 (1) (1997) 89–112.
[3] R. Y. Rubinstein, The cross-entropy method for combinatorial and continuous optimization, Methodology and computing in applied probability 1 (2) (1999) 127–190.
[4] M. Dorigo, T. Stützle, Ant colony optimization, Cambridge, Massachusetts: A Bradford Book, MIT Press, 2004.
[5] M. Hauschild, M. Pelikan, An introduction and survey of estimation
of distribution algorithms, Swarm and Evolutionary Computation 1 (3)
(2011) 111–128.
[6] M. Zlochin, M. Birattari, N. Meuleau, M. Dorigo, Model-based search
for combinatorial optimization: A critical survey, Annals of Operations
Research 131 (1-4) (2004) 373–395.
[7] D. Whitley, A genetic algorithm tutorial, Statistics and computing 4 (2)
(1994) 65–85.
[8] H. R. Lourenço, O. C. Martin, T. Stützle, Iterated local search, Springer,
2003.
[9] Z. Wu, Model-based heuristics for combinatorial optimization: a mathematical study of their asymptotic behavior, Ph.D. thesis, Institut für
Angewandte Stochastik und Operations Research (IASOR), Technical
University of Clausthal (2015).
[10] T. Stützle, H. H. Hoos, MAX-MIN ant system, Journal of Future Generation Computer Systems 16 (2000) 889–914.
[11] S. Droste, T. Jansen, I. Wegener, On the analysis of the (1+1) evolutionary algorithm, Theoretical Computer Science 276 (1-2) (2002) 51–81.
[12] J. He, X. Yao, Drift analysis and average time complexity of evolutionary
algorithms, Artificial Intelligence 127 (1) (2001) 57–85.
[13] F. Neumann, C. Witt, Runtime analysis of a simple ant colony optimization algorithm, Tech. rep., Departmant of Computer Science, University
of Dortmund, Germany (2006).
[14] C. Witt, Runtime analysis of the (µ+1) EA on simple pseudo-Boolean
functions, Evolutionary Computation 14 (1) (2006) 65–86.
[15] F. Neumann, C. Witt, Runtime analysis of a simple ant colony optimization algorithm, Algorithmica 54 (2) (2009) 243–255.
[16] B. Doerr, F. Neumann, D. Sudholt, C. Witt, Runtime analysis of the 1ant ant colony optimizer, Theoretical Computer Science 412 (17) (2011)
1629–1644.
[17] W. J. Gutjahr, G. Sebastiani, Runtime analysis of ant colony optimization with best-so-far reinforcement, Methodology & Computing in Applied Probability 10 (3) (2008) 409–433.
[18] Y. Zhou, J. He, A runtime analysis of evolutionary algorithms for constrained optimization problems, IEEE Transactions on Evolutionary
Computation 11 (5) (2007) 608–619.
[19] Y. Zhou, Runtime analysis of an ant colony optimization algorithm for
TSP instances, IEEE Transactions on Evolutionary Computation 13 (5)
(2009) 1083–1092.
[20] F. Neumann, D. Sudholt, C. Witt, A few ants are enough: ACO with iteration-best update, in: Genetic and Evolutionary Computation Conference, GECCO 2010, Proceedings, Portland, Oregon, USA, July, 2010,
pp. 63–70.
[21] P. S. Oliveto, C. Witt, Improved time complexity analysis of the simple
genetic algorithm, Theoretical Computer Science 605 (15) (2015) 21–41.
[22] D. Sudholt, C. Thyssen, Runtime analysis of ant colony optimization for
shortest path problems, Journal of Discrete Algorithms 10 (10) (2012)
165–180.
[23] A. Lissovoi, C. Witt, Runtime analysis of ant colony optimization on dynamic shortest path problems, Theoretical Computer Science 561 (2015)
73–85.
[24] Y. Chen, X. Zou, Runtime analysis of a multi-objective evolutionary algorithm for obtaining finite approximations of pareto fronts, Information
Sciences 262 (2014) 62–77.
[25] D. Sudholt, C. Witt, Update strength in edas and aco: How to avoid
genetic drift, in: Genetic and Evolutionary Computation Conference,
2016, pp. 61–68.
[26] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1 (1) (1997) 67–
82.
[27] F. Neumann, D. Sudholt, C. Witt, Analysis of different MMAS ACO algorithms on unimodal functions and plateaus, Swarm Intelligence 3 (2009)
35–68.
[28] F. Neumann, I. Wegener, Randomized local search, evolutionary algorithms, and the minimum spanning tree problem, Theoretical Computer
Science 378 (2007) 32–40.
[29] J. Reichel, M. Skutella, Evolutionary algorithms and matroid optimization problems, Algorithmica 57 (1) (2010) 187–206.
[30] A. M. Sutton, F. Neumann, S. Nallaperuma, Parameterized runtime
analyses of evolutionary algorithms for the planar euclidean traveling
salesperson problem, Evolutionary Computation 22 (4) (2014) 595–628.
[31] A. M. Sutton, J. Day, F. Neumann, A parameterized runtime analysis
of evolutionary algorithms for max-2-sat, in: Conference on Genetic &
Evolutionary Computation, 2012, pp. 433–440.
[32] Y. Zhou, X. Lai, K. Li, Approximation and parameterized runtime analysis of evolutionary algorithms for the maximum cut problem, IEEE
Transactions on Cybernetics 45 (8) (2015) 1491–1498.
[33] Z. Wu, M. Kolonko, Asymptotic properties of a generalized cross entropy
optimization algorithm, IEEE Transactions on Evolutionary Computation 18 (5) (2014) 658 – 673.
[34] Z. Wu, M. Kolonko, R. H. Möhring, Stochastic runtime analysis of the
cross entropy algorithm, IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2017.2667713.
[35] M. Held, R. M. Karp, A dynamic programming approach to sequencing
problems, Journal of the Society for Industrial and Applied Mathematics
10 (1) (1962) 196–210.
[36] N. Christofides, Worst-case analysis of a new heuristic for the travelling
salesman problem, Tech. rep., Graduate School of Industrial Administration, CMU (1976).
[37] M. T. Goodrich, R. Tamassia, Algorithm Design and Applications, Wiley, 2015.
[38] S. Arora, Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems, Journal of the ACM 45 (5)
(1998) 753–782.
[39] J. S. B. Mitchell, A constant-factor approximation algorithm for tsp with
pairwise-disjoint connected neighborhoods in the plane, in: TwentySixth Symposium on Computational Geometry, 2010, pp. 183–191.
[40] S. Lin, B. W. Kernighan, An effective heuristic algorithm for the
traveling-salesman problem, Operations Research 21 (2) (1973) 498–516.
[41] P. T. D. Boer, D. P. Kroese, S. Mannor, R. Y. Rubinstein, A tutorial
on the cross-entropy method, Annals of Operations Research 134 (1)
(2005) 19–67.
[42] T. Kötzing, F. Neumann, H. Röglin, C. Witt, Theoretical analysis of two
aco approaches for the traveling salesman problem, Swarm Intelligence
6 (1) (2012) 1–21.
[43] A. M. Sutton, F. Neumann, A parameterized runtime analysis of evolutionary algorithms for the euclidean traveling salesperson problem, in:
Proceedings of the Twenty-Sixth Conference on Artificial Intelligence
(AAAI’12), AAAI press, 2012, pp. 1105–1111.
[44] A. Costa, O. D. Jones, D. Kroese, Convergence properties of the crossentropy method for discrete optimization, Operations Research Letters
35 (5) (2007) 573–580.
[45] Z. Wu, M. Kolonko, Absorption in model-based search algorithms for
combinatorial optimization, in: Evolutionary Computation (CEC), 2014
IEEE Congress on, IEEE, 2014, pp. 1744–1751.
[46] M. Thomas, Machine learning, New Delhi: McGraw Hill Education India, 1997.
[47] H. Asoh, H. Mühlenbein, On the mean convergence time of evolutionary
algorithms without selection and mutation, in: Parallel Problem Solving
from Nature—PPSN III, Springer, 1994, pp. 88–97.
[48] M. Pirlot, General local search methods, European Journal of Operational Research 92 (3) (1996) 493–511.
[49] M. A. Khamsi, W. A. Kirk, An introduction to metric spaces and fixed
point theory, John Wiley, 2001.
[50] L. V. Quintas, F. Supnick, On some properties of shortest Hamiltonian
circuits, American Mathematical Monthly 72 (9) (1965) 977–980.
[51] W. Gasarch, Classifying problems into complexity classes, Advances in
Computers 95 (2015) 239–292.
[52] D. A. Spielman, S. H. Teng, Smoothed analysis of algorithms: Why the
simplex algorithm usually takes polynomial time, Journal of the ACM
51 (3) (2004) 385–463.
| 8 |
arXiv:1508.04596v1 [physics.flu-dyn] 19 Aug 2015
Large scale three-dimensional topology optimisation of
heat sinks cooled by natural convection
Joe Alexandersen∗, Ole Sigmund, Niels Aage
Department of Mechanical Engineering, Solid Mechanics, Technical University of Denmark, Nils Koppels Allé, Building 404, DK-2800, Denmark
Abstract
This work presents the application of density-based topology optimisation
to the design of three-dimensional heat sinks cooled by natural convection.
The governing equations are the steady-state incompressible Navier-Stokes
equations coupled to the thermal convection-diffusion equation through the
Bousinessq approximation. The fully coupled non-linear multiphysics system
is solved using stabilised trilinear equal-order finite elements in a parallel
framework allowing for the optimisation of large scale problems with order of
40-330 million state degrees of freedom. The flow is assumed to be laminar
and several optimised designs are presented for Grashof numbers between
103 and 106 . Interestingly, it is observed that the number of branches in
the optimised design increases with increasing Grashof numbers, which is
opposite to two-dimensional optimised designs.
Keywords: topology optimisation, heat sink design, natural convection,
large scale, multiphysics optimisation
1. Introduction
Natural convection is the phenomenon whereby density gradients due to temperature differences cause a fluid to move. Natural convection is therefore
a natural way to passively cool a hot object, such as electronic components,
light-emitting diode lamps or materials in food processing.
∗
Corresponding author
Email address: [email protected] (Joe Alexandersen)
Topology optimisation is a material distribution method [1] used to optimise the layout of a structure in order to minimise a given performance
measure subject to design constraints and a physical model. In order to take
convective heat transfer, to an ambient fluid, into account in the design process of density-based methods, a common extension is to introduce some form
of interpolation of the convection boundaries, see e.g. [2, 3, 4]. More recently,
Dede et al. [5] used these simplified models to design and manufacture heat
sinks subject to jet impingement cooling. However, it is hard to justify the
application of a predetermined and constant convection coefficient, because
topology optimisation often leads to unanticipated designs, closed cavities
and locally varying velocities. During the optimisation process, the design
also changes significantly and the interaction with the ambient fluid changes.
Therefore, to ensure physically correct capturing of the aspects of convective
heat transfer, the full conjugate heat transfer problem must be solved.
Topology optimisation for fluid systems began with the treatment of
Stokes flow in the seminal article by Borrvall and Petersson [6] and has
since been applied to Navier-Stokes [7], as well as scalar transport problems [8] amongst others. The authors have previously presented a densitybased topology optimisation approach for two-dimensional natural convection problems [9]. Recently, Coffin and Maute presented a level-set method
for steady-state and transient natural convection problems using X-FEM
[10]. Interested readers are referred to [9] for further references on topology
optimisation in fluid dynamics and heat transfer.
Throughout this article, the flows are assumed to be steady and laminar.
The fluid is assumed to be incompressible, but buoyancy effects are taken
into account through the Boussinesq approximation, which introduces variations in the fluid density due to temperature gradients. The inclusion of a
Brinkman friction term facilitates the topology optimisation of the fluid flow.
The scope of this article is primarily to present large scale three-dimensional
results using the formulation presented in [9], as well as the computational
issues arising from solving the non-linear equation system and the subsequent
linear systems. Thus, only a brief overview of the underlying finite element
and topology optimisation formulations are given and the reader is referred
to [9] for further information.
In recent years an increasing body of work has been published on efficient
large scale topology optimisation. These works cover the use of high-level
scripting languages [11, 12], multiscale/-resolution approaches [13, 14] and
parallel programming using the message passing interface (MPI) and C/Fortran
[15, 16, 17, 18]. To facilitate the solution to truly large scale conjugate heat
transfer problems, the implementation in this paper is done using PETSc
[19] and the framework for topology optimisation presented in [18].
The layout of the paper is as follows: Section 2 presents the governing
equations; Section 3 presents the topology optimisation problem; Section
4 briefly discusses the finite element formulation; Section 5 discusses the
numerical implementation details; Section 6 presents scalability results for
the parallel framework and optimised designs for a test problem; Section 7
finishes with a discussion and conclusion.
2. Governing equations
The dimensionless form of the governing equations has been derived
based on the Navier-Stokes and convection-diffusion equations under the assumption of constant fluid properties, incompressible, steady flow and neglecting viscous dissipation. Furthermore, the Boussinesq approximation has
been introduced to take density-variations due to temperature-differences
into account. A domain is decomposed into two subdomains, Ω = Ωf ∪ Ωs ,
where Ωf is the fluid domain and Ωs is the solid domain. In order to facilitate
the topology optimisation of conjugate natural convective heat transfer between a solid and a surrounding fluid, the equations are posed in the unified
domain, Ω, and the subdomain behaviour is achieved through the control of
coefficients. The following dimensionless composite equations are the result.
∀x ∈ Ω:
u_j ∂u_i/∂x_j − Pr ∂/∂x_j ( ∂u_i/∂x_j + ∂u_j/∂x_i ) + ∂p/∂x_i = −α(x) u_i − Gr Pr^2 e^g_i T    (1)
∂u_j/∂x_j = 0    (2)
u_j ∂T/∂x_j − ∂/∂x_j ( K(x) ∂T/∂x_j ) = s(x)    (3)
where ui is the velocity field, p is the pressure field, T is the temperature field,
xi denotes the spatial coordinates, egi is the unit vector in the gravitational
direction, α(x) is the spatially-varying effective impermeability, K(x) is the
spatially-varying effective thermal conductivity, s(x) is the spatially-varying
volumetric heat source term, P r is the Prandtl number, and Gr is the Grashof
number.
The effective thermal conductivity, K(x), is defined as:
K(x) = 1 if x ∈ Ω_f,    K(x) = 1/C_k if x ∈ Ω_s    (4)
where C_k = k_f/k_s is the ratio between the fluid thermal conductivity, k_f, and the
solid thermal conductivity, ks . Theoretically, the effective impermeability,
α(x), is defined as:
α(x) = 0 if x ∈ Ω_f,    α(x) = ∞ if x ∈ Ω_s    (5)
in order to ensure zero velocities inside the solid domain. However, numerically this requirement must be relaxed as will be described in section 3. The
volumetric heat source term is defined as being active within a predefined
subdomain of the solid domain, ω ⊂ Ωs :
s(x) = 0 if x ∉ ω,    s(x) = s_0 if x ∈ ω    (6)
where s_0 = q L^2/(k_s ΔT) is the dimensionless volumetric heat generation, q is the dimensional volumetric heat generation, ΔT is the reference temperature difference and L is the reference length scale.
The Prandtl number is defined as:
Pr = ν/Γ    (7)
where ν is the kinematic viscosity, or momentum diffusivity, and Γ is the
thermal diffusivity. It thus describes the relative spreading of viscous and
thermal effects. The Grashof number is defined as:
Gr = g β ΔT L^3 / ν^2    (8)
where g is the acceleration due to gravity and β is the volumetric coefficient of
thermal expansion. It describes the ratio between the buoyancy and viscous
forces in the fluid. The Grashof number is therefore used to describe to what
extent the flow is dominated by natural convection or diffusion. For low
Gr the flow is dominated by viscous diffusion and for high Gr the flow is
dominated by natural convection. The problems in this article are assumed
to have large enough buoyancy present to exhibit natural convective effects,
but small enough Gr numbers to exhibit laminar fluid motion.
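As a small, self-contained illustration of definitions (7) and (8) (the numerical values below are our own, roughly air-like, example inputs and are not taken from the paper):

    def prandtl(nu, Gamma):
        # Pr = nu / Gamma, Eq. (7)
        return nu / Gamma

    def grashof(g, beta, delta_T, L, nu):
        # Gr = g * beta * delta_T * L^3 / nu^2, Eq. (8)
        return g * beta * delta_T * L**3 / nu**2

    nu = 1.5e-5       # kinematic viscosity [m^2/s]
    Gamma = 2.1e-5    # thermal diffusivity [m^2/s]
    print(prandtl(nu, Gamma))                      # ~0.71
    print(grashof(9.81, 3.4e-3, 10.0, 0.1, nu))    # ~1.5e6, convection-dominated but still laminar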
3. Optimisation formulation
3.1. Interpolation functions
In order to perform topology optimisation, a continuous design field, γ(x),
varying between 0 and 1 is introduced. Pure fluid is represented by γ(x) = 1
and solid by γ(x) = 0. For intermediate values between 0 and 1, the effective
conductivity is interpolated as follows:
K(γ) = [ γ(C_k(1 + q_f) − 1) + 1 ] / [ C_k(1 + q_f γ) ]    (9)
and likewise the effective impermeability is interpolated using:
α(γ) = ᾱ (1 − γ)/(1 + q_α γ)    (10)
The interpolation functions ensure that the end points defined in (4) and (5), respectively, are satisfied. The effective impermeability is bounded by ᾱ in the solid regions and this upper bound should be chosen large enough to provide vanishing velocities, but small enough to ensure numerical stability. The
convexity factors, qf and qα , are used to control the material properties for
intermediate design values in order to promote well-defined designs without
intermediate design field values.
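The two interpolation functions translate directly into code. The sketch below (our own) implements Eqs. (9) and (10) and verifies the end points; the parameter values are taken from the first continuation step described in Section 3.3.

    def K_interp(gamma, Ck, qf):
        # Effective conductivity interpolation, Eq. (9).
        return (gamma * (Ck * (1.0 + qf) - 1.0) + 1.0) / (Ck * (1.0 + qf * gamma))

    def alpha_interp(gamma, alpha_max, q_alpha):
        # Effective impermeability interpolation, Eq. (10).
        return alpha_max * (1.0 - gamma) / (1.0 + q_alpha * gamma)

    Ck, qf, q_alpha, alpha_max = 1e-2, 0.881, 8.0, 1e5
    assert abs(K_interp(1.0, Ck, qf) - 1.0) < 1e-12            # pure fluid: K = 1
    assert abs(K_interp(0.0, Ck, qf) - 1.0 / Ck) < 1e-9        # pure solid: K = 1/Ck
    assert alpha_interp(1.0, alpha_max, q_alpha) == 0.0        # pure fluid: free flow
    assert alpha_interp(0.0, alpha_max, q_alpha) == alpha_max  # pure solid: maximum impermeability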
3.2. Optimisation problem
The topology optimisation problem is defined as:
minimise_{γ∈D}   f(γ, T) = ∫_ω s(x) T dV
subject to:      g(γ) = ∫_{Ω_d} (1 − γ) dV ≤ v_f ∫_{Ω_d} dV    (11)
                 R(γ, u, p, T) = 0
                 0 ≤ γ(x) ≤ 1  ∀x ∈ Ω_d
where γ is the design variable field, D is the design space, f is the thermal compliance objective functional, g is the volume constraint functional,
R(γ, u, p, T ) is the residual arising from the stabilised weak form of the governing equations, and Ωd ⊆ Ω is the design domain.
The design field is regularised using a PDE-based (partial differential
equation) density filter [20, 18] and the optimisation problem is solved using
the method of moving asymptotes (MMA) [21, 18].
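On the uniform hexahedral meshes used here, both functionals in (11) reduce to sums over elements. The following minimal sketch (our own notation; elementwise fields stored as NumPy arrays, and the volume constraint rewritten in the equivalent form g ≤ 0) shows the evaluation:

    import numpy as np

    def thermal_compliance(T_elem, s_elem, dV):
        # f = integral over omega of s(x) * T dV; only elements with s != 0 contribute.
        return float(np.sum(s_elem * T_elem) * dV)

    def volume_constraint(gamma_elem, dV, vf):
        # g = integral of (1 - gamma) dV - vf * integral of dV  (feasible if g <= 0)
        return float((np.sum(1.0 - gamma_elem) - vf * gamma_elem.size) * dV)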
3.3. Continuation scheme
A continuation scheme is performed on various parameters in order to
stabilise the optimisation process and to improve the optimisation results.
It is the experience of the authors that the provided continuation scheme
yields better results than starting with the end values, although this cannot
generally be proven [22, 23].
The chosen continuation strategy consists of five steps:
q_f ∈ {0.881, 8.81, 88.1, 881, 881}    (12a)
q_α ∈ {8, 8, 8, 98, 998}    (12b)
ᾱ ∈ {10^5, 10^5, 10^5, 10^6, 10^7}    (12c)
The sequence is chosen in order to alleviate premature convergence to a poor local minimum. The value of q_f is slowly increased to penalise intermediate design field values with respect to conductivity. The maximum effective impermeability, ᾱ, is set relatively low during the first three steps, as this ensures better scaled sensitivities and more stable behaviour. Over the last two steps, ᾱ is increased by two orders of magnitude in order to further decrease
the velocity magnitudes in the solid regions. The particular values of qf are
chosen by empirical inspection such as to ensure the approximate collocation
of the fluid and thermal boundaries. A more comprehensive investigation
into the modelling accuracy of the boundary layers is outside the scope of
this paper and will be investigated in future work.
It is important to note that the optimisation problem is by no means
convex and any optimised design will at best be a local minimum. The obtained design will always depend on the initial design as well as the continuation strategy. However, in the authors' experience, the chosen continuation
strategy gives a good balance between convergence speed, final design performance and physicality of the modelling. The effects of the steps of the
current continuation strategy on the design distribution will be discussed in
section 6.4.
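The continuation strategy can be written as a simple outer loop around the design optimisation; the driver below is our own pseudo-code, where `optimise` stands in for the MMA-driven design loop (state solve, sensitivity analysis and design update) and is not part of the paper. The default number of iterations per step is arbitrary.

    # Continuation steps (q_f, q_alpha, alpha_max) from Eq. (12).
    continuation_steps = [
        (0.881,    8.0, 1e5),
        (8.81,     8.0, 1e5),
        (88.1,     8.0, 1e5),
        (881.0,   98.0, 1e6),
        (881.0,  998.0, 1e7),
    ]

    def run_with_continuation(gamma, optimise, iterations_per_step=100):
        # 'optimise' is a placeholder callback: it advances the design for a fixed
        # number of iterations using the given interpolation parameters.
        for qf, q_alpha, alpha_max in continuation_steps:
            gamma = optimise(gamma, qf=qf, q_alpha=q_alpha,
                             alpha_max=alpha_max, iterations=iterations_per_step)
        return gamma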
4. Finite element formulation
The governing equations are discretised using stabilised trilinear hexahedral finite elements. The design field is discretised using elementwise constant variables, in turn rendering the effective thermal conductivity and the
effective impermeability to be elementwise constant. The monolithic finite
element discretisation of the problem ensures continuity of the temperature
field, as well as the fluxes across fluid-solid interfaces. The particularities of
the implemented finite element formulation are as detailed in [9]. However,
simpler stabilisation parameters have been used in order to allow for consistent sensitivities to be employed, see Appendix B. The Jacobian matrix is
now fully consistent in that variations of the stabilisation parameters with
respect to design and state fields are included, in contrast to the original
work [9].
5. Numerical implementation
The discretised FEM equation is implemented in PETSc [19] based on the
topology optimisation framework presented in [18]. The PETSc framework
is used due to its parallel scalability, the availability of both linear and nonlinear solvers, preconditioners and structured mesh handling possibilities. All
components described in the following are readily available within the PETSc
framework.
5.1. Solving the non-linear system
The non-linear system of equations is solved using a damped Newton
method. The damping coefficient is updated using a polynomial L2 -norm fit,
where the coefficient is chosen as the minimiser of the polynomial fit. The
polynomial fit is built using the L2 -norm of the residual vector at the current
solution point, at 50% of the Newton step and at 100% of the Newton step.
This residual-based update scheme combined with a good initial vector (the
solution from the previous design iteration) has been observed to be very
robust throughout the optimisation process for the moderately non-linear
problems treated. To further increase the robustness of the non-linear solver,
if the Newton solver fails from the supplied initial vector, a ramping scheme
for the heat generation magnitude is applied in order to recover. Throughout,
the convergence criterion for the Newton solver is set to a reduction in the L2-norm of the residual of 10^{−4} relative to the initial residual.
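A simplified version of the described damping update (our own sketch; the paper's exact fit may differ in details such as whether the norm or its square is fitted): the residual norm is sampled at 0%, 50% and 100% of the Newton step, a quadratic polynomial is fitted, and its minimiser is used as the damping coefficient.

    import numpy as np

    def damped_newton_update(x, newton_step, residual_norm, theta_min=0.05, theta_max=1.0):
        # Sample the L2-norm of the residual at 0%, 50% and 100% of the step.
        thetas = np.array([0.0, 0.5, 1.0])
        norms = np.array([residual_norm(x + t * newton_step) for t in thetas])
        # Fit r(theta) ~ a*theta^2 + b*theta + c and take the minimiser of the fit.
        a, b, _ = np.polyfit(thetas, norms, 2)
        theta = -b / (2.0 * a) if a > 0.0 else thetas[np.argmin(norms)]
        theta = float(np.clip(theta, theta_min, theta_max))
        return x + theta * newton_step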
5.2. Solving the linear systems
Due to the large scale and three-dimensional nature of the treated problems, the arising linearised systems of equations are by far the most time
consuming part of the Newton scheme. Therefore, to make large scale problems tractable the (unsymmetric) linear systems are solved using a fully
parallelised iterative Krylov subspace solver.
Constructing an iterative solver that is both independent of problem settings and possesses both parallel and numerical scalability, is intricate and
beyond the scope of this work, where focus is on the optimisation. However,
in order to facilitate the solution of large scale optimisation problems, an
efficient solver is required. To this end the authors use the readily available
core components in PETSc to construct a solver with focus on simplicity, as
well as reduced wall clock time. This is quite easy to obtain for solvers and
preconditioners that rely heavily on matrix-vector multiplications. To this
purpose the flexible GMRES (F-GMRES) method is used as the linear solver
combined with a geometry-based Galerkin-projection multigrid (GMG) preconditioner. Such a solver depends highly on the quality of the smoother
to guarantee fast convergence [24]. The authors have found that a simple
Jacobi-preconditioned GMRES provides a reasonable choice of smoother1 .
The convergence criterion for the Krylov solver is set to 10−5 relative to the
initial residual.
6. Results
6.1. Problem setup
The considered optimisation problem is an (academic) example of a heat
sink in a closed cubic cavity. Figure 1 shows the problem setup with dimensions and boundary conditions. The heat source (black in figure), exemplifying an electronics chip, is placed in the mid-bottom of the cavity and
modelled using a small block of solid material with volumetric heat generation, s0 = 104 . The design domain (gray in figure) is placed on top of the
heat source in order to allow the cooling fluid to pass underneath it, as well
as to allow room for wires etc. The vertical and top outer walls of the cavity
are assumed to be kept at a constant cold temperature, T = 0, while the
bottom is insulated. The height of the cavity has been used as the reference
length scale, L. Thus, the cavity dimensions are 1 × 1 × 1, the design domain dimensions are 0.75 × 0.75 × 0.75 and the heat source dimensions are
0.1 × 0.05 × 0.1.
Figure 1: Illustration of the problem setup for the heat sink in a closed cubic cavity: (a) Dimensions, (b) Boundary conditions. The heat source is black and the design domain is gray.
(Footnote 1: Due to the choice of a Krylov smoother, the multigrid preconditioner will vary slightly with input and thus requires a flexible outer Krylov method.)
A discussion on the definition of the temperature difference
in the Grashof number can be found in Appendix A. Initial investigations
showed that due to the symmetry of the domain and boundary conditions,
the design and state solutions remained quarter symmetric throughout the
optimisation. Thus, the computational domain is limited to a quarter of the
domain with symmetry boundary conditions. The volume fraction is set to
5%, i.e. vf = 0.05, for all examples.
6.2. Parallel performance
To demonstrate the parallel performance of the state solver, the model
optimisation problem described in section 6 is solved on a fixed mesh for different numbers of processes. All computations in this paper were performed
on a cluster, where each node is equipped with two Intel Xeon E5-2680v2 10-core 2.8 GHz processors. The results shown in Tables 1 and 2 are averaged
over 250 design cycles and show the performance for Gr = 103 and Gr = 106 ,
respectively. The data shows that the proposed solver scales almost linearly
in terms of speed up, and more importantly that the performance is only
slightly affected by the Grashof number.
In order to quantify the degree of numerical scalability, a second study is
performed in which the mesh resolution is varied. The study is conducted for
Processes    time [s]    scaling
160          53.2        1.00
320          28.9        0.54
640          14.1        0.26
Table 1: Average time taken per state solve over 250 design iterations for Gr = 10^3 at a mesh resolution of 80 × 160 × 80.
Processes    time [s]    scaling
160          62.6        1.00
320          31.9        0.51
640          16.5        0.26
Table 2: Average time taken per state solve over 250 design iterations for Gr = 10^6 at a mesh resolution of 80 × 160 × 80.
Mesh size          Gr = 10^3    Gr = 10^6
80 × 160 × 80      7.5          5.6
160 × 320 × 160    10.1         7.7
320 × 640 × 320    18.4         15.6
Table 3: Average iterations for the linear solver per state solve over the entire design process for Gr = 10^3 and Gr = 10^6 at varying mesh resolutions.
both low and high Grashof numbers and the average F-GMRES iterations
are collected in Table 3. The total number of design iterations averaged
over was 250, 500 and 1000, respectively, for the three mesh resolutions.
The data clearly shows that the computational complexity increases with
problem size, and thus that the solver is not numerically scalable. However,
since the growth in numerical effort, i.e. the number of F-GMRES iterations,
is moderate we conclude that the proposed solver is indeed applicable for
solving large scale natural convection problems.
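The scaling columns of Tables 1 and 2 follow directly from the reported timings; the small script below (ours) recomputes them together with the corresponding parallel efficiencies.

    # Average time per state solve in seconds (Tables 1 and 2).
    timings = {"Gr=1e3": {160: 53.2, 320: 28.9, 640: 14.1},
               "Gr=1e6": {160: 62.6, 320: 31.9, 640: 16.5}}

    def scaling_column(t, base=160):
        # time relative to the 160-process baseline (the 'scaling' column)
        return {p: t[p] / t[base] for p in sorted(t)}

    def parallel_efficiency(t, base=160):
        # speedup over the baseline divided by the increase in process count
        return {p: (t[base] / t[p]) / (p / base) for p in sorted(t)}

    for case, t in timings.items():
        print(case, scaling_column(t), parallel_efficiency(t))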
6.3. Varying Grashof number
The problem is investigated for varying Gr under constant volumetric
heat generation, s0 = 104 , Prandtl number, P r = 1, and thermal conductivity ratio, Ck = 10−2 . The purpose of the present study is not to provide
a detailed physical example. It is rather to provide phenomenological insight into the effect of changing the governing parameter of the fluid-thermal
coupling, namely the Grashof number. However, physical interpretations of
tuning the Gr-number, while keeping the dimensionless volumetric heat generation and the P r-number constant, could be e.g. equivalent to tuning the
gravitational strength (going from microgravity towards full gravity) or the
dimensional volumetric heat generation. It is important to note that when
interpreting the results, the dimensional temperature scale will differ for the
two interpretations. While tuning the gravitational strength, the temperature scale remains the same; by varying the magnitude of the volumetric heat
generation, the temperature scale varies accordingly2 .
The computational mesh is 160 × 320 × 160 elements yielding a total
of 8, 192, 000 elements and 41, 603, 205 degrees of freedom for the quarter
domain. The design domain consists of 3, 456, 000 elements and the filter
radius is set to 2.5 times the element size.
Figure 2 shows the optimised designs for varying Gr-number with superimposed slices of the corresponding temperature fields. Due to the use of
density filtering, the interface between solid and fluid regions for the optimized designs are not exactly described but consists of a thin layer of intermediate design field values. The optimised design distributions are shown
(Footnote 2: The dimensionless volumetric heat generation is given by s_0 = q L^2/(k_s ΔT). Requiring that s_0 remains constant means that the dimensional volumetric heat generation, q, and the reference temperature difference, ΔT, must vary in unison, i.e. q/ΔT = const. An increase in the Gr-number is thus achieved through an increase of ΔT.)
Figure 2: Optimised designs for varying Gr-number at a mesh resolution of 160 × 320 × 160: (a) Gr = 10^3, (b) Gr = 10^4, (c) Gr = 10^5, (d) Gr = 10^6.
Gr      Primary    Secondary    Surface area
10^3    12         0            0.887
10^4    8          16           0.853
10^5    8          28           0.834
10^6    8          48           0.846
Table 4: The number of primary and secondary branches, as well as the surface area for the optimised designs of figure 2.
thresholded at γ = 0.05, which is the approximate location of the computational interface. The general characteristics of all the designs are similar, i.e.
all designs are “thermal trees” with conductive branches moving the heat
away from the heat source. However, it can clearly be seen that the design
changes considerably with increasing convection-dominance (increasing Gr).
For increasing Gr-number the conductive branches contract, resulting in a
smaller spatial extent of the overall heat sink. This intuitively makes sense as
the problem goes from one of conduction/diffusion at Gr = 103 to convection
at Gr = 106 . When diffusion dominates, the goal for the branches essentially
becomes to conduct the heat directly towards the cold walls. As convection
begins to matter, the fluid movement aids the transfer of heat away from the
heat sink and the branches do not need to be as long. Instead, the design
forms higher vertical interfaces in order to increase surface area perpendicular to the flow direction and thus better transfer of heat to the fluid. At the
same time the complexity of the designs increases as the importance of convection increases. This can be seen by studying the number of primary and
secondary branches. Primary branches are defined as those connected to the
heat source directly and secondary branches are those connected to primary
branches. Table 4 shows the number of primary and secondary branches of
the optimised designs shown in figure 2. The number of primary branches is
largest for the diffusion-dominated case, but more or less constant thereafter.
However, it can be seen that the number of secondary branches significantly
increases as the Gr-number is increased. Table 4 also shows that the total
surface area3 is decreasing for the three lower Gr-numbers and then increases
slightly.
(Footnote 3: The surface area is computed using an isosurface at the selected threshold design field value.)
(a) Gr = 103
(b) Gr = 106
Figure 3: Temperature distribution in optimised designs for Gr = 103 and
Gr = 106 at a mesh resolution of 160 × 320 × 160 - view from below.
It is interesting to note, that the trend of increasing complexity with
increasing Gr-number is the reverse of what was observed for two-dimensional
problems [9]. There, the complexity of the design decreased as the Gr-number
increased. This difference is likely due to the fact that going into threedimensions allows the fluid to move around and through the design making
it more a question of “topology”, in contrast to in two-dimensions where it is a
question of surface shape. Physically, in two-dimensions additional branches
disturb the flow and thus the heat transfer; in three-dimensions, vertical
branches improve the heat transfer through an increased vertical surface
area. Figure 3 shows the optimised designs for Gr = 103 and Gr = 106
from below. The radial extent of the designs are emphasised from these
views. It is also seen that the branches for Gr = 106 are positioned to ensure
that the structure is open from below, with the branches forming vertical
                 Optimisation Gr
Analysis Gr      10^3        10^4        10^5        10^6
10^3             2.064962    2.066856    2.241125    2.362470
10^4             1.933460    1.880946    1.994538    2.111726
10^5             1.488278    1.450276    1.404511    1.439089
10^6             1.134963    1.121979    1.061452    1.025340
Table 5: Crosscheck objective function values for the designs shown in figure 2 (rows: analysis Gr; columns: optimisation Gr).
Gr      Time     Non-linear: avg. (max)    Linear: contin - avg. (max)
10^3    9:56     1.9 (2)                   7.6, 8.4, 7.8, 8.1, 18.4 - 10.1 (25)
10^4    10:25    2.0 (3)                   8.3, 8.6, 8.3, 8.6, 22.7 - 11.3 (29)
10^5    10:28    2.1 (10)                  8.4, 8.6, 8.7, 8.2, 15.7 - 9.9 (34)
10^6    10:35    2.1 (7)                   7.3, 7.4, 7.5, 8.0, 8.4 - 7.7 (14)
Table 6: Computational time, average non-linear iterations and linear iterations for the optimised designs of figure 2.
walls as discussed above.
Table 5 shows a crosscheck of the objective functions for the optimised
designs. It can be seen that all designs optimised for certain flow conditions
outperform the other designs for the specified Gr-number.
The optimisation was run for 500 design iterations for each optimised
design. Table 6 shows the computational time, average non-linear iterations
and linear iterations for the optimised designs of figure 2 using 1280 cores.
It can be seen that the computational time only weakly increases as the Gr-number is increased and remains close to 1.2 minutes per design iteration
on average. It is interesting to note, that Gr = 106 seems easier to solve
than the others as it exhibits lower average number of linear iterations than
all others, as well as a lower maximum number of non-linear iterations than
Gr = 105 . Furthermore, it can be seen that due to the Newton method
starting from a good initial vector, only 2 non-linear iterations are needed
for most of the design iterations independent of Gr-number.
6.4. High resolution design
The problem is now investigated with a computational mesh of 320 ×
640 × 320 elements yielding a total of 65, 536, 000 elements and 330, 246, 405
degrees of freedom for the quarter domain. The design domain consists of
27, 648, 000 elements and the filter radius is set to 2.5 times the element size,
i.e. in absolute measures half the size of before. The optimisation is run for
1000 design iterations and the computational time was 107 and 108 hours,
respectively, for Gr = 103 and Gr = 106 , using 2560 cores. This yields an
average of 6.4 and 6.5 minutes per design iteration, respectively.
Figure 4 shows the optimised design for Gr = 106 with the fine mesh
resolution and small length scale at various steps of the continuation strategy. The complexity of the design can be seen to be significantly higher than
for the design with a larger length scale, figure 2d. The subfigures are the
final iterations of the 1st, 2nd, 3rd and 5th (final) continuation steps. It
can be seen that the complexity of the design decreases during the optimisation process once the overall topology has been settled (iteration 400 and
onwards). This is due to the harder and harder penalisation of intermediate
design field values, with respect to conductivity, which are present at the
interface between solid and fluid. Therefore, smaller features are progressively removed as the surface area is more heavily penalised. The reason
for going to such high penalisation of the conductivity is to ensure the approximate collocation of the fluid and thermal boundaries. However, if one
starts directly with these physically-relevant parameters, particularly poor
local minima have been observed.
Figure 5 shows the final optimised designs for Gr = 103 and Gr = 106
with the fine mesh resolution and small length scale. The complexities of
both designs can be seen to be significantly higher than the previous, figures
2a and 2d, due to the smaller length scale. The increase in complexity with
increasing Grashof number is not as apparent at this resolution and length
scale, however, it is still argued that the number of primary and secondary
members follows the trend from the previous study, section 6.3. Also, the
shift from long conducting branches to shorter members, with significant
vertical surface area, is still obvious.
7. Discussion and conclusion
In this paper we have applied topology optimisation to the design of
three-dimensional heat sinks using a fully coupled non-linear thermofluidic
(a) Iteration 200
(b) Iteration 400
(c) Iteration 600
(d) Iteration 1000
Figure 4: Optimised designs for Gr = 106 at a mesh resolution of 320 ×
640 × 320. Please note that the freely hanging material in (a) is due to the fact that only elements below the threshold, γ = 0.05, are shown; the design is connected by intermediate design field values.
(a) Gr = 103
(b) Gr = 106
Figure 5: Optimised designs for Gr = 103 and Gr = 106 at a mesh resolution
of 320 × 640 × 320. The outline of the outer walls are shown in black.
model. In contrast to previous works, that considered simplified convection
models, the presented methodology is able to recover interesting physical
effects and insights, and avoids problems with the formation of non-physical
internal cavities, length-scale effects and artificial convection assumptions.
The implementation of the code in a PETSc framework suitable for large
scale parallel computations allows for running examples with more than 300
million degrees of freedom and almost 30 million design variables on regular
grids.
The example considered in the paper was primarily of academic nature.
Nevertheless, some interesting insight is obtained, showing that optimised
structures go from exhibiting simple branches that conduct heat towards the
cold outer boundaries for diffusion-dominated problems, towards complex
and compact multi-branched structures that maximize the convection heat
transfer for higher Grashof numbers.
Current and future work includes applications to real life problems (see
preliminary work in [25]), irregular meshes, multiple orientations, as well
as the extension to transient problems and detailed investigations into the
modelling accuracy of the boundary layer.
Computational Gr    Mesh               Figure    Actual Gr
10^3                160 × 320 × 160    2a        1.69 × 10^3
10^4                160 × 320 × 160    2b        1.55 × 10^4
10^5                160 × 320 × 160    2c        1.16 × 10^5
10^6                160 × 320 × 160    2d        8.60 × 10^5
10^3                320 × 640 × 320    5a        1.42 × 10^3
10^6                320 × 640 × 320    5b        7.16 × 10^5
Table A.7: Computational and actual Gr-numbers for the designs presented in this paper.
8. Acknowledgements
This work was funded by Villum Fonden through the NextTop project,
as well as by Innovation Fund Denmark through the HyperCool project. The
first author was also partially funded by the EU FP7-MC-IAPP programme
LaScISO.
Appendix A. Computational versus actual Grashof number
The Grashof numbers used above are all based on an a priori defined reference temperature difference. However, the actual temperature difference
observed between the heat source and the walls are not known a priori due
to the fact that the problem only has a single known temperature (Dirichlet
boundary condition) and a volumetric heat source. This is why the maximum temperature observed for the designs, see figure 2, is not equal to 1.
Therefore, after the optimisation, one can define an a posteriori Grashof
number based on the actual computed temperature difference. The a priori
version can be termed the computational Grashof number, the one that goes
into the dimensionless governing equations; and the a posteriori version can
be called the actual, or optimised, Grashof number for the given optimised
design. The actual Grashof number can be useful for experimental studies, as
well as for future comparisons. The actual Grashof numbers for the designs
shown in this paper are listed in Table A.7. The values for the fine meshes
are lower due to the lower thermal compliance, equivalent to the temperature
of the heat source, allowed by the smaller design length scale.
19
Appendix B. Stabilisation parameters
The stabilised weak form of the equations is given in [9]. The equations are stabilised using the pressure-stabilising Petrov-Galerkin (PSPG) [26, 27] and streamline-upwind Petrov-Galerkin (SUPG) methods [28]; for more information, please see [9].
The stabilisation parameters for SUPG and PSPG are defined as one and
the same using the following approximate min-function:
τ_SU = τ_PS = τ = ( 1/τ_1^2 + 1/τ_2^2 + 1/τ_3^2 )^{−1/2}    (B.1)
The limit factors are given by:
τ_1 = 4 h_e / ||u_e||_2    (B.2a)
τ_2 = h_e^2 / (4 Pr)    (B.2b)
τ_3 = 1/α_e    (B.2c)
where he is a characteristic element size (for cubes the element edge length)
and ue is the element vector of velocity degrees of freedom. The first limit
factor, τ1 , has been simplified based on evaluation at element centroids under the assumption of a single Gauss-point, yielding a constant stabilisation
factor within each element.
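A direct transcription of (B.1)–(B.2) into code (our own sketch; h_e is the element edge length, u_e the element velocity vector and alpha_e the element impermeability). The reciprocal squares are used so that alpha_e = 0 (pure fluid) and a zero velocity vector are handled without special cases.

    import numpy as np

    def tau_stabilisation(h_e, u_e, alpha_e, Pr):
        # Inverse-squared limit factors, cf. Eqs. (B.2a)-(B.2c).
        inv_tau1_sq = (np.linalg.norm(u_e) / (4.0 * h_e)) ** 2
        inv_tau2_sq = (4.0 * Pr / h_e ** 2) ** 2
        inv_tau3_sq = alpha_e ** 2
        # Approximate min-function, Eq. (B.1).
        return (inv_tau1_sq + inv_tau2_sq + inv_tau3_sq) ** -0.5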
In order to define a consistent Jacobian matrix, and thus a consistent
adjoint problem, the derivatives of the stabilisation factors are needed with
respect to the velocity field. This can be found to be:
∂τ/∂u_e = −τ [ 1 + (τ_1/τ_2)^2 + (τ_1/τ_3)^2 ]^{−1} (u_e^T u_e)^{−1} u_e^T    (B.3)
Furthermore, to define consistent design sensitivities, the derivatives of the stabilisation factors with respect to the design field are needed. These can be found to be:
∂τ/∂γ_e = −τ τ_3 [ (τ_3/τ_1)^2 + (τ_3/τ_2)^2 + 1 ]^{−1} ∂α_e/∂γ_e    (B.4)
Including the derivatives in the definition of consistent adjoint sensitivities
has been observed to provide a one to two order of magnitude improvement
in accuracy of sensitivities with respect to a finite difference approximation.
Significant differences have not been observed in optimisation behaviour or
in final designs. However, this cannot be guaranteed in general and it is
therefore best to ensure consistency, as also highlighted by the similar issue
of frozen turbulence [29].
Similar definitions and derivations are carried out for the thermal SUPG
stabilisation.
References
[1] M. P. Bendsøe, O. Sigmund, Topology Optimization: Theory, Methods
and Applications, Springer, 2003, ISBN: 3-540-42992-1.
[2] L. Yin, G. Ananthasuresh, A novel topology design scheme for
the multi-physics problems of electro-thermally actuated compliant micromechanisms, Sensors and Actuators 97-98 (2002) 599–609.
doi:10.1016/S0924-4247(01)00853-6.
[3] T. Bruns,
Topology optimization of convection-dominated,
steady-state heat transfer problems,
International Journal
of Heat and Mass Transfer 50 (15-16) (2007) 2859–2873.
doi:10.1016/j.ijheatmasstransfer.2007.01.039.
[4] A. Iga, S. Nishiwaki, K. Izui, M. Yoshimura, Topology optimization for thermal conductors considering design-dependent effects, including heat conduction and convection, International Journal of Heat and Mass Transfer 52 (11-12) (2009) 2721–2732.
doi:10.1016/j.ijheatmasstransfer.2008.12.013.
[5] E. Dede, S. N. Joshi, F. Zhou, Topology optimization, additive layer manufacturing, and experimental testing of an air-cooled
heat sink, ASME Journal of Mechanical Design (available online).
doi:10.1115/1.4030989.
[6] T. Borrvall, J. Petersson, Topology optimization of fluids in Stokes flow,
International Journal for Numerical Methods in Fluids 41 (1) (2003) 77–
107. doi:10.1002/fld.426.
21
[7] A. Gersborg-Hansen, O. Sigmund, R. Haber, Topology optimization of
channel flow problems, Structural Multidisciplinary Optimization 30 (3)
(2005) 181–192. doi:10.1007/s00158-004-0508-7.
[8] C. S. Andreasen, A. R. Gersborg, O. Sigmund, Topology optimization
of microfluidic mixers, International Journal for Numerical Methods in
Fluids 61 (5) (2009) 498–513. doi:10.1002/fld.1964.
[9] J. Alexandersen, N. Aage, C. S. Andreasen, O. Sigmund, Topology optimisation of natural convection problems, International Journal for Numerical Methods in Fluids 76 (10) (2014) 699–721.
doi:10.1002/fld.3954.
[10] P. Coffin, K. Maute, A level-set method for steady-state and transient
natural convection problems, submitted.
[11] E. Andreassen, A. Clausen, M. Schevenels, B. S. Lazarov, O. Sigmund, Efficient topology optimization in MATLAB using 88 lines of
code, Structural and Multidisciplinary Optimization 43 (2011) 1–16.
doi:10.1007/s00158-010-0594-7.
[12] O. Amir, N. Aage, B. S. Lazarov, On multigrid-CG for efficient topology
optimization, Structural and Multidisciplinary Optimization (2013) 1–
15doi:10.1007/s00158-013-1015-5.
[13] J. Alexandersen, B. S. Lazarov, Topology optimisation of manufacturable microstructural details without length scale separation using a spectral coarse basis preconditioner, Computer Methods in Applied Mechanics and Engineering 290 (2015) 156–182.
doi:10.1016/j.cma.2015.02.028.
[14] T. H. Nguyen, G. H. Paulino, J. Song, C. H. Le, Improving multiresolution topology optimization via multiple discretizations Tam, International Journal for Numerical Methods in Engineering 92 (June) (2012)
507–530. doi:10.1002/nme.
[15] T. Borrvall, J. Petersson, Large-scale topology optimization in 3D using
parallel computing, Computer Methods in Applied Mechanics and Engineering 190 (2001) 6201–6229. doi:10.1016/S0045-7825(01)00216-X.
[16] A. Evgrafov, C. J. Rupp, K. Maute, M. L. Dunn, Large-scale parallel topology optimization using a dual-primal substructuring solver,
Structural and Multidisciplinary Optimization 36 (2008) 329–345.
doi:10.1007/s00158-007-0190-7.
[17] N. Aage, B. Lazarov, Parallel framework for topology optimization using
the method of moving asymptotes, Structural and Multidisciplinary Optimization 47 (4) (2013) 493–505. doi:10.1007/s00158-012-0869-2.
[18] N. Aage, E. Andreassen, B. S. Lazarov, Topology optimization using
PETSc: An easy-to-use, fully parallel, open source topology optimization framework, Structural and Multidisciplinary Optimization 51 (3)
(2014) 565–572. doi:10.1007/s00158-014-1157-0.
[19] S. Balay, S. Abhyankar, M. F. Adams, J. Brown, P. Brune, K. Buschelman, V. Eijkhout, W. D. Gropp, D. Kaushik, M. G. Knepley, L. C.
McInnes, K. Rupp, B. F. Smith, H. Zhang, PETSc Users Manual, Tech.
Rep. ANL-95/11 - Revision 3.5, Argonne National Laboratory (2014).
URL http://www.mcs.anl.gov/petsc
[20] B. S. Lazarov, O. Sigmund, Filters in topology optimization based on
Helmholtz-type differential equations, International Journal for Numerical Methods in Engineering 86 (2011) 765–781. doi:10.1002/nme.3072.
[21] K. Svanberg, The method of moving asymptotes - a new method for
structural optimization, International Journal for Numerical Methods
in Engineering 24 (2) (1987) 359–373. doi:10.1002/nme.1620240207.
[22] M. Stolpe, K. Svanberg, On the trajectories of penalization methods
for topology optimization, Structural Multidisciplinary Optimization 21
(2001) 128–139.
[23] S. Rojas-Labanda, M. Stolpe, Benchmarking optimization solvers for
structural topology optimization, Structural and Multidisciplinary Optimization (available online). doi:10.1007/s00158-015-1250-z.
[24] P. S. Vassilevski, Multilevel Block Factorization Preconditioners,
Springer, 2007. doi:10.1007/978-0-387-71564-3.
[25] J. Alexandersen, O. Sigmund, N. Aage, Topology optimisation of passive coolers for light-emitting diode lamps,
in: 11th World Congress on Structural and Multidisciplinary Optimization, 2015. doi:10.13140/RG.2.1.3906.5446.
URL www.aeromech.usyd.edu.au/WCSMO2015/papers/1264_paper.pdf
[26] T. J. Hughes, L. P. Franca, M. Balestra, A new finite element formulation for computational fluid dynamics V - circumventing the Babuska-Brezzi condition: a stable Petrov-Galerkin formulation of the Stokes problem accommodating equal-order interpolations, Computer
Methods in Applied Mechanics and Engineering 59 (1) (1986) 85–99.
doi:10.1016/0045-7825(86)90025-3.
[27] T. E. Tezduyar, S. Mittal, S. Ray, R. Shih, Incompressible flow computations with stabilized bilinear and linear equalorder-interpolation velocity-pressure elements, Computer Methods
in Applied Mechanics and Engineering 95 (2) (1992) 221–242.
doi:10.1016/0045-7825(92)90141-6.
[28] A. N. Brooks, T. J. Hughes, Streamline Upwind/Petrov-Galerkin
formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations, Computer Methods in Applied Mechanics and Engineering 32 (1-3) (1982) 199–259.
doi:10.1016/0045-7825(82)90071-8.
[29] A. S. Zymaris, D. I. Papadimitriou, K. C. Giannakoglou, C. Othmer,
Continuous adjoint approach to the spalart-allmaras turbulence model
for incompressible flows, Computers and Fluids 38 (2009) 1528–1538.
doi:doi:10.1016/j.compfluid.2008.12.006.
24
| 5 |
A note on the Survivable Network Design Problem
Chandra Chekuri∗
Thapanapong Rukkanchanunt†
arXiv:1608.02515v1 [] 8 Aug 2016
August 9, 2016
Abstract
In this note we consider the survivable network design problem (SNDP) in undirected graphs. We make
two contributions. The first is a new counting argument in the iterated rounding based 2-approximation
for edge-connectivity SNDP (EC-SNDP) originally due to Jain [Jai01]. The second is to make some additional connections between hypergraphic version of SNDP (Hypergraph-SNDP) introduced in [ZNI03] and
edge and node-weighted versions of EC-SNDP and element-connectivity SNDP (Elem-SNDP). One useful
consequence of this connection is a 2-approximation for Elem-SNDP that avoids the use of set-pair based
relaxation and analysis.
1
Introduction
The survivable network design problem (SNDP) is a fundamental problem in network design and has been
instrumental in the development of several algorithmic techniques. The input to SNDP is a graph G = (V, E)
and an integer requirement r(uv) between each unordered pair of nodes uv. The goal is to find a minimum-cost
subgraph H of G such that for each pair uv, the connectivity in H between u and v is at least r(uv). We
use rmax to denote maxuv r(uv), the maximum requirement. We restrict attention to undirected graphs in this
paper. There are several variants depending on whether the costs are on edges or on nodes, and whether the
connectivity requirement is edge, element or node connectivity. Unless otherwise specified we will assume
that G has edge-weights c : E → R+ . We refer to the three variants of interest based on edge, element and
vertex connectivity as EC-SNDP, Elem-SNDP and VC-SNDP. All of them are NP-Hard and APX-hard to
approximate even in very special cases.
The seminal work of Jain [Jai01] obtained a 2-approximation for EC-SNDP via the technique of iterated
rounding that was introduced in the same paper. A 2-approximation for Elem-SNDP was obtained, also via
iterated rounding, in [FJW06, CVV06]. For VC-SNDP the current best approximation bound is O(rmax^3 log |V|)
[CK12]; it is also known from hardness results in [CCK08] that the approximation bound for VC-SNDP must
depend polynomially on rmax under standard hardness assumptions.
In this note we revisit the iterated rounding framework that yields a 2-approximation for EC-SNDP and
Elem-SNDP. The framework is based on arguing that for a class of covering problems, a basic feasible solution
to an LP relaxation for the covering problem has a variable of value at least 1/2. This variable is then rounded
up to 1 and the residual problem is solved inductively. A key fact needed to make this iterative approach
work is that the residual problem lies in the same class of covering problems. This is ensured by working
with the class of skew-supermodular (also called weakly-supermodular) requirement functions which capture
∗
Dept. of Computer Science, Univ. of Illinois at Urbana-Champaign, Urbana, IL, USA. [email protected]. Supported
in part by NSF grants CCF-1016684, CCF-1319376 and CCF-1526799.
†
Chiang Mai University, Chiang Mai, Thailand. [email protected]. The author was an undergraduate student at
Univ. of Illinois when he worked on results of this paper.
EC-SNDP as a special case. The proof of existence of an edge with large value in a basic feasible solution for
this class of requirement functions has two components. The first is to establish that a basic feasible solution is
characterized by a laminar family of sets in the case of EC-SNDP (and set pairs in the case of Elem-SNDP). The
second is a counting argument that uses this characterization to obtain a contradiction if no variable is at least
1/2. The counting argument of Jain [Jai01] has been simplified and streamlined in subsequent work via fractional
token arguments [BKN09, NRS10]. These arguments have been applied for several related problems for which
iterated rounding has been shown to be a powerful technique; see [LRS11]. The fractional token argument leads
to short and slick proofs. At the same time we feel that it is hard to see the intuition behind the argument. Partly
motivated by pedagogical reasons, in this note, we revisit the counting argument for EC-SNDP and provide a
different counting argument along with a longer explanation. The goal is to give a more combinatorial flavor to
the argument. We give this argument in Section 2.
The second part of the note is on Elem-SNDP. A 2-approximation for this problem has been derived by
generalizing the iterated rounding framework to a set-pair based relaxation [FJW06, CVV06]. The set-pair
based relaxation and arguments add substantial notation to the proofs although one can see that there are strong
similarities to the basic argument used in EC-SNDP. The notational overhead limits the ability to teach and
understand the proof for Elem-SNDP. Interestingly, in a little noticed paper, Zhao, Nagamochi and Ibaraki
[ZNI03] defined a generalization of EC-SNDP to hypergraphs which we refer to as Hypergraph-SNDP. They
observed that Elem-SNDP can be easily reduced to Hypergraph-SNDP in which the only non-zero weight
hyperedges are of size 2 (regular edges in a graph). The advantage of this reduction is that one can derive a
2-approximation for Elem-SNDP by essentially appealing to the same argument as for EC-SNDP with a few
minor details. We believe that this is a useful perspective. Second, there is a simple and well-known connection
between node-weighted network design in graphs and network design problems on hypergraphs. We explicitly
point these connections which allows us to derive some results for Hypergraph-SNDP. Section 3 describes
these connections and results.
This note assumes that the reader has some basic familiarity with previous literature on SNDP and iterated
rounding.
2
Iterated rounding for EC-SNDP
The 2-approximation for EC-SNDP is based on casting it as a special case of covering a skew-supermodular
requirement function by a graph. We set up the background now. Given a finite ground set V, an integer-valued set function f : 2^V → Z is skew-supermodular if for all A, B ⊆ V at least one of the following holds:
f (A) + f (B) ≤ f (A ∩ B) + f (A ∪ B)
f (A) + f (B) ≤ f (A − B) + f (B − A)
Given an edge-weighted graph G = (V, E) and a skew-supermodular requirement function f : 2V → Z, we
can consider the problem of finding the minimum-cost subgraph H = (V, F ) of G such that H covers f ; that
is, for all S ⊆ V , |δF (S)| ≥ f (S). Here δF (S) is the set of all edges in F with one endpoint in S and the other
outside. Given an instance of EC-SNDP with input graph G = (V, E) and edge-connectivity requirements
r(uv) for each pair uv, we can model it by setting f(S) = max_{u∈S, v∉S} r(uv). It can be verified that f is
skew-supermodular. The important aspect of skew-supermodular functions that make them well-suited for the
iterated rounding approach is the following.
Lemma 2.1 ([Jai01]). Let G = (V, E) be a graph, f : 2^V → Z be a skew-supermodular requirement function, and F ⊆ E be a subset of edges. The residual requirement function g : 2^V → Z defined by g(S) = f(S) − |δ_F(S)| for each S ⊆ V is also skew-supermodular.
Although the proof is standard by now we will state it in a more general way.
Lemma 2.2. Let f : 2V → Z be a skew-supermodular requirement function and let h : 2V → Z+ be a
symmetric submodular function. Then g = f − h is a skew-supermodular function.
Proof: Since h is submodular we have that for all A, B ⊆ V ,
h(A) + h(B) ≥ h(A ∪ B) + h(A ∩ B).
Since h is also symmetric it is posi-modular which means that for all A, B ⊆ V ,
h(A) + h(B) ≥ h(A − B) + h(B − A).
Note that h satisfies both properties for each A, B. It is now easy to check that f − h is skew-supermodular.
Lemma 2.1 follows from Lemma 2.2 by noting that the cut-capacity function |δF | : 2V → Z+ is submodular
and symmetric in undirected graphs. We also note that the same property holds for the more general setting
when G is a hypergraph.
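For concreteness, the following small Python check (an illustration of ours, not part of the paper; the toy graph and requirements are made up) builds the EC-SNDP requirement function f(S) = max_{u∈S, v∉S} r(uv) on a four-vertex ground set and verifies numerically that both f and the residual g = f − |δ_F| of Lemma 2.1 satisfy the skew-supermodular condition.

from itertools import combinations

V = [0, 1, 2, 3]
r = {(0, 2): 2, (1, 3): 1}           # toy connectivity requirements
F = [(0, 1), (2, 3)]                 # a partial solution

def f(S):
    # EC-SNDP requirement: largest requirement separated by the cut (S, V - S)
    return max((req for (u, v), req in r.items() if (u in S) != (v in S)), default=0)

def cut(S, edges):
    return sum(1 for (u, v) in edges if (u in S) != (v in S))

def g(S):                            # residual requirement of Lemma 2.1
    return f(S) - cut(S, F)

def skew_supermodular(h, V):
    subsets = [set(c) for k in range(len(V) + 1) for c in combinations(V, k)]
    for A in subsets:
        for B in subsets:
            if not (h(A) + h(B) <= h(A & B) + h(A | B)
                    or h(A) + h(B) <= h(A - B) + h(B - A)):
                return False
    return True

print(skew_supermodular(f, V), skew_supermodular(g, V))   # expected: True True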
The standard LP relaxation for covering a function by a graph is described below where there is variable
xe ∈ [0, 1] for each edge e ∈ E.
min Σ_{e∈E} c_e x_e
subject to  Σ_{e∈δ(S)} x_e ≥ f(S)   for all S ⊂ V,
            x_e ∈ [0, 1]            for all e ∈ E.
The technical theorem that underlies the 2-approximation for EC-SNDP is the following.
Theorem 2.3 ([Jai01]). Let f be a non-trivial1 skew-supermodular function. In any basic feasible solution x
to the LP relaxation of covering f by a graph G there is an edge e such that xe ≥ 1/2.
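The following Python sketch (an illustration of the iterated rounding framework, not the paper's code) runs the rounding loop on a toy instance that is small enough to write the LP with one constraint per cut explicitly. scipy's LP solver stands in for a proper basic-solution solver, and the instance, costs and requirements are made up. In each round, edges with x_e ≥ 1/2 in the LP solution are rounded up and the residual requirement is updated.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]     # K4
cost = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0])
r = {(0, 2): 2, (1, 3): 2}                               # pairwise requirements

def cuts(V):
    for k in range(1, len(V)):
        for S in combinations(V, k):
            yield set(S)

def f(S):
    return max((req for (u, v), req in r.items() if (u in S) != (v in S)), default=0)

def crossing(S):
    return [i for i, (u, v) in enumerate(E) if (u in S) != (v in S)]

picked = [0] * len(E)                                    # integral partial solution
while True:
    free = [i for i in range(len(E)) if picked[i] == 0]
    A_ub, b_ub = [], []
    for S in cuts(V):
        g = f(S) - sum(picked[i] for i in crossing(S))   # residual requirement of S
        if g > 0:
            row = np.zeros(len(free))
            for j, i in enumerate(free):
                if i in crossing(S):
                    row[j] = -1.0
            A_ub.append(row); b_ub.append(-float(g))
    if not A_ub:
        break                                            # all requirements satisfied
    res = linprog(cost[free], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * len(free), method="highs-ds")
    x = res.x
    rounded = False
    for j, i in enumerate(free):
        if x[j] >= 0.5 - 1e-9:                           # Theorem 2.3: such an edge exists
            picked[i] = 1
            rounded = True
    if not rounded:                                      # numerical safety net only
        picked[free[int(np.argmax(x))]] = 1

print("picked edges:", [E[i] for i in range(len(E)) if picked[i]])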
To prove the preceding theorem it suffices to focus on basic feasible solutions x that are fully fractional;
that is, xe ∈ (0, 1) for all e. For a set of edges F ⊆ E let χ(F) ∈ {0, 1}^{|E|} denote the characteristic vector of F; that is, an |E|-dimensional vector that has a 1 in each position corresponding to an edge e ∈ F and a 0 in
all other positions. Theorem 2.3 is built upon the following characterization of basic feasible solutions and is
shown via uncrossing arguments.
Lemma 2.4 ([Jai01]). Let x be a fully-fractional basic feasible solution to the LP relaxation. Then there is
a laminar family of vertex subsets L such that x is the unique solution to the system of equalities
x(δ(S)) = f (S)
S ∈ L.
In particular this also implies that |L| = |E| and that the vectors χ(δ(S)), S ∈ L are linearly independent.
The second part of the proof of Theorem 2.3 is a counting argument that relies on the characterization in
Lemma 2.4. The rest of this section describes a counting argument which we believe is slightly different from
the previous ones in terms of the main invariant. The goal is to derive it organically from simpler cases.
With every laminar family we can associate a rooted forest. We use terminology for rooted forests such as
leaves and roots as well as set terminology. We refer to a set C ∈ L as a child of a set S if C ⊂ S and there is
no S′ ∈ L such that C ⊂ S′ ⊂ S; if C is the child of S then S is the parent of C. Maximal sets of L correspond
to the roots of the forest associated with L.
1
We use the term non-trivial to indicate that there is at least one set S ⊂ V such that f (S) > 0.
2.1
Counting Argument
The proof is via contradiction where we assume that 0 < xe < 1/2 for each e ∈ E. We call the two nodes incident to an edge the endpoints of the edge. We say that an endpoint u belongs to a set S ∈ L if S is the minimal set from L that contains u.
We consider the simplest setting where L is a collection of disjoint sets, in other words, all sets are maximal.
In this case the counting argument is easy. Let m = |E| = |L|. For each S ∈ L, f (S) ≥ 1 and x(δ(S)) = f (S).
If we assume that xe < 1/2 for each e, we have |δ(S)| ≥ 3, which implies that each S contains at least 3 distinct
endpoints. Thus, the m disjoint sets require a total of 3m endpoints. However the total number of endpoints is
at most 2m since there are m edges, leading to a contradiction.
Now we consider a second setting where the forest associated with L has k leaves and h internal nodes but
each internal node has at least two children. In this case, following Jain, we can easily prove a weaker statement
that xe ≥ 1/3 for some edge e. If not, then each leaf set S must have four edges leaving it and hence the total
number of endpoints must be at least 4k. However, if each internal node has at least two children, we have
h < k and since h + k = m we have k > m/2. This implies that there must be at least 4k > 2m endpoints
since the leaf sets are disjoint. But m edges can have at most 2m endpoints. Our assumption on each internal
node having at least two children is obviously a restriction. So far we have not used the fact that the vectors
χ(δ(S)), S ∈ L are linearly independent. We can handle the general case to prove xe ≥ 1/3 by using the
following lemma.
Lemma 2.5 ([Jai01]). Suppose C is the unique child of S. Then there must be at least two endpoints that belong to S.
Proof: If there is no endpoint that belongs to S then δ(S) = δ(C) but then χ(δ(S)) and χ(δ(C)) are linearly
dependent. Suppose there is exactly one endpoint that belongs to S and let it be the endpoint of edge e. But then
x(δ(S)) = x(δ(C)) + xe or x(δ(S)) = x(δ(C)) − xe . Both cases are not possible because x(δ(S)) = f (S)
and x(δ(C)) = f (C) where f (S) and f (C) are positive integers while xe ∈ (0, 1). Thus there are at least two
endpoints that belong to S.
Using the preceding lemma we prove that xe ≥ 1/3 for some edge e. Let k be the number of leaves in
L and h be the number of internal nodes with at least two children and let ℓ be the number of internal nodes with exactly one child. We again have h < k and we also have k + h + ℓ = m. Each leaf has at least four endpoints. Each internal node with exactly one child has at least two endpoints, which means the total number of endpoints is at least 4k + 2ℓ. But 4k + 2ℓ = 2k + 2k + 2ℓ > 2k + 2h + 2ℓ = 2m and there are only 2m
endpoints for m edges. In other words, we can ignore the internal nodes with exactly one child since there are
two endpoints in such a node/set and we can effectively charge one edge to such a node.
We now come to the more delicate argument to prove the tight bound that xe ≥ 1/2 for some edge e. Our
main contribution is to show an invariant that effectively reduces the argument to the case where we can assume
that L is a collection of leaves. This is encapsulated in the claim below which requires some notation. Let α(S)
be the number of sets of L contained in S including S itself. Let β(S) be the number of edges whose both
endpoints lie inside S. Recall that f (S) is the requirement of S.
Claim. For all S ∈ L, f (S) ≥ α(S) − β(S).
Assuming that the claim is true we can do an easy counting argument. Let R_1, R_2, . . . , R_h be the maximal sets in L (the roots of the forest). Note that Σ_{i=1}^{h} α(R_i) = |L| = m. Applying the claim to each R_i and summing up,
Σ_{i=1}^{h} f(R_i) ≥ Σ_{i=1}^{h} α(R_i) − Σ_{i=1}^{h} β(R_i) = m − Σ_{i=1}^{h} β(R_i).
Figure 1: S is an internal node with several children C_1, C_2, C_3. Different types of edges play a role: p refers to the parent set S, c refers to a child set and o refers to outside (edge classes Ecc, Ecp, Eco and Epo).
Note that Σ_{i=1}^{h} f(R_i) is the total requirement of the maximal sets, and m − Σ_{i=1}^{h} β(R_i) is the total number of edges that cross the sets R_1, . . . , R_h. Let E′ be the set of edges crossing these maximal sets. Now we are back to the setting with h disjoint sets and E′ edges with Σ_{i=1}^{h} f(R_i) ≥ |E′|. This easily leads to a contradiction as before if we assume that x_e < 1/2 for all e ∈ E′. Formally, each set R_i requires more than 2f(R_i) edges crossing it from E′ and therefore R_i contains at least 2f(R_i) + 1 endpoints of edges from E′. Since R_1, . . . , R_h are disjoint, the total number of endpoints is at least 2 Σ_i f(R_i) + h, which is strictly more than 2|E′|.
Thus, it remains to prove the claim which we do by inductively starting at the leaves of the forest for L.
Case 1: S is a leaf node. We have f (S) ≥ 1 while α(S) = 1 and β(S) = 0 which verifies the claim.
Case 2: S is an internal node with k children C_1, C_2, . . . , C_k. See Fig. 1 for the different types of edges that are relevant. Ecc is the set of edges with endpoints in two different children of S. Ecp is the set of edges that cross exactly one child but do not cross S. Epo is the set of edges that cross S but do not cross any of the children. Eco is the set of edges that cross both a child and S. This notation is borrowed from [WS11].
Let γ(S) be the number of edges whose both endpoints belong to S but not to any child of S. Note that
γ(S) = |Ecc | + |Ecp |.
Then,
β(S) = γ(S) + Σ_{i=1}^{k} β(C_i) ≥ γ(S) + Σ_{i=1}^{k} (α(C_i) − f(C_i)) = γ(S) + α(S) − 1 − Σ_{i=1}^{k} f(C_i).   (1)
(1) follows by applying the inductive hypothesis to each child. From the preceding inequality, to prove that
β(S) ≥ α(S) − f (S) (the claim for S), it suffices to show the following inequality.
γ(S) ≥ Σ_{i=1}^{k} f(C_i) − f(S) + 1.   (2)
The right hand side of the above inequality can be written as:
Σ_{i=1}^{k} f(C_i) − f(S) + 1 = Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e − Σ_{e∈Epo} x_e + 1.   (3)
We consider two subcases.
Case 2.1: γ(S) = 0. This implies that Ecc and Ecp are empty. Since χ(δ(S)) is linearly independent from χ(δ(C_1)), . . . , χ(δ(C_k)), we must have that Epo is not empty and hence Σ_{e∈Epo} x_e > 0. Therefore, in this case,
Σ_{i=1}^{k} f(C_i) − f(S) + 1 = Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e − Σ_{e∈Epo} x_e + 1 = − Σ_{e∈Epo} x_e + 1 < 1.
Since the left hand side is an integer, it follows that Σ_{i=1}^{k} f(C_i) − f(S) + 1 ≤ 0 = γ(S).
Case 2.2: γ(S) ≥ 1. Recall that γ(S) = |Ecc| + |Ecp|. We have
Σ_{i=1}^{k} f(C_i) − f(S) + 1 = Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e − Σ_{e∈Epo} x_e + 1 ≤ Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e + 1.
By our assumption that x_e < 1/2 for each e, we have Σ_{e∈Ecc} 2x_e < |Ecc| if |Ecc| > 0, and similarly Σ_{e∈Ecp} x_e < |Ecp|/2 if |Ecp| > 0. Since γ(S) = |Ecc| + |Ecp| ≥ 1 we conclude that
Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e < γ(S).
Putting it together we have
Σ_{i=1}^{k} f(C_i) − f(S) + 1 ≤ Σ_{e∈Ecc} 2x_e + Σ_{e∈Ecp} x_e + 1 < γ(S) + 1,
and since the left hand side is an integer, Σ_{i=1}^{k} f(C_i) − f(S) + 1 ≤ γ(S), as desired.
This completes the proof of the claim.
3
Connections between Hypergraph-SNDP, EC-SNDP and Elem-SNDP
Zhao, Nagamochi and Ibaraki [ZNI03] considered the extension of EC-SNDP to hypergraphs. In a hypergraph G = (V, E) each edge e ∈ E is a subset of V. The degree d of a hypergraph is max_{e∈E} |e|. Graphs are hypergraphs of degree 2. Given a set of hyperedges F ⊆ E and a vertex subset S ⊂ V, we use δ_F(S) to denote the set of all hyperedges in F that have at least one endpoint in S and at least one endpoint in V \ S. It is well-known that |δ_F| : 2^V → Z+ is a symmetric submodular function.
Hypergraph-SNDP is defined as follows. The input consists of an edge-weighted hypergraph G = (V, E)
and integer requirements r(uv) for each vertex pair uv. The goal is to find a minimum-cost hypergraph H =
(V, E′) with E′ ⊆ E such that for all uv and all S that separate u, v (that is, |S ∩ {u, v}| = 1), we have |δ_{E′}(S)| ≥ r(uv). Hypergraph-SNDP is a special case of covering a skew-supermodular requirement function
by a hypergraph. It is clear that Hypergraph-SNDP generalizes EC-SNDP. Interestingly, [ZNI03] observed,
via a simple reduction, that Hypergraph-SNDP generalizes Elem-SNDP as well. We now describe Elem-SNDP
formally and briefly sketch the reduction from [ZNI03], and subsequently describe some implications of this
connection.
In Elem-SNDP the input consists of an undirected edge-weighted graph G = (V, E) with V partitioned into
terminals T and non-terminals N. The "elements" are the edges and non-terminals, N ∪ E. For each pair uv of
terminals there is an integer requirement r(uv), and the goal is to find a min-cost subgraph H of G such that for
each pair uv of terminals there are r(uv) element-disjoint paths from u to v in H. Note that element-disjoint
paths can intersect in terminals. The notion of element-connectivity and Elem-SNDP have been useful in
several settings in generalizing edge-connectivity problems while having some features of vertex connectivity.
In particular, the current approximation for VC-SNDP relies on Elem-SNDP [CK12].
The reduction of [ZNI03] from Elem-SNDP to Hypergraph-SNDP is quite simple. It basically replaces
each non-terminal u ∈ N by a hyperedge. The reduction is depicted in the figure below.
Figure 2: Reducing Elem-SNDP to Hypergraph-SNDP. Each non-terminal v is replaced by a hyperedge ev
by introducing dummy vertices on each edge incident to v. The original edges retain their cost while the new
hyperedges are assigned a cost of zero.
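The reduction is mechanical enough to state as code. The sketch below is our own illustration (the data structures are arbitrary choices, not from [ZNI03]): each original edge becomes a, possibly re-wired, cost-carrying hyperedge of size two, and each non-terminal v becomes a zero-cost hyperedge e_v over the dummy vertices placed on its incident edges.

def elem_to_hypergraph(V, E, cost, nonterminals):
    # V: vertices, E: list of edges (u, v), cost: dict edge -> cost,
    # nonterminals: the set N. Returns (vertices, hyperedges, hypercost).
    vertices = set(V) - set(nonterminals)          # terminals are kept as they are
    hyperedges, hypercost = [], []
    dummy = {}                                     # (edge, endpoint) -> dummy vertex
    for (u, v) in E:
        ends = []
        for w in (u, v):
            if w in nonterminals:
                d = ("dummy", (u, v), w)           # dummy vertex on edge (u, v) near w
                dummy[((u, v), w)] = d
                vertices.add(d)
                ends.append(d)
            else:
                ends.append(w)
        hyperedges.append(tuple(ends))             # original edge keeps its cost
        hypercost.append(cost[(u, v)])
    for w in nonterminals:                         # hyperedge e_w replacing non-terminal w
        e_w = tuple(dummy[((u, v), w)] for (u, v) in E if w in (u, v))
        hyperedges.append(e_w)
        hypercost.append(0.0)                      # new hyperedges get zero cost
    return vertices, hyperedges, hypercost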
The reduction shows that an instance of Elem-SNDP on G can be reduced to an instance of Hypergraph-SNDP
on a hypergraph G′ where the only hyperedges with non-zero weights in G′ are the edges of the graph G. This
motivates the definition of d+ (G) which is the maximum degree of a hyperedge in G that has non-zero cost.
Thus Elem-SNDP reduces to instances of Hypergraph-SNDP with d+ = 2. In fact we can see that the same
reduction proves the following.
Proposition 3.1. Node-weighted Elem-SNDP in which weights are only on non-terminals can be reduced in
an approximation preserving fashion to Hypergraph-SNDP. In this reduction d+ of the resulting instance of
Hypergraph-SNDP is equal to ∆, the maximum degree of a non-terminal with non-zero weight in the instance
of node-weighted Elem-SNDP.
Reducing Elem-SNDP to problem of covering skew-supermodular functions by graphs: We saw that
an instance of Elem-SNDP on a graph H can be reduced to an instance of Hypergraph-SNDP on a graph G
where d+ (G) = 2. Hypergraph-SNDP on G = (V, E) corresponds to covering a skew-supermodular function
f : 2^V → Z by G. Let E = F ⊎ E′ where E′ is the set of all hyperedges in G with degree more than 2; thus F is the set of all hyperedges of degree 2 and hence (V, F) is a graph. Since each edge in E′ has zero cost we can include all of them in our solution, and work with the residual requirement function g = f − |δ_{E′}|. From Lemma 2.2 and the fact that the cut-capacity function of a hypergraph is also symmetric and submodular, g is a skew-supermodular function. Thus covering f by a min-cost sub-hypergraph of G can be reduced to covering g by a min-cost subgraph of G′ = (V, F). We have already seen a 2-approximation for this in the context
of EC-SNDP. The only issue is whether there is an efficient separation oracle for solving the LP for covering
g by G′. This is a relatively easy exercise using flow arguments and we omit the details. The main point we wish
to make is that this reduction avoids working with set-pairs that are typically used for Elem-SNDP. It is quite
conceivable that the authors of [ZNI03] were aware of this simple connection but it does not seem to have been
made explicitly in their paper or in [ZNI02].
Approximating Hypergraph-SNDP: [ZNI03] derived a d+ · H_{rmax} approximation for Hypergraph-SNDP, where H_k = 1 + 1/2 + · · · + 1/k is the k-th harmonic number. They obtain this bound via the augmentation
framework for network design [GGP+ 94] and a primal-dual algorithm in each stage. In [ZNI02] they also
observe that Hypergraph-SNDP can be reduced to Elem-SNDP via the following simple reduction. Given a
hypergraph G = (V, E) let H = (V ∪ N, E) be the standard bipartite graph representation of G where for each
hyperedge e ∈ E there is a node ze ∈ N ; ze is connected by edges in H to each vertex a ∈ e. Let r(uv) be the
hyperedge connectivity requirement between a pair of vertices uv in the original instance of Hypergraph-SNDP.
In H we label V as terminals and N as non-terminals. For any pair of vertices uv with u, v ∈ V , it is not hard
to verify that the element-connectivity between u and v in H is the same as the hyperedge connectivity in G. See [ZNI02] for details. It remains to model the costs such that an approximation algorithm for element-connectivity in H can be translated into an approximation algorithm for hyperedge connectivity in G. This is straightforward. We simply assign costs to non-terminals in H; that is, each node ze ∈ N corresponding to a
hyperedge e ∈ E is assigned a cost equal to ce . We obtain the following easy corollary.
Proposition 3.2. Hypergraph-SNDP can be reduced to node-weighted Elem-SNDP in an approximation-preserving fashion.
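The reverse direction is equally short. The sketch below is illustrative code of ours (not from [ZNI02]): every hyperedge e is replaced by a non-terminal node z_e joined to each vertex of e, and the hyperedge cost becomes the weight of z_e, giving an instance of node-weighted Elem-SNDP.

def hypergraph_to_bipartite(V, hyperedges, cost):
    # Returns (terminals, nonterminals, edges, node_weight).
    terminals = list(V)
    nonterminals, edges, node_weight = [], [], {}
    for idx, (e, c) in enumerate(zip(hyperedges, cost)):
        z = ("z", idx)                  # node z_e representing hyperedge e
        nonterminals.append(z)
        node_weight[z] = c              # the hyperedge cost is placed on z_e
        for a in e:
            edges.append((z, a))        # z_e is joined to every vertex of e
    return terminals, nonterminals, edges, node_weight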
[ZNI02] do not explicitly mention the above but note that one can reduce Hypergraph-SNDP to (edge-weighted) Elem-SNDP as follows. Instead of placing a weight of ce on the node ze corresponding to the hyperedge e ∈ E, they place a weight of ce/2 on each edge incident to ze. This transformation loses an approximation ratio of d+(G)/2. From this they conclude that a β-approximation for Elem-SNDP implies a d+β/2-approximation for Hypergraph-SNDP; via the 2-approximation for Elem-SNDP we obtain a d+ approximation for Hypergraph-SNDP. One can view this as reducing a node-weighted problem to an edge-weighted problem by transferring the cost on the node to all the edges incident to the node. Since a non-terminal can only be useful if it has at least two edges incident to it, in this particular case, we can put a weight of half the node's cost on the edges incident to the node. A natural question here is whether one can directly get a d+ approximation for
Hypergraph-SNDP without the reduction to Elem-SNDP. We raise the following technical question.
Problem 1. Suppose f is a non-trivial skew-supermodular function on V and G = (V, E) is a hypergraph. Let x be a basic feasible solution to the LP for covering f by G. Is there a hyperedge e ∈ E such that x_e ≥ 1/d, where d is the degree of G?
The preceding propositions show that Hypergraph-SNDP is essentially equivalent to node-weighted Elem-SNDP
where the node-weights are only on non-terminals. Node-weighted Steiner tree can be reduced to node-weighted Elem-SNDP and it is known that Set Cover reduces in an approximation-preserving fashion to node-weighted Steiner tree [KR95]. Hence, unless P = NP, we do not expect a better than O(log n)-approximation for Hypergraph-SNDP where n = |V| is the number of nodes in the graph. Thus, the approximation ratio for Hypergraph-SNDP cannot be a constant independent of d+. Node-weighted Elem-SNDP admits an O(rmax log |V|) approximation; see [Nut09, CEV12a, CEV12b, Fuk15]. For planar graphs, and more generally graphs from a proper minor-closed family, an improved bound of O(rmax) is claimed in [CEV12a]. The O(rmax log |V|) bound can be better than the bound of d+ in some instances. Here we raise a question based on the fact that planar graphs have constant average degree, which is used in the analysis for node-weighted network
design.
Problem 2. Is there an O(1)-approximation for node-weighted EC-SNDP and Elem-SNDP in planar graphs,
in particular when rmax is a fixed constant?
Finally, we hope that the counting argument and the connections between Hypergraph-SNDP, EC-SNDP
and Elem-SNDP will be useful for related problems including the problems involving degree constraints in
network design.
References
[BKN09]
Nikhil Bansal, Rohit Khandekar, and Viswanath Nagarajan. Additive guarantees for degree-bounded directed network design. SIAM Journal on Computing, 39(4):1413–1431, 2009.
[CCK08]
Tanmoy Chakraborty, Julia Chuzhoy, and Sanjeev Khanna. Network design for vertex connectivity.
In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 167–176.
ACM, 2008.
[CEV12a] Chandra Chekuri, Alina Ene, and Ali Vakilian. Node-weighted network design in planar and minorclosed families of graphs. In Automata, Languages, and Programming, pages 206–217. Springer,
2012.
[CEV12b] Chandra Chekuri, Alina Ene, and Ali Vakilian. Prize-collecting survivable network design in nodeweighted graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms
and Techniques, pages 98–109. Springer, 2012.
[CK12]
Julia Chuzhoy and Sanjeev Khanna. An O(k^3 log n)-approximation algorithm for vertex-connectivity survivable network design. Theory of Computing, 8:401–413, 2012. Preliminary
version in Proc. of IEEE FOCS, 2009.
[CVV06]
J. Cheriyan, S. Vempala, and A. Vetta. Network design via iterative rounding of setpair relaxations.
Combinatorica, 26(3):255–275, 2006.
[FJW06]
L. Fleischer, K. Jain, and D.P. Williamson. Iterative rounding 2-approximation algorithms
for minimum-cost vertex connectivity problems. Journal of Computer and System Sciences,
72(5):838–867, 2006.
[Fuk15]
Takuro Fukunaga. Spider covers for prize-collecting network activation problem. In Proceedings of
the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’15, pages 9–24.
SIAM, 2015.
[GGP+ 94] M.X. Goemans, A.V. Goldberg, S. Plotkin, D.B. Shmoys, E. Tardos, and D.P. Williamson. Improved approximation algorithms for network design problems. In Proc. of ACM-SIAM SODA,
pages 223–232, 1994.
[Jai01]
K. Jain. A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica, 21(1):39–60, 2001. Preliminary version in FOCS 1998.
[KR95]
P. Klein and R. Ravi. A nearly best-possible approximation algorithm for node-weighted Steiner
trees. J. Algorithms, 19(1):104–115, 1995. Preliminary version in IPCO 1993.
[LRS11]
Lap Chi Lau, Ramamoorthi Ravi, and Mohit Singh. Iterative methods in combinatorial optimization, volume 46. Cambridge University Press, 2011.
[NRS10]
Viswanath Nagarajan, R Ravi, and Mohit Singh. Simpler analysis of lp extreme points for traveling
salesman and survivable network design problems. Operations Research Letters, 38(3):156–160,
2010.
[Nut09]
Z. Nutov. Approximating minimum cost connectivity problems via uncrossable bifamilies and
spider-cover decompositions. In Proceedings of the 50th Annual IEEE Symposium on Foundations
of Computer Science (FOCS), pages 417–426. IEEE, 2009.
[WS11]
David P Williamson and David B Shmoys. The design of approximation algorithms. Cambridge
university press, 2011.
[ZNI02]
Liang Zhao, Hiroshi Nagamochi, and Toshihide Ibaraki. A note on approximating the survivable
network design problem in hypergraphs. IEICE TRANSACTIONS on Information and Systems,
85(2):322–326, 2002.
[ZNI03]
Liang Zhao, Hiroshi Nagamochi, and Toshihide Ibaraki. A primal–dual approximation algorithm for the survivable network design problem in hypergraphs. Discrete applied mathematics,
126(2):275–289, 2003. Preliminary version appeared in Proc. of STACS, 2001.
| 8 |
Ensembling Neural Networks for Digital
Pathology Images Classification and
Segmentation
Gleb Makarchuk?1,3 , Vladimir Kondratenko*1,3 , Maxim Pisov*1,3 ,
Artem Pimkin*1,3 , Egor Krivov1,2,3 , and Mikhail Belyaev2,1
arXiv:1802.00947v1 [] 3 Feb 2018
1 Kharkevich Institute for Information Transmission Problems, Moscow, Russia
2 Skolkovo Institute of Science and Technology, Moscow, Russia
3 Moscow Institute of Physics and Technology, Moscow, Russia
Abstract. In recent years, neural networks have proven to be a powerful framework for various image analysis problems. However, some application domains have specific limitations. Notably, digital pathology is an example of such a field due to tremendous image sizes and the quite limited number of training examples available. In this paper, we adopt
state-of-the-art convolutional neural networks (CNN) architectures for
digital pathology images analysis. We propose to classify image patches
to increase effective sample size and then to apply an ensembling technique to build prediction for the original images. To validate the developed approaches, we conducted experiments with Breast Cancer Histology Challenge dataset and obtained 90% accuracy for the 4-class tissue
classification task.
Keywords: Convolutional Networks, Ensembles, Digital Pathology
1
Introduction
Histology is a key discipline in cancer diagnosis thanks to its ability to evaluate
tissues anatomy. The classical approach involves glass slide microscopy and requires thoughtful analysis by a pathologist. Digital pathology imaging provides
a statistically equivalent way to analyze tissues [13], so it’s a natural application area for machine learning methods [10]. Recent advances in deep learning
suggest that these methods can be useful for a set of digital pathology image
analysis problems including cell detection and counting and segmentation [7].
One of the most important and challenging tasks is tissue classification, and
recent studies (e.g., [1]) demonstrate promising results. However, these image
analysis problems differ from standard one in different ways. A crucial limiting
factor is a combination of relatively small sample sizes (usually hundreds of examples) and extremely high resolution (the typical image size is 50000 x 50000).
For comparison, the ImageNet dataset contains millions of 256 x 256 images [3].
This combination leads to high variance of deep learning models predictions and
?
Equal contribution
requires careful design of data processing pipelines. In this work, we propose
two methods for digital pathology images segmentation and classification. Both
methods include data preprocessing, intensive usage of modern deep learning
architectures and an aggregation procedure for decreasing model variability.
2
Problem
Generally, our task was to recognize benign and malignant formations on breast
histology images. We were solving this problem in two formulations: image classification and segmentation.
2.1
Data
In this paper we use the Breast Cancer Histology Challenge (BACH-18) dataset
which consists of hematoxylin and eosin stained microscopy images as well as
whole-slide images.
Microscopy images The first subset consists of 400 microscopy images of
shape 2048 × 1536 × 3, where 3 stands for the number of channels, the color
space is RGB.
The images were obtained as patches from much larger microscopy images,
similar to those described in subsection 2.1.
Every dataset entry is labeled as belonging to one of the four classes: Normal
(n, 0), Benign (b, 1), Carcinoma in situ (is, 2) or Invasive Carcinoma (iv, 3).
The labels are evenly distributed between the 400 images.
Fig. 1: Samples from the first subset (from left to right): Normal, Benign, Carcinoma in situ and Invasive Carcinoma
Whole-slide images The second subset consists of 20 whole-slide images of
shape (approx.) 40000 × 60000 × 3, where, similarly, 3 stands for the number of
channels, and the color space is RGB.
For half of the images segmentation masks of corresponding spatial shape are
provided. Each pixel of the mask is labeled from 0 to 3 (from Normal to Invasive
Carcinoma). During the model training only this half of the dataset was used.
Ensembling Neural Networks for Breast Cancer Histology
3
Fig. 2: A sample from the second subset: whole-slide image (left) and the corresponding mask overlayed on the image (right). The blue color stands for Invasive
Carcinoma, red - Benign, the rest - Normal. Note the roughness of the ground
truth segmentation.
3
Microscopy images classification
Preprocessing One of the common problems when working with histology slides is preprocessing. As it turned out, the network pays attention to areas of inhomogeneity. For this reason, our image preparation was aimed at normalization and contrast enhancement of the slide. We tried a couple of well-known methods for preprocessing digital pathology images (e.g., [9]), but they did not increase
the performance. Also, we tested simple data transformations like inversion,
channel-wise mean subtraction, conversion to different color spaces, etc. A simple
channel-wise mean subtraction provided the largest performance boost (about
10% of accuracy score), so we ended up using only this preprocessing approach.
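A minimal sketch of this preprocessing step (our own illustration; the paper's exact implementation may differ) applied to a single RGB patch stored as an H x W x 3 numpy array:

import numpy as np

def channelwise_mean_subtract(patch):
    patch = patch.astype(np.float32)
    # subtract one mean per colour channel; applied per patch, which the text
    # below reports to work slightly better than per whole image
    return patch - patch.mean(axis=(0, 1), keepdims=True)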
Training We used two popular architectures for image classification: ResNet
[5] and DenseNet [6]:
ResNet is a convolutional neural network which won the 1st place on ILSVRC
2015 classification task. We used ResNet34 implemented in torchvision 4 with
slight architectural changes: we replaced the pooling layer before the fully-connected layer by an average spatial pyramid pooling layer [4]. We tried several
levels of pyramid pooling depth from 1 (global average pooling) to 3.
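One possible way to realise this modification in PyTorch is sketched below (an assumption-laden illustration, not the authors' code: the pooling levels, the four-way classifier head and the way the fully-connected layer is resized are our choices). The pyramid pooling module averages the final feature map over 1x1, 2x2 and 3x3 grids and concatenates the results.

import torch
import torch.nn as nn
from torchvision import models

class AvgSpatialPyramidPool(nn.Module):
    def __init__(self, levels=(1, 2, 3)):
        super().__init__()
        self.levels = levels
    def forward(self, x):
        pooled = [nn.functional.adaptive_avg_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)

levels = (1, 2, 3)
model = models.resnet34()
model.avgpool = AvgSpatialPyramidPool(levels)
# the classifier must now accept 512 * sum(l * l) features instead of 512
model.fc = nn.Linear(512 * sum(l * l for l in levels), 4)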
DenseNet is a convolutional neural network which has a slightly smaller
error rate on the ImageNet dataset than ResNet. We used DenseNet169 and
DenseNet201 from torchvision’s implementation with similar architectural changes
as for ResNet.
In our experiments we used Adam optimizer [8] with an exponential-like
learning rate policy: each 20 epochs the learning rate decreased by a factor of 2.
Initially the learning rate was taken equal to 0.01.
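In PyTorch this optimiser set-up looks roughly as follows (a sketch; "model", the data loading and the loss computation are assumed to exist elsewhere):

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# halve the learning rate every 20 epochs (the exponential-like policy above)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(120):
    for step in range(300):          # 300 batches per epoch, as described below
        ...                          # sample a batch of patches, forward, backward, optimizer.step()
    scheduler.step()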
Experiments have shown that feeding the images directly into the network
yields poor results due to the quite large image size. In order to overcome this difficulty we chose to train our models on patches extracted from the original
images: during training each patch was randomly (with a 2D uniform distribution) extracted from an image, also picked at random, with the label being the
4
http://pytorch.org/docs/0.3.0/torchvision/models.html
same as for the original image. This approach led to a performance increase of
about 15%.
Also, we observed that preprocessing each individual patch instead of preprocessing the whole image yields slightly better results.
Each network was trained on patches of shape 500 × 500 pixels, with batches
of size ≈ 10 (this value differs between models). The training process lasted for
120 epochs, and every epoch 300 batches were fed into the network.
We also tried pretraining our models on similar datasets: BreakHis5 and
Breast carcinoma histological images from the Department of Pathology, ”Agios
Pavlos” General Hospital of Thessaloniki, Greece[14]. See Table 1 for performance comparison.
Model Selection and Stacking While building models, we experimented with
different architectures, learning rate policies and pretraining. Thus, we ended
up with 29 different models built and evaluated using 3-fold cross-validation
(CV). We had a goal of combining these models in order to make more accurate
predictions.
During inference we deterministically extracted patches according to a grid
with a stride of 100 pixels. Thus, every multiclass network would generate a
matrix of shape 176 × 4 containing 4 class probabilities for 176 patches extracted
from the image. Similarly, every one-vs-all network would generate a matrix of
shape 176 × 1.
We extracted various features from these class probabilities predictions:
– min, max and mean values of the probabilities of each class
– (for multiclass networks only) on how many patches each class has the highest probability
– 10, 25, 75 and 90 percentile probability values of each class
– on how many patches probabilities go above 15% and 25% threshold values
for each class
After building the features, the problem was reduced to tabular data classification. So, for our final classifier, we have chosen XGBoost [2] as one of the
state-of-the-art approaches for such tasks.
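The feature construction and the final classifier can be sketched as follows (illustrative code: the exact feature order and the XGBoost hyperparameters shown here are placeholders, tuned by cross-validation in the paper):

import numpy as np
import xgboost as xgb

def patch_features(P):                                   # P: (176, 4) patch probabilities
    feats = []
    feats += list(P.min(0)) + list(P.max(0)) + list(P.mean(0))
    feats += list(np.bincount(P.argmax(1), minlength=4)) # how often each class wins
    for q in (10, 25, 75, 90):
        feats += list(np.percentile(P, q, axis=0))
    for t in (0.15, 0.25):
        feats += list((P > t).sum(0))                    # counts above the thresholds
    return np.array(feats, dtype=float)

# X: one row per image (features of all selected networks concatenated), y: labels
clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
# clf.fit(X, y)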
Our pipeline was the following:
– Choose reasonable XGBoost hyperparameters (based on cross-validation score)
for the classifier built on top of all (29) models we have.
– Use greedy search for model selection: keep removing models while the accuracy on CV keeps increasing.
– Fine-tune the XGBoost hyperparameters on the remaining models set.
By following this procedure we reduced the number of models from 29 to
12. It is also worth mentioning that we compared accuracies for different sets of
models and hyperparameters by averaging accuracy score from 10-fold CV across
20 different shuffles of the data to get statistically significant results (for such a
small dataset) and thus optimize based on merit rather than on randomness.
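A sketch of the greedy model-selection step from the pipeline above (illustrative; cv_accuracy is assumed to train and score the stacked classifier on a given subset of models):

def greedy_select(models, cv_accuracy):
    current = list(models)
    best = cv_accuracy(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for m in list(current):
            trial = [x for x in current if x != m]       # try dropping one model
            acc = cv_accuracy(trial)
            if acc > best:                               # keep removing while CV improves
                best, current, improved = acc, trial, True
                break
    return current, best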
5
https://omictools.com/breast-cancer-histopathological-database-tool
Inference Given the relatively small dataset, we decided that we might take advantage of retraining the networks on the whole dataset. However, this approach leaves no possibility of assessing the stacking quality.
As a trade-off, we decided to use 6-fold CV (instead of 3-fold CV), so that the
network would see 83% (vs 66%) of data: a substantial increase in performance
(compared to the 3-fold CV models) would mean that retraining on the whole
dataset might be beneficial.
In the 6-fold CV setting, the performance of every individual network increased significantly. We also fine-tuned the composition for the 6-fold networks, which resulted in two more models being held out and slightly changed hyperparameters (see Table 1 for a comparison of the networks' performances). However, the resulting ensemble classifier could not surpass the one built on top of 3-fold CV.
Nevertheless, for our final classifier we chose to average the patch predictions
across all 6 networks and use the XGBoost classifier built on top of 6-fold CV.
This approach is computationally inefficient, but allows us to reduce variance of
the predictions.
4
Whole-slide images segmentation
Preprocessing Similarly to the first problem, we use channel-wise mean subtraction as a preprocessing strategy. Also, given the unusually big images, we
tried to downsample them by various factors: downsampling by a factor of 40 along each spatial dimension proved to be very effective. Thereby, the downsampled input is used in some of the ensemble models.
Training In our segmentation experiments we also used the same optimizer and
learning rate policy as in section 3.
In case when downsampling was included in the preprocessing pipeline the
images were fed directly into the network, and the network was trained for 1500
epochs. Otherwise, the network was trained for 150 epochs with patches of shape
300 × 300 (40 patches per batch), similarly to the procedure described in section
3.
Models For the segmentation task, we introduce T-Net, a novel architecture
based on U-Net [11]. It can be regarded as a generalization, which applies additional convolutions to the connections between the downsampling and upsampling branches.
We used 3 different models:
– T-Net for binary segmentation Normal-vs-all trained on patches (T-Net 1).
– T-Net for a similar task but trained on images downsampled by a factor of 40
along each dimension (T-Net 2). We also used a weighted-boundary log loss,
which adds linearly decreasing weights to the pixels near the ground-truth
regions’ boundaries. Basically, it can be reduced to multiplying the ground
truth mask by the corresponding weights and calculating the log loss for the
resulting ”ground truth”.
Fig. 3: A schematic example of the T-Net architecture.
– T-Net for multiclass segmentation trained on images downsampled by a factor of 40 along each dimension (T-Net 3).
Postprocessing While working with the output of the network trained on patches of the non-downsampled whole-slide images, we faced the fact that the output probability maps were too heterogeneous, which resulted in holes in segmented areas after thresholding, although the ground truth consists of 1-connected domains. So, for primary processing we use a Gaussian blur with a square kernel of fixed size (a processing hyperparameter) to reduce the hole sizes in the next steps. Then we threshold the probability map and get several clusters of areas with holes, many of which we merge with the morphological closing operation [12]. Finally, we discard the connected components whose area is less than the a-th root of the mean of the component areas raised to the power a, i.e., the power mean of order a of the component areas (a is a hyperparameter). The postprocessing steps are shown in Fig. 4.
Fig. 4: Example of postprocessing (panels, left to right: Original, Blurred, Thresholded, Closed, Result, Ground Truth). (Original) and (Blurred) are the probability maps. Yellow on (Ground Truth) is 'iv', green is 'is'. Yellow on the other images means abnormal.
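The chain above can be sketched with scipy as follows (an approximation of the described steps: a Gaussian filter with parameter sigma stands in for the fixed-size square blur kernel, and the threshold, structuring-element size and exponent a are hyperparameters):

import numpy as np
from scipy import ndimage

def postprocess(prob, sigma=5.0, thr=0.5, close_size=15, a=2.0):
    mask = ndimage.gaussian_filter(prob, sigma) > thr            # blur, then threshold
    mask = ndimage.binary_closing(mask, structure=np.ones((close_size, close_size)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    cutoff = np.mean(areas ** a) ** (1.0 / a)                    # power mean of the areas
    keep_ids = np.arange(1, n + 1)[areas >= cutoff]
    return np.isin(labels, keep_ids)                             # drop small components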
Ensembling Due to strong class imbalance (75%−n : 1%−b : 1%−is : 23%−iv)
we focused our research on the Normal-vs-All task to adjust the output more
precisely. Experiments have shown that the network trained on whole-slide images (T-Net 1) predicts quite ”ragged” regions while the network trained on
downsampled images (T-Net 2) predicts a lot of false-positive pixels.
Our approach, the T-Net ensemble, consists of blending these two models, letting them compensate each other's mistakes, and transforming the output binary mask into a multiclass one using (T-Net 3)'s prediction:
Result = 3 · BinaryMask + (TNet3) · (1 − BinaryMask).   (1)
However, given the fact that the metric proposed by the organizers of the
BACH-18 challenge is biased towards the abnormal classes (1,2,3), we decided
to use a much simpler approach - shifted blending, by setting the binary positive
class to Invasive Carcinoma and the negative class to Benign:
Result = 1 + 2 · BinaryMask.   (2)
It significantly increased the proposed metric. Also, since it is shifted we provide
more common metrics for both approaches. See Section 5 for details.
Each model was trained and evaluated with 3-fold CV. To evaluate performance of the ensemble we used 5-fold CV (on top of the test predictions from
3-fold CV).
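Both blending rules reduce to a couple of array operations (sketch; BinaryMask is the thresholded Normal-vs-All prediction and tnet3 the multiclass label map):

def ensemble_blend(binary_mask, tnet3):     # equation (1)
    return 3 * binary_mask + tnet3 * (1 - binary_mask)

def shifted_blend(binary_mask):             # equation (2): 1 = Benign, 3 = Invasive Carcinoma
    return 1 + 2 * binary_mask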
5
Results
Microscopy images classification Table 1 shows the models’ accuracies. Note
the significant performance boost gained from stacking.
Network | 3-fold CV | 6-fold CV
ResNet 34 | .83 ± .01 | .86 ± .03
ResNet 34, pretrained on patches from whole slides | .83 ± .05 | .85 ± .04
ResNet 34, pretrained on BreakHis Dataset | .81 ± .04 | .87 ± .06
ResNet 34, pretrained on Agios Pavlos dataset | .83 ± .03 | .86 ± .03
ResNet 34, patch mean subtraction | .79 ± .04 | .83 ± .03
DenseNet 169 | .78 ± .06 | .85 ± .02
DenseNet 201 | .80 ± .03 | .82 ± .06
ResNet 34, Benign vs all | .90 ± .01 | .91 ± .02
ResNet 34, InSitu vs all | .90 ± .05 | .93 ± .02
ResNet 34, Invasive vs all | .90 ± .03 | .91 ± .03
Ensemble (stacking) | .90 ± .05 | .90 ± .05
Table 1: Models' accuracy for the microscopy images classification task: multiclass (first block), one-vs-all (second block), final ensemble built on top of 3- and 6-fold CV (third block)
Whole-slide images segmentation In the BACH-18 challenge the following
metric is used:
BachScore = 1 − ( Σ_i |pred_i − gt_i| ) / ( Σ_i max{gt_i, 3 − gt_i} · I[gt_i > 0 & pred_i > 0] ),   (3)
where the summation is performed across all the pixels and gti , predi are the
i-th pixel values of the ground truth and prediction respectively.
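For reference, the metric as we read it from (3) can be computed as follows (a sketch based on our reconstruction of the formula; gt and pred are integer label maps with values 0-3):

import numpy as np

def bach_score(gt, pred):
    gt = gt.ravel().astype(float)
    pred = pred.ravel().astype(float)
    num = np.abs(pred - gt).sum()
    den = (np.maximum(gt, 3 - gt) * ((gt > 0) & (pred > 0))).sum()
    return 1 - num / den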
Table 2 shows the models’ performances according to BachScore, as well as
the Dice score, a more common segmentation quality measure (it is computed for each channel separately and also for the binary normal/abnormal task).
Network | BachScore | Dice(b) | Dice(is) | Dice(iv) | Dice(abnormal)
T-Net 1 | .54±.16 | – | – | .61±.19 | .69±.16
T-Net 2 | .59±.20 | – | – | .68±.21 | .71±.19
T-Net 3 | .56±.19 | .01±.01 | .03±.04 | .59±.21 | .66±.21
T-Net ensemble | .64±.20 | .01±.02 | .01±.02 | .71±.23 | .72±.20
T-Net shifted blending | .70±.06 | .04±.06 | .03±.04 | .73±.21 | .74±.18
Table 2: Segmentation results
6
Conclusion
We proposed a two-stage procedure for the digital pathology image classification problem. To increase the effective sample size, we used random patches for training. The developed ensembling technique allowed us not only to increase the prediction quality due to averaging but also to combine results for individual patches into a prediction for the whole image. Overall, a promising classification
accuracy was obtained.
As for the whole-slide image segmentation task, we obtained mixed results: on the one hand we obtained promising Bach and Dice scores, but on the other hand most of our work was aimed at coarsening the obtained predictions to match the given labeling. Moreover, the results of our top-performing ensemble were heavily improved by hard biasing, which does not allow us to say how well this result reflects our method's performance.
References
1. Araújo, T., Aresta, G., Castro, E., Rouco, J., Aguiar, P., Eloy, C., Polónia, A.,
Campilho, A.: Classification of breast cancer histology images using convolutional
neural networks. PloS one 12(6), e0177544 (2017)
2. Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings
of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining. pp. 785–794. KDD ’16 (2016)
3. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale
hierarchical image database. In: Computer Vision and Pattern Recognition, 2009.
CVPR 2009. IEEE Conference on. pp. 248–255. IEEE (2009)
4. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional
networks for visual recognition pp. 346–361 (2014)
5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition
pp. 770–778 (2016)
6. Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected
convolutional networks 1(2), 3 (2017)
7. Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis:
A comprehensive tutorial with selected use cases. Journal of pathology informatics
7:29 (2016)
8. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980 (2014)
9. M. Macenko, M. Niethammer, J.M., et al: A method for normalizing histology
slides for quantitative analysis. Journal of magnetic resonance imaging 9, 1107–
1110 (2009)
10. Madabhushi, A., Lee, G.: Image analysis and machine learning in digital pathology:
Challenges and opportunities. Medical image analysis 33, 170–175 (2016)
11. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing
and Computer-Assisted Intervention. pp. 234–241 (2015)
12. Serra, J.: Image Analysis and Mathematical Morphology. Academic Press, Inc.,
Orlando, FL, USA (1983)
13. Snead, D.R., Tsang, Y.W., Meskiri, A., Kimani, P.K., Crossman, R., Rajpoot,
N.M., Blessing, E., Chen, K., Gopalakrishnan, K., Matthews, P., et al.: Validation
of digital pathology imaging for primary histopathological diagnosis. Histopathology 68(7), 1063–1072 (2016)
14. Zioga, C., Kamas, A., Patsiaoura, K., Dimitropoulos, K., Barmpoutis, P., Grammalidis, N.: Breast carcinoma histological images from the department of pathology,
”agios pavlos” general hospital of thessaloniki, greece (jul 2017)
| 1 |
Iterative Residual Fitting for Spherical Harmonic
Transform of Band-Limited Signals on the Sphere:
Generalization and Analysis
Usama Elahi∗ , Zubair Khalid† , Rodney A. Kennedy∗ and Jason D. McEwen‡
∗ Research School of Engineering, The Australian National University, Canberra, ACT 2601, Australia
† School of Science and Engineering, Lahore University of Management Sciences, Lahore 54792, Pakistan
‡ Mullard Space Science Laboratory, University College London, Surrey RH5 6NT, UK
[email protected], [email protected], [email protected], [email protected]
arXiv:1709.02503v1 [] 8 Sep 2017
Abstract—We present the generalized iterative residual fitting (IRF) for the computation of the spherical harmonic transform (SHT) of band-limited signals on the sphere. The proposed
method is based on the partitioning of the subspace of band-limited signals into orthogonal subspaces. There exist sampling
schemes on the sphere which support accurate computation of
SHT. However, there are applications where samples (or measurements) are not taken over a predefined grid due to the nature of the signal and/or the acquisition set-up. To support such applications, the proposed IRF method enables accurate computation of the SHT of signals with a sufficient number of randomly distributed samples.
signals with randomly distributed sufficient number of samples.
In order to improve the accuracy of the computation of the SHT,
we also present the so-called multi-pass IRF which adds multiple
iterative passes to the IRF. We analyse the multi-pass IRF for
different sampling schemes and for different size partitions.
Furthermore, we conduct numerical experiments to illustrate
that the multi-pass IRF allows sufficiently accurate computation
of SHTs.
Index Terms—Spherical harmonics, basis functions, spherical
harmonic transform, residual fitting, band-limited signals, 2-sphere (unit sphere).
I. INTRODUCTION
Signals are defined on the sphere in a variety of fields including geodesy [1], computer graphics [2], cosmology [3], astrophysics [4], medical imaging [5], acoustics [6] and wireless
communication [7] to name a few. Spherical harmonic (SH)
functions [8] are a natural choice of basis functions for
representing the signal on the sphere in all these applications.
Analysis on the sphere is done in both spatial (spherical)
and spectral (spherical harmonic) domains. The transformation
between the two domains is enabled by the well known
spherical harmonic transform (SHT) [8], [9]. For harmonic
analysis and signal representation (reconstruction), the ability
to accurately compute the SHT of a signal from its samples
taken over the sphere is of great importance.
Sampling schemes have been devised in the literature for the
accurate and efficient computation of SHTs [10], [11]. However, the samples may not be available in practice (e.g., [5],
Usama Elahi, Zubair Khalid and Rodney A. Kennedy are supported by
Australian Research Council’s Discovery Projects funding scheme (project no.
DP150101011). Jason D. McEwen is partially supported by the Engineering
and Physical Sciences Research Council (grant number EP/M011852/1).
[12]), over the grid defined by these sampling schemes. To support the computation of SHTs in applications where samples or
data-sets are not available on the pre-defined grid, least squares
fitting (LSF) methods have been investigated for efficient
computation of the SHTs [12]–[18]. LSF methods formulate
a large linear system of basis functions and then attempt to
solve it efficiently. However, due to memory overflow, it is not
suitable for systems with large band-limits, L > 1024 [19]–
[21]. To solve this problem, an iterative residual fitting (IRF)
method has been proposed in [19] as an extension of LSF
and incorporates a divide and conquer technique for the
computation of SHTs. The basic idea of IRF is to divide
the subspace spanned by all spherical harmonics into smaller
partitions and then perform least squares on each partition
iteratively. Although IRF is fast, it creates a less accurate
reconstruction [19] as the size of the harmonic basis increases
for large band-limits. To improve the reconstruction accuracy,
a multi-pass IRF approach is used which includes multiple
passes for fitting. This is the same as IRF but involves multiple
IRF operations rather than one. A variant of this scheme is
presented in [19], where reconstruction for 3D surfaces is
carried out by taking large number of samples.
In this paper, we present an IRF method for the computation
of the SHT of a band-limited signal in a general setting that
partitions the subspace of band-limited signals into orthogonal
subspaces, where each orthogonal subspace can be spanned
by different numbers of basis functions. We also formulate
multi-pass IRF to improve the accuracy of computation of
the SHT. We analyze multi-pass IRF for different choices of
partitioning of the subspace and sampling schemes [10], [11],
[19], [22] and show that the computation of the SHT converges
in all cases. We also show that the convergence is fast for the
partition choice considered in this work.
The remainder of this paper is organized as follows. Section
2 provides the necessary mathematical background and notation required to understand the work. IRF and multi-pass IRF
methods are formulated in section 3. In section 4, we carry out
accuracy analysis of the proposed IRF method for different
partition choices and sampling schemes. Finally, concluding
remarks are presented in section 5.
II. MATHEMATICAL BACKGROUND
A. Signals on the Sphere
A point v̂ = v̂(θ, φ) on the unit sphere S² ≜ {v̂ ∈ R³ : |v̂| = 1} is parameterized by v̂(θ, φ) = [sin θ cos φ, sin θ sin φ, cos θ]^T ∈ S² ⊂ R³, where (·)^T represents the transpose, θ ∈ [0, π] represents the co-latitude and φ ∈ [0, 2π) denotes the longitude. The space of square integrable complex functions of the form g(θ, φ), defined on the unit sphere, forms a complex separable Hilbert space, denoted by L²(S²), with inner product defined by [8]
⟨g, h⟩ ≜ ∫_{S²} g(θ, φ) h̄(θ, φ) sin θ dθ dφ,  g, h ∈ L²(S²),   (1)
where the overbar denotes complex conjugation. The functions with finite induced norm ‖g‖ ≜ ⟨g, g⟩^{1/2} are referred to as signals on the sphere.
B. Spherical Harmonics
Spherical harmonic (SH) functions, denoted by Y_ℓ^m(θ, φ) for integer degree ℓ ≥ 0 and integer order |m| ≤ ℓ, serve as a complete basis for L²(S²) [8]. Due to the completeness of the SH functions, any function g on the sphere can be expanded as
g(θ, φ) = Σ_{ℓ=0}^{∞} Σ_{m=−ℓ}^{ℓ} (g)_ℓ^m Y_ℓ^m(θ, φ),   (2)
where (g)_ℓ^m are the SH coefficients of degree ℓ and order m and form the spectral domain representation of the signal g, given by the spherical harmonic transform (SHT) defined as
(g)_ℓ^m ≜ ⟨g, Y_ℓ^m⟩ = ∫_{S²} g(θ, φ) Y̅_ℓ^m(θ, φ) sin θ dθ dφ.   (3)
The signal g is band-limited at degree L if (g)_ℓ^m = 0 for all ℓ ≥ L, |m| ≤ ℓ. The set of band-limited signals forms an L²-dimensional subspace of L²(S²), denoted by H_L.
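As a practical note (not part of the paper), the SH basis above can be evaluated with scipy, taking care of the argument convention: scipy.special.sph_harm takes (order m, degree ℓ, azimuthal angle, polar angle), so the paper's co-latitude θ and longitude φ must be passed in reversed order.

import numpy as np
from scipy.special import sph_harm

def Y(ell, m, theta, phi):
    # theta: co-latitude in [0, pi], phi: longitude in [0, 2*pi), as in the text
    return sph_harm(m, ell, phi, theta)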
III. GENERALIZED ITERATIVE RESIDUAL FITTING
Here we present the generalization of the IRF method [5],
[19] for the computation of the SHT of the band-limited signal
g ∈ HL from its samples.
A. Iterative Residual Fitting (IRF) – Formulation
The IRF method is based on the idea of partitioning the subspace $\mathcal{H}_L$ into smaller subspaces and carrying out least-squares estimation on these partitions iteratively. In this way, a large linear problem is divided into manageable small subsets of linear problems. The subspace $\mathcal{H}_L$ has a graphical representation of the form shown in Fig. 1, which also represents the SH (spectral) domain formed by the SH coefficients of the band-limited signal in $\mathcal{H}_L$. We partition $\mathcal{H}_L$ into $K$ orthogonal subspaces $\mathcal{H}_L^k$, $k = 1, 2, \dots, K$, each of dimension $N_k$. We analyse different choices for the partitioning later in the paper. We index the SH functions that span the subspace $\mathcal{H}_L^k$ as $Y_k^j$, $j = 1, 2, \dots, N_k$, and define $(g)_k^j \triangleq \langle g, Y_k^j \rangle$.
Fig. 1: Spherical harmonic domain representation of a band-limited signal in $\mathcal{H}_L$.
Given $M$ samples (measurements) of the band-limited signal $g \in \mathcal{H}_L$, we wish to compute its SH coefficients. Define the vector
$$G \triangleq [g(\theta_1, \phi_1), \dots, g(\theta_M, \phi_M)]^T \qquad (4)$$
of $M$ measurements of the signal $g \in \mathcal{H}_L$ on the sphere, and the $M \times N_k$ matrix $Y_k$, with entries $\{Y_k\}_{p,q} = Y_k^q(\theta_p, \phi_p)$, containing the SH functions that span the subspace $\mathcal{H}_L^k$ evaluated at the $M$ sampling points. The vector $g_k = [(g)_k^1, (g)_k^2, \dots, (g)_k^{N_k}]^T$ of SH coefficients can then be iteratively computed (estimated) in the least-squares sense as
$$\tilde{g}_k = (Y_k^H Y_k)^{-1} Y_k^H r_k, \qquad (5)$$
where $(\cdot)^H$ denotes the Hermitian transpose of a matrix and
$$r_k = G - \sum_{k'=1}^{k-1} Y_{k'} \tilde{g}_{k'}, \qquad r_1 = G, \qquad (6)$$
is the residual between the samples of the signal and the signal synthesized from the coefficients $\tilde{g}_{k'}$ for $k' = 1, 2, \dots, k-1$; the estimation of the coefficients is carried out iteratively for $k = 1, 2, \dots, K$. We note that the computational complexity of (5) for each $k$ is of the order of $\max(O(M N_k^2), O(N_k^3)) = O(M N_k^2)$, while the computational complexity of (6) is $O(M L^2)$. We later analyse the estimation accuracy of the IRF method for different sampling schemes on the sphere and different partitions of the subspace $\mathcal{H}_L$ of band-limited signals. For the special case of partitioning the subspace $\mathcal{H}_L$ into $L$ subspaces $\mathcal{H}_L^k$ based on the degree of the spherical harmonics, $\ell = 0, 1, \dots, L-1$, it has been shown that iterative residual fitting allows sufficiently accurate estimation of the SH coefficients [19].
The proposed IRF method enables accurate computation of the SHT of a signal from a sufficient number of randomly distributed samples. The IRF algorithm finds significant use in applications where the samples on the sphere are not taken over a predefined grid. For example, samples are taken over the cortical surface in medical imaging [5], where IRF allows sufficiently accurate parametric modeling of cortical surfaces.
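The following is a minimal Python sketch (not from the paper) of a single IRF pass implementing (5)–(6). The helper sh_basis_matrix from the previous sketch is assumed, and parts is a list of column-index arrays, one per subspace $\mathcal{H}_L^k$.

```python
# A minimal sketch of one IRF pass: least-squares fit on each partition of
# the current residual, cf. (5)-(6).
import numpy as np

def irf_pass(Y, G, parts):
    """Y: M x L^2 design matrix, G: M samples, parts: list of index arrays."""
    coeffs = np.zeros(Y.shape[1], dtype=complex)
    r = G.copy()                            # initial residual is the data
    for idx in parts:
        Yk = Y[:, idx]
        # g~_k = (Yk^H Yk)^(-1) Yk^H r_k, computed via a least-squares solve
        gk, *_ = np.linalg.lstsq(Yk, r, rcond=None)
        coeffs[idx] = gk
        r = r - Yk @ gk                     # residual for the next subspace
    return coeffs, r
```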
B. Multi-Pass IRF and Residual Formulation
To improve the estimation accuracy, we employ the so-called multi-pass IRF [19], which applies the IRF method repeatedly. In multi-pass IRF, the IRF algorithm is run for a number of iterations (passes), indexed by $i = 1, 2, \dots$. To clarify the concept, we incorporate the iteration index $i$ into the formulation in (5) and (6) as
$$\tilde{g}_k(i) = (Y_k^H Y_k)^{-1} Y_k^H\, r_k(i), \qquad r_k(i) = G - \sum_{i'=1}^{i-1} \sum_{k'=1}^{k-1} Y_{k'}\, \tilde{g}_{k'}(i'), \qquad (7)$$
$$r_0(i) = r_K(i-1), \qquad r_0(1) = G. \qquad (8)$$
After the $i$-th iteration, the overall estimate $\tilde{g}_k$ is obtained for each $k = 1, 2, \dots, K$ as
$$\tilde{g}_k = \sum_{i'=1}^{i} \tilde{g}_k(i'). \qquad (9)$$
By defining
$$A_k \triangleq (Y_k^H Y_k)^{-1} Y_k^H, \qquad C_k \triangleq Y_k A_k, \qquad (10)$$
the residual after the $i$-th iteration is given by
$$r_K(i) = \Bigg( \prod_{k=1}^{K} (I - C_k) \Bigg)^{i} G. \qquad (11)$$
In general, the residual in (11) depends on the distribution of the sampling points and on the partitioning of $\mathcal{H}_L$. In the next section, we show that the residual converges to zero for a variety of sampling schemes and different partitions.
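A minimal sketch (not from the paper) of the multi-pass scheme follows: the single pass above is applied repeatedly to the residual left by the previous pass, and the per-pass coefficient updates are accumulated as in (9). The stopping tolerance is our own choice.

```python
# Multi-pass IRF wrapper around irf_pass from the previous sketch.
import numpy as np

def multipass_irf(Y, G, parts, num_passes=100, tol=1e-14):
    coeffs = np.zeros(Y.shape[1], dtype=complex)
    r = G.copy()
    for i in range(num_passes):
        update, r = irf_pass(Y, r, parts)   # fit the current residual
        coeffs += update                     # accumulate, cf. (9)
        if np.linalg.norm(r) < tol * np.linalg.norm(G):
            break
    return coeffs, r
```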
IV. ANALYSIS OF MULTI-PASS IRF
A. Partition Choices
In order to understand the partitioning of $\mathcal{H}_L$, we refer to the graphical representation of $\mathcal{H}_L$ shown in Fig. 1, which gives the position of the spherical harmonic coefficients with respect to degree $\ell \in \{0, 1, \dots, L-1\}$ and order $|m| \leq \ell$. We number the spherical harmonic coefficients (basis functions) shown in Fig. 1 from $1$ to $L^2$, starting from $\ell = 0$, $m = 0$ and traversing the whole domain from $m = -\ell$ to $m = \ell$ for increasing values of $\ell$. In a similar way, we can also traverse the whole domain by fixing $m$ and running over all admissible values of $\ell$. We analyse four different types of partitions, whose sizes vary with increasing or decreasing values of the degree $\ell$ and order $m$. The size of each partition is denoted by $N_k$. For all partition choices, the generalized IRF is run over all values of $k$ for a fixed value of $i$.
Partition Choice 1: We first consider the partitioning of $\mathcal{H}_L$ based on the spherical harmonic degree [19]. We take $K = L$ partitions $\mathcal{H}_L^k$, one for each degree $\ell = k-1$, such that the subspace $\mathcal{H}_L^k$ is spanned by the spherical harmonics of degree $k-1$. Consequently, the dimension of each subspace is $N_k = 2k-1$. As mentioned earlier, IRF has already been applied for this choice of partition [19]. We show through numerical experiments that alternative choices of partitioning result in faster convergence and more accurate computation of the SHT.
Partition Choice 2: For partition choice 2, we combine the $k$-th and the $(K-k+1)$-th partitions of partition choice 1 to obtain $\frac{L}{2}$ or $\frac{L+1}{2}$ partitions for even or odd band-limit $L$, respectively. For even $L$, each partition $\mathcal{H}_L^k$ is of size $N_k = 2L$ for $k = 1, 2, \dots, \frac{L}{2}$. For odd $L$, we have $\frac{L+1}{2}$ partitions with $N_k = 2L$ for $k = 1, 2, \dots, \frac{L-1}{2}$ and one partition of size $N_{(L+1)/2} = L$.
Partition Choice 3: Here, we consider partitioning with respect to each order $|m| < L$ (see Fig. 1). Consequently, we have $2L-1$ partitions, one for each order $m$ with $|m| < L$, each spanned by the SH functions of that order.
Partition Choice 4: Partition choice 4 is obtained by combining the partitions of partition choice 3. We obtain $L$ partitions by combining the partitions for order $m$ and order $-(L-m)$ for $m = 1, 2, \dots, L-1$ (the partition for $m = 0$ is left on its own). With such combining, we have $L$ partitions of $\mathcal{H}_L$, each of size $L$.
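The following sketch (not from the paper) generates the column-index sets for the four partition choices, assuming the degree-major column ordering used in the earlier sh_basis_matrix sketch; the helper names are ours.

```python
# Index sets for partition choices 1-4, for columns ordered degree-major
# with m = -l..l within each degree (column of (l, m) is l*l + m + l).
import numpy as np

def col_index(l, m):
    return l * l + (m + l)

def partitions(L, choice):
    by_degree = [np.arange(l * l, (l + 1) ** 2) for l in range(L)]
    by_order = {m: np.array([col_index(l, m) for l in range(abs(m), L)])
                for m in range(-(L - 1), L)}
    if choice == 1:                      # one partition per degree
        return by_degree
    if choice == 2:                      # pair degree k-1 with degree L-k
        parts, used = [], set()
        for k in range(1, L + 1):
            j = L - k + 1
            if k in used or j in used:
                continue
            used.update({k, j})
            idx = by_degree[k - 1] if k == j else \
                np.concatenate([by_degree[k - 1], by_degree[j - 1]])
            parts.append(idx)
        return parts
    if choice == 3:                      # one partition per order m
        return [by_order[m] for m in range(-(L - 1), L)]
    if choice == 4:                      # pair order m with order -(L-m)
        parts = [by_order[0]]
        for m in range(1, L):
            parts.append(np.concatenate([by_order[m], by_order[-(L - m)]]))
        return parts
    raise ValueError("choice must be 1, 2, 3 or 4")
```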
B. Analysis
Here we analyse the accuracy of the computation of the SHT, that is, the computation of the SH coefficients, of a band-limited signal sampled with different sampling schemes. For the distribution of samples on the sphere, we consider equiangular sampling [10] and optimal-dimensionality sampling [11], as these schemes support accurate computation of the SHT for band-limited signals. Among the sampling schemes on the sphere that do not support highly accurate computation of the SHT, we consider the HEALPix sampling scheme [22] and random samples drawn uniformly with respect to the differential measure $\sin\theta\, d\theta\, d\phi$. To analyse the accuracy, we construct a test signal $g \in \mathcal{H}_L$ by first generating the spherical harmonic coefficients $(g)_\ell^m$ with real and imaginary parts uniformly distributed in $[-1, 1]$, and then using (2) to obtain the signal at the sample points of each sampling scheme. For a meaningful comparison, we take approximately the same number of points for each sampling scheme. We apply the proposed multi-pass IRF for each partition choice and each sampling scheme to compute the estimates $(\tilde{g})_\ell^m$ of the SH coefficients, and record the maximum error between the reconstructed and original SH coefficients, given by
$$\epsilon_{\max} \triangleq \max_{\ell < L,\ |m| \leq \ell} \big| (g)_\ell^m - (\tilde{g})_\ell^m \big|, \qquad (12)$$
which is plotted on a logarithmic scale in Fig. 2 for band-limit $L = 15$. The error is plotted against the number of iterations of the proposed multi-pass IRF for the different partition choices and sampling schemes (see the caption for the number of samples of each scheme). It can be observed that 1) the error converges to zero (of the order of $10^{-16}$, i.e., double precision) for all partition choices and sampling schemes, and 2) the error converges most quickly for partition choice 4. We also validate the formulation of the residual in (11) by computing it after each iteration of the multi-pass IRF. To illustrate the effect of the number of samples on the accuracy of the proposed multi-pass IRF, we have taken $2L^2$, $4L^2$ and $6L^2$ samples of the optimal-dimensionality sampling scheme [11] and plot the error $\epsilon_{\max}$
Fig. 2: Maximum reconstruction error $\epsilon_{\max}$, given in (12), between the original and reconstructed SH coefficients of a band-limited signal with $L = 15$. The reconstructed SH coefficients are obtained using the proposed multi-pass IRF, where the samples of the signal are taken as (a) 991 samples of the equiangular sampling scheme, (b) 972 samples of the HEALPix sampling scheme, (c) 900 random samples, and (d) 450, (e) 900 and (f) 1350 samples of the optimal-dimensionality sampling scheme. Each panel plots $\epsilon_{\max}$ (log scale) versus the number of iterations $i$ for partition choices 1–4.
in Fig. 2(d)–(f), where it is evident that the error converges more quickly for a greater number of samples. The convergence of the error is in agreement with the formulation of the residual in (11); however, the convergence rate changes with the sampling scheme and with the partitioning of the subspace of band-limited signals. This requires further study and is the subject of future work.
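For completeness, here is a minimal sketch (not from the paper) of the accuracy experiment for the random-sampling case, reusing the earlier sketches; the number of samples and the random seed are arbitrary choices.

```python
# Random band-limited coefficients, random sample points (uniform w.r.t.
# sin(theta) dtheta dphi), multi-pass IRF, and the maximum error of (12).
import numpy as np

L, M = 15, 900
rng = np.random.default_rng(0)
g_true = rng.uniform(-1, 1, L * L) + 1j * rng.uniform(-1, 1, L * L)

theta = np.arccos(rng.uniform(-1, 1, M))   # uniform points on the sphere
phi = rng.uniform(0, 2 * np.pi, M)

Y = sh_basis_matrix(theta, phi, L)         # from the sketch in Section II
G = Y @ g_true                             # samples of the test signal
g_est, _ = multipass_irf(Y, G, partitions(L, 4), num_passes=100)
eps_max = np.max(np.abs(g_true - g_est))   # cf. (12)
print(eps_max)
```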
V. CONCLUSIONS
We have presented a generalized iterative residual fitting (IRF) method for the computation of the spherical harmonic transform (SHT) of band-limited signals on the sphere. The proposed IRF is based on partitioning the subspace of band-limited signals into orthogonal subspaces. In order to improve the accuracy of the transform, we have also presented a multi-pass IRF scheme and analysed it for different sampling schemes and for four different partition choices. We have performed numerical experiments showing that accurate computation of the SHT is achieved by multi-pass IRF. For the different partitions and sampling distributions, we have analysed the residual (error) and demonstrated its convergence to zero. Furthermore, it has been demonstrated that the rate of convergence of the error depends on the sampling scheme and on the choice of partition. A rigorous analysis relating the nature of the partitions to the convergence of the proposed method, and the application of the proposed method in medical imaging, computer graphics and beyond, are subjects of future work.
REFERENCES
[1] A. Amirbekyan, V. Michel, and F. J. Simons, “Parametrizing surface
wave tomographic models with harmonic spherical splines,” Geophys.
J. Int., vol. 174, no. 2, pp. 617–628, Jan. 2008.
[2] R. Ng, R. Ramamoorthi, and P. Hanrahan, “Triple product wavelet
integrals for all-frequency relighting,” ACM Trans. Graph., vol. 23, no. 3,
pp. 477–487, Aug. 2004.
[3] Y. Fantaye, C. Baccigalupi, S. Leach, and A. Yadav, “Cmb lensing
reconstruction in the presence of diffuse polarized foregrounds,” J.
Cosmol. Astropart. Phys., vol. 2012, no. 12, p. 017, July 2012.
[4] N. Jarosik, C. L. Bennett, J. Dunkley, B. Gold, M. R. Greason,
M. Halpern, R. S. Hill, G. Hinshaw, A. Kogut, E. Komatsu, D. Larson,
M. Limon, S. S. Meyer, M. R. Nolta, N. Odegard, L. Page, K. M. Smith,
D. N. Spergel, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L.
Wright, “Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Sky maps, systematic errors, and basic results,” ApJS, vol.
192, no. 2, p. 14, Jan. 2011.
[5] M. Chung, K. Dalton, L. Shen, A. Evans, and R. Davidson, “Weighted
fourier series representation and its application to quantifying the amount
of gray matter,” IEEE Trans. Med. Imag., vol. 26, no. 4, pp. 566–581,
Apr. 2007.
[6] W. Zhang, M. Zhang, R. A. Kennedy, and T. D. Abhayapala, “On
high resolution head-related transfer function measurements: An efficient sampling scheme,” IEEE/ACM Trans. Audio, Speech, Language
Process., vol. 20, no. 2, pp. 575–584, Dec. 2012.
[7] Y. F. Alem, Z. Khalid, and R. A. Kennedy, “3D spatial fading correlation
for uniform angle of arrival distribution,” IEEE Commun. Lett., vol. 19,
pp. 1073–1076, June 2015.
[8] R. A. Kennedy and P. Sadeghi, Hilbert Space Methods in Signal
Processing. Cambridge, UK: Cambridge University Press, March 2013.
[9] J. J. Sakurai, Modern Quantum Mechanics, 2nd ed. Reading, MA:
Addison Wesley Publishing Company, Inc., 1994.
[10] J. D. McEwen and Y. Wiaux, “A novel sampling theorem on the sphere,”
IEEE Trans. Signal Process., vol. 59, no. 12, pp. 5876–5887, Dec 2011.
[11] Z. Khalid, R. A. Kennedy, and J. D. McEwen, “An optimaldimensionality sampling scheme on the sphere with fast spherical
harmonic transforms,” IEEE Trans. Signal Process., vol. 62, no. 17,
pp. 4597–4610, Sep. 2014.
[12] N. Sneeuw, “Global spherical harmonic analysis by least-squares and
numerical quadrature methods in historical perspective,” Geophys. J.
Int., vol. 118, no. 3, pp. 707–716, Sep. 1994.
[13] D. M. Healy, Jr., D. Rockmore, P. J. Kostelec, and S. S. B. Moore,
“FFTs for the 2-sphere - improvements and variations,” J. Fourier Anal.
and Appl., vol. 9, pp. 341–385, 2003.
[14] J. Blais and M. Soofi, “Spherical harmonic transforms using quadratures
and least squares,” in Computational Science ICCS 2006, ser. Lecture
Notes in Computer Science. Springer Berlin Heidelberg, 2006, vol.
3993, pp. 48–55.
[15] K. Ivanov and P. Petrushev, “Irregular sampling of band-limited functions on the sphere,” Appl Comput Harmon Anal., vol. 37, no. 3, pp.
545 – 562, Nov. 2014.
[16] J. Keiner, S. Kunis, and D. Potts, “Efficient reconstruction of functions
on the sphere from scattered data,” J. Fourier Anal. Appl., vol. 13, no. 4,
pp. 435–458, May 2007.
[17] S. Kunis and D. Potts, “Fast spherical fourier algorithms,” J. Comput.
Appl. Math., vol. 161, no. 1, pp. 75 – 98, Dec. 2003.
[18] ——, “Stability results for scattered data interpolation by trigonometric
polynomials,” SIAM J. Sci. Comput., vol. 29, no. 4, pp. 1403–1419, Feb.
2007.
[19] L. Shen and M. K. Chung, “Large-scale modeling of parametric surfaces
using spherical harmonics,” in 3D Data Processing, Visualization, and
Transmission, Third International Symposium on, June 2006, pp. 294–
301.
[20] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations.
Society for Industrial and Applied Mathematics, 1995.
[21] R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra,
V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for
the Solution of Linear Systems: Building Blocks for Iterative Methods.
Society for Industrial and Applied Mathematics, 1994.
[22] K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelmann, “HEALPix: A framework for high-resolution discretization and fast analysis of data distributed on the sphere,” Astrophys. J., vol. 622, no. 2, p. 759, Apr. 2005.
INITIAL MONOMIAL INVARIANTS OF HOLOMORPHIC MAPS
DUSTY GRUNDMEIER AND JIŘÍ LEBL
Abstract. We study a new biholomorphic invariant of holomorphic maps between domains in different dimensions based on generic initial ideals. We start with standard generic initial monomial ideals to find invariants for rational maps of spheres and hyperquadrics, giving a readily computable invariant in this important case. For example, the generic initial monomials distinguish all four inequivalent rational proper maps from the two- to the three-dimensional ball. Next, we associate to each subspace $X \subset \mathcal{O}(U)$ a generic initial monomial subspace, which is invariant under biholomorphic transformations and multiplication by nonzero functions. The generic initial monomial subspace is a biholomorphic invariant for holomorphic maps if the target automorphism is linear fractional, as in the case of automorphisms of spheres or hyperquadrics.
1. Introduction
Let U ⊂ Cn and V ⊂ Cm be domains. Denote by O(U, V ) the set of holomorphic maps
f : U → V.
(1)
Write O(U) = O(U, C) as usual. A fundamental problem in several complex variables is to
understand maps in O(U, V ) up to automorphisms; that is, f : U → V and g : U → V are
equivalent if there exist biholomorphisms τ ∈ Aut(U) and χ ∈ Aut(V ) such that
f ◦ τ = χ ◦ g.
(2)
A particularly important case is when the maps are proper. A map f : U → V is proper
if for every compact K ⊂⊂ V , the set f −1 (K) is compact. If f extends continuously to
the boundary, then the proper map f takes boundary to boundary. The map f restricted
to the boundary gives a CR map, and therefore, we have a problem in CR geometry. It
is important to understand those situations where V has a large automorphism group, and
thus we focus most of our attention on the unit ball of a certain signature. For a pair of
integers (a, b), a ≥ 1, the unit ball of signature b is given by
$$\mathbb{B}^{a+b}_b = \Big\{ z = (z_1, \dots, z_{a+b}) \in \mathbb{C}^{a+b} : -\sum_{j=1}^{b} |z_j|^2 + \sum_{j=1+b}^{a+b} |z_j|^2 < 1 \Big\}. \qquad (3)$$
The ball $\mathbb{B}^n_0$ with signature 0 is the standard unit ball $\mathbb{B}^n$. The boundary of $\mathbb{B}^{a+b}_b$ is the hyperquadric
$$Q(a, b) = \Big\{ z \in \mathbb{C}^{a+b} : -\sum_{j=1}^{b} |z_j|^2 + \sum_{j=1+b}^{a+b} |z_j|^2 = 1 \Big\}. \qquad (4)$$
Date: September 9, 2015.
The second author was in part supported by NSF grant DMS-1362337 and Oklahoma State University’s
DIG and ASR grants.
Hyperquadrics are the model hypersurfaces for Levi-nondegenerate surfaces. The hyperquadrics are also the flat models in CR geometry from the point of view of Riemannian
geometry.
The CR geometry problem of maps into hyperquadrics has a long history beginning with
Webster who in 1978 [17] proved that every algebraic hypersurface can be embedded into
a hyperquadric for large enough a and b. On the other hand Forstnerič [8] proved that in
general a CR submanifold will not map to a finite dimensional hyperquadric. Thus not every
CR submanifold can be realized as a submanifold of the flat model (i.e. the hyperquadric).
The mapping problem has been studied extensively for proper maps between balls, where
the related CR question is to classify the CR maps between spheres. That is, suppose
f : Bn → BN is a proper holomorphic map. Two such maps f and g are spherically equivalent
if there exist automorphisms τ ∈ Aut(Bn ) and χ ∈ Aut(BN ) such that f ◦ τ = χ ◦ g.
There has been considerable progress in the classification of such maps, especially for small
codimensional cases; see for example [1, 3, 4, 6, 9, 14] and the many references within.
In particular, Forstnerič [9] proved that given sufficient boundary regularity, the map
is rational and the degree bounded in terms of the dimensions only. The degree is an
invariant under spherical equivalence and the sharp bounds for the degree were conjectured
by D’Angelo [5].
In this article we introduce a new biholomorphic invariant that is far more general than
degree. When the map is rational and the domain and target are balls (possibly of nonzero
signature), we obtain invariants via generic initial ideals from commutative algebra. For
arbitrary domains and maps, we generalize the smallest degree part of the ideal.
The techniques center on the idea of generic initial monomial ideals; see e.g. Green [12].
Given a homogeneous ideal I, the monomial ideal in(I) is generated by the initial monomials of elements of I (see section 3 for precise definitions). The generic initial ideal, denoted
gin(I), is an initial monomial ideal after precomposing I with a generic invertible linear
map. Grauert was the first to introduce generic initial ideals to several complex variables
(see [10]), and they have been used extensively for understanding singularities of varieties.
For rational maps of balls of the form $f/g$, we homogenize $f$ and $g$, and we then look at
the ideal generated by the components. We prove that the generic initial ideal generated by
the homogenizations of f and g is invariant under spherical equivalence. More precisely, the
first main result is the following.
Theorem 1.1. Suppose f1 : Bn → BN and f2 : Bn → BN are rational proper maps that are
spherically equivalent. Let F1 and F2 be the respective homogenizations. Then
$$\operatorname{gin}\big(I(F_1)\big) = \operatorname{gin}\big(I(F_2)\big). \qquad (5)$$
In section 3, we prove this result, and compute this new invariant for a number of wellknown examples. In particular we show how this invariant distinguishes many of these maps.
In section 4, we obtain a further invariant by considering the holomorphic decomposition of
the quotient
kf (z)k2 − |g(z)|2
.
(6)
kzk2 − 1
To generalize these results to arbitrary holomorphic maps we improve a result proved in
Grundmeier-Lebl-Vivas [13] that itself is a version of Galligo’s theorem for vector subspaces
of O(U). Given a subspace X ⊂ O(U), we precompose with a generic affine map τ , and
define the generic initial monomial subspace as the space spanned by the initial monomials of
elements of X ◦ τ , denoted gin(X). Let X be the affine span of the components of a holomorphic map F : U → CN , denoted by affine-span F . See section 5 for precise definitions. We
obtain an invariant under biholomorphic transformations of U and invertible linear fractional
transformations of CN . By using affine rather than just linear maps we obtain invariants of
the map without having a distinguished point. The second main result says this is, indeed,
an invariant.
Theorem 1.2. Let M, M ′ ⊂ Cn be connected real-analytic CR submanifolds. Suppose
F : M → Q(a, b) and G : M ′ → Q(a, b) are real-analytic CR maps equivalent in the sense
that there exists a real-analytic CR isomorphism τ : M ′ → M and a linear fractional automorphism χ of Q(a, b) such that
F ◦ τ = χ ◦ G.
(7)
Then
gin(affine-span F ) = gin(affine-span G).
(8)
In sections 5 and 6, we give precise definitions and lemmas. In particular we prove the
extended version of Galligo’s theorem for our setting. In section 7, we prove the second major
theorem of this paper, and we give examples where we compute this invariant. Finally, in
section 8 we define the generic initial subspace for the analogue of the quotient (6) for CR
maps between spheres and hyperquadrics.
The authors would like to acknowledge John D’Angelo for many conversations on the
subject.
2. The projective setting
Before we work with arbitrary maps, we consider the special case of rational maps between
hyperquadrics and spheres. In this case we directly apply the standard theory of generic
initial ideals to obtain invariants.
For $z, w \in \mathbb{C}^{a+b}$ define
$$\langle z, w \rangle_b \overset{\text{def}}{=} \langle I_b z, w \rangle = -\sum_{j=1}^{b} z_j \bar{w}_j + \sum_{j=b+1}^{a+b} z_j \bar{w}_j \qquad \text{and} \qquad \|z\|_b^2 \overset{\text{def}}{=} \langle z, z \rangle_b. \qquad (9)$$
By $I_b$ we mean the $(a+b) \times (a+b)$ diagonal matrix with $b$ $(-1)$'s and $a$ $1$'s on the diagonal. We use the same definition in the homogeneous case, when the index on the variables starts with zero. In this case the subscript still refers to the number of negatives. That is, for $Z = (Z_0, \dots, Z_{a+b}) \in \mathbb{C}^{a+b+1}$ we write
$$\langle Z, W \rangle_{b+1} \overset{\text{def}}{=} \langle I_{b+1} Z, W \rangle = -\sum_{j=0}^{b} Z_j \bar{W}_j + \sum_{j=b+1}^{a+b} Z_j \bar{W}_j \qquad \text{and} \qquad \|Z\|_{b+1}^2 \overset{\text{def}}{=} \langle Z, Z \rangle_{b+1}. \qquad (10)$$
Let us homogenize $Q(a, b)$. We add a variable $Z_0$ and work with the homogeneous coordinates $[Z_0, Z_1, \dots, Z_n]$ in $\mathbb{P}^n$. Homogenizing the equation above, we obtain
$$HQ(a, b+1) \overset{\text{def}}{=} \big\{ Z \in \mathbb{P}^{a+b} : \|Z\|_{b+1}^2 = 0 \big\}. \qquad (11)$$
If we think of $\mathbb{C}^{a+b} \subset \mathbb{P}^{a+b}$, then $Q(a, b)$ is a subset of $\mathbb{P}^{a+b}$ and $HQ(a, b+1)$ is the closure of $Q(a, b)$ in $\mathbb{P}^{a+b}$.
Automorphisms of $\mathbb{P}^{a+b}$ can be represented as invertible linear maps on $\mathbb{C}^{a+b+1}$. The automorphisms of $HQ(a, b+1)$ are those linear maps $T$ that preserve the form in (11) up to a real scalar $\lambda \neq 0$, that is,
$$\|T Z\|_{b+1}^2 = \lambda \|Z\|_{b+1}^2. \qquad (12)$$
If we represent $T$ as a matrix, then the condition above is $T^* I_{b+1} T = \mu I_{b+1}$, where $\mu$ is $\pm\sqrt{|\lambda|}$. If $a \neq b+1$, then $\lambda > 0$. If $\lambda < 0$, then the automorphism swaps the sides of $HQ(a, b+1)$. Write the set of the corresponding automorphisms of $\mathbb{P}^{a+b}$ as $\operatorname{Aut}\big(HQ(a, b+1)\big)$. As usual, the point $Z \in \mathbb{P}^{a+b}$ is an equivalence class of points of $\mathbb{C}^{a+b+1}$ up to complex multiple. Thus (as long as $a \neq b+1$) the group $\operatorname{Aut}\big(HQ(a, b+1)\big)$ is the group $SU(a, b+1)/K$, where $K$ is the subgroup of matrices $\zeta I$ with $\zeta^{a+b+1} = 1$. If $a = b+1$, we need to include the matrix that switches the sides. The important fact for us is that automorphisms are represented by matrices.
Fix $(a, b)$ and $(A, B)$. A rational map $F \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ is represented by a homogeneous polynomial map of $\mathbb{C}^{a+b+1}$ to $\mathbb{C}^{A+B+1}$. The equivalence we wish to consider is the following. Two rational maps $F \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ and $G \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ are equivalent if there exist $\tau \in \operatorname{Aut}(\mathbb{P}^{a+b})$ and $\chi \in \operatorname{Aut}(\mathbb{P}^{A+B})$ such that
$$F \circ \tau = \chi \circ G. \qquad (13)$$
In the applications we will have $\tau \in \operatorname{Aut}\big(HQ(a, b+1)\big)$ and $\chi \in \operatorname{Aut}\big(HQ(A, B+1)\big)$.
If $f \colon \mathbb{B}^n \to \mathbb{B}^N$ is a rational proper map of balls, its homogenization $F \colon \mathbb{P}^n \dashrightarrow \mathbb{P}^N$ takes $HQ(n, 1)$ to $HQ(N, 1)$. The equivalence on such maps $F$ using the groups $\operatorname{Aut}\big(HQ(n, 1)\big)$ and $\operatorname{Aut}\big(HQ(N, 1)\big)$ is precisely the standard spherical equivalence on $f$; that is, $f \colon \mathbb{B}^n \to \mathbb{B}^N$ and $g \colon \mathbb{B}^n \to \mathbb{B}^N$ are spherically equivalent if there exist automorphisms $\tau$ and $\chi$ of $\mathbb{B}^n$ and $\mathbb{B}^N$, respectively, such that $f \circ \tau = \chi \circ g$.
3. Generic initial ideals and rational maps
In this section we briefly introduce the relevant definitions and results from commutative
algebra. We use the setup from Green [12], and in a later section we use the techniques
developed by the authors in [13]. To be consistent with these two papers, we use the slightly
unusual monomial ordering as used by Green.
Let $Z_0, Z_1, \dots, Z_n$ denote our variables. Given a multi-index $\alpha \in \mathbb{N}_0^{n+1}$ we write $Z^\alpha$ to mean $Z_0^{\alpha_0} Z_1^{\alpha_1} \cdots Z_n^{\alpha_n}$ as usual, and let $|\alpha| = \alpha_0 + \cdots + \alpha_n$ denote the total degree. A multiplicative monomial order is a total ordering on all monomials $Z^\alpha$ such that
(i) $Z_0 > Z_1 > \cdots > Z_n$,
(ii) $Z^\alpha > Z^\beta \Rightarrow Z^\gamma Z^\alpha > Z^\gamma Z^\beta$,
(iii) $|\alpha| < |\beta| \Rightarrow Z^\alpha > Z^\beta$.
Such orderings are not unique. Using the so-called graded reverse lexicographic ordering when $n = 2$, we obtain
$$1 > Z_0 > Z_1 > Z_2 > Z_0^2 > Z_0 Z_1 > Z_1^2 > Z_0 Z_2 > Z_1 Z_2 > Z_2^2 > \cdots. \qquad (14)$$
Fix a monomial order. Given a set $S$ of monomials, the initial monomial of $S$ is the maximal monomial in $S$ according to the ordering. For a homogeneous polynomial $P$, write
$$\operatorname{in}(P) \overset{\text{def}}{=} \text{the initial monomial of } P. \qquad (15)$$
Let $I$ be a homogeneous ideal. Define the initial monomial ideal $\operatorname{in}(I)$ as the smallest ideal such that $Z^\alpha \in \operatorname{in}(I)$ whenever $Z^\alpha = \operatorname{in}(P)$ for some $P \in I$.
Galligo's theorem implies that $\operatorname{in}(I \circ T)$ is the same for every $T$ in an open dense set of invertible linear maps. Let $T$ be such a generic linear map and define
$$\operatorname{gin}(I) \overset{\text{def}}{=} \operatorname{in}(I \circ T). \qquad (16)$$
Finally, for a rational map $F \colon \mathbb{P}^n \dashrightarrow \mathbb{P}^N$ define
$$I(F) \overset{\text{def}}{=} \text{the ideal generated by the components of } F \text{ in homogeneous coordinates.} \qquad (17)$$
Proposition 3.1. Suppose $F \colon \mathbb{P}^n \dashrightarrow \mathbb{P}^N$ and $G \colon \mathbb{P}^n \dashrightarrow \mathbb{P}^N$ are equivalent in the sense that there exist $\tau \in \operatorname{Aut}(\mathbb{P}^n)$ and $\chi \in \operatorname{Aut}(\mathbb{P}^N)$ such that
$$F \circ \tau = \chi \circ G. \qquad (18)$$
Then
$$\operatorname{gin}\big(I(F)\big) = \operatorname{gin}\big(I(G)\big). \qquad (19)$$
Proof. By the definition of the generic initial ideal, $\operatorname{gin}\big(I(F \circ \tau)\big) = \operatorname{gin}\big(I(F)\big)$, as $\tau$ is an invertible linear map. The ideal generated by linear combinations of the components of $G$ is the same as the one generated by $G$, and $\chi$ is also an invertible linear map. So, $\operatorname{gin}\big(I(\chi \circ G)\big) = \operatorname{gin}\big(I(G)\big)$.
In the important special case of rational proper maps of balls we get the first main theorem
from the introduction.
Corollary 3.2. Suppose f : Bn → BN and g : Bn → BN are rational proper maps that are
spherically equivalent. Let F and G be the respective homogenizations. Then
$$\operatorname{gin}\big(I(F)\big) = \operatorname{gin}\big(I(G)\big). \qquad (20)$$
We close this section with several examples. The following computations are done with
Macaulay2 [11] with the GenericInitialIdeal package using the standard graded reverse
lex ordering unless stated otherwise. One advantage of using gins as invariants is that they
are very simple to compute using computer algebra systems.
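The paper's computations use Macaulay2; purely as an illustration, the following SymPy sketch (ours) approximates a gin by applying a random linear change of coordinates and reading off the leading monomials of a reduced Gröbner basis in graded reverse lex order. For homogeneous ideals this within-degree order agrees with the ordering (14), and a single random change of coordinates is generic only with probability one, so the output is not certified.

```python
# Approximate generic initial ideal of a homogeneous ideal in C[Z0, Z1, Z2].
import random
import sympy as sp
from sympy.polys.orderings import grevlex

Z0, Z1, Z2 = sp.symbols('Z0 Z1 Z2')
GENS = (Z0, Z1, Z2)

def leading_monomial(expr):
    # grevlex-largest monomial (gens ordered Z0 > Z1 > Z2)
    exps = max(sp.Poly(expr, *GENS).monoms(), key=grevlex)
    return sp.prod(g**e for g, e in zip(GENS, exps))

def gin_generators(polys, seed=0):
    rng = random.Random(seed)
    T = sp.Matrix(3, 3, lambda i, j: sp.Integer(rng.randint(1, 100)))
    sub = dict(zip(GENS, T * sp.Matrix(GENS)))      # random coordinate change
    moved = [sp.expand(p.subs(sub, simultaneous=True)) for p in polys]
    G = sp.groebner(moved, *GENS, order='grevlex')  # reduced Groebner basis
    return sorted({leading_monomial(g) for g in G.exprs},
                  key=sp.default_sort_key)

# Homogenization of the Faran map (z1^3, sqrt(3) z1 z2, z2^3); the sqrt(3)
# factor is a unit and does not change the ideal, so it is dropped here.
print(gin_generators([Z0**3, Z1**3, Z0*Z1*Z2, Z2**3]))
```

If the random change of coordinates is indeed generic, the printed monomials should agree with the gin listed for the cubic map in the table below.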
Example 3.3. Generic initial ideals distinguish all the maps from $\mathbb{B}^2$ to $\mathbb{B}^3$. Faran [6] proved that all maps sufficiently smooth up to the boundary are spherically equivalent to one of the following 4 maps:
(i) $(z_1, z_2) \mapsto (z_1, z_2, 0)$,
(ii) $(z_1, z_2) \mapsto (z_1, z_1 z_2, z_2^2)$,
(iii) $(z_1, z_2) \mapsto (z_1^2, \sqrt{2}\, z_1 z_2, z_2^2)$,
(iv) $(z_1, z_2) \mapsto (z_1^3, \sqrt{3}\, z_1 z_2, z_2^3)$.
Let us list the results. For each map we give the map itself, the map homogenized by adding the $Z_0$ variable, and the generators of the generic initial ideal.
Map $(z_1, z_2, 0)$: homogenized $(Z_0, Z_1, Z_2, 0)$; gin $(Z_0, Z_1, Z_2)$.
Map $(z_1, z_1 z_2, z_2^2)$: homogenized $(Z_0^2, Z_1 Z_0, Z_1 Z_2, Z_2^2)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2^2)$.
Map $(z_1^2, \sqrt{2} z_1 z_2, z_2^2)$: homogenized $(Z_0^2, Z_1^2, \sqrt{2} Z_1 Z_2, Z_2^2)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_2^3)$.
Map $(z_1^3, \sqrt{3} z_1 z_2, z_2^3)$: homogenized $(Z_0^3, Z_1^3, \sqrt{3} Z_1 Z_2 Z_0, Z_2^3)$; gin $(Z_0^3, Z_0^2 Z_1, Z_0 Z_1^2, Z_0^2 Z_2, Z_1^4, Z_1^3 Z_2, Z_0 Z_1 Z_2^2, Z_1^2 Z_2^2, Z_0 Z_2^4, Z_1 Z_2^4, Z_2^5)$.
Notice that the generic initial ideals are all different. In particular, we distinguish the two
very similar second degree maps using the third degree part of the ideal.
Example 3.4. In [7] it was proved that the proper map of $\mathbb{B}^2$ to $\mathbb{B}^4$ given by
$$(z_1, z_2) \mapsto \left( z_1^2,\ \sqrt{2}\, z_1 z_2,\ \frac{z_2^2 (z_1 - a)}{1 - \bar{a} z_1},\ \frac{\sqrt{1 - |a|^2}\, z_2^3}{1 - \bar{a} z_1} \right) \qquad (21)$$
is spherically equivalent to a polynomial map if and only if $a = 0$. This map is the homogeneous second degree map from Faran's theorem above with the last component tensored with an automorphism of the ball $\mathbb{B}^2$ taking $a$ to $0$. If $a = 0$, then the generic initial ideal is
$$(Z_0^3, Z_0^2 Z_1, Z_0 Z_1^2, Z_1^3, Z_0^2 Z_2, Z_0 Z_1 Z_2^2, Z_1^2 Z_2^2, Z_0 Z_2^4). \qquad (22)$$
If $a = \tfrac{1}{2}$, we obtain a different generic initial ideal
$$(Z_0^3, Z_0^2 Z_1, Z_0 Z_1^2, Z_1^3, Z_0^2 Z_2, Z_0 Z_1 Z_2^2, Z_1^2 Z_2^2, Z_0 Z_2^3). \qquad (23)$$
Example 3.5. In [15] the second author classified all the maps from the two-sphere, $Q(2,0)$, to the hyperquadric $Q(2,1)$ (see also Reiter [16] for a different approach to the classification). There are 7 equivalence classes of maps:
(i) $(z_1, z_2) \mapsto (0, z_1, z_2)$,
(ii) $(z_1, z_2) \mapsto (z_2^2, z_1^2, \sqrt{2}\, z_2)$,
(iii) $(z_1, z_2) \mapsto \left( \dfrac{z_2}{z_1^2}, \dfrac{1}{z_1}, \dfrac{z_2^2}{z_1^2} \right)$,
(iv) $(z_1, z_2) \mapsto \left( \dfrac{z_1^2 - \sqrt{3}\, z_1 z_2 + z_2^2 - z_1}{z_2^2 + z_1 + \sqrt{3}\, z_2 - 1}, \dfrac{z_1^2 + \sqrt{3}\, z_1 z_2 + z_2^2 - z_1}{z_2^2 + z_1 + \sqrt{3}\, z_2 - 1}, \dfrac{z_2^2 + z_1 - \sqrt{3}\, z_2 - 1}{z_2^2 + z_1 + \sqrt{3}\, z_2 - 1} \right)$,
(v) $(z_1, z_2) \mapsto \left( \dfrac{\sqrt[4]{2}\,(z_1 z_2 + i z_1)}{z_2^2 + \sqrt{2}\, i z_2 + 1}, \dfrac{\sqrt[4]{2}\,(z_1 z_2 - i z_1)}{z_2^2 + \sqrt{2}\, i z_2 + 1}, \dfrac{z_2^2 - \sqrt{2}\, i z_2 + 1}{z_2^2 + \sqrt{2}\, i z_2 + 1} \right)$,
(vi) $(z_1, z_2) \mapsto \left( \dfrac{\sqrt{3}\,(z_2 z_1^2 - z_2)}{3 z_1^2 + 1}, \dfrac{2 z_2^3}{3 z_1^2 + 1}, \dfrac{z_1^3 + 3 z_1}{3 z_1^2 + 1} \right)$,
(vii) $(z_1, z_2) \mapsto \big( g(z_1, z_2), g(z_1, z_2), 1 \big)$ for an arbitrary holomorphic function $g$.
Note that the maps are written slightly differently from [15], as we are ordering the negative components first.
We list the generic initial ideal for each of the first 6 classes of maps. The final 7th class is already distinguished, as it is not transversal; that is, it maps an open neighborhood into $Q(2, 1)$.
Map (i): homogenized $(Z_0, 0, Z_1, Z_2)$; gin $(Z_0, Z_1, Z_2)$.
Map (ii): homogenized $(Z_0^2, Z_2^2, Z_1^2, \sqrt{2}\, Z_2 Z_0)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2^2, Z_2^3)$.
Map (iii): homogenized $(Z_1^2, Z_2 Z_0, Z_1 Z_0, Z_2^2)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2^2)$.
Map (iv): homogenized $(Z_2^2 + Z_1 Z_0 + \sqrt{3}\, Z_2 Z_0 - Z_0^2,\ Z_1^2 - \sqrt{3}\, Z_1 Z_2 + Z_2^2 - Z_1 Z_0,\ Z_1^2 + \sqrt{3}\, Z_1 Z_2 + Z_2^2 - Z_1 Z_0,\ Z_2^2 + Z_1 Z_0 - \sqrt{3}\, Z_2 Z_0 - Z_0^2)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2^2)$.
Map (v): homogenized $(Z_2^2 + \sqrt{2}\, i Z_2 Z_0 + Z_0^2,\ \sqrt[4]{2}\,(Z_1 Z_2 + i Z_1 Z_0),\ \sqrt[4]{2}\,(Z_1 Z_2 - i Z_1 Z_0),\ Z_2^2 - \sqrt{2}\, i Z_2 Z_0 + Z_0^2)$; gin $(Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2^2)$.
Map (vi): homogenized $(3 Z_1^2 Z_0 + Z_0^3,\ \sqrt{3}\,(Z_2 Z_1^2 - Z_2 Z_0^2),\ 2 Z_2^3,\ Z_1^3 + 3 Z_1 Z_0^2)$; gin $(Z_0^3, Z_0^2 Z_1, Z_0 Z_1^2, Z_0^2 Z_2, Z_1^4, Z_0 Z_1 Z_2^2, Z_1^2 Z_2^2, Z_1^3 Z_2, Z_0 Z_2^4, Z_1 Z_2^4, Z_2^5)$.
We distinguish some maps, but not all. It is to be expected that not all maps can be distinguished; after all, computing a generic initial ideal throws away much information about the map. It should be noted that we only looked at small degree examples, where the number of different possible gins is limited.
Example 3.6. The particular monomial ordering may make a difference, even in simple
situations. It might be possible to tell some ideals (and hence maps) apart using one ordering,
but not using another. Let us give a very simple example. In P2 , the ideal generated by
$(Z_0^2, Z_1 Z_2)$ has the gin
$$(Z_0^2, Z_0 Z_1, Z_1^3) \quad \text{in graded reverse lex ordering}, \qquad (Z_0^2, Z_0 Z_1, Z_0 Z_2^2, Z_1^4) \quad \text{in graded lex ordering}. \qquad (24)$$
However, the ideal generated by $(Z_0^2, Z_1^2)$ has the same gin in both orderings:
$$(Z_0^2, Z_0 Z_1, Z_1^3). \qquad (25)$$
4. Invariants of the quotient for rational maps
Let $F \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ be a rational map that takes $HQ(a, b+1)$ to $HQ(A, B+1)$, where defined. Identify $F$ with the homogeneous polynomial map taking $\mathbb{C}^{a+b+1}$ to $\mathbb{C}^{A+B+1}$. Write
$$\|F(Z)\|_{B+1}^2 = \|Z\|_{b+1}^2\, q(Z, \bar{Z}), \qquad (26)$$
where the quotient $q$ is a bihomogeneous polynomial in $Z$. Find the holomorphic decomposition (see p. 101 of [2]) of $q$ as
$$q(Z, \bar{Z}) = \|h_+(Z)\|^2 - \|h_-(Z)\|^2, \qquad (27)$$
where $h_+$ and $h_-$ are homogeneous holomorphic polynomial maps. Write
$$H(q) = \{h_+, h_-\} \qquad (28)$$
for the set of functions in the holomorphic decomposition of $q$. The holomorphic decomposition is not unique. However, the linear span of $H(q)$ is unique, and therefore the ideal generated by $H(q)$ is unique. We thus study the ideal $I\big(H(q)\big)$ and furthermore the generic initial ideal $\operatorname{gin}\big(I(H(q))\big)$.
The reason for looking at q is that it may reveal further information about the map that
is not immediately visible from F . For example the quotient q was critically used in [5] for
the degree estimates problem. The number of functions in the decomposition of q is often
larger than the number of components of F as many of these components may cancel out
once multiplied with kZk2b+1 .
Suppose F : Pa+b 99K PA+B and G : Pa+b 99K PA+B are equivalent as before; i.e. F ◦τ = χ◦
G, where τ and χ are automorphisms preserving HQ(a, b+1) and HQ(A, B +1) respectively.
Let $q_F$ and $q_G$ be the corresponding quotients of $F$ and $G$, respectively. We regard $\tau$ and $\chi$ as linear maps, which we also rescale so that $\|\tau Z\|_{b+1}^2 = \pm\|Z\|_{b+1}^2$ for $Z \in \mathbb{C}^{a+b+1}$ and $\|\chi W\|_{B+1}^2 = \pm\|W\|_{B+1}^2$ for $W \in \mathbb{C}^{A+B+1}$. The $\pm$ is there in case $a = b+1$ or $A = B+1$ and the sides are swapped; otherwise it would be a $+$.
As $F$ and $G$ are equivalent, $\|F(\tau Z)\|_{B+1}^2 = \|\chi \circ G(Z)\|_{B+1}^2$. Then
$$\|Z\|_{b+1}^2\, q_F(\tau Z, \overline{\tau Z}) = \pm \|\tau Z\|_{b+1}^2\, q_F(\tau Z, \overline{\tau Z}) = \pm \|F(\tau Z)\|_{B+1}^2 = \pm \|\chi \circ G(Z)\|_{B+1}^2 = \pm \|G(Z)\|_{B+1}^2 = \pm \|Z\|_{b+1}^2\, q_G(Z, \bar{Z}). \qquad (29)$$
In other words, $q_F(\tau Z, \overline{\tau Z}) = \pm q_G(Z, \bar{Z})$.
Proposition 4.1. Suppose $F \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ and $G \colon \mathbb{P}^{a+b} \dashrightarrow \mathbb{P}^{A+B}$ are equivalent in the sense that there exist $\tau \in \operatorname{Aut}\big(HQ(a, b+1)\big)$ and $\chi \in \operatorname{Aut}\big(HQ(A, B+1)\big)$ such that
$$F \circ \tau = \chi \circ G. \qquad (30)$$
Let $q_F$ and $q_G$ be the corresponding quotients. Then
$$\operatorname{gin}\big(I(H(q_F))\big) = \operatorname{gin}\big(I(H(q_G))\big). \qquad (31)$$
Proof. Above we proved $q_F(\tau Z, \overline{\tau Z}) = \pm q_G(Z, \bar{Z})$. As we are talking about gins, the $\tau$ is not relevant. The $\pm$ does not change the components of the holomorphic decomposition.
Again, in the important special case of rational proper maps of balls we get:
Corollary 4.2. Suppose f : Bn → BN and g : Bn → BN are rational proper maps that are
spherically equivalent. Let F and G be the respective homogenizations and let qF and qG be
the corresponding quotients. Then
$$\operatorname{gin}\big(I(H(q_F))\big) = \operatorname{gin}\big(I(H(q_G))\big). \qquad (32)$$
Example 4.3. Let us consider the Faran map $(z_1, z_2) \mapsto (z_1^3, \sqrt{3}\, z_1 z_2, z_2^3)$ that takes $\mathbb{B}^2$ to $\mathbb{B}^3$. We compute the quotient
$$\frac{|z_1^3|^2 + |\sqrt{3}\, z_1 z_2|^2 + |z_2^3|^2 - 1}{|z_1|^2 + |z_2|^2 - 1} = |z_1^2|^2 + |z_2^2|^2 - |z_1 z_2|^2 + |z_1|^2 + |z_2|^2 + 1. \qquad (33)$$
Bihomogenizing the quotient, we obtain all the degree-two monomials in $H(q)$. Therefore the gin is generated by all the degree-two monomials:
$$\operatorname{gin}\big(I(H(q))\big) = (Z_0^2, Z_0 Z_1, Z_1^2, Z_0 Z_2, Z_1 Z_2, Z_2^2). \qquad (34)$$
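The identity in (33) can be checked mechanically; the following SymPy sketch (ours, not from the paper) polarizes the conjugate variables into independent symbols $w_1, w_2$, so that $|z_1|^2$ becomes $z_1 w_1$ and the claimed quotient becomes a polynomial identity.

```python
# Verify the quotient identity (33) for the Faran cubic map.
import sympy as sp

z1, z2, w1, w2 = sp.symbols('z1 z2 w1 w2')
numer = (z1*w1)**3 + 3*(z1*w1)*(z2*w2) + (z2*w2)**3 - 1   # ||F||^2 - 1
denom = z1*w1 + z2*w2 - 1                                  # ||z||^2 - 1
q = (z1*w1)**2 + (z2*w2)**2 - (z1*w1)*(z2*w2) + z1*w1 + z2*w2 + 1
print(sp.expand(numer - denom*q) == 0)                     # prints True
```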
5. Affine spans and automorphisms of the target
To deal with nonrational maps we switch to the affine setting. We work in CN and treat
it as a subset of PN . From now on z = (z1 , . . . , zN ) will be the inhomogeneous coordinates
on PN , that is, coordinates on CN . In other words, we fix a specific embedding ι : CN → PN .
For equivalence of maps F : U → CN we consider the target automorphisms to be the linear
fractional automorphisms of Aut(PN ) and consider CN ⊂ PN as above. Take χ ∈ Aut(PN ),
F : U → CN , and G : U → CN . When we write an equation such as χ ◦ G = F , we mean
χ ◦ ι ◦ G = ι ◦ F , where ι is the embedding above. That is, we consider F to be valued in PN
using the embedding ι : CN → PN . Note that after composing with automorphisms of PN
given as linear fractional maps, it may happen that the new map has poles in U. In fact,
when looking at linear fractional automorphisms of Q(a, b), there will in general be poles
that intersect Q(a, b).
The reason for not simply working in projective space is that the components in nonrational
holomorphic maps F : U → CN cannot be homogenized; the degree is unbounded. For this
same reason, we have to work with affine span instead of just linear span.
Definition 5.1. Given a collection of holomorphic functions F , the affine span of F is
affine-span F = span F ∪ {1} .
(35)
If F is a map, then affine-span F is the affine span of the components of F . Let X ⊂ O(U)
be a vector subspace and ϕ ∈ O(U). Denote by ϕX the vector subspace obtained by
multiplying every element in X by ϕ.
Lemma 5.2. Suppose F : U ⊂ Cn → CN and G : U ⊂ Cn → CN are such that there exists
χ ∈ Aut(PN ) such that F = χ ◦ G. Then there exists ϕ ∈ affine-span G such that
affine-span G = ϕ affine-span F.
(36)
P (w)
◦G(z)
Proof. Let χ(w) = Q(w)
be the linear fractional automorphism such that F (z) = PQ◦G(z)
for
N
N
N
z ∈ U. Here P : C → C and Q : C → C are affine maps. Therefore (Q ◦ G)F = P ◦ G.
Therefore the components of (Q ◦ G)F are in the affine span of G. Also Q ◦ G is in the affine
span of G. The span of the components of (Q ◦ G)F and the function (Q ◦ G) is exactly
(Q ◦ G) affine-span F , and therefore (Q ◦ G) affine-span F ⊂ affine-span G.
Clearly dim affine-span F ≤ dim affine-span G. By inverting χ and applying the argument
in reverse we get dim affine-span G ≤ dim affine-span F . Therefore, (Q ◦ G) affine-span F =
affine-span G.
The point of the lemma is to eliminate the target automorphism by looking at the vector
space generated by F1 , . . . , FN and 1, up to multiplication by a scalar-valued function. To find
invariants of the map F we need to find invariants of this new object under biholomorphisms
of the source.
6. Generic initial vector space
In this section we define the gin of a vector subspace of holomorphic maps. The idea is to
generalize the generic initial ideals to the setting of vector spaces of holomorphic functions
using affine maps instead of linear maps. We generalize the setup from Green [12], using the
techniques developed by the authors in [13].
Let a monomial order be defined on $z_1, \dots, z_n$ in the same way as above. That is, given a multi-index $\alpha \in \mathbb{N}_0^n$ we write $z^\alpha$ to mean $z_1^{\alpha_1} z_2^{\alpha_2} \cdots z_n^{\alpha_n}$, and $|\alpha| = \alpha_1 + \cdots + \alpha_n$ is the total degree. We require the order to be multiplicative:
(i) $z_1 > z_2 > \cdots > z_n$,
(ii) $z^\alpha > z^\beta \Rightarrow z^\gamma z^\alpha > z^\gamma z^\beta$,
(iii) $|\alpha| < |\beta| \Rightarrow z^\alpha > z^\beta$.
In the following definitions we fix a point. We assume this point is 0 ∈ Cn , although any
point can be used after translation.
Definition 6.1. Fix a monomial order. By an initial monomial from a collection we mean
the largest monomial in the order. Given a Taylor series T , the initial monomial of T is
the largest monomial in T with a nonzero coefficient. Suppose U ⊂ Cn and 0 ∈ U. For
f ∈ O(U), let T0 f be the Taylor series for f at 0 and
$$\operatorname{in}(f) \overset{\text{def}}{=} \text{the initial monomial of } T_0 f. \qquad (37)$$
Given a vector subspace $X \subset \mathcal{O}(U)$, define the initial monomial subspace
$$\operatorname{in}(X) \overset{\text{def}}{=} \operatorname{span}\{z^\alpha : z^\alpha = \operatorname{in}(f) \text{ for some } f \in X\}. \qquad (38)$$
A subspace X ⊂ O(U) is a monomial subspace if X admits an algebraic basis consisting of
monomials.
If X is a monomial subspace then the basis of monomials defining it is unique. Therefore,
there is a one-to-one equivalence between monomial subspaces and subsets of the set of all
monomials.
A set $S$ of monomials is affine-Borel-fixed if whenever $z^\alpha \in S$ and $z_j \mid z^\alpha$, then for all $\ell < j$, the monomials $z^\alpha \frac{1}{z_j}$ and $z^\alpha \frac{z_\ell}{z_j}$ are in $S$. If $X \subset \mathcal{O}(U)$ is a monomial subspace, then we say
that X is affine-Borel-fixed if the basis of monomials that generates X is affine-Borel-fixed.
Take an invertible affine self map τ of Cn such that τ (0) ∈ U. For a subspace X ⊂ O(U),
define a subspace X ◦ τ ⊂ O(U ′ ) where U ′ = τ −1 (U). Notice that 0 ∈ U ′ and we may again
talk about initial monomials as above.
The initial monomial space in(X) is not always preserved under such precomposition with
affine maps. For example, let X be the span of {1, z2 }, so X = in(X). For a generic choice of
an affine map τ we have in(X ◦ τ ) = span{1, z1 }. Notice that {1, z2 } is not affine-Borel-fixed,
but in(X ◦ τ ) is.
We need to prove an analogue to Galligo’s Theorem (see Theorem 1.27 in [12]). The finite
dimensional version of the result is proved in [13].
The set $\operatorname{Aff}(n)$ of affine self maps of $\mathbb{C}^n$ can be parametrized as $M_n(\mathbb{C}) \times \mathbb{C}^n$, or $\mathbb{C}^{n^2} \times \mathbb{C}^n$.
We use the standard topology on this set.
Theorem 6.2. Suppose U ⊂ Cn and 0 ∈ U, and let X ⊂ O(U) be a vector subspace.
There is some neighbourhood N of the identity in Aff(n) and A ⊂ N of second category
(countable intersection of open dense subsets of N ), such that for τ ∈ A, the space in(X ◦ τ )
is affine-Borel-fixed. Furthermore, in(X ◦ τ ) = in(X ◦ τ ′ ) for any two affine τ and τ ′ in A.
Finally, if X is finite dimensional, A is open and dense in N .
So after a generic affine transformation the initial monomial subspace is always the same
space. It will also not be necessary to assume 0 ∈ U. We pick a generic affine τ such that
τ (0) ∈ U. Then 0 ∈ τ −1 (U), and so it makes sense to take in(X ◦ τ ) for X ⊂ O(U). Before
we prove the theorem, let us make the following definition.
Definition 6.3. Let U ⊂ Cn and let X ⊂ O(U) be a subspace. Take a generic affine τ such
that τ (0) ∈ U. Define the generic initial monomial subspace
gin(X) = in(X ◦ τ ).
(39)
Remark 6.4. While the proof of the theorem may seem formal, we are using affine transformations and precomposing formal power series with affine transformations may not make
sense.
Proof of Theorem 6.2. The finite dimensional version of the theorem is proved in [13] as
Proposition 1 (or Proposition 5.5 on arXiv).
Suppose $X$ is not finite dimensional. The space $\mathcal{O}(U)$ is a separable Fréchet space. For the subspace $X \subset \mathcal{O}(U)$, pick a countable set $\{f_k\}_{k=1}^{\infty}$ in $X$ such that if
$$X_m = \operatorname{span}\{f_1, f_2, \dots, f_m\}, \qquad (40)$$
then
$$\bigcup_m X_m \subset X \subset \overline{X} = \overline{\bigcup_m X_m}. \qquad (41)$$
Apply the finite dimensional result to $X_m$ for each $m$. The $\tau$ varies over countably many open dense sets, and the intersection of those sets is a second category set as claimed.
Given any $k$, the initial $k$ monomials in $\{f_1, f_2, \dots, f_m\}$ must stabilize as $m$ grows. That is, the first $k$ monomials are the same for all large enough $m$. We therefore obtain a sequence of initial monomials. To obtain the first $k$ monomials, simply go far enough until the initial $k$ monomials stabilize. Then $z^\alpha$ is in this sequence if and only if $z^\alpha \in \operatorname{in}\big(\bigcup_m X_m\big)$.
It is left to show that
$$\operatorname{in}(X) = \operatorname{in}\Big(\bigcup_m X_m\Big). \qquad (42)$$
Suppose this is not true. Find the first (maximal) monomial $z^\alpha$ such that $z^\alpha \in \operatorname{in}(X)$, but $z^\alpha \notin \operatorname{in}\big(\bigcup_m X_m\big)$. Suppose $z^\alpha$ is the $k$th monomial in the given ordering. Let $\pi_k$ be the projection of $\mathcal{O}(U)$ onto the space spanned by the first $k$ monomials. By Cauchy's formula, $\pi_k$ is continuous. The dimension of $\pi_k\big(\bigcup_{m=1}^{M} X_m\big)$ stabilizes as $M$ grows, and hence for large enough $M$,
$$\pi_k(X_M) = \pi_k\Big(\bigcup_{m=1}^{\infty} X_m\Big) = \pi_k\big(\overline{X}\big). \qquad (43)$$
There exists an $f \in X$ with $z^\alpha = \operatorname{in}(f)$. By the above equality there must exist a $g \in X_M$ such that $z^\alpha = \operatorname{in}(g)$. That is a contradiction.
We now generalize the gins to biholomorphic maps. Let X ⊂ O(U) be a subspace and
let f : U ′ → U be a holomorphic map. Define X ◦ f to be the vector subspace of O(U ′ )
consisting of all F ◦ f for F ∈ X.
Lemma 6.5. Let X be a vector subspace of O(U) and let f : U ′ → U be a biholomorphic
map. Then gin(X) = gin(X ◦ f ).
Proof. Suppose 0 is in both U ′ and U, and suppose f (0) = 0. Composing f with an invertible
linear map does not change the gin (we must change U appropriately). Hence, assume f ′ (0)
is the identity. Let
$$f(z) = z + E(z), \qquad (44)$$
where $E$ is of order 2 and higher. We only need to show that $\operatorname{in}(X) = \operatorname{in}(X \circ f)$.
Suppose $z^\alpha \in \operatorname{in}(X)$, that is, there exists an element of $X$ of the form
$$g(z) = z^\alpha + \sum_{\beta < \alpha} c_\beta z^\beta. \qquad (45)$$
In $g \circ f$, the terms from $E$ create monomials of degree strictly higher than $|\alpha|$. So
$$(g \circ f)(z) = z^\alpha + \sum_{\beta < \alpha} d_\beta z^\beta \qquad (46)$$
for some $d_\beta$. Therefore $z^\alpha \in \operatorname{in}(X \circ f)$.
Since $f^{-1}(z) = z + F(z)$ for some $F$ of order 2 and higher, we obtain by symmetry that if $z^\alpha \in \operatorname{in}(X \circ f)$ then $z^\alpha \in \operatorname{in}(X \circ f \circ f^{-1}) = \operatorname{in}(X)$.
Lemma 6.6. Let X be a vector subspace of O(U), 0 ∈ U, and let ϕ : U → C be a holomorphic
function that is not identically zero. Then gin(X) = gin(ϕX).
Proof. As we are talking about gins, we precompose with a generic affine map, which also
precomposes ϕ, and therefore we assume that ϕ(0) 6= 0. In fact, without loss of generality
we assume ϕ(0) = 1. Suppose z α ∈ in(X), that is, there is a g ∈ X of the form
$$g(z) = z^\alpha + \sum_{\beta < \alpha} c_\beta z^\beta. \qquad (47)$$
Then
$$\varphi(z)\, g(z) = z^\alpha + \sum_{\beta < \alpha} d_\beta z^\beta, \qquad (48)$$
for some dβ by multiplicativity of the monomial ordering. Therefore z α ∈ in(ϕX). By
symmetry, in(X) = in(ϕX).
7. Gins as invariants of maps
If X ⊂ O(U) is a subspace and U ′ ⊂ U is an open set then the restriction X|U ′ (the space
of restrictions of maps from X to U ′ ) clearly has the same gin as X. Let us extend gins to
complex manifolds. Let X ⊂ O(U) be a subspace where U is a connected complex manifold
of dimension n. As gin is invariant under biholomorphic transformations, it is well-defined
on every chart for U. If two (connected) charts overlap, then the gin must be the same
(compute the gin on the intersection). Therefore, gin(X) is well-defined.
Theorem 7.1. Let U, W be connected complex manifolds of dimension n. Suppose F : U →
CN and G : W → CN are equivalent in the sense that there exists a biholomorphic map
τ : W → U and a linear fractional automorphism χ ∈ Aut(PN ) such that
Then
F ◦ τ = χ ◦ G.
(49)
gin(affine-span F ) = gin(affine-span G).
(50)
Proof. As noted above, we take two charts of U and W , and therefore, without loss of
generality we assume that U and W are domains in Cn .
As F ◦ τ and G are equivalent via a target automorphism, Lemma 5.2 says
affine-span(F ◦ τ ) = ϕ affine-span G.
As affine-span(F ◦ τ ) = (affine-span F ) ◦ τ , Lemma 6.5 says
gin affine-span(F ◦ τ ) = gin(affine-span F ).
(51)
(52)
Via Lemma 6.6, we get
gin(ϕ affine-span G) = gin(affine-span G).
The result follows.
(53)
The same proof is used for the following CR version. In the CR version we start with realanalytic CR maps. Since real-analytic CR maps extend to holomorphic maps, we therefore
work with the extended holomorphic maps when taking gins and affine spans.
Corollary 7.2. Let M, M ′ ⊂ Cn be connected real-analytic CR submanifolds. Suppose
F : M → Q(a, b) and G : M ′ → Q(a, b) are real-analytic CR maps equivalent in the sense
that there exists a real-analytic CR isomorphism τ : M ′ → M and a linear fractional automorphism χ of Q(a, b) such that
F ◦ τ = χ ◦ G.
(54)
Then
gin(affine-span F ) = gin(affine-span G).
(55)
Again we may take M and M ′ to be submanifolds of a complex manifold of dimension n
instead of Cn .
For examples we need only look at rational maps. If the map is rational we homogenize
with Z0 as before. The generic initial monomial vector space is simply the lowest degree part
of the generic initial ideal generated by the components of the homogenized map. That is
because the lowest degree part of the ideal is the linear span of the components of the map.
We must be careful that the ordering is compatible. In particular, the ordering must respect
the “grading” above, that is in each degree if we set Z0 = 1 we must still get a multiplicative
monomial ordering. For example the standard reverse lex ordering in each degree will not
work, although the lex order will.
As an example, take the Faran map $F(z) = (z_1^3, \sqrt{3}\, z_1 z_2, z_2^3)$. We add $1$ and homogenize to get $(Z_0^3, Z_1^3, \sqrt{3}\, Z_0 Z_1 Z_2, Z_2^3)$. We compute the gin in the graded lex order. The degree 3 part of the gin is generated by
$$Z_0^3,\ Z_0^2 Z_1,\ Z_0^2 Z_2,\ Z_0 Z_1^2. \qquad (56)$$
Therefore,
$$\operatorname{gin}(\text{affine-span}\, F) = \operatorname{span}\{1, z_1, z_2, z_1^2\}. \qquad (57)$$
8. Invariants of the quotient of a hyperquadric map
When the source is either the ball $\mathbb{B}^{a+b}_b$ in the holomorphic case or $Q(a, b)$ in the CR case,
we obtain further invariants by considering the quotient of the function composed with the
defining function of the target divided by the defining function of the source as we did in
the rational case.
Recall the definition $\|z\|_b^2 = -\sum_{j=1}^{b} |z_j|^2 + \sum_{j=b+1}^{a+b} |z_j|^2$. The defining function for $Q(a, b)$ is then $\|z\|_b^2 - 1 = 0$. Suppose $F \colon U \subset Q(a, b) \to Q(A, B)$ is a real-analytic CR map. A real-analytic CR map is a holomorphic map of a neighborhood of $U$, and so we will identify $F$ with this holomorphic map. Define the quotient $q$ via
$$\|F(z)\|_B^2 - 1 = \big(\|z\|_b^2 - 1\big)\, q(z, \bar{z}). \qquad (58)$$
Near some point, find the holomorphic decomposition of $q$:
$$q(z, \bar{z}) = \|h_+(z)\|^2 - \|h_-(z)\|^2, \qquad (59)$$
where $h_+$ and $h_-$ are possibly $\ell^2$-valued holomorphic maps. Write
$$H(q) = \{h_+, h_-\} \qquad (60)$$
for the set of functions in the holomorphic decomposition of $q$. The holomorphic decomposition is not unique and depends on the point. However, we do have the following lemma.
Lemma 8.1. Suppose q : U ⊂ Cn → R is real-analytic and U is a connected open set. If
H1 (q) and H2 (q) are two holomorphic decompositions at two points of U, then
$$\operatorname{gin}\big(\operatorname{span} H_1(q)\big) = \operatorname{gin}\big(\operatorname{span} H_2(q)\big). \qquad (61)$$
Proof. By a standard connectedness argument using a path between the two points we only
need to consider the case where the domains of convergence of H1 and H2 overlap. Assume
that we work on this intersection. So we only need to show that if
$$\|h_+(z)\|^2 - \|h_-(z)\|^2 = \|h'_+(z)\|^2 - \|h'_-(z)\|^2 \qquad (62)$$
for maps $h_+, h_-, h'_+, h'_-$ converging on a fixed open set, with $\{h_+, h_-\}$ and $\{h'_+, h'_-\}$ linearly independent sets, then $\operatorname{gin}\big(\operatorname{span}\{h_+, h_-\}\big) = \operatorname{gin}\big(\operatorname{span}\{h'_+, h'_-\}\big)$. From (62) we have $\operatorname{span}\{h_+, h_-\} = \operatorname{span}\{h'_+, h'_-\}$, and so the result follows.
Thus, we do not need to specify which point and which decomposition is used.
Theorem 8.2. Suppose F : U ⊂ Q(a, b) → Q(A, B) and G : V ⊂ Q(a, b) → Q(A, B) are
real-analytic CR maps equivalent in the sense that there exists a linear fractional automorphism τ of Q(a, b) and a linear fractional automorphism χ of Q(A, B) such that
$$F \circ \tau = \chi \circ G. \qquad (63)$$
Define the quotients $q_F$ and $q_G$ via
$$\|F(z)\|_B^2 - 1 = \big(\|z\|_b^2 - 1\big)\, q_F(z, \bar{z}) \qquad \text{and} \qquad \|G(z)\|_B^2 - 1 = \big(\|z\|_b^2 - 1\big)\, q_G(z, \bar{z}). \qquad (64)$$
Then, taking holomorphic decompositions of $q_F$ and $q_G$ at any point in $U$ and $V$ respectively,
$$\operatorname{gin}\big(\operatorname{span} H(q_F)\big) = \operatorname{gin}\big(\operatorname{span} H(q_G)\big). \qquad (65)$$
Proof. Write the linear fractional maps $\tau$ and $\chi$ as
$$\tau = \frac{\tau'}{\tau''} \qquad \text{and} \qquad \chi = \frac{\chi'}{\chi''} \qquad (66)$$
for affine maps $\tau'$, $\tau''$, $\chi'$, and $\chi''$. Then
$$\|\tau'(z)\|_b^2 - |\tau''(z)|^2 = \pm\big(\|z\|_b^2 - 1\big), \qquad (67)$$
so
$$|\tau''(z)|^2 \big(\|\tau(z)\|_b^2 - 1\big) = \pm\big(\|z\|_b^2 - 1\big). \qquad (68)$$
The $\pm$ is there again in case $\tau$ switches the sides of $Q(a, b)$. We have the same equations for $w$ and $\chi$ with $A, B$ instead of $a, b$.
We have $\|F(\tau(z))\|_B^2 - 1 = \|\chi \circ G(z)\|_B^2 - 1$. Then
$$\big(\|z\|_b^2 - 1\big)\, q_F\big(\tau(z), \overline{\tau(z)}\big) = \pm |\tau''(z)|^2 \big(\|\tau(z)\|_b^2 - 1\big)\, q_F\big(\tau(z), \overline{\tau(z)}\big) = \pm |\tau''(z)|^2 \big(\|F(\tau(z))\|_B^2 - 1\big) = \pm |\tau''(z)|^2 \big(\|\chi \circ G(z)\|_B^2 - 1\big) = \pm \left| \frac{\tau''(z)}{\chi''(G(z))} \right|^2 \big(\|G(z)\|_B^2 - 1\big) = \pm \left| \frac{\tau''(z)}{\chi''(G(z))} \right|^2 \big(\|z\|_b^2 - 1\big)\, q_G(z, \bar{z}). \qquad (69)$$
In other words,
$$q_F\big(\tau(z), \overline{\tau(z)}\big) = \pm \left| \frac{\tau''(z)}{\chi''(G(z))} \right|^2 q_G(z, \bar{z}). \qquad (70)$$
Multiplying by the absolute value squared of a holomorphic function multiplies the elements of the holomorphic decomposition by that function and so does not change the gin, by Lemma 6.6. Similarly, composing with $\tau$ does not change the gin either, by Lemma 6.5.
of the holomorphic decomposition by that function and so does not change the gin by
Lemma 6.6. Similarly composing with τ does not change the gin either by Lemma 6.5.
There is an equivalent theorem for holomorphic maps, when the automorhphism maps on
the source are simply the linear fractional automorphisms of Ba+b
b , that is, automorphisms of
a+b
a+b
a+b
P
preserving the closure of Bb in P . We lose no generality if we also allow swapping
sides (when a = b + 1) and hence we consider all linear fractional automorphisms of Q(a, b),
that is, self maps of Pa+b preserving HQ(a, b + 1).
Theorem 8.3. Suppose F : U ⊂ Ca+b → CA+B and G : V ⊂ Ca+b → CA+B are holomorphic
maps equivalent in the sense that there exists a linear fractional automorphism τ of Q(a, b),
where τ (V ) = U, and a linear fractional automorphism χ of Q(A, B) such that
$$F \circ \tau = \chi \circ G. \qquad (71)$$
Define the quotients $q_F$ and $q_G$ as before:
$$\|F(z)\|_B^2 - 1 = \big(\|z\|_b^2 - 1\big)\, q_F(z, \bar{z}) \qquad \text{and} \qquad \|G(z)\|_B^2 - 1 = \big(\|z\|_b^2 - 1\big)\, q_G(z, \bar{z}). \qquad (72)$$
Then, taking holomorphic decompositions of $q_F$ and $q_G$ at any point in $U$ and $V$ respectively,
$$\operatorname{gin}\big(\operatorname{span} H(q_F)\big) = \operatorname{gin}\big(\operatorname{span} H(q_G)\big). \qquad (73)$$
9. Gins of real-analytic functions
We end the article with a remark that Lemma 8.1 may be of independent interest, not
only for mapping questions. Let us consider the gin of a holomorphic decomposition of real-analytic functions. That is, for a domain $U \subset \mathbb{C}^n$, consider two real-analytic functions $r_1 \colon U \to \mathbb{R}$ and $r_2 \colon U \to \mathbb{R}$. We say the functions are biholomorphically
equivalent if there exists a biholomorphism F : U → U, such that r1 = r2 ◦ F .
If we take H(rj ) to be the holomorphic decomposition of rj at any point of U, then
Lemma 8.1 says that if r1 and r2 are equivalent as above, then
$$\operatorname{gin}\big(\text{affine-span}\, H(r_1)\big) = \operatorname{gin}\big(\text{affine-span}\, H(r_2)\big). \qquad (74)$$
The gin as defined above is not a pointwise invariant. That is, a real-analytic function
defined on a connected open set has the same gin near every point. By the same argument as
before we can also define the gin of a real-analytic function on a connected complex manifold
U by noting that the gin is already well-defined if we take any connected chart.
References
[1] H. Alexander, Proper holomorphic mappings in C n , Indiana Univ. Math. J. 26 (1977), no. 1, 137–146.
MR0422699
[2] John P. D’Angelo, Several complex variables and the geometry of real hypersurfaces, Studies in Advanced
Mathematics, CRC Press, Boca Raton, FL, 1993. MR1224231
[3]
, Polynomial proper maps between balls, Duke Math. J. 57 (1988), no. 1, 211–219, DOI
10.1215/S0012-7094-88-05710-9. MR952233
, Hermitian analysis, Cornerstones, Birkhäuser/Springer, New York, 2013. From Fourier series
[4]
to Cauchy-Riemann geometry. MR3134931
[5] John P. D’Angelo, Šimon Kos, and Emily Riehl, A sharp bound for the degree of proper monomial mappings between balls, J. Geom. Anal. 13 (2003), no. 4, 581–593, DOI 10.1007/BF02921879. MR2005154
[6] James J. Faran, Maps from the two-ball to the three-ball, Invent. Math. 68 (1982), no. 3, 441–475, DOI
10.1007/BF01389412. MR669425
[7] James Faran, Xiaojun Huang, Shanyu Ji, and Yuan Zhang, Polynomial and rational maps between balls,
Pure Appl. Math. Q. 6 (2010), no. 3, 829–842, DOI 10.4310/PAMQ.2010.v6.n3.a10. MR2677315
[8] Franc Forstnerič, Embedding strictly pseudoconvex domains into balls, Trans. Amer. Math. Soc. 295
(1986), no. 1, 347–368, DOI 10.2307/2000160. MR831203
[9]
, Extending proper holomorphic mappings of positive codimension, Invent. Math. 95 (1989), no. 1,
31–61. MR969413
[10] Hans Grauert, Über die Deformation isolierter Singularitäten analytischer Mengen, Invent. Math. 15
(1972), 171–198 (German). MR0293127
[11] Daniel R. Grayson and Michael E. Stillman, Macaulay2, a software system for research in algebraic
geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
[12] Mark Green, Generic initial ideals, Six lectures on commutative algebra (Bellaterra, 1996), Progr. Math.,
vol. 166, Birkhäuser, Basel, 1998, pp. 119–186. MR1648665
[13] Dusty Grundmeier, Jiřı́ Lebl, and Liz Vivas, Bounding the rank of Hermitian forms and rigidity for CR
mappings of hyperquadrics, Math. Ann. 358 (2014), no. 3-4, 1059–1089, DOI 10.1007/s00208-013-0989-z.
MR3175150
[14] Xiaojun Huang, Shanyu Ji, and Wanke Yin, On the third gap for proper holomorphic maps between
balls, Math. Ann. 358 (2014), no. 1-2, 115–142, DOI 10.1007/s00208-013-0952-z. MR3157993
[15] Jiřı́ Lebl, Normal forms, Hermitian operators, and CR maps of spheres and hyperquadrics, Michigan
Math. J. 60 (2011), no. 3, 603–628, DOI 10.1307/mmj/1320763051. MR2861091
[16] Michael Reiter, Classification of Holomorphic Mappings of Hyperquadrics from C2 to C3 , to appear in
J. Geom. Anal. arXiv:1409.5968.
[17] S. M. Webster, Some birational invariants for algebraic real hypersurfaces, Duke Math. J. 45 (1978),
no. 1, 39–46. MR0481086
[18] Hassler Whitney, Complex analytic varieties, Addison-Wesley Publishing Co., Reading, Mass.-LondonDon Mills, Ont., 1972. MR0387634
Department of Mathematics, Ball State University, Muncie, IN 47306, USA
Current address: Department of Mathematics, Harvard University, Cambridge MA 02138, USA
E-mail address: [email protected]
Department of Mathematics, Oklahoma State University, Stillwater, OK 74078, USA
E-mail address: [email protected]
| 0 |
Codes Correcting Two Deletions
Ryan Gabrys∗ and Frederic Sala†
∗ Spawar Systems Center
[email protected]
† University of California, Los Angeles
[email protected]
Abstract
In this work, we investigate the problem of constructing codes capable of correcting two deletions. In particular, we construct a code that requires approximately 8 log2 n + O(log2 log2 n) bits of redundancy, where n denotes the length of the code. To the best of the authors' knowledge, this represents the best known construction in that it requires the lowest number of redundant bits for a code correcting two deletions.
I. INTRODUCTION
This paper is concerned with deletion-correcting codes. The problem of creating error-correcting codes that correct one or more
deletions (or insertions) has a long history, dating back to the early 1960’s [12]. The seminal work in this area is by Levenshtein,
who showed in [9] that the Varshamov-Tenengolts asymmetric error-correcting code (introduced in [13]) also corrects a single
deletion or insertion. For single deletion-correcting codes, Levenshtein introduced a redundancy lower bound of log2 (n) − O(1)
bits, demonstrating that the VT code, which requires at most log2 (n) redundancy bits, is nearly optimal.
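To make the VT construction concrete, here is a small Python sketch (ours, for illustration only; the block length n = 8 and the residue a = 0 are arbitrary choices) that enumerates the VT codebook {x in {0,1}^n : sum_i i*x_i ≡ a (mod n+1)} and verifies by brute force that the single-deletion balls of distinct codewords are disjoint, i.e., that the code corrects one deletion.

import math
from itertools import product

def vt_codebook(n, a=0):
    # All binary words x of length n with sum(i * x_i) congruent to a mod (n + 1).
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

def single_deletion_ball(x):
    # All words obtained from x by deleting exactly one symbol.
    return {x[:i] + x[i + 1:] for i in range(len(x))}

n = 8
code = vt_codebook(n, a=0)
# A code corrects a single deletion iff the deletion balls of distinct codewords are disjoint.
assert all(single_deletion_ball(c1).isdisjoint(single_deletion_ball(c2))
           for i, c1 in enumerate(code) for c2 in code[i + 1:])
print(f"{len(code)} codewords of length {n}; redundancy {n - math.log2(len(code)):.2f} bits")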
The elegance of the VT construction has inspired many attempts to extend this code to correct multiple deletions. Such an
approach is found in [7] where the authors introduce a number-theoretic construction that was later shown in [1] to be capable
of correcting two or more deletions. Unfortunately, even for the case of just two deletions, the construction from [7] has a rate
which does not converge to one. Other constructions for multiple insertion/deletion-correcting codes such as those found in [10],
[11] rely on (d, k)-constrained codes, and consequently, these codes also have rates less than one.
To the best of the authors’ knowledge, the best known construction for two deletions (in terms of the minimum redundancy
bits) can be found in the recent work by Brakensiek et al. [2]. The authors show that it is possible to construct a t deletion-correcting code with ct · log2 n bits of redundancy where ct = O(t^2 log2 t). The construction from [2] is for general t and does not report
any improved results for the case where t is small. However, it can be shown that their methods result in a construction requiring
at least 128 log2 n bits of redundancy for the case of t = 2.
The best known lower bound for the redundancy of a double deletion-correcting code is 2 log2 n − O(1) ( [8]) bits; thus, there
remains a significant gap between the upper and lower bounds for t deletion-correcting codes even for the case where t = 2. This
motivates the effort to search for more efficient codes.
We note that using a counting argument such as the one found in [9], one can show that there exists a t deletion-correcting
code with redundancy at most 2t log2 n − O(t). However, these codes require the use of a computer search to form the codebooks
along with a lookup table for encoding/decoding. Such codes do not scale as n becomes large and there is no efficient search
mechanism that scales sub-exponentially with n.
The contribution of the present work is a double deletion-correcting code construction that requires 8 log2 n + O(log2 log2 n)
bits of redundancy. To the best of the authors’ knowledge this represents the best construction for a double deletion-correcting
code in terms of the redundancy; it is within a factor of four of the optimal redundancy.
Fig. 1. Comparison of new codes with codes from [2] (bits of redundancy versus code length).
Fig. 2. Comparison of rate of new codes with codes from [2] (rate versus code length).
In Figures 1 and 2, we compare
our construction and the construction from [2]. Although both constructions approach rate 1 for long enough code lengths, our
construction approaches rate one much faster than the construction from [2]. For example, for n = 1024, our construction has
rate approximately 0.86 whereas the construction from [2] has rate at most 0.62.
The paper is organized as follows. In Section II, we provide the main ideas behind our construction and provide an outline of
our approach. Section III introduces our first construction. Afterwards, an improved construction is described in Section IV. We
discuss the issue of run-length-limited constrained codes in Section V and conclude with Section VI.
II. M AIN I DEAS AND O UTLINE
The idea behind our approach is to isolate the deletions of zeros and ones into separate sequences of information, and to then
use error correction codes on the substrings appearing in the string. We also use a series of constraints that allow us to detect
what types of deletions occurred, and consequently we are able to reduce the number of codes in the Hamming metric which are
used as part of the construction. As a result, we present a construction which achieves the advertised redundancy.
Let C(n) ⊆ F_2^n denote our codebook of length n that is capable of correcting two deletions. Suppose y ∈ F_2^(n−2) is received, where y is the result of two deletions occurring to some vector x ∈ C(n), denoted y ∈ D2(x). Then, we have the following 6 scenarios:
1) Scenario 1: Two zeros were deleted from x.
2) Scenario 2: Two ones were deleted from x.
3) Scenario 3: A zero was deleted from a run of length > 4 and a one was deleted from a run of length > 4 in x.
4) Scenario 4: A symbol b ∈ F2 was deleted from a run of length > 4 and another symbol b̄ was deleted from a run of length
` ∈ {1, 2, 3}. Furthermore, if ` = 1, then b̄ is adjacent to runs of lengths `1 , `2 where `1 + `2 < 4.
5) Scenario 5: A symbol b ∈ F2 was deleted from a run of length > 4, and a symbol b̄ is deleted from a run of length 1, where
b̄ is adjacent to runs of lengths `1 , `2 where `1 + `2 = 4.
6) Scenario 6: Scenarios 1)-5) do not occur.
The first 5 of these scenarios are shown in Figure 3.
Our approach is to use a series of detection codes to attempt to delineate between the 6 scenarios enumerated above. In addition,
and similar to [2], we make use of substrings that are not affected by the deletions. The main difference between our approach
here and the one used by [2] is that we use a series of detection codes that allow us to place error-correcting codes on fewer substrings occurring in our codewords. We provide an example which illustrates the basic ideas behind the approach in [2] and highlights some subsequent notation used throughout the paper.
Example 1. Suppose x = (0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0) is transmitted. Let L(x, 00) be an integer which denotes the maximum
number of bits between any two occurrences of the substring 00 in x. Notice that L(x, 00) = 4. For shorthand, let L(x, 00) = s
and let fs : {∅} ∪ F_2^1 ∪ · · · ∪ F_2^s → [2^(s+1) − 1] be an injective mapping where ∅ denotes the null string. Then,
Ffs (x, 00) = Ff4 (x, 00) = (f4 (0, 1, 1), f4 (∅), f4 (1, 0, 1, 1), f4 (∅)).
Thus, if there are k occurrences of the substring 00, then the sequence Ff4 (x, 00) has length k + 1.
Suppose y = (1, 1, 0, 0, 0, 1, 0, 1, 0, 0) is received where y is the result of two deletions occurring to x. Notice that
Ff4 (y, 00) = (f4 (1, 1), f4 (∅), f4 (1, 0, 1), f4 (∅)).
Fig. 3. The first 5 scenarios, each illustrated on the sequence 0,1,0,0,0,1,1,1,1,0,1,0,1,0,0.
In particular, notice that dH (Ff4 (x, 00), Ff4 (y, 00)) = 2, where dH denotes the Hamming distance. Thus, if Ff4 (x, 00) belongs
to a double error-correcting code, it is possible to recover Ff4 (x, 00) from y and in particular, it is possible to then recover x.
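As a concrete companion to Example 1, the following Python sketch (ours, not code from the paper; the helper names are invented) lists the possibly overlapping occurrences of a substring w, splits a word into the k + 1 segments lying before, between, and after those occurrences, and evaluates one natural reading of L(x, w) as the maximum segment length. Running it on the example reproduces the segment sequences of x and y and the Hamming distance of 2 between them.

def occurrences(x, w):
    # Start indices of all (possibly overlapping) occurrences of w in x.
    m = len(w)
    return [i for i in range(len(x) - m + 1) if x[i:i + m] == w]

def segments(x, w):
    # The k+1 pieces of x lying before, between, and after the k occurrences of w.
    occ = occurrences(x, w)
    m = len(w)
    starts = [0] + [i + m for i in occ]
    ends = occ + [len(x)]
    return [x[a:b] for a, b in zip(starts, ends)]

def L(x, w):
    # One reading of L(x, w): the maximum segment length.
    return max(len(s) for s in segments(x, w))

x = (0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0)
y = (1, 1, 0, 0, 0, 1, 0, 1, 0, 0)
w = (0, 0)
print(segments(x, w))   # [(0, 1, 1), (), (1, 0, 1, 1), ()]
print(segments(y, w))   # [(1, 1), (), (1, 0, 1), ()]
print(L(x, w))          # 4
print(sum(a != b for a, b in zip(segments(x, w), segments(y, w))))   # 2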
Notice in the previous example that all occurrences of the substring 00 were preserved (we will more rigorously define this
notation shortly). It is not too hard to see that if (in the previous example) a deletion occurred where a substring 00 is deleted
in x and therefore does not appear in y, then we can no longer claim dH (Ff4 (x, 00), Ff4 (y, 00)) = 2.
In order to overcome this issue, the approach taken in [2] was to require that the sequence Ff4(x, w) belongs to a double error-correcting code for many different choices of w. In particular, the approach in [2] is to enforce this constraint on Ffs(x, w) for every binary string w of length m where, according to Theorem 5 from [2], we need 2^m > 2t · (2m − 1). Since any two-error correcting code of length n requires approximately 2 log2 n bits of redundancy and t = 2 for our setup, this implies that using the construction from [2], we need m ≥ 6 and so the overall construction requires approximately 2^6 · (2 log2 n) = 128 log2 n bits of redundancy.
The approach taken here is to use a series of detection codes along with different mappings and more carefully choose which
substrings to place error-correcting codes on. Consequently, we show it is possible to construct a code with fewer redundant bits
than the approach outlined in [2] for the case of two deletions.
From the above example, if we use the constraints Ffs(x, w1), Ffs(x, w2), Ffs(x, w3), and Ffs(x, w4) (for some appropriately chosen substrings w1, . . . , w4), then this would require the use of a series of double error-correcting codes defined over an alphabet of size approximately 2^s. To reduce the size of this alphabet, we make use of the following lemma.
Lemma 1. (c.f., [2]) There is a hash function hs : {0, 1}^s → {0, 1}^v for v ≤ 4 log2 s + O(1), such that for all x ∈ {0, 1}^s, given any y ∈ D2(x) and hs(x), the string x can be recovered.
Codes constructed according to Lemma 1 can be found using brute force attempts such as finding an independent set on a graph with 2^n vertices, for which no polynomial-time algorithms (with respect to n) exist. Note, however, that if s = O(log2 n), then using an algorithm for determining a maximal independent set on a graph which is polynomial with respect to the number of vertices in the graph (such as [6] for instance) results in a code of length s that can be constructed in polynomial time with respect to n.
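As a toy illustration of this brute-force route (ours; it is not the hash of Lemma 1, and s = 8 is chosen only so that the search finishes instantly), the sketch below greedily selects length-s strings whose two-deletion balls are pairwise disjoint, i.e., it builds a greedy independent set in the confusability graph on 2^s vertices.

from itertools import product

def two_deletion_ball(x):
    # All strings obtained from x by deleting exactly two symbols.
    n = len(x)
    return {tuple(b for k, b in enumerate(x) if k not in (i, j))
            for i in range(n) for j in range(i + 1, n)}

def greedy_two_deletion_code(s):
    # Keep x whenever its 2-deletion ball avoids the balls of all previously kept words.
    code, used = [], set()
    for x in product((0, 1), repeat=s):
        ball = two_deletion_ball(x)
        if used.isdisjoint(ball):
            code.append(x)
            used |= ball
    return code

code = greedy_two_deletion_code(8)
print(len(code), "codewords of length 8 that can correct two deletions")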
At first, we will make use of the sequences Fhs(x, 0000), Fhs(x, 1111), Fhs(x, 11011), and Fhs(x, 110011) where x ∈ C(n). In particular, we will require that each of these sequences belongs to a code with minimum Hamming distance 5 over an alphabet of size approximately s. Assuming s = O(log2 n), then these constraints together require approximately 4 · (7/3) log2(n) bits of redundancy if we use the non-binary codes from Dumer [5]. Afterwards, we alter one of the maps used in conjunction with our Hamming codes and show it is possible to construct a code with 8 log2 n + O(1) bits of redundancy. We now turn to some additional notation before presenting the construction.
For a vector x ∈ F_2^n, let D(i1, i2, x) ∈ F_2^(n−2) be the result of deleting the symbols in x in positions i1 and i2 where 1 ≤ i1 < i2 ≤ n. For example if x = (0, 1, 0, 1, 0, 0), then D(2, 4, x) = (0, 0, 0, 0). Using this notation, we have D2(x) = {y : ∃i1, ∃i2, y = D(i1, i2, x)}.
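This notation can be transcribed directly into Python (our illustration; positions are 1-based to match the text):

def D(i1, i2, x):
    # Delete the symbols of x at the 1-based positions i1 < i2.
    return tuple(b for k, b in enumerate(x, start=1) if k not in (i1, i2))

def D2(x):
    # The two-deletion ball of x: every y obtainable by deleting two symbols of x.
    n = len(x)
    return {D(i1, i2, x) for i1 in range(1, n) for i2 in range(i1 + 1, n + 1)}

x = (0, 1, 0, 1, 0, 0)
print(D(2, 4, x))   # (0, 0, 0, 0), matching the example above
print(len(D2(x)))   # number of distinct length-4 subsequences of x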
Let w ∈ {0, 1}^m. Suppose y ∈ D2(x). Then we say that the substring w ∈ {0, 1}^m is preserved from x to y if, for every occurrence of w, there exist indices i1 and i2 such that D(i1, i2, x) = y and the following hold:
1) w ∉ {(x_{i1}, x_{i1+1}, . . . , x_{i1+m−1}), (x_{i1−1}, x_{i1}, . . . , x_{i1+m−2}), . . . , (x_{i1−m+1}, x_{i1−m+2}, . . . , x_{i1})},
2) w ∉ {(x_{i2}, x_{i2+1}, . . . , x_{i2+m−1}), (x_{i2−1}, x_{i2}, . . . , x_{i2+m−2}), . . . , (x_{i2−m+1}, x_{i2−m+2}, . . . , x_{i2})},
3) w ∉ {(y_{i1−1}, y_{i1}, . . . , y_{i1+m−2}), (y_{i1−2}, y_{i1−1}, . . . , y_{i1+m−3}), . . . , (y_{i1−m+1}, y_{i1−m+2}, . . . , y_{i1})},
4) w ∉ {(y_{i2−2}, y_{i2−1}, . . . , y_{i2+m−3}), (y_{i2−3}, y_{i2−2}, . . . , y_{i2+m−4}), . . . , (y_{i2−m}, y_{i2−m+1}, . . . , y_{i2−1})}.
In words, the first two statements above require that any substring w is not deleted from x and the last two statements require
that no new appearances of w are in y that were not also in x. If w is not preserved from x to y, and the first two conditions
above are violated, then we say that w was destroyed from x to y. If w is not preserved, and the last two conditions above are
violated, then we say that w was created from x to y. Notice that in order for w to be preserved from x to y, conditions 1)-4) have to hold for at least one pair i1, i2 such that we can write y = D(i1, i2, x), since the choice of i1, i2 may not be unique. The following example shows this.
Example 2. Suppose x = (0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0) and y = (0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0). Then we say that
(1, 0, 1) is preserved from x to y since there are three occurrences of 101 in x and D(6, 15, x) = D(5, 15, x) = y. Notice that
(1, 1, 1) is not preserved from x to y and in particular (1, 1, 1) is destroyed.
For a vector x ∈ F_2^n, let N0(x) denote the number of zeros in x. Similarly, let N1(x) be the number of ones that appear in x. Furthermore, let N0000(x), N1111(x), N11011(x) be the number of appearances of the substrings 0000, 1111, and 11011, respectively. We illustrate these notations in the following example.
Example 3. Let x = (0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0). Then, N0(x) = 11, N1(x) = 10, N0000(x) = 2, N1111(x) = 1, and N11011(x) = 1. Notice that two occurrences of the substring 0000 overlap.
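The quantities in Example 3 are easy to check mechanically (our sketch; pattern occurrences are counted with overlaps, as the example indicates):

def count_symbol(x, b):
    # N_b(x): number of positions of x equal to b.
    return sum(1 for v in x if v == b)

def count_pattern(x, w):
    # Number of (possibly overlapping) occurrences of the pattern w in x.
    m = len(w)
    return sum(1 for i in range(len(x) - m + 1) if x[i:i + m] == w)

x = (0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0)
print(count_symbol(x, 0), count_symbol(x, 1))   # 11 10
print(count_pattern(x, (0, 0, 0, 0)))           # 2 (two overlapping occurrences of 0000)
print(count_pattern(x, (1, 1, 1, 1)))           # 1
print(count_pattern(x, (1, 1, 0, 1, 1)))        # 1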
III. CONSTRUCTION - FIRST ATTEMPT
We now turn to describing our code. Let CT(n, s) denote the following set
CT(n, s) = {x ∈ F_2^n : L(x, 0000) ≤ s, L(x, 1111) ≤ s, L(x, 110011) ≤ s, L(x, 11011) ≤ s}.
As we will see shortly, our main construction will be a sub-code of CT(n, s).
Let c ∈ F_7^5. Suppose q is the smallest odd prime greater than the size of the image of the map hs. Suppose N is the smallest positive integer such that q^(N−1) > n. Let a0, a1, a110011, a11011 ∈ F_q^r where r ≤ 2N + ⌈(N−1)/3⌉. Our construction is the following:
C(n, a0, a1, a110011, a11011, c, s) = { x ∈ CT(n, s) :
N0(x) mod 7 = c1, N1(x) mod 7 = c2,
N1111(x) mod 7 = c3, N0000(x) mod 7 = c4,
N11011(x) mod 7 = c5,
Fhs(x, 0000) ∈ C2(n, q, a0), Fhs(x, 1111) ∈ C2(n, q, a1),
Fhs(x, 110011) ∈ C2(n, q, a110011),
Fhs(x, 11011) ∈ C2(n, q, a11011) },
where C2(n, q, a) is a code over F_q of length n. If any of the sequences above that are required to be in codes of length n have lengths M < n, then we simply assume the last n − M components of the sequences are equal to zero.
Let H be a parity check matrix for a double error-correcting code (minimum Hamming distance 5) from [5] so that H ∈ F_q^(r × q^(N−1)). We define the double error-correcting code C2(n, q, a) so that
C2(n, q, a) = { x ∈ F_q^(q^(N−1)) : H · x = a }.
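The paper instantiates C2(n, q, a) via parity-check matrices of Dumer's non-binary double error-correcting codes [5]. As a hedged stand-in for experimentation (ours, and with somewhat larger redundancy than Dumer's construction), one can use a Reed-Solomon-style Vandermonde parity-check matrix with four rows over a prime field, which also guarantees minimum Hamming distance 5 for lengths up to q − 1, together with the coset condition H · x = a:

def vandermonde_parity_check(q, length, rows=4):
    # Row r holds the r-th powers of the distinct evaluation points 1..length (mod q).
    # Any `rows` columns are linearly independent, so the minimum distance is at least rows + 1.
    assert length <= q - 1
    return [[pow(j, r, q) for j in range(1, length + 1)] for r in range(rows)]

def syndrome(H, x, q):
    # Compute H . x over F_q.
    return tuple(sum(h * v for h, v in zip(row, x)) % q for row in H)

q, length = 13, 12
H = vandermonde_parity_check(q, length)
x = (1,) + (0,) * (length - 1)
a = syndrome(H, x, q)          # choose a coset by picking any representative
assert syndrome(H, x, q) == a  # membership test for C2(length, q, a)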
We now show that given any y ∈ D2 (x), it is possible to recover x ∈ C(n, a0 , a1 , a110011 , a11011 , c, s). For shorthand, we
refer to C(n, a0 , a1 , a110011 , a11011 , c, s) as C(n). For the remainder of the section, we always assume x is a codeword from
C(n) and y ∈ D2 (x).
The following claims will be used throughout the section.
Claim 1. Suppose a zero is deleted from a run of length one in x and the deletion causes
1) 11011 to be created/destroyed,
2) 1111 to be created,
then the substring 110011 is preserved from x to y.
As an example, the previous claim will be concerned with the following type of deletion:
(×, ×, ×, 1, 0, 1, 1, 0, 1, 1, ×, ×, ×, ×, ×),
where '×' indicates a symbol which is either a zero or a one and a struck-out symbol represents a deletion (in this case of a symbol with value 0).
Proof: The deletion of a zero from a run of length 1 can destroy a 11011 substring only if the middle zero is deleted. In
this case, the 110011 substring is preserved. The deletion of a zero from a run of length 1 can create a 11011 substring only if
either the first zero is deleted from the substring 11101011 in x or if the second zero is deleted from the substring 11010111 in
x. In either case, the substring 110011 is preserved from x to y.
Claim 2. Suppose a symbol with value b ∈ F2 is deleted from a run of length > 4 and a symbol with value b is deleted from a run of length 1 where
N1111(x) ≠ N1111(y), N0000(x) ≠ N0000(y).
Under this setup, if b = 1, the substring 110011 is preserved. Otherwise, if b = 0 and 11011 is not preserved from x to y, then 110011 is preserved.
For example, the previous claim will be concerned with the following setups. If b = 0, then one instance of the setup from this claim is
(×, ×, 0, 0, 0, 0, ×, ×, ×, ×, ×, 1, 0, 1, ×, ×, ×)
and if b = 1, then another example is
(×, ×, 1, 1, 1, 1, ×, ×, ×, ×, ×, 0, 1, 0, ×, ×, ×).
Proof: Suppose that a symbol with value b = 0 is deleted from a run of length > 4 and another symbol with value 0 is
deleted from a run of length 1. To begin, notice that the zero which was deleted from the run of length > 4 cannot create/destroy
the substrings 11011, 110011, 1111 from x to y. Then, according to Claim 1, if the deletion of a zero from a run of length 1 a)
creates/destroys the substring 11011 and b) creates the substring 1111 from x to y, then the substring 110011 is preserved from
x to y.
Suppose b = 1. Under this setup, the substring 110011 is preserved, and so we can recover x from the constraint Fhs (x, 110011) ∈
C2 (n, q, a110011 ). To see this, we first note that an occurrence of the substring 110011 is destroyed only if a one is deleted from
a run of length 2 which is not possible under this setup. In addition, an occurrence of the substring 110011 cannot be created
by deleting a one from a run of length 1 (since this would require that the one is also adjacent to runs of lengths `1 , `2 with
`1 + `2 > 4 since a 0000 substring is created from x to y) or by deleting a one from a run of length 4. Therefore, 110011 is
preserved from x to y when b = 1.
We begin with the cases where either y is the result of deleting two zeros or two ones from x. The first two lemmas handle
Scenarios 1) and 2) from the previous section.
Lemma 2. Suppose N1 (x) − N1 (y) mod 7 = 2. Then, x can be recovered from y.
For example, the previous claim will be concerned with the following setup:
(×, ×, ×, ×, 1, ×, ×, ×, ×, ×, ×, ×, ×, 1, ×, ×, ×).
Proof: Since N1 (x) − N1 (y) mod 7 = 2, two ones were deleted from x to obtain y. If N0 (x) − N0 (y) ≡ 0 mod 7, then
0000 is preserved since the deletion of a 1 can create at most 3 0000s and so the deletion of two ones can create at most 6 0000s.
Thus, we conclude that 0000 is preserved from x to y. Since two ones were deleted, clearly no 0000 substrings were destroyed.
Therefore, 0000 is preserved from x to y, and so we can recover x from y using the constraint Fhs (x, 0000) ∈ C2 (n, q, a0 ).
If N1 (x) − N1 (y) ≡ 0 mod 7, then 1111 is preserved, and so we can recover x from y using the constraint Fhs (x, 1111) ∈
C2 (n, q, a1 ).
Now we assume that both N1 (x) − N1 (y) 6≡ 0 mod 7 and N0 (x) − N0 (y) 6≡ 0 mod 7. Note that this is only possible if a
one is deleted from a run of length > 4 and a one is deleted from a run of length 1. According to Claim 2, we can determine x
from Fhs (x, 110011) ∈ C2 (n, q, a110011 ).
Next we turn to the case where two zeros have been deleted.
Lemma 3. Suppose N0 (x) − N0 (y) mod 7 = 2. Then, x can be recovered from y.
For example, we will be concerned with the following setup:
(×, ×, ×, ×, 0, ×, ×, ×, ×, ×, ×, ×, ×, 0, ×, ×, ×).
Proof: Since N0 (x)−N0 (y) mod 7 = 2, two zeros were deleted from x to obtain y. If N1 (x)−N1 (y) ≡ 0 mod 7, then we
can recover x from y using the constraint Fhs (x, 1111) ∈ C2 (n, q, a1 ) using the same logic as the previous lemma. In addition,
if N0 (x) − N0 (y) ≡ 0 mod 7, then 0000 is preserved, and so we can recover x from y using the constraint Fhs (x, 0000) ∈
C2 (n, q, a0 ).
Now we assume that both N1 (x) − N1 (y) 6≡ 0 mod 7 and N0 (x) − N0 (y) 6≡ 0 mod 7. Similar to the previous lemma, this
is only possible if a zero is deleted from a run of length > 4 and a zero is deleted from a run of length 1. We can use the
constraint N11011 (x) mod 7 = c5 to determine whether the substring 11011 is preserved from x to y. If 11011 is preserved,
then we can determine x from Fhs (x, 11011) ∈ C2 (n, q, a11011 ). If 11011 is not preserved then we can determine x from
Fhs (x, 110011) ∈ C2 (n, q, a110011 ) according to Claim 2.
As a result of the previous two lemmas, we assume in the remainder of this section that y is the result of deleting a symbol
with a value 1 and a symbol with a value 0. The next 3 lemmas handle the case where N0 (x) > N0 (y) or N1 (x) > N1 (y).
The next lemma covers Scenario 3).
Lemma 4. Suppose N0 (x) − N0 (y) mod 7 = 1, and N1 (x) − N1 (y) mod 7 = 1. Then, x can be recovered from y.
For example, we will be concerned with the following setup:
(×, ×, 0, 0, 0, 0, ×, ×, ×, ×, ×, ×, 1, 1, 1, 1, ×).
Proof: Since N0 (x) − N0 (y) mod 7 = 1 and N1 (x) − N1 (y) mod 7 = 1, y is the result of deleting a 1 and a 0 from x
where both symbols belong to runs of lengths > 4. Since both symbols were deleted from runs of lengths at least 4, it follows
that no 110011 substrings were created/destroyed and so we can recover x from Fhs (x, 110011) ∈ C2 (n, q, a110011 ).
The next two lemmas handle Scenario 4).
Lemma 5. Suppose N0 (x) − N0 (y) mod 7 = 1 and N1 (x) − N1 (y) mod 7 = 0. Then, x can be recovered from y.
For example, we will be concerned with the following setup:
(×, ×, 0, 0, 0, 0, ×, ×, ×, ×, ×, ×, 0, 1, 1, 0, ×).
Proof: Since N0 (x) − N0 (y) mod 7 = 1, clearly a zero was deleted from a run of zeros of length at least 4 in x. Since
N1 (x) − N1 (y) mod 7 = 0 and there are exactly two deletions (of a zero and a one), no ones were deleted from runs of ones
of length > 4. Thus, we can recover x from Fhs(x, 1111) ∈ C2(n, q, a1).
Lemma 6. Suppose N1 (x) − N1 (y) mod 7 = 1 and N0 (x) − N0 (y) mod 7 = 0. Then, x can be recovered from y.
For example, the previous lemma is concerned with the following setup:
(×, ×, 1, 1, 1, 1, ×, ×, ×, ×, ×, ×, 1, 0, 0, 1, ×).
Finally, we turn to the case where either Scenario 5) or Scenario 6) occurs. The next two lemmas handle the case where either
N0 (y) > N0 (x) or N1 (y) > N1 (x).
Lemma 7. Suppose N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3} or N1 (y) − N1 (x) mod 7 ∈ {1, 2, 3}. Then, x can be recovered from y.
For example, we will be concerned with the following setup:
(×, 0, 0, 0, 1, 0, ×, ×, ×, ×, ×, ×, 1, 0, 0, 1, ×),
or
(×, 1, 1, 1, 0, 1, ×, ×, ×, ×, ×, ×, 0, 1, 1, 0, ×).
Proof: Suppose first that N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3}. Then, clearly a one from a run of length 1 was deleted in x
resulting in the creation of new 0000 substrings. If N1 (y) − N1 (x) mod 7 = 0, then 1111 is preserved from x to y and so we
can recover x.
Otherwise, if N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3} and N1 (y) − N1 (x) mod 7 ∈ {1, 2, 3}, it follows that a one was deleted
from a run of length 1 and also a zero was deleted from a run of length 1, so that we have the following type of setup:
(×, 0, 0, 0, 1, 0, ×, ×, ×, ×, ×, 1, 1, 0, 1, 1, ×),
The deletion of a one from a run of length 1 cannot create or destroy a 110011 substring or a 11011 substring, since the 1 needs to be adjacent to two runs of lengths `1, `2 where `1 + `2 > 4 (at least one 0000 substring is created from x to y). Furthermore, the deletion of the one from a run of length 1 clearly cannot create a 1111 substring from x to y. Therefore, the deletion of the zero from a run of length 1 creates a 1111 substring from x to y and from Claim 1, if 11011 is not preserved from x to
y, then 110011 is preserved. Thus, we can use the constraints N11011 (x) mod 7 = c5 , Fhs (x, 110011) ∈ C2 (n, q, a110011 ) and
Fhs (x, 11011) ∈ C2 (n, q, a11011 ) to determine x.
Notice that it is not possible to have N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3} and N1 (x) − N1 (y) mod 7 ∈ {1, 2, 3}. This is because
in order to have N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3}, a one is deleted from a run of length 1 and this creates a new run of zeros
of length at least 4. Then, the deletion of the zero (which also occurs by assumption) can only create 1111 substrings from x to
y and so N1 (x) − N1 (y) mod 7 6∈ {1, 2, 3}.
The case where N1 (y) − N1 (x) mod 7 ∈ {1, 2, 3}, but N0 (y) − N0 (x) mod 7 6∈ {1, 2, 3} can be handled using the same
logic as before.
We have one case left to consider.
Lemma 8. Suppose N1 (x) − N1 (y) mod 7 = 0, and N0 (x) − N0 (y) mod 7 = 0. Then, x can be recovered from y.
Proof: Since N1 (x) − N1 (y) mod 7 = 0 and N0 (x) − N0 (y) mod 7 = 0, there are two setups to consider:
1) A symbol b was deleted from a run of length > 4 and another symbol b̄ from a run of length 1 was deleted which was
adjacent to a run of length `1 and another run of length `2 such that `1 + `2 = 4, so that we have the following type of
setup:
(×, 0, 0, 0, 1, 0, ×, ×, ×, ×, ×, 0, 0, 0, 0, ×, ×).
2) The 0000 and 1111 substrings were preserved from x to y.
We first discuss the decoding procedure and then show that the decoding is correct given that either 1) or 2) hold. Suppose
that N11011(x) ≡ N11011(y) mod 7. Then we estimate x to be the sequence which agrees with at least two of the following three constraints: {Fhs(x, 0000) ∈ C2(n, q, a0), Fhs(x, 1111) ∈ C2(n, q, a1), Fhs(x, 11011) ∈ C2(n, q, a11011)}. Otherwise, if
N11011 (x) 6≡ N11011 (y) mod 7, we estimate x to be the sequence which agrees with the constraint Fhs (x, 0000) ∈ C2 (n, q, a0 ).
First suppose that N11011 (x) ≡ N11011 (y) mod 7 and suppose that 1) holds. If b = 0, then the decoding is correct since in this
case x agrees with the constraints Fhs (x, 1111) ∈ C2 (n, q, a1 ) and Fhs (x, 11011) ∈ C2 (n, q, a11011 ), since 11011 can be created
only if a 0 is deleted from a run of length 1. Notice that if b = 0, then the substring 11011 cannot be destroyed. Now, suppose
b = 1. In this case, if N11011(x) ≡ N11011(y) mod 7, then the deletion of a zero from a run of length 1 did not create/destroy the substring 11011 and so x agrees with at least two of the three constraints from {Fhs(x, 0000) ∈ C2(n, q, a0), Fhs(x, 1111) ∈ C2(n, q, a1), Fhs(x, 11011) ∈ C2(n, q, a11011)}. Thus, the decoding is correct when N11011(x) ≡ N11011(y) mod 7 and 1) holds.
Next consider the case where N11011(x) ≡ N11011(y) mod 7 and suppose that 2) holds. Then clearly, x agrees with two of the three constraints {Fhs(x, 0000) ∈ C2(n, q, a0), Fhs(x, 1111) ∈ C2(n, q, a1), Fhs(x, 11011) ∈ C2(n, q, a11011)} and so the decoding is correct in this case.
Suppose now that N11011 (x) 6≡ N11011 (y) mod 7 and that 1) holds. Notice that under this setup, b 6= 0, since the deletion of a
0 from a run of length at least 4 and the deletion of a 1 from a run of length 1 cannot create/destroy any substrings 11011 from
x to y. If b = 1 and N11011 (x) 6≡ N11011 (y) mod 7, then a zero was deleted from a run of length one and a one was deleted
from a run of length > 4 so that the substring 0000 was preserved and so the decoding is correct in this case.
Finally, we consider the case where N11011(x) ≢ N11011(y) mod 7 and 2) holds. If 2) holds and the 0000 substring is preserved then clearly x agrees with the constraint Fhs(x, 0000) ∈ C2(n, q, a0), and so the decoding procedure is correct in this
case as well.
As a consequence of Lemmas 2-8, we have the following theorem.
Theorem 9. The code C(n, a0 , a1 , a110011 , a11011 , c, s) can correct two deletions.
In the next section, we make some modifications to the code discussed in this section and afterwards we discuss the redundancy of the resulting code.
IV. AN IMPROVED CONSTRUCTION
In this section, we modify the construction in the previous section to obtain a code with redundancy 8 log2 n + O(log2 log2 n). Our construction uses the same substrings to partition our codewords as in the previous section, but we make use of a different hash function in place of hs from Lemma 1, denoted hs^(R). Consequently we show that we can replace the constraint Fhs(x, 11011) ∈ C2(n, q, a11011) with the constraint that Fhs^(R)(x, 11011) belongs to a code with Hamming distance 3 (rather than Hamming distance 5). Our analysis and the subsequent proof will mirror the previous section in light of these modifications. This section is organized as follows. We first describe our code construction in detail and then show it has the advertised redundancy. Afterwards, we prove the code can correct two deletions.
Let CT2(n, s) denote the following set, where for a binary vector v, τ(v) is the length of the longest run of zeroes or ones in v:
CT2(n, s) = {x ∈ F_2^n : L(x, 0000) ≤ s, L(x, 1111) ≤ s, L(x, 110011) ≤ s, L(x, 11011) ≤ s, τ(x) ≤ s}.
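A membership test for CT2(n, s) is straightforward (our sketch; max_gap is one natural reading of L(x, w), and tau matches τ(v) above):

from itertools import groupby

def max_gap(x, w):
    # Largest number of symbols before, between, or after occurrences of w in x; cf. L(x, w).
    m = len(w)
    occ = [i for i in range(len(x) - m + 1) if x[i:i + m] == w]
    bounds = zip([0] + [i + m for i in occ], occ + [len(x)])
    return max(max(b - a, 0) for a, b in bounds)

def tau(x):
    # Length of the longest run of zeros or ones in x.
    return max(len(list(g)) for _, g in groupby(x))

def in_CT2(x, s):
    # Membership test for the constrained set CT2(n, s).
    patterns = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 0, 0, 1, 1), (1, 1, 0, 1, 1)]
    return all(max_gap(x, w) <= s for w in patterns) and tau(x) <= s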
In the following, for a vector v ∈ F_2^n, let r1(v) denote the run-length representation of the runs of ones in v. For example, if v = (1, 1, 0, 1, 0, 1, 1, 1), then r1(v) = (2, 1, 3). Furthermore, let r≥2(v) be the run-length representation of the runs of ones in v with lengths at least 2. For example, r≥2(v) = (2, 3).
Let Q be the smallest prime greater than s. We now turn to describing the map hs^(R) : F_2^s → F_Q^2. Let H_R1 ∈ F_Q^(2×s) be the parity check matrix for a code C_L with Hamming distance at least 3 over F_Q. For a vector v ∈ {0, 1}^s we define hs^(R)(v) as the vector which results by considering the run-length representation (as a vector) of the runs of ones in v of length at least 2 and multiplying H_R1 by this vector. We provide an example of this map next.
Example 4. Suppose v = (0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0). Then, the vector representing the runs of ones in v is r1(v) = (2, 1, 2). Notice that r1(v) has an alphabet size which is equal to the length of the longest run in v. Then, hs^(R)(v) = H_R1 · r≥2(v) = H_R1 · (2, 2).
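These run-length maps are easy to prototype (our sketch; the 2 x 4 matrix H below is a toy placeholder over F_Q and is not claimed to be the parity-check matrix of a specific distance-3 code):

from itertools import groupby

def r1(v):
    # Run-length representation of the runs of ones in v.
    return tuple(len(list(g)) for bit, g in groupby(v) if bit == 1)

def r_ge2(v):
    # Runs of ones of length at least 2.
    return tuple(r for r in r1(v) if r >= 2)

def h_R(v, H, Q):
    # hs^(R)(v): multiply a 2-row matrix H by r_ge2(v) over F_Q (extra columns of H are ignored).
    runs = r_ge2(v)
    return tuple(sum(h * r for h, r in zip(row, runs)) % Q for row in H)

v = (0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0)
print(r1(v), r_ge2(v))              # (2, 1, 2) (2, 2)
Q = 13                               # placeholder prime greater than s
H = [[1, 2, 3, 4], [1, 4, 9, 3]]     # illustrative 2 x 4 matrix over F_Q
print(h_R(v, H, Q))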
Let c ∈ F_7^6. Suppose q1 is the smallest odd prime greater than the size of the image of the map hs, and let q2 be the smallest prime greater than the size of the image of the map hs^(R). As before, let N1 be the smallest positive integer such that q1^(N1−1) > n, and suppose N2 is the smallest positive integer such that q2^(N2) − 1 > n. Let a0, a1, a110011 ∈ F_{q1}^{r1} and let a11011 ∈ F_{q2}^{r2} where r1 ≤ 2N1 + ⌈(N1−1)/3⌉ and r2 ≤ 1 + N2. In the following, let b ∈ Z_{s+1}.
Our construction is the following:
C^(2)(n, a0, a1, a110011, a11011, c, b, s) = { x ∈ CT2(n, s) :
N0(x) mod 7 = c1, N1(x) mod 7 = c2,
N0000(x) mod 7 = c3, N1111(x) mod 7 = c4,
N110011(x) mod 7 = c5, N11011(x) mod 7 = c6,
Fhs(x, 0000) ∈ C2(n, q1, a0),
Fhs(x, 1111) ∈ C2(n, q1, a1),
Fhs(x, 110011) ∈ C2(n, q1, a110011),
Fhs^(R)(x, 11011) ∈ C1(n, q2, a11011),
Σ_{i odd} r1(x)_i ≡ b mod (s + 1) },
where C1(n, q2, a11011) is either a primitive BCH code with roots {1, α} (α ∈ F_{q2^N2} is an element of order q2^(N2) − 1) or a coset of such a code. If any of the sequences above that are required to be in codes of length n have lengths M < n, then we simply assume the last n − M components of the sequences are equal to zero.
Since the parameters c, a0 , a1 , a110011 , a11011 , and b can be chosen arbitrarily, it follows using an averaging argument that
there exists a choice of c, a0 , a1 , a110011 , a11011 , and b that gives
|C^(2)(n, a0, a1, a110011, a11011, c, b, s)| ≥ |CT2(n, s)| / (7^6 · q1^(3r1) · q2^(r2) · (s + 1)).
Assuming that the image of the map hs has cardinality 2^(4 log2(s)) and s = 128 log2(n), then we can approximate q1 = (128 log2 n)^4. In addition, if q1^(N1−1) = n + 1, then N1 = log2(n+1) / (4 log2(128 log2(n))) + 1. Then, r1 ≤ (7/3) · N1, and so
log2 q1^(r1) = (7/3) log2(n + 1) + O(log2 log2 n).
Assuming Q = s + 1 = 128 log2(n) + 1, then we approximate q2 = 256 log2(n) + 2. In addition, if q2^(N2) = n + 2, then N2 = log2(n+2) / log2(256 log2(n) + 2). Since r2 ≤ 1 + N2, we have
log2 q2^(r2) ≤ log2(n + 2) + O(log2 log2(n)).
Thus,
log2 |C^(2)(n, a0, a1, a110011, a11011, c, b, s)| ≥ log2 |CT2(n, s)| − 8 log2 n − O(log2 log2 n).
We now prove that the code C (2) (n, a0 , a1 , a110011 , a11011 , c, b, s) can correct two deletions by considering the same scenarios
as in the previous section. With a slight abuse of notation, in this section C(n) will denote C (2) (n, a0 , a1 , a110011 , a11011 , c, b, s)
and NOT C(n, a0 , a1 , a110011 , a11011 , c, s) from the previous section.
Analogously to the previous section, we begin with the following claim; throughout we assume x ∈ C(n).
Claim 3. Suppose a zero is deleted from a run of length one in x and the deletion causes
1) 110011 to be created/destroyed,
2) 1111 to be created,
then the substring 11011 is preserved from x to y, and dH(Fhs^(R)(x, 11011), Fhs^(R)(y, 11011)) = 1.
Proof: The deletion of a zero from a run of length 1 can destroy a 110011 substring only if one of the middle zeros is
deleted. In this case, the substring 1111 cannot be created from x to y. The only other case to consider is when a substring 110011
is created and also the substring 1111 is created. Notice that this is only possible when the first zero is deleted from the substring
111010011 or if the last zero is deleted from the substring 110010111. Notice that under either setup, the substring 11011 is preserved. In addition r≥2(111010011) = (3, 2) and r≥2(11110011) = (4, 2) so that dH(Fhs^(R)(x, 11011), Fhs^(R)(y, 11011)) = 1 (notice r≥2(110010111) = (2, 3) and r≥2(11001111) = (2, 4)).
Claim 4. Suppose that y is the result of deleting a zero in x from a run of length 1 where the zero is adjacent to runs of length 1 and length ` where ` > 3. Then, given r≥2(x), Σ_{i odd} r1(x)_i mod (s + 1), and y, it is possible to determine r1(x).
Proof: Since y is the result of deleting a zero from a run of length 1 where the zero is adjacent to runs of length 1 and ` > 3, it follows that r≥2(y) can be obtained from r≥2(x) by substituting a symbol which has value ` with another symbol which has value ` + 1. Since ` + 1 > 3, it follows that dH(r≥2(y), r≥2(x)) = 1 and so we can determine the location of the symbol in r≥2(y) that was altered as a result of the deletion to y. To obtain r1(x) from r1(y), r≥2(x), r≥2(y), we take the symbol, say a, that has value ` + 1 in r1(y) and corresponds to the symbol in r≥2(x) that was affected by the deletion of the zero in x, and we replace the symbol a in r1(y) with 2 adjacent symbols 1 and `. Given Σ_{i odd} r1(x)_i mod (s + 1), we can determine whether the symbol 1 should be inserted before the symbol ` or whether ` comes before the symbol 1. Thus, we can recover r1(x) as stated in the claim.
We have the following lemmas which mirror the logic from the previous section. The first lemma follows immediately using
the same logic as in the proof of Lemma 2.
Lemma 10. Suppose N1 (x) − N1 (y) mod 7 = 2. Then, x can be recovered from y.
The next lemma requires a little more work.
Lemma 11. Suppose N0 (x) − N0 (y) mod 7 = 2. Then, x can be recovered from y.
Proof: Similar to the proof of Lemma 3, we focus on the case where both N1 (x) − N1 (y) 6≡ 0 mod 7 and N0 (x) −
N0 (y) 6≡ 0 mod 7, since if at most one of these two conditions hold then we can determine x from y given Fhs (x, 0000) ∈
C2 (n, q, a0 ), Fhs (x, 1111) ∈ C2 (n, q, a1 ).
Since N1 (x) − N1 (y) 6≡ 0 mod 7 and N0 (x) − N0 (y) 6≡ 0 mod 7 hold, it follows that y is the result of deleting a zero from
a run of length 1 and another zero from a run of length at least 4. Notice that the deletion of the zero from a run of length 4
cannot create/destroy the substrings 110011, 1111. Thus, if the substring 110011 is not preserved from x to y, it is a result of the deletion of the zero from a run of length 1. According to Claim 3, under this setup, dH(Fhs^(R)(x, 11011), Fhs^(R)(y, 11011)) = 1. Thus, we can determine r≥2(x) from Fhs^(R)(x, 11011) ∈ C1(n, q2, a11011). According to Claim 4, we can then determine r1(x) so that we can correct the deletion of the zero from a run of length 1. The remaining deletion (of a zero from a run of length > 4) can be corrected using the constraint Fhs(x, 1111) ∈ C2(n, q1, a1).
Thus, we can determine x from y as follows. Suppose N1(x) − N1(y) ≢ 0 mod 7, N0(x) − N0(y) ≢ 0 mod 7, and N110011(x) ≢ N110011(y) mod 7. Then, x can be recovered as described in the previous paragraph using the constraints Fhs^(R)(x, 11011) ∈ C1(n, q2, a11011), Fhs(x, 1111) ∈ C2(n, q1, a1). Otherwise, if N1(x) − N1(y) ≢ 0 mod 7, N0(x) − N0(y) ≢ 0 mod 7, and N110011(x) ≡ N110011(y) mod 7, x can be recovered from y using Fhs(x, 110011) ∈ C2(n, q1, a110011).
The next lemma can be proven in the same manner as Lemma 4.
Lemma 12. Suppose N0 (x) − N0 (y) mod 7 = 1, and N1 (x) − N1 (y) mod 7 = 1. Then, x can be recovered from y.
The proofs of the next two lemmas are the same as Lemma 5 and Lemma 6.
Lemma 13. Suppose N0 (x) − N0 (y) mod 7 = 1 and N1 (x) − N1 (y) mod 7 = 0. Then, x can be recovered from y.
Lemma 14. Suppose N1 (x) − N1 (y) mod 7 = 1 and N0 (x) − N0 (y) mod 7 = 0. Then, x can be recovered from y.
Next, we consider the case where either N0 (y) > N0 (x) or N1 (y) > N1 (x). The result can be proven using ideas similar to
Lemma 7 and Lemma 11. The proof can be found in Appendix A.
Lemma 15. Suppose N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3} or N1 (y) − N1 (x) mod 7 ∈ {1, 2, 3}. Then, x can be recovered from y.
The next lemma follows using similar ideas as from Lemma 8. The proof can be found in Appendix B.
Lemma 16. Suppose N1 (x) − N1 (y) mod 7 = 0, and N0 (x) − N0 (y) mod 7 = 0. Then, x can be recovered from y.
V. CONSTRAINT REDUNDANCY
The purpose of this section is to show that there is no asymptotic rate loss incurred by starting with our constrained sequence
space where there are no more than s symbols between consecutive appearances of v1 = 0000, v2 = 1111, v3 = 11011,
v4 = 110011, v5 = 1 and v6 = 0. Our goal will be to show that the probability a sequence of length n satisfies these constraints converges to a constant for n → ∞. Let Ei be the event that constraint i is met by some sequence a selected uniformly at random from {0, 1}^n. We show that for all sufficiently large n, Σ_{i=1}^{6} Pr(Ei) > 5. Then, we have by the union bound
Pr(E1 ∩ · · · ∩ E6) = 1 − Pr(Ē1 ∪ · · · ∪ Ē6) ≥ 1 − Σ_{i=1}^{6} Pr(Ēi) > 0.   (1)
This implies that the redundancy incurred is indeed a constant number of bits.
To compute Ei , we use the following argument. Set Xi to be a random variable counting the number of appearances of the
constraint sequence vi in a. Let Yi be the number of symbols between two consecutive appearances of the constraint. Then,
conditioning on Xi,
Pr(Ei) = Σ_x Pr(Yi ≤ s)^x Pr(Xi = x) = E[Pr(Yi ≤ s)^{Xi}].
The next step is to use the fact that the function x ↦ c^x is convex and apply Jensen's inequality:
E[Pr(Yi ≤ s)^{Xi}] ≥ Pr(Yi ≤ s)^{E[Xi]}.
First, we can easily compute the E[Xi] terms. Let us set M = |vi| to be the constraint sequence length. In our case, M ∈ {1, 4, 5, 6}. Let a_i^j = (ai, ai+1, . . . , aj) be a subsequence of a. Then, using linearity of expectation, we have that E[Xi] = Σ_{i=1}^{n−M+1} Pr(a_i^{i+M−1} = vi) = (n − M + 1)/2^M.
Next, we consider the P r(Yi 6 s) probabilities; these probabilities measure waiting times. A useful observation is that constraint sequences such as 110011 have lower waiting times compared to the equal-length pattern 111111. This can be easily
formalized through a Markov chain analysis: we model the appearance of a constraint sequence through a series of states forming
a Markov chain. In the case of 111111, the appearance of any 0 prior to the final state takes us back to the initial state. However, this is not the case for 110011 (or any other mixed-run sequence of the same length), since, among other examples, the
appearance of a 1 in state 11 takes us back to the 11 state rather than the initial state. Therefore, since we are lower bounding
our probability, we may directly work with the constraint sequences 1111, 11111, and 111111. The 0000 constraint is identical
to 1111.
Next, observe that the waiting times satisfy the stochastic recursion Y_{M+1} = Y_M + 1 + B·Ỹ_{M+1}, where B is a Bernoulli random variable with parameter 1/2, all variables are independent, and Ỹ_{M+1} is distributed like Y_{M+1}. It can be shown that the solutions to this recursion are exponential random variables; more specifically, 2^(−M) Y_M converges (in distribution) to an exponential r.v. with parameter 1/2. Then, we may write, for sufficiently large n,
Pr(Yi ≤ s) = Pr(2^(−M) Yi ≤ 2^(−M) s) ≥ 1 − exp(−2^(−(M+1)) s).
Thus, for constraint i with length M, the probability Pr(Ei) is lower bounded as
Pr(Ei) ≥ (1 − exp(−s/2^(M+1)))^((n−M+1)/2^M).
We can safely ignore the terms in the exponent that are not n/2^M, since the resulting expression goes to 1. Next, we substitute the value of s. Since s = 2^(M+1) log2(n), we have that
(1 − exp(−s/2^(M+1)))^(n/2^M) = (1 − exp(−log2(n)))^(n/2^M) = (1 − 1/n)^(n/2^M) = ((1 − 1/n)^n)^(2^(−M)) = e^(−2^(−M)).
For our constraints, we have M = (4, 4, 5, 6, 1, 1), yielding a lower bound on Σ_{i=1}^{6} Pr(Ei) of 5.046, so that the condition in (1) is met.
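The final numerical value can be reproduced directly (a short Python check; the per-constraint lower bound e^(−2^(−M)) is taken from the display above):

import math

M_values = (4, 4, 5, 6, 1, 1)
lower_bound = sum(math.exp(-2.0 ** (-M)) for M in M_values)
print(round(lower_bound, 3))   # 5.046 > 5, so the union bound in (1) is strictly positive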
VI. CONCLUSION
In this work, we provided a construction for a code capable of correcting two deletions that improves upon existing art in terms of
the number of redundant bits. Open problems include extending these techniques to codes capable of correcting multiple deletions
as well as developing simple encoding techniques.
REFERENCES
[1] K.A.S. Abdel-Ghaffar, F. Paluncic, H.C. Ferreira, and W.A. Clarke, “On Helberg’s generalization of the Levenshtein code for multiple deletion/insertion
error correction,” IEEE Transactions on Information Theory, vol. 58, no. 3, pp. 1804-1808, 2012.
[2] J. Brakensiek, V. Guruswami, and S. Zbarsky, “Efficient low-redundancy codes for correcting multiple deletions,” in Proceedings of the Twenty-Seventh
Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1884-1892, SIAM, 2016.
[3] S. Datta and S. W. McLaughlin, “An enumerative method for runlength-limited codes: permutation codes,” IEEE Transactions on Information Theory, vol.
45, no. 6, pp. 2199-2204, 1999.
[4] S. Datta and S. W. McLaughlin, “Optimal block codes for M -ary runlength-constrained channels,” IEEE Transactions on Information Theory, vol. 47, no.
5, pp. 2069-2078, 2001.
[5] I. Dumer, “Nonbinary double-error-correcting codes designed by means of algebraic varieties,” IEEE Trans. Inf. Theory, vol. 41, no. 6, pp. 1657–1666, Nov.
1995.
[6] F. V. Fomin, F. Grandoni, and D. Kratsch, “A measure & conquer approach for the analysis of exact algorithms,” Journal of the ACM (JACM), vol. 56, no.
5, pp. 25, 2009
[7] A.S.J. Helberg and H.C. Ferreira, “On multiple insertion/deletion correcting codes,” IEEE Transactions on Information Theory, vol. 48, no. 1, pp. 305-308,
2002.
[8] A.A. Kulkarni, and N. Kiyavash, “Nonasymptotic upper bounds for deletion correcting codes,” IEEE Transactions on Information Theory, vol. 59, no. 8,
pp. 5115-5130, 2013.
[9] V.I. Levenshtein, “Binary codes capable of correcting deletions, insertions, and reversals” Soviet Physics-Doklady, vol. 10, no. 8, pp. 707-710, 1966.
[10] F. Paluncic, Khaled A.S. Abdel-Ghaffar, H.C. Ferreira, and W.A. Clarke, “A multiple insertion/deletion correcting code for run-length limited sequences,”
IEEE Transactions on Information Theory, vol. 58 no. 3, pp. 1809-1824, 2012.
[11] R. M. Roth and P.H. Siegel, “Lee-metric BCH codes and their application to constrained and partial-response channels,” IEEE Transactions on Information Theory, vol. 40, no. 4, pp. 1083-1096, 1994.
[12] F. F. Sellers, Jr., “Bit loss and gain correction codes,” IRE Transactions on Information Theory, vol. 8, no. 1, pp. 35-38, 1962.
[13] R. R. Varshamov and G. M. Tenengolts, “A code for correcting a single asymmetric error,” Avtomatika i Telemekhanika, vol. 26, no. 2, pp. 288-292, 1965.
APPENDIX A
PROOF OF LEMMA 15
Lemma 15. Suppose N0 (y) − N0 (x) mod 7 ∈ {1, 2, 3} or N1 (y) − N1 (x) mod 7 ∈ {1, 2, 3}. Then, x can be recovered from
y.
Proof: We consider the case where N0 (y)−N0 (x) mod 7 ∈ {1, 2, 3} and N1 (y)−N1 (x) mod 7 ∈ {1, 2, 3} since otherwise
x can be recovered from y using the same ideas as in the proof of Lemma 7. The deletion of a one from a run of length 1
cannot create/destroy a 110011 substring or a 11011 substring since the 1 needs to be adjacent to two runs of lengths `1 , `2
where `1 + `2 > 4 since at least one 0000 substring is created from x to y. Furthermore, the deletion of the one from a run of
length 1 clearly cannot create a 1111 substring from x to y. Therefore, the deletion of the zero from a run of length 1 creates
a 1111 substring from x to y. If, in addition, the deletion of the zero also does not preserve the 110011 substring from x to y, then according to Claim 3, the substring 11011 is preserved from x to y and dH(Fhs^(R)(x, 11011), Fhs^(R)(y, 11011)) = 1. Using the same logic as in the proof of Lemma 11, according to Claim 4 we can correct the deletion of a zero from a run of length 1 using the constraints Fhs^(R)(x, 11011) ∈ C1(n, q2, a11011) and Σ_{i odd} r1(x)_i ≡ b mod (s + 1), and the deletion of the one from a run of length > 4 can be corrected with the constraint Fhs(x, 0000) ∈ C2(n, q1, a0). Thus, the decoding in this case is the same as described in the last paragraph of Lemma 11.
APPENDIX B
PROOF OF LEMMA 16
Lemma 16. Suppose N1 (x) − N1 (y) mod 7 = 0, and N0 (x) − N0 (y) mod 7 = 0. Then, x can be recovered from y.
Proof: Using the same logic as before, N1 (x) − N1 (y) mod 7 = 0 and N0 (x) − N0 (y) mod 7 = 0, there are two setups
to consider:
1) A symbol b was deleted from a run of length > 4 and another symbol b̄ from a run of length 1 was deleted which was
adjacent to a run of length `1 and another run of length `2 such that `1 + `2 = 4.
2) The 0000 and 1111 substrings were preserved from x to y.
The decoding procedure is the following. Suppose N110011(x) ≡ N110011(y) mod 7. Then we estimate x to be the sequence which agrees with at least two of the following three constraints: {Fhs(x, 0000) ∈ C2(n, q1, a0), Fhs(x, 1111) ∈ C2(n, q1, a1), Fhs(x, 110011) ∈ C2(n, q1, a110011)}. Otherwise, if N110011(x) ≢ N110011(y) mod 7, we estimate x to be the sequence which agrees with the constraint Fhs(x, 0000) ∈ C2(n, q1, a0).
First suppose that N110011(x) ≡ N110011(y) mod 7 and suppose that 1) holds. If b = 0, then the decoding is correct since in this case x agrees with the constraints Fhs(x, 1111) ∈ C2(n, q1, a1) and Fhs(x, 110011) ∈ C2(n, q1, a110011), since it is not possible to create a 0000 substring and also to create/destroy 110011. Now, suppose b = 1. In this case, if N110011(x) ≡ N110011(y) mod 7, then the deletion of a zero from a run of length 1 did not create/destroy the substring 110011 and so x agrees with at least two of the three constraints from {Fhs(x, 0000) ∈ C2(n, q1, a0), Fhs(x, 1111) ∈ C2(n, q1, a1), Fhs(x, 110011) ∈ C2(n, q1, a110011)}. Thus, the decoding is correct when N110011(x) ≡ N110011(y) mod 7 and 1) holds.
Next consider the case where N110011(x) ≡ N110011(y) mod 7 and suppose that 2) holds. Then clearly, x agrees with two of the three constraints {Fhs(x, 0000) ∈ C2(n, q1, a0), Fhs(x, 1111) ∈ C2(n, q1, a1), Fhs(x, 110011) ∈ C2(n, q1, a110011)} and so
the decoding is correct in this case.
Suppose now that N110011 (x) 6≡ N110011 (y) mod 7 and that 1) holds. Notice that under this setup, b 6= 0, since the deletion of
a 0 from a run of length at least 4 and the deletion of a 1 from a run of length 1 that creates a 0000 substring cannot create/destroy
any substrings 110011 from x to y. If b = 1 and N110011 (x) 6≡ N110011 (y) mod 7, then a zero was deleted from a run of length
one and a one was deleted from a run of length > 4 so that the substring 0000 was preserved and so the decoding is correct in
this case.
Finally, we consider the case where N110011(x) ≢ N110011(y) mod 7 and 2) holds. If 2) holds and the 0000 substring is
preserved then clearly x agrees with the constraint Fhs (x, 0000) ∈ C2 (n, q1 , a0 ), and so the decoding procedure is correct in this
case as well.
| 7 |
On Binary Distributed Hypothesis Testing
Eli Haim and Yuval Kochman
Abstract
We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d.
doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond
to different levels of correlation between the two source components, i.e., the crossover probability between the two.
The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between
the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates.
We first consider the side-information setting where one encoder is allowed to send the full sequence. For this setting,
previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision
upon the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of
binning error; the results are also more complete as they cover the full exponent tradeoff and all possible correlations.
We then turn to the setting of symmetric rates for which we utilize Körner-Marton coding to generalize the results,
with little degradation with respect to the performance with a one-sided constraint (side-information setting).
I. INTRODUCTION
We consider the distributed hypothesis testing (DHT) problem, where there are two distributed sources, X and
Y , and the hypotheses are given by
H0 : (X, Y) ∼ P_{X,Y}^(0),   (1a)
H1 : (X, Y) ∼ P_{X,Y}^(1),   (1b)
where P_{X,Y}^(0) and P_{X,Y}^(1) are different joint distributions of X and Y. The test is performed based on information
sent from two distributed terminals (over noiseless links), each observing n i.i.d. realizations of a different source,
where the rate of the information sent from each terminal is constrained. This setup, introduced in [1, 2], introduces
a tradeoff between the information rates and the probabilities of the two types of error events. In this work we
focus on the exponents of these error probabilities, with respect to the number of observations n.
When at least one of the marginal distributions depends on the hypothesis, a test can be constructed based only on
the type of the corresponding sequence. Although this test may not be optimal, it results in non-trivial performance
(positive error exponents) with zero rate. In contrast, when the marginal distributions are the same under both
hypotheses, a positive exponent cannot be achieved using a zero-rate scheme, see [3].
One may achieve positive exponents while maintaining low rates, by effectively compressing the sources and then
basing the decision upon their compressed versions. Indeed, many of the works that have considered the distributed
hypothesis testing problem bear close relation to the distributed compression problem.
Ahlswede and Csiszár [4] have suggested a scheme based on compression without taking advantage of the
correlation between the sources; Han [5] proposed an improved scheme along the same lines. Correlation between
the sources is exploited by Shimokawa et al. [6, 7] to further reduce the coding rate, incorporating random binning
following the Slepian-Wolf [8] and Wyner-Ziv [9] schemes. Rahman and Wagner [10] generalized this setting and
also derived an outer bound. They also give a “quantize and bin” interpretation to the results of [6]. Other related
works include [11–15]. See [16, 10] for further references.
We note that in spite of considerable efforts over the years, the problem remains open. In many cases, the gap
between the achievability results and the few known outer bounds is still large. Specifically, some of the stronger
results are specific to testing against independence (i.e., under one of the hypotheses X and Y are independent),
or specific to the case where one of the error exponents is zero (“Stein’s-Lemma” setting). The present work
significantly goes beyond previous works, extending and improving the achievability bounds. Nonetheless, the
refined analysis comes at a price. Namely, in order to facilitate analysis, we choose to restrict attention to a simple
source model.
To that end, we consider the case where (X, Y) is a doubly symmetric binary source (DSBS). That is, X and Y are each binary and symmetric. Let Z ≜ Y ⊖ X be the modulo-two difference between the sources.1 We consider the following two hypotheses:
H0 : Z ∼ Ber(p0),   (2a)
H1 : Z ∼ Ber(p1),   (2b)
where we assume throughout that p0 ≤ p1 ≤ 1/2. Note that a sufficient statistic for hypothesis testing in this case
is the weight (which is equivalent to the type) of the noise sequence Z. Under communication rate constraints,
a plausible approach would be to use a distributed compression scheme that allows lossy reconstruction of the
sequence Z, and then base the decision upon that sequence.
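For intuition only (our illustration, not one of the schemes analyzed in this paper), the following Python sketch simulates the weight of Z under both hypotheses and applies a simple threshold test to this sufficient statistic, estimating the two error probabilities empirically; both decay exponentially in n when the normalized threshold lies strictly between p0 and p1.

import random

def empirical_error_probs(n, p0, p1, threshold, trials=20000, seed=0):
    # Errors of the centralized test "decide H1 iff weight(Z) > threshold".
    rng = random.Random(seed)
    errors = [0, 0]
    for hyp, p in enumerate((p0, p1)):
        for _ in range(trials):
            weight = sum(rng.random() < p for _ in range(n))   # weight of Z ~ Binomial(n, p)
            decision = 1 if weight > threshold else 0
            errors[hyp] += int(decision != hyp)
    return errors[0] / trials, errors[1] / trials

n, p0, p1 = 100, 0.1, 0.3
print(empirical_error_probs(n, p0, p1, threshold=n * (p0 + p1) / 2))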
We first consider a one-sided rate constraint. That is, the Y -encoder is allocated the full rate of one bit per source
sample, so that the Y sequence is available as side information at the decision function. In this case, compression
of Z amounts to compression of X; a random binning scheme is optimal for this task of compression, lossless or
lossy.2 Indeed, in this case, the best known achievability result is due to [6], which basically employs a random
binning scheme.3
A natural question that arises when using binning as part of the distributed hypothesis testing scheme is the effect
of a “bin decoding error” on the decision error between the hypotheses. The connection between these two errors
is non-trivial as a bin decoding error inherently results in a “large” noise reconstruction error, much in common
with errors in channel coding (in the context of syndrome decoding). Specifically, when a binning error occurs,
1 Notice that in this binary case, the uniform marginals mean that Z is necessarily independent of X.
2 More precisely, it gives the optimal coding rates, as well as the best known error exponents when the rate is not too high.
3 Interestingly, when p1 = 1/2 (testing against independence), the simple scheme of [4] which ignores the side-information altogether is optimal.
the reconstruction of the noise sequence Z is roughly consistent with an i.i.d. Bernoulli 1/2 distribution. Thus, if
one feeds the weight of this reconstructed sequence to a simple threshold test, it would typically result in deciding
that the noise was distributed according to p1 , regardless of whether that is the true distribution or not. This effect
causes an asymmetry between the two error probabilities associated with the hypothesis test. Indeed, as the Stein
exponent corresponds to highly asymmetric error probabilities, the exponent derived in [6] may be interpreted as
taking advantage of this effect.4
The contribution of the present work is twofold. First we extend and strengthen the results of [6]. By explicitly
considering and leveraging the properties of good codes, we bound the probability that the sequence Z happens
to be such that Y ⊖ Z is very close to some wrong yet “legitimate” X, much like an undetected error event in
erasure decoding [17]. This allows us to derive achievability results for the full tradeoff region, namely the tradeoff
between the error exponents corresponding to the two types of hypothesis testing errors.
The second contribution is in considering a symmetric-rate constraint. For this case, the optimal distributed
compression scheme for Z is the Körner-Marton scheme [18], which requires each of the users to communicate at
a rate H (Z); hence, the sum-rate is strictly smaller than the one of Slepian-Wolf, unless Z is symmetric. Thus,
the Körner-Marton scheme is a natural candidate for this setting. Indeed, it was observed in [4, 16] that a standard
information-theoretic solution such as Slepian-Wolf coding may not always be the way to go, and [16] mentions the
the Körner-Marton scheme in this respect. Further, Shimokawa and Amari [19] point out the possible application
of the Körner-Marton scheme to distributed parameter estimation in a similar setting and a similar observation is
made in [20]. However, to the best of our knowledge, the present work is the first to propose an actual KörnerMarton-based scheme for distributed hypothesis testing and to analyze its performance. Notably, the performance
tradeoff obtained recovers the achievable tradeoff derived for a one-sided constraint.
The rest of this paper is organized as follows. In Section II we formally state the problem, define notations and
present some basic results. Section III and IV provide necessary background: the first surveys known results for
the case of a one-sided rate constraint while the latter provides definitions and properties of good linear codes. In
Section V we present the derivation of a new achievable exponents tradeoff region. Then, in Section VI we present
our results for a symmetric-rate constraint. Numerical results and comparisons appear in Section VII. Finally,
Section VIII concludes the paper.
II. P ROBLEM S TATEMENT AND N OTATIONS
A. Problem Statement
Consider the setup depicted in Figure 1. X and Y are random vectors of blocklength n, drawn from the (finite)
source alphabets X and Y, respectively. Recalling the hypothesis testing problem (1), we have two possible i.i.d.
distributions. In the sequel we will take a less standard notational approach, and define the hypotheses by random
variable H which takes the values 0, 1, and assume a probability distribution function PX,Y|H; therefore H = i
refers to Hi of (1) and (2).5 We still use for the distribution PX,Y|H=i (for i = 0, 1) the shortened notation P(i)X,Y.
Namely, for any x ∈ X^n and y ∈ Y^n, and for i ∈ {0, 1},

P(X = x, Y = y | H = i) = ∏_{j=1}^{n} P(i)X,Y(xj, yj).

Fig. 1. Problem setup: encoders φX and φY map X and Y to messages iX ∈ MX and iY ∈ MY; the decision function ψ outputs Ĥ.

4 Another interesting direction, not pursued in this work, is to change the problem formulation to allow declaring an "erasure" when the
probability of a bin decoding error exceeds a certain threshold.
A scheme for the problem is defined as follows.
Definition 1: A scheme Υ ≜ (φX, φY, ψ) consists of encoders φX and φY, which are mappings from the set of
length-n source vectors to the message sets MX and MY:

φX : X^n → MX,   (3a)
φY : Y^n → MY,   (3b)

and a decision function, which is a mapping from the set of possible message pairs to one of the hypotheses:

ψ : MX × MY → {0, 1}.   (4)

Definition 2: For a given scheme Υ, denote the decision given the pair (X, Y) by

Ĥ ≜ ψ(φX(X), φY(Y)).   (5)

The decision error probabilities of Υ are given by

ǫi ≜ P(Ĥ ≠ H | H = i),  i = 0, 1.   (6a)

Definition 3: For any E0 > 0 and E1 > 0, the exponent pair (E0, E1) is said to be achievable at rates (RX, RY)
if there exists a sequence of schemes

Υ(n) ≜ (φX(n), φY(n), ψ(n)),  n = 1, 2, . . .   (7)

5 We do not assume any given distribution over H, as we are always interested in probabilities given the hypotheses.
with corresponding sequences of message sets MX(n) and MY(n) and error probabilities ǫi(n), i ∈ {0, 1}, such that6

lim sup_{n→∞} (1/n) log |MX(n)| ≤ RX,   (8a)
lim sup_{n→∞} (1/n) log |MY(n)| ≤ RY,   (8b)

and

lim inf_{n→∞} −(1/n) log ǫi(n) ≥ Ei,  i = 0, 1.   (8c)

The achievable exponent region C(RX, RY) is the closure of the set of all achievable exponent pairs.7
The case where only one of the error probabilities decays exponentially is of special interest; we call the resulting
quantity the Stein exponent after Stein's Lemma (see, e.g., [21, Chapter 12]). When ǫ1(n) is exponential, the Stein
exponent is defined as:

σ1(RX, RY) ≜ sup_{E0>0} {E1 : ∃(E0, E1) ∈ C(RX, RY)}.   (9)

σ0(RX, RY) is defined similarly.
We will concentrate in this work on two special cases of rate constraints, where for simplicity we can make the
notation more concise.
1) One-sided constraint where RY = ∞. We shall denote the achievable region and Stein exponents as CX (RX ),
σX,0 (RX ) and σX,1 (RX ).
2) Symmetric constraint where RX = RY = R. We shall denote the achievable region and Stein exponents as
C (R), σ0 (R) and σ1 (R).
Note that for any R we have that C (R) ⊆ CX (R).
Whenever considering a specific source distribution, we will take (X, Y) to be a DSBS. Recalling (2), that means
that X and Y are binary symmetric, and the "noise" Z ≜ Y ⊖ X satisfies:

P(Z = 1 | H = i) = pi,  i = 0, 1,   (10)

for some parameters 0 ≤ p0 ≤ p1 ≤ 1/2 (note that there is no loss of generality in assuming that both probabilities
are on the same side of 1/2).
B. Further Notations
The following notations of probability distribution functions are demonstrated for random variables X, Y and Z
over alphabets X , Y and Z, respectively. The probability distribution function of a random variable X is denoted
by PX, and the conditional probability distribution function of a random variable Y given a random variable X is
denoted by PY|X. A composition of PX and PY|X is denoted by PX PY|X, leading to the following joint probability
distribution function PX,Y of X and Y:

PX PY|X (x, y) ≜ PX(x) PY|X(y|x),   (11)

for any pair x ∈ X and y ∈ Y.

6 All logarithms are taken to the base 2, and all rates are in units of bits per sample.
7 For simplicity of the notation we omit here and in subsequent definitions the explicit dependence on the distributions (P(0), P(1)).
The Shannon entropy of a random variable X is denoted by H(PX), and the Kullback-Leibler divergence of
a pair of probability distribution functions (P, Q) is denoted by D(P‖Q). The mutual information of a pair of
random variables (X, Y) is denoted by I(PX, PY|X). The corresponding conditional functionals of the entropy, divergence
and mutual information are defined by an expectation over the a-priori distribution: the conditional entropy of a
random variable X given a random variable Z is denoted by

H(PX|Z | PZ) ≜ Σ_{z∈Z} PZ(z) Σ_{x∈X} PX|Z(x|z) log [1 / PX|Z(x|z)].   (12)

The divergence of a pair of conditional probability distribution functions PX|Z and PY|Z is denoted by
D(PX|Z ‖ PY|Z | PZ).
The conditional mutual information of a pair of random variables (X, Y) given a random variable Z is denoted by
I(PX|Z, PY|X,Z | PZ),
and notice that it is equal to
H(PX|Z | PZ) − H(PX|Y,Z | PZ PX|Z).
If there is a Markov chain Z ↔ X ↔ Y, then we can omit the Z from PY|X,Z and the expression becomes
I(PX|Z, PY|X | PZ).
Since we concentrate on a binary case, we need the following. Denote the binary divergence of a pair (p, q),
where p, q ∈ (0, 1), by

Db(p‖q) ≜ p log(p/q) + (1 − p) log[(1 − p)/(1 − q)],   (13)

which is the Kullback-Leibler divergence of the pair of probability distributions ((p, 1 − p), (q, 1 − q)). Denote the
binary entropy of p ∈ (0, 1) by

Hb(p) ≜ p log(1/p) + (1 − p) log[1/(1 − p)],   (14)

which is the entropy function of the probability distribution (p, 1 − p). Denote the Gilbert-Varshamov relative
distance of a code of rate R, δGV : [0, 1] → [0, 1/2], by

δGV(R) ≜ Hb⁻¹(1 − R).   (15)
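Since Hb, Db and δGV recur throughout the paper, the following small numerical sketch (our own illustration, not part of the original text; the bisection tolerance and printed example are arbitrary choices) shows one way to evaluate them:

    import math

    def Hb(p):
        # Binary entropy (base 2), Eq. (14); Hb(0) = Hb(1) = 0 by continuity.
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def Db(p, q):
        # Binary divergence Db(p||q), Eq. (13); assumes 0 < q < 1.
        out = 0.0
        if p > 0:
            out += p * math.log2(p / q)
        if p < 1:
            out += (1 - p) * math.log2((1 - p) / (1 - q))
        return out

    def delta_GV(R, tol=1e-12):
        # Gilbert-Varshamov relative distance, Eq. (15): root of Hb(d) = 1 - R on [0, 1/2].
        lo, hi = 0.0, 0.5
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if Hb(mid) < 1 - R else (lo, mid)
        return (lo + hi) / 2

    print(delta_GV(0.3))  # roughly 0.19 for a rate-0.3 code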
The operator ⊕ denotes addition over the binary field. The operator ⊖ is equivalent to the ⊕ operator over the
binary field, but nevertheless we keep both for the sake of consistency.
The Hamming weight of a vector u = (u1, . . . , un) ∈ {0, 1}^n is denoted by

wH(u) = Σ_{i=1}^{n} 1{ui = 1},   (16)

where 1{·} denotes the indicator function, and the sum is over the reals. The normalized Hamming weight of this
vector is denoted by

δH(u) = (1/n) wH(u).   (17)

Denote the n-dimensional Hamming ball with center c and normalized radius r ∈ [0, 1] by

Bn(c, r) ≜ {x ∈ {0, 1}^n : δH(x ⊖ c) ≤ r}.   (18)

The binary convolution of p, q ∈ [0, 1] is defined by

p ∗ q ≜ (1 − p)q + p(1 − q).   (19)
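As a small concrete illustration of this vector notation, a sketch of our own (not from the paper):

    def wH(u):
        # Hamming weight, Eq. (16).
        return sum(1 for ui in u if ui == 1)

    def deltaH(u):
        # Normalized Hamming weight, Eq. (17).
        return wH(u) / len(u)

    def in_ball(x, c, r):
        # Membership in the Hamming ball Bn(c, r) of Eq. (18).
        return deltaH([xi ^ ci for xi, ci in zip(x, c)]) <= r

    def conv(p, q):
        # Binary convolution p * q, Eq. (19).
        return (1 - p) * q + p * (1 - q)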
Definition 4 (Bernoulli Noise): A Bernoulli random variable Z with P(Z = 1) = p is denoted by Z ∼ Ber(p).
An n-dimensional random vector Z with i.i.d. entries Zi ∼ Ber(p) for i = 1, . . . , n is called a Bernoulli noise, and
denoted by

Z ∼ BerV(n, p).   (20)

Definition 5 (Fixed-Type Noise): Denote the set of vectors with type a ∈ [0, 1] by

Tn(a) ≜ {x ∈ {0, 1}^n : δH(x) = a}.   (21a)

A noise

N ∼ Uniform(Tn(a))   (21b)

is called an n-dimensional fixed-type noise of type a ∈ [0, 1].
For any two sequences {an}∞n=1 and {bn}∞n=1, we write an ≐ bn if limn→∞ n⁻¹ log(an/bn) = 0. We write
an ≤̇ bn if limn→∞ n⁻¹ log(an/bn) ≤ 0.
For any two sequences of random vectors Xn, Yn ∈ X^n (n = 1, 2, . . .), we write

Xn ≐D Yn   (22)

if

P(Xn = xn) ≐ P(Yn = xn)   (23)

uniformly over xn ∈ X^n, that is,

limn→∞ (1/n) log [PXn(xn) / PYn(xn)] = 0   (24)

uniformly over xn ∈ X^n. We write Xn ≤̇D Yn if

limn→∞ (1/n) log [PXn(xn) / PYn(xn)] ≤ 0   (25)

uniformly over xn ∈ X^n.
The set of non-negative integers is denoted by Z+, and the set of natural numbers, i.e., 1, 2, . . ., by N.
C. Some Basic Results
When the rate is not constrained, the decision function has access to the full source sequences. The optimal
tradeoff of the two types of errors is given by the following decision function, depending on the parameter T ≥ 0
(Neyman-Pearson [22]):8

ϕ(x, y) = 0 if P(0)X,Y(x, y) ≥ T · P(1)X,Y(x, y), and ϕ(x, y) = 1 otherwise.   (26)
Proposition 1 (Unconstrained Case): Consider the hypothesis testing problem as defined in Section II-A, where
there is no rate constraint, i.e., RX = RY = ∞. Then (E0, E1) ∈ C(∞) if and only if there exists a distribution
function P(∗)X,Y over the pair (X, Y) such that

Ei ≤ D(P(∗)X,Y ‖ P(i)X,Y),  for i = 0, 1.   (27)

For a proof, see e.g. [21]. Note that in fact rates equal to the logarithms of the alphabet sizes suffice.
For the DSBS, the Neyman-Pearson decision function is a threshold on the weight of the noise sequence. We
denote it (with some abuse of notation) by

ϕt(x, y) ≜ ϕt(δH(x ⊕ y)),

where ϕt : R → {0, 1} is a threshold test,

ϕt(w) = 0 if w ≤ t, and ϕt(w) = 1 if w > t.   (28)

8 In order to achieve the full Neyman-Pearson tradeoff, special treatment of the case of equality is needed. As this issue has no effect on error
exponents, we ignore it.
It leads to the following performance:
Corollary 1 (Unconstrained Case, DSBS): For the DSBS, CX(1) = C(1), and they consist of all pairs (E0, E1)
satisfying, for some t ∈ (p0, p1),

Ei ≤ Db(t‖pi),  for i = 0, 1.   (29)
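The following Monte Carlo sketch (our own illustration; the parameters n, p0, p1, t and the trial count are arbitrary) checks that the empirical error probabilities of the threshold test (28) decay at rates comparable to the exponents Db(t‖pi) of Corollary 1; the comparison holds only up to sub-exponential factors, so a small blocklength is used:

    import math, random

    def Db(p, q):
        return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

    def crossing_prob(n, p, t, trials=20000):
        # Empirical probability that the normalized weight of an i.i.d. Ber(p) vector exceeds t.
        return sum(sum(random.random() < p for _ in range(n)) / n > t
                   for _ in range(trials)) / trials

    n, p0, p1, t = 50, 0.05, 0.25, 0.15
    eps0 = crossing_prob(n, p0, t)        # decide H = 1 although H = 0
    eps1 = 1 - crossing_prob(n, p1, t)    # decide H = 0 although H = 1
    print(eps0, 2 ** (-n * Db(t, p0)))    # comparable up to polynomial factors
    print(eps1, 2 ** (-n * Db(t, p1)))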
We now note a time-sharing result, which is general to any given achievable set.
Proposition 2 (time-sharing): Suppose that (E0 , E1 ) ∈ C (RX , RY ). Then ∀α ∈ [0, 1] :
(αE0 , αE1 ) ∈ C (αRX , αRY ).
(30)
The proof is standard, by noting that any scheme may be applied to an α-portion of the source blocks, ignoring
the additional samples. Applying this technique to Corollary 1, we have a simple scheme where each encoder sends
only a fraction of its observed vector.
Corollary 2: Consider the DSBS hypothesis testing problem as defined in Section II-A. For any rate constraint
R ∈ [0, 1] and any t ∈ (p0, p1),

(R · Db(t‖p0), R · Db(t‖p1)) ∈ C(R).   (31)

Specializing to Stein's exponents, we have:

σ0(R) ≥ R · Db(p1‖p0),   (32a)
σ1(R) ≥ R · Db(p0‖p1).   (32b)
Of course we may apply the same result to the one-sided constrained case, i.e., CX and the corresponding Stein
exponents.
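For concreteness, a short sketch of ours (parameter values arbitrary) evaluating the time-sharing Stein bounds (32):

    import math

    def Db(p, q):
        return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

    R, p0, p1 = 0.3, 0.01, 0.25
    print(R * Db(p1, p0))  # lower bound on sigma_0(R), cf. (32a)
    print(R * Db(p0, p1))  # lower bound on sigma_1(R), cf. (32b)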
III. ONE-SIDED CONSTRAINT: PREVIOUS RESULTS
In this section we review previous results for the one-sided constraint case RY = ∞. We first present them for
general distributions P(0)X,Y, P(1)X,Y, and then specialize to the DSBS.
A. General Sources
Ahlswede and Csiszár have established the following achievable Stein’s exponent.
Proposition 3 ([4, Theorem 5]): For any RX > 0,

σX,1(RX) ≥ D(P(0)X ‖ P(1)X) + max_{PV|X : I(P(0)X, PV|X) ≤ RX} D(P(0)V,Y ‖ P(∗)V,Y),   (33)

where P(0)V,Y and P(∗)V,Y are the marginals of

P(0)V,X,Y ≜ PV|X P(0)X P(0)Y|X   and   P(∗)V,X,Y ≜ PV|X P(0)X P(1)Y|X,

respectively.
The first term of (33) reflects the contribution of the type of X (which can be conveyed with zero rate), while the
second reflects the contribution of the lossy version of X sent with rate RX. Interestingly, this exponent is optimal
for the case P(1)Y|X = P(1)Y, known as testing against independence.
Han has improved upon this exponent by conveying the joint type of the source sequence X and its quantized
version (represented by V) to the decision function.9

Proposition 4 ([5, Theorems 2,3]): For any RX ≥ 0,

σX,1(RX) ≥ D(P(0)X ‖ P(1)X) + max_{PV|X : I(P(0)X, PV|X) ≤ RX, |V| ≤ |X|+1} σHAN(V),   (34a)

where

σHAN(V) ≜ min_{P(∗)Y|V,X : P(∗)V,Y = P(0)V,Y} D(P(∗)Y|X,V ‖ P(1)Y|X | P(0)X PV|X),   (34b)

and where P(0)V,Y and P(∗)V,Y are the marginals of

P(0)V,X,Y ≜ PV|X P(0)X P(0)Y|X   (35a)

and

P(∗)V,X,Y ≜ PV|X P(0)X P(∗)Y|V,X,   (35b)

respectively.
The following result, by Shimokawa et al., gives a tighter achievable bound by using the side information Y when
encoding X.

Proposition 5 ([6, Corollary III.2], [16, Theorem 4.3]): Define

σSHA(V) ≜ −I(P(0)X|Y, PV|X | P(0)Y) + min D(P(∗)Y|X,V ‖ P(1)Y|X | P(0)X PV|X),   (36a)

where the minimum is over all P(∗)Y|V,X such that P(∗)Y = P(0)Y and H(P(∗)V|Y | P(∗)Y) ≥ H(P(0)V|Y | P(0)Y), and
where P(0)V,Y and P(∗)V,Y are the marginals of the distributions defined in (35a) and (35b), respectively. Then, for any
RX > 0,

σX,1(RX) ≥ D(P(0)X ‖ P(1)X) + max_{PV|X : I(P(0)X|Y, PV|X | P(0)Y) ≤ RX, |V| ≤ |X|+1} min {σHAN(V), RX + σSHA(V)}.   (36b)

Notice that for PV|X such that I(P(0)X, PV|X) ≤ RX, the bound of the last proposition will be no greater than
the bound of Proposition 4. Therefore the overall bound is obtained by taking the maximum of the two.
It is worth pointing out that for the distributed rate-distortion problem, the bound in Proposition 5 is in general
suboptimal [23].

9 Han's result also extends to any rate pair (RX, RY); however, we only state it for the single-sided constraint.
A non-trivial outer bound was derived by Rahman and Wagner [10] using additional information at the decoder,
which does not exist in the original problem.
Proposition 6 ([10, Corollary 5]): Suppose that

P(1)X = P(0)X.   (37a)

Consider a pair of conditional distributions P(0)Z|X,Y and P(1)Z|X,Y such that

P(1)Z|X = P(0)Z|X   (37b)

and such that X ↔ Z ↔ Y under the distribution

P(1)X,Y,Z ≜ P(1)X,Y P(1)Z|X,Y.   (37c)

Then, for any RX > 0,

σX,1(RX) ≤ D(P(1)Y|Z ‖ P(0)Y|Z | PZ) + max_{PV|X : I(P(0)X|Y, PV|X | P(0)Y) ≤ RX, |V| ≤ |X|+1} I(P(0)Y|Z, P(0)X|Y,Z PV|X | PZ).   (37d)
B. Specializing to the DSBS
We now specialize the results of Section III-A to the DSBS. Throughout, we choose the auxiliary variable V to
be connected to X by a binary symmetric channel with crossover probability a; with some abuse of notation, we
write e.g. σ(a) for the specialization of σ(V ). Due to symmetry, we conjecture that this choice of V is optimal,
up to time sharing that can be applied according to Proposition 2; we do not explicitly write the time-sharing
expressions.
The connection between the general and DSBS-specific results can be shown directly. However, we follow a direction
that is more relevant to this work, providing for each result a direct interpretation, explaining how it can be obtained
for the DSBS; in doing that, we follow the interpretations of Rahman and Wagner [10].
The Ahlswede-Csiszár scheme of Proposition 3 amounts to quantization of the source X, without using Y as side
information.
Corollary 3 (Proposition 3, DSBS with symmetric auxiliary): For any RX > 0,
σX,1(RX) ≥ σAC(δGV(RX)),   (38a)

where

σAC(a) ≜ Db(a ∗ p0 ‖ a ∗ p1).   (38b)
This performance can be obtained as follows. The encoder quantizes X using a code that is rate-distortion optimal under the Hamming distortion measure; specifically, averaging over the random quantizer, the source and
reconstruction are jointly distributed according to the RDF-achieving test channel, that is, the reconstruction X̂ is
obtained from the source X by a BSC with crossover probability a that satisfies the RDF, namely a = δGV (RX ).
The decision function is φt (X̂, Y) which can be seen as two-stage: first the source difference sequence is estimated
as Ẑ = Y ⊖ X̂, and then a threshold is applied to the weight of that sequence, as if it were the true noise. Notice
that given H = i, Ẑ ∼ BerV (n, a ∗ pi ); the exponents are thus the probabilities of such a vector to fall inside
or outside a Hamming sphere of radius nt around the origin. As Proposition 3 relates to a Stein exponent, the
threshold t is set arbitrarily close to a ∗ p0 , resulting in the following; one can easily generalize to an achievable
exponent region.
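A direct numerical sketch of Corollary 3 (our own illustration; the parameter values are arbitrary), evaluating σAC(δGV(RX)) = Db(a ∗ p0 ‖ a ∗ p1):

    import math

    def Hb(p):
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def Db(p, q):
        return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

    def delta_GV(R, tol=1e-12):
        lo, hi = 0.0, 0.5
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if Hb(mid) < 1 - R else (lo, mid)
        return (lo + hi) / 2

    def conv(p, q):
        return (1 - p) * q + p * (1 - q)

    def sigma_AC(RX, p0, p1):
        # Corollary 3: Db(a * p0 || a * p1) with a = delta_GV(RX).
        a = delta_GV(RX)
        return Db(conv(a, p0), conv(a, p1))

    print(sigma_AC(0.3, 0.01, 0.25))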
The Han scheme of Proposition 4 amounts (for the DSBS) to a similar approach, using a more favorable
quantization scheme. In order to express its performance, we use the following exponent, which is explicitly
evaluated in Appendix A. While it is a bit more general than what we need at this point, this definition will allow
us to present later results in a unified manner.
Definition 6: Fix some parameters p, a, t, w ∈ [0, 1]. Let cn ∈ {0, 1}n, n = 1, 2, . . . be a sequence of vectors
such that limn→∞ δH (cn ) = w. Let Z ∼ BerV (n, p) and let U ∼ Uniform (Tn (a)). Then:
EBT(p, a, w, t) ≜ lim_{n→∞} −(1/n) log P(Z ⊕ U ∈ Bn(cn, t)).   (39)
Corollary 4 (Proposition 4, DSBS with symmetric auxiliary): For any RX > 0,
σX,1(RX) ≥ σHAN(δGV(RX)),   (40a)

where

σHAN(a) ≜ EBT(p1, a, 0, a ∗ p0).   (40b)
One can show that σHAN(a) ≥ σAC(a), where the inequality is strict for all p1 < 1/2 (recall that for p1 = 1/2,
"testing against independence", the Ahlswede-Csiszár scheme is already optimal). The improvement comes from
having a quantization error of fixed type a (recall Definition 5) rather than Bernoulli. Thus, Ẑ is "mixed" uniform-Bernoulli; the probability of that noise to enter a ball around the origin is reduced with respect to that of the Bernoulli
Ẑ of Corollary 3.
The Shimokawa et al. scheme of Proposition 5 is similar in the DSBS case, except that the compression of X
now uses side-information. Namely, Wyner-Ziv style binning is used. When the bin is not correctly decoded, a
decision error may occur. The resulting performance is given in the following.
Corollary 5 (Proposition 5, DSBS with symmetric auxiliary): For any RX > 0,
σX,1(RX) ≥ max_{0 ≤ a ≤ δGV(RX)} min {σHAN(a), σSHA(RX, a)},   (41a)

where

σSHA(R, a) ≜ R − Hb(a ∗ p0) + Hb(a).   (41b)
This exponent can be thought of as follows. The encoder performs fixed-type quantization as in Han’s scheme,
except that the quantization type a is now smaller than δGV (RX ). The indices thus have rate 1 − Hb (a). Now these
indices are distributed to bins; as the rate of the bin index is RX , each bin is of rate 1 − Hb (a) − RX . The decision
function decodes the bin index using the side information Y, and then proceeds as in Han’s scheme.
The two terms in the minimization (41a) correspond to the events of a decision error combined with bin-decoding success and error, respectively. The first is as before, hence the use of σHAN. For the second, it can be
shown that as a worst-case assumption, X̂ resulting from a decoding error is uniformly distributed over all binary
sequences. By considering volumes, the exponent of the probability of the reconstruction to fall inside an nt-sphere
is thus at most 1 − Hb (t); a union bound over the bin gives σSHA .
Remark 1: It may be better not to use binning altogether (thus avoiding binning errors), i.e., the exponent of
Corollary 5 is not always higher than that of Corollary 4.
Remark 2: An important special case of this scheme is when lossless compression is used, and Wyner-Ziv
coding reduces to a side-information case of Slepian-Wolf coding. This amounts to forcing a = 0. If no binning
error occurred, we are in the same situation as in the unconstrained case. Thus, we have the exponent:
min (Db(p0‖p1), σSHA(RX)),   (42a)

where

σSHA(R) ≜ σSHA(R, 0) = R − Hb(p0).   (42b)
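A small sketch of ours evaluating the lossless-binning special case (42); note that the binning term is only meaningful when RX > Hb(p0), and the parameter values below are arbitrary:

    import math

    def Hb(p):
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def Db(p, q):
        return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

    def stein_lossless_binning(RX, p0, p1):
        # Eq. (42): min(Db(p0||p1), RX - Hb(p0)); requires RX > Hb(p0) to be positive.
        return min(Db(p0, p1), RX - Hb(p0))

    for RX in (0.3, 0.5, 0.8):
        print(RX, stein_lossless_binning(RX, 0.01, 0.25))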
TABLE I
SUMMARY OF POSSIBLE SCHEMES. TS STANDS FOR TIME-SHARING, Q STANDS FOR QUANTIZATION, BIN STANDS FOR BINNING.

Coding component           | Lossless | Lossy
Oblivious to Y             | TS       | Q + TS [4, 5]
Using side-information Y   | Bin + TS | Q + Bin + TS [6]

We have thus seen various possible combinations of quantization and binning; Table I summarizes the different
possible schemes.
An upper bound is obtained by specializing the Rahman-Wagner outer bound of Proposition 6 to the DSBS.
Corollary 6 (Proposition 6, DSBS with symmetric additional information):

σRW(R) ≜ min_{0≤ζ≤p0} min_{0≤b1≤1} max_{0≤a≤1/2} { Hb(γ) − (ζ∗a) Hb( [b1 ζ (1 − a) + b0 (1 − ζ) a] / (ζ∗a) )
        + (1 − ζ∗a) Hb( [b1 ζ a + b0 (1 − ζ)(1 − a)] / (1 − ζ∗a) )
        + Db( b0 (1 − a) + b1 a ‖ (p0 − a)/(1 − 2a) ) },   (43)

where b0 ≜ [p1 − a(1 − b1)] / (1 − a).
We note that it seems plausible that the exponent for p1 = 1/2, given by Corollary 4, is an upper bound for general p1,
i.e.,

σX,1(RX) ≤ 1 − Hb(p0 ∗ δGV(RX)).   (44)
Next we compare these coding schemes in order to understand the effect of each of their components on the
performance.
IV. BACKGROUND: LINEAR CODES AND ERROR EXPONENTS
In this section we define code ensembles that will be used in the sequel, and present their properties. Although
the specific properties of linear codes are not required until Section VI, we put an emphasis on such codes already;
this simplifies the proofs of some properties we need to show, and also helps to present the different results in a
more unified manner.
A. Linear Codes
Definition 7 (Linear Code): We define a linear code via a k × n generating matrix G over the binary field. This
induces the linear codebook:

C = {c : c = uG, u ∈ {0, 1}^k},   (45)

where u ∈ {0, 1}^k is a row vector.
Assuming that all rows of G are linearly independent, there are 2^k codewords in C, so the code rate is

R = k/n.   (46)
Clearly, for any rate (up to 1), there exists a linear code of this rate asymptotically as n → ∞.
A linear code is also called a parity-check code, and may be specified by an (n − k) × n (binary) parity-check
matrix H. The code C contains all the n-length binary row vectors c whose syndrome

s ≜ cH^T   (47)

is equal to the (n − k)-length all-zero row vector, i.e.,

C ≜ {c ∈ {0, 1}^n : cH^T = 0}.   (48)

Given some general syndrome s ∈ {0, 1}^{n−k}, denote the coset of s by

Cs ≜ {x ∈ {0, 1}^n : xH^T = s}.   (49)

The minimum Hamming distance quantizer of a vector x ∈ {0, 1}^n with respect to a code C ⊆ {0, 1}^n is given by

QC(x) ≜ arg min_{c∈C} δH(x ⊖ c).   (50)

For any syndrome s with respect to the code C, the decoding function fC : {0, 1}^{n−k} → {0, 1}^n gives the
coset leader, the minimum Hamming weight vector within the coset of s:

fC(s) ≜ arg min_{z∈Cs} δH(z).   (51)

Maximum-likelihood decoding of a parity-check code over a BSC Y = X ⊕ Z amounts to syndrome decoding
x̂ = y ⊖ fC(yH^T) [24, Theorem 6.1.1]. The basic "Voronoi" set is given by

Ω0 ≜ {z : z ⊖ fC(zH^T) = 0}.   (52)

The ML decision region of any codeword c ∈ C is equal to a translate of Ω0, i.e.,

Ωc ≜ {y : y ⊖ fC(yH^T) = c}   (53)
   = Ω0 + c.   (54)
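To make the quantizer (50), the coset leader (51) and syndrome decoding concrete, here is a brute-force sketch for a toy [6, 3] code of our own choosing (not a code from the paper); it is only feasible at such tiny blocklengths:

    from itertools import product

    # Toy [n=6, k=3] binary linear code: G = [I | P], H = [P^T | I].
    G = [(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1)]
    H = [(1, 1, 0, 1, 0, 0), (1, 0, 1, 0, 1, 0), (0, 1, 1, 0, 0, 1)]
    n, k = 6, 3

    def syndrome(x):
        # x H^T over GF(2), cf. (47).
        return tuple(sum(hi * xi for hi, xi in zip(row, x)) % 2 for row in H)

    codebook = [tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
                for u in product((0, 1), repeat=k)]

    def dH(a, b):
        return sum(ai != bi for ai, bi in zip(a, b))

    def Q(x):
        # Minimum-distance quantizer Q_C(x) of (50); ties broken arbitrarily.
        return min(codebook, key=lambda c: dH(x, c))

    def coset_leader(s):
        # f_C(s) of (51): a minimum-weight vector whose syndrome is s.
        return min((v for v in product((0, 1), repeat=n) if syndrome(v) == s),
                   key=lambda v: sum(v))

    y = (1, 1, 0, 1, 1, 0)
    x_hat = tuple(yi ^ ei for yi, ei in zip(y, coset_leader(syndrome(y))))  # syndrome decoding
    assert x_hat in codebook and x_hat == Q(y)  # here the closest codeword is unique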
B. Properties of Linear Codes
Definition 8 (Normalized Distance Distribution): The normalized distance (or weight) distribution of a linear
code C for a parameter 0 ≤ w ≤ 1 is defined to be the fraction of codewords c ≠ 0 with normalized weight at
most w, i.e.,

ΓC(w) ≜ (1/|C|) Σ_{c∈C\{0}} 1{δH(c) ≤ w},   (55)

where 1{·} is the indicator function.
Definition 9 (Normalized Minimum Distance): The normalized minimum distance of a linear code C is defined as

δmin(C) ≜ min_{c∈C\{0}} δH(c).   (56)

Definition 10 (Normalized Covering Radius): The normalized covering radius of a code C ⊆ {0, 1}^n is the
smallest radius such that every vector x ∈ {0, 1}^n is covered by a Hamming ball with that radius and center at some
c ∈ C, normalized by the blocklength, i.e.,

ρcover(C) ≜ max_{x∈{0,1}^n} min_{c∈C} δH(x ⊖ c).   (57)

Definition 11 (Normalized Packing Radius): The normalized packing radius of a linear code C is defined to be
half the normalized minimal distance of its codewords, i.e.,

ρpack(C) ≜ (1/2) δmin(C).   (58)
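Continuing the toy [6, 3] example above, a brute-force sketch of ours illustrating Definitions 8-11 (again only feasible at tiny blocklengths):

    from itertools import product

    # The same toy [6, 3] code as above, recomputed so this sketch is self-contained.
    G = [(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1)]
    n, k = 6, 3
    codebook = [tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
                for u in product((0, 1), repeat=k)]

    def deltaH(v):
        return sum(v) / len(v)

    def Gamma(w):
        # Normalized distance distribution of Definition 8.
        return sum(deltaH(c) <= w for c in codebook if any(c)) / len(codebook)

    delta_min = min(deltaH(c) for c in codebook if any(c))                  # Definition 9
    rho_cover = max(min(sum(a != b for a, b in zip(v, c)) for c in codebook) / n
                    for v in product((0, 1), repeat=n))                     # Definition 10
    rho_pack = delta_min / 2                                                # Definition 11
    print(Gamma(0.5), delta_min, rho_cover, rho_pack)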
C. Good Linear Codes
We need two notions of goodness of codes, as follows.
Definition 12 (Spectrum-Good Codes): A sequence of codes C(n) ⊆ {0, 1}^n, n = 1, 2, . . . with rate R is said to
be spectrum-good if for any w ≥ 0,

ΓC(n)(w) ≐ Γ(n)R(w),

where

Γ(n)R(w) = 2^{−n Db(w‖1/2)} if w > δGV(R), and 0 otherwise.   (59)

Definition 13 (Covering-Good Codes): A sequence of codes C(n) ⊆ {0, 1}^n, n = 1, 2, . . . with rate R is said to
be covering-good if

ρcover(C(n)) → δGV(R) as n → ∞.   (60)
The existence of linear codes satisfying these properties is well known. Specifically, consider the ensemble of codes
constructed by random generating matrices, where each entry of the matrix is drawn uniformly and statistically
independently of all other entries. Then almost all members have a spectrum close to (59), see e.g. [24, Chapters
5.6-5.7]; in addition, for almost all members, a process of appending rows to the generating matrix (with vanishing
rate) results in a normalized covering radius close to δGV [25, Theorem 12.3.5]. These existence arguments are
detailed in Appendix C. We need the following properties of good codes.
Spectrum-good codes obtain the best known error exponent for the BSC. Namely, for a BSC with crossover
probability p, they achieve E_BSC(p, R), given by

E_BSC(p, R) ≜ max {Er(p, R), Eex(p, R)},   (61a)

where

Er(p, R) ≜ max_{ρ∈[0,1]} { ρ − (1 + ρ) log[ p^{1/(1+ρ)} + (1 − p)^{1/(1+ρ)} ] − ρR }   (61b)

is the random-coding exponent, and

Eex(p, R) ≜ max_{ρ≥1} { −ρ log[ 1/2 + (1/2) (2√(p(1 − p)))^{1/ρ} ] − ρR }   (61c)

is the expurgated exponent. Notice that as the achievability depends only upon the spectrum, it is universal in p.
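A coarse numerical sketch of ours for evaluating E_BSC(p, R) of (61); it uses a grid search rather than an exact optimization, and the expurgated maximization over ρ ≥ 1 is truncated at an arbitrary rho_max:

    import math

    def Er(p, R, grid=2000):
        # Random-coding exponent (61b), grid search over rho in [0, 1].
        best = float("-inf")
        for i in range(grid + 1):
            rho = i / grid
            e0 = rho - (1 + rho) * math.log2(p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho)))
            best = max(best, e0 - rho * R)
        return best

    def Eex(p, R, rho_max=50.0, grid=5000):
        # Expurgated exponent (61c), grid search over rho >= 1, truncated at rho_max.
        best = float("-inf")
        bhat = 2 * math.sqrt(p * (1 - p))
        for i in range(grid + 1):
            rho = 1.0 + (rho_max - 1.0) * i / grid
            best = max(best, -rho * math.log2(0.5 + 0.5 * bhat ** (1 / rho)) - rho * R)
        return best

    def E_BSC(p, R):
        # Eq. (61a).
        return max(Er(p, R), Eex(p, R))

    print(E_BSC(0.11, 0.2))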
As for covering-good codes, we need the following result, showing that the quantization noise induced by covering-good codes is no worse than a noise that is uniform over an nδGV-Hamming ball.
Lemma 1: Consider a covering-good sequence of codes C(n) ⊆ {0, 1}^n, n = 1, 2, . . . of rate R. Then,

X ⊖ QC(n)(X) ≤̇D N,   (62a)

for

X ∼ Uniform({0, 1}^n),   (62b)
N ∼ Uniform(Bn(0, ρcover(C(n)))).   (62c)

Furthermore, the same holds when adding any random vector to both sides, i.e., for any Z:

X ⊖ QC(n)(X) ⊕ Z ≤̇D N ⊕ Z.   (63)

The proof appears in Appendix B.
D. Nested Linear Codes
We briefly recall some basic definitions and properties related to nested linear codes. The reader is referred to
[26] for further details.
Definition 14 (Nested Linear Code): A nested linear code with rate pair (R1, R2) is a pair of linear codes (C1, C2)
with these rates, satisfying

C2 ⊆ C1,   (64)

i.e., each codeword of C2 is also a codeword of C1 (see [26]). We call C1 and C2 the fine code and the coarse code,
respectively.
If a pair {(n, k1), (n, k2)} of parity-check codes, k1 ≥ k2, satisfies condition (64), then the corresponding parity-check
matrices H1 and H2 are interrelated as

H2^T = [H1^T, ∆H^T],   (65)

where H1 is an (n − k1) × n matrix, H2 is an (n − k2) × n matrix, and ∆H is a (k1 − k2) × n matrix. This implies
that the syndromes s1 = xH1^T and s2 = xH2^T associated with some n-vector x are related as s2 = [s1, ∆s], where
the length of ∆s is k1 − k2 bits. In particular, if x ∈ C1, then s2 = [0, . . . , 0, ∆s]. We may, therefore, partition C1
into 2^{k1−k2} cosets of C2 by setting s1 = 0 and varying ∆s, i.e.,

C1 = ∪_{∆s∈{0,1}^{k1−k2}} C2,s2,   where s2 = [0, ∆s].   (66)

Finally, for a given pair of nested codes, the "syndrome increment" ∆s is given by the function

∆s = x · ∆H^T.   (67)
Proposition 7: Let the syndrome increment ∆s be computed for c ∈ C1. Then, the coset leader corresponding to
the syndrome of c with respect to C2 is given by

fC2(cH2^T) = fC2([0, ∆s]),   (68)

where 0 is a zero row vector of length n − k1.
For a proof, see, e.g., [26].
Definition 15 (Good Nested Linear Code): A sequence of nested linear codes with rate pair (R1 , R2 ) is said to
be good if the induced sequences of fine and coarse codes are covering-good and spectrum-good, respectively.
The existence of good nested linear codes follows naturally from the procedures used for constructing spectrum-good and covering-good codes; see Appendix C for a proof.
We need the following property of good nested codes.
Corollary 7: Consider a sequence of good nested codes (C1(n), C2(n)), n = 1, 2, . . . with a rate pair (R1, R2). Let
X ∼ Uniform({0, 1}^n) and Z ∼ BerV(n, p) be statistically independent. Denote U ≜ QC1(n)(X), where quantization
with respect to a code is defined in (50). Then,

P( QC2,S(n)(X ⊕ Z) ≠ U ) ≤ P( QC2,S(n)(U ⊕ N′ ⊕ Z) ≠ U ),   (69)

where N′ ∼ Uniform(Bn(0, δGV(R1))) is statistically independent of (U, Z), where S ≜ UH2^T, and where H2 is a
parity-check matrix of the coarse code C2(n).
The proof, which relies upon Lemma 1, is given in Appendix B.
E. Connection to Distributed Source Coding
As the elements used for the schemes presented (quantization and binning) are closely related to distributed
compression, we present here some material regarding the connection of the ensembles presented to such problems.
A covering-good code achieves the rate-distortion function of a binary symmetric source with respect to the
Hamming distortion measure, which amounts to a distortion of δGV (R). Furthermore, it does so with a zero error
probability.
A spectrum-good code is directly applicable to the Slepian-Wolf (SW) problem [8], where the source is a DSBS.
Specifically, a partition of all the binary sequences into bins of rate Rbin can be performed by a code of rate
R = 1 − Rbin (which can alternatively be seen as a nested code with R + Rbin = 1), and the SW decoder can
be seen as a channel decoder where the input codebook is the collection of sequences in the bin and the channel
output is Y^n, see [27]. Thus, it achieves the exponent

E_BSC(p, Rbin) = E_BSC(p, 1 − R).

As in channel coding, this error exponent is achievable universally in p.
The achievability of the random-coding and expurgated exponents for the general discrete SW problem was
established by Csiszár et al. [28] and [29].10 Indeed, the connection between channel coding and SW coding is
fundamental (as already noted in [27]), and the optimal error exponents (if they exist) are related, see [31–33].
Nested codes are directly applicable to the Wyner-Ziv (WZ) problem [9], where the source is a DSBS under
the Hamming distortion measure, see [26]. When a good ensemble is used, the exponent of a binning error event is
at least E BSC (p, Rbin ). As the end goal of the scheme is to achieve low distortion with high probability, the designer
has the freedom to choose Rbin that strikes a good balance between binning errors and other excess-distortion events,
see [34].
V. ONE-SIDED CONSTRAINT: NEW RESULT
In this section we present new achievable exponent tradeoffs for the same one-sided case considered in the
previous section. To that end, we will employ the same binning strategy of Corollary 5. However, our analysis
technique allows us to improve the exponent, and to extend it from the Stein setting to the full tradeoff.
For our exponent region, we need the following exponent. It is a variation upon EBT (Definition 6), where the
fixed-type noise is replaced by a noise uniform over a Hamming ball.
10 Indeed, Csiszár has already established the expurgated exponent for a class of additive source-pairs which includes the DSBS in [29].
However, as the derivation was for general rate pairs rather than for the side-information case, it faced inherent difficulty in expurgation in a
distributed setting. This was solved by using linear codes; see [30] for a detailed account in a channel-coding setting.
Definition 16: Fix some parameters p, a, t, w ∈ [0, 1/2]. Let cn ∈ {0, 1}n, n = 1, 2, . . . be a sequence of vectors
such that limn→∞ δH (cn ) = w. Let Z ∼ BerV (n, p) and let N∼ Uniform (Bn (0, a)). Define
EBB(p, a, w, t) ≜ lim_{n→∞} −(1/n) log P(N ⊕ Z ∈ Bn(cn, t)).   (70)
The following can be shown using standard type considerations.
Lemma 2:

EBB(p, a, w, t) = −Hb(a) + min_{0≤r≤a} [Hb(r) + EBT(p, r, w, t)].   (71)
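The exponent EBB governs the probability of a ball-plus-Bernoulli noise entering a Hamming ball; the following Monte Carlo sketch of ours (all parameter values arbitrary, and only informative when the exponent times n is small) estimates that probability directly:

    import math, random

    def sample_ball_uniform(n, a):
        # N ~ Uniform(Bn(0, a)): choose a weight j <= na proportionally to C(n, j),
        # then a uniformly random set of j positions.
        radius = int(a * n)
        j = random.choices(range(radius + 1), weights=[math.comb(n, i) for i in range(radius + 1)])[0]
        v = [0] * n
        for i in random.sample(range(n), j):
            v[i] = 1
        return v

    def estimate_ball_prob(n, p, a, w, t, trials=5000):
        # Monte Carlo estimate of P(N + Z in Bn(c, t)) for a center c of normalized weight w;
        # the exponent of this probability is EBB(p, a, w, t) of Definition 16.
        ones = int(round(w * n))
        c = [1] * ones + [0] * (n - ones)
        hits = 0
        for _ in range(trials):
            noise = sample_ball_uniform(n, a)
            z = [int(random.random() < p) for _ in range(n)]
            hits += sum(ni ^ zi ^ ci for ni, zi, ci in zip(noise, z, c)) <= t * n
        return hits / trials

    print(estimate_ball_prob(n=60, p=0.25, a=0.1, w=0.0, t=0.2))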
We are now ready to state the main result of this section.
Theorem 1 (Binary Hypothesis Testing with One-Sided Constraint): Consider the hypothesis testing problem as
defined in Section II-A for the DSBS, with a rate constraint RX ∈ [0, 1]. For any parameters a ∈ [0, δGV (RX )] and
t ∈ [a ∗ p0 , a ∗ p1 ],
( E(SI)0(a, t; p0, RX), E(SI)1(a, t; p1, RX) ) ∈ CX(RX),   (72)

where

E(SI)0(a, t; p0, RX) ≜ min { EBB(p0, a, 1, 1 − t), E_BSC(a ∗ p0, Rbin) },   (73a)
E(SI)1(a, t; p1, RX) ≜ min { EBB(p1, a, 0, t), Ec(p1, a, t, Rbin) },   (73b)

where

Ec(p1, a, t, Rbin) ≜ max { −Rbin + min_{δGV(Rbin) < w ≤ 1} [Db(w‖1/2) + EBB(p1, a, w, t)], E_BSC(a ∗ p1, Rbin) },   (74)

and where

Rbin ≜ 1 − Hb(a) − RX,   (75)

and EBB(p, a, w, t) is defined in Definition 16.
We prove this theorem using a quantize-and-bin strategy similar to that of Corollary 5, implemented with good
nested codes as defined in Section IV-D. In each of the two minimizations in (73), the first term is a bound on
the exponent of a decision error resulting from a bin-decoding success, while the second is associated with a bin-decoding error, similar to the minimization in (41a). Using the properties of good codes, we provide a tighter and
more general (not only a Stein exponent) bound; the key part is the derivation of Ec , the error exponent given H1 ,
and given a bin-decoding error: we replace the worst-case assumption that the “channel output” is uniform over
all binary sequences by the true distribution, centered around the origin; in particular, it means that given an error,
points close to the decision region of the correct codeword, thus not very close to any other codeword, are more
likely.
After the proof, we remark on the tightness of this result.
Proof: For a chosen a, denote

RQ ≜ δGV⁻¹(a)   (76a)
   = 1 − Hb(a)   (76b)
   = RX + Rbin.   (76c)
Consider a sequence of linear nested codes (C1(n), C2(n)), n = 1, 2, . . . with rate pair (RQ, Rbin), which is good
in the sense of Definition 15. For convenience, the superscript of the code index in the sequence will be omitted.
The scheme we consider uses structured quantization and binning. Specifically, given a vector X, we denote its
quantization by
U ≜ QC1(X).   (77)
Note that we can decompose Y as follows:
Y = X ⊕ Z   (78a)
  = U ⊕ N ⊕ Z,   (78b)
where the quantization noise N = X ⊖ U is independent of U (and of course also of Z) since X is uniformly
distributed.
For the sake of facilitating the analysis of the scheme, we also define

Y′ = U ⊕ N′ ⊕ Z,   (79a)

where

N′ ∼ Uniform(Bn(0, a))   (79b)

is independent of the pair (U, Z). That is, Y′ satisfies the same relations with U as Y, except that the quantization
noise N is replaced by a spherical noise with the same circumradius.
The encoded message is the syndrome increment (recall (67)) of U, i.e.,
φX(X) = ∆S   (80a)
       = U · ∆H^T.   (80b)
Since the rates of C1 and C2 are RQ and Rbin , respectively, the encoding rate is indeed RX .
Let

S ≜ UH2^T.
By Proposition 7, since U ∈ C1, the decoder can recover from ∆S the coset C2,S of syndrome S.
The reconstructed vector at the decoder is given by

Û ≜ QC2,S(Y).   (81)

Denote

Ŵ ≜ δH(Y ⊖ Û).   (82)

After computing Ŵ, given a threshold t ∈ [a ∗ p0, a ∗ p1], the decision function is given by

ψ(φX(X), Y) ≜ ϕt(Ŵ),   (83a)

where ϕt(w) is the threshold function (28).
Denote

W ≜ δH(Y ⊖ U).   (84)
Denote the decoding error event by EC ≜ {Û ≠ U} and the complementary event by EC^c. Using elementary
probability laws, we can bound the error probabilities given the two hypotheses as follows:

ǫ0 = P(Ŵ > t | H = 0)   (85a)
   = P(EC, Ŵ > t | H = 0) + P(EC^c, Ŵ > t | H = 0)   (85b)
   ≤ P(EC, Ŵ > t | H = 0) + P(W > t | H = 0)   (85c)
   ≤ P(EC | H = 0) + P(W ≥ t | H = 0).   (85d)

And similarly,

ǫ1 = P(Ŵ ≤ t | H = 1)   (86a)
   = P(EC, Ŵ ≤ t | H = 1) + P(EC^c, Ŵ ≤ t | H = 1)   (86b)
   ≤ P(EC, Ŵ ≤ t | H = 1) + P(W ≤ t | H = 1).   (86c)
Comparing with the required exponents (73), it suffices to show the following four exponential inequalities:

P(W ≥ t | H = 0) ≤̇ 2^{−n EBB(p0, a, 1, 1−t)},   (87a)
P(EC | H = 0) ≤̇ 2^{−n E_BSC(a ∗ p0, Rbin)},   (87b)
P(W ≤ t | H = 1) ≤̇ 2^{−n EBB(p1, a, 0, t)},   (87c)
P(EC, Ŵ ≤ t | H = 1) ≤̇ 2^{−n Ec(p1, a, t, Rbin)}.   (87d)
In the rest of the proof we show these. For (87a), we have:

P(W ≥ t | H = 0)   (88a)
  = P(δH(Y ⊖ U) ≥ t | H = 0)   (88b)
  = P(N ⊕ Z ∉ Bn(0, t) | H = 0)   (88c)
  = P(N ⊕ Z ∈ Bn(1, 1 − t) | H = 0)   (88d)
  ≤ P(N′ ⊕ Z ∈ Bn(1, 1 − t) | H = 0)   (88e)
  ≐ 2^{−n EBB(p0, a, 1, 1−t)},   (88f)

where 1 is the all-ones vector, (88c) follows by substituting (77), the transition (88e) is due to Lemma 1, and the
last asymptotic equality is due to Definition 16. The proof of (87c) is very similar and is thus omitted.
For (87b), we have:

P(EC | H = 0)   (89a)
  = P(Û ≠ U | H = 0)   (89b)
  = P(QC2,S(X ⊕ Z) ≠ QC1(X) | H = 0)   (89c)
  ≤ P(QC2,S(QC1(X) ⊕ N′ ⊕ Z) ≠ QC1(X) | H = 0)   (89d)
  = P(QC2,S(U ⊕ N′ ⊕ Z) ≠ U | H = 0)   (89e)
  = P(QC2,S(Y′) ≠ U | H = 0)   (89f)
  ≤ 2^{−n E_BSC(a ∗ p0, Rbin)},   (89g)

where (89d) is due to Corollary 7, the last inequality follows from the spectrum-goodness of the coarse code C2,
and Y′ was defined in (79a). Notice that the channel exponent is with respect to an i.i.d. noise, but it is easy to
show that the exponent for a mixed noise can only be better.
Lastly, for (87d) we have:

P(EC, Ŵ ≤ t | H = 1)   (90a)
  = P(Û ≠ U, Ŵ ≤ t | H = 1)   (90b)
  = P(Û ≠ U, Y ∈ Bn(Û, t) | H = 1)   (90c)
  = P( ∪_{c∈C2,S\{U}} {Û = c, Y ∈ Bn(c, t)} | H = 1)   (90d)
  = P(Û ≠ U, Y ∈ ∪_{c∈C2,S\{U}} Bn(c, t) | H = 1)   (90e)
  ≤ max { P(EC | H = 1), P(Y ∈ ∪_{c∈C2,S\{U}} Bn(c, t) | H = 1) }.   (90f)
Due to the spectrum-goodness of the coarse code C2, the first term in the maximization is exponentially upper-bounded by
2^{−n E_BSC(a ∗ p1, Rbin)}.
For the second term, we proceed as follows.
P(Y ∈ ∪_{c∈C2,S\{U}} Bn(c, t) | H = 1)   (91a)
  ≤ P(Y′ ∈ ∪_{c∈C2,S\{U}} Bn(c, t) | H = 1)   (91b)
  = P(Y′ ⊖ U ∈ ∪_{c∈C2,S\{U}} Bn(c ⊖ U, t) | H = 1)   (91c)
  = P(Y′ ⊖ U ∈ ∪_{c∈C2\{0}} Bn(c, t) | H = 1)   (91d)
  = P(N′ ⊕ Z ∈ ∪_{c∈C2\{0}} Bn(c, t) | H = 1)   (91e)
  ≤ Σ_{c∈C2\{0}} P(N′ ⊕ Z ∈ Bn(c, t) | H = 1)   (91f)
  = Σ_{nδGV(Rbin)≤j≤n} Σ_{c∈C2 : nδH(c)=j} P(N′ ⊕ Z ∈ Bn(c, t) | H = 1)   (91g)
  ≐ Σ_{nδGV(Rbin)≤j≤n} Σ_{c∈C2 : nδH(c)=j} 2^{−n EBB(p1, a, j/n, t)}   (91h)
  ≤ Σ_{nδGV(Rbin)≤j≤n} |C2| · ΓC2(j/n) · 2^{−n EBB(p1, a, j/n, t)}   (91i)
  ≤ Σ_{nδGV(Rbin)≤j≤n} 2^{n Rbin} · 2^{−n Db(j/n ‖ 1/2)} · 2^{−n EBB(p1, a, j/n, t)}   (91j)
  ≐ 2^{−n [−Rbin + min_{δGV(Rbin)<w≤1} (Db(w‖1/2) + EBB(p1, a, w, t))]}   (91k)
  = 2^{−n Ec(p1, a, t, Rbin)},   (91l)

where (91b) is due to Lemma 1, (91f) is due to the union bound, the lower limit of the outer summation in (91g) is
valid since spectrum-good codes have no non-zero codewords of lower weight, in (91j) we substituted the spectrum
of a spectrum-good code, and in (91k) we substituted w = j/n and used Laplace's method.
At this point we remark on the tightness of the analysis above.
Remark 3: There are two points where our error-probability analysis can be improved.
1) For the exponent of the probability of bin-decoding error (under both hypotheses) we used E BSC (a ∗ pi , Rbin ).
However, one may use the fact that the quantization noise is not Bernoulli but rather bounded by a sphere to derive
a larger exponent.
2) In (90e) we have the probability of a Bernoulli noise to fall within a Hamming ball around some non-zero
codeword, and also outside the basic Voronoi cell. In the transition to (90f) we bound this by the maximum
between the probabilities of being in the Hamming balls and being outside the basic Voronoi cell.
While solving the first point is straightforward (though cumbersome), the second point (exponentially tight evaluation
of (90e)) is an interesting open problem, currently under investigation. We conjecture that except for these two points,
our analysis of this specific scheme is exponentially tight.
Remark 4: In order to see that the encoder can be improved, consider the Stein-exponent bound obtained by
setting t = a ∗ p0 in (73b):

σX,1(RX) ≥ min { EBB(p1, a, 0, a ∗ p0), Ec(p1, a, a ∗ p0, RX) },   (92)

cf. the corresponding expression of the scheme by Shimokawa et al. (41a),

min { EBT(p1, a, 0, a ∗ p0), σSHA(RX, a) }.

Now, one can show that we have an improvement in the second term. However, clearly by (71), EBB(p1, a, 0, a ∗ p0) ≤
EBT(p1, a, 0, a ∗ p0). That is, quite counterintuitively, a quantization noise that always has weight a is better than
one that may be smaller. The reason is that a "too good" quantization noise may be confused by the decision
function with a low crossover probability between X and Y, favoring Ĥ = 0. In the Stein case, where we do not
care at all about the exponent of ǫ0, this is a negative effect. We can amend the situation by a simple tweak: the
encoder will be the same, except that when it detects a quantization noise whose weight is not close to a it will send a special
symbol forcing Ĥ = 1. It is not difficult to verify that this will yield the more favorable bound

σX,1(RX) ≥ min { EBT(p1, a, 0, a ∗ p0), Ec(p1, a, a ∗ p0, RX) }.   (93)
A similar process, where if the quantization noise is below some threshold Ĥ = 1 is declared, may also somewhat
extend the exponent region in the regime “close” to Stein (low E0 ), but we do not pursue this direction.
Remark 5: Of course, the two-stage decision process where Ĥ is a function of Ŵ is sub-optimal. It differs from
the Neyman-Pearson test that takes into account the probability of all possible values of W .
Remark 6: Using time sharing on top of the scheme, we can obtain improved performance according to Proposition 2.
Remark 7: In the special case a = 0 the scheme amounts to binning without quantization, and the nested code
may be replaced by a single spectrum-good code. In this case the expressions simplify considerably, and we have
the pair:

E(SI)0(0, t; RX) ≜ min { Db(t‖p0), E_BSC(p0, Rbin) },   (94a)
E(SI)1(0, t; RX) ≜ min { Db(t‖p1), Ec(p1, 0, t, Rbin) },   (94b)

where

Ec(p1, 0, t, Rbin) = max { −Rbin + min_{δGV(RX) < w ≤ 1} [Db(w‖1/2) + EBB(p1, 0, w, t)], E_BSC(p1, Rbin) },   (95)

and where EBB(p1, 0, w, t) is defined in (70).
VI. SYMMETRIC CONSTRAINT
In this section we proceed to a symmetric rate constraint RX = RY = R. In this part our analysis specifically
hinges on the linearity of codes, and specifically builds on the Körner-Marton coding scheme [18]. Adding this
ingredient to the analysis, we get an achievable exponent region for the symmetric constraint in the same spirit of
the achievable region in Theorem 1, where the only loss due to constraining RY is an additional spherical noise
component. Before stating our result, we give some background on the new ingredient.
A. Körner-Marton Compression
The Körner-Marton problem has the same structure as our DHT problem for the DSBS, except that the crossover
probability is known (say p), and the decision function is replaced by a decoder, whose goal is to reproduce the
difference sequence Z = Y ⊖ X with high probability. By considering the two corresponding one-sided constrained
problems, which amount to SI versions of the SW problem, it is clear that any rate R < Hb (p) is not achievable.
The Körner-Marton scheme allows one to achieve any rate R > Hb(p) in the following manner.
Assume that the two encoders use the same linear codebook, with parity-check matrix H. Further, both of them
send the syndrome of their observed sequence:
φX(X) = XH^T,   (96a)
φY(Y) = YH^T.   (96b)

In the first stage of the decoder, the two encoded vectors are summed up, leading to

φY(Y) ⊖ φX(X) = YH^T ⊖ XH^T   (97a)
              = ZH^T,   (97b)
which is but the syndrome of the difference sequence. This is indistinguishable from the situation of a decoder for
the SI SW problem.
The decoder is now in the exact same situation as an optimal decoder for a BSC with crossover probability p,
and code rate 1 − R. Since linear codes can approach the capacity 1 − Hb(p), the optimal rate
follows. Further, if spectrum-good codes are used, the exponent E BSC (p, 1 − R) is achievable, i.e., there is no loss
in the exponent w.r.t. the corresponding side-information SW problem.
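A toy sketch of ours (random small dimensions and an arbitrary noise level) of the identity (97) that drives the Körner-Marton scheme: the XOR of the two transmitted syndromes is exactly the syndrome of the noise Z, so the decision function sees what a Slepian-Wolf decoder for Z would see.

    import random

    def syndrome(H, x):
        return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

    def xor(a, b):
        return tuple(ai ^ bi for ai, bi in zip(a, b))

    n, m = 12, 5                                               # arbitrary toy dimensions
    H = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(m)]
    x = tuple(random.randint(0, 1) for _ in range(n))
    z = tuple(int(random.random() < 0.1) for _ in range(n))    # Ber(0.1) noise, arbitrary
    y = xor(x, z)

    # The XOR of the two syndromes equals the syndrome of Z, as in (97).
    assert xor(syndrome(H, y), syndrome(H, x)) == syndrome(H, z)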
B. A New Bound
We now present an achievability result that relies upon a very simple principle: as in the Körner-Marton decoder,
after performing the XOR (97) the situation is indistinguishable from that at the input of a SW decoder, so also in
DHT we can apply the same decision procedure as in the one-sided case without quantization.
Theorem 2 (Binary Hypothesis Testing with Symmetric Constraint): Consider the hypothesis testing problem as
defined in Section II-A for the DSBS, with a symmetric rate constraint R ∈ [0, 1]. For any parameter t ∈ [p0 , p1 ],
( E(KM)0(t; R), E(KM)1(t; R) ) ∈ C(R),   (98)

where

E(KM)0(t; R) = E(SI)0(0, t; R),   (99a)
E(KM)1(t; R) = E(SI)1(0, t; R),   (99b)

where the one-sided constraint exponents with a = 0 are given in (94).
Proof: Let the codebook be taken from a spectrum-good sequence. Let the encoders be the Körner-Marton
encoders (96). The decision function first obtains ZH^T as in the Körner-Marton decoder (97), then evaluates

Ẑ = fC(ZH^T),

and applies the threshold function to

W′ ≜ δH(Ẑ).

Noticing that W′ is equal in distribution to Ŵ in the proof of Theorem 1 when a = 0, and that all error events
are functions of that variable only, the proof is completed.
It is natural to ask why we restrict ourselves under a symmetric rate constraint to a binning-only scheme. Indeed,
lossy versions of the Körner-Marton problem have been studied in [35, 36]. One can construct a scheme based on
nested codes, obtain a lossy reconstruction of the noise sequence Z and then test its weight. However, unlike the
reconstruction in the single-sided case which includes a Bernoulli component (Z) and a quantization noise bounded
by a Hamming ball (N), in the symmetric-rate case we will have a combination of a Bernoulli component with two
quantization noises. This makes the analysis considerably more involved. An idea that comes to mind, is to obtain
a bound by replacing at least one of the quantization noises with a Bernoulli one; however, we do not see a clear
way to do that. Thus, improving Theorem 2 by introducing quantization is left for future research.
VII. PERFORMANCE COMPARISON
In this section we provide a numerical comparison of the different bounds for the DSBS.
We start with the Stein setting, where we can compare with the previously-known results. Our achievable performance for the one-sided constrained case is given by (93) (which coincides with (92) for the parameters we checked).
We compare it against the unconstrained performance, and against the previously best-known achievable exponent,
namely the maximum between Corollaries 4 and 5.11 To both, we also apply time sharing as in Proposition 2. It can
be seen that the new exponent is at least as good, with slight improvement for some parameters. As reference, we
show the unconstrained performance, given by Corollary 1. Also shown on the plots is the performance obtained
under a symmetric rate constraint, found by constraining the quantization parameter to a = 0. It can be seen that
for low p1 the symmetric constraint yields no loss with respect to the one-sided constraint.
Fig. 2. Stein exponent comparison. The rate is 0.3 bits, the X axis is p0. From top to bottom: unconstrained performance, new exponent,
previously known exponent, new exponent without quantization (achievable with symmetric rate). (a) p1 = 0.25. (b) p1 = 0.1; the three top curves coincide.
Beyond the Stein setting, we plot the full exponent tradeoff, achievable by Theorems 1 and 2. In this case we are
not aware of any previous results we can compare to. We thus only add the unconstrained tradeoff of Corollary 1,
and the simple strategy of Corollary 2. Also here it can be seen that the symmetric constraint imposes a loss for
high p1 , but not for a lower one.
VIII. CONCLUSIONS
In this work we introduced new achievable error exponents for distributed binary hypothesis testing with binary
symmetric i.i.d. sources. One may naturally wonder about the extension beyond the binary symmetric case.
In that respect, a distinction should be made between two parts of the work. Under a one-sided rate constraint,
linear codes were used merely for concreteness and for convenience of extension to the symmetric constraint; the
same results should hold for a random code ensemble. Thus, there is no fundamental problem in extending our
analysis to any discrete memoryless model.
11 In [7], the performance is evaluated using an asymmetric choice of the auxiliary variable. We have verified that the symmetric choice we
use performs better.
Fig. 3. Exponent tradeoff comparison (E1 versus E0). The rate is 0.3 bits, p0 = 0.01. From top to bottom: unconstrained tradeoff, new tradeoff,
new tradeoff without quantization (achievable with symmetric rate), time-sharing only tradeoff. (a) p1 = 0.25. (b) p1 = 0.1; the new tradeoff curves with and without quantization coincide.
In contrast, in the setting of a symmetric rate constraint, we explicitly use the “matching” between the closedness
under addition of linear codes and the additivity of the relation between the sources. Thus, our approach, which
has almost no loss with respect to the single-sided constraint, cannot be extended beyond (nearly) additive cases.
The question of whether a different approach can achieve this remains open.
Finally, we stress again the lack of tight outer bounds for the problem, except for cases where the communication
constraints do not limit the exponents.
ACKNOWLEDGMENT
The authors thank Uri Erez for sharing insights throughout the work. They also thank Vincent Y. F. Tan for
introducing them to the distributed hypothesis testing problem, and Nir Weinberger for helpful discussions.
APPENDIX A
EXPONENT OF A HAMMING BALL
In this appendix we evaluate the exponent of the event of a mixed noise entering a Hamming ball, namely
EBT (p, a, w, t) of Definition 6.
Lemma 3:

EBT(p, a, w, t) = min_{γ∈[max(0,a+w−1), min(w,a)]} { Hb(a) − w Hb(γ/w) − (1 − w) Hb((a − γ)/(1 − w))
                  + Ew(p, 1 − (w + a − 2γ), w + a − 2γ, t − (w + a − 2γ)) },   (100)

where

Ew(p, α, β, t) ≜ min_{x∈[max(0,t), min(α, β+t)]} [ α Db(x/α ‖ p) + β Db((x − t)/β ‖ p) ].   (101)
The proof follows from the lemmas below.
Lemma 4 (Difference of weights): Let Z1 and Z2 be two random vectors. Assume that Zi ∼ BerV(ki, p) and
let Wi = wH(Zi) for i = 1, 2. Let ki grow with n such that

lim_{n→∞} k1/n = α,   lim_{n→∞} k2/n = β,

and further let a sequence t grow with n such that

lim_{n→∞} t/n = τ,

where τ ∈ (−β, α). Then,

lim_{n→∞} −(1/n) log P(W1 − W2 = t) = Ew(p, α, β, τ),

where Ew(p, α, β, τ) is given by (101).
Proof:

P(W1 − W2 = t) = Σ_{w=0}^{k1} P(W1 = w) P(W2 = w − t)   (102a)
  = Σ_{w=max(0,t)}^{min(k1, k2+t)} P(W1 = w) P(W2 = w − t)   (102b)
  ≐ Σ_{w=max(0,t)}^{min(k1, k2+t)} 2^{−k1 Db(w/k1 ‖ p) − k2 Db((w−t)/k2 ‖ p)}   (102c)
  ≐ max_{w∈[max(0,t), . . . , min(k1, k2+t)]} 2^{−k1 Db(w/k1 ‖ p) − k2 Db((w−t)/k2 ‖ p)}   (102d)
  ≐ max_{x∈[max(0,τ), min(α, β+τ)]} 2^{−n[α Db(x/α ‖ p) + β Db((x−τ)/β ‖ p)]},   (102e)

where (102c) follows by the exponent of the probability of a type class.
The following lemma will assist in proving Lemma 6 which follows.
Lemma 5 (Mixed noise, fixed dimension): Let n ∈ N, and consider a noise that is a mixture of a noise U ∼
Uniform(Tn(a)) (uniform over a fixed type) and a Bernoulli vector Z ∼ BerV(n, p), where a ∈ (1/n)·{0, . . . , n}
and p ∈ [0, 1]. Further let c ∈ {0, 1}^n, where w = δH(c) ∈ (1/n)·{0, . . . , n}, be the center of a "distant" sphere.
Then, for a sphere with normalized radius t ∈ (1/n)·{0, . . . , n},

P(wH(c ⊕ U ⊕ Z) = nt) = Σ_{m=n·max(0,a+w−1)}^{n·min(w,a)} [ C(nw, m) C(n − nw, na − m) / C(n, na) ] · P(W1,m − W2,m = nt − nw − na + 2m),   (103)

where C(·, ·) denotes the binomial coefficient, W1,m ∼ Binomial(n − (nw + na − 2m), p) and W2,m ∼ Binomial(nw + na − 2m, p).
Proof: Define the following sets of indices I, M1, M2:

I ≜ {i : ci = 1},   (104a)
M1 ≜ {i ∈ I : Ui = 1},   (104b)
M2 ≜ {i ∈ Ī : Ui = 1}.   (104c)

In words, I is the set of indices where c contains ones; M1 is the subset (within I) where U contains ones, and
M2 is defined similarly over the complement Ī of I.
Then,

P(wH(c ⊕ U ⊕ Z) = nt)
  = P(wH(cI ⊕ UI ⊕ ZI) + wH(cĪ ⊕ UĪ ⊕ ZĪ) = nt)   (105a)
  = P(nw − wH(UI ⊕ ZI) + wH(UĪ ⊕ ZĪ) = nt)   (105b)
  = P(nw − [wH(UM1 ⊕ ZM1) + wH(UI\M1 ⊕ ZI\M1)] + [wH(UM2 ⊕ ZM2) + wH(UĪ\M2 ⊕ ZĪ\M2)] = nt)   (105c)
  = P(nw − [M1 − wH(ZM1) + wH(ZI\M1)] + [M2 − wH(ZM2) + wH(ZĪ\M2)] = nt)   (105d)
  = P(wH(ZM1) − wH(ZI\M1) − wH(ZM2) + wH(ZĪ\M2) = n(t − w) + M1 − M2)   (105e)
  = P(wH(ZM1) − wH(ZI\M1) − wH(ZM2) + wH(ZĪ\M2) = n(t − w − a) + 2M1)   (105f)
  = Σ_{m=n·max(0,a+w−1)}^{n·min(w,a)} P(M1 = m) · P(wH(ZM1) − wH(ZI\M1) − wH(ZM2) + wH(ZĪ\M2) = n(t − w − a) + 2m | M1 = m)   (105g)
  = Σ_{m=n·max(0,a+w−1)}^{n·min(w,a)} [ C(nw, m) C(n − nw, na − m) / C(n, na) ] · P(W1,m − W2,m = n(t − w − a) + 2m),   (105h)

where (105f) follows since |I| = nw, by denoting M1 ≜ |M1|, M2 ≜ |M2| and noting that M1 + M2 = na;
equality (105g) follows since M1 ≤ na and M1 ≤ nw, and since M2 ≤ na and M2 ≤ n(1 − w) (therefore
M1 ≥ n(a + w − 1)); and equality (105h) follows by denoting W1,m ∼ Binomial(n − (nw + na − 2m), p) and
W2,m ∼ Binomial(nw + na − 2m, p).
Lemma 6 (Mixed noise, asymptotic dimension): Consider a sequence of problems as in Lemma 5 indexed by the
blocklength n, with parameters an → a, wn → w and tn → t. Then:

lim_{n→∞} −(1/n) log P(wH(cn ⊕ Un ⊕ Zn) = ntn) = EBT(p, a, w, t).   (106)
Proof: A straightforward calculation shows that

P(wH(cn ⊕ Un ⊕ Zn) = ntn)   (107a)
  ≐ Σ_{m=n·max(0,an+wn−1)}^{n·min(wn,an)} 2^{n wn Hb((m/n)/wn)} · 2^{n(1−wn) Hb((an−m/n)/(1−wn))} · 2^{−n Hb(an)}
      · 2^{−n Ew(p, 1−(wn+an−2m/n), wn+an−2m/n, tn−(wn+an−2m/n))}   (107b)
  ≐ max_{m∈[max(0,a+w−1), min(w,a)]} 2^{−n[−w Hb(m/w) − (1−w) Hb((a−m)/(1−w)) + Hb(a)]} · 2^{−n Ew(p, 1−(w+a−2m), w+a−2m, t−(w+a−2m))}.   (107c)
The proof of Lemma 3 now follows:

P(wH(cn ⊕ Un ⊕ Zn) ≤ ntn) = Σ_{τ=0}^{ntn} P(wH(cn ⊕ Un ⊕ Zn) = τ)   (108a)
  ≐ Σ_{τ=0}^{ntn} 2^{−n EBT(p, a, w, τ/n)}   (108b)
  ≐ max_{τ∈[0,t]} 2^{−n EBT(p, a, w, τ)}   (108c)
  = 2^{−n min_{τ∈[0,t]} EBT(p, a, w, τ)}.   (108d)
APPENDIX B
QUANTIZATION-NOISE PROPERTIES OF GOOD CODES
In this section we prove Lemma 1 and Corollary 7, which contain the properties of good codes that we need for
deriving our achievable exponents.
First we define the covering efficiency of a code C as:

η(C) ≜ |Bn(0, ρcover(C))| / |Ω0|.   (109)
Lemma 7: Consider a covering-good sequence of codes C(n) ⊆ {0, 1}^n, n = 1, 2, . . . of rate R. Then,

η(C(n)) ≐ 1.   (110)

Proof: Since for all c ∈ C(n) we have that |Ωc| = |Ω0|, it follows that

|Ω0| ≐ 2^{n(1−R)}.   (111)

Therefore,

η(C(n)) ≐ |Bn(0, ρcover(C(n)))| · 2^{−n(1−R)}   (112a)
  ≐ 2^{n Hb(ρcover(C(n)))} · 2^{−n(1−R)}   (112b)
  = 2^{n[Hb(ρcover(C(n))) − (1−R)]}   (112c)
  = 2^{n[Hb(ρcover(C(n))) − Hb(δGV(R))]}   (112d)
  ≐ 1,   (112e)

where the last asymptotic equality is due to (60) and due to the continuity of the entropy.
Proof of Lemma 1: Since for any v ∉ Ω0,

P(X ⊖ QC(n)(X) = v) = 0   (113a)
  ≤ P(N = v),   (113b)

it is left to consider points v ∈ Ω0. To that end, since the code is linear, due to symmetry

X ⊖ QC(n)(X) ∼ Uniform(Ω0).   (114)

Thus, for v ∈ Ω0,

P(X ⊖ QC(n)(X) = v) / P(N = v) = |Ω0|⁻¹ / |Bn(0, ρcover(C(n)))|⁻¹   (115a)-(115b)
  = η(C(n))   (115c)
  ≐ 1,   (115d)

where the last (asymptotic) equality is due to Lemma 7. This completes the first part (note that the rate of convergence
is independent of v, and therefore the convergence is uniform over v ∈ Ω0). The second part follows by the linearity
of convolution: for any v ∈ {0, 1}^n,

P(X ⊖ QC(n)(X) ⊕ Z = v)   (116a)
  = Σ_{z∈{0,1}^n} PZ(z) P(X ⊖ QC(n)(X) ⊕ z = v)   (116b)
  = Σ_{z∈{0,1}^n} PZ(z) P(X ⊖ QC(n)(X) = v ⊖ z)   (116c)
  ≤̇ Σ_{z∈{0,1}^n} PZ(z) P(N = v ⊖ z)   (116d)
  = P(N ⊕ Z = v),   (116e)

where the (asymptotic) inequality is due to the first part of the lemma.
Proof of Corollary 7: Let N = X ⊖ U. Note that since the code is linear and X is uniform over {0, 1}^n, it
follows that N is independent of the pair (U, Z). Thus,

P(QC2,S(n)(X ⊕ Z) ≠ U)   (117a)
  = P(QC2,S(n)(U ⊕ [X ⊖ U] ⊕ Z) ≠ U)   (117b)
  = P(QC2,S(n)(U ⊕ N ⊕ Z) ≠ U).   (117c)

Recalling the definition of N′ in (79b), we have

P(QC2,S(n)(U ⊕ N ⊕ Z) ≠ U)   (118a)
  = EU [ P(QC2,S(n)(u ⊕ N ⊕ Z) ≠ u | U = u) ]   (118b)-(118c)
  ≤ EU [ P(QC2,S(n)(u ⊕ N′ ⊕ Z) ≠ u | U = u) ]   (118d)
  = P(QC2,S(n)(U ⊕ N′ ⊕ Z) ≠ U),   (118e)

where (118d) follows by applying Lemma 1 for each u.
APPENDIX C
EXISTENCE OF GOOD NESTED CODES
In this appendix we prove the existence of a sequence of good nested codes, as defined in Definition 15. To that
end, we first state known results on the existence of spectrum-good and covering-good codes.
By [24], random linear codes are spectrum-good with high probability. That is, let C(n) be a linear code of
blocklength n and rate R, with a generating matrix G(n) drawn i.i.d. Bernoulli-1/2. Then there exist some sequences
ǫS(n) and δS(n) approaching zero, such that for all w,

P( ΓC(n)(w) > (1 + δS(n)) Γ(n)R(w) ) ≤ ǫS(n).   (119)
As for covering-good codes, a construction based upon random linear codes is given in [25]. For a generating matrix G^{(n)} at blocklength n, a procedure is given to generate a new matrix G_I^{(n)} = G_I(G^{(n)}), with k_I^{(n)} = \lceil \log n \rceil rows. The matrices are combined in the following way:

G'^{(n)} = \begin{bmatrix} G_I^{(n)} \\ G^{(n)} \end{bmatrix},
\qquad G_I^{(n)}: k_I^{(n)} \times n, \quad G^{(n)}: k^{(n)} \times n, \quad G'^{(n)}: k'^{(n)} \times n.    (120)
Clearly, adding G_I does not affect the rate of a sequence of codes. Let C^{(n)} be constructed by this procedure, with G^{(n)} drawn i.i.d. Bernoulli-1/2. It is shown in [25, Theorem 12.3.5] that there exist sequences \epsilon_C^{(n)} and \delta_C^{(n)} approaching zero, such that

P\left(\rho_{\mathrm{cover}}(C^{(n)}) > (1+\delta_C^{(n)})\,\delta_{\mathrm{GV}}(R)\right) \le \epsilon_C^{(n)}.    (121)
A nested code of blocklength n and rates (R, R_{\mathrm{bin}}) has a generating matrix

G^{(n)} = \begin{bmatrix} \tilde{G}^{(n)} \\ G_{\mathrm{bin}}^{(n)} \end{bmatrix},
\qquad \tilde{G}^{(n)}: \tilde{k}^{(n)} \times n, \quad G_{\mathrm{bin}}^{(n)}: k_{\mathrm{bin}}^{(n)} \times n, \quad G^{(n)}: k^{(n)} \times n.    (122)

That is, G^{(n)} and G_{\mathrm{bin}}^{(n)} are the generating matrices of the fine and coarse codes, respectively.
We can now construct good nested codes in the following way. We start with random nested codes of rate (R, R_{\mathrm{bin}}). We interpret the random matrix G^{(n)} as consisting of matrices \tilde{G}^{(n)} and G_{\mathrm{bin}}^{(n)} as above, both i.i.d. Bernoulli-1/2. We now add G_I^{(n)} = G_I(G^{(n)}) as in the procedure of [25], to obtain the following generating matrix:

G'^{(n)} = \begin{bmatrix} G_I^{(n)} \\ \tilde{G}^{(n)} \\ G_{\mathrm{bin}}^{(n)} \end{bmatrix},
\qquad G_I^{(n)}: k_I^{(n)} \times n, \quad \tilde{G}^{(n)}: \tilde{k}^{(n)} \times n, \quad G_{\mathrm{bin}}^{(n)}: k_{\mathrm{bin}}^{(n)} \times n, \quad G'^{(n)}: k'^{(n)} \times n.    (123)

Now we construct the fine and coarse codes using the matrices G'^{(n)} and G_{\mathrm{bin}}^{(n)}, respectively; the rate pair does not change due to the added matrices. By construction, the fine and coarse codes satisfy (121) and (119), respectively. Thus, by the union bound, the covering property (with \delta_C^{(n)}) and the spectrum property (with \delta_S^{(n)}) are satisfied simultaneously with probability at least 1 - \epsilon_C^{(n)} - \epsilon_S^{(n)}. We can thus construct a sequence of good nested codes as desired.
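The stacking in (122)-(123) can be illustrated with the following Python sketch. It is not the construction of [25]: the augmentation G_I(G^{(n)}) is abstracted as a placeholder returning \lceil \log_2 n \rceil random rows, and the mapping from rates to row counts (k = nR, k_bin = nR_bin) is an assumed convention.

# Illustrative sketch (not from the paper): assembling the stacked generating
# matrices of (122)-(123) for a random nested code over GF(2). The [25]
# augmentation G_I(G) is abstracted here as a placeholder returning
# ceil(log2 n) random rows; the actual procedure is not reproduced.
import numpy as np

def random_binary_matrix(rows: int, cols: int, rng) -> np.ndarray:
    """i.i.d. Bernoulli-1/2 entries over GF(2)."""
    return rng.integers(0, 2, size=(rows, cols), dtype=np.uint8)

def augmentation_rows(G: np.ndarray, rng) -> np.ndarray:
    """Placeholder for G_I(G) of [25]: ceil(log2 n) extra rows."""
    n = G.shape[1]
    k_I = int(np.ceil(np.log2(n)))
    return random_binary_matrix(k_I, n, rng)

def build_nested_generators(n: int, R: float, R_bin: float, seed: int = 0):
    """Return (G_fine, G_coarse) as in (123): fine = [G_I; G_tilde; G_bin]."""
    rng = np.random.default_rng(seed)
    k = int(round(n * R))               # assumed convention: nR generator rows
    k_bin = int(round(n * R_bin))
    k_tilde = k - k_bin
    G_bin = random_binary_matrix(k_bin, n, rng)         # coarse code generator
    G_tilde = random_binary_matrix(k_tilde, n, rng)
    G = np.vstack([G_tilde, G_bin])                     # nested generator (122)
    G_I = augmentation_rows(G, rng)
    G_fine = np.vstack([G_I, G_tilde, G_bin])           # augmented generator (123)
    return G_fine, G_bin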
REFERENCES
[1] T. Berger, “Decentralized estimation and decision theory,” in The IEEE 7th Spring Workshop Inf. Theory, Mt.
Kisco, NY, Sep. 1979.
[2] R. Ahlswede and I. Csiszár, “To get a bit of information may be as hard as to get full information,” IEEE
Trans. Information Theory, vol. 27, no. 4, pp. 398–408, July 1981.
[3] H. M. H. Shalaby and A. Papamarcou, “Multiterminal detection with zero-rate data compression,” IEEE Trans.
Information Theory, vol. 38, no. 2, pp. 254–267, Mar. 1992.
[4] R. Ahlswede and I. Csiszár, “Hypothesis testing with communication constraints,” IEEE Trans. Information
Theory, vol. 32, no. 4, pp. 533–542, July 1986.
[5] T. S. Han, “Hypothesis testing with multiterminal data compression,” IEEE Trans. Information Theory, vol. 33,
no. 6, pp. 759–772, Nov. 1987.
[6] H. Shimokawa, T. S. Han, and S. Amari, “Error bound of hypothesis testing with data compression,” in Proc.
Int. Symp. Info. Theory (ISIT), June 1994, p. 114.
[7] H. Shimokawa, “Hypothesis testing with multiterminal data compression,” Master’s thesis, Feb. 1994.
[8] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Information
Theory, vol. 19, pp. 471–480, July 1973.
[9] A. D. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,”
IEEE Trans. Information Theory, vol. 22, pp. 1–10, Jan. 1976.
[10] M. Rahman and A. Wagner, “On the optimality of binning for distributed hypothesis testing,” IEEE Trans.
Information Theory, vol. 58, no. 10, pp. 6282–6303, Oct. 2012.
[11] T. S. Han and K. Kobayashi, “Exponential-type error probabilities for multiterminal hypothesis testing,” IEEE
Trans. Information Theory, vol. 35, no. 1, pp. 2–14, Jan. 1989.
[12] S. Amari, “On optimal data compression in multiterminal statistical inference,” IEEE Trans. Information
Theory, vol. 57, no. 9, pp. 5577–5587, Sep. 2011.
[13] Y. Polyanskiy, “Hypothesis testing via a comparator,” in Proc. Int. Symp. Info. Theory (ISIT), July 2012, pp.
2206–2210.
[14] G. Katz, P. Piantanida, and M. Debbah, “Distributed binary detection with lossy data compression,” IEEE
Trans. Information Theory, vol. 63, no. 8, pp. 5207–5227, Aug 2017.
[15] ——, “A new approach to distributed hypothesis testing,” in 2016 50th Asilomar Conference on Signals,
Systems and Computers, Nov. 2016, pp. 1365–1369.
[16] T. S. Han and S. Amari, “Statistical inference under multiterminal data compression,” IEEE Trans. Information
Theory, vol. 44, no. 6, pp. 2300–2324, Oct. 1998.
[17] G. D. Forney, Jr., “Exponential error bounds for erasure, list, and detection feedback schemes,” IEEE Trans.
Information Theory, vol. 14, pp. 206–220, Mar. 1968.
[18] J. Körner and K. Marton, “How to encode the modulo-two sum of binary sources,” IEEE Trans. Information
Theory, vol. 25, pp. 219–221, Mar. 1979.
[19] H. Shimokawa and S. Amari, “Multiterminal estimation theory with binary symmetric source,” in Proc. Int.
Symp. Info. Theory (ISIT), Sep. 1995, p. 447.
[20] M. El Gamal and L. Lai, “Are Slepian-Wolf rates necessary for distributed parameter estimation?” CoRR,
vol. abs/1508.02765, 2015. [Online]. Available: http://arxiv.org/abs/1508.02765
[21] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[22] J. Neyman and E. S. Pearson, “The testing of statistical hypotheses in relation to probabilities a priori,”
Mathematical Proceedings of the Cambridge Philosophical Society, vol. 29, pp. 492–510, 10 1933. [Online].
Available: http://journals.cambridge.org/article S030500410001152X
[23] A. B. Wagner, B. G. Kelly, and Y. Altuğ, “Distributed rate-distortion with common components,” IEEE Trans.
Information Theory, vol. 57, no. 7, pp. 4035–4057, July 2011.
[24] R. G. Gallager, Information Theory and Reliable Communication. John Wiley & Sons, 1968.
[25] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein, Covering Codes. Elsevier (North Holland Publishing Co.),
1997.
[26] R. Zamir, S. Shamai, and U. Erez, “Nested linear/lattice codes for structured multiterminal binning,” IEEE
Trans. Information Theory, vol. 48, pp. 1250–1276, June 2002.
[27] A. D. Wyner, “Recent results in the Shannon theory,” IEEE Trans. Information Theory, vol. 20, no. 1, pp. 2–10, Jan. 1974.
[28] I. Csiszár and J. Körner, “Towards a general theory of source networks,” IEEE Trans. Information Theory, vol. 26, no. 2, pp. 155–165, Mar. 1980.
[29] I. Csiszár, “Linear codes for sources and source networks: Error exponents, universal coding,” IEEE Trans.
Information Theory, vol. 28, no. 4, pp. 585–592, July 1982.
[30] E. Haim, Y. Kochman, and U. Erez, “Distributed structure: Joint expurgation for the multiple-access channel,”
IEEE Trans. Information Theory, vol. 63, no. 1, pp. 5–20, Jan. 2017.
[31] J. Chen, D.-K. He, A. Jagmohan, and L. A. Lastras-Montaño, “On the reliability function of variable-rate Slepian-Wolf coding,” Entropy, vol. 19, no. 8, 2017. [Online]. Available: http://www.mdpi.com/1099-4300/19/8/389
[32] ——, “On the reliability function of variable-rate Slepian-Wolf coding,” in Proceedings of the 45th annual
Allerton Conference on Communication, Control and Computing, Sep. 2007.
[33] N. Weinberger and N. Merhav, “Optimum trade-offs between error exponent and excess-rate exponent of
Slepian-Wolf coding,” in Proc. Int. Symp. Info. Theory (ISIT), June 2015, pp. 1565–1569.
[34] B. G. Kelly and A. B. Wagner, “Reliability in source coding with side information,” IEEE Trans. Information
Theory, vol. 58, no. 8, pp. 5086–5111, Aug. 2012.
[35] D. Krithivasan and S. S. Pradhan, “Distributed source coding using Abelian group codes: A new achievable
rate-distortion region,” IEEE Trans. Information Theory, vol. 57, no. 3, pp. 1495–1519, Mar. 2011.
[36] A. B. Wagner, “On distributed compression of linear functions,” IEEE Trans. Information Theory, vol. 57,
no. 1, pp. 79–94, Jan. 2011.
| 7 |
arXiv:1511.07607v2 [cs.MM] 27 Sep 2017
Fine-Grain Annotation of Cricket Videos
Rahul Anand Sharma
CVIT, IIIT-Hyderabad
Hyderabad, India
Pramod Sankar K.
Xerox Research Center India
Bengaluru, India
C. V. Jawahar
CVIT, IIIT-Hyderabad
Hyderabad, India
[email protected]
[email protected]
[email protected]
Abstract
The recognition of human activities is one of the key
problems in video understanding. Action recognition is
challenging even for specific categories of videos, such as
sports, that contain only a small set of actions. Interestingly, sports videos are accompanied by detailed commentaries available online, which could be used to perform action annotation in a weakly-supervised setting. For the specific case of Cricket videos, we address the challenge of
temporal segmentation and annotation of actions with semantic descriptions. Our solution consists of two stages.
In the first stage, the video is segmented into “scenes”, by
utilizing the scene category information extracted from text-commentary. The second stage consists of classifying video-shots as well as the phrases in the textual description into
various categories. The relevant phrases are then suitably
mapped to the video-shots. The novel aspect of this work
is the fine temporal scale at which semantic information is
assigned to the video. As a result of our approach, we enable retrieval of specific actions that last only a few seconds, from several hours of video. This solution yields a
large number of labelled exemplars, with no manual effort,
that could be used by machine learning algorithms to learn
complex actions.
Batsman: Gambhir
Description: “he pulls it from outside off stump and just
manages to clear the deep square leg rope”
Figure 1. The goal of this work is to annotate Cricket videos with
semantic descriptions at a fine-grain spatio-temporal scale. In this
example, the batsman action of a “pull-shot”, a particular manner
of hitting the ball, is labelled accurately as a result of our solution.
Such a semantic description is impossible to obtain using current
visual-recognition techniques alone. The action shown here lasts
a mere 35 frames (1.2 seconds).
1. Introduction
The labeling of human actions in videos is a challenging problem for computer vision systems. There are three difficult tasks that need to be solved to perform action recognition: 1) identification of the sequence of frames that involve an action being performed, 2) localisation of the person performing the action and 3) recognition of the pixel information to assign a semantic label. While each of these tasks could be solved independently, there are few robust solutions for their joint inference in generic videos.
Certain categories of videos, such as Movies, News feeds, Sports videos, etc. contain domain specific cues that could be exploited towards better understanding of the visual content. For example, the appearance of a Basketball court [10] could help in locating and tracking players and their movements. However, current visual recognition solutions have only seen limited success towards fine-grain action classification. For example, it is difficult to automatically distinguish a “forehand” from a “half-volley” in Tennis. Further, automatic generation of semantic descriptions is a much harder task, with only limited success in the image domain [4].
Instead of addressing the problem using visual analysis alone, several researchers proposed to utilize relevant parallel information to build better solutions [2]. For example, the scripts available for movies provide a weak supervision to perform person [2] and action recognition [6]. Similar parallel text in sports was previously used to detect events in soccer videos, and index them for efficient retrieval [11, 12]. Gupta et al. [3] learn a graphical model from annotated baseball videos that could then be used to generate captions automatically. Their generated captions, however, are not semantically rich. Lu et al. [7] show that the weak supervision from parallel text results in superior player identification and tracking in Basketball videos.
In this work, we aim to label the actions of players in Cricket videos using parallel information in the form of online text-commentaries [1]. The goal is to label the video at the shot-level with the semantic descriptions of actions and activities.
[Figure 2 shot sequence: Bowler Run-Up; Batsman Stroke; Ball-in-the-air (Sky); Umpire Signal; Replay; Crowd Reaction]
Figure 2. Typical visuals observed in a scene of a Cricket match. Each event begins with the Bowler running to throw the ball, which is hit
by the Batsman. The event unfolds depending on the batsman’s stroke. The outcome of this particular scene is 6-Runs. While the first few
shots contain the real action, the rest of the visuals have little value in post-hoc browsing.
Figure 3. An example snippet of commentary obtained from
Cricinfo.com. The commentary follows the format: event number
and player involved along with the outcome (2 runs). Following
this is the descriptive commentary: (red) bowler actions, (blue)
batsman action and (green) other player actions.
Two challenges need to be addressed towards
this goal. Firstly, the visual and textual modalities are not
aligned, i.e. we are given a few pages of text for a four-hour video with no other synchronisation information. Secondly, the text-commentaries are very descriptive and assume that the person reading them understands the keywords being used. Bridging this semantic gap over
video data is much tougher than, for example, images and
object categories.
1.1. Our Solution
We present a two-stage solution for the problem of fine-grained Cricket video segmentation and annotation. In the
first stage, the goal is to align the two modalities at a
“scene” level. This stage consists of a joint synchronisation
and segmentation of the video with the text commentary.
Each scene is a meaningful event that is a few minutes long
(Figure 2), and described by a small set of sentences in the
commentary (Figure 3). The solution for this stage is inspired by the approach proposed in [8], and presented in
Section 2.
Given the scene segmentation and the description for
each scene, the next step is to align the individual descriptions with their corresponding visuals. At this stage,
the alignment is performed between the video-shots and
phrases of the text commentary. This is achieved by classifying video-shots and phrases into a known set of categories, which allows them to be mapped easily across the
modalities, as described in Section 3. As an outcome of
this step, we could obtain fine-grain annotation of player
actions, such as those presented in Figure 1.
Our experiments, detailed in Section 4, demonstrate that
the proposed solution is sufficiently reliable to address
this seemingly challenging task. The annotation of the
videos allow us to build a retrieval system that can search
across hundreds of hours of content for specific actions that
last only a few seconds. Moreover, as a consquence of
this work, we generate a large set of fine-grained labelled
videos, that could be used to train action recognition solutions.
2. Scene Segmentation
A typical scene in a Cricket match follows the sequence
of events depicted in Figure 2. A scene always begins with
the bowler (similar to a pitcher in Baseball) running towards
and throwing the ball at the batsman, who then plays his
stroke. The events that follow vary depending on this action. Each such scene is described in the text-commentary
as shown in Figure 3. The commentary consists of the event
number, which is not a time-stamp; the player names, which
are hard to recognise; and detailed descriptions that are hard
to automatically interpret.
It was observed in [8] that the visual-temporal patterns
of the scenes are conditioned on the outcome of the event.
In other words, a 1-Run outcome is visually different from
a 4-Run outcome. This can be observed from the statetransition diagrams in Figure 4. In these diagrams, the shots
of the video are represented by visual categories such as
ground, sky, play-area, players, etc. The sequence of the
shot-categories is represented by the arrows across these
states. One can observe that for a typical Four-Runs video,
the number of shots and their transitions are a lot more complex than that of a 1-Run video. Several shot classes such as
replay are typically absent for a 1-Run scene, while a replay
is expected as the third or fourth shot in a Four-Runs scene.
While the visual model described above could be useful in recognizing the scene category for a given video segment, it cannot be immediately used to spot the scene in a
full-length video. Conversely, the temporal segmentation of
the video is not possible without using an appropriate model
for the individual scenes themselves. This chicken-and-egg
problem can be solved by utilizing the scene-category information from the parallel text-commentary.
Let us say Fi represents one of the N frames and Sk represents the category of the k-th scene. The goal of the scene
segmentation is to identify anchor frames Fi , Fj , which are
most likely to contain the scene Sk.
Figure 4. State transition diagrams for two scene categories: (left) One Run and (right) Four-Runs. Each shot is classified into one of the
given states. Only the prominent state-transitions are shown; each transition is associated with a probability (not shown for brevity). Notice
how the one-run scene includes only a few states and transitions, while the four-run model involves a variety of visuals. However, the “sky”
state is rarely visited in a Four, but is typically seen in Six-Runs and Out models.
The optimal segmentation of the video can be defined by the recursive function

C(F_i, S_k) = \max_{j \in [i+1, N]} \left\{ p(S_k \mid [F_i, F_j]) + C(F_{j+1}, S_{k+1}) \right\},
where p(Sk | [Fi, Fj]) is the probability that the sequence of frames [Fi, Fj] belongs to the scene category Sk. This
probability is computed by matching the learnt scene models with the sequence of features for the given frame set.
This optimisation can be solved using Dynamic Programming (DP). The optimal solution is found by backtracking the DP matrix, which provides the scene anchor
points FS1 , FS2 , ..., FSK .
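A compact sketch of this dynamic program is given below. This is illustrative Python rather than the authors' implementation: scene_likelihood stands in for the learnt scene models p(Sk | [Fi, Fj]), and the toy example at the bottom is ours.

# Illustrative sketch (not the authors' code): the scene-segmentation recursion
# C(F_i, S_k) = max_j { p(S_k | [F_i, F_j]) + C(F_{j+1}, S_{k+1}) }, solved by
# dynamic programming with backtracking. `scene_likelihood` is a stand-in for
# the learnt scene models; here it is any callable scoring a frame interval
# against a scene category.
from functools import lru_cache

def segment_scenes(num_frames, scene_categories, scene_likelihood):
    """Return anchor (start) frames [F_{S_1}, ..., F_{S_K}] maximising the summed scores."""
    K = len(scene_categories)

    @lru_cache(maxsize=None)
    def best(i, k):
        # best(i, k) = (score, split) of assigning frames i.. to scenes k..K-1
        if k == K:
            return (0.0, None) if i == num_frames else (float("-inf"), None)
        if i >= num_frames:
            return (float("-inf"), None)
        best_score, best_j = float("-inf"), None
        for j in range(i, num_frames):          # scene k spans frames [i, j]
            score = scene_likelihood(i, j, scene_categories[k]) + best(j + 1, k + 1)[0]
            if score > best_score:
                best_score, best_j = score, j
        return best_score, best_j

    # Backtrack the DP to recover the scene anchor frames.
    anchors, i = [], 0
    for k in range(K):
        anchors.append(i)
        i = best(i, k)[1] + 1
    return anchors

if __name__ == "__main__":
    # Toy example: 10 frames, 2 scenes, likelihood favouring scenes of length 5.
    toy = lambda i, j, s: -abs((j - i + 1) - 5)
    print(segment_scenes(10, ["1-Run", "Four"], toy))   # -> [0, 5]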
We thus obtain a temporal segmentation of the given
video into its individual scenes. A typical segmentation
covering five overs is shown in Figure 5. The descriptive
commentary from the parallel-text could be used to annotate the scenes, for text-based search and retrieval. In this
work, we would like to further annotate the videos at a much
finer temporal scale than the scenes, i.e., we would like to
annotate at the shot-level. The solution towards shot-level
annotation is presented in the following Section.
3. Shot/Phrase Alignment
Following the scene segmentation, we obtain an alignment between minute-long clips and the corresponding paragraphs of text.
To perform fine-grained annotation of the video, we must
segment both the video clip and the descriptive text. Firstly,
the video segments at scene level are over-segmented into
shots, to ensure that it is unlikely to map multiple actions
into the same shot. In the case of the text, given that the
commentary is free-flowing, the action descriptions need to
be identified at a finer-grain than the sentence level. Hence
we choose to operate at the phrase-level, by segmenting
sentences at all punctuation marks. Both the video-shots
and the phrases are classified into one of these categories:
{Bowler Action, Batsman Action, Others}, by learning suitable classifiers for each modality. Following this, the individual phrases could be mapped to the video-shots that
belong to the same category.
3.1. Video-Shot Recognition
In order to ensure that the video-shots are atomic to
an action/activity, we perform an over-segmentation of the
video. We use a window-based shot detection scheme that
works as follows. For each frame Fi , we compute its difference with every other frame Fj , where j ∈ [i − w/2, i +
w/2], for a chosen window size w, centered on Fi . If the
maximum frame difference within this window is greater
than a particular threshold τ, we declare Fi as a shot-boundary. We choose a small value for τ to ensure over-segmentation of scenes.
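The shot-boundary test can be sketched as follows. This is illustrative only; the frame-difference measure used here (mean absolute pixel difference over grayscale frames) is one plausible choice and is not necessarily the one used by the authors.

# Illustrative sketch (not the authors' code) of the window-based shot-boundary
# test described above: frame F_i is declared a boundary if the maximum
# difference to any frame in a window of size w centred on it exceeds tau.
# Frames are assumed to be grayscale numpy arrays of equal shape.
import numpy as np

def frame_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute pixel difference between two frames."""
    return float(np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))))

def shot_boundaries(frames, w: int, tau: float):
    """Indices i such that max_{j in [i-w/2, i+w/2]} diff(F_i, F_j) > tau."""
    half = w // 2
    boundaries = []
    for i in range(len(frames)):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        diffs = [frame_difference(frames[i], frames[j]) for j in range(lo, hi) if j != i]
        if diffs and max(diffs) > tau:
            boundaries.append(i)
    return boundaries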
Each shot is now represented with the classical Bag of
Visual Words (BoW) approach [9]. SIFT features are first
computed for each frame independently, which are then
clustered using the K-means clustering algorithm to build
a visual vocabulary (where each cluster center corresponds
to a visual word). Each frame is then represented by the
normalized count of SIFT features assigned to
each cluster (BoW histograms). The shot is represented by
the average BoW histogram over all frames present in the
shot. The shots are then classified into one of these classes:
{Bowler Runup, Batsman Stroke, Player Close-Up, Umpire,
Ground, Crowd, Animations, Miscellaneous}. The classification is performed using a multiclass Kernel-SVM.
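A rough sketch of this pipeline, assuming OpenCV's SIFT implementation (cv2.SIFT_create) and scikit-learn, is given below. It is not the authors' code; details such as vocabulary size and kernel should be read from Table 1 rather than from this snippet.

# Illustrative sketch (not the authors' code) of the bag-of-visual-words shot
# representation: SIFT descriptors per frame, a K-means vocabulary, per-frame
# histograms averaged over the shot, then a multiclass kernel SVM.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(frame_gray: np.ndarray) -> np.ndarray:
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(frame_gray, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

def build_vocabulary(all_descriptors: np.ndarray, k: int = 1000) -> KMeans:
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_descriptors)

def bow_histogram(descriptors: np.ndarray, vocab: KMeans) -> np.ndarray:
    hist = np.zeros(vocab.n_clusters, dtype=np.float32)
    if len(descriptors):
        for word in vocab.predict(descriptors):
            hist[word] += 1.0
        hist /= hist.sum()
    return hist

def shot_representation(frames_gray, vocab: KMeans) -> np.ndarray:
    """Average per-frame BoW histogram over all frames of the shot."""
    hists = [bow_histogram(sift_descriptors(f), vocab) for f in frames_gray]
    return np.mean(hists, axis=0)

# shot_classifier = SVC(kernel="linear").fit(X_shots, y_labels)  # 8 shot classes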
The individual shot-classification results could be further refined by taking into account the temporal neighbourhood information. Given the strong structure of a Cricket
match, the visuals do not change arbitrarily, but are predictable according to the sequence of events in the scenes.
Such a sequence could be modelled as a Linear Chain Conditional Random Field (LC-CRF) [5]. The LC-CRF consists of nodes corresponding to each shot, with edges connecting each node with its previous and next node, resulting in a linear chain. The goal of the CRF is to model P(y1, ..., yn | x1, ..., xn), where xi and yi are the input and output sequences respectively.
[Example commentary phrases shown in a figure: “on the pads once more Dravid looks to nudge it to square leg off the pads for a leg-bye”; “that is again slammed towards long-on”; “shortish and swinging away lots of bounce”]
Figure 5. Results of Scene Segmentation depicted over five-overs.
The background of the image is the cost function of the scene
segmentation. The optimal backtrack path is given as the red
line, with the inferred scene boundaries marked on this path. The
groundtruth segmentation of the video is given as the blue lines. It
can be noticed that for most scenes the inferred scene boundary is
only a few shots away from the groundtruth.
The LC-CRF is posed as the objective function

p(Y \mid X) = \frac{1}{Z(X)} \exp\!\left( \sum_{k=1}^{K} a_u(y_k) + \sum_{k=1}^{K-1} a_p(y_k, y_{k+1}) \right)
Here, the unary term is given by the class-probabilities produced by the shot classifier, defined as

au(yk) = 1 − P(yk | xk).

The pair-wise term encodes the probability of transitioning from a class yk to yk+1 as

ap(yk, yk+1) = 1 − P(yk+1 | yk).
The function Z(X) is a normalisation factor. The transition
probabilities between all pairs of classes are learnt from a
training set of labelled videos. The inference of the CRF is
performed using the forward-backward algorithm.
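The smoothing step can be sketched as follows. This is not the authors' implementation: it treats the per-shot classifier posteriors as emission scores and the learnt transition probabilities as in a hidden Markov chain, which is an assumption, and runs forward-backward to obtain smoothed per-shot class posteriors.

# Illustrative sketch (not the authors' code): refining per-shot SVM class
# probabilities with a linear-chain model via forward-backward smoothing.
# `unary` is an (n_shots x n_classes) array of P(y_k | x_k) from the shot
# classifier; `trans` is an (n_classes x n_classes) array of P(y_{k+1} | y_k)
# estimated from labelled training videos.
import numpy as np

def forward_backward(unary: np.ndarray, trans: np.ndarray) -> np.ndarray:
    """Return smoothed per-shot class posteriors (n_shots x n_classes)."""
    n, c = unary.shape
    alpha = np.zeros((n, c))
    beta = np.zeros((n, c))
    alpha[0] = unary[0] / unary[0].sum()
    for k in range(1, n):
        alpha[k] = unary[k] * (alpha[k - 1] @ trans)
        alpha[k] /= alpha[k].sum()            # normalise to avoid underflow
    beta[-1] = 1.0
    for k in range(n - 2, -1, -1):
        beta[k] = trans @ (unary[k + 1] * beta[k + 1])
        beta[k] /= beta[k].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

def refine_shot_labels(unary: np.ndarray, trans: np.ndarray) -> np.ndarray:
    """Pick the most probable class per shot after smoothing."""
    return forward_backward(unary, trans).argmax(axis=1)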
3.2. Text Classification
The phrase classifier is learnt entirely automatically. We
begin by crawling the web for commentaries of about 300
matches and segmenting the text into phrases. It was observed that the name of the bowler or the batsman is sometimes included in the description, for example, “Sachin
hooks the ball to square-leg”. These phrases can accordingly be labelled as belonging to the actions of the Bowler
or the Batsman. From the 300 match commentaries, we obtain about 1500 phrases for bowler actions and about 6000
phrases for the batsman shot. We remove the names of
the respective players and represent each phrase as a histogram of its constituent word occurrences.
Figure 6. Examples of shots correctly labeled by the batsman or
bowler actions. The textual annotation is semantically rich, since
it is obtained from human generated content. These annotated
videos could now be used as training data for action recognition
modules.
A Linear-SVM is now learnt for the bowler and batsman categories over
this bag-of-words representation. Given a test phrase, the
SVM provides a confidence for it to belong to either of the
two classes.
The text classification module is evaluated using 2-fold
cross-validation over the 7500-phrase dataset. We obtain a
recognition accuracy of 89.09% for phrases assigned to the
right class of bowler or batsman action.
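An illustrative sketch of this automatic labelling and classification step, using scikit-learn, is shown below. The player-name lists and example phrases are placeholders, not data from the paper.

# Illustrative sketch (not the authors' code) of the phrase classifier: phrases
# that mention a known player name are auto-labelled as bowler or batsman
# actions, names are stripped, and a linear SVM is trained on bag-of-words
# counts. Player-name lists and the example phrases are placeholders.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

BOWLERS = {"malinga", "steyn"}          # placeholder name lists
BATSMEN = {"sachin", "gambhir"}

def auto_label(phrase: str):
    words = set(re.findall(r"[a-z]+", phrase.lower()))
    if words & BOWLERS:
        return "bowler"
    if words & BATSMEN:
        return "batsman"
    return None

def strip_names(phrase: str) -> str:
    pattern = "|".join(BOWLERS | BATSMEN)
    return re.sub(pattern, "", phrase.lower())

def train_phrase_classifier(phrases):
    labelled = [(strip_names(p), auto_label(p)) for p in phrases if auto_label(p)]
    texts, labels = zip(*labelled)
    clf = make_pipeline(CountVectorizer(), LinearSVC())
    return clf.fit(texts, labels)

if __name__ == "__main__":
    toy = ["Sachin hooks the ball to square-leg",
           "Malinga fires in a yorker at the base of off stump"]
    clf = train_phrase_classifier(toy)
    print(clf.predict(["pulls it over deep square leg for six"]))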
4. Experiments
Dataset:
Our dataset is collected from the YouTube
channel for the Indian Premier League (IPL) tournament.
The format for this tournament is 20-overs per side, which
results in about 120 scenes or events for each team. The
dataset consists of 16 matches, amounting to about 64 hours
of footage. Four matches were groundtruthed manually at
the scene and shot level, two of which are used for training
and the other two for testing.
4.1. Scene Segmentation
For the text-driven scene segmentation module, we use
these mid-level features: {Pitch, Ground, Sky, Player-Closeup, Scorecard}. These features are modelled using
binned color histograms. Each frame is then represented by
the fraction of pixels that contain each of these concepts; the scene is thus a spatio-temporal model that accumulates
these scores.
The limitation with the DP formulation is the amount of
memory available to store the DP score and indicator matrices. With our machines we are limited to performing the DP
over 100K frames, which amounts to 60 scenes, or 10-overs
of the match.
The accuracy of the scene segmentation is measured
as the fraction of video-shots that are present in the correct scene segment.
Kernel        Vocab: 300    Vocab: 1000
Linear        78.02         82.25
Polynomial    80.15         81.16
RBF           81            82.15
Sigmoid       77.88         80.53

Table 1. Evaluation of the video-shot recognition accuracy. A visual vocabulary using 1000 clusters of SIFT-features yields a considerably good performance, with the Linear-SVM.
R              2        4        6        8        10
Bowler Shot    22.15    43.37    69.09    79.94    87.87
Batsman Shot   39.4     47.6     69.6     80.8     88.95

Table 2. Evaluation of the neighbourhood of a scene boundary that needs to be searched to find the appropriate bowler and batsman shots in the video. It appears that almost 90% of the correct shots are found within a window size of 10.
We obtain a segmentation accuracy of 83.45%. Example segmentation results for two scenes are presented in Figure 5; one can notice that the inferred scene
boundaries are very close to the groundtruth. We observe
that the errors in segmentation typically occur due to events
that are not modelled, such as a player injury or an extended
team huddle.
4.2. Shot Recognition
The accuracy of the shot recognition using various feature representations and SVM Kernels is given in Table 1.
We observe that the 1000 size vocabulary works better than
300. The Linear Kernel seems to suffice to learn the decision boundary, with a best-case accuracy of 82.25%. Refining the SVM predictions with the CRF-based method yields an improved accuracy of 86.54%. Specifically, the accuracy
of the batsman/bowler shot categories is 89.68%.
4.3. Shot Annotation Results
The goal of the shot annotation module is to identify the
right shot within a scene that contains the bowler and batsman actions. As the scene segmentation might contain errors, we perform a search in a shot-neighbourhood centered
on the inferred scene boundary. We evaluate the accuracy of
finding the right bowler and batsman shots within a neighbourhood region R of the scene boundary, which is given in
Table 2. It was observed that 90% of the bowler and batsman shots were correctly identified by searching within a
window of 10 shots on either side of the inferred boundary.
Once the shots are identified, the corresponding textual
comments for bowler and batsman actions are mapped to
these video segments. A few shots that were correctly annotated are shown in Figure 6.
5. Conclusions
In this paper, we present a solution that enables rich
semantic annotation of Cricket videos at a fine temporal scale. Our approach circumvents technical challenges
in visual recognition by utilizing information from online
text-commentaries. We obtain a high annotation accuracy, as evaluated over a large video collection. The annotated videos shall be made available for the community for
benchmarking; such a rich dataset is not yet available publicly. In future work, the obtained labelled datasets could be
used to learn classifiers for fine-grain activity recognition
and understanding.
References
[1] Cricinfo at: http://www.cricinfo.com. 1
[2] M. Everingham, J. Sivic, and A. Zisserman. “Hello! My
name is... Buffy” – automatic naming of characters in TV
video. In Proc. BMVC, 2006. 1
[3] A. Gupta, P. Srinivasan, J. Shi, and L. Davis. Understanding
videos, constructing plots: Learning a visually grounded storyline model from annotated videos. In Proc. CVPR, pages
2012–2019, 2009. 1
[4] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proc. CVPR,
2015. 1
[5] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In Proc. ICML, pages 282–289,
2001. 3
[6] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld.
Learning realistic human actions from movies. In Proc.
CVPR, 2008. 1
[7] W.-L. Lu, J.-A. Ting, J. Little, and K. Murphy. Learning
to track and identify players from broadcast sports videos.
IEEE PAMI, 35(7):1704–1716, July 2013. 1
[8] Pramod Sankar K., S. Pandey, and C. V. Jawahar. Text driven
temporal segmentation of cricket videos. In Proc. ICVGIP,
pages 433–444, 2006. 2
[9] J. Sivic and A. Zisserman. Video Google: A text retrieval
approach to object matching in videos. In Proc. ICCV, 2003.
3
[10] E. Swears, A. Hoogs, Q. Ji, and K. Boyer. Complex activity
recognition using Granger Constrained DBN (GCDBN) in
sports and surveillance video. In Proc. CVPR, pages 788–
795, 2014. 1
[11] C. Xu, J. Wang, K. Wan, Y. Li, and L. Duan. Live sports
event detection based on broadcast video and web-casting
text. In Proc. ACM Multimedia, pages 221–230, 2006. 1
[12] Y. Zhang, X. Zhang, C. Xu, and H. Lu. Personalized retrieval
of sports video. In Proc. Intl. Workshop on Multimedia Information Retrieval, pages 313–322, 2007. 1
| 1 |
How morphological development can guide
evolution
Sam Kriegman1,* , Nick Cheney2 , and Josh Bongard1
1University of Vermont, Department of Computer Science, Burlington, VT, USA
2University of Wyoming, Department of Computer Science, Laramie, WY, USA
*[email protected]
arXiv:1711.07387v2 [q-bio.PE] 15 Dec 2017
ABSTRACT
Organisms result from adaptive processes interacting across different time scales. One such interaction is that between
development and evolution. Models have shown that development sweeps over several traits in a single agent, sometimes
exposing promising static traits. Subsequent evolution can then canalize these rare traits. Thus, development can, under the
right conditions, increase evolvability. Here, we report on a previously unknown phenomenon when embodied agents are
allowed to develop and evolve: Evolution discovers body plans robust to control changes, these body plans become genetically
assimilated, yet controllers for these agents are not assimilated. This allows evolution to continue climbing fitness gradients
by tinkering with the developmental programs for controllers within these permissive body plans. This exposes a previously
unknown detail about the Baldwin effect: instead of all useful traits becoming genetically assimilated, only traits that render the
agent robust to changes in other traits become assimilated. We refer to this as differential canalization. This finding also has
implications for the evolutionary design of artificial and embodied agents such as robots: robots robust to internal changes in
their controllers may also be robust to external changes in their environment, such as transferal from simulation to reality or
deployment in novel environments.
Introduction
The shape of life changes on many different time scales. From generation to generation, populations gradually increase in
complexity and relative competency. At the individual level, organisms grow from a single-celled egg and exhibit extreme
postnatal change as they interact with the outside world during their lifetimes. At a faster time scale still, organisms behave
such as to survive and reproduce.
Many organisms manifest different traits as they interact with their environment. It seems wasteful not to utilize this extra
exploration to speed the evolutionary search for good genotypes. However, to communicate information from these useful
but temporary traits to the genotype requires inverting the generally very complex, nonlinear and stochastic mapping from
DNA to phenotype. Inverting such a function would be exceedingly difficult to compute. However, organisms can pass on their
particular capacity to acquire certain characteristics. Thus phenotypic plasticity can affect the direction and rate of evolutionary
change by influencing selection pressures. Although this phenomenon was originally described by Baldwin1 , Morgan2 and
Waddington3 , among others, it has become known as ‘the Baldwin effect’. In Baldwin’s words: ‘the most plastic individuals
will be preserved to do the advantageous things for which their variations show them to be the most fit, and the next generation
will show an emphasis of just this direction in its variations’1 . In a fixed environment, when the ‘advantageous thing’ to do is to
stay the same, selection can favor genetic variations which more easily, reliably, or quickly produce these traits. This can lead
to the genetic determination of a character which in previous generations needed to be developed or learned.
Thirty years ago, Hinton and Nowlan4 provided a simple computational model of the Baldwin effect that clearly demonstrated how phenotypic plasticity could, under certain conditions, speed evolutionary search without communication to the
genotype. They considered the evolution of a bitstring that is only of value when perfectly matching a predefined target string.
The search space therefore has a single spike of high fitness with no slope leading to the summit. In such a space, evolution is
no better than random search.
Hinton and Nowlan then allowed part of the string to randomly change at an additional (and faster) developmental time
scale. When the genetically specified portion of the string is correct, there is a chance of discovering the remaining portion in
development. The speed at which such individuals tend to find the good string will be proportional to the number of genetically
determined bits. When the target string is found, development stops and the individual is rewarded for the amount of remaining
developmental time. This has the effect of creating a gradient of increasing fitness surrounding the correct specification that
natural selection can easily climb by incrementally assimilating more correct bits to the genotype.
Hinton and Nowlan imagined the bitstring as specifying the connections of a neural network in a very harsh environment.
We are also interested in this interaction of subsystems unfolding at different time scales, but consider an embodied agent
situated in a physically-realistic environment rather than an abstract control system. This distinction is important as it grounds
our hypotheses in the constraints and opportunities afforded by the physical world. It also allows us to investigate how changes
in morphology and control might differentially affect the direction or rate of evolutionary search. More specifically, it exposes
the previously unknown phenomenon of differential canalization reported here.
Inspired by Hinton and Nowlan, Floreano and Mondada5 explored the interaction between learning and evolution in mobile
robots with a fixed body plan but plastic neural control structure. They noted that the acquisition of stable behavior in ontogeny
did not correspond to stability (no further change) of individual synapses, but rather was regulated by continuously changing
synapses which were dynamically stable. In other words, agents exploited this ontogenetic change for behavior, and this
prevented its canalization. In this paper, we structure development in a way that restricts its exploitation for behavior and
thus promotes the canalization of high performing static phenotypes. Also, the robot’s body plan was fixed in Floreano and
Mondada’s experiments5 , whereas in the work reported here, evolution and development may modify body plans.
Several models that specifically address morphological development of embodied agents have been reported in the
literature6–9 . However, the relationship between morphological development and evolvability is seldom investigated in such
models. Moreover, there are exceedingly few cases that considered postnatal change to the body plan (its resting structural
form) as the agent behaves and interacts with the environment through physiological functioning (at a faster time scale).
We are only aware of four cases reported in the literature in which a simulated robot’s body was allowed to change while
it was behaving. In the first two cases10, 11 , it was not clear whether this ontogenetic morphological change facilitated the
evolution of behavior. Later, Bongard12 demonstrated how such change could lead to a form of self-scaffolding that smoothed
the fitness landscape and thus increased evolvability. This ontogenetic change also exposed evolution to much more variation
in sensor-motor interactions which increased robustness to unseen environments. Recently, Kriegman et al.13 showed how
development can sweep over a series of body plans in a single agent, and subsequent heterochronic mutations canalize the most
promising body plan in more morphologically-static descendants.
We are not aware of any cases reported in the literature to date in which a simulated robot’s body and control are
simultaneously allowed to change while it is behaving. In this paper, we investigate such change in the morphologies and
controllers of soft robots as they are evolved for coordinated action in a simulated 3D environment. By morphology we mean
the current state of a robot’s shape, which is slowly changed over the course of its lifetime by a developmental process. We
distinguish this from the controller, which sends propagating waves of actuation throughout the individual, which also affects
the instantaneous shape of the robot but to a much smaller degree. We here refer to these two processes as ‘morphology’ and
‘control’. As both processes change the shape, and thus behavior, of the robot, this distinction is somewhat arbitrary. However,
the central claim of this paper, which is that some traits become canalized while others do not, is not reliant on this distinction.
We use soft robots because they provide many more degrees of morphological freedom compared to traditional robots
composed of rigid links connected by rotary or linear actuators. This flexibility allows soft robots to accomplish tasks that
would be otherwise impossible for their rigid-bodied counterparts, such as squeezing through small apertures14 or continuously
morphing to meet different tasks. Recent advancements in materials science are enabling the fabrication of 3D-printed muscles15
and nervous systems16 . However, there are several challenges to the field of soft robotics, including an overall lack of design
intuition: What should a robot with nearly unbounded morphological possibility look like, and how can it be controlled?
Controllability often depends on precision actuation and feedback control, but these properties are difficult to maintain in soft
materials in which motion in one part of the robot can propagate in unanticipated ways throughout its body17 .
We present here a minimally complex but embodied model of morphological and neurological development. This new
model represents an alternative approach to the challenging problem of soft robot design and presents an in silico testbed for
hypotheses about evolving and developing embodied systems. This model led to the discovery of differential canalization and
how it can increase evolvability.
Results
All experiments were performed in the open-source soft-body physics simulator Voxelyze, which is described in detail by Hiller
and Lipson18 . We consider a locomotion task, over flat terrain, for soft robots composed of a 4 × 4 × 3 grid of voxels (Fig.
1). Robots are evaluated for 40 actuation cycles at 4 Hz, yielding a lifetime of ten seconds. Fitness is taken to be distance
traveled measured in undeformed body lengths (four unit voxels, i.e. 4 cm). Example robots are shown in Figures 1 and 5, and
Supplementary Video S1.
The morphology of a robot is given by the resting length of each voxel. However the shape and volume of each voxel
is changed by external forces from the environment and internal forces via behavior. The morphology of a robot is denoted
by the 4 × 4 × 3 = 48-element vector ℓ, where each element is the resting length of that voxel (with possible values within 1.0 ± 0.75 cm). Like most animals, our robots are bilaterally symmetrical. The lefthand 2 × 4 × 3 = 24 resting voxel lengths are
Figure 1. Modeling development. An evolved soft robot changes its shape during its lifetime (postnatal development), from
a walking quadruped into a rolling form. Evolution dictates how a robot’s morphology develops by setting each voxel’s initial
(`k ) and final (`∗k ) resting length. The length of a single voxel k is plotted to illustrate its (slower) growth and (faster) actuation
processes. Voxel color indicates the current length of that cell: the smallest voxels are blue, medium sized voxels are green, and
the largest voxels are red. As robots develop and interact with a physically realistic environment, they generate heterogeneous
behavior in terms of instantaneous velocity (bottom arrows). Soft robot evolution, development and physiological functioning
can be seen in Supplementary Video S1.
reflected on the other, righthand side of the midsagittal line, yielding 24 independent resting lengths. The controller, however, is
not constrained to be symmetrical since many behaviors, even for symmetric morphologies, consist of asymmetric gaits, and is
given by the phase offset of each voxel from a global oscillating signal with an amplitude of 0.14 cm. The controller is denoted
by the 48-element vector φ , where each element is the phase offset of that voxel (with possible values within 0 ± π/2).
We investigated the impact of development in this model by comparing two experimental variants: Evo and Evo-Devo
(schematized in Supplementary Fig. S2). The control treatment, Evo, lacks development and therefore maintains a fixed
morphology and control policy in a robot as it behaves over its lifetime. Two parameters per voxel are sufficient to specify
an evolved robot at any time t in its lifetime: its morphology `k , and controller φk . An evolutionary algorithm optimizes 24
morphological and 48 control parameters.
The experimental treatment Evo-Devo evolves a developmental program rather than a static phenotype (Fig. 1). For each
parameter in an Evo robot, an Evo-Devo robot has two: its starting and final value. The evolutionary algorithm associated with
the Evo-Devo treatment thus optimizes 48 morphological and 96 control parameters. The morphology and controller of the kth
voxel change linearly from starting to final values, throughout the lifetime of a developing robot. The starting and final points
of development are predetermined by a genome which in turn fixes the direction (compression or expansion) and rate of change
for each voxel. Development is thus ballistic in nature rather than adaptive, as it cannot be influenced by the environment.
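To make the two time scales concrete, the following sketch (ours, not the Voxelyze implementation) computes a plausible instantaneous target length for one voxel by composing the linear developmental trajectory with the 4 Hz, 0.14 cm actuation signal described above; the exact way the simulator combines the two terms is an assumption here.

# Illustrative sketch (not the Voxelyze implementation): a voxel's target length
# at time t, combining slow linear development from l_start to l_final over the
# 10 s lifetime with fast sinusoidal actuation (4 Hz, 0.14 cm amplitude) at
# phase offset phi. The exact composition used by the simulator may differ.
import math

LIFETIME_S = 10.0      # 40 actuation cycles at 4 Hz (from the text)
FREQ_HZ = 4.0
AMPLITUDE_CM = 0.14

def voxel_target_length(t: float, l_start: float, l_final: float, phi: float) -> float:
    """Resting length developing linearly, plus oscillatory actuation."""
    frac = min(max(t / LIFETIME_S, 0.0), 1.0)
    resting = (1.0 - frac) * l_start + frac * l_final        # ballistic development
    actuation = AMPLITUDE_CM * math.sin(2.0 * math.pi * FREQ_HZ * t + phi)
    return resting + actuation

# Example: a voxel evolved to shrink from 1.4 cm to 0.6 cm with phase pi/4.
samples = [voxel_target_length(t / 10.0, 1.4, 0.6, math.pi / 4) for t in range(101)]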
For both treatments we conducted 30 independent evolutionary trials. At the end of evolutionary optimization, the non-developmental
and developing robots (Evo-Devo) tend to move at over 5 lengths/sec (Fig. 2A). To ensure evolved and developing robots are
not exploiting some unfair advantage conferred by changing body plans and control policies unavailable to non-developmental
robots, we manually remove their development by setting `∗ = ` and φ ∗ = φ , which fixes the structure of their morphologies
and controllers at birth (t = 0). The resulting reduced robots suffer only a slight (and statistically non-significant) decrease in
median speed and still tend to be almost five times faster than the systems evolved without development (Fig. 2B, treatment
‘Evo-Devo removed’). Ballistic development is therefore beneficial for search but does not provide a behavioral advantage in
this task environment.
To investigate this apparent search advantage, we trace development and fitness across the 30 lineages which produced a
Figure 2. Evolvability and development. Morphological development drastically increases evolvability (A), even when
development is manually removed from the optimized systems by setting the final parameter values equal to their starting
values (`∗k = `k and φk∗ = φk ), in each voxel (B). Median fitness is plotted with 95% bootstrapped confidence intervals for three
treatments: evolving but non-developmental robots (Evo), evolving and developing robots (Evo-Devo), and evolving and
developing robots evaluated at the end of evolution with their development removed (Evo-Devo removed). Fitness of just the
final, evolved populations is plotted in B.
‘run champion’: the robot with highest fitness at the termination of a given evolutionary trial (Supplementary Fig. S1). The
developmental window is defined separately for morphology (Equation 3) and control (Equation 4) as the absolute difference in
starting and final values summed across the robot and divided by the total amount of possible development, such that 0 and 1
indicate no and maximal developmental change, respectively. Evo robots by definition have development windows of zero, as
do Evo-Devo robots that have had development manually removed. An Evo-Devo robot with a small developmental window
has thus become canalized3 . In terms of fitness, there were two observed basins of attraction in average velocity: a slower
design type which either trots or gallops at a speed of less than 1 length/sec (Fig. 5A,B and Fig. 3A∗), and a faster design type
that rolls at 5-6 lengths/sec (Fig. 5C). After ten thousand generations, 25 out of a total of 30 Evo-Devo trials (83.3%) find the
faster design, compared to just 6 out of 30 Evo trials (20%).
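A plausible rendering of these developmental windows, consistent with the verbal definition above (the exact Equations 3 and 4 appear in the original article and are not reproduced in this excerpt, so the normalization below is an assumption based on the stated per-voxel ranges of 1.0 ± 0.75 cm for resting lengths and 0 ± π/2 for phase offsets), is

W_L = \frac{1}{48 \times 1.5\,\mathrm{cm}} \sum_{k=1}^{48} \left| \ell^{*}_{k} - \ell_{k} \right|,
\qquad
W_\Phi = \frac{1}{48 \times \pi} \sum_{k=1}^{48} \left| \phi^{*}_{k} - \phi_{k} \right|,

so that both quantities lie in [0, 1], with 0 meaning no lifetime change and 1 meaning maximal change in every voxel.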
Differential canalization.
Modular systems are more evolvable than non-modular systems because they allow evolution to improve one subsystem without
disrupting others19, 20 . Modularity may be a property of the way a system is built, or it may be an evolved property. The robots
evolved here are by definition modular because the genes which affect morphology are independent of those which affect its
control. However the more successful Evo-Devo lineages evolved an additional form of modularity, which we term differential
canalization: Some initially developmentally plastic traits become integrated and canalized, while other traits remain plastic.
In the successful Evo-Devo trials, morphological traits were canalized while control traits were not. Evidence for this is
provided in Supplementary Fig. S1 (which is summarized in Fig. 3A). Trajectories of controller development (green curves) do
not follow any discernible pattern and appear upon visual inspection to be consistent with a random walk or genetic drift. The
trajectories of morphological development (red curves), however, follow a consistent pattern. The magnitude of morphological
development increases slightly, but significantly (p < 0.001), before decreasing all the way to the most recent descendant,
which is the most fit robot from that trial (the run champion). The morphological development window of the most fit robot
is significantly less than the starting morphological development window (p < 0.001), but there is no significant difference
between starting and final control development windows. Furthermore, this pattern tends to correlate with high fitness: in trials
in which this pattern did not appear (runs 6, 8, 16-18), fitness did not increase appreciably over evolutionary time.
This process within the lineages of the run champions is consistent with a more general correlation found in all designs
explored during optimization across all runs: Individuals with the highest fitness values tend to have very small amounts
of morphological development, while their control policies are free to develop (Fig. 3B). However, despite the fact that
morphological development tends to be canalized in the most fit individuals, it cannot simply be discarded as the non-developmental systems have by definition small morphological windows, and small controller windows, but also low fitness.
To test the sensitivity of the canalized morphologies to changes in their control policies, we applied a random series of
control mutations to the Evo and Evo-Devo run champions for each of the 30 evolutionary trials. For each run champion, we
Figure 3. Differential canalization. Developmental windows (i.e. the total lifetime developmental change) for morphology,
WL (see Equation 3), and controller, WΦ (see Equation 4), alongside fitness F. (A) Three representative lineages taken from
Supplementary Fig. S1, which displays the lineages of all 30 Evo-Devo run champions. Evolutionary time T moves from the
oldest ancestor (left) to the run champion (right). A general trend emerges wherein lineages initially increase their
morphological development in T (rising red curves) and subsequently decrease morphological development to almost zero
(falling red curves). Five of the 30 evolutionary trials, annotated by ∗, fell into local optima. (B) Median fitness as a function
of morphology and controller development windows (WL , WΦ ), for all Evo-Devo designs evaluated. Overall, the fastest designs
tend to have small amounts of morphological development, but are free to explore alternative control policies.
perform 1000 subsequent random controller mutations that build upon each other in series (a Brownian trajectory in the space
of controllers), and repeat this process ten times for each run champion, each with a unique random seed. It was found that
optimized Evo-Devo robots tend to possess body plans that are much more robust to control mutations than those of Evo robots
(Fig. 4A). The first control mutation to optimized Evo robots tends to immediately render them immobile, whereas optimized
Evo-Devo robots tend to retain most of their functionality even after 1000 successive random changes to their controllers.
Within Evo-Devo designs, the functionality of the 25 fast designs is minimally affected by changes to their control, whereas
the five slow designs also tend to break after the first control mutation (Fig. 4B). Thus it can be concluded that these five robots
are non-modular: their non-canalized morphologies evolved a strong dependency on their controllers. The Evo robots are
similarly non-modular: they are brittle to control mutations.
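The mutation-robustness test can be sketched as follows. This is illustrative only: `evaluate` stands in for the Voxelyze rollout, and the Gaussian perturbation of a single phase offset is an assumed stand-in for the paper's mutation operator.

# Illustrative sketch (not the authors' code) of the robustness test described
# above: starting from an optimized controller, apply 1000 cumulative random
# mutations (a Brownian trajectory in controller space) and record fitness after
# each step; repeat with several random seeds.
import math
import random

PHASE_RANGE = (-math.pi / 2, math.pi / 2)   # controller bounds from the text

def mutate_controller(phases, rng, sigma=0.1):
    """Perturb one randomly chosen phase offset, clipped to its bounds."""
    new = list(phases)
    k = rng.randrange(len(new))
    new[k] = min(max(new[k] + rng.gauss(0.0, sigma), PHASE_RANGE[0]), PHASE_RANGE[1])
    return new

def control_random_walk(morphology, phases, evaluate, steps=1000, seed=0):
    """Return the fitness trace along one mutation trajectory."""
    rng = random.Random(seed)
    trace = []
    current = list(phases)
    for _ in range(steps):
        current = mutate_controller(current, rng)
        trace.append(evaluate(morphology, current))   # e.g. body lengths travelled
    return trace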
To test the sensitivity of the evolved controllers to changes in their morphologies, we applied the same procedure described
in the previous paragraph but with random morphological mutations rather than control mutations. It was found that both
developmental and non-developmental systems tend to evolve controllers that are very sensitive to morphological mutations
(Fig. 4C). The first few morphological mutations to optimized robots, in both treatments, tend to immediately render them
immobile. Within Evo-Devo design types, neither of which canalized development in their controllers (Supplementary Fig.
S1), both the fast and slow designs possessed controllers sensitive to changes in their morphologies (Fig. 4D). Thus it can
be concluded that the non-canalized controllers evolved a strong dependency on their morphologies. The same is true of
the non-developmental systems (and is consistent with the findings of Cheney et al.21 ). Therefore, the only trait which was
successfully canalized was also the only trait which was robust to changes in other traits.
Heterochrony in morphological development.
The evolutionary algorithm can rapidly discover an actuation pattern that elicits a very small amount of forward movement in
these soft robots regardless of the morphology. There is then an incremental path of increasing locomotion speed that natural
selection can climb by gradually growing legs to reduce the surface area touching the floor and thus friction, and simultaneously
refining controller actuation patterns to better match and exploit the morphology (Fig. 5A,B).
There is, however, a vastly superior design partially hidden from natural selection—a ‘needle in the haystack’, to use Hinton
and Nowlan’s metaphor4 . On flat terrain, rolling can be much faster and more efficient than walking, but finding such a design
is difficult because the fitness landscape is deceptive. Rolling over once is much less likely to occur in a random individual
Figure 4. Sensitivity to morphological and control mutations. Ten random walks were taken from each run champion. (A)
Successive control mutations to the Evo and Evo-Devo run champions. (B) The previous Evo-Devo results separately for fast
and slow design types. (C) Successive morphological mutations to the Evo and Evo-Devo run champions. (D) The previous
Evo-Devo results separately for fast and slow design types. Medians plotted with 99% confidence intervals. The faster
Evo-Devo robots tend to possess body plans that are robust to control mutations.
Figure 5. Evolved behavior. (A) An evolved trotting soft quadruped with a two-beat gait synchronizing diagonal pairs of
legs (moving from left to right). (B) A galloping adult robot which goes fully airborne mid-gait. (C) A galloping juvenile robot
which develops into a rolling adult form. (D) A rolling juvenile robot at 10 points in ontogeny immediately after birth. Arrows
indicate the general directionality of movement, although this is much easier to observe in Supplementary Video S1. Voxel
color here indicates the amount of subsequent morphological development remaining at that cell: blue indicates shrinking
voxels (ℓk > ℓ∗k), red indicates growing voxels (ℓk < ℓ∗k), green indicates little to no change either way (ℓk ≈ ℓ∗k).
Figure 6. Late onset discoveries. Ontogenetic time before the discovery of rolling over, taken from the lineages of the best
robot from the each of the 25 Evo-Devo trials that produced a rolling design. Median time to discovery, with 95% C.I.s, for (A)
the lineage from the most distant ancestor (T = 0) to more recent descendants, and (B) the first ancestor to roll over compared
to the final run champion. Rolling over is measured from the first time step the top of the robot touches the ground, rather than
after completely rolling over. The first ancestors to roll over tend to do so at the end of their lives, their descendants tend to roll
sooner in life, and the final run champions all begin rolling immediately at birth. These results are a consequence of dependent
time steps: because mutational changes affect all downstream steps, their phenotypic impact is amplified in all but the terminal
stages of development. Thus, late onset changes can provide exploration in the search space without breaking rest-of-life
functionality, and subsequent evolution can gradually assimilate this trait to the start of development.
than shuffling forwards slightly. And as a population continues to refine walking morphologies and gaits, lineages containing
rocking individuals which are close to rolling over, or roll over just once, do not survive long enough to eventually produce a
true rolling descendant.
Development can alter the search space evolution operates in because individuals sweep over a continuum of phenotypes,
with different velocities, rather than a single static phenotype that travels at a constant speed (Supplementary Fig. S2E,J). The
lineages which ultimately evolved the faster rolling design initially increased their morphological plasticity in phylogenetic
time as evidenced by the initial upward trends in the red curves in Supplementary Fig. S1 (exemplified in Fig. 3A) which
contain a statistically significant difference between their starting and maximum developmental window sizes (p < 0.001).
This exposes evolution to a wider range of body plans and thus increases the chance of randomly rolling at least once at some
point during the evaluation period.
The peak of morphological plasticity in Supplementary Fig. S1 (and Fig. 3A) generally lines up with the start of an
increasing trend in fitness (blue curves) and marks the onset of differential canalization. Rolling just once allows an individual
to move further (1 body length) than some early walking behaviors, but it incurs the fitness penalty of having fallen over and
thus not being able to subsequently walk for the rest of the trial. Therefore this tends to happen at the very end of ontogeny (Fig.
6), as individuals evolve to ‘dive’ in the last few time steps of the simulation of their behavior, thus incurring an additional
increase of fitness over their parent, which does not exhibit this behavior. Since more rolling incurs more fitness than less rolling,
a form of progenesis occurs as heterochronic mutations move `k closer to `∗k , for each voxel. This gradually earlifies rolling
from a late onset behavior to one that arises increasingly earlier in ontogeny (Supplementary Video S1). As more individuals in
the population discover and earlify this rolling behavior, the competition stiffens until eventually individuals which are not born
rolling from the start are not fast enough to compete (Fig. 5C,D).
Generality of results.
For the results above, the mutation rate of each voxel was under evolutionary control (self-adaptation). In an effort to assess the
generality of our results, we replicated the experiment described above for various fixed mutation rates (Supplementary Fig.
S3). The fastest walking behaviors are produced with the lowest mutation rate we tested. The fastest rolling behaviors are also
produced by the lowest mutation rates, but higher mutation rates facilitate the discovery of this superior phenotype. Without
development, as in Hinton and Nowlan’s case4 , the search space has a single spike of high fitness. One cannot do better than
random search in such a space. At the highest mutation rate, optimizing Evo morphologies is random search, and this is the only
mutation rate where Evo does not require significantly more generations than Evo-Devo to find the faster design type. This can
be observed in Supplementary Fig. S3, for a given mutation rate, at the generations where the slopes of the fitness curves tend
to increase dramatically. However, the best two treatments, as measured by the highest median speed at the end of optimization,
have development, and they are significantly faster than random search (with and without development) (p < 0.01).
Discussion
In these experiments, the intersection of two time scales—slow linear development and rapid oscillatory actuation, as from a
central pattern generator—generates positive and negative feedback in terms of instantaneous velocity: the robot speeds up and
slows down during various points in its lifetime (Supplementary Fig. S2J). Prior to canalization, unless all of the phenotypes
swept over by an individual in development keep the robot motionless, there will be intervals of relatively superior and inferior
performance. Evolution can thus improve overall fitness in a descendant by lengthening the time intervals containing superior
phenotypes and reducing the intervals of inferior phenotypes. However, this is only possible if such mutations exist.
We have found here that such mutations do exist in cases where evolutionary changes to one trait do not disrupt the
successful behavior contributed by other traits. For example, robots that exhibited the locally optimal trotting behavior (Fig.
5A) exhibited a tight coupling between morphology and control, and thus evolution was unable to canalize development in
either one, since mutations to one subsystem tended to disrupt the other. Brief ontogenetic periods of rolling behavior (Fig. 5C),
on the other hand, could be temporally extended by evolution through canalization of the morphology alone (Fig. 5D), since
these morphologies are generally robust to the pattern of actuation. The key observation here is that only phenotypic traits that
render the agent robust to changes in other traits become assimilated, a phenomenon we term differential canalization.
This insight was exposed by modeling the development of simulated robots as they interacted with a physically realistic
environment. Differential canalization may be possible in disembodied agents as well, if they conform to appropriate conditions
described in Supplementary Discussion.
This finding of differential canalization has important implications for the evolutionary design of artificial and embodied
agents such as robots. Computational and engineered systems generally maintain a fixed form as they behave and are evaluated.
However, these systems are also extremely brittle when confronted with slight changes in their internal structure, such as
damage, or in their external environment such as moving onto a new terrain22–24 . Indeed, a perennial problem in robotics and
AI is finding general solutions which perform well in unseen environments24, 25 . Our results demonstrate how incorporating
morphological development in the optimization of robots can reveal, through differential canalization, characters which are
robust to internal changes. Robots that are robust to internal changes in their controllers may also be robust to external changes
in their environment12 . Thus, allowing robots to change their structure as they behave might facilitate evolutionary improvement
of their descendants, even if these robots will be deployed with static phenotypes or in relatively unchanging environments.
These results are particularly important for the nascent field of soft robotics in which engineers cannot as easily presuppose
a robot’s body plan and optimize controllers for it because designing such machines manually is unintuitive17, 26 . Our
approach addresses this challenge, because differential canalization provides a mechanism whereby static yet robust soft robot
morphologies may be automatically discovered using evolutionary algorithms for a given task environment. Furthermore, future
soft robots could potentially alter their shape to best match the current task by selecting from previously trained and canalized
forms. This change might occur pneumatically, as in Shepherd et al.27 , or it could modulate other material properties such as
stiffness (e.g. using a muscular hydrostat).
We have shown that for canalization to occur in our developmental model, some form of paedomorphosis must also occur.
However, there are at least two distinct methods in which such heterochrony can proceed: progenesis and neoteny. Progenesis
could occur through mutations which move initial parameter values (`, φ ) toward their final values (`∗ , φ ∗ ). Neoteny could
instead occur through mutations which move final values (`∗ , φ ∗ ) toward their initial values (`, φ ). Although a superior
phenotype can materialize anywhere along the ontogenetic timeline, late onset mutations are less likely to be deleterious
than early onset mutations. This is because our developmental model is linear in terms of process, and interfering with any
step affects all temporally-downstream steps. Since the probability of a mutation being beneficial is inversely proportional
to its phenotypic magnitude28 , mutational changes in the terminal stages of development require the smallest change to the
developmental program. Hence, late-onset discoveries of superior traits are more likely to occur without breaking functionality
at other points in ontogeny, and these traits can become canalized by evolution through progenesis: mutations which reduce the
amount of ontogenetic time prior to realizing the superior trait (by moving ` → `∗ and/or φ → φ ∗ ). Indeed progenesis was
observed most often in our trials (Fig. 6): late onset mutations which transform a walking robot into a rolling one are discovered
by the evolutionary process, and are then moved back toward the birth of the robots’ descendants through subsequent mutations.
Finally, we would like to note the observed phenomenon of increased phenotypic plasticity prior to genetic assimilation.
Models of the Baldwin effect usually assume that phenotypic plasticity itself does not evolve, although it has been shown
how major changes in the environment can select for increased plasticity in a character that is initially canalized29 . In our
experiments however, there is no environmental change. There is also a related concept known as ‘sensitive periods’ of
development in which an organism’s phenotype is more responsive to experience30 . Despite great interest in sensitive periods,
the adaptive reasons why they have evolved are unclear31 . In our model, increasing the amount of morphological development
increases the chance of capturing an advantageous static phenotype, which can then be canalized, once found. There is a
balance, however, as a phenotype will not realize the globally optimal solution by simply maximizing development. This would
merely produce a cube of maximum volume shrinking to a cube of minimum volume, or vice versa.
The developmental model described herein is intentionally minimalistic in order to isolate the effect of morphological and
neurological change in the evolutionary search for embodied agents. The simplifying assumptions necessary to do so make it
difficult to assess the biological implications. For example, we model development as an open loop process and thus ignore
environmental cues and sensory feedback32, 33 . We also disregard the costs and constraints of phenotypic plasticity34, 35 .
However, by removing these confounding factors, these results may help generate novel hypotheses about morphological
development, heterochrony, modularity and evolvability in biological systems.
Methods
Resting length.
Voxelyze18 simulates soft materials as a lattice of spring-like beam elements and point masses. Voxels are centered on these
point masses and their individual properties depend on attributes of the beams connecting them to neighboring voxels. By
default, prior to environmental or behavioral forces, all (resting) beam lengths are equal, and thus voxels are rendered as cubes
of equal size (see for example Cheney et al.36 ). In these experiments, however, we alter the resting shape of the voxels by
changing the underlying beam lengths. For convenience we refer to voxels by their resting ‘length’, but the actual resting shape
of a single voxel depends on its neighbors.
Ballistic development.
Ballistic development, β(t) = a + t(b − a)/τ, is simply a linear function from a to b in ontogenetic time t ∈ (0, τ). For Evo
robots, a = b, and hence β(t) = a, which is just a constant value in ontogenetic time.
Current length.
For smaller voxels, it is necessary to damp their actuation to avoid simulation instability. Actuation amplitude
is limited by a linear damping factor ξ (x) = min{1, (4x − 1)/3}, which only affects voxels with resting length less than one
centimeter, and approaches zero (no actuation) as the resting length goes to its lower bound of 0.25 cm.
Actuation of the kth voxel ψk (t) therefore depends on the starting and final phase offsets (φk , φk∗ ) for relative displacement,
and on the starting and final resting lengths (`k , `∗k ) for amplitude. With maximum amplitude A = 0.14 cm and a fixed frequency
f = 4 Hz, we have
ψk (t) = A · sin( 2π f t + φk + t(φk∗ − φk )/τ ) · ξ( `k + t(`∗k − `k )/τ )        (1)
The current length of the kth voxel at time t, which we denote by Lk (t), is the resting length plus the offset and damped
signal ψk (t).
Lk (t) = `k +
t(`∗k − `k )
+ ψk (t)
τ
(2)
Current length is broken down into its constituent parts for a single voxel, under each treatment, in Supplementary Fig. S2.
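Equations (1)–(2) can be read directly as a per-voxel computation. Below is a minimal Python sketch of that computation, using the constants stated above (A = 0.14 cm, f = 4 Hz) and treating the lifetime τ and the per-voxel parameters as inputs; the function and variable names are illustrative, and this is not the Voxelyze implementation.

```python
import math

A = 0.14   # maximum actuation amplitude (cm), as stated in the text
F = 4.0    # actuation frequency (Hz), as stated in the text

def damping(resting_length_cm):
    """Linear damping factor xi(x) = min{1, (4x - 1)/3}; equals 1 at or above 1 cm, 0 at 0.25 cm."""
    return min(1.0, (4.0 * resting_length_cm - 1.0) / 3.0)

def current_length(t, tau, l0, l1, phi0, phi1):
    """Current length L_k(t) of one voxel: developing resting length plus damped actuation.

    l0, l1     -- starting and final resting lengths (cm)
    phi0, phi1 -- starting and final phase offsets (radians)
    """
    rest = l0 + t * (l1 - l0) / tau                    # linear ('ballistic') development of resting length
    phase = phi0 + t * (phi1 - phi0) / tau             # linearly developing phase offset
    psi = A * math.sin(2.0 * math.pi * F * t + phase) * damping(rest)   # Eq. (1)
    return rest + psi                                  # Eq. (2)

# Example: a voxel shrinking from 1.5 cm to 0.5 cm over a hypothetical 25 s lifetime
print(current_length(t=10.0, tau=25.0, l0=1.5, l1=0.5, phi0=0.0, phi1=math.pi / 4))
```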
Evolutionary algorithm.
We employ a standard evolutionary algorithm, Age-Fitness-Pareto Optimization37 , which uses the concept of Pareto dominance
and an objective of age (in addition to fitness) intended to promote diversity among candidate designs. For 30 runs, a population
of 30 robots is evolved for 10000 generations. Every generation, the population is first doubled by creating modified copies of
each individual in the population. The age of each individual is then incremented by one. Next, an additional random individual
(with age zero) is injected into the population. Finally, selection reduces the population down to its original size according to
the two objectives of fitness (maximized) and age (minimized).
The mutation rate is also evolved for each voxel, by maintaining a separate vector of mutation rates which are slightly
modified every time a genotype is copied from parent to child. These 48 independent mutation rates are initialized such that
only a single voxel is mutated on average. Mutations follow a normal distribution (σ` = 0.75 cm, σφ = π/2) and are applied by
first selecting what parameter types to mutate (φk , φk∗ , `k , `∗k ), and then choosing, for each parameter separately, which voxels
to mutate. In Supplemental Materials we provide exact derivations of the expected genotypic impact of mutations in terms of
the proportions of voxels and parameters modified for a given fixed mutation rate λ . There is a negligible difference between
Evo and Evo-Devo in terms of the expected number of parent voxels modified during mutation (Supplementary Fig. S4).
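As a rough illustration of the generational loop just described (doubling the population with mutated copies, aging everyone by one, injecting one random newcomer, then Pareto-based truncation on maximized fitness and minimized age), a compressed Python sketch follows. The `mutate`, `random_genome`, and `evaluate` helpers stand in for the robot-specific operators, and the truncation step is simplified relative to full Age-Fitness-Pareto Optimization.

```python
import random

POP_SIZE = 30

def pareto_dominates(a, b):
    """a dominates b if it is no worse on both objectives (max fitness, min age) and strictly better on one."""
    return (a["fitness"] >= b["fitness"] and a["age"] <= b["age"]
            and (a["fitness"] > b["fitness"] or a["age"] < b["age"]))

def afpo_generation(population, random_genome, mutate, evaluate):
    # 1) double the population with mutated copies (children inherit the parent's age)
    children = [{"genome": mutate(p["genome"]), "age": p["age"]} for p in population]
    population = population + children
    # 2) everyone gets one generation older
    for p in population:
        p["age"] += 1
    # 3) inject one random individual with age zero
    population.append({"genome": random_genome(), "age": 0})
    # 4) evaluate, then truncate back to POP_SIZE, preferring non-dominated individuals
    for p in population:
        p["fitness"] = evaluate(p["genome"])
    front = [p for p in population
             if not any(pareto_dominates(q, p) for q in population if q is not p)]
    if len(front) < POP_SIZE:  # pad with the fittest dominated individuals if the front is small
        rest = sorted((p for p in population if p not in front),
                      key=lambda p: p["fitness"], reverse=True)
        front += rest[:POP_SIZE - len(front)]
    return sorted(front, key=lambda p: p["fitness"], reverse=True)[:POP_SIZE]
```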
Developmental windows.
The amount of development in a particular voxel can range from zero (in the case that starting and final values are equal) to 1.5
cm for the morphology (which ranges from 0.25 cm to 1.75 cm) and π for the controller (which ranges from −π/2 to π/2).
The morphological development window, WL , is the sum of the absolute difference of starting and final resting lengths across
the robot’s 48 voxels, divided by the total amount of possible morphological development.
WL = ( 1 / (48 · 1.5) ) · Σ_{k=1}^{48} |`∗k − `k |        (3)
The controller development window, WΦ , is the sum of the absolute difference of starting and final phase offsets across the
robot’s 48 voxels, divided by the total amount of possible controller development.
WΦ = ( 1 / (48π) ) · Σ_{k=1}^{48} |φk∗ − φk |        (4)
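Equations (3)–(4) amount to two normalized sums. A small Python helper, with illustrative names and the bounds stated above, could look like this:

```python
import math

MAX_L_CHANGE = 1.5        # cm: resting length ranges over [0.25, 1.75]
MAX_PHI_CHANGE = math.pi  # rad: phase offset ranges over [-pi/2, pi/2]

def development_windows(l_start, l_final, phi_start, phi_final):
    """Normalized morphological (W_L) and controller (W_Phi) development windows, Eqs. (3)-(4)."""
    k = len(l_start)  # 48 voxels in the paper
    w_l = sum(abs(lf - ls) for ls, lf in zip(l_start, l_final)) / (k * MAX_L_CHANGE)
    w_phi = sum(abs(pf - ps) for ps, pf in zip(phi_start, phi_final)) / (k * MAX_PHI_CHANGE)
    return w_l, w_phi
```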
Statistical analysis.
We use the Mann-Whitney U test to assess statistical significance.
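For concreteness, such a comparison can be run with SciPy's implementation of the test; the speed arrays below are placeholders, not data from the paper.

```python
from scipy.stats import mannwhitneyu

evo_speeds = [0.31, 0.28, 0.35, 0.30]        # placeholder run-champion speeds, one per run
evo_devo_speeds = [0.52, 0.47, 0.55, 0.50]   # placeholder

stat, p_value = mannwhitneyu(evo_speeds, evo_devo_speeds, alternative="two-sided")
print(p_value)
```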
Source code.
https://github.com/skriegman/how-devo-can-guide-evo contains the source code necessary for reproducing the results reported in this paper.
References
1. Baldwin, J. M. A new factor in evolution. The American Naturalist 30, 441–451 (1896).
2. Morgan, C. L. On modification and variation. Sci. 4, 733–740 (1896).
3. Waddington, C. H. Canalization of development and the inheritance of acquired characters. Nat. 150, 563–565 (1942).
4. Hinton, G. E. & Nowlan, S. J. How learning can guide evolution. Complex systems 1, 495–502 (1987).
5. Floreano, D. & Mondada, F. Evolution of plastic neurocontrollers for situated agents. In From Animals to Animats 4,
Proceedings of the 4th International Conference on Simulation of Adaptive Behavior (SAB 1996), LIS-CONF-1996-001,
402–410 (MIT Press, 1996).
6. Dellaert, F. & Beer, R. D. A developmental model for the evolution of complete autonomous agents. In Proceedings of the
fourth international conference on simulation of adaptive behavior (1996).
7. Eggenberger, P. Evolving morphologies of simulated 3D organisms based on differential gene expression. Procs. Fourth
Eur. Conf. on Artif. Life 205–213 (1997).
8. Bongard, J. C. & Pfeifer, R. Repeated structure and dissociation of genotypic and phenotypic complexity in Artificial
Ontogeny. Proc. The Genet. Evol. Comput. Conf. (GECCO 2001) 829–836 (2001).
9. Doursat, R. Organically grown architectures: Creating decentralized, autonomous systems by embryomorphic engineering.
In Organic computing, 167–199 (Springer, 2009).
10. Ventrella, J. Designing emergence in animated artificial life worlds. In Virtual Worlds, 143–155 (Springer, 1998).
11. Komosinski, M. The framsticks system: versatile simulator of 3d agents and their evolution. Kybernetes 32, 156–173
(2003).
12. Bongard, J. C. Morphological change in machines accelerates the evolution of robust behavior. Proc. Natl. Acad. Sci. 108,
1234–1239 (2011). DOI 10.1073/pnas.1015390108.
13. Kriegman, S., Cheney, N., Corucci, F. & Bongard, J. C. A minimal developmental model can increase evolvability in soft
robots. In Proceedings of the Genetic and Evolutionary Computation Conference, 131–138 (ACM, 2017).
14. Cheney, N., Bongard, J. C. & Lipson, H. Evolving soft robots in tight spaces. In Proceedings of the 2015 annual conference
on Genetic and Evolutionary Computation, 935–942 (ACM, 2015).
15. Miriyev, A., Stack, K. & Lipson, H. Soft material for soft actuators. Nat. Commun. 8 (2017).
16. Wehner, M. et al. An integrated design and fabrication strategy for entirely soft, autonomous robots. Nat. 536, 451–455
(2016).
17. Lipson, H. Challenges and opportunities for design, simulation, and fabrication of soft robots. Soft Robotics 1, 21–27
(2014). DOI 10.1089/soro.2013.0007.
18. Hiller, J. & Lipson, H. Dynamic simulation of soft multimaterial 3d-printed objects. Soft Robotics 1, 88–101 (2014).
19. Wagner, G. P. & Altenberg, L. Perspective: complex adaptations and the evolution of evolvability. Evol. 50, 967–976
(1996).
20. Lipson, H. Principles of modularity, regularity, and hierarchy for scalable systems. The J. Biol. Phys. Chem. 7, 125–128
(2007).
21. Cheney, N., Bongard, J., SunSpiral, V. & Lipson, H. Scalable co-optimization of morphology and control in embodied
machines. arXiv preprint arXiv:1706.06133 (2017).
22. French, R. M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences 3, 128–135 (1999).
23. Szegedy, C. et al. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
24. Carlson, J. & Murphy, R. R. How UGVs physically fail in the field. IEEE Transactions on Robotics 21, 423–437 (2005).
25. Nguyen, A., Yosinski, J. & Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable
images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 427–436 (2015).
26. Pfeifer, R., Lungarella, M. & Iida, F. The challenges ahead for bio-inspired ‘soft’ robotics. Commun. ACM 55, 76–87
(2012).
27. Shepherd, R. F. et al. Multigait soft robot. Proc. Natl. Acad. Sci. 108, 20400–20403 (2011).
28. Fisher, R. A. The genetical theory of natural selection (Oxford University Press, 1930).
29. Lande, R. Adaptation to an extraordinary environment by evolution of phenotypic plasticity and genetic assimilation. J.
Evolutionary Biology 22, 1435–1446 (2009).
30. Bateson, P. How do sensitive periods arise and what are they for? Animal Behav. 27, 470–486 (1979).
31. Fawcett, T. W. & Frankenhuis, W. E. Adaptive explanations for sensitive windows in development. Front. Zool. 12, S3
(2015). DOI 10.1186/1742-9994-12-S1-S3.
32. Moczek, A. P. et al. The role of developmental plasticity in evolutionary innovation. Proc. Royal Soc. Lond. B: Biol. Sci.
278, 2705–2713 (2011).
33. Snell-Rood, E. C. An overview of the evolutionary causes and consequences of behavioural plasticity. Animal Behav. 85,
1004–1011 (2013).
34. Snell-Rood, E. C. Selective processes in development: Implications for the costs and benefits of phenotypic plasticity.
Integr. & Comp. Biol. 52 (2012).
35. Murren, C. J. et al. Constraints on the evolution of phenotypic plasticity: limits and costs of phenotype and plasticity.
Hered. 115, 293–301 (2015).
36. Cheney, N., MacCurdy, R., Clune, J. & Lipson, H. Unshackling evolution: evolving soft robots with multiple materials and
a powerful generative encoding. In Proceedings of the 15th annual conference on Genetic and evolutionary computation,
167–174 (ACM, 2013).
37. Schmidt, M. & Lipson, H. Age-fitness pareto optimization. In Genetic Programming Theory and Practice VIII, 129–146
(Springer, 2011).
Acknowledgments
This work was supported by NSF awards PECASE-0953837 and INSPIRE-1344227, as well as ARO contract W911NF-16-10304. We thank Shawn Beaulieu, Francesco Corucci, Chris Fusting and Frank Veenstra for useful discussions.
Supplementary Video
Supplementary Video S1.
https://youtu.be/nWbpegOCeQY provides a brief overview of the results reported in this paper (2:22 minutes).
Supplementary Video S2.
https://youtu.be/N8AThp4sZZ0 presents the paper in a nutshell (18 seconds): An unfit robot (1) evolves into a walker (2).
Its descendant ‘discovers’ rolling late in life (3). A mutation causes rolling to show up earlier in its descendant (4). The final
descendant rolls throughout its life (5).
Supplementary Discussion
Embodiment.
We consider an agent to be embodied if its output affects its input. This relationship between an agent and its environment may
be modeled as a partially observable Markov decision process (POMDP), where:
• S is a set of states,
• A is a set of actions, and
• T is a set of state transition probabilities.
At each time period, the environment is in some state s ∈ S. The agent takes an action a ∈ A, which causes the environment
to transition to state s′ with probability T(s′ | s, a). A control policy π specifies the action a = π(b) a robot will take next given
its current belief b about the new state of the environment. Usually, the goal is to find, given T , the optimal policy π ∗ which
yields the highest overall performance.
The morphology of an agent can be considered as part of T since it mediates the transition between s and s′, yet it is not
part of π. By changing morphology we were able to modify a subset of T in such a way that facilitated the search for π ∗ . This
might also be possible in disembodied agents if other dimensions of T can be changed by some search process such as to
facilitate the search for π ∗ .
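To make the abstraction concrete, here is a toy Python sketch (names and dynamics invented for illustration) in which the morphology parameterizes the transition function T while the policy π stays fixed, so the same controller can score very differently under different bodies:

```python
import random

class Environment:
    """Toy POMDP-style loop in which morphology is folded into the transition function T."""
    def __init__(self, morphology):
        self.morphology = morphology   # part of T, not of the policy pi
        self.state = 0.0

    def step(self, action):
        # T(s' | s, a): the same action moves the agent differently under different bodies
        self.state += self.morphology * action + random.gauss(0.0, 0.01)
        observation = self.state + random.gauss(0.0, 0.1)   # partial observability
        return observation

def rollout(policy, morphology, horizon=100):
    env, obs = Environment(morphology), 0.0
    for _ in range(horizon):
        obs = env.step(policy(obs))
    return env.state   # e.g. net displacement, analogous to the fitness used in the paper

policy = lambda obs: 1.0
print(rollout(policy, morphology=0.2), rollout(policy, morphology=1.0))
```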
Supplementary Methods
Mutations.
Mutations are applied by first choosing what material properties (i.e. voxel-level parameters such as resting length and phase
offset) to mutate, and then choosing, separately for each property, which voxels to modify them in. For each of the n material
properties, we select it with independent probability p = 1/n. If none are selected, we randomly choose one. This occurs with
probability (1 − p)n .
Hence the number of selected material properties for mutation is a random variable S which follows a binomial distribution
truncated on S ≥ 1 such that the entire untruncated probability mass at S = 0 is placed on top of S = 1.
Pr(S = s | n) =   0                                            for s = 0
                  np(1 − p)^(n−1) + (1 − p)^n                   for s = 1
                  (n choose s) p^s (1 − p)^(n−s)                for s > 1
The expected number of selected material properties is then
E(S) = 1 · Pr(S = 1 | n) + Σ_{s=2}^{n} s · Pr(S = s | n)                                  (5)
     = np(1 − p)^(n−1) + (1 − p)^n + Σ_{s=2}^{n} s (n choose s) p^s (1 − p)^(n−s)          (6)
     = (1 − p)^n + Σ_{s=1}^{n} s (n choose s) p^s (1 − p)^(n−s)                            (7)
     = (1 − p)^n + np                                                                      (8)
     = (1 − p)^n + 1,                                                                      (9)
where the last step uses p = 1/n.
The selected material property of each voxel is mutated independently with probability λ , and thus the expected number of
genotype elements mutated given K total voxels is
δgene = λ K · E(S)        (10)
Dividing by the length of the genome, nK, the expected proportion of genotype elements mutated is
πgene = (λ / n) · E(S)        (11)
Note that bilaterally symmetrical properties have the same expected values since they are half the size but a mutation affects
two voxels.
We have K = 48 total voxels, and n = {2, 4} material properties for our two main experimental treatments {Evo, Evo-Devo},
respectively:
            n = 2       n = 4
  δgene     60λ         63.1875λ
  πgene     0.625λ      0.3291λ
The expected number of voxels mutated is lower than the average number of genotype elements mutated because there can be
overlap/redundancies among the voxels selected between the material properties. To calculate the average number of voxels
mutated we need to consider a hierarchy of binomial distributions.
The number of material properties mutated within the kth voxel given S selected, M | S, follows Binom(S, λ) for every k:
Pr(M = m | S, λ) = (S choose m) λ^m (1 − λ)^(S−m)        (12)
For convenience, let’s denote the probability that at least one mutation occurs to the kth voxel as θ:
θ = Pr(M > 0 | S, λ) = 1 − (1 − λ)^S        (13)
Then the number of voxels mutated, V, across a total of K voxels and S selected material properties, follows Binom(K, θ):
Pr(V = v | S, K, λ, n) = (K choose v) θ^v (1 − θ)^(K−v)        (14)
And the expected number of voxels mutated (out of K total) is
δvox = E(V | K, λ, n)                                                                           (15)
     = E_S[ E_V(V | S, K, λ, n) ]                                                               (16)
     = E_S( Kθ | S, K, λ, n )                                                                   (17)
     = K [ 1 − E_S( (1 − λ)^S | λ, n ) ]                                                        (18)
     = K { 1 − [ (1 − λ)(1 − p)^n + Σ_{s=1}^{n} (1 − λ)^s (n choose s) p^s (1 − p)^(n−s) ] }     (19)
     = K { 1 − (1 − p)^n [ ((λp − 1)/(p − 1))^n − λ ] }                                          (20)
There is an extremely tight bound on the proportion of voxels mutated, πvox = δvox /K, for any n > 1 (Supplementary
Fig. S4). Thus, mutations in Evo (n = 2) and Evo-Devo (n = 4) have practically the same genotypic impact in terms of the
number of voxels modified. For completeness, the following table displays δvox for the specific values of λ considered by our
hyperparameter sweep (K = 48) (Supplementary Fig. S3).
            λ = 1/48   2/48   4/48   8/48   16/48   24/48   32/48   48/48
  n = 2        1.25    2.48   4.92   9.67   18.67   27      34.67   48
  n = 4        1.3     2.6    5.14   10.05  19.17   27.46   34.98   48
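The closed-form expressions above are straightforward to sanity-check numerically. The sketch below simulates the mutation operator as described in this supplement (select each of the n properties with probability p = 1/n, fall back to a single random property if none is selected, then mutate each voxel independently with probability λ for each selected property) and compares Monte Carlo averages against E(S) = (1 − p)^n + 1 and the δvox expression in Eq. (20). It is written from the description here, not taken from the released source code, and the estimates carry Monte Carlo noise.

```python
import random

def simulate(n, K=48, lam=0.5, trials=20_000):
    s_total, vox_total = 0, 0
    for _ in range(trials):
        selected = [i for i in range(n) if random.random() < 1.0 / n]
        if not selected:                   # fall back to one random property
            selected = [random.randrange(n)]
        s_total += len(selected)
        mutated = set()
        for _prop in selected:             # each selected property mutates each voxel w.p. lambda
            for k in range(K):
                if random.random() < lam:
                    mutated.add(k)
        vox_total += len(mutated)
    return s_total / trials, vox_total / trials

def predicted(n, K=48, lam=0.5):
    p = 1.0 / n
    e_s = (1 - p) ** n + 1                                                     # Eq. (9)
    d_vox = K * (1 - (1 - p) ** n * (((lam * p - 1) / (p - 1)) ** n - lam))    # Eq. (20)
    return e_s, d_vox

for n in (2, 4):
    print(n, simulate(n), predicted(n))   # empirical vs. analytical E(S) and delta_vox
```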
Figure S1. Evolutionary change during 30 Evo-Devo trials. The amount of morphological development, WL (see Equation
3), controller development, WΦ (see Equation 4), and fitness, F, for the lineages of the 30 Evo-Devo run champions.
Evolutionary time, T , moves from the oldest ancestor (left) to the run champion (right). A general trend emerges wherein
lineages initially increase their morphological development in T (rising red curves) and subsequently decrease morphological
development to almost zero (falling red curves). Five of the 30 evolutionary trials, annotated by ∗, fell into local optima.
Figure S2. Experimental treatments. The phase of an oscillating global temperature (A, F) is offset in the kth voxel by a
linear function from φk to φk∗ (B, G). The resting length of the kth voxel is a linear function from `k to `∗k (C, H). For Evo, there
is no development, so φk = φk∗ and `k = `∗k . The offset actuation is added on top of the resting length to give the current length
of the kth voxel (D, I). These example voxel-level changes occur across ontogenetic time (t), independently in each of the 48
voxels, and together interact with the environment to generate robot-level velocity (E, J). Velocities are averaged across
intervals of two actuation cycles.
Figure S3. Mutation rate sweep. Median fitness (with 95% bootstrapped confidence intervals) under various mutation rates,
λ , a hyperparameter defined in Supplemental Materials which affects the probability a voxel is mutated. In the main
experiment of this paper, the mutation rate is evolved for each voxel independently, and is constantly changing. In this mutation
rate sweep, λ is held uniform across all voxels. The fastest walking and rolling behaviors are produced with the lowest
mutation rate we tested (λ = 1/48), although higher mutation rates facilitate the discovery of rolling which is much faster than
walking. Without development, the search space has a single spike of high fitness corresponding to this rolling behavior. One
cannot do better than random search in such a space. When λ = 1, optimizing Evo morphologies is random search, and this is
the only mutation rate where Evo does not require significantly more generations than Evo-Devo to find the faster design type.
This can be observed for λ ∈ {1/6, 1/3, 1/2, 2/3, 1} at the generations where the slopes of the fitness curves tend to increase
dramatically. However, the best two treatments (Evo-Devo at λ = 1/2 and λ = 2/3), as measured by the highest median speed
at the end of optimization, have development, and they are significantly faster than random search (with and without
development) (p < 0.01).
Figure S4. Mutational impact. The expected proportion of voxels modified, πvox , where n is the number of material
properties per voxel (voxel-level parameters such as `k and φk ) and λ is the mutation rate (the probability of mutating a voxel
for a selected material property). A derivation is provided in Supplemental Materials. In this paper, we have treatments Evo,
with n = 2, and Evo-Devo, with n = 4. There is an extremely tight bound on the proportion of voxels mutated for any n > 1. At
λ = 1 every voxel must be mutated while at λ = 0 there can be no voxels mutated. The arch between these two points is
limited by the possibility of overlap (selecting the same voxel more than once).
Gradient Descent with Random Initialization:
Fast Global Convergence for Nonconvex Phase Retrieval
Yuxin Chen∗
Yuejie Chi†
Jianqing Fan‡
Cong Ma‡
arXiv:1803.07726v1 [stat.ML] 21 Mar 2018
March 2018
Abstract
This paper considers the problem of solving systems of quadratic equations, namely, recovering an
object of interest x♮ ∈ Rn from m quadratic equations / samples yi = (ai⊤x♮)², 1 ≤ i ≤ m. This problem,
also dubbed as phase retrieval, spans multiple domains including physical sciences and machine learning.
We investigate the efficiency of gradient descent (or Wirtinger flow) designed for the nonconvex least
squares problem. We prove that under Gaussian designs, gradient descent — when randomly initialized —
yields an ε-accurate solution in O(log n + log(1/ε)) iterations given nearly minimal samples, thus
achieving near-optimal computational and sample complexities at once. This provides the first global
convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need of (i)
carefully-designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes.
All of these are achieved by exploiting the statistical models in analyzing optimization algorithms, via a
leave-one-out approach that enables the decoupling of certain statistical dependency between the gradient
descent iterates and the data.
1  Introduction
Suppose we are interested in learning an unknown object x\ ∈ Rn , but only have access to a few quadratic
equations of the form
yi = (ai⊤ x♮)²,    1 ≤ i ≤ m,        (1)
where yi is the sample we collect and ai is the design vector known a priori. Is it feasible to reconstruct x\
in an accurate and efficient manner?
The problem of solving systems of quadratic equations (1) is of fundamental importance and finds applications in numerous contexts. Perhaps one of the best-known applications is the so-called phase retrieval
problem arising in physical sciences [CESV13, SEC+ 15]. In X-ray crystallography, due to the ultra-high frequency of the X-rays, the optical sensors and detectors are incapable of recording the phases of the diffractive
waves; rather, only intensity measurements are collected. The phase retrieval problem comes down to reconstructing the specimen of interest given intensity-only measurements. If one thinks of x\ as the specimen
under study and uses {yi } to represent the intensity measurements, then phase retrieval is precisely about
inverting the quadratic system (1).
Moving beyond physical sciences, the above problem also spans various machine learning applications.
One example is mixed linear regression, where one wishes to estimate two unknown vectors β1 and β2 from
unlabeled linear measurements [CYC14, BWY14]. The acquired data {ai , bi }1≤i≤m take the form of either
bi ≈ ai⊤β1 or bi ≈ ai⊤β2 , without knowing which of the two vectors generates the data. In a simple symmetric
case with β1 = −β2 = x♮ (so that bi ≈ ±ai⊤x♮), the squared measurements yi = bi² ≈ (ai⊤x♮)² become
the sufficient statistics, and hence mixed linear regression can be converted to learning x♮ from {ai , yi }.
Author names are sorted alphabetically.
∗ Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA; Email: [email protected].
† Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Email: [email protected].
‡ Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA; Email: {jqfan, congm}@princeton.edu.
Furthermore, the quadratic measurement model in (1) allows one to represent a single neuron associated with
a quadratic activation function, where {ai , yi } are the data and x\ encodes the parameters to be learned.
As described in [SJL17, LMZ17], learning neural nets with quadratic activations involves solving systems of
quadratic equations.
1.1  Nonconvex optimization via gradient descent
A natural strategy for inverting the system of quadratic equations (1) is to solve the following nonconvex
least squares estimation problem
minimize_{x∈Rn}   f(x) := (1/(4m)) Σ_{i=1}^{m} [ (ai⊤x)² − yi ]².        (2)
Under Gaussian designs where the ai's are i.i.d. N(0, In), the solution to (2) is known to be exact — up to some
global sign — with high probability, as soon as the number m of equations (samples) exceeds the order of
the number n of unknowns [BCMN14]. However, the loss function in (2) is highly nonconvex, thus resulting
in severe computational challenges. With this issue in mind, can we still hope to find the global minimizer
of (2) via low-complexity algorithms which, ideally, run in time proportional to that taken to read the data?
Fortunately, in spite of nonconvexity, a variety of optimization-based methods are shown to be effective
in the presence of proper statistical models. Arguably, one of the simplest algorithms for solving (2) is vanilla
gradient descent (GD), which attempts recovery via the update rule
xt+1 = xt − ηt ∇f xt ,
t = 0, 1, · · ·
(3)
with ηt being the stepsize / learning rate. The above iterative procedure is also dubbed Wirtinger flow for
phase retrieval, which can accommodate the complex-valued case as well [CLS15]. This simple algorithm
is remarkably efficient under Gaussian designs: in conjunction with carefully-designed initialization and
stepsize rules, GD provably converges to the truth x\ at a linear rate1 , provided that the ratio m/n of the
number of equations to the number of unknowns exceeds some logarithmic factor [CLS15, Sol14, MWCC17].
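For readers who want to reproduce the qualitative behavior discussed below, a compact NumPy sketch of the update rule (3) applied to the loss (2) is given here, using a Gaussian design, a constant stepsize, and the random initialization x⁰ ∼ N(0, n⁻¹In) studied later in this paper. It is an illustrative re-implementation, not the authors' code, and the parameter values are only indicative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, eta, iters = 100, 1000, 0.1, 200

x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)            # ||x^natural||_2 = 1
A = rng.normal(size=(m, n))                 # Gaussian design vectors a_i as rows
y = (A @ x_true) ** 2                       # quadratic samples y_i = (a_i^T x^natural)^2

def grad(x):
    # gradient of f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
    r = (A @ x) ** 2 - y
    return (A.T @ (r * (A @ x))) / m

x = rng.normal(scale=1.0 / np.sqrt(n), size=n)   # random initialization x^0 ~ N(0, n^{-1} I_n)
for t in range(iters):
    x -= eta * grad(x)
    if t % 20 == 0:
        dist = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
        print(t, dist)   # stays roughly flat for a few tens of iterations, then decays geometrically
```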
One crucial element in prior convergence analysis is initialization. In order to guarantee linear convergence, prior works typically recommend spectral initialization or its variants [CLS15,CC17,WGE17,ZZLC17,
MWCC17, LL17, MM17]. Specifically, the spectral method forms an initial estimate x0 using the (properly
scaled) leading eigenvector of a certain data matrix. Two important features are worth emphasizing:
• x⁰ falls within a local ℓ2-ball surrounding x♮ with a reasonably small radius, where f(·) enjoys strong convexity;
• x⁰ is incoherent with all the design vectors {ai} — in the sense that |ai⊤x⁰| is reasonably small for all 1 ≤ i ≤ m — and hence x⁰ falls within a region where f(·) enjoys desired smoothness conditions.
These two properties taken collectively allow gradient descent to converge rapidly from the very beginning.
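For contrast with random initialization, the following is a bare-bones sketch of one common form of spectral initialization: take the leading eigenvector of the data matrix Y = (1/m) Σᵢ yᵢ aᵢ aᵢ⊤ and rescale it using the mean of the yᵢ's as a norm estimate. The exact scaling and truncation rules differ across the papers cited above; this sketch only conveys the idea.

```python
import numpy as np

def spectral_init(A, y):
    """Leading eigenvector of (1/m) * sum_i y_i a_i a_i^T, rescaled by a crude norm estimate."""
    m = A.shape[0]
    Y = (A * y[:, None]).T @ A / m          # (1/m) * sum_i y_i a_i a_i^T
    eigvals, eigvecs = np.linalg.eigh(Y)
    direction = eigvecs[:, -1]              # eigenvector of the largest eigenvalue
    scale = np.sqrt(max(np.mean(y), 0.0))   # E[y_i] = ||x^natural||_2^2 under the Gaussian design
    return scale * direction
```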
1.2  Random initialization?
The enormous success of spectral initialization gives rise to a curious question: is carefully-designed initialization necessary for achieving fast convergence? Obviously, vanilla GD cannot start from arbitrary points,
since it may get trapped in undesirable stationary points (e.g. saddle points). However, is there any simpler
initialization approach that allows to avoid such stationary points and that works equally well as spectral
initialization?
A strategy that practitioners often like to employ is to initialize GD randomly. The advantage is clear:
compared with spectral methods, random initialization is model-agnostic and is usually more robust vis-à-vis model mismatch. Despite its wide use in practice, however, GD with random initialization is poorly
understood in theory. One way to study this method is through a geometric lens [SQW16]: under Gaussian
designs, the loss function f (·) (cf. (2)) does not have any spurious local minima as long as the sample size
1 An iterative algorithm is said to enjoy linear convergence if the iterates {x^t} converge geometrically fast to the minimizer x♮.
Figure 1: The relative `2 error vs. iteration count for GD with random initialization, plotted semilogarithmically. The results are shown for n = 100, 200, 500, 800, 1000 with m = 10n and ηt ≡ 0.1.
m is on the order of n log³ n. Moreover, all saddle points are strict [GHJY15], meaning that the associated
Hessian matrices have at least one negative eigenvalue if they are not local minima. Armed with these two
conditions, the theory of Lee et al. [LSJR16, LPP+ 17] implies that vanilla GD converges almost surely to
the truth. However, the convergence rate remains unsettled. In fact, we are not aware of any theory that
guarantees polynomial-time convergence of vanilla GD for phase retrieval in the absence of carefully-designed
initialization.
Motivated by this, we aim to pursue a formal understanding about the convergence properties of GD
with random initialization. Before embarking on theoretical analyses, we first assess its practical efficiency
through numerical experiments. Generate the true object x\ and the initial guess x0 randomly as
x♮ ∼ N(0, n⁻¹ In)    and    x⁰ ∼ N(0, n⁻¹ In).
We vary the number n of unknowns (i.e. n = 100, 200, 500, 800, 1000), set m = 10n, and take a constant
stepsize ηt ≡ 0.1. Here the measurement vectors are generated i.i.d. from Gaussian distributions, i.e. ai ∼ N(0, In) for 1 ≤ i ≤ m. The relative ℓ2 errors dist(x^t, x♮)/∥x♮∥2 of the GD iterates in a random trial are
plotted in Figure 1, where
dist(x^t, x♮) := min{ ∥x^t − x♮∥2 , ∥x^t + x♮∥2 }        (4)
represents the `2 distance between xt and x\ modulo the unrecoverable global sign.
In all experiments carried out in Figure 1, we observe two stages for GD: (1) Stage 1: the relative error
of xt stays nearly flat; (2) Stage 2: the relative error of xt experiences geometric decay. Interestingly, Stage
1 lasts only for a few tens of iterations. These numerical findings taken together reveal appealing numerical
efficiency of GD in the presence of random initialization — it attains 5-digit accuracy within about 200 iterations!
To further illustrate this point, we take a closer inspection of the signal component hxt , x\ ix\ and the
perpendicular component xt − hxt , x\ ix\ , where we normalize kx\ k2 = 1 for simplicity. Denote by kxt⊥ k2 the
`2 norm of the perpendicular component. We highlight two important and somewhat surprising observations
that allude to why random initialization works.
• Exponential growth of the magnitude ratio of the signal to the perpendicular components. The ratio,
|hxt , x\ i| / kxt⊥ k2 , grows exponentially fast throughout the execution of the algorithm, as demonstrated in
Figure 2(a). This metric |hxt , x\ i| / kxt⊥ k2 in some sense captures the signal-to-noise ratio of the running
iterates.
• Exponential growth of the signal strength in Stage 1. While the `2 estimation error of xt may not drop
significantly during Stage 1, the size |hxt , x\ i| of the signal component increases exponentially fast and
becomes the dominant component within several tens of iterations, as demonstrated in Figure 2(b). This
helps explain why Stage 1 lasts only for a short duration.
Figure 2: (a) The ratio |hxt , x\ i| / kxt⊥ k2 , and (b) the size |hxt , x\ i| of the signal component and the
`2 error vs. iteration count, both plotted on semilogarithmic axes. The results are shown for n =
100, 200, 500, 800, 1000 with m = 10n, ηt ≡ 0.1, and kx\ k2 = 1.
The central question then amounts to whether one can develop a mathematical theory to interpret such
intriguing numerical performance. In particular, how many iterations does Stage 1 encompass, and how fast
can the algorithm converge in Stage 2?
1.3  Main findings
The objective of the current paper is to demystify the computational efficiency of GD with random initialization, thus bridging the gap between theory and practice. Assuming a tractable random design model in
which ai ’s follow Gaussian distributions, our main findings are summarized in the following theorem. Here
and throughout, the notation f(n) ≲ g(n) or f(n) = O(g(n)) (resp. f(n) ≳ g(n), f(n) ≍ g(n)) means that
there exist constants c1 , c2 > 0 such that f(n) ≤ c1 g(n) (resp. f(n) ≥ c2 g(n), c1 g(n) ≤ f(n) ≤ c2 g(n)).
Theorem 1. Fix x♮ ∈ Rn with ∥x♮∥2 = 1. Suppose that ai ∼ N(0, In) i.i.d. for 1 ≤ i ≤ m, x⁰ ∼ N(0, n⁻¹In),
and ηt ≡ η = c/∥x♮∥2² for some sufficiently small constant c > 0. Then with probability approaching one,
there exist some sufficiently small constant 0 < γ < 1 and Tγ ≲ log n such that the GD iterates (3) obey
    dist(x^t, x♮) ≤ γ(1 − ρ)^(t−Tγ),    ∀ t ≥ Tγ,
for some absolute constant 0 < ρ < 1, provided that the sample size m ≳ n poly log(m).
Remark 1. The readers are referred to Theorem 2 for a more general statement.
Here, the stepsize is taken to be a fixed constant throughout all iterations, and we reuse all data across
all iterations (i.e. no sample splitting is needed to establish this theorem). The GD trajectory is divided into
2 stages: (1) Stage 1 consists of the first Tγ iterations, corresponding to the first tens of iterations discussed
in Section 1.2; (2) Stage 2 consists of all remaining iterations, where the estimation error contracts linearly.
Several important implications / remarks follow immediately.
• Stage 1 takes O(log n) iterations. When seeded with a random initial guess, GD is capable of entering a
local region surrounding x♮ within Tγ ≲ log n iterations, namely,
    dist(x^{Tγ}, x♮) ≤ γ,
for some sufficiently small constant γ > 0. Even though Stage 1 may not enjoy linear convergence in terms
of the estimation error, it is of fairly short duration.
• Stage 2 takes O(log(1/ε)) iterations. After entering the local region, GD converges linearly to the ground
truth x♮ with a contraction rate 1 − ρ. This tells us that GD reaches ε-accuracy (in a relative sense) within
O(log(1/ε)) iterations.
• Near linear-time computational complexity. Taken collectively, these imply that the iteration complexity of GD with random initialization is O(log n + log(1/ε)). Given that the cost of each iteration mainly lies in calculating the gradient ∇f(x^t), the whole algorithm takes nearly linear time, namely, it enjoys a computational complexity proportional to the time taken to read the data (modulo some logarithmic factor).

• Near-minimal sample complexity. The preceding computational guarantees occur as soon as the sample size exceeds m ≳ n poly log(m). Given that one needs at least n samples to recover n unknowns, the sample complexity of randomly initialized GD is optimal up to some logarithmic factor.

• Saddle points? The GD iterates never hit the saddle points (see Figure 3 for an illustration). In fact, after a constant number of iterations at the very beginning, GD will follow a path that increasingly distances itself from the set of saddle points as the algorithm progresses. There is no need to adopt sophisticated saddle-point escaping schemes developed in generic optimization theory (e.g. perturbed GD [JGN+ 17]).

• Weak dependency w.r.t. the design vectors. As we will elaborate in Section 4, the statistical dependency between the GD iterates {x^t} and certain components of the design vectors {ai} stays at an exceedingly weak level. Consequently, the GD iterates {x^t} proceed as if fresh samples were employed in each iteration. This statistical observation plays a crucial role in characterizing the dynamics of the algorithm without the need of sample splitting.

It is worth emphasizing that the entire trajectory of GD is automatically confined within a certain region enjoying favorable geometry. For example, as we shall make precise in Section 4, the GD iterates are always incoherent with the design vectors, stay sufficiently away from any saddle point, and exhibit desired smoothness conditions. Such delicate geometric properties underlying the GD trajectory are not explained by prior works. In light of this, convergence analysis based on global geometry [SQW16] — which provides valuable insights into algorithm designs with arbitrary initialization — results in suboptimal (or even pessimistic) computational guarantees when analyzing a concrete algorithm like GD. In contrast, the current paper establishes near-optimal performance guarantees by paying particular attention to finer dynamics of the algorithm. As will be seen later, this is accomplished by heavily exploiting statistical models in each iterative update.

Figure 3: The trajectory of (αt, βt), where αt = |⟨x^t, x♮⟩| and βt = ∥x^t_⊥∥2 represent respectively the size of the signal component and that of the perpendicular component of the GD iterates (assume ∥x♮∥2 = 1). (a) The results are shown for n = 1000 with m = 10n and ηt = 0.01, 0.05, 0.1, the same instance as plotted in Figure 1. (b) The results are shown for n = 1000 with m approaching infinity and ηt = 0.01, 0.05, 0.1. The red dots represent the population-level saddle points.

2  Why random initialization works?

Before diving into the proof of the main theorem, we pause to develop intuitions regarding why random initialization is expected to work. We will build our understanding step by step: (i) we first investigate the dynamics of the population gradient sequence (the case where we have infinite samples); (ii) we then turn to the finite-sample case and present a heuristic argument assuming independence between the iterates and the design vectors; (iii) finally, we argue that the true trajectory is remarkably close to the one heuristically analyzed in Step (ii), which arises from a key property concerning the “near-independence” between {x^t} and the design vectors {ai}.

Without loss of generality, we assume x♮ = e1 throughout this section, where e1 denotes the first standard basis vector. For notational simplicity, we denote by

    x^t_∥ := x^t_1    and    x^t_⊥ := [x^t_i]_{2≤i≤n}        (5)

the signal and the perpendicular components of x^t, respectively.

In the population limit (i.e. with infinite samples), the update is driven by the population gradient ∇F(x) := E[∇f(x)], which is essentially computed by

    ∇F(x) = E[∇f(x)] = E[ { (ai⊤x)² − (ai⊤x♮)² } ai ai⊤ x ],

assuming that x and the ai's are independent, and is given by

    ∇F(x) = ( 3∥x∥2² − 1 ) x − 2 ( x♮⊤x ) x♮.

Simple algebraic manipulation reveals the dynamics for both the signal and the perpendicular components:

    x^{t+1}_∥ = { 1 + 3η [ 1 − ∥x^t∥2² ] } x^t_∥ ;        (8a)
    x^{t+1}_⊥ = { 1 + η [ 1 − 3∥x^t∥2² ] } x^t_⊥ .        (8b)

(Here, we do not take the absolute value of x^t_∥. As we shall see later, the x^t_∥'s are of the same sign throughout the execution of the algorithm.) Assuming that η is sufficiently small and recognizing that ∥x^t∥2² = αt² + βt², we arrive at the following population-level state evolution for both αt and βt (cf. (7)):

    α_{t+1} = { 1 + 3η [ 1 − ( αt² + βt² ) ] } αt ;        (9a)
    β_{t+1} = { 1 + η [ 1 − 3 ( αt² + βt² ) ] } βt .        (9b)

This recursive system has three fixed points:

    (α, β) = (1, 0),    (α, β) = (0, 0),    and    (α, β) = (0, 1/√3),

which correspond to the global minimizer, the local maximizer, and the saddle points, respectively.

We make note of the following key observations in the presence of a randomly initialized x⁰, which will be formalized later in Lemma 1:
1. the ratio αt / βt of the size of the signal component to that of the perpendicular component increases exponentially fast;
2. the size αt of the signal component keeps growing until it plateaus around 1;
3. the size βt of the perpendicular component drops towards zero.

In other words, when randomly initialized, (αt, βt) converges to (1, 0) rapidly, thus indicating rapid convergence of x^t to the truth x♮, without getting stuck around undesirable saddle points. We also illustrate these phenomena numerically. Set n = 1000, ηt ≡ 0.1 and x⁰ ∼ N(0, n⁻¹In). Figure 4 displays the dynamics of αt / βt, αt, and βt, which are precisely as discussed above.
descent with random initialization is expected to work. We will build our understanding step by step: (i) we
first investigate the dynamics of the population gradient sequence (the case where we have infinite samples);
(ii) we then turn to the finite-sample case and present a heuristic argument assuming independence between
the iterates and the design vectors; (iii) finally, we argue that the true trajectory is remarkably close to
the one heuristically analyzed in the previous step, which arises from a key property concerning the "near-independence" between {xt } and the design vectors {ai }.
Without loss of generality, we assume x\ = e1 throughout this section, where e1 denotes the first standard
basis vector. For notational simplicity, we denote by
x^t_∥ := x^t_1    and    x^t_⊥ := [x^t_i]_{2≤i≤n}    (5)
the first entry and the 2nd through the nth entries of xt , respectively. Since x\ = e1 , it is easily seen that
x^t_∥ e_1 = ⟨x^t, x^\⟩ x^\  (signal component)    and    [0 ; x^t_⊥] = x^t − ⟨x^t, x^\⟩ x^\  (perpendicular component)    (6)
represent respectively the components of x^t along and perpendicular to the signal direction. In what follows,
we focus our attention on the following two quantities that reflect the sizes of the preceding two components2
α_t := x^t_∥    and    β_t := ‖x^t_⊥‖_2 .    (7)
Without loss of generality, assume that α0 > 0.
2.1
Population dynamics
To start with, we consider the unrealistic case where the iterates {xt } are constructed using the population
gradient (or equivalently, the gradient when the sample size m approaches infinity), i.e.
xt+1 = xt − η∇F (xt ).
Here, ∇F (x) represents the population gradient given by
∇F(x) := (3‖x‖_2^2 − 1) x − 2 ((x^\)^⊤ x) x^\ ,
which can be computed by ∇F(x) = E[∇f(x)] = E[ ((a_i^⊤x)^2 − (a_i^⊤x^\)^2) a_i a_i^⊤ x ] assuming that x and
the ai ’s are independent. Simple algebraic manipulation reveals the dynamics for both the signal and the
perpendicular components:
x^{t+1}_∥ = (1 + 3η(1 − ‖x^t‖_2^2)) x^t_∥ ;    (8a)
x^{t+1}_⊥ = (1 + η(1 − 3‖x^t‖_2^2)) x^t_⊥ .    (8b)
Assuming that η is sufficiently small and recognizing that kxt k22 = αt2 + βt2 , we arrive at the following
population-level state evolution for both αt and βt (cf. (7)):
α_{t+1} = (1 + 3η(1 − (α_t^2 + β_t^2))) α_t ;    (9a)
β_{t+1} = (1 + η(1 − 3(α_t^2 + β_t^2))) β_t .    (9b)
This recursive system has three fixed points:
(α, β) = (1, 0),    (α, β) = (0, 0),    and    (α, β) = (0, 1/√3),
which correspond to the global minimizer, the local maximizer, and the saddle points, respectively, of the
population objective function.
We make note of the following key observations in the presence of a randomly initialized x0 , which will
be formalized later in Lemma 1:
2 Here, we do not take the absolute value of x^t_∥. As we shall see later, the x^t_∥'s are of the same sign throughout the execution of the algorithm.
Figure 4: Population-level state evolution, plotted semilogarithmically: (a) the ratio αt /βt vs. iteration count,
and (b) αt and βt vs. iteration count. The results are shown for n = 1000, ηt ≡ 0.1, and x0 ∼ N (0, n−1 In )
(assuming α0 > 0 though).
• the ratio αt /βt of the size of the signal component to that of the perpendicular component increases
exponentially fast;
• the size αt of the signal component keeps growing until it plateaus around 1;
• the size βt of the perpendicular component eventually drops towards zero.
In other words, when randomly initialized, (αt , β t ) converges to (1, 0) rapidly, thus indicating rapid convergence of xt to the truth x\ , without getting stuck at any undesirable saddle points. We also illustrate these
phenomena numerically. Set n = 1000, ηt ≡ 0.1 and x0 ∼ N (0, n−1 In ). Figure 4 displays the dynamics of
αt /βt , αt , and βt , which are precisely as discussed above.
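For concreteness, the population-level recursion (9) can be simulated in a few lines. The following Python/NumPy sketch is purely illustrative (it is not part of the formal analysis) and uses the same n, step size, and initialization scale as in Figure 4.

import numpy as np

np.random.seed(0)
n, eta, T = 1000, 0.1, 50

# Random initialization x0 ~ N(0, n^{-1} I_n); alpha_0 / beta_0 are its signal / perpendicular sizes.
x0 = np.random.randn(n) / np.sqrt(n)
alpha, beta = abs(x0[0]), np.linalg.norm(x0[1:])   # take alpha_0 > 0, as assumed in the text

for t in range(T):
    s = alpha**2 + beta**2
    # Population-level state evolution (9a)-(9b).
    alpha, beta = (1 + 3*eta*(1 - s)) * alpha, (1 + eta*(1 - 3*s)) * beta
    if t % 10 == 0 or t == T - 1:
        print(f"t={t:3d}  alpha={alpha:.3e}  beta={beta:.3e}  alpha/beta={alpha/beta:.3e}")

Running this reproduces the qualitative behavior of Figure 4: the ratio alpha_t/beta_t grows exponentially, alpha_t plateaus around 1, and beta_t eventually decays towards zero.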
2.2
Finite-sample analysis: a heuristic treatment
We now move on to the finite-sample regime, and examine how many samples are needed in order for the
population dynamics to be reasonably accurate. Notably, the arguments in this subsection are heuristic in
nature, but they are useful in developing insights into the true dynamics of the GD iterates.
Rewrite the gradient update rule (3) as
x^{t+1} = x^t − η∇f(x^t) = x^t − η∇F(x^t) − η r(x^t),    with r(x^t) := ∇f(x^t) − ∇F(x^t),    (10)
where ∇f(x) = m^{−1} Σ_{i=1}^m [(a_i^⊤x)^2 − (a_i^⊤x^\)^2] a_i a_i^⊤ x. Assuming (unreasonably) that the iterate x^t is independent of {a_i}, the central limit theorem (CLT) allows us to control the size of the fluctuation term r(x^t).
Take the signal component as an example: simple calculations give
x^{t+1}_∥ = x^t_∥ − η [∇F(x^t)]_1 − η r_1(x^t),
where
r_1(x) := (1/m) Σ_{i=1}^m { [(a_i^⊤x)^3 − a_{i,1}^2 (a_i^⊤x)] a_{i,1} − E[ ((a_i^⊤x)^3 − a_{i,1}^2 (a_i^⊤x)) a_{i,1} ] }    (11)
with a_{i,1} the first entry of a_i. Owing to the preceding independence assumption, r_1 is the sum of m i.i.d. zero-mean random variables. Assuming that x^t never blows up so that ‖x^t‖_2 = O(1), one can apply the CLT to
demonstrate that
|r_1(x^t)| ≲ √( Var(r_1(x^t)) · poly log(m) ) ≍ √( poly log(m) / m )    (12)
Figure 5: Illustration of the region satisfying the “near-independence” property. Here, the green arrows
represent the directions of {ai }1≤i≤20 , and the blue region consists of all points such that the first entry
r1 (x) of the fluctuation r(x) = ∇f (x) − ∇F (x) is bounded above in magnitude by |xtk |/5 (or |hx, x\ i|/5).
with high probability, which is often negligible compared to the other terms. For instance, for the random initial guess x^0 ∼ N(0, n^{−1} I_n) one has x^0_∥ ≳ 1/√(n log n) with probability approaching one, telling us that
|r_1(x^0)| ≲ √( poly log(m) / m ) ≪ |x^0_∥|    and    |r_1(x^0)| ≪ | x^0_∥ − η [∇F(x^0)]_1 | ≍ x^0_∥
as long as m ≳ n poly log(m). Similar observations hold true for the perpendicular component x^t_⊥.
In summary, by assuming independence between xt and {ai }, we arrive at an approximate state evolution
for the finite-sample regime:
α_{t+1} ≈ (1 + 3η(1 − (α_t^2 + β_t^2))) α_t ;    (13a)
β_{t+1} ≈ (1 + η(1 − 3(α_t^2 + β_t^2))) β_t ,    (13b)
with the proviso that m & n poly log(m).
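The size of the fluctuation term can also be probed empirically. The sketch below is illustrative only (it fixes x^\ = e_1 with unit norm, and the sample sizes are arbitrary choices): it draws a point x^0 independent of the data and compares the first entry r_1(x^0) of r(x^0) = ∇f(x^0) − ∇F(x^0) with the signal size |x^0_∥|.

import numpy as np

def grad_f(x, A, y):
    # Empirical gradient of f(x) = (1/4m) * sum_i [(a_i^T x)^2 - y_i]^2.
    Ax = A @ x
    return A.T @ ((Ax**2 - y) * Ax) / A.shape[0]

def grad_F(x):
    # Population gradient for x_natural = e_1 (unit norm).
    e1 = np.zeros_like(x); e1[0] = 1.0
    return (3 * (x @ x) - 1) * x - 2 * x[0] * e1

np.random.seed(1)
n = 200
x0 = np.random.randn(n) / np.sqrt(n)          # fresh x0, independent of the data
for m in [2 * n, 10 * n, 50 * n]:
    A = np.random.randn(m, n)
    y = A[:, 0] ** 2                           # y_i = (a_i^T x_natural)^2 with x_natural = e_1
    r1 = grad_f(x0, A, y)[0] - grad_F(x0)[0]   # first entry of the fluctuation term
    print(f"m = {m:6d}   |r1(x0)| = {abs(r1):.2e}   |x0_par| = {abs(x0[0]):.2e}")

The printed |r1(x0)| drops below |x0_par| once m is a sufficiently large multiple of n, in line with the heuristic above.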
2.3
Key analysis ingredients: near-independence and leave-one-out tricks
The preceding heuristic argument justifies the approximate validity of the population dynamics, under an
independence assumption that never holds unless we use fresh samples in each iteration. On closer inspection,
what we essentially need is the fluctuation term r(xt ) (cf. (10)) being well-controlled. For instance, when
focusing on the signal component, one needs |r_1(x^t)| ≪ x^t_∥ for all t ≥ 0. In particular, in the beginning
iterations, |x^t_∥| is as small as O(1/√n). Without the independence assumption, the CLT types of results
fail to hold due to the complicated dependency between xt and {ai }. In fact, one can easily find many
points that result in much larger remainder terms (as large as O(1)) and that violate the approximate state
evolution (13). See Figure 5 for a caricature of the region where the fluctuation term r(xt ) is well-controlled.
As can be seen, it only occupies a tiny fraction of the neighborhood of x\ .
Fortunately, despite the complicated dependency across iterations, one can provably guarantee that xt
always stays within the preceding desirable region in which r(xt ) is well-controlled. The key idea is to exploit
a certain “near-independence” property between {xt } and {ai }. Towards this, we make use of a leave-one-out
trick for analyzing nonconvex iterative methods. In particular, we construct auxiliary sequences that are
1. independent of certain components of the design vectors {ai }; and
2. extremely close to the original gradient sequence {xt }t≥0 .
Figure 6: Illustration of the leave-one-out and random-sign sequences. (a) {xt } is constructed using all data
{ai , yi }; (b) {xt,(l) } is constructed by discarding the lth sample {al , yl }; (c) {xt,sgn } is constructed by using
auxiliary design vectors {a_i^{sgn}}, where a_i^{sgn} is obtained by randomly flipping the sign of the first entry of a_i;
(d) {x^{t,sgn,(l)}} is constructed by discarding the lth sample {a_l^{sgn}, y_l}.
As it turns out, we need to construct several auxiliary sequences {xt,(l) }t≥0 , {xt,sgn }t≥0 and {xt,sgn,(l) }t≥0 ,
where {xt,(l) }t≥0 is independent of the lth sampling vector al , {xt,sgn }t≥0 is independent of the sign information of the first entries of all ai ’s, and {xt,sgn,(l) } is independent of both. In addition, these auxiliary
sequences are constructed by slightly perturbing the original data (see Figure 6 for an illustration), and hence
one can expect all of them to stay close to the original sequence throughout the execution of the algorithm.
Taking these two properties together, one can propagate the above statistical independence underlying each
auxiliary sequence to the true iterates {xt }, which in turn allows us to obtain near-optimal control of the
fluctuation term r(xt ). The details are postponed to Section 4.
3
Related work
Solving systems of quadratic equations, or phase retrieval, has been studied extensively in the recent literature; see [SEC+ 15] for an overview. One popular method is convex relaxation (e.g. PhaseLift [CSV13]), which
is guaranteed to work as long as m/n exceeds some large enough constant [CL14,DH14,CCG15,CZ15,KRT17].
However, the resulting semidefinite program is computationally prohibitive for solving large-scale problems.
To address this issue, [CLS15] proposed the Wirtinger flow algorithm with spectral initialization, which
provides the first convergence guarantee for nonconvex methods without sample splitting. Both the sample
and computation complexities were further improved by [CC17] with an adaptive truncation strategy. Other
nonconvex phase retrieval methods include [NJS13,CLM+ 16,Sol17,WGE17,ZZLC17,WGSC17,CL16,DR17,
GX16,CFL15,Wei15,BEB17,TV17,CLW17,ZWGC17,QZEW17,ZCL16,YYF+ 17,CWZG17,Zha17,MXM18].
Almost all of these nonconvex methods require carefully-designed initialization to guarantee a sufficiently
accurate initial point. One exception is the approximate message passing algorithm proposed in [MXM18],
which works as long as the correlation between the truth and the initial signal is bounded away from
zero. This, however, does not accommodate the case when the initial signal strength is vanishingly small
(like random initialization). Other works [Zha17, LGL15] explored the global convergence of alternating
minimization / projection with random initialization which, however, require fresh samples at least in each
of the first O(log n) iterations in order to enter the local basin. In addition, [LMZ17] explored low-rank
recovery from quadratic measurements with near-zero initialization. Using a truncated least squares objective, [LMZ17] established approximate (but non-exact) recovery of over-parametrized GD. Notably, if we do
not over-parametrize the phase retrieval problem, then GD with near-zero initialization is (nearly) equivalent
to running the power method for spectral initialization3 , which can be understood using prior theory.
3 More specifically, the GD update x^{t+1} = x^t − m^{−1} η_t Σ_{i=1}^m [(a_i^⊤ x^t)^2 − y_i] a_i a_i^⊤ x^t ≈ (I + m^{−1} η_t Σ_{i=1}^m y_i a_i a_i^⊤) x^t when x^t ≈ 0, which is equivalent to a power iteration (without normalization) w.r.t. the data matrix I + m^{−1} η_t Σ_{i=1}^m y_i a_i a_i^⊤.
Another related line of research is the design of generic saddle-point escaping algorithms, where the goal is
to locate a second-order stationary point (i.e. the point with a vanishing gradient and a positive-semidefinite
Hessian). As mentioned earlier, it has been shown by [SQW16] that as soon as m ≳ n log^3 n, all local
minima are global and all the saddle points are strict. With these two geometric properties in mind, saddle-point escaping algorithms are guaranteed to converge globally for phase retrieval. Existing saddle-point
escaping algorithms include but are not limited to Hessian-based methods [SQW16] (see also [AAZB+ 16,
AZ17, JGN+ 17] for some reviews), noisy stochastic gradient descent [GHJY15], perturbed gradient descent
[JGN+ 17], and normalized gradient descent [MSK17]. On the one hand, the results developed in these works
are fairly general: they establish polynomial-time convergence guarantees under a few generic geometric
conditions. On the other hand, the iteration complexity derived therein may be pessimistic when specialized
to a particular problem.
Take phase retrieval and the perturbed gradient descent algorithm [JGN+ 17] as an example. It has been
shown in [JGN+ 17, Theorem 5] that for an objective function that is L-gradient Lipschitz, ρ-Hessian Lipschitz, (θ, γ, ζ)-strict saddle, and also locally α-strongly convex and β-smooth (see definitions in [JGN+ 17]),
it takes4
O( L / [min(θ, γ^2/ρ)]^2 + (β/α) · log(1/ε) ) = O( n^3 + n log(1/ε) )
iterations (ignoring logarithmic factors) for perturbed gradient descent to converge to ε-accuracy. In fact,
even with Nesterov's accelerated scheme [JNJ17], the iteration complexity for entering the local region is at
least
O( L^{1/2} ρ^{1/4} / [min(θ, γ^2/ρ)]^{7/4} ) = O( n^{2.5} ).
Both of them are much larger than the O(log n + log(1/ε)) complexity established herein. This is primarily
due to the following facts: (i) the Lipschitz constants of both the gradients and the Hessians are quite large,
i.e. L ≳ n and ρ ≳ n (ignoring log factors), which are, however, treated as dimension-independent constants
in the aforementioned papers; (ii) the local condition number is also large, i.e. β/α ≳ n. In comparison, as
suggested by our theory, the GD iterates with random initialization are always confined within a restricted
region enjoying much more benign geometry than the worst-case / global characterization.
Furthermore, the above saddle-escaping first-order methods are often more complicated than vanilla
GD. Despite its algorithmic simplicity and wide use in practice, the convergence rate of GD with random
initialization remains largely unknown. In fact, Du et al. [DJL+ 17] demonstrated that there exist nonpathological functions such that GD can take exponential time to escape the saddle points when initialized
randomly. In contrast, as we have demonstrated, saddle points are not an issue for phase retrieval; the GD
iterates with random initialization never get trapped in the saddle points.
Finally, the leave-one-out arguments have been invoked to analyze other high-dimensional statistical
estimation problems including robust M-estimators [EKBB+ 13,EK15], statistical inference for Lasso [JM15],
likelihood ratio test for logistic regression [SCC17], etc. In addition, [ZB17, CFMW17, AFWZ17] made use
of the leave-one-out trick to derive entrywise perturbation bounds for eigenvectors resulting from certain
spectral methods. The techniques have also been applied by [MWCC17, LMCC18] to establish local linear
convergence of vanilla GD for nonconvex statistical estimation problems in the presence of proper spectral
initialization.
4
Analysis
In this section, we first provide a more general version of Theorem 1 as follows. It spells out exactly the
conditions on x0 in order for the gradient method with random initialization to succeed.
Theorem 2. Fix x^\ ∈ R^n. Suppose a_i ∼_{i.i.d.} N(0, I_n) (1 ≤ i ≤ m) and m ≥ C n log^{13} m for some sufficiently
large constant C > 0. Assume that the initialization x^0 is independent of {a_i} and obeys
⟨x^0, x^\⟩ / ‖x^\‖_2^2 ≥ 1/√(n log n)    and    (1 − 1/log n) ‖x^\‖_2 ≤ ‖x^0‖_2 ≤ (1 + 1/log n) ‖x^\‖_2 ,    (14)
4 When applied to phase retrieval with m ≍ n poly log n, one has L ≳ n, ρ ≳ n, θ ≍ γ ≍ 1 (see [SQW16, Theorem 2.2]), α ≍ 1, and β ≳ n (ignoring logarithmic factors).
and that the stepsize satisfies ηt ≡ η = c/kx\ k22 for some sufficiently small constant c > 0. Then there
exist a sufficiently small absolute constant 0 < γ < 1 and Tγ . log n such that with probability at least
1 − O(m2 e−1.5n ) − O(m−9 ),
1. the GD iterates (3) converge linearly to x^\ after t ≥ Tγ , namely,
dist(x^t, x^\) ≤ (1 − η/2)^{t−Tγ} · γ ‖x^\‖_2 ,    ∀ t ≥ Tγ ;
2. the magnitude ratio of the signal component (⟨x^t, x^\⟩/‖x^\‖_2^2) x^\ to the perpendicular component x^t − (⟨x^t, x^\⟩/‖x^\‖_2^2) x^\ obeys
‖ (⟨x^t, x^\⟩/‖x^\‖_2^2) x^\ ‖_2 / ‖ x^t − (⟨x^t, x^\⟩/‖x^\‖_2^2) x^\ ‖_2 ≳ (1/√(n log n)) (1 + c_1 η^2)^t ,    t = 0, 1, · · ·    (15)
for some constant c1 > 0.
Several remarks regarding Theorem 2 are in order.
• The random initialization x^0 ∼ N(0, n^{−1} ‖x^\‖_2^2 I_n) obeys the condition (14) with probability exceeding 1 − O(1/√log n), which in turn establishes Theorem 1.
• Our current sample complexity reads m & n log13 m, which is optimal up to logarithmic factors. It is
possible to further reduce the logarithmic factors using more refined probabilistic tools, which we leave for
future work.
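As a rough numerical illustration of the behavior described by Theorem 2 (a sketch under arbitrary illustrative parameter choices, not a verification of the stated constants), one can run randomly initialized GD on a synthetic instance and monitor dist(x^t, x^\) together with α_t and β_t:

import numpy as np

np.random.seed(2)
n, m, eta, T = 500, 6000, 0.1, 200
x_nat = np.zeros(n); x_nat[0] = 1.0              # ground truth x^\ = e_1, so ||x^\||_2 = 1
A = np.random.randn(m, n)
y = (A @ x_nat) ** 2

x = np.random.randn(n) / np.sqrt(n)              # random initialization ~ N(0, n^{-1} I_n)
for t in range(T + 1):
    dist = min(np.linalg.norm(x - x_nat), np.linalg.norm(x + x_nat))   # distance up to global sign
    alpha, beta = x[0], np.linalg.norm(x[1:])
    if t % 25 == 0:
        print(f"t={t:4d}  dist={dist:.3e}  alpha={alpha:+.3e}  beta={beta:.3e}")
    Ax = A @ x
    x = x - eta * A.T @ ((Ax**2 - y) * Ax) / m   # GD step on f, stepsize eta = c/||x^\||_2^2

One typically observes a plateau of duration O(log n) followed by geometric decay of dist(x^t, x^\), together with steady growth of |alpha_t|/beta_t, mirroring the two claims of the theorem.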
The remainder of this section is then devoted to proving Theorem 2. Without loss of generality5 , we will
assume throughout that
x^\ = e_1    and    x^0_1 > 0.    (16)
Given this, one can decompose
x^t = x^t_∥ e_1 + [0 ; x^t_⊥] ,    (17)
where xtk = xt1 and xt⊥ = [xti ]2≤i≤n as introduced in Section 2. For notational simplicity, we define
α_t := x^t_∥    and    β_t := ‖x^t_⊥‖_2 .    (18)
Intuitively, αt represents the size of the signal component, whereas βt measures the size of the component
perpendicular to the signal direction. In view of (16), we have α0 > 0.
4.1
Outline of the proof
To begin with, it is easily seen that if αt and βt (cf. (18)) obey |αt − 1| ≤ γ/2 and βt ≤ γ/2, then
dist(x^t, x^\) ≤ ‖x^t − x^\‖_2 ≤ |α_t − 1| + β_t ≤ γ.
Therefore, our first step — which is concerned with proving dist(xt , x\ ) ≤ γ — comes down to the following
two steps.
1. Show that if αt and βt satisfy the approximate state evolution (see (13)), then there exists some Tγ =
O (log n) such that
|α_{Tγ} − 1| ≤ γ/2    and    β_{Tγ} ≤ γ/2,    (19)
which would immediately imply that
dist(x^{Tγ}, x^\) ≤ γ.
Along the way, we will also show that the ratio αt /βt grows exponentially fast.
5 This is because of the rotational invariance of Gaussian distributions.
2. Justify that αt and βt satisfy the approximate state evolution with high probability, using (some variants
of) leave-one-out arguments.
After t ≥ Tγ , we can invoke prior theory concerning local convergence to show that with high probability,
dist(x^t, x^\) ≤ (1 − ρ)^{t−Tγ} ‖x^{Tγ} − x^\‖_2 ,    ∀ t > Tγ ,
for some constant 0 < ρ < 1 independent of n and m.
4.2
Dynamics of approximate state evolution
This subsection formalizes our intuition in Section 2: as long as the approximate state evolution holds, then
one can find Tγ . log n obeying condition (19). In particular, the approximate state evolution is given by
α_{t+1} = (1 + 3η(1 − (α_t^2 + β_t^2)) + η ζ_t) α_t ,    (20a)
β_{t+1} = (1 + η(1 − 3(α_t^2 + β_t^2)) + η ρ_t) β_t ,    (20b)
where {ζt } and {ρt } represent the perturbation terms. Our result is this:
Lemma 1. Let γ > 0 be some sufficiently small constant, and consider the approximate state evolution (20).
Suppose the initial point obeys
α_0 ≥ 1/√(n log n)    and    1 − 1/log n ≤ √(α_0^2 + β_0^2) ≤ 1 + 1/log n ,    (21)
and the perturbation terms satisfy
max{ |ζ_t| , |ρ_t| } ≤ c_3 / log n ,    t = 0, 1, · · ·
for some sufficiently small constant c3 > 0.
(a) Let
Tγ := min{ t : |α_t − 1| ≤ γ/2 and β_t ≤ γ/2 } .    (22)
Then for any sufficiently large n and m and any sufficiently small constant η > 0, one has
Tγ ≲ log n,    (23)
and there exist some constants c5 , c10 > 0 independent of n and m such that
1/(2√(n log n)) ≤ α_t ≤ 2,    c_5 ≤ β_t ≤ 1.5,    and    (α_{t+1}/α_t) / (β_{t+1}/β_t) ≥ 1 + c_{10} η^2 ,    0 ≤ t ≤ Tγ .    (24)
(b) If we define
T_0 := min{ t : α_{t+1} ≥ c_6 / log^5 m } ,    (25)
T_1 := min{ t : α_{t+1} > c_4 } ,    (26)
for some arbitrarily small constants c4 , c6 > 0, then
1) T0 ≤ T1 ≤ Tγ . log n; T1 − T0 . log log m; Tγ − T1 . 1;
2) For T0 < t ≤ Tγ , one has αt ≥ c6 / log5 m.
Proof. See Appendix B.
Remark 2. Recall that γ is sufficiently small and (α, β) = (1, 0) represents the global minimizer. Since
|α_0 − 1| ≈ 1, one has Tγ > 0, which denotes the first time when the iterates enter the local region surrounding
the global minimizer. In addition, the fact that α_0 ≲ 1/√n gives T0 > 0 and T1 > 0, both of which indicate
the first time when the signal strength is sufficiently large.
Lemma 1 makes precise that under the approximate state evolution, the first stage enjoys a fairly short
duration Tγ . log n. Moreover, the size of the signal component grows faster than that of the perpendicular
component for any iteration t < Tγ , thus confirming the exponential growth of αt /βt .
In addition, Lemma 1 identifies two midpoints T0 and T1 when the sizes of the signal component αt
become sufficiently large. These are helpful in our subsequent analysis. In what follows, we will divide
Stage 1 (which consists of all iterations up to Tγ ) into two phases:
• Phase I : consider the duration 0 ≤ t ≤ T0 ;
• Phase II : consider all iterations with T0 < t ≤ Tγ .
We will justify the approximate state evolution (20) for these two phases separately.
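The content of Lemma 1 is easy to explore numerically. The sketch below iterates the approximate state evolution (20) with perturbations of the maximal allowed size c3/log n chosen with adversarial signs, and reports the resulting T_gamma for several n; the constants gamma, c3, and eta are illustrative choices only.

import numpy as np

def stage1_length(n, eta=0.1, gamma=0.1, c3=0.05):
    # Iterate the approximate state evolution (20) with worst-sign perturbations
    # |zeta_t|, |rho_t| <= c3/log(n), from a typical randomly initialized point.
    alpha, beta = 1.0 / np.sqrt(n * np.log(n)), 1.0
    for t in range(10**5):
        if abs(alpha - 1) <= gamma / 2 and beta <= gamma / 2:
            return t                                    # this is T_gamma in (22)
        zeta, rho = -c3 / np.log(n), c3 / np.log(n)     # slow alpha down, prop beta up
        s = alpha**2 + beta**2
        alpha = (1 + 3*eta*(1 - s) + eta*zeta) * alpha
        beta = (1 + eta*(1 - 3*s) + eta*rho) * beta
    return None

for n in [10**3, 10**4, 10**5, 10**6]:
    print(f"n = {n:8d}   T_gamma = {stage1_length(n):4d}   log n = {np.log(n):5.1f}")

The printed T_gamma grows proportionally to log n, consistent with (23).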
4.3
Motivation of the leave-one-out approach
As we have alluded in Section 2.3, the main difficulty in establishing the approximate state evolution (20)
lies in controlling the perturbation terms to the desired orders (i.e. |ζ_t|, |ρ_t| ≪ 1/log n in Lemma 1). To
achieve this, we advocate the use of (some variants of) leave-one-out sequences to help establish certain
“near-independence” between xt and certain components of {ai }.
We begin by taking a closer look at the perturbation terms. Regarding the signal component, it is easily
seen from (11) that
x^{t+1}_∥ = (1 + 3η(1 − ‖x^t‖_2^2)) x^t_∥ − η r_1(x^t),
where the perturbation term r1 (xt ) obeys
r_1(x^t) = I_1 + I_2 − I_3 − I_4 ,    (27)
where
I_1 := [1 − (x^t_∥)^2] x^t_∥ ( (1/m) Σ_{i=1}^m a_{i,1}^4 − 3 ) ,
I_2 := [1 − 3 (x^t_∥)^2] (1/m) Σ_{i=1}^m a_{i,1}^3 a_{i,⊥}^⊤ x^t_⊥ ,
I_3 := 3 x^t_∥ ( (1/m) Σ_{i=1}^m (a_{i,⊥}^⊤ x^t_⊥)^2 a_{i,1}^2 − ‖x^t_⊥‖_2^2 ) ,
I_4 := (1/m) Σ_{i=1}^m (a_{i,⊥}^⊤ x^t_⊥)^3 a_{i,1} .
Here and throughout the paper, for any vector v ∈ Rn , v⊥ ∈ Rn−1 denotes the 2nd through the nth entries
of v. Due to the dependency between xt and {ai }, it is challenging to obtain sharp control of some of these
terms.
In what follows, we use the term I4 to explain and motivate our leave-one-out approach. As discussed in
Section 2.3, I4 needs to be controlled to the level O(1/(√n poly log(n))). This precludes us from seeking
a uniform bound on the function h(x) := m^{−1} Σ_{i=1}^m (a_{i,⊥}^⊤ x_⊥)^3 a_{i,1} over all x (or even all x within the set C
incoherent with {a_i}), since the uniform bound sup_{x∈C} |h(x)| can be O(√n / poly log(n)) times larger than
In order to control I4 to the desirable order, one strategy is to approximate it by a sum of independent
variables and then invoke the CLT. Specifically, we first rewrite I4 as
m
I4 =
1 X > t 3
a x
|ai,1 | ξi
m i=1 i,⊥ ⊥
with ξi := sgn(ai,1 ). Here sgn(·) denotes the usual sign function. To exploit the statistical independence
between ξi and {|ai,1 |, ai,⊥ }, we would like to identify some vector independent of ξi that well approximates
xt . If this can be done, then one may treat I4 as a weighted independent sum of {ξi }. Viewed in this light,
our plan is the following:
1. Construct a sequence {xt,sgn } independent of {ξi } obeying xt,sgn ≈ xt , so that
I_4 ≈ (1/m) Σ_{i=1}^m (a_{i,⊥}^⊤ x^{t,sgn}_⊥)^3 |a_{i,1}| ξ_i ,    where we denote w_i := (a_{i,⊥}^⊤ x^{t,sgn}_⊥)^3 |a_{i,1}| .
Algorithm 1 The lth leave-one-out sequence
Input: {a_i}_{1≤i≤m, i≠l}, {y_i}_{1≤i≤m, i≠l}, and x^0.
Gradient updates: for t = 0, 1, 2, . . . , T − 1 do
    x^{t+1,(l)} = x^{t,(l)} − η_t ∇f^{(l)}(x^{t,(l)}) ,    (29)
where x^{0,(l)} = x^0 and f^{(l)}(x) = (1/4m) · Σ_{i: i≠l} [ (a_i^⊤x)^2 − (a_i^⊤x^\)^2 ]^2 .
Algorithm 2 The random-sign sequence
Input: {|a_{i,1}|}_{1≤i≤m}, {a_{i,⊥}}_{1≤i≤m}, {ξ_i^{sgn}}_{1≤i≤m}, {y_i}_{1≤i≤m}, x^0.
Gradient updates: for t = 0, 1, 2, . . . , T − 1 do
    x^{t+1,sgn} = x^{t,sgn} − η_t ∇f^{sgn}(x^{t,sgn}) ,    (30)
where x^{0,sgn} = x^0 and f^{sgn}(x) = (1/4m) Σ_{i=1}^m [ (a_i^{sgn⊤}x)^2 − (a_i^{sgn⊤}x^\)^2 ]^2 with a_i^{sgn} := [ ξ_i^{sgn} |a_{i,1}| ; a_{i,⊥} ] .
One can then apply standard concentration results (e.g. the Bernstein inequality) to control I4 , as long as
none of the weight wi is exceedingly large.
2. Demonstrate that the weight w_i is well-controlled, or equivalently, a_{i,⊥}^⊤ x^{t,sgn}_⊥ (1 ≤ i ≤ m) is not much
larger than its typical size. This can be accomplished by identifying another sequence {x^{t,(i)}} independent
of a_i such that x^{t,(i)} ≈ x^t ≈ x^{t,sgn}, followed by the argument:
    a_{i,⊥}^⊤ x^{t,sgn}_⊥ ≈ a_{i,⊥}^⊤ x^t_⊥ ≈ a_{i,⊥}^⊤ x^{t,(i)}_⊥ ≲ √(log m) ‖x^{t,(i)}_⊥‖_2 ≈ √(log m) ‖x^t_⊥‖_2 .    (28)
Here, the inequality follows from standard Gaussian tail bounds and the independence between a_i and
x^{t,(i)}. This explains why we would like to construct {x^{t,(i)}} for each 1 ≤ i ≤ m.
As we will detail in the next subsection, such auxiliary sequences are constructed by leaving out a small
amount of relevant information from the collected data before running the GD algorithm, which is a variant
of the “leave-one-out” approach rooted in probability theory and random matrix theory.
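A small Monte Carlo experiment makes the need for (near-)independence concrete. The sketch below (illustrative sizes) evaluates I_4 = m^{-1} Σ_i (a_{i,⊥}^⊤ x_⊥)^3 a_{i,1} at a unit vector drawn independently of the data and at a vector deliberately aligned with one design vector.

import numpy as np

np.random.seed(3)
n, m = 500, 5000
A = np.random.randn(m, n)
a1, a_perp = A[:, 0], A[:, 1:]                 # first entries and remaining coordinates of the a_i

def I4(x_perp):
    return np.mean((a_perp @ x_perp) ** 3 * a1)

x_ind = np.random.randn(n - 1)                 # independent of the data
x_ind /= np.linalg.norm(x_ind)
x_bad = a_perp[0] / np.linalg.norm(a_perp[0])  # heavily data-dependent: aligned with a_{1,perp}

print(f"independent x : I4 = {I4(x_ind):+.3e}   (1/sqrt(n) = {1/np.sqrt(n):.3e} for reference)")
print(f"aligned x     : I4 = {I4(x_bad):+.3e}")

Only the independent point exhibits the small, O(1/√n)-type size; the aligned point produces a markedly larger remainder, which is exactly the situation the leave-one-out constructions rule out along the GD trajectory.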
4.4
Leave-one-out and random-sign sequences
We now describe how to design auxiliary sequences to help establish certain independence properties between
the gradient iterates {xt } and the design vectors {ai }. In the sequel, we formally define the three sets of
auxiliary sequences {xt,(l) }, {xt,sgn }, {xt,sgn,(l) } as introduced in Section 2.3 and Section 4.3.
• Leave-one-out sequences {xt,(l) }t≥0 . For each 1 ≤ l ≤ m, we introduce a sequence {xt,(l) }, which drops
the lth sample and runs GD w.r.t. the auxiliary objective function
f^{(l)}(x) = (1/4m) Σ_{i: i≠l} [ (a_i^⊤x)^2 − (a_i^⊤x^\)^2 ]^2 .    (32)
See Algorithm 1 for details and also Figure 6(a) for an illustration. One of the most important features of
{x^{t,(l)}} is that all of its iterates are statistically independent of (a_l, y_l), and hence are incoherent with a_l
with high probability, in the sense that |a_l^⊤ x^{t,(l)}| ≲ √(log m) ‖x^{t,(l)}‖_2. Such incoherence properties further
allow us to control both a_l^⊤ x^t and a_l^⊤ x^{t,sgn} (see (28)), which is crucial for controlling the size of the
residual terms (e.g. r_1(x^t) as defined in (11)).
• Random-sign sequence {x^{t,sgn}}_{t≥0}. Introduce a collection of auxiliary design vectors {a_i^{sgn}}_{1≤i≤m} defined as
    a_i^{sgn} := [ ξ_i^{sgn} |a_{i,1}| ; a_{i,⊥} ] ,    (33)
Algorithm 3 The lth leave-one-out and random-sign sequence
Input: {|a_{i,1}|}_{1≤i≤m, i≠l}, {a_{i,⊥}}_{1≤i≤m, i≠l}, {ξ_i^{sgn}}_{1≤i≤m, i≠l}, {y_i}_{1≤i≤m, i≠l}, x^0.
Gradient updates: for t = 0, 1, 2, . . . , T − 1 do
    x^{t+1,sgn,(l)} = x^{t,sgn,(l)} − η_t ∇f^{sgn,(l)}(x^{t,sgn,(l)}) ,    (31)
where x^{0,sgn,(l)} = x^0 and f^{sgn,(l)}(x) = (1/4m) Σ_{i: i≠l} [ (a_i^{sgn⊤}x)^2 − (a_i^{sgn⊤}x^\)^2 ]^2 with a_i^{sgn} := [ ξ_i^{sgn} |a_{i,1}| ; a_{i,⊥} ] .
where {ξ_i^{sgn}}_{1≤i≤m} is a set of Rademacher random variables independent of {a_i}, i.e.
    ξ_i^{sgn} = +1 with probability 1/2, and −1 otherwise, independently across 1 ≤ i ≤ m.    (34)
In words, a_i^{sgn} is generated by randomly flipping the sign of the first entry of a_i. To simplify the notations
hereafter, we also denote
    ξ_i = sgn(a_{i,1}).    (35)
As a result, a_i and a_i^{sgn} differ only by a single bit of information. With these auxiliary design vectors in
place, we generate a sequence {xt,sgn } by running GD w.r.t. the auxiliary loss function
f^{sgn}(x) = (1/4m) Σ_{i=1}^m [ (a_i^{sgn⊤}x)^2 − (a_i^{sgn⊤}x^\)^2 ]^2 .    (36)
One simple yet important feature associated with these new design vectors is that they produce the same measurements as {a_i}:
    (a_i^⊤ x^\)^2 = (a_i^{sgn⊤} x^\)^2 = |a_{i,1}|^2 ,    1 ≤ i ≤ m.    (37)
See Figure 6(b) for an illustration and Algorithm 2 for the detailed procedure. This sequence is introduced
in order to “randomize” certain Gaussian polynomials (e.g. I4 in (27)), which in turn enables optimal control
of these quantities. This is particularly crucial at the initial stage of the algorithm.
• Leave-one-out and random-sign sequences {x^{t,sgn,(l)}}_{t≥0}. Furthermore, we also need to introduce another
collection of sequences {x^{t,sgn,(l)}} by simultaneously employing the new design vectors {a_i^{sgn}} and discarding
a single sample (a_l^{sgn}, y_l). This enables us to propagate the kinds of independence properties across
the above two sets of sequences, which is useful in demonstrating that x^t is jointly "nearly-independent"
of both a_l and {sgn(a_{i,1})}. See Algorithm 3 and Figure 6(c).
As a remark, all of these leave-one-out and random-sign procedures are assumed to start from the same
initial point as the original sequence, namely,
x^0 = x^{0,(l)} = x^{0,sgn} = x^{0,sgn,(l)} ,    1 ≤ l ≤ m.    (38)
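A compact implementation sketch of these constructions is given below (illustrative problem sizes; not the exact experimental setup of Figure 7). It runs the original sequence {x^t}, one leave-one-out sequence {x^{t,(l)}} (Algorithm 1), and the random-sign sequence {x^{t,sgn}} (Algorithm 2) from the common initial point in (38), and prints the differences tracked by the induction hypotheses of Section 4.5.

import numpy as np

np.random.seed(4)
n, m, eta, T, l = 200, 2000, 0.1, 60, 0
A = np.random.randn(m, n)
y = A[:, 0] ** 2                                      # measurements for x^\ = e_1

def grad(x, A_, y_):
    Ax = A_ @ x
    return A_.T @ ((Ax**2 - y_) * Ax) / A_.shape[0]

# Random-sign design vectors (33): flip the sign of the first entry of each a_i.
xi_sgn = np.random.choice([-1.0, 1.0], size=m)
A_sgn = A.copy(); A_sgn[:, 0] = xi_sgn * np.abs(A[:, 0])
y_sgn = A_sgn[:, 0] ** 2                              # identical to y, cf. (37)

mask = np.arange(m) != l                              # drop the l-th sample (Algorithm 1)
x = x_loo = x_sgn = np.random.randn(n) / np.sqrt(n)   # common initialization, cf. (38)

for t in range(T):
    x, x_loo, x_sgn = (x - eta * grad(x, A, y),
                       x_loo - eta * grad(x_loo, A[mask], y[mask]),
                       x_sgn - eta * grad(x_sgn, A_sgn, y_sgn))
    if t % 15 == 0:
        print(f"t={t:3d}  ||x - x_loo|| = {np.linalg.norm(x - x_loo):.2e}  "
              f"||x - x_sgn|| = {np.linalg.norm(x - x_sgn):.2e}")

The two printed differences remain tiny throughout Stage 1 and then shrink geometrically, which is the qualitative picture depicted in Figure 7.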
4.5 Justification of approximate state evolution for Phase I of Stage 1
Recall that Phase I consists of the iterations 0 ≤ t ≤ T0 , where
T_0 = min{ t : α_{t+1} ≥ c_6 / log^5 m } .    (39)
Our goal here is to show that the approximate state evolution (20) for both the size αt of the signal component
and the size βt of the perpendicular component holds true throughout Phase I. Our proof will be inductive in
nature. Specifically, we will first identify a set of induction hypotheses that are helpful in proving the validity
of the approximate state evolution (20), and then proceed by establishing these hypotheses via induction.
4.5.1
Induction hypotheses
For the sake of clarity, we first list all the induction hypotheses.
max_{1≤l≤m} ‖x^t − x^{t,(l)}‖_2 ≤ β_t (1 + 1/log m)^t C_1 √(n log^5 m) / m ,    (40a)
max_{1≤l≤m} |x^t_∥ − x^{t,(l)}_∥| ≤ α_t (1 + 1/log m)^t C_2 √(n log^12 m) / m ,    (40b)
‖x^t − x^{t,sgn}‖_2 ≤ α_t (1 + 1/log m)^t C_3 √(n log^5 m) / m ,    (40c)
max_{1≤l≤m} ‖x^t − x^{t,sgn} − x^{t,(l)} + x^{t,sgn,(l)}‖_2 ≤ α_t (1 + 1/log m)^t C_4 √(n log^9 m) / m ,    (40d)
c_5 ≤ ‖x^t_⊥‖_2 ≤ ‖x^t‖_2 ≤ C_5 ,    (40e)
‖x^t‖_2 ≤ 4 α_t √(n log m) ,    (40f)
where C1 , · · · , C5 and c5 are some absolute positive constants.
Now we are ready to prove an immediate consequence of the induction hypotheses (40): if (40) hold for
the tth iteration, then αt+1 and βt+1 follow the approximate state evolution (see (20)). This is justified in
the following lemma.
Lemma 2. Suppose m ≥ Cn log11 m for some sufficiently large constant C > 0. For any 0 ≤ t ≤
T0 (cf. (39)), if the tth iterates satisfy the induction hypotheses (40), then with probability at least 1 −
O(me−1.5n ) − O(m−10 ),
α_{t+1} = (1 + 3η(1 − (α_t^2 + β_t^2)) + η ζ_t) α_t ;    (41a)
β_{t+1} = (1 + η(1 − 3(α_t^2 + β_t^2)) + η ρ_t) β_t    (41b)
hold for some |ζ_t| ≪ 1/log m and |ρ_t| ≪ 1/log m.
Proof. See Appendix C.
It remains to inductively show that the hypotheses hold for all 0 ≤ t ≤ T0 . Before proceeding to this
induction step, it is helpful to first develop more understanding about the preceding hypotheses.
1. In words, (40a), (40b), (40c) specify that the leave-one-out sequences xt,(l) and {xt,sgn } are exceedingly
close to the original sequence {xt }. Similarly, the difference between xt − xt,sgn and xt,(l) − xt,sgn,(l) is
extremely small, as asserted in (40d). The hypothesis (40e) says that the norm of the iterates {xt } is
always bounded from above and from below in Phase I. The last one (40f) indicates that the size αt of the
signal component is never too small compared with kxt k2 .
2. Another property worth mentioning is the growth rate (with respect to t) of the quantities appearing
in the induction hypotheses (40). For instance, |x^t_∥ − x^{t,(l)}_∥|, ‖x^t − x^{t,sgn}‖_2 and ‖x^t − x^{t,sgn} − x^{t,(l)} + x^{t,sgn,(l)}‖_2
grow more or less at the same rate as α_t (modulo some (1 + 1/log m)^{T_0} factor). In contrast, ‖x^t − x^{t,(l)}‖_2
shares the same growth rate with β_t (modulo the (1 + 1/log m)^{T_0} factor). See Figure 7 for an illustration.
The difference in the growth rates turns out to be crucial in establishing the advertised result.
3. Last but not least, we emphasize the sizes of the quantities of interest in (40) for t = 1 under the Gaussian
initialization. Ignoring all of the log m terms and recognizing that α_1 ≍ 1/√n and β_1 ≍ 1, one sees that
‖x^1 − x^{1,(l)}‖_2 ≲ 1/√m, |x^1_∥ − x^{1,(l)}_∥| ≲ 1/m, ‖x^1 − x^{1,sgn}‖_2 ≲ 1/√m and ‖x^1 − x^{1,sgn} − x^{1,(l)} + x^{1,sgn,(l)}‖_2 ≲
1/m. See Figure 7 for a caricature of the trends of the above four quantities.
Several consequences of (40) regarding the incoherence between {x^t}, {x^{t,sgn}} and {a_i}, {a_i^{sgn}} are immediate, as summarized in the following lemma.
Figure 7: Illustration of the differences among leave-one-out and original sequences vs. iteration count,
plotted semilogarithmically. The results are shown for n = 1000 with m = 10n, ηt ≡ 0.1, and kx\ k2 = 1.
(a) The four differences increase in Stage 1. From the induction hypotheses (40), our upper bounds on
|x^t_∥ − x^{t,(l)}_∥|, ‖x^t − x^{t,sgn}‖_2 and ‖x^t − x^{t,sgn} − x^{t,(l)} + x^{t,sgn,(l)}‖_2 scale linearly with α_t, whereas the upper
bound on ‖x^t − x^{t,(l)}‖_2 is proportional to β_t. In addition, ‖x^1 − x^{1,(l)}‖_2 ≲ 1/√m, |x^1_∥ − x^{1,(l)}_∥| ≲ 1/m,
‖x^1 − x^{1,sgn}‖_2 ≲ 1/√m and ‖x^1 − x^{1,sgn} − x^{1,(l)} + x^{1,sgn,(l)}‖_2 ≲ 1/m. (b) The four differences converge to
zero geometrically fast in Stage 2, as all the (variants of) leave-one-out sequences and the original sequence
converge to the truth x\ .
Lemma 3. Suppose that m ≥ Cn log6 m for some sufficiently large constant C > 0 and the tth iterates
satisfy the induction hypotheses (40) for t ≤ T0 , then with probability at least 1 − O(me−1.5n ) − O(m−10 ),
max_{1≤l≤m} |a_l^⊤ x^t| ≲ √(log m) ‖x^t‖_2 ;
max_{1≤l≤m} |a_{l,⊥}^⊤ x^t_⊥| ≲ √(log m) ‖x^t_⊥‖_2 ;
max_{1≤l≤m} |a_l^⊤ x^{t,sgn}| ≲ √(log m) ‖x^{t,sgn}‖_2 ;
max_{1≤l≤m} |a_{l,⊥}^⊤ x^{t,sgn}_⊥| ≲ √(log m) ‖x^{t,sgn}_⊥‖_2 ;
max_{1≤l≤m} |a_l^{sgn⊤} x^{t,sgn}| ≲ √(log m) ‖x^{t,sgn}‖_2 .
Proof. These incoherence conditions typically arise from the independence between {xt,(l) } and al . For
instance, the first line follows since
|a_l^⊤ x^t| ≈ |a_l^⊤ x^{t,(l)}| ≲ √(log m) ‖x^{t,(l)}‖_2 ≍ √(log m) ‖x^t‖_2 .
See Appendix M for detailed proofs.
4.5.2
Induction step
We then turn to showing that the induction hypotheses (40) hold throughout Phase I, i.e. for 0 ≤ t ≤ T0 .
The base case can be easily verified because of the identical initial points (38). Now we move on to the
inductive step, i.e. we aim to show that if the hypotheses (40) are valid up to the tth iteration for some
t ≤ T0 , then they continue to hold for the (t + 1)th iteration.
The first lemma concerns the difference between the leave-one-out sequence x^{t+1,(l)} and the true sequence x^{t+1} (see (40a)).
Lemma 4. Suppose m ≥ Cn log5 m for some sufficiently large constant C > 0. If the induction hypotheses
(40) hold true up to the tth iteration for some t ≤ T0 , then with probability at least 1−O(me−1.5n )−O(m−10 ),
max_{1≤l≤m} ‖x^{t+1} − x^{t+1,(l)}‖_2 ≤ β_{t+1} (1 + 1/log m)^{t+1} C_1 √(n log^5 m) / m    (43)
holds as long as η > 0 is a sufficiently small constant and C1 > 0 is sufficiently large.
Proof. See Appendix D.
The next lemma characterizes a finer relation between xt+1 and xt+1,(l) when projected onto the signal
direction (cf. (40b)).
Lemma 5. Suppose m ≥ Cn log6 m for some sufficiently large constant C > 0. If the induction hypotheses
(40) hold true up to the tth iteration for some t ≤ T0 , then with probability at least 1−O(me−1.5n )−O(m−10 ),
max_{1≤l≤m} |x^{t+1}_∥ − x^{t+1,(l)}_∥| ≤ α_{t+1} (1 + 1/log m)^{t+1} C_2 √(n log^12 m) / m    (44)
holds as long as η > 0 is a sufficiently small constant and C_2 ≫ C_4.
Proof. See Appendix E.
Regarding the difference between xt and xt,sgn (see (40c)), we have the following result.
Lemma 6. Suppose m ≥ Cn log5 m for some sufficiently large constant C > 0. If the induction hypotheses
(40) hold true up to the tth iteration for some t ≤ T0 , then with probability at least 1−O(me−1.5n )−O m−10 ,
‖x^{t+1} − x^{t+1,sgn}‖_2 ≤ α_{t+1} (1 + 1/log m)^{t+1} C_3 √(n log^5 m) / m    (45)
holds as long as η > 0 is a sufficiently small constant and C3 is a sufficiently large positive constant.
Proof. See Appendix F.
We are left with the double difference xt+1 − xt+1,sgn − xt+1,(l) + xt+1,sgn,(l) (cf. (40d)), for which one
has the following lemma.
Lemma 7. Suppose m ≥ Cn log8 m for some sufficiently large constant C > 0. If the induction hypotheses
(40) hold true up to the tth iteration for some t ≤ T0 , then with probability at least 1−O(me−1.5n )−O(m−10 ),
max_{1≤l≤m} ‖x^{t+1} − x^{t+1,sgn} − x^{t+1,(l)} + x^{t+1,sgn,(l)}‖_2 ≤ α_{t+1} (1 + 1/log m)^{t+1} C_4 √(n log^9 m) / m    (46)
holds as long as η > 0 is a sufficiently small constant and C4 > 0 is sufficiently large.
Proof. See Appendix G.
Assuming the induction hypotheses (40) hold up to the tth iteration for some t ≤ T0 , we know from
Lemma 2 that the approximate state evolution for both αt and βt (see (20)) holds up to t + 1. As a result,
the last two hypotheses (40e) and (40f) for the (t + 1)th iteration can be easily verified.
4.6
Justification of approximate state evolution for Phase II of Stage 1
Recall from Lemma 1 that Phase II refers to the iterations T0 < t ≤ Tγ (see the definition of T0 in Lemma 1),
for which one has
α_t ≥ c_6 / log^5 m    (47)
as long as the approximate state evolution (20) holds. Here c6 > 0 is the same constant as in Lemma 1.
Similar to Phase I, we invoke an inductive argument to prove that the approximate state evolution (20)
continues to hold for T0 < t ≤ Tγ .
4.6.1
Induction hypotheses
In Phase I, we rely on the leave-one-out sequences and the random-sign sequences {xt,(l) }, {xt,sgn } and
{xt,sgn,(l) } to establish certain “near-independence” between {xt } and {al }, which in turn allows us to
obtain sharp control of the residual terms r (xt ) (cf. (10)) and r1 (xt ) (cf. (11)). As it turns out, once the
size αt of the signal component obeys αt & 1/poly log(m), then {xt,(l) } alone is sufficient for our purpose to
establish the “near-independence” property. More precisely, in Phase II we only need to impose the following
induction hypotheses.
max_{1≤l≤m} ‖x^t − x^{t,(l)}‖_2 ≤ α_t (1 + 1/log m)^t C_6 √(n log^15 m) / m ;    (48a)
c_5 ≤ ‖x^t_⊥‖_2 ≤ ‖x^t‖_2 ≤ C_5 .    (48b)
A direct consequence of (48) is the incoherence between xt and {al }, namely,
max_{1≤l≤m} |a_{l,⊥}^⊤ x^t_⊥| ≲ √(log m) ‖x^t_⊥‖_2 ;    (49a)
max_{1≤l≤m} |a_l^⊤ x^t| ≲ √(log m) ‖x^t‖_2 .    (49b)
To see this, one can use the triangle inequality to show that
|a_{l,⊥}^⊤ x^t_⊥| ≤ |a_{l,⊥}^⊤ x^{t,(l)}_⊥| + |a_{l,⊥}^⊤ (x^t_⊥ − x^{t,(l)}_⊥)|
    (i)≲ √(log m) ‖x^{t,(l)}_⊥‖_2 + √n ‖x^t − x^{t,(l)}‖_2
    ≲ √(log m) ( ‖x^t_⊥‖_2 + ‖x^t − x^{t,(l)}‖_2 ) + √n ‖x^t − x^{t,(l)}‖_2
    (ii)≲ √(log m) + (√(n log^15 m)/m) · √n ≲ √(log m) ,
where (i) follows from the independence between a_l and x^{t,(l)} and the Cauchy-Schwarz inequality, and the
last line (ii) arises from (1 + 1/log m)^t ≲ 1 for t ≤ Tγ ≲ log n and m ≫ n log^{15/2} m. This combined with
the fact that ‖x^t_⊥‖_2 ≥ c_5/2 results in
    max_{1≤l≤m} |a_{l,⊥}^⊤ x^t_⊥| ≲ √(log m) ‖x^t_⊥‖_2 .    (50)
The condition (49b) follows using nearly identical arguments, which are omitted here.
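The incoherence condition (49) is also straightforward to monitor in simulation. The sketch below (illustrative settings) runs randomly initialized GD and prints the ratio max_l |a_{l,⊥}^⊤ x^t_⊥| / (√(log m) ‖x^t_⊥‖_2), which should stay bounded by a modest constant along the whole trajectory.

import numpy as np

np.random.seed(5)
n, m, eta, T = 300, 3000, 0.1, 100
A = np.random.randn(m, n)
y = A[:, 0] ** 2                                   # x^\ = e_1
x = np.random.randn(n) / np.sqrt(n)

for t in range(T + 1):
    ratio = np.max(np.abs(A[:, 1:] @ x[1:])) / (np.sqrt(np.log(m)) * np.linalg.norm(x[1:]))
    if t % 20 == 0:
        print(f"t={t:3d}   max_l |a_perp^T x_perp| / (sqrt(log m) * ||x_perp||) = {ratio:.2f}")
    Ax = A @ x
    x = x - eta * A.T @ ((Ax**2 - y) * Ax) / m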
As in Phase I, we need to justify the approximate state evolution (20) for both αt and βt , given that the
tth iterates satisfy the induction hypotheses (48). This is stated in the following lemma.
Lemma 8. Suppose m ≥ Cn log13 m for some sufficiently large constant C > 0. If the tth iterates satisfy
the induction hypotheses (48) for T0 < t < Tγ , then with probability at least 1 − O(me−1.5n ) − O(m−10 ),
α_{t+1} = (1 + 3η(1 − (α_t^2 + β_t^2)) + η ζ_t) α_t ;    (51a)
β_{t+1} = (1 + η(1 − 3(α_t^2 + β_t^2)) + η ρ_t) β_t ,    (51b)
for some |ζ_t| ≪ 1/log m and |ρ_t| ≪ 1/log m.
Proof. See Appendix H for the proof of (51a). The proof of (51b) follows exactly the same argument as in
proving (41b), and is hence omitted.
4.6.2
Induction step
We proceed to complete the induction argument. Towards this end, one has the following lemma in regard
to the induction on max1≤l≤m kxt+1 − xt+1,(l) k2 (see (48a)).
Lemma 9. Suppose m ≥ Cn log5 m for some sufficiently large constant C > 0, and consider any T0 < t <
Tγ . If the induction hypotheses (40) are valid throughout Phase I and (48) are valid from the T0 th to the tth
iterations, then with probability at least 1 − O(me−1.5n ) − O(m−10 ),
max_{1≤l≤m} ‖x^{t+1} − x^{t+1,(l)}‖_2 ≤ α_{t+1} (1 + 1/log m)^{t+1} C_6 √(n log^13 m) / m
holds as long as η > 0 is sufficiently small and C6 > 0 is sufficiently large.
Proof. See Appendix I.
As in Phase I, since we assume the induction hypotheses (40) (resp. (48)) hold for all iterations up to the
T0 th iteration (resp. between the T0 th and the tth iteration), we know from Lemma 8 that the approximate
state evolution for both αt and βt (see (20)) holds up to t + 1. The last induction hypothesis (48b) for the
th
(t + 1) iteration can be easily verified from Lemma 1.
It remains to check the case when t = T0 + 1. It can be seen from the analysis in Phase I that
max_{1≤l≤m} ‖x^{T_0+1} − x^{T_0+1,(l)}‖_2 ≤ β_{T_0+1} (1 + 1/log m)^{T_0+1} C_1 √(n log^5 m) / m
    ≤ α_{T_0+1} (1 + 1/log m)^{T_0+1} C_6 √(n log^15 m) / m ,
as long as the constant C_6 ≫ 1, where the second line holds since β_{T_0+1} ≤ C_5 and α_{T_0+1} ≥ c_6 / log^5 m.
4.7
Analysis for Stage 2
Combining the analyses in Phase I and Phase II, we finish the proof of Theorem 2 for Stage 1, i.e. t ≤ Tγ .
In addition to dist(x^{Tγ}, x^\) ≤ γ, we can also see from (49b) that
    max_{1≤i≤m} |a_i^⊤ x^{Tγ}| ≲ √(log m) ,
which in turn implies that
    max_{1≤i≤m} |a_i^⊤ (x^{Tγ} − x^\)| ≲ √(log m) .
Armed with these properties, one can apply the arguments in [MWCC17, Section 6] to prove that for
t ≥ Tγ + 1,
dist(x^t, x^\) ≤ (1 − η/2)^{t−Tγ} dist(x^{Tγ}, x^\) ≤ (1 − η/2)^{t−Tγ} · γ .    (52)
Notably, the theorem therein works under the stepsize ηt ≡ η ≍ c/log n when m ≍ n log n. Nevertheless, as
remarked by the authors, when the sample complexity exceeds m ≳ n log^3 m, a constant stepsize is allowed.
We are left with proving (15) for Stage 2. Note that we have already shown that the ratio αt /βt increases
exponentially fast in Stage 1. Therefore,
α_{T_1} / β_{T_1} ≥ (1/√(2n log n)) (1 + c_{10} η^2)^{T_1} ,
and, by the definition of T1 (see (26)) and Lemma 1, one has α_{T_1} ≍ β_{T_1} ≍ 1 and hence
    α_{T_1} / β_{T_1} ≍ 1 .    (53)
When it comes to t > Tγ , in view of (52), one has
α_t / β_t ≥ (1 − dist(x^t, x^\)) / dist(x^t, x^\) ≥ (1 − γ) / ( (1 − η/2)^{t−Tγ} · γ )
    ≥ ((1 − γ)/γ) (1 + η/2)^{t−Tγ}
    (i)≳ (α_{T_1}/β_{T_1}) (1 + η/2)^{t−Tγ}
    ≳ (1/√(n log n)) (1 + c_{10} η^2)^{T_1} (1 + η/2)^{t−Tγ}
    (ii)≳ (1/√(n log n)) (1 + c_{10} η^2)^{Tγ} (1 + η/2)^{t−Tγ}
    ≳ (1/√(n log n)) (1 + c_{10} η^2)^t ,
where (i) arises from (53) and the fact that γ is a constant, (ii) follows since Tγ − T1 ≍ 1 according to Lemma
1, and the last line holds as long as c10 > 0 and η are sufficiently small. This concludes the proof regarding
the lower bound on αt /βt .
5
Discussions
The current paper justifies the fast global convergence of gradient descent with random initialization for phase
retrieval. Specifically, we demonstrate that GD with random initialization takes only O(log n + log(1/ε))
iterations to achieve a relative ε-accuracy in terms of the estimation error. It is likely that such fast global
convergence properties also arise in other nonconvex statistical estimation problems. The technical tools
developed herein may also prove useful for other settings. We conclude our paper with a few directions
worthy of future investigation.
• Sample complexity and phase transition. We have proved in Theorem 2 that GD with random initialization
enjoys fast convergence, with the proviso that m ≳ n log^13 m. It is possible to improve the sample
complexity via more sophisticated arguments. In addition, it would be interesting to examine the phase
transition phenomenon of GD with random initialization.
• Other nonconvex statistical estimation problems. We use the phase retrieval problem to showcase the
efficiency of GD with random initialization. It is certainly interesting to investigate whether this fast global
convergence carries over to other nonconvex statistical estimation problems including low-rank matrix and
tensor recovery [KMO10, SL16, CW15, TBS+ 16, ZWL15, CL17, HZC18], blind deconvolution [LLSW16,
HH17,LLB17] and neural networks [SJL17,LMZ17,ZSJ+ 17,FCL18]. The leave-one-out sequences and the
“near-independence” property introduced / identified in this paper might be useful in proving efficiency of
randomly initialized GD for the aforementioned problems.
• Noisy setting and other activation functions. Throughout this paper, our focus is on inverting noiseless
quadratic systems. Extensions to the noisy case is definitely worth investigating. Moving beyond quadratic
samples, one may also study other activation functions, including but not limited to Rectified Linear
Units (ReLU), polynomial functions and sigmoid functions. Such investigations might shed light on the
effectiveness of GD with random initialization for training neural networks.
• Other iterative optimization methods. Apart from gradient descent, other iterative procedures have been
applied to solve the phase retrieval problem. Partial examples include alternating minimization, Kaczmarz algorithm, and truncated gradient descent (Truncated Wirtinger flow). In conjunction with random
initialization, whether the iterative algorithms mentioned above enjoy fast global convergence is an interesting open problem. For example, it has been shown that truncated WF together with truncated spectral
initialization achieves optimal sample complexity (i.e. m ≍ n) and computational complexity simultaneously [CC17]. Does truncated Wirtinger flow still enjoy optimal sample complexity when initialized
randomly?
• Applications of leave-one-out tricks. In this paper, we heavily deploy the leave-one-out trick to demonstrate
the “near-independence” between the iterates xt and the sampling vectors {ai }. The basic idea is to
construct an auxiliary sequence that is (i) independent w.r.t. certain components of the design vectors,
and (ii) extremely close to the original sequence. These two properties allow us to propagate certain
independence properties to xt . As mentioned in Section 3, the leave-one-out trick has served as a very
powerful hammer for decoupling the dependency between random vectors in several high-dimensional
estimation problems. We expect this powerful trick to be useful in broader settings.
Acknowledgements
Y. Chen is supported in part by a Princeton SEAS innovation award. The work of Y. Chi is supported in
part by AFOSR under the grant FA9550-15-1-0205, by ONR under the grant N00014-18-1-2142, and by NSF
under the grants CAREER ECCS-1818571 and CCF-1806154.
References
[AAZB+ 16] N. Agarwal, Z. Allen-Zhu, B. Bullins, E. Hazan, and T. Ma. Finding approximate local minima
for nonconvex optimization in linear time. arXiv preprint arXiv:1611.01146, 2016.
[AFWZ17] E. Abbe, J. Fan, K. Wang, and Y. Zhong. Entrywise eigenvector analysis of random matrices
with low expected rank. arXiv preprint arXiv:1709.09565, 2017.
[AZ17]
Z. Allen-Zhu. Natasha 2: Faster non-convex optimization than SGD. arXiv preprint arXiv:1708.08694, 2017.
[BCMN14] A. S. Bandeira, J. Cahill, D. G. Mixon, and A. A. Nelson. Saving phase: Injectivity and stability
for phase retrieval. Applied and Computational Harmonic Analysis, 37(1):106–125, 2014.
[BEB17]
T. Bendory, Y. C. Eldar, and N. Boumal. Non-convex phase retrieval from STFT measurements.
IEEE Transactions on Information Theory, 2017.
[BWY14]
S. Balakrishnan, M. J. Wainwright, and B. Yu. Statistical guarantees for the EM algorithm:
From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[CC17]
Y. Chen and E. J. Candès. Solving random quadratic systems of equations is nearly as easy as
solving linear systems. Comm. Pure Appl. Math., 70(5):822–883, 2017.
[CCG15]
Y. Chen, Y. Chi, and A. J. Goldsmith. Exact and stable covariance estimation from quadratic
sampling via convex programming. IEEE Transactions on Information Theory, 61(7):4034–
4059, 2015.
[CESV13]
E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199–225, 2013.
[CFL15]
P. Chen, A. Fannjiang, and G.-R. Liu. Phase retrieval with one or two diffraction patterns by
alternating projections with the null initialization. Journal of Fourier Analysis and Applications,
pages 1–40, 2015.
[CFMW17] Y. Chen, J. Fan, C. Ma, and K. Wang. Spectral method and regularized MLE are both optimal
for top-k ranking. arXiv preprint arXiv:1707.09971, 2017.
[CL14]
E. J. Candès and X. Li. Solving quadratic equations via PhaseLift when there are about as
many equations as unknowns. Foundations of Computational Mathematics, 14(5):1017–1026,
2014.
[CL16]
Y. Chi and Y. M. Lu. Kaczmarz method for solving quadratic equations. IEEE Signal Processing
Letters, 23(9):1183–1187, 2016.
[CL17]
J. Chen and X. Li. Memory-efficient kernel pca via partial matrix sampling and nonconvex
optimization: a model-free analysis of local minima. arXiv preprint arXiv:1711.01742, 2017.
[CLM+ 16]
T. T. Cai, X. Li, Z. Ma, et al. Optimal rates of convergence for noisy sparse phase retrieval via
thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.
[CLS15]
E. J. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and
algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, April 2015.
[CLW17]
J.-F. Cai, H. Liu, and Y. Wang. Fast rank one alternating minimization algorithm for phase
retrieval. arXiv preprint arXiv:1708.08751, 2017.
[CSV13]
E. J. Candès, T. Strohmer, and V. Voroninski. Phaselift: Exact and stable signal recovery
from magnitude measurements via convex programming. Communications on Pure and Applied
Mathematics, 66(8):1017–1026, 2013.
[CW15]
Y. Chen and M. J. Wainwright. Fast low-rank estimation by projected gradient descent: General
statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[CWZG17] J. Chen, L. Wang, X. Zhang, and Q. Gu. Robust wirtinger flow for phase retrieval with arbitrary
corruption. arXiv preprint arXiv:1704.06256, 2017.
[CYC14]
Y. Chen, X. Yi, and C. Caramanis. A convex formulation for mixed regression with two
components: Minimax optimal rates. In Conference on Learning Theory, pages 560–604, 2014.
[CZ15]
T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics,
43(1):102–138, 2015.
[DH14]
L. Demanet and P. Hand. Stable optimizationless recovery from phaseless linear measurements.
Journal of Fourier Analysis and Applications, 20(1):199–221, 2014.
[DJL+ 17]
S. S. Du, C. Jin, J. D. Lee, M. I. Jordan, A. Singh, and B. Poczos. Gradient descent can
take exponential time to escape saddle points. In Advances in Neural Information Processing
Systems, pages 1067–1077, 2017.
[DR17]
J. C. Duchi and F. Ruan. Solving (most) of a set of quadratic equalities: Composite optimization
for robust phase retrieval. arXiv preprint arXiv:1705.02356, 2017.
[EK15]
N. El Karoui. On the impact of predictor geometry on the performance on high-dimensional
ridge-regularized generalized robust regression estimators. Probability Theory and Related
Fields, pages 1–81, 2015.
[EKBB+ 13] N. El Karoui, D. Bean, P. J. Bickel, C. Lim, and B. Yu. On robust regression with highdimensional predictors. Proceedings of the National Academy of Sciences, 110(36):14557–14562,
2013.
[FCL18]
H. Fu, Y. Chi, and Y. Liang. Local geometry of one-hidden-layer neural networks for logistic
regression. arXiv preprint arXiv:1802.06463, 2018.
[GHJY15]
R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points online stochastic gradient
for tensor decomposition. In Conference on Learning Theory, pages 797–842, 2015.
[GX16]
B. Gao and Z. Xu. Phase retrieval using Gauss-Newton method. arXiv preprint arXiv:1606.08135, 2016.
[HH17]
W. Huang and P. Hand. Blind deconvolution by a steepest descent algorithm on a quotient
manifold. arXiv preprint arXiv:1710.03309, 2017.
[HZC18]
B. Hao, A. Zhang, and G. Cheng. Sparse and low-rank tensor estimation via cubic sketchings.
arXiv preprint arXiv:1801.09326, 2018.
[JGN+ 17]
C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. How to escape saddle points
efficiently. arXiv preprint arXiv:1703.00887, 2017.
[JM15]
A. Javanmard and A. Montanari. De-biasing the lasso: Optimal sample size for Gaussian
designs. arXiv preprint arXiv:1508.02757, 2015.
[JNJ17]
C. Jin, P. Netrapalli, and M. I. Jordan. Accelerated gradient descent escapes saddle points
faster than gradient descent. arXiv preprint arXiv:1711.10456, 2017.
[KMO10]
R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE
Transactions on Information Theory, 56(6):2980 –2998, June 2010.
[KRT17]
R. Kueng, H. Rauhut, and U. Terstiege. Low rank matrix recovery from rank one measurements.
Applied and Computational Harmonic Analysis, 42(1):88–116, 2017.
[Lan93]
S. Lang. Real and functional analysis. Springer-Verlag, New York,, 10:11–13, 1993.
[LGL15]
G. Li, Y. Gu, and Y. M. Lu. Phase retrieval using iterative projections: Dynamics in the
large systems limit. In Allerton Conference on Communication, Control, and Computing, pages
1114–1118. IEEE, 2015.
[LL17]
Y. M. Lu and G. Li. Phase transitions of spectral initialization for high-dimensional nonconvex
estimation. arXiv preprint arXiv:1702.06435, 2017.
[LLB17]
Y. Li, K. Lee, and Y. Bresler. Blind gain and phase calibration for low-dimensional or sparse
signal sensing via power iteration. In Sampling Theory and Applications (SampTA), 2017 International Conference on, pages 119–123. IEEE, 2017.
[LLSW16]
X. Li, S. Ling, T. Strohmer, and K. Wei. Rapid, robust, and reliable blind deconvolution via
nonconvex optimization. CoRR, abs/1606.04933, 2016.
[LMCC18]
Y. Li, C. Ma, Y. Chen, and Y. Chi. Nonconvex matrix factorization from rank-one measurements. arXiv preprint arXiv:1802.06286, 2018.
[LMZ17]
Y. Li, T. Ma, and H. Zhang. Algorithmic regularization in over-parameterized matrix recovery.
arXiv preprint arXiv:1712.09203, 2017.
[LPP+ 17]
J. D. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M. I. Jordan, and B. Recht. First-order
methods almost always avoid saddle points. arXiv preprint arXiv:1710.07406, 2017.
[LSJR16]
J. D. Lee, M. Simchowitz, M. I. Jordan, and B. Recht. Gradient descent converges to minimizers.
arXiv preprint arXiv:1602.04915, 2016.
[MM17]
M. Mondelli and A. Montanari. Fundamental limits of weak recovery with applications to phase
retrieval. arXiv preprint arXiv:1708.05932, 2017.
[MSK17]
R. Murray, B. Swenson, and S. Kar. Revisiting normalized gradient descent: Evasion of saddle
points. arXiv preprint arXiv:1711.05224, 2017.
[MWCC17] C. Ma, K. Wang, Y. Chi, and Y. Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion and blind
deconvolution. arXiv preprint arXiv:1711.10467, 2017.
[MXM18]
J. Ma, J. Xu, and A. Maleki. Optimization-based amp for phase retrieval: The impact of
initialization and `_2-regularization. arXiv preprint arXiv:1801.01170, 2018.
[NJS13]
P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. Advances
in Neural Information Processing Systems (NIPS), 2013.
[QZEW17] Q. Qing, Y. Zhang, Y. Eldar, and J. Wright. Convolutional phase retrieval via gradient descent.
Neural Information Processing Systems, 2017.
[SCC17]
P. Sur, Y. Chen, and E. J. Candès. The likelihood ratio test in high-dimensional logistic
regression is asymptotically a rescaled chi-square. arXiv preprint arXiv:1706.01191, 2017.
24
[SEC+ 15]
Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev. Phase retrieval
with application to optical imaging: a contemporary overview. IEEE signal processing magazine,
32(3):87–109, 2015.
[SJL17]
M. Soltanolkotabi, A. Javanmard, and J. D. Lee. Theoretical insights into the optimization
landscape of over-parameterized shallow neural networks. arXiv preprint arXiv:1707.04926,
2017.
[SL16]
R. Sun and Z.-Q. Luo. Guaranteed matrix completion via non-convex factorization. IEEE
Transactions on Information Theory, 62(11):6535–6579, 2016.
[Sol14]
M. Soltanolkotabi. Algorithms and Theory for Clustering and Nonconvex Quadratic Programming. PhD thesis, Stanford University, 2014.
[Sol17]
M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample
complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.
[SQW16]
J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. In Information Theory
(ISIT), 2016 IEEE International Symposium on, pages 2379–2383. IEEE, 2016.
[SS12]
W. Schudy and M. Sviridenko. Concentration and moment inequalities for polynomials of independent random variables. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium
on Discrete Algorithms, pages 437–446. ACM, New York, 2012.
[TBS+ 16]
S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear
matrix equations via procrustes flow. In Proceedings of the 33rd International Conference on
International Conference on Machine Learning-Volume 48, pages 964–973. JMLR. org, 2016.
[TV17]
Y. S. Tan and R. Vershynin. Phase retrieval via randomized kaczmarz: Theoretical guarantees.
arXiv preprint arXiv:1706.09993, 2017.
[Ver12]
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed
Sensing, Theory and Applications, pages 210 – 268, 2012.
[Wei15]
K. Wei. Solving systems of phaseless equations via Kaczmarz methods: A proof of concept
study. Inverse Problems, 31(12):125008, 2015.
[WGE17]
G. Wang, G. B. Giannakis, and Y. C. Eldar. Solving systems of random quadratic equations
via truncated amplitude flow. IEEE Transactions on Information Theory, 2017.
[WGSC17] G. Wang, G. B. Giannakis, Y. Saad, and J. Chen. Solving almost all systems of random
quadratic equations. arXiv preprint arXiv:1705.10407, 2017.
[YYF+ 17]
Z. Yang, L. F. Yang, E. X. Fang, T. Zhao, Z. Wang, and M. Neykov. Misspecified nonconvex
statistical optimization for phase retrieval. arXiv preprint arXiv:1712.06245, 2017.
[ZB17]
Y. Zhong and N. Boumal. Near-optimal bounds for phase synchronization. arXiv preprint
arXiv:1703.06605, 2017.
[ZCL16]
H. Zhang, Y. Chi, and Y. Liang. Provable non-convex phase retrieval with outliers: Median
truncated Wirtinger flow. In International conference on machine learning, pages 1022–1031,
2016.
[Zha17]
T. Zhang. Phase retrieval using alternating minimization in a batch setting. arXiv preprint
arXiv:1706.08167, 2017.
[ZSJ+ 17]
K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon. Recovery guarantees for onehidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017.
[ZWGC17] L. Zhang, G. Wang, G. B. Giannakis, and J. Chen. Compressive phase retrieval via reweighted
amplitude flow. arXiv preprint arXiv:1712.02426, 2017.
25
[ZWL15]
T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix
estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
[ZZLC17]
H. Zhang, Y. Zhou, Y. Liang, and Y. Chi. A nonconvex approach for phase retrieval: Reshaped
wirtinger flow and incremental algorithms. Journal of Machine Learning Research, 2017.
26
A Preliminaries
We first gather two standard concentration inequalities used throughout the appendix. The first lemma
is the multiplicative form of the Chernoff bound, while the second lemma is a user-friendly version of the
Bernstein inequality.
Lemma 10. Suppose $X_1, \cdots, X_m$ are independent random variables taking values in $\{0,1\}$. Denote $X = \sum_{i=1}^m X_i$ and $\mu = \mathbb{E}[X]$. Then for any $\delta \ge 1$, one has
$$\mathbb{P}\big(X \ge (1+\delta)\mu\big) \le e^{-\delta\mu/3}.$$
Lemma 11. Consider $m$ independent random variables $z_l$ ($1 \le l \le m$), each satisfying $|z_l| \le B$. For any $a \ge 2$, one has
$$\Big|\sum_{l=1}^m z_l - \sum_{l=1}^m \mathbb{E}[z_l]\Big| \le \sqrt{2a\log m \sum_{l=1}^m \mathbb{E}\big[z_l^2\big]} + \frac{2a}{3}B\log m$$
with probability at least $1 - 2m^{-a}$.
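As a quick sanity check (not part of the original argument), the following minimal Python sketch draws bounded, zero-mean variables and verifies that the deviation bound of Lemma 11 is essentially never violated; the uniform distribution, the choice a = 2, and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, a, B = 10_000, 2, 1.0      # number of variables, tail exponent, uniform bound |z_l| <= B
trials, violations = 200, 0

for _ in range(trials):
    z = rng.uniform(-B, B, size=m)          # independent bounded variables with E[z_l] = 0
    deviation = abs(z.sum() - 0.0)
    second_moment_sum = m * B**2 / 3        # sum_l E[z_l^2] for Uniform(-B, B)
    bound = np.sqrt(2 * a * np.log(m) * second_moment_sum) + (2 * a / 3) * B * np.log(m)
    violations += int(deviation > bound)

print(f"empirical violations: {violations}/{trials}; Lemma 11 allows prob. about {2 * m**(-a):.1e}")
```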
Next, we list a few simple facts. The gradient and the Hessian of the nonconvex loss function (2) are given respectively by
$$\nabla f(x) = \frac{1}{m}\sum_{i=1}^m \Big[\big(a_i^\top x\big)^2 - \big(a_i^\top x^\natural\big)^2\Big]\big(a_i^\top x\big)\,a_i; \tag{54}$$
$$\nabla^2 f(x) = \frac{1}{m}\sum_{i=1}^m \Big[3\big(a_i^\top x\big)^2 - \big(a_i^\top x^\natural\big)^2\Big]a_i a_i^\top. \tag{55}$$
In addition, recall that $x^\natural$ is assumed to be $x^\natural = e_1$ throughout the proof. For each $1 \le i \le m$, we have the decomposition $a_i = \begin{bmatrix} a_{i,1} \\ a_{i,\perp} \end{bmatrix}$, where $a_{i,\perp}$ contains the 2nd through the $n$th entries of $a_i$. The standard concentration inequality reveals that
$$\max_{1\le i\le m} |a_{i,1}| = \max_{1\le i\le m} \big|a_i^\top x^\natural\big| \le 5\sqrt{\log m} \tag{56}$$
with probability $1 - O(m^{-10})$. Additionally, apply the standard concentration inequality to see that
$$\max_{1\le i\le m} \|a_i\|_2 \le \sqrt{6n} \tag{57}$$
with probability $1 - O(me^{-1.5n})$.
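For readers who want to check (54) and (55) numerically, here is a hedged Python sketch. It assumes the loss takes the standard least-squares form $f(x) = \frac{1}{4m}\sum_i[(a_i^\top x)^2 - (a_i^\top x^\natural)^2]^2$, which is consistent with the gradient and Hessian above but whose precise definition (2) appears earlier in the paper; the dimensions and random seed are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 200
A = rng.standard_normal((m, n))          # rows are the sampling vectors a_i
x_nat = np.zeros(n); x_nat[0] = 1.0      # x^natural = e_1, as assumed in the proof
y = (A @ x_nat) ** 2                     # (a_i^T x^natural)^2

def loss(x):                             # assumed quadratic loss consistent with (54)-(55)
    return np.sum(((A @ x) ** 2 - y) ** 2) / (4 * m)

def grad(x):                             # formula (54)
    r = A @ x
    return A.T @ ((r ** 2 - y) * r) / m

def hess(x):                             # formula (55)
    r = A @ x
    return (A.T * (3 * r ** 2 - y)) @ A / m

x, eps = rng.standard_normal(n), 1e-6
num_grad = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps) for e in np.eye(n)])
num_hess = np.array([(grad(x + eps * e) - grad(x - eps * e)) / (2 * eps) for e in np.eye(n)])
print("max gradient mismatch:", np.max(np.abs(num_grad - grad(x))))
print("max Hessian mismatch :", np.max(np.abs(num_hess - hess(x))))
```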
The next lemma provides concentration bounds regarding polynomial functions of {ai }.
Lemma 12. Consider any $\epsilon > 3/n$. Suppose that $a_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$ for $1 \le i \le m$. Let
$$S := \Big\{ z \in \mathbb{R}^{n-1} : \max_{1\le i\le m} \big|a_{i,\perp}^\top z\big| \le \beta \|z\|_2 \Big\},$$
where $\beta$ is any value obeying $\beta \ge c_1\sqrt{\log m}$ for some sufficiently large constant $c_1 > 0$. Then with probability exceeding $1 - O(m^{-10})$, one has
1. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^3\, a_{i,\perp}^\top z\big| \le \epsilon\|z\|_2$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta n^{1.5}\log m\big\}$;
2. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}\big(a_{i,\perp}^\top z\big)^3\big| \le \epsilon\|z\|_2^3$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta^3 n\log m\big\}$;
3. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^2\big(a_{i,\perp}^\top z\big)^2 - \|z\|_2^2\big| \le \epsilon\|z\|_2^2$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta^2 n\log m\big\}$;
4. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^6\big(a_{i,\perp}^\top z\big)^2 - 15\|z\|_2^2\big| \le \epsilon\|z\|_2^2$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta^2 n\log m\big\}$;
5. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^2\big(a_{i,\perp}^\top z\big)^6 - 15\|z\|_2^6\big| \le \epsilon\|z\|_2^6$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta^6 n\log m\big\}$;
6. $\big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^2\big(a_{i,\perp}^\top z\big)^4 - 3\|z\|_2^4\big| \le \epsilon\|z\|_2^4$ for all $z\in S$, provided that $m \ge c_0 \max\big\{\frac{1}{\epsilon^2}n\log n,\ \frac{1}{\epsilon}\beta^4 n\log m\big\}$.
Here, $c_0 > 0$ is some sufficiently large constant.
Proof. See Appendix J.
The next lemmas provide the (uniform) matrix concentration inequalities about $a_i a_i^\top$.

Lemma 13 ([Ver12, Corollary 5.35]). Suppose that $a_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$ for $1 \le i \le m$. With probability at least $1 - ce^{-\tilde{c}m}$, one has
$$\Big\|\frac{1}{m}\sum_{i=1}^m a_i a_i^\top\Big\| \le 2,$$
as long as $m \ge c_0 n$ for some sufficiently large constant $c_0 > 0$. Here, $c, \tilde{c} > 0$ are some absolute constants.

Lemma 14. Fix some $x^\natural \in \mathbb{R}^n$. Suppose that $a_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$, $1 \le i \le m$. With probability at least $1 - O(m^{-10})$, one has
$$\Big\|\frac{1}{m}\sum_{i=1}^m \big(a_i^\top x^\natural\big)^2 a_i a_i^\top - \big\|x^\natural\big\|_2^2 I_n - 2x^\natural x^{\natural\top}\Big\| \le c_0\sqrt{\frac{n\log^3 m}{m}}\,\big\|x^\natural\big\|_2^2, \tag{58}$$
provided that $m > c_1 n\log^3 m$. Here, $c_0, c_1$ are some universal positive constants. Furthermore, fix any $c_2 > 1$ and suppose that $m > c_1 n\log^3 m$ for some sufficiently large constant $c_1 > 0$. Then with probability exceeding $1 - O(m^{-10})$,
$$\Big\|\frac{1}{m}\sum_{i=1}^m \big(a_i^\top z\big)^2 a_i a_i^\top - \|z\|_2^2 I_n - 2zz^\top\Big\| \le c_0\sqrt{\frac{n\log^3 m}{m}}\,\|z\|_2^2 \tag{59}$$
holds simultaneously for all $z \in \mathbb{R}^n$ obeying $\max_{1\le i\le m}\big|a_i^\top z\big| \le c_2\sqrt{\log m}\,\|z\|_2$. On this event, we have
$$\Big\|\frac{1}{m}\sum_{i=1}^m |a_{i,1}|^2 a_{i,\perp}a_{i,\perp}^\top\Big\| \le \Big\|\frac{1}{m}\sum_{i=1}^m |a_{i,1}|^2 a_i a_i^\top\Big\| \le 4. \tag{60}$$
Proof. See Appendix K.
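The population identity behind Lemma 14 is $\mathbb{E}[(a^\top x)^2 a a^\top] = \|x\|_2^2 I_n + 2xx^\top$ for $a \sim \mathcal{N}(0, I_n)$. The following Monte Carlo sketch is our own illustration (the values of n and m are arbitrary); it compares the empirical average against this limit and against the rate claimed in (58).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 200_000
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))

r = A @ x
empirical = (A.T * r ** 2) @ A / m                          # (1/m) sum_i (a_i^T x)^2 a_i a_i^T
population = np.dot(x, x) * np.eye(n) + 2 * np.outer(x, x)  # ||x||^2 I_n + 2 x x^T
print("spectral-norm deviation :", np.linalg.norm(empirical - population, 2))
print("rate in (58), up to c_0 :", np.sqrt(n * np.log(m) ** 3 / m) * np.dot(x, x))
```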
The following lemma provides the concentration results regarding the Hessian matrix $\nabla^2 f(x)$.

Lemma 15. Fix any constant $c_0 > 1$. Suppose that $m > c_1 n\log^3 m$ for some sufficiently large constant $c_1 > 0$. Then with probability exceeding $1 - O(m^{-10})$,
$$\Big\| I_n - \eta\nabla^2 f(z) - \Big\{\big(1 - 3\eta\|z\|_2^2 + \eta\big)I_n + 2\eta x^\natural x^{\natural\top} - 6\eta zz^\top\Big\}\Big\| \lesssim \eta\sqrt{\frac{n\log^3 m}{m}}\,\max\big\{\|z\|_2^2, 1\big\}$$
and
$$\big\|\nabla^2 f(z)\big\| \le 10\|z\|_2^2 + 4$$
hold simultaneously for all $z$ obeying $\max_{1\le i\le m}\big|a_i^\top z\big| \le c_0\sqrt{\log m}\,\|z\|_2$, provided that $0 < \eta < \frac{c_2}{\max\{\|z\|_2^2, 1\}}$ for some sufficiently small constant $c_2 > 0$.
Proof. See Appendix L.
Finally, we note that there are a few immediate consequences of the induction hypotheses (40), which we summarize below. These conditions are useful in the subsequent analysis. Note that Lemma 3 is incorporated here.

Lemma 16. Suppose that $m \ge Cn\log^6 m$ for some sufficiently large constant $C > 0$. Then under the hypotheses (40) for $t \lesssim \log n$, with probability at least $1 - O(me^{-1.5n}) - O(m^{-10})$ one has, for all $1 \le l \le m$,
$$c_5/2 \le \big\|x_\perp^{t,(l)}\big\|_2 \le \big\|x^{t,(l)}\big\|_2 \le 2C_5; \tag{61a}$$
$$c_5/2 \le \big\|x_\perp^{t,\mathrm{sgn}}\big\|_2 \le \big\|x^{t,\mathrm{sgn}}\big\|_2 \le 2C_5; \tag{61b}$$
$$c_5/2 \le \big\|x_\perp^{t,\mathrm{sgn},(l)}\big\|_2 \le \big\|x^{t,\mathrm{sgn},(l)}\big\|_2 \le 2C_5; \tag{61c}$$
$$\max_{1\le l\le m}\big|a_l^\top x^t\big| \lesssim \sqrt{\log m}\,\big\|x^t\big\|_2; \tag{62a}$$
$$\max_{1\le l\le m}\big|a_{l,\perp}^\top x_\perp^t\big| \lesssim \sqrt{\log m}\,\big\|x_\perp^t\big\|_2; \tag{62b}$$
$$\max_{1\le l\le m}\big|a_l^\top x^{t,\mathrm{sgn}}\big| \lesssim \sqrt{\log m}\,\big\|x^{t,\mathrm{sgn}}\big\|_2; \tag{62c}$$
$$\max_{1\le l\le m}\big|a_{l,\perp}^\top x_\perp^{t,\mathrm{sgn}}\big| \lesssim \sqrt{\log m}\,\big\|x_\perp^{t,\mathrm{sgn}}\big\|_2; \tag{62d}$$
$$\max_{1\le l\le m}\big|a_l^{\mathrm{sgn}\top} x^{t,\mathrm{sgn}}\big| \lesssim \sqrt{\log m}\,\big\|x^{t,\mathrm{sgn}}\big\|_2; \tag{62e}$$
$$\max_{1\le l\le m}\big\|x^t - x^{t,(l)}\big\|_2 \le \frac{1}{\log m}; \tag{63a}$$
$$\big\|x^t - x^{t,\mathrm{sgn}}\big\|_2 \le \frac{1}{\log m}; \tag{63b}$$
$$\max_{1\le l\le m}\big|x_\parallel^{t,(l)}\big| \le 2\alpha_t. \tag{63c}$$
Proof. See Appendix M.
B Proof of Lemma 1
We focus on the case when
$$\frac{1}{\sqrt{n\log n}} \le \alpha_0 \le \sqrt{\frac{\log n}{n}} \qquad\text{and}\qquad 1 - \frac{1}{\log n} \le \beta_0 \le 1 + \frac{1}{\log n}.$$
The other cases can be proved using very similar arguments as below, and hence omitted.
Let $\eta > 0$ and $c_4 > 0$ be some sufficiently small constants independent of $n$. In the sequel, we divide Stage 1 (iterations up to $T_\gamma$) into several substages. See Figure 8 for an illustration.
• Stage 1.1: consider the period when $\alpha_t$ is sufficiently small, which consists of all iterations $0 \le t \le T_1$ with $T_1$ given in (26). We claim that, throughout this substage,
$$\alpha_t > \frac{1}{2\sqrt{n\log n}}, \tag{64a}$$
$$0.5 < \beta_t < \sqrt{1.5}. \tag{64b}$$
If this claim holds, then we would have $\alpha_t^2 + \beta_t^2 < c_4^2 + 1.5 < 2$ as long as $c_4$ is small enough. This immediately reveals that $1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) \ge 1 - 6\eta$, which further gives
$$\beta_{t+1} \ge \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + \eta\rho_t\big]\beta_t \ge \Big(1 - 6\eta - \frac{c_3\eta}{\log n}\Big)\beta_t \ge (1 - 7\eta)\beta_t. \tag{65}$$
In what follows, we further divide this stage into multiple sub-phases.
Figure 8: Illustration of the substages for the proof of Lemma 1.
– Stage 1.1.1: consider the iterations $0 \le t \le T_{1,1}$ with
$$T_{1,1} = \min\big\{t \mid \beta_{t+1} \le \sqrt{1/3 + \eta}\big\}. \tag{66}$$
Fact 1. For any sufficiently small $\eta > 0$, one has
$$\beta_{t+1} \le (1 - 2\eta^2)\beta_t \quad\text{and}\quad \alpha_{t+1} \le (1 + 4\eta)\alpha_t, \qquad 0 \le t \le T_{1,1}; \tag{67}$$
$$\alpha_{t+1} \ge (1 + 2\eta^3)\alpha_t, \quad 1 \le t \le T_{1,1}; \qquad \alpha_1 \ge \alpha_0/2; \tag{68}$$
$$\beta_{T_{1,1}+1} \ge \frac{1 - 7\eta}{\sqrt{3}}; \qquad T_{1,1} \lesssim \frac{1}{\eta^2}. \tag{69}$$
Moreover, $\alpha_{T_{1,1}} \ll c_4$ and hence $T_{1,1} < T_1$.
From Fact 1, we see that in this substage, $\alpha_t$ keeps increasing (at least for $t \ge 1$) with
$$c_4 > \alpha_t \ge \frac{\alpha_0}{2} \ge \frac{1}{2\sqrt{n\log n}}, \qquad 0 \le t \le T_{1,1},$$
and $\beta_t$ is strictly decreasing with
$$\sqrt{1.5} > \beta_0 \ge \beta_t \ge \beta_{T_{1,1}+1} \ge \frac{1 - 7\eta}{\sqrt{3}}, \qquad 0 \le t \le T_{1,1},$$
which justifies (64). In addition, combining (67) with (68), we arrive at the growth rate of $\alpha_t/\beta_t$ as
$$\frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge \frac{1 + 2\eta^3}{1 - 2\eta^2} = 1 + O(\eta^2).$$
These demonstrate (24) for this substage.
– Stage 1.1.2: this substage contains all iterations obeying $T_{1,1} < t \le T_1$. We claim the following result.
Fact 2. Suppose that $\eta > 0$ is sufficiently small. Then for any $T_{1,1} < t \le T_1$,
$$\beta_t \in \Big[\frac{(1 - 7\eta)^2}{\sqrt{3}}, \frac{1 + 30\eta}{\sqrt{3}}\Big]; \tag{70}$$
$$\beta_{t+1} \le (1 + 30\eta^2)\beta_t. \tag{71}$$
Furthermore, since
$$\alpha_t^2 + \beta_t^2 \le c_4^2 + \frac{(1 + 30\eta)^2}{3} < \frac{1}{2},$$
we have, for sufficiently small $c_3$, that
$$\alpha_{t+1} \ge \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) - \eta|\zeta_t|\big]\alpha_t \ge \Big(1 + 1.5\eta - \frac{c_3\eta}{\log n}\Big)\alpha_t \ge (1 + 1.4\eta)\alpha_t, \tag{72}$$
and hence $\alpha_t$ keeps increasing. This means $\alpha_t \ge \alpha_1 \ge \frac{1}{2\sqrt{n\log n}}$, which justifies the claim (64) together with (70) for this substage. As a consequence,
$$T_1 - T_{1,1} \lesssim \frac{\log\frac{c_4}{\alpha_0}}{\log(1 + 1.4\eta)} \lesssim \frac{\log n}{\eta}; \qquad T_1 - T_0 \lesssim \frac{\log\big(\frac{c_4}{c_6}\log^5 m\big)}{\log(1 + 1.4\eta)} \lesssim \frac{\log\log m}{\eta}.$$
Moreover, combining (72) with (71) yields the growth rate of $\alpha_t/\beta_t$ as
$$\frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge \frac{1 + 1.4\eta}{1 + 30\eta^2} \ge 1 + \eta$$
for $\eta > 0$ sufficiently small.
– Taken collectively, the preceding bounds imply that
$$T_1 = T_{1,1} + (T_1 - T_{1,1}) \lesssim \frac{1}{\eta^2} + \frac{\log n}{\eta} \lesssim \frac{\log n}{\eta^2}.$$
• Stage 1.2: in this stage, we consider all iterations $T_1 < t \le T_2$, where
$$T_2 := \min\Big\{t \ \Big|\ \frac{\alpha_{t+1}}{\beta_{t+1}} > \frac{2}{\gamma}\Big\}.$$
From the preceding analysis, it is seen that, for $\eta$ sufficiently small,
$$\frac{\alpha_{T_{1,1}}}{\beta_{T_{1,1}}} \le \frac{c_4}{\frac{(1-7\eta)^2}{\sqrt{3}}} \le \frac{\sqrt{3}\,c_4}{1 - 15\eta}.$$
In addition, we have:
Fact 3. Suppose $\eta > 0$ is sufficiently small. Then for any $T_1 < t \le T_2$, one has
$$\alpha_t^2 + \beta_t^2 \le 2; \tag{73}$$
$$\frac{\alpha_{t+1}/\beta_{t+1}}{\alpha_t/\beta_t} \ge 1 + \eta; \tag{74}$$
$$\alpha_{t+1} \ge (1 - 3.1\eta)\alpha_t; \tag{75}$$
$$\beta_{t+1} \ge (1 - 5.1\eta)\beta_t. \tag{76}$$
In addition, $T_2 - T_1 \lesssim \frac{1}{\eta}$.
With this fact in place, one has
$$\alpha_t \ge (1 - 3.1\eta)^{t - T_1}\alpha_{T_1} \gtrsim 1, \qquad T_1 < t \le T_2,$$
and hence
$$\beta_t \ge (1 - 5.1\eta)^{t - T_1}\beta_{T_1} \gtrsim 1, \qquad T_1 < t \le T_2.$$
These taken collectively demonstrate (24) for any $T_1 < t \le T_2$. Finally, if $T_2 \ge T_\gamma$, then we complete the proof as
$$T_\gamma \le T_2 = T_1 + (T_2 - T_1) \lesssim \frac{\log n}{\eta^2}.$$
Otherwise we move to the next stage.
• Stage 1.3: this stage is composed of all iterations $T_2 < t \le T_\gamma$. We break the discussion into two cases.
– If $\alpha_{T_2+1} > 1 + \gamma$, then $\alpha_{T_2+1}^2 + \beta_{T_2+1}^2 \ge \alpha_{T_2+1}^2 > 1 + 2\gamma$. This means that
$$\alpha_{T_2+2} \le \big[1 + 3\eta\big(1 - \alpha_{T_2+1}^2 - \beta_{T_2+1}^2\big) + \eta|\zeta_{T_2+1}|\big]\alpha_{T_2+1} \le \Big(1 - 6\eta\gamma + \frac{\eta c_3}{\log n}\Big)\alpha_{T_2+1} \le (1 - 5\eta\gamma)\,\alpha_{T_2+1}$$
when $c_3 > 0$ is sufficiently small. Similarly, one also gets $\beta_{T_2+2} \le (1 - 5\eta\gamma)\beta_{T_2+1}$. As a result, both $\alpha_t$ and $\beta_t$ will decrease. Repeating this argument reveals that
$$\alpha_{t+1} \le (1 - 5\eta\gamma)\alpha_t, \qquad \beta_{t+1} \le (1 - 5\eta\gamma)\beta_t$$
until $\alpha_t \le 1 + \gamma$. In addition, applying the same argument as for Stage 1.2 yields
$$\frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge 1 + c_{10}\eta$$
for some constant $c_{10} > 0$. Therefore, when $\alpha_t$ drops below $1 + \gamma$, one has
$$\alpha_t \ge (1 - 3\eta)(1 + \gamma) \ge 1 - \gamma \qquad\text{and}\qquad \beta_t \le \frac{\gamma}{2}\alpha_t \le \gamma.$$
This justifies that
$$T_\gamma - T_2 \lesssim \frac{\log\frac{2}{1-\gamma}}{-\log(1 - 5\eta\gamma)} \lesssim \frac{1}{\eta}.$$
– If $c_4 \le \alpha_{T_2+1} < 1 - \gamma$, take very similar arguments as in Stage 1.2 to reach that
$$\frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge 1 + c_{10}\eta, \qquad \alpha_t \gtrsim 1, \qquad \beta_t \gtrsim 1, \qquad T_2 \le t \le T_\gamma, \qquad\text{and}\qquad T_\gamma - T_2 \lesssim \frac{1}{\eta}$$
for some constant $c_{10} > 0$. We omit the details for brevity.
In either case, we see that $\alpha_t$ is always bounded away from 0. We can also repeat the argument for Stage 1.2 to show that $\beta_t \gtrsim 1$.
In conclusion, we have established that
$$T_\gamma = T_1 + (T_2 - T_1) + (T_\gamma - T_2) \lesssim \frac{\log n}{\eta^2},$$
and
$$\frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge 1 + c_{10}\eta^2, \qquad \frac{1}{2\sqrt{n\log n}} \le \alpha_t \le 2, \qquad c_5 \le \beta_t \le 1.5, \qquad 0 \le t < T_\gamma,$$
for some constants $c_5, c_{10} > 0$.
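To see the mechanism behind these constants, one can iterate the deterministic counterpart of the recursions (102) and (108), namely $\alpha_{t+1} = [1 + 3\eta(1 - \|x^t\|_2^2)]\alpha_t$ and $\beta_{t+1} = [1 + \eta(1 - 3\|x^t\|_2^2)]\beta_t$ with $\|x^t\|_2^2 = \alpha_t^2 + \beta_t^2$, dropping the perturbation terms $\zeta_t, \rho_t$. The Python sketch below is our own illustration (step size and initial values are arbitrary choices in the regime of Lemma 1); it shows the signal component $\alpha_t$ eventually overtaking $\beta_t$, as Lemma 1 asserts.

```python
import numpy as np

n, eta = 1000, 0.01
alpha = 1.0 / np.sqrt(n * np.log(n))     # typical size of the initial signal component
beta = 1.0                               # typical size of the initial perpendicular component

for t in range(5000):
    norm_sq = alpha ** 2 + beta ** 2     # ||x^t||_2^2 when x^t_par = alpha and ||x^t_perp||_2 = beta
    alpha, beta = (1 + 3 * eta * (1 - norm_sq)) * alpha, (1 + eta * (1 - 3 * norm_sq)) * beta
    if alpha > beta:                     # the signal component takes over
        print(f"alpha first exceeds beta at iteration {t}: alpha={alpha:.3f}, beta={beta:.3f}")
        break
```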
Proof of Fact 1. The proof proceeds as follows.
• First of all, for any $0 \le t \le T_{1,1}$, one has $\beta_t \ge \sqrt{1/3 + \eta}$ and $\alpha_t^2 + \beta_t^2 \ge 1/3 + \eta$ and, as a result,
$$\beta_{t+1} \le \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + \eta|\rho_t|\big]\beta_t \le \Big(1 - 3\eta^2 + \frac{c_3\eta}{\log n}\Big)\beta_t \le (1 - 2\eta^2)\beta_t \tag{77}$$
as long as $c_3$ and $\eta$ are both constants. In other words, $\beta_t$ is strictly decreasing before $T_{1,1}$, which also justifies the claim (64b) for this substage.
• Moreover, given that the contraction factor of $\beta_t$ is at least $1 - 2\eta^2$, we have
$$T_{1,1} \lesssim \frac{\log\frac{\beta_0}{\sqrt{1/3+\eta}}}{-\log(1 - 2\eta^2)} \lesssim \frac{1}{\eta^2}.$$
This upper bound also allows us to conclude that $\beta_t$ will cross the threshold $\sqrt{1/3 + \eta}$ before $\alpha_t$ exceeds $c_4$, namely, $T_{1,1} < T_1$. To see this, we note that the growth rate of $\{\alpha_t\}$ within this substage is upper bounded by
$$\alpha_{t+1} \le \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) + \eta|\zeta_t|\big]\alpha_t \le \Big(1 + 3\eta + \frac{c_3\eta}{\log n}\Big)\alpha_t \le (1 + 4\eta)\alpha_t. \tag{78}$$
This leads to an upper bound
$$|\alpha_{T_{1,1}}| \le (1 + 4\eta)^{T_{1,1}}|\alpha_0| \le (1 + 4\eta)^{O(\eta^{-2})}\frac{\sqrt{\log n}}{\sqrt{n}} \ll c_4.$$
• Furthermore, we can also lower bound $\alpha_t$. First of all,
$$\alpha_1 \ge \big[1 + 3\eta\big(1 - \alpha_0^2 - \beta_0^2\big) - \eta|\zeta_t|\big]\alpha_0 \ge \Big(1 - 3\eta - \frac{c_3\eta}{\log n}\Big)\alpha_0 \ge (1 - 4\eta)\alpha_0 \ge \frac{1}{2}\alpha_0$$
for $\eta$ sufficiently small. For all $1 \le t \le T_{1,1}$, using (78) we have
$$\alpha_t^2 + \beta_t^2 \le (1 + 4\eta)^{T_{1,1}}\alpha_0^2 + \beta_1^2 \le o(1) + (1 - 2\eta^2)\beta_0 \le 1 - \eta^2, \tag{79}$$
allowing one to deduce that
$$\alpha_{t+1} \ge \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) - \eta|\zeta_t|\big]\alpha_t \ge \Big(1 + 3\eta^3 - \frac{c_3\eta}{\log n}\Big)\alpha_t \ge (1 + 2\eta^3)\alpha_t.$$
In other words, $\alpha_t$ keeps increasing throughout all $1 \le t \le T_{1,1}$. This verifies the condition (64a) for this substage.
• Finally, we make note of one useful lower bound
$$\beta_{T_{1,1}+1} \ge (1 - 7\eta)\beta_{T_{1,1}} \ge \frac{1 - 7\eta}{\sqrt{3}}, \tag{80}$$
which follows by combining (65) and the condition $\beta_{T_{1,1}} \ge \sqrt{1/3 + \eta}$.
Proof of Fact 2. Clearly, $\beta_{T_{1,1}+1}$ falls within this range according to (66) and (80). We now divide into several cases.
• If $\frac{1+\eta}{\sqrt{3}} \le \beta_t < \frac{1+30\eta}{\sqrt{3}}$, then $\alpha_t^2 + \beta_t^2 \ge \beta_t^2 \ge (1+\eta)^2/3$, and hence the next iteration obeys
$$\beta_{t+1} \le \big[1 + \eta\big(1 - 3\beta_t^2\big) + \eta|\rho_t|\big]\beta_t \le \Big(1 + \eta\big(1 - (1+\eta)^2\big) + \frac{c_3\eta}{\log n}\Big)\beta_t \le (1 - \eta^2)\beta_t \tag{81}$$
and, in view of (65), $\beta_{t+1} \ge (1 - 7\eta)\beta_t \ge \frac{1-7\eta}{\sqrt{3}}$. In summary, in this case one has $\beta_{t+1} \in \big[\frac{1-7\eta}{\sqrt{3}}, \frac{1+30\eta}{\sqrt{3}}\big]$, which still resides within the range (70).
• If $\frac{(1-7\eta)^2}{\sqrt{3}} \le \beta_t \le \frac{1-7\eta}{\sqrt{3}}$, then $\alpha_t^2 + \beta_t^2 < c_4^2 + (1-7\eta)^2/3 < (1-7\eta)/3$ for $c_4$ sufficiently small. Consequently, for a small enough $c_3$ one has
$$\beta_{t+1} \ge \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) - \eta|\rho_t|\big]\beta_t \ge \Big(1 + 7\eta^2 - \frac{c_3\eta}{\log n}\Big)\beta_t \ge (1 + 6\eta^2)\beta_t.$$
In other words, $\beta_{t+1}$ is strictly larger than $\beta_t$. Moreover, recognizing that $\alpha_t^2 + \beta_t^2 > (1-7\eta)^4/3 > (1-29\eta)/3$, one has
$$\beta_{t+1} \le \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + \eta|\rho_t|\big]\beta_t \le \Big(1 + 29\eta^2 + \frac{c_3\eta}{\log n}\Big)\beta_t \le (1 + 30\eta^2)\beta_t < \frac{1 + 30\eta}{\sqrt{3}}. \tag{82}$$
Therefore, we have shown that $\beta_{t+1} \in \big[\frac{(1-7\eta)^2}{\sqrt{3}}, \frac{1+30\eta}{\sqrt{3}}\big]$, which continues to lie within the range (70).
• Finally, if $\frac{1-7\eta}{\sqrt{3}} < \beta_t < \frac{1+\eta}{\sqrt{3}}$, we have $\alpha_t^2 + \beta_t^2 \ge \frac{(1-7\eta)^2}{3} \ge \frac{1-15\eta}{3}$ for $\eta$ sufficiently small, which implies
$$\beta_{t+1} \le \big[1 + 15\eta^2 + \eta|\rho_t|\big]\beta_t \le (1 + 16\eta^2)\beta_t \le \frac{(1 + 16\eta^2)(1 + \eta)}{\sqrt{3}} \le \frac{1 + 2\eta}{\sqrt{3}} \tag{83}$$
for small $\eta > 0$. In addition, it comes from (80) that $\beta_{t+1} \ge (1 - 7\eta)\beta_t \ge \frac{(1-7\eta)^2}{\sqrt{3}}$. This justifies that $\beta_{t+1}$ falls within the range (70).
Combining all of the preceding cases establishes the claim (70) for all $T_{1,1} < t \le T_1$.
Proof of Fact 3. We first demonstrate that
$$\alpha_t^2 + \beta_t^2 \le 2 \tag{84}$$
throughout this substage. In fact, if $\alpha_t^2 + \beta_t^2 \le 1.5$, then
$$\alpha_{t+1} \le \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) + \eta|\zeta_t|\big]\alpha_t \le (1 + 4\eta)\alpha_t$$
and, similarly, $\beta_{t+1} \le (1 + 4\eta)\beta_t$. These taken together imply that
$$\alpha_{t+1}^2 + \beta_{t+1}^2 \le (1 + 4\eta)^2\big(\alpha_t^2 + \beta_t^2\big) \le 1.5(1 + 9\eta) < 2.$$
Additionally, if $1.5 < \alpha_t^2 + \beta_t^2 \le 2$, then
$$\alpha_{t+1} \le \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) + \eta|\zeta_t|\big]\alpha_t \le \Big(1 - 1.5\eta + \frac{c_3\eta}{\log n}\Big)\alpha_t \le (1 - \eta)\alpha_t$$
and, similarly, $\beta_{t+1} \le (1 - \eta)\beta_t$. These reveal that
$$\alpha_{t+1}^2 + \beta_{t+1}^2 \le \alpha_t^2 + \beta_t^2.$$
Put together the above argument to establish the claim (84).
With the claim (84) in place, we can deduce that
$$\alpha_{t+1} \ge \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) - \eta|\zeta_t|\big]\alpha_t \ge \big[1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) - 0.1\eta\big]\alpha_t \tag{85}$$
and
$$\beta_{t+1} \le \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + \eta|\rho_t|\big]\beta_t \le \big[1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + 0.1\eta\big]\beta_t.$$
Consequently,
$$\frac{\alpha_{t+1}/\beta_{t+1}}{\alpha_t/\beta_t} = \frac{\alpha_{t+1}/\alpha_t}{\beta_{t+1}/\beta_t} \ge \frac{1 + 3\eta\big(1 - \alpha_t^2 - \beta_t^2\big) - 0.1\eta}{1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + 0.1\eta} = 1 + \frac{1.8\eta}{1 + \eta\big(1 - 3\alpha_t^2 - 3\beta_t^2\big) + 0.1\eta} \ge 1 + \frac{1.8\eta}{1 + 2\eta} \ge 1 + \eta$$
for $\eta > 0$ sufficiently small. This immediately implies that
$$T_2 - T_1 \lesssim \frac{\log\frac{2/\gamma}{\alpha_{T_{1,1}}/\beta_{T_{1,1}}}}{\log(1 + \eta)} \lesssim \frac{1}{\eta}.$$
Moreover, combine (84) and (85) to arrive at
$$\alpha_{t+1} \ge (1 - 3.1\eta)\alpha_t. \tag{86}$$
Similarly, one can show that $\beta_{t+1} \ge (1 - 5.1\eta)\beta_t$.
C Proof of Lemma 2
C.1 Proof of (41a)
In view of the gradient update rule (3), we can express the signal component $x_\parallel^{t+1}$ as follows:
$$x_\parallel^{t+1} = x_\parallel^t - \frac{\eta}{m}\sum_{i=1}^m\Big[\big(a_i^\top x^t\big)^3 - a_{i,1}^2\,a_i^\top x^t\Big]a_{i,1}.$$
Expanding this expression using $a_i^\top x^t = a_{i,1}x_\parallel^t + a_{i,\perp}^\top x_\perp^t$ and rearranging terms, we are left with
$$x_\parallel^{t+1} = x_\parallel^t + \underbrace{\eta\big[1 - \big(x_\parallel^t\big)^2\big]x_\parallel^t\cdot\frac{1}{m}\sum_{i=1}^m a_{i,1}^4}_{:=J_1} + \underbrace{\eta\big[1 - 3\big(x_\parallel^t\big)^2\big]\cdot\frac{1}{m}\sum_{i=1}^m a_{i,1}^3\,a_{i,\perp}^\top x_\perp^t}_{:=J_2} - \underbrace{3\eta x_\parallel^t\cdot\frac{1}{m}\sum_{i=1}^m \big(a_{i,\perp}^\top x_\perp^t\big)^2 a_{i,1}^2}_{:=J_3} - \underbrace{\eta\cdot\frac{1}{m}\sum_{i=1}^m \big(a_{i,\perp}^\top x_\perp^t\big)^3 a_{i,1}}_{:=J_4}.$$
In the sequel, we control the above four terms J1 , J2 , J3 and J4 separately.
• With regard to the first term $J_1$, it follows from the standard concentration inequality for Gaussian polynomials [SS12, Theorem 1.9] that
$$\mathbb{P}\left(\Big|\frac{1}{m}\sum_{i=1}^m a_{i,1}^4 - 3\Big| \ge \tau\right) \le e^2 e^{-c_1 m^{1/4}\tau^{1/2}}$$
for some absolute constant $c_1 > 0$. Taking $\tau \asymp \frac{\log^3 m}{\sqrt{m}}$ reveals that with probability exceeding $1 - O(m^{-10})$,
$$J_1 = 3\eta\big[1 - \big(x_\parallel^t\big)^2\big]x_\parallel^t + \Big(\frac{1}{m}\sum_{i=1}^m a_{i,1}^4 - 3\Big)\eta\big[1 - \big(x_\parallel^t\big)^2\big]x_\parallel^t = 3\eta\big[1 - \big(x_\parallel^t\big)^2\big]x_\parallel^t + r_1, \tag{87}$$
where the remainder term $r_1$ obeys
$$|r_1| = O\Big(\frac{\eta\log^3 m}{\sqrt{m}}\,\big|x_\parallel^t\big|\Big).$$
Here, the last line also uses the fact that
$$\big|1 - \big(x_\parallel^t\big)^2\big| \le 1 + \big\|x^t\big\|_2^2 \lesssim 1, \tag{88}$$
with the last relation coming from the induction hypothesis (40e).
• For the third term $J_3$, it is easy to see that
$$\frac{1}{m}\sum_{i=1}^m \big(a_{i,\perp}^\top x_\perp^t\big)^2 a_{i,1}^2 - \big\|x_\perp^t\big\|_2^2 = x_\perp^{t\top}\Big[\underbrace{\frac{1}{m}\sum_{i=1}^m \big(a_i^\top x^\natural\big)^2 a_{i,\perp}a_{i,\perp}^\top}_{:=U} - I_{n-1}\Big]x_\perp^t, \tag{89}$$
where $U - I_{n-1}$ is a submatrix of the following matrix (obtained by removing its first row and column):
$$\frac{1}{m}\sum_{i=1}^m \big(a_i^\top x^\natural\big)^2 a_i a_i^\top - I_n - 2e_1e_1^\top. \tag{90}$$
This fact combined with Lemma 14 reveals that
$$\|U - I_{n-1}\| \le \Big\|\frac{1}{m}\sum_{i=1}^m \big(a_i^\top x^\natural\big)^2 a_i a_i^\top - I_n - 2e_1e_1^\top\Big\| \lesssim \sqrt{\frac{n\log^3 m}{m}}$$
with probability at least $1 - O(m^{-10})$, provided that $m \gg n\log^3 m$. This further implies
$$J_3 = 3\eta\big\|x_\perp^t\big\|_2^2\,x_\parallel^t + r_2, \tag{91}$$
where the size of the remaining term $r_2$ satisfies
$$|r_2| \lesssim \eta\sqrt{\frac{n\log^3 m}{m}}\,\big|x_\parallel^t\big|\,\big\|x_\perp^t\big\|_2^2 \lesssim \eta\sqrt{\frac{n\log^3 m}{m}}\,\big|x_\parallel^t\big|.$$
Here, the last inequality holds under the hypothesis (40e) that $\|x_\perp^t\|_2 \le \|x^t\|_2 \lesssim 1$.
• When it comes to J2 , our analysis relies on the random-sign sequence {xt,sgn }. Specifically, one can
decompose
m
m
m
1 X 3 > t
1 X 3 > t,sgn
1 X 3 >
ai,1 ai,⊥ x⊥ =
ai,1 ai,⊥ x⊥ +
ai,1 ai,⊥ xt⊥ − xt,sgn
.
(92)
⊥
m i=1
m i=1
m i=1
t,sgn
For the first term on the right-hand side of (92), note that |ai,1 |3 a>
is statistically independent of
i,⊥ x⊥
P
m
t,sgn
1
3
>
a
a
x
as
a
weighted
sum
of the ξi ’s and apply the
ξi = sgn (ai,1 ). Therefore we can treat m
i=1 i,1 i,⊥ ⊥
Bernstein inequality (see Lemma 11) to arrive at
m
m
1 X
1 p
1 X 3 > t,sgn
3
t,sgn
ai,1 ai,⊥ x⊥
=
ξi |ai,1 | a>
.
V1 log m + B1 log m
i,⊥ x⊥
m i=1
m i=1
m
(93)
with probability exceeding 1 − O m−10 , where
V1 :=
m
X
|ai,1 |
6
t,sgn
a>
i,⊥ x⊥
2
and
3
t,sgn
.
B1 := max |ai,1 | a>
i,⊥ x⊥
1≤i≤m
i=1
Make use ofLemma 12 and the incoherence condition (62d) to deduce that with probability at least
1 − O m−10 ,
m
1
1 X
6
t,sgn 2
V1 =
|ai,1 | a>
. xt,sgn
i,⊥ x⊥
⊥
m
m i=1
2
2
with the proviso that m n log5 m. Furthermore, the incoherence condition (62d) together with the fact
(56) implies that
B1 . log2 m xt,sgn
.
⊥
2
Substitute the bounds on V1 and B1 back to (93) to obtain
r
m
1 X 3 > t,sgn
log m
log3 m
ai,1 ai,⊥ x⊥
.
xt,sgn
+
xt,sgn
⊥
⊥
2
m i=1
m
m
r
2
log m
xt,sgn
⊥
m
2
(94)
as long as m & log5 m. Additionally, regarding the second term on the right-hand side of (92), one sees
that
m
m
1 X > \ 2
1 X 3 >
t,sgn
t
ai,1 ai,⊥ xt⊥ − xt,sgn
=
a x ai,1 a>
,
(95)
i,⊥ x⊥ − x⊥
⊥
m i=1
m i=1 i
|
{z
}
:=u>
where u is the first column of (90) without the first entry. Hence we have
s
m
1 X 3 >
n log3 m
t,sgn
t,sgn
.
≤ kuk2 xt⊥ − x⊥
xt⊥ − xt,sgn
ai,1 ai,⊥ xt⊥ − x⊥
⊥
2
m i=1
m
2
,
(96)
with probability exceeding 1 − O m−10 , with the proviso that m n log3 m. Substituting the above two
bounds (94) and (96) back into (92) gives
m
m
m
1 X 3 > t
1 X 3 > t,sgn
1 X 3 >
ai,1 ai,⊥ x⊥ ≤
ai,1 ai,⊥ x⊥
+
ai,1 ai,⊥ xt⊥ − xt,sgn
⊥
m i=1
m i=1
m i=1
s
r
log m
n log3 m
xt,sgn
+
xt⊥ − xt,sgn
.
.
⊥
⊥
2
2
m
m
As a result, we arrive at the following bound on J2 :
r
s
log
m
n log3 m
2
+
xt,sgn
xt⊥ − xt,sgn
|J2 | . η 1 − 3 xtk
⊥
⊥
2
m
m
s
r
(i)
log m
n log3 m
.η
xt,sgn
xt⊥ − xt,sgn
+
η
⊥
⊥
2
2
m
m
s
r
(ii)
log m
n log3 m
,
xt⊥ 2 + η
xt⊥ − xt,sgn
. η
⊥
2
m
m
where (i) uses (88) again and (ii) comes from the triangle inequality xt,sgn
⊥
q
q
log m
n log3 m
and the fact that
.
m ≤
m
2
2
≤ kxt⊥ k2 + xt⊥ − xt,sgn
⊥
2
• It remains to control J4 , towards which we resort to the random-sign sequence {xt,sgn } once again. Write
m
m
m
i
1 X > t 3
1 X h > t 3
1 X > t,sgn 3
t,sgn 3
ai,⊥ x⊥ ai,1 =
ai,⊥ x⊥
ai,1 +
ai,⊥ x⊥ − a>
ai,1 .
i,⊥ x⊥
m i=1
m i=1
m i=1
t,sgn
For the first term in (97), since ξi = sgn (ai,1 ) is statistically independent of a>
i,⊥ x⊥
upper bound the first term using the Bernstein inequality (see Lemma 11) as
3
(97)
|ai,1 |, we can
m
1 p
1 X > t,sgn 3
ai,⊥ x⊥
|ai,1 | ξi .
V2 log m + B2 log m ,
m i=1
m
where the quantities V2 and B2 obey
V2 :=
m
X
t,sgn
a>
i,⊥ x⊥
6
2
|ai,1 |
3
t,sgn
B2 := max a>
|ai,1 | .
i,⊥ x⊥
and
1≤i≤m
i=1
Using similar arguments as in bounding (93) yields
V2 . m xt,sgn
⊥
6
2
B2 . log2 m xt,sgn
⊥
and
with the proviso that m n log5 m and
r
m
1 X > t,sgn 3
log m
a x
|ai,1 | ξi .
xt,sgn
⊥
m i=1 i,⊥ ⊥
m
log3 m
+
xt,sgn
⊥
2
m
3
3
2
3
2
r
log m
xt,sgn
⊥
m
3
2
,
(98)
with probability exceeding 1 − O(m−10 ) as soon as m & log5 m. Regarding the second term in (97),
m
i
1 X h > t 3
t,sgn 3
ai,1
ai,⊥ x⊥ − a>
i,⊥ x⊥
m i=1
m
h > t 2
> t,sgn io
1 X n >
t,sgn 2
>
>
t
ai,1
ai,⊥ xt⊥ − xt,sgn
a
x
+
a
x
+
a
x
ai,⊥ x⊥
i,⊥ ⊥
i,⊥ ⊥
i,⊥ ⊥
⊥
m i=1
v
v
u
m h
m h
i2 u
(ii) u 1 X
u1 X
i
t − xt,sgn
t 4 + 5 a> xt,sgn 4 a2 .
t
≤ t
a>
x
5 a>
i,1
i,⊥ ⊥
i,⊥
⊥
⊥
i,⊥ x⊥
m i=1
m i=1
(i)
=
(99)
Here, the first equality (i) utilizes the elementary identity a3 − b3 = (a − b) a2 + b2 + ab , and (ii) follows
from the Cauchy-Schwarz inequality as well as the inequality
(a2 + b2 + ab)2 ≤ (1.5a2 + 1.5b2 )2 ≤ 5a4 + 5b4 .
Use Lemma 13 to reach
v
v
u
u
m h
i
u
2
u1 X
>
t,sgn
>
t
t
ai,⊥ x⊥ − x⊥
= t xt⊥ − xt,sgn
⊥
m i=1
m
1 X
ai,⊥ a>
i,⊥
m i=1
!
xt⊥ − xt,sgn
. xt⊥ − xt,sgn
⊥
⊥
2
.
Additionally, combining Lemma 12 and the incoherence conditions (62b) and (62d), we can obtain
v
u
m h
u1 X
i
t 4 + 5 a> xt,sgn 4 a2 . xt 2 + xt,sgn 2 . 1,
t
5 a>
⊥ 2
i,1
i,⊥ x⊥
i,⊥ ⊥
⊥
2
m i=1
as long as m n log6 m. Here, the last relation comes from the norm conditions (40e) and (61b). These
in turn imply
m
i
1 X h > t 3
t,sgn 3
ai,⊥ x⊥ − a>
x
ai,1 . xt⊥ − xt,sgn
.
(100)
i,⊥ ⊥
⊥
2
m i=1
Combining the above bounds (98) and (100), we get
m
1 X
|J4 | ≤ η
m i=1
r
log m
.η
m
r
log m
.η
m
r
log m
.η
m
t,sgn 3
a>
i,⊥ x⊥
xt,sgn
⊥
xt,sgn
⊥
xt⊥
2
3
m
ai,1
i
1 X h > t 3
t,sgn 3
ai,⊥ x⊥ − a>
+η
ai,1
i,⊥ x⊥
m i=1
2
+ η xt⊥ − xt,sgn
⊥
2
2
+ η xt⊥ − xt,sgn
⊥
2
+ η xt⊥ − xt,sgn
⊥
2
,
where the penultimate inequality arises from the norm condition (61b) and
q the last one comes from the
t,sgn
t,sgn
t
t
triangle inequality x⊥
≤ kx⊥ k2 + x⊥ − x⊥
and the fact that logmm ≤ 1.
2
2
• Putting together the above estimates for J1 , J2 , J3 and J4 , we reach
xt+1
= xtk + J1 − J3 + J2 − J4
k
h
2 i t
= xtk + 3η 1 − xtk
xk − 3η xt⊥
n
o
2
= 1 + 3η 1 − xt 2 xtk + R1 ,
2
2
xt|| + R1
(101)
where R1 is the residual term obeying
s
r
n log3 m t
log m
xk + η
xt⊥
|R1 | . η
m
m
2
+ η xt − xt,sgn
2
.
Substituting the hypotheses (40) into (101) and recalling that αt = hxt , x\ i lead us to conclude that
s
!
r
3
o
n
n
log
m
log
m
t 2
αt+1 = 1 + 3η 1 − x 2 αt + O η
αt + O η
βt
m
m
s
t
5
n log m
1
C3
+ O ηαt 1 +
log m
m
o
n
2
(102)
= 1 + 3η 1 − xt 2 + ηζt αt ,
for some |ζt |
1
log m ,
provided that
s
n log3 m
1
m
log m
r
1
log m
βt
αt
m
log m
s
t
1
n log5 m
1
1+
C3
.
log m
m
log m
(103a)
(103b)
(103c)
5
Here, the first condition (103a) naturally holds under the
√ sample complexity m n log m, whereas the
t
second condition (103b) is true since βt ≤ kx k2 . αt n log m (cf. the induction hypothesis (40f)) and
m n log4 m. For the last condition (103c), observe that for t ≤ T0 = O (log n),
1+
1
log m
t
= O (1) ,
which further implies
1+
1
log m
s
t
C3
5
n log m
. C3
m
s
n log5 m
1
m
log m
as long as the number of samples obeys m n log7 m. This concludes the proof.
C.2 Proof of (41b)
Given the gradient update rule (3), the perpendicular component $x_\perp^{t+1}$ can be decomposed as
$$x_\perp^{t+1} = x_\perp^t - \frac{\eta}{m}\sum_{i=1}^m\Big[\big(a_i^\top x^t\big)^2 - \big(a_i^\top x^\natural\big)^2\Big]a_{i,\perp}\,a_i^\top x^t = x_\perp^t + \underbrace{\frac{\eta}{m}\sum_{i=1}^m \big(a_i^\top x^\natural\big)^2 a_{i,\perp}\,a_i^\top x^t}_{:=v_1} - \underbrace{\frac{\eta}{m}\sum_{i=1}^m \big(a_i^\top x^t\big)^3 a_{i,\perp}}_{:=v_2}. \tag{104}$$
In what follows, we bound $v_1$ and $v_2$ in turn.
t
t
>
t
• We begin with v1 . Using the identity a>
i x = ai,1 xk + ai,⊥ x⊥ , one can further decompose v1 into the
following two terms:
m
m
1
1 X > \ 2
1 X > \ 2
t
v1 = xtk ·
ai x ai,1 ai,⊥ +
a x ai,⊥ a>
i,⊥ x⊥
η
m i=1
m i=1 i
= xtk u + U xt⊥ ,
where U , u are as defined, respectively, in (89) and (95). Recall that we have shown that
s
s
n log3 m
n log3 m
and
kU − In−1 k .
kuk2 .
m
m
hold with probability exceeding 1 − O m−10 . Consequently, one has
v1 = ηxt⊥ + r1 ,
(105)
where the residual term r1 obeys
s
kr1 k2 . η
n log3 m
xt⊥
m
s
+η
2
n log3 m t
xk .
m
(106)
• It remains to bound v2 in (104). To this end, we make note of the following fact
m
m
m
3 1 X 3
1 X > t 3
1 X > t 3
ai x ai,⊥ =
ai,⊥ x⊥ ai,⊥ + xtk
a ai,⊥
m i=1
m i=1
m i=1 i,1
+
m
3xtk X
m
t
ai,1 a>
i,⊥ x⊥
2
ai,⊥ + 3 xtk
i=1
m
2 1 X
t
a2 ai,⊥ a>
i,⊥ x⊥
m i=1 i,1
m
m
3xtk X
3
2
1 X > t 3
t 2
ai,⊥ x⊥ ai,⊥ +
ai,1 a>
ai,⊥ + xtk u + 3 xtk U xt⊥ . (107)
=
i,⊥ x⊥
m i=1
m i=1
Applying Lemma 14 and using the incoherence condition (62b), we get
s
m
1 X > t 2
t
a x
ai,⊥ a>
i,⊥ − x⊥
m i=1 i,⊥ ⊥
2
m
1 X
0
t
a>
ai a>
i
i − x⊥
xt⊥
m i=1
2
2
2
2
In−1 − 2xt⊥ xt>
.
⊥
In − 2
0
xt⊥
0
xt⊥
n log3 m
xt⊥
m
s
>
.
2
2
,
n log3 m
xt⊥
m
as long as m n log3 m. These two together allow us to derive
(
m
m
1 X > t 2
1 X > t 3
t 2 t
t
ai,⊥ a>
ai,⊥ x⊥ ai,⊥ − 3 x⊥ 2 x⊥ =
a x
i,⊥ − x⊥
m i=1
m i=1 i,⊥ ⊥
2
2
,
)
2
2
In−1 −
2xt⊥ xt>
⊥
xt⊥
2
m
1 X > t 2
t
≤
a x
ai,⊥ a>
i,⊥ − x⊥
m i=1 i,⊥ ⊥
s
n log3 m
3
.
xt⊥ 2 ;
m
2
2
2
In−1 − 2xt⊥ xt>
⊥
xt⊥
and
m
1 X
t 2
ai,1 a>
ai,⊥
i,⊥ x⊥
m i=1
2
2
m
1 X
0
>
t
≤
ai
ai a>
i − x⊥
xt⊥
m i=1
{z
|
:=A
2
2
In − 2
0
xt⊥
0
xt⊥
>
}
2
s
n log3 m
2
xt⊥ 2 ,
m
Pm
1
>
t 2
where the second one follows since m
i=1 ai,1 ai,⊥ x⊥ ai,⊥ is the first column of A except for the first
entry. Substitute the preceding bounds into (107) to arrive at
.
m
1 X > t 3
a x ai,⊥ − 3 xt⊥
m i=1 i
2
2
2
xt⊥ − 3 xtk xt⊥
2
m
m
1 X
1 X > t 3
2
t 2
≤
ai,⊥ x⊥ ai,⊥ − 3 xt⊥ 2 xt⊥ + 3 xtk
ai,1 a>
ai,⊥
i,⊥ x⊥
m i=1
m i=1
2
2
3
(U − In−1 ) xt⊥ 2
+ xtk u + 3 xtk
2
s
s
n log3 m t 3
n log3 m
2
3
2
.
xt⊥ 2 .
x⊥ 2 + xtk xt⊥ 2 + xtk + xtk
xt
m
m
2
2
with probability at least 1 − O(m−10 ). Here, the last relation holds owing to the norm condition (40e)
and the fact that
3
2
3
2
3
xt⊥ 2 + xtk xt⊥ 2 + xtk + xtk
xt⊥ 2 xt 2 . xt 2 .
This in turn tells us that
m
η X > t 3
a x ai,⊥ = 3η xt⊥
v2 =
m i=1 i
2
2
2
xt⊥ + 3η xtk xt⊥ + r2 = 3η xt
2
2
xt⊥ + r2 ,
where the residual term r2 is bounded by
s
kr2 k2 . η
n log3 m
xt
m
2
.
• Putting the above estimates on v1 and v2 together, we conclude that
n
o
t
t 2
xt+1
=
x
+
v
−
v
=
1
+
η
1
−
3
x
xt⊥ + r3 ,
1
2
⊥
⊥
2
where r3 = r1 − r2 satisfies
s
kr3 k2 . η
n log3 m
xt
m
2
.
Plug in the definitions of αt and βt to realize that
s
βt+1
for some |ρt |
1
log m ,
n
= 1 + η 1 − 3 xt
2
2
o
n
= 1 + η 1 − 3 xt
2
2
βt + O η
n log3 m
(αt + βt )
m
o
+ ηρt βt ,
with the proviso that m n log5 m and
s
n log3 m
1
αt
βt .
m
log m
(108)
The last condition holds true since
s
s
3
n log m
n log3 m 1
1
1
αt .
βt ,
m
m
log m
log m
log5 m
where we have used the assumption αt .
n log
11
1
log5 m
(see definition of T0 ), the sample size condition m
m and the induction hypothesis βt ≥ c5 (see (40e)). This finishes the proof.
D Proof of Lemma 4
It follows from the gradient update rules (3) and (29) that
$$x^{t+1} - x^{t+1,(l)} = x^t - \eta\nabla f\big(x^t\big) - \big[x^{t,(l)} - \eta\nabla f^{(l)}\big(x^{t,(l)}\big)\big] = x^t - \eta\nabla f\big(x^t\big) - \big[x^{t,(l)} - \eta\nabla f\big(x^{t,(l)}\big)\big] + \eta\big[\nabla f^{(l)}\big(x^{t,(l)}\big) - \nabla f\big(x^{t,(l)}\big)\big] = \Big(I_n - \eta\int_0^1 \nabla^2 f\big(x(\tau)\big)\,d\tau\Big)\big(x^t - x^{t,(l)}\big) - \frac{\eta}{m}\Big[\big(a_l^\top x^{t,(l)}\big)^2 - \big(a_l^\top x^\natural\big)^2\Big]\big(a_l^\top x^{t,(l)}\big)a_l, \tag{109}$$
where we denote $x(\tau) := x^t + \tau\big(x^{t,(l)} - x^t\big)$. Here, the last identity is due to the fundamental theorem of calculus [Lan93, Chapter XIII, Theorem 4.2].
• Controlling the first term in (109) requires exploring the properties of the Hessian ∇2 f (x). Since x (τ )
lies between xt and xt,(l) for any 0 ≤ τ ≤ 1, it is easy to see from (61) and (62) that
p
p
kx⊥ (τ )k2 ≤ kx (τ )k2 ≤ 2C5
and
max a>
log m . log m kx (τ )k2 .
(110)
i x (τ ) .
1≤i≤m
In addition, combining (61) and (63) leads to
kx⊥ (τ )k2 ≥ xt⊥
2
− xt − xt,(l)
2
≥ c5 /2 − log−1 m ≥ c5 /4.
(111)
Armed with these bounds, we can readily apply Lemma 15 to obtain
n
o
2
>
In − η∇2 f (x (τ )) − 1 − 3η kx (τ )k2 + η In + 2ηx\ x\> − 6ηx (τ ) x (τ )
s
s
3
n
o
n log m
n log3 m
2
max kx(τ )k2 , 1 . η
.
.η
m
m
This further allows one to derive
In − η∇2 f (x (τ )) xt − xt,(l)
2
s
n
≤
1−
2
3η kx (τ )k2
+ η In + 2ηx\ x\> − 6ηx (τ ) x (τ )
>
o
xt − xt,(l)
2
+ O η
n log3 m t
x − xt,(l)
m
Moreover, we can apply the triangle inequality to get
o
n
>
2
xt − xt,(l)
1 − 3η kx (τ )k2 + η In + 2ηx\ x\> − 6ηx (τ ) x (τ )
2
n
o
2
>
t
t,(l)
≤
1 − 3η kx (τ )k2 + η In − 6ηx (τ ) x (τ )
x −x
+ 2ηx\ x\> xt − xt,(l)
2
n
o
(i)
t,(l)
2
>
=
1 − 3η kx (τ )k2 + η In − 6ηx (τ ) x (τ )
xt − xt,(l)
+ 2η xtk − xk
2
2
(ii)
≤
2
1 − 3η kx (τ )k2 + η
xt − xt,(l)
t,(l)
2
+ 2η xtk − xk
,
t,(l)
where (i) holds since x\> xt − xt,(l) = xtk − xk (recall that x\ = e1 ) and (ii) follows from the fact that
2
>
1 − 3η kx (τ )k2 + η In − 6ηx (τ ) x (τ ) 0,
as long as η ≤ 1/ (18C5 ). This further reveals
In − η∇2 f (x (τ ))
xt − xt,(l)
2
2
.
3
n
log
m
t,(l)
≤ 1+η 1−
+ O η
xt − xt,(l) 2 + 2η xtk − xk
m
s
3
(i)
n
log
m
2
xt − xt,(l)
≤ 1 + η 1 − 3 xt 2 + O η xt − xt,(l) 2 + O η
m
(ii)
≤
s
2
3 kx (τ )k2
n
1 + η 1 − 3 xt
for some |φ1 |
1
log m ,
2
2
+ ηφ1
o
t,(l)
xt − xt,(l)
+ 2η xtk − xk
2
t,(l)
2
+ 2η xtk − xk
,
(112)
where (i) holds since for every 0 ≤ τ ≤ 1
2
2
2
kx (τ )k2 ≥ xt
2
2
t 2
x 2
≥ xt
≥
2
− kx (τ )k2 − xt
2
2
− x (τ ) − xt 2 kx (τ )k2 + xt
− O xt − xt,(l) 2 ,
2
(113)
and (ii) comes from the fact (63a) and the sample complexity assumption m n log5 m.
• We then move on to the second term of (109). Observing that xt,(l) is statistically independent of al , we
have
i
i > t,(l)
1 h > t,(l) 2
1 h > t,(l) 2
\ 2
t,(l)
\ 2
≤
al x
− a>
al a>
al x
+ a>
al x
kal k2
l x
l x
l x
m
m
2
p
√
1
.
· log m · log m xt,(l) · n
m
2
p
3
n log m
xt,(l) ,
(114)
m
2
where the second inequality makes use of the facts (56), (57) and the standard concentration results
t,(l)
a>
.
l x
p
log m xt,(l)
.
2
p
log m.
• Combine the previous two bounds (112) and (114) to reach
xt+1 − xt+1,(l) 2
Z 1
≤
I −η
∇2 f (x(τ )) dτ xt − xt,(l)
0
n
≤ 1 + η 1 − 3 xt
2
2
+ ηφ1
o
xt − xt,(l)
n
≤ 1 + η 1 − 3 xt
2
2
+ ηφ1
o
xt − xt,(l)
1 > t,(l) 2
\ 2
t,(l)
al x
− a>
x
al a>
l
l x
m
2
!2
p
3
η
n
log
m
t,(l)
+ 2η xtk − xk
+O
xt,(l)
2
m
2
!
p
η n log3 m
t,(l)
+
O
xt 2 + 2η xtk − xk .
2
m
+η
Here the last relation holds because of the triangle inequality
xt,(l)
√
and the fact that
n log3 m
m
2
≤ xt
2
+ xt − xt,(l)
2
1
log m .
In view of the inductive hypotheses (40), one has
x
t+1
−x
(i)
t+1,(l)
2
n
≤ 1 + η 1 − 3 xt
2
2
o
+ ηφ1 βt
1
1+
log m
t
p
n log5 m
C1
m
!
p
t
1
n log12 m
+O
(αt + βt ) + 2ηαt 1 +
C2
log m
m
p
t
5
o
n
(ii)
1
n log m
2
≤ 1 + η 1 − 3 xt 2 + ηφ2 βt 1 +
C1
log m
m
p
t+1
5
(iii)
n log m
1
C1
≤ βt+1 1 +
,
log m
m
η
p
n log3 m
m
for some |φ2 | log1 m , where the inequality (i) uses kxt k2 ≤ |xtk | + kxt⊥ k2 = αt + βt , the inequality (ii)
holds true as long as
p
p
t
n log3 m
n log5 m
1
1
(αt + βt )
βt 1 +
C1
,
(115a)
m
log m
log m
m
p
p
n log12 m
n log5 m
1
αt C2
βt C 1
.
(115b)
m
log m
m
Here, the first condition (115a) comes from the fact that for t < T0 ,
p
p
p
n log3 m
n log3 m
n log3 m
(αt + βt )
βt C 1 βt
,
m
m
m
as long as C1 > 0 is sufficiently large. The other one (115b) is valid owing to the assumption of Phase I
αt 1/ log5 m. Regarding the inequality (iii) above, it is easy to check that for some |φ3 | log1 m ,
n
1 + η 1 − 3 xt
where the second equality holds since
2
2
βt+1
βt
o
βt+1
+ ηφ2 βt =
+ ηφ3 βt
βt
βt+1
βt+1
+ ηO
φ3
βt
=
βt
βt
1
,
≤ βt+1 1 +
log m
(116)
1 in Phase I.
The proof is completed by applying the union bound over all 1 ≤ l ≤ m.
E Proof of Lemma 5
Use (109) once again to deduce
t+1,(l)
t+1
xt+1
− xk
= e>
− xt+1,(l)
1 x
k
Z 1
i >
η h > t,(l) 2
>
\ 2
t,(l)
2
= e1 In − η
∇ f (x (τ )) dτ xt − xt,(l) −
al x
− a>
e1 al a>
l x
l x
m
0
Z 1
i
η h > t,(l) 2
t,(l)
2
t
t,(l)
> \ 2
t,(l)
= xtk − xk − η
e>
∇
f
(x
(τ
))
dτ
x
−
x
−
a
x
−
a
x
al,1 a>
, (117)
1
l
l
l x
m
0
where we recall that x (τ ) := xt + τ xt,(l) − xt .
We begin by controlling the second term of (117). Applying similar arguments as in (114) yields
i
log2 m
1 h > t,(l) 2
\ 2
t,(l)
al x
− a>
al,1 a>
.
xt,(l)
l x
l x
m
m
with probability at least 1 − O m−10 .
2
Regarding the first term in (117), one can use the decomposition
t,(l)
t,(l)
t
t,(l)
t
a>
= ai,1 xtk − xk
+ a>
i x −x
i,⊥ x⊥ − x⊥
to obtain that
m
2
e>
1∇ f
t
t,(l)
(x (τ )) x − x
i
2
1 Xh
\ 2
t
t,(l)
3 a>
ai,1 a>
=
− a>
i x (τ )
i x
i x −x
m i=1
=
m
i 2
2
1 Xh
t,(l)
\ 2
ai,1 xtk − xk
3 a>
− a>
i x (τ )
i x
m i=1
|
{z
}
:=ω1 (τ )
m
2
i
1 Xh
t,(l)
\ 2
t
.
+
3 a>
− a>
ai,1 a>
i x (τ )
i x
i,⊥ x⊥ − x⊥
m i=1
{z
}
|
:=ω2 (τ )
In the sequel, we shall bound ω1 (τ ) and ω2 (τ ) separately.
• For ω1 (τ ), Lemma 14 together with the facts (110) tells us that
m
h
2
i 2
1 Xh
2
> \ 2
3 a>
x
(τ
)
−
a
x
a
−
3 kx (τ )k2 + 6 xk (τ )
i
i
i,1
m i=1
s
s
3
n
o
n log m
n log3 m
2
.
max kx (τ )k2 , 1 .
,
m
m
2
i
−3
which further implies that
2
ω1 (τ ) = 3 kx (τ )k2 + 6 xk (τ )
2
t,(l)
+ r1
− 3 xtk − xk
with the residual term r1 obeying
s
|r1 | = O
n log3 m t
t,(l)
xk − xk .
m
• We proceed to bound ω2 (τ ). Decompose w2 (τ ) into the following:
ω2 (τ ) =
m
m
2
3 X >
1 X > \ 2
t,(l)
t,(l)
t
t
ai x (τ ) ai,1 a>
a x ai,1 a>
−
.
i,⊥ x⊥ − x⊥
i,⊥ x⊥ − x⊥
m i=1
m i=1 i
|
{z
} |
{z
}
:=ω4
:=ω3 (τ )
\
– The term ω4 is relatively simple to control. Recognizing a>
i x
m
ω4 =
2
= a2i,1 and ai,1 = ξi |ai,1 |, one has
m
1 X
1 X
t,sgn,(l)
t,(l)
t,sgn,(l)
3
3
t,sgn
t
ξi |ai,1 | a>
− x⊥
+
ξi |ai,1 | a>
− xt,sgn
+ x⊥
.
i,⊥ x⊥
i,⊥ x⊥ − x⊥
⊥
m i=1
m i=1
3
t,sgn,(l)
t,sgn
In view of the independence between ξi and |ai,1 | a>
− x⊥
i,⊥ x⊥
Bernstein inequality (see Lemma 11) to obtain
, one can thus invoke the
m
1 X
1 p
t,sgn,(l)
3
t,sgn
ξi |ai,1 | a>
− x⊥
.
V1 log m + B1 log m
i,⊥ x⊥
m i=1
m
(118)
with probability at least 1 − O m−10 , where
V1 :=
m
X
t,sgn,(l)
6
t,sgn
− x⊥
|ai,1 | a>
i,⊥ x⊥
2
t,sgn,(l)
3
t,sgn
− x⊥
B1 := max |ai,1 | a>
i,⊥ x⊥
and
1≤i≤m
i=1
.
Regarding V1 , one can combine the fact (56) and Lemma 14 to reach
m
>
1
t,sgn,(l)
V1 . log2 m xt,sgn
− x⊥
⊥
m
t,sgn,(l)
− x⊥
. log2 m xt,sgn
⊥
1 X
2
|ai,1 | ai,⊥ a>
i,⊥
m i=1
!
t,sgn,(l)
xt,sgn
− x⊥
⊥
2
.
2
For B1 , it is easy to check from (56) and (57) that
q
t,sgn,(l)
B1 . n log3 m xt,sgn
− x⊥
⊥
.
2
The previous two bounds taken collectively yield
s
m
1 X
log3 m
t,sgn,(l)
t,sgn,(l)
3
t,sgn
xt,sgn
− x⊥
ξi |ai,1 | a>
x
−
x
.
i,⊥
⊥
⊥
⊥
m i=1
m
s
log3 m
t,sgn,(l)
.
xt,sgn
− x⊥
⊥
m
p
2
+
2
n log5 m
t,sgn,(l)
xt,sgn
− x⊥
⊥
m
,
(119)
2
as long as m & n log2 m. The second term in ω4 can be simply controlled by the Cauchy-Schwarz
inequality and Lemma 14. Specifically, we have
m
1 X
t,(l)
t,sgn,(l)
3
t,sgn
t
ξi |ai,1 | a>
x
−
x
−
x
+
x
i,⊥
⊥
⊥
⊥
⊥
m i=1
m
1 X
t,(l)
t,sgn,(l)
3
≤
ξi |ai,1 | a>
xt⊥ − x⊥ − xt,sgn
+ x⊥
i,⊥
⊥
m i=1
2
s
3
n log m
t,(l)
t,sgn,(l)
.
xt⊥ − x⊥ − xt,sgn
+ x⊥
,
⊥
m
2
2
(120)
where the second relation holds due to Lemma 14. Take the preceding two bounds (119) and (120)
collectively to conclude that
s
s
log3 m
n log3 m
t,sgn,(l)
t,(l)
t,sgn,(l)
|ω4 | .
xt,sgn
−
x
+
xt⊥ − x⊥ − xt,sgn
+ x⊥
⊥
⊥
⊥
m
m
2
2
s
s
log3 m
n log3 m
t,(l)
t,(l)
t,sgn,(l)
.
xt⊥ − x⊥
+
xt⊥ − x⊥ − xt,sgn
+ x⊥
,
⊥
m
m
2
2
where the second line follows from the triangle inequality
t,sgn,(l)
xt,sgn
− x⊥
⊥
and the fact that
q
log3 m
m
≤
t,(l)
2
q
≤ xt⊥ − x⊥
t,(l)
2
+ xt⊥ − x⊥
t,sgn,(l)
− xt,sgn
+ x⊥
⊥
n log3 m
.
m
– It remains to bound ω3 (τ ). To this end, one can decompose
ω3 (τ ) =
m
2
2 i
3 Xh >
t,(l)
t
ai,1 a>
ai x (τ ) − asgn>
x (τ )
i,⊥ x⊥ − x⊥
i
m i=1
|
{z
}
:=θ1 (τ )
2
2
m
2
2
3 X sgn>
t,(l)
sgn> sgn
t
x (τ )
ai,1 a>
x (τ ) − ai
+
ai
i,⊥ x⊥ − x⊥
m i=1
|
{z
}
:=θ2 (τ )
+
3
m
|
m
X
xsgn (τ )
asgn>
i
2
t,sgn,(l)
t,sgn
ai,1 a>
− x⊥
i,⊥ x⊥
i=1
{z
}
:=θ3 (τ )
m
2
3 X sgn> sgn
t,(l)
t,sgn,(l)
t
+
ai
x (τ ) ai,1 a>
− xt,sgn
+ x⊥
,
i,⊥ x⊥ − x⊥
⊥
m i=1
{z
}
|
:=θ4 (τ )
where we denote xsgn (τ ) = xt,sgn + τ xt,sgn,(l) − xt,sgn . A direct consequence of (61) and (62) is that
xsgn (τ ) .
asgn>
i
p
log m.
(121)
Recalling that ξi = sgn (ai,1 ) and ξisgn = sgn asgn
i,1 , one has
sgn>
a>
x (τ ) = (ξi − ξisgn ) |ai,1 | xk (τ ) ,
i x (τ ) − ai
sgn>
a>
x (τ ) = (ξi + ξisgn ) |ai,1 | xk (τ ) + 2a>
i x (τ ) + ai
i,⊥ x⊥ (τ ) ,
which implies that
2
2 sgn>
sgn>
sgn>
a>
− ai
x (τ ) = a>
x (τ ) · a>
x (τ )
i x (τ )
i x (τ ) − ai
i x (τ ) + ai
= (ξi − ξisgn ) |ai,1 | xk (τ ) (ξi + ξisgn ) |ai,1 | xk (τ ) + 2a>
i,⊥ x⊥ (τ )
= 2 (ξi − ξisgn ) |ai,1 | xk (τ ) a>
i,⊥ x⊥ (τ )
(122)
2
owing to the identity (ξi − ξisgn ) (ξi + ξisgn ) = ξi2 − (ξisgn ) = 0. In light of (122), we have
m
6 X
t,(l)
>
t
(ξi − ξisgn ) |ai,1 | xk (τ ) a>
i,⊥ x⊥ (τ ) ai,1 ai,⊥ x⊥ − x⊥
m i=1
"
#
m
1 X
t,(l)
2
sgn
>
>
= 6xk (τ ) · x⊥ (τ )
(1 − ξi ξi ) |ai,1 | ai,⊥ ai,⊥ xt⊥ − x⊥
.
m i=1
θ1 (τ ) =
First note that
m
m
1 X
1 X
2
2
(1 − ξi ξisgn ) |ai,1 | ai,⊥ a>
|ai,1 | ai,⊥ a>
i,⊥ ≤ 2
i,⊥ . 1,
m i=1
m i=1
(123)
where the last relation holds due to Lemma 14. This results in the following upper bound on θ1 (τ )
t,(l)
|θ1 (τ )| . xk (τ ) kx⊥ (τ )k2 xt⊥ − x⊥
2
. xk (τ )
t,(l)
xt⊥ − x⊥
,
2
where we have used the fact that kx⊥ (τ )k2 . 1 (see (110)). Regarding θ2 (τ ), one obtains
m
θ2 (τ ) =
ih
i
3 X h sgn>
t,(l)
t
ai
(x (τ ) − xsgn (τ )) asgn>
(x (τ ) + xsgn (τ )) ai,1 a>
.
i,⊥ x⊥ − x⊥
i
m i=1
Apply the Cauchy-Schwarz inequality to reach
v
v
u
m h
m
i2 h
i2 u
h
i2
u1 X
u1 X
2
sgn>
>
t − xt,(l)
sgn (τ ))
sgn (τ )) t
|θ2 (τ )| . t
(x
(τ
)
+
x
asgn>
(x
(τ
)
−
x
a
|a
|
a
x
i,1
i
⊥
⊥
i,⊥
m i=1 i
m i=1
v
u
m h
i2
u1 X
t,(l)
.t
(x (τ ) − xsgn (τ )) log m · xt⊥ − x⊥
asgn>
i
m i=1
p
t,(l)
.
. log m kx (τ ) − xsgn (τ )k2 xt⊥ − x⊥
2
2
Here the second relation comes from Lemma 14 and the fact that
p
(x (τ ) + xsgn (τ )) . log m.
asgn>
i
When it comes to θ3 (τ ), we need to exploit the independence between
2
t,sgn,(l)
t,sgn
xsgn (τ ) ai,1 a>
− x⊥
.
{ξi } and asgn>
i,⊥ x⊥
i
Similar to (118), one can obtain
|θ3 (τ )| .
1 p
V2 log m + B2 log m
m
with probability at least 1 − O m−10 , where
V2 :=
m
X
asgn>
xsgn (τ )
i
4
t,sgn,(l)
2
t,sgn
|ai,1 | a>
− x⊥
i,⊥ x⊥
2
i=1
B2 := max
1≤i≤m
2
t,sgn,(l)
t,sgn
sgn
>
asgn>
x
(τ
)
|a
|
a
.
x
−
x
i,1
i,⊥
i
⊥
⊥
It is easy to see from Lemma 14, (121), (56) and (57) that
t,sgn,(l)
V2 . m log2 m xt,sgn
− x⊥
⊥
2
B2 .
and
2
q
t,sgn,(l)
n log3 m xt,sgn
− x⊥
⊥
,
2
which implies
s
|θ3 (τ )| .
log3 m
+
m
p
n log5 m t,sgn
t,sgn,(l)
x⊥ − x⊥
m
s
2
log3 m
t,sgn,(l)
xt,sgn
− x⊥
⊥
m
2
with the proviso that m & n log2 m. We are left with θ4 (τ ). Invoking Cauchy-Schwarz inequality,
v
v
u
m
m
h
i2
4 u
u1 X
u1 X
2
t − xt,(l) − xt,sgn + xt,sgn,(l)
|θ4 (τ )| . t
aisgn> xsgn (τ ) t
|ai,1 | a>
x
i,⊥
⊥
⊥
⊥
⊥
m i=1
m i=1
v
u
m
2
u1 X
t,(l)
t,sgn,(l)
.t
aisgn> xsgn (τ ) log m · xt⊥ − x⊥ − xt,sgn
+ x⊥
⊥
m i=1
2
p
t,(l)
t,sgn,(l)
. log m xt⊥ − x⊥ − xt,sgn
+ x⊥
,
⊥
2
√
where we have used the fact that aisgn> xsgn (τ ) . log m. In summary, we have obtained
o
n
p
t,(l)
|ω3 (τ )| . xk (τ ) + log m kx (τ ) − xsgn (τ )k2 xt⊥ − x⊥
2
s
p
log3 m
t,sgn,(l)
t,(l)
t,sgn,(l)
+
xt,sgn
− x⊥
+ log m xt⊥ − x⊥ − xt,sgn
+ x⊥
⊥
⊥
m
2
s
3
p
log
m
t,(l)
.
xk (τ ) + log m kx (τ ) − xsgn (τ )k2 +
xt − x⊥
m ⊥
2
2
+
p
t,(l)
log m xt⊥ − x⊥
t,sgn,(l)
− xt,sgn
+ x⊥
⊥
,
2
where the last inequality utilizes the triangle inequality
t,sgn,(l)
xt,sgn
− x⊥
⊥
and the fact that
q
log3 m
m
≤
√
t,(l)
t,(l)
2
≤ xt⊥ − x⊥
2
+ xt⊥ − x⊥
t,sgn,(l)
− xt,sgn
+ x⊥
⊥
2
log m. This together with the bound for ω4 (τ ) gives
|ω2 (τ )| ≤ |ω3 (τ )| + |ω4 (τ )|
s
3
p
log
m
t,(l)
xt − x⊥
. xk (τ ) + log m kx (τ ) − xsgn (τ )k2 +
m ⊥
+
p
t,(l)
log m xt⊥ − x⊥
t,sgn,(l)
− xt,sgn
+ x⊥
⊥
2
,
2
as long as m n log2 m.
• Combine the bounds to arrive at
s
Z 1
3
n
log
m
2
t,(l)
t+1,(l)
2
t
xk (τ ) +
x
−
x
xt+1
−
x
=
1
+
3η
1
−
kx
(τ
)k
dτ
+
η
·
O
2
k
k
k
k
m
0
p
log2 m
t,(l)
t,sgn,(l)
t,(l)
+O η
x
+ O η log m xt⊥ − x⊥ − xt,sgn
+ x⊥
⊥
m
2
2
s
3
p
log
m
t,(l)
.
xk (τ ) + log m kx (τ ) − xsgn (τ )k2 +
+ O η sup
xt − x⊥
m ⊥
2
0≤τ ≤1
To simplify the above bound, notice that for the last term, for any t < T0 . log n and 0 ≤ τ ≤ 1, one has
t p
n log12 m
1
t,(l)
t
t
C2
. αt ,
xk (τ ) ≤ xk + xk − xk ≤ αt + αt 1 +
log m
m
p
as long as m n log12 m. Similarly, one can show that
p
p
log m kx (τ ) − xsgn (τ )k2 ≤ log m xt − xt,sgn 2 + xt − xt,(l) − xt,sgn + xt,sgn,(l)
2
s
p
5
9
p
n log m
n log m
. αt log m
+
. αt ,
m
m
with the proviso that m n log6 m. Therefore, we can further obtain
s
3
2
n
log
m
t+1,(l)
t 2
xtk − xt,(l)
xt − xt,(l) + xtk +
xt+1
−
x
≤
1
+
3η
1
−
x
+
η
·
O
k
k
k
2
m
2
p
log2 m
t,(l)
t,sgn,(l)
t
+O η
x 2 + O η log m xt⊥ − x⊥ − xt,sgn
+
x
⊥
⊥
m
2
t
t,(l)
+ O ηαt x − x
n
2
o
t,(l)
t 2
≤ 1 + 3η 1 − x 2 + ηφ1 xtk − xk
+ O ηαt xt − xt,(l)
2
2
p
log m
t,(l)
t,sgn,(l)
+O η
xt 2 + O η log m xt⊥ − x⊥ − xt,sgn
+ x⊥
⊥
m
2
for some |φ1 | log1 m . Here the last inequality comes from the sample complexity m n log5 m, the
assumption αt log15 m and the fact (63a). Given the inductive hypotheses (40), we can conclude
t p
o
n
1
n log12 m
t+1,(l)
t+1
t 2
≤ 1 + 3η 1 − x 2 + ηφ1 αt 1 +
xk − xk
C2
log m
m
!
p
t
2
p
n log9 m
η log m
1
C4
+O
(αt + βt ) + O η log m · αt 1 +
m
log m
m
!
t p
5
1
n log m
+ O ηαt βt 1 +
C1
log m
m
p
t
o
(i) n
n log12 m
1
t 2
C2
≤ 1 + 3η 1 − x 2 + ηφ2 αt 1 +
log m
m
p
t+1
12
(ii)
1
n log m
C2
≤ αt+1 1 +
log m
m
for some |φ2 |
1
log m .
Here, the inequality (i) holds true as long as
p
log2 m
1
n log12 m
(αt + βt )
αt C2
m
log m
m
p
p
9
p
n log m
1
n log12 m
log mC4
C2
m
log m
m
p
p
5
n log m
1
n log12 m
βt C1
C2
,
m
log m
m
(124a)
(124b)
(124c)
where the first condition (124a) is satisfied since (according to Lemma 1)
p
αt + βt . βt . αt n log m.
The second condition (124b) holds as long as C2 C4 . The third one (124c) holds trivially. Moreover,
the second inequality (ii) follows from the same reasoning as in (116). Specifically, we have for some
|φ3 | log1 m ,
n
o
αt+1
t 2
1 + 3η 1 − x 2 + ηφ2 αt =
+ ηφ3 αt
αt
αt+1
αt+1
+ ηO
φ3
αt
≤
αt
αt
1
,
≤ αt+1 1 +
log m
as long as
αt+1
αt
1.
The proof is completed by applying the union bound over all 1 ≤ l ≤ m.
F Proof of Lemma 6
By similar calculations as in (109), we get the identity
$$x^{t+1} - x^{t+1,\mathrm{sgn}} = \Big(I - \eta\int_0^1 \nabla^2 f\big(\tilde{x}(\tau)\big)\,d\tau\Big)\big(x^t - x^{t,\mathrm{sgn}}\big) + \eta\big[\nabla f^{\mathrm{sgn}}\big(x^{t,\mathrm{sgn}}\big) - \nabla f\big(x^{t,\mathrm{sgn}}\big)\big], \tag{125}$$
where $\tilde{x}(\tau) := x^t + \tau\big(x^{t,\mathrm{sgn}} - x^t\big)$. The first term satisfies
$$\Big\|\Big(I - \eta\int_0^1 \nabla^2 f\big(\tilde{x}(\tau)\big)\,d\tau\Big)\big(x^t - x^{t,\mathrm{sgn}}\big)\Big\|_2 \le \Big\|I - \eta\int_0^1 \nabla^2 f\big(\tilde{x}(\tau)\big)\,d\tau\Big\|\,\big\|x^t - x^{t,\mathrm{sgn}}\big\|_2 \le \bigg[1 + 3\eta\Big(1 - \int_0^1\big\|\tilde{x}(\tau)\big\|_2^2\,d\tau\Big) + O\Big(\eta\sqrt{\frac{n\log^3 m}{m}}\Big)\bigg]\big\|x^t - x^{t,\mathrm{sgn}}\big\|_2, \tag{126}$$
where we have invoked Lemma 15. Furthermore, one has for all $0 \le \tau \le 1$
2
kx̃ (τ )k2 ≥ xt
2
2
2
2
2
xt 2
2
− kx̃ (τ )k2 − xt
≥ xt
− x̃ (τ ) − xt
≥
− xt − xt,sgn
2
2
2
kx̃ (τ )k2 + xt
2
2
kx̃ (τ )k2 + xt
2
.
This combined with the norm conditions kxt k2 . 1, kx̃ (τ )k2 . 1 reveals that
2
2
min kx̃ (τ )k2 ≥ xt 2 + O xt − xt,sgn 2 ,
0≤τ ≤1
and hence we can further upper bound (126) as
Z 1
2
I −η
∇ f (x̃ (τ )) dτ xt − xt,sgn
0
s
3
n
log
m
2
≤ 1 + 3η 1 − xt 2 + η · O xt − xt,sgn 2 +
xt − xt,sgn
m
n
o
2
≤ 1 + 3η 1 − xt 2 + ηφ1 xt − xt,sgn 2 ,
2
for some |φ1 | log1 m , where the last line follows from m n log5 m and the fact (63b).
The remainder
of this subsection is largely devoted to controlling the gradient difference ∇f sgn xt,sgn −
∇f xt,sgn in (125). By the definition of f sgn (·), one has
∇f sgn xt,sgn − ∇f xt,sgn
m
o
1 X n sgn> t,sgn 3 sgn
sgn> t,sgn sgn
\ 2
> t,sgn 3
> \ 2
> t,sgn
=
ai
x
ai − asgn>
x
a
x
a
−
a
x
a
+
a
x
a
x
ai
i
i
i
i
i
i
i
m i=1
m
m
o
1 X 2 sgn sgn>
1 X n sgn> t,sgn 3 sgn
> t,sgn 3
ai
x
ai − ai x
ai −
=
ai,1 ai ai
− ai a>
xt,sgn .
i
m i=1
m i=1
|
{z
} |
{z
}
:=r1
:=r2
2
\ 2
Here, the last identity holds because of a>
= asgn>
x\ = a2i,1 (see (37)).
i x
i
sgn
sgn
• We begin with the second term r2 . By construction, one has asgn
|ai,1 | and ai,1 =
i,⊥ = ai,⊥ , ai,1 = ξi
ξ1 |ai,1 |. These taken together yield
0
a>
sgn sgn>
sgn
>
i,⊥
ai ai
− ai ai = (ξi − ξi ) |ai,1 |
,
(127)
ai,⊥
0
and hence r2 can be rewritten as
"
r2 =
Pm
3
sgn
t,sgn
1
− ξi ) |ai,1 | a>
i
i,⊥ x⊥
i=1 (ξ
m
P
3
m
sgn
1
− ξi ) |ai,1 | ai,⊥
xt,sgn
·m
i=1 (ξi
k
#
.
(128)
For the first entry of r2 , the triangle inequality gives
m
m
m
1 X sgn
1 X
1 X
3
3
3
t,sgn
t,sgn
t
(ξi − ξi ) |ai,1 | a>
≤
|ai,1 | ξi a>
+
|ai,1 | ξisgn a>
i,⊥ x⊥
i,⊥ x⊥
i,⊥ x⊥
m i=1
m i=1
m i=1
|
{z
} |
{z
}
:=φ1
:=φ2
m
+
1 X
3
t,sgn
|ai,1 | ξisgn a>
− xt⊥ .
i,⊥ x⊥
m i=1
|
{z
}
:=φ3
3
t,sgn
Regarding φ1 , we make use of the independence between ξi and |ai,1 | a>
i,⊥ x⊥ and invoke the Bernstein
inequality (see Lemma 11) to reach that with probability at least 1 − O m−10 ,
1 p
V1 log m + B1 log m ,
m
φ1 .
where V1 and B1 are defined to be
V1 :=
m
X
2
6
t,sgn
|ai,1 | a>
i,⊥ x⊥
and
B1 := max
1≤i≤m
i=1
n
o
3
t,sgn
|ai,1 | a>
.
i,⊥ x⊥
It is easy to see from Lemma 12 and the incoherence condition (62d) that with probability exceeding
2
, which implies
and B1 . log2 m xt,sgn
1 − O m−10 , V1 . m xt,sgn
⊥
⊥
2
2
!
r
r
log m log3 m
log m
t,sgn
+
x⊥
xt,sgn
,
φ1 .
⊥
2
2
m
m
m
as long as m log5 m. Similarly, one can obtain
r
φ2 .
log m
xt⊥
m
2
.
The last term φ3 can be bounded through the Cauchy-Schwarz inequality. Specifically, one has
s
m
X
1
n log3 m
3
t
|ai,1 | ξisgn ai,⊥
xt,sgn
− xt⊥ 2 ,
xt,sgn
−
x
.
φ3 ≤
⊥ 2
⊥
⊥
m i=1
m
2
where the second relation arises from Lemma 14. The previous three bounds taken collectively yield
s
r
m
1 X sgn
log
m
n log3 m
3
t,sgn
t,sgn
t
x
+
+
x
xt,sgn
− xt⊥ 2
(ξi − ξi ) |ai,1 | a>
x
.
⊥ 2
i,⊥ ⊥
⊥
⊥
2
m i=1
m
m
s
r
log m
n log3 m
t
.
x⊥ 2 +
xt,sgn
− xt⊥ 2 .
(129)
⊥
m
m
Here the second inequality results from the triangle inequality xt,sgn
≤ kxt⊥ k2 + xt,sgn
− xt⊥ 2 and
⊥
⊥
2
q
q
3m
the fact that logmm ≤ n log
. In addition, for the second through the nth entries of r2 , one can again
m
invoke Lemma 14 to obtain
m
1 X
3
|ai,1 | (ξisgn − ξi ) ai,⊥
m i=1
m
2
m
1 X
3
≤
|ai,1 | ξisgn ai,⊥
m i=1
s
n log3 m
.
.
m
This combined with (128) and (129) yields
s
r
log m
n log3 m
kr2 k2 .
xt⊥ 2 +
xt,sgn
− xt⊥
⊥
m
m
2
1 X
3
+
|ai,1 | ξi ai,⊥
m i=1
2
(130)
s
2
+ xt,sgn
k
n log3 m
.
m
• Moving on to the term r1 , we can also decompose
o
Pm n sgn> t,sgn 3 sgn
1
> t,sgn 3
−
a
x
a
x
a
a
i,1
i
i,1
i
i=1
m
o .
r1 =
Pm n sgn> t,sgn 3 sgn
1
> t,sgn 3
a
x
a
−
a
ai,⊥
i x
i
i=1
i,⊥
m
For the second through the nth entries, we see that
m
m n
o (i) 1 X
3
o
1 X n sgn> t,sgn 3 sgn
> t,sgn 3
t,sgn 3
ai,⊥ − ai x
ai,⊥ =
ai,⊥
x
xt,sgn − a>
ai
asgn>
i x
i
m i=1
m i=1
m
2
(ii) 1 X
sgn> t,sgn
sgn> t,sgn
t,sgn
sgn
> t,sgn 2
> t,sgn
x
+ ai x
x
ai x
ai
+ ai
ai,⊥
(ξi − ξi ) |ai,1 | xk
=
m i=1
m
2
X
xt,sgn
k
sgn
sgn> t,sgn
sgn> t,sgn
> t,sgn 2
> t,sgn
=
(ξi − ξi ) |ai,1 | ai
x
+ ai x
+ ai
x
ai x
ai,⊥ ,
m i=1
3
3
2
2
where (i) follows from asgn
i,⊥ = ai,⊥ and (ii) relies on the elementary identity a −b = (a − b) a + b + ab .
Pm sgn> t,sgn 2 sgn
Pm sgn> t,sgn 2 sgn sgn>
1
1
a
x
x
Treating m
a
ai ai
,
a
as
the
first
column
(except
its
first
entry)
of
i
i,1 i,⊥
i=1
i=1 ai
m
by Lemma 14 and the incoherence condition (62e), we have
m
m
2
1 X sgn
1 X sgn> t,sgn 2 sgn
ξi |ai,1 | asgn>
xt,sgn ai,⊥ =
a
x
ai,1 ai,⊥ = 2xt,sgn
xt,sgn
+ v1 ,
i
⊥
k
m i=1
m i=1 i
where kv1 k2 .
q
n log3 m
.
m
Similarly,
m
−
where kv2 k2 .
q
n log3 m
.
m
1 X
t,sgn 2
ξi |ai,1 | a>
ai,⊥ = −2xt,sgn
xt,sgn
+ v2 ,
i x
⊥
k
m i=1
Moreover, we have
m
1 X sgn
t,sgn 2
ξi |ai,1 | a>
ai,⊥
i x
m i=1
m
m
2
1 X sgn
1 X sgn
=
ξi |ai,1 | asgn>
ξ |ai,1 |
xt,sgn ai,⊥ +
i
m i=1
m i=1 i
t,sgn 2
a>
i x
−
asgn>
xt,sgn
i
2
ai,⊥
= 2xt,sgn
xt,sgn
+ v1 + v3 ,
⊥
k
where v3 is defined as
m
1 X sgn
ξ |ai,1 |
v3 =
m i=1 i
t,sgn 2
a>
i x
−
asgn>
xt,sgn
i
2
ai,⊥
m
= 2xt,sgn
k
=
1 X
2
t,sgn sgn
(ξi − ξisgn ) a>
ξi |ai,1 | ai,⊥
i,⊥ x⊥
m i=1
1
2xt,sgn
k
m
m
X
2
t,sgn
(ξi ξisgn − 1) |ai,1 | ai,⊥ a>
.
i,⊥ x⊥
i=1
Here the second equality comes from the identity (122). Similarly one can get
m
2
1 X
−
ξi |ai,1 | asgn>
xt,sgn ai,⊥ = −2xt,sgn
xt,sgn
− v2 − v4 ,
i
⊥
k
m i=1
(131)
where v4 obeys
v4 =
m
2
1 X
t,sgn
> t,sgn 2
ξi |ai,1 | asgn>
x
−
a
x
ai,⊥
i
i
m i=1
m
= 2xt,sgn
k
It remains to bound
1
m
Pm
i=1
1 X
2
t,sgn
.
(ξi ξisgn − 1) |ai,1 | ai,⊥ a>
i,⊥ x⊥
m i=1
xt,sgn
(ξisgn − ξi ) |ai,1 | asgn>
i
t,sgn
ai,⊥ . To this end, we have
a>
i x
m
1 X sgn
t,sgn
xt,sgn a>
ai,⊥
ξ |ai,1 | asgn>
i x
i
m i=1 i
m
=
m
2
h
sgn> t,sgn i
1 X sgn
1 X sgn
sgn> t,sgn
t,sgn
> t,sgn
ai,⊥
x
a
+
|a
|
a
x
a
x
− ai
x
ξi |ai,1 | asgn>
ξ
i,⊥
i,1
i
i
i
m i=1
m i=1 i
= 2xt,sgn
xt,sgn
+ v1 + v5 ,
⊥
k
where
m
v5 = xt,sgn
k
1 X
2
(ξi ξisgn − 1) |ai,1 | ai,⊥ asgn>
xt,sgn .
i
m i=1
The same argument yields
m
−
1 X
t,sgn
ξi |ai,1 | asgn>
xt,sgn a>
ai,⊥ = −2xt,sgn
xt,sgn
− v2 − v6 ,
i x
i
⊥
k
m i=1
where
m
v6 = xt,sgn
k
1 X
2
(ξi ξisgn − 1) |ai,1 | ai,⊥ asgn>
xt,sgn .
i
m i=1
Combining all of the previous bounds and recognizing that v3 = v4 and v5 = v6 , we arrive at
s
m
o
1 X n sgn> t,sgn 3 sgn
n log3 m t,sgn
t,sgn 3
ai
xk
.
x
ai,⊥ − a>
x
a
.
kv
k
+
kv
k
.
i,⊥
1
2
2
2
i
m i=1
m
2
Regarding the first entry of r1 , one has
m
o
1 X n sgn> t,sgn 3 sgn
t,sgn 3
ai
x
ai,1 − a>
ai,1
i x
m i=1
m
3
3
1 X sgn
t,sgn
sgn
t,sgn
t,sgn
>
>
=
ξi |ai,1 | xt,sgn
+
a
x
ξ
|a
|
−
ξ
|a
|
x
+
a
x
ξ
|a
|
i,1
i i,1
i i,1
i,⊥ ⊥
i,⊥ ⊥
i
k
k
m i=1
m
2
1 X sgn
2
t,sgn
t,sgn
t,sgn 3
>
>
=
(ξ − ξi ) |ai,1 | 3 |ai,1 | xk
ai,⊥ x⊥ + ai,⊥ x⊥
.
m i=1 i
t,sgn 3
In view of the independence between ξi and |ai,1 | a>
, from the Bernstein’s inequality (see Lemma
i,⊥ x⊥
11), we have that
m
1 X
1 p
t,sgn 3
ξi |ai,1 | a>
V2 log m + B2 log m
.
i,⊥ x⊥
m i=1
m
holds with probability exceeding 1 − O m−10 , where
V2 :=
m
X
|ai,1 |
2
t,sgn
a>
i,⊥ x⊥
6
and
i=1
3
t,sgn
B2 := max |ai,1 | a>
.
i,⊥ x⊥
1≤i≤m
It is straightforward to check that V2 . m xt,sgn
⊥
m
1 X
t,sgn 3
.
ξi |ai,1 | a>
i,⊥ x⊥
m i=1
r
6
2
3
and B2 . log2 m xt,sgn
⊥
log m
xt,sgn
⊥
m
log3 m
+
xt,sgn
⊥
2
m
3
3
2
2
, which further implies
r
log m
xt,sgn
⊥
m
3
2
,
as long as m log5 m. For the term involving ξisgn , we have
m
m
m
h
i
1 X sgn
1 X sgn
1 X sgn
t,sgn 3
t,sgn 3
>
t 3
t 3
+
.
ξi |ai,1 | a>
x
=
ξ
|a
|
a
x
ξi |ai,1 | a>
− a>
i,1
i,⊥ ⊥
i,⊥ ⊥
i,⊥ x⊥
i,⊥ x⊥
i
m i=1
m i=1
m i=1
|
{z
} |
{z
}
:=θ1
:=θ2
Similarly one can obtain
r
|θ1 | .
log m
xt⊥
m
3
2
.
Expand θ2 using the elementary identity a3 − b3 = (a − b) a2 + ab + b2 to get
m
θ2 =
h
> t,sgn i
1 X sgn
t,sgn
t,sgn 2
t
t 2
t
ξi |ai,1 | a>
a>
+ a>
+ a>
ai,⊥ x⊥
i,⊥ x⊥ − x⊥
i,⊥ x⊥
i,⊥ x⊥
i,⊥ x⊥
m i=1
m
=
1 X > t 2 sgn
t,sgn
t
ai,⊥ x⊥ ξi |ai,1 | a>
i,⊥ x⊥ − x⊥
m i=1
m
+
1 X > t,sgn 2 sgn
t,sgn
t
ai,⊥ x⊥
ξi |ai,1 | a>
i,⊥ x⊥ − x⊥
m i=1
+
1 X > t >
t,sgn
t
a x a
xt,sgn − xt⊥ ξisgn |ai,1 | a>
.
i,⊥ x⊥ − x⊥
m i=1 i,⊥ ⊥ i,⊥ ⊥
m
Once more, we can apply Lemma 14 with the incoherence conditions (62b) and (62d) to obtain
s
m
n log3 m
1 X > t 2 sgn
ai,⊥ x⊥ ξi |ai,1 | a>
;
.
i,⊥
m i=1
m
2
s
m
1 X > t,sgn 2 sgn
n log3 m
ai,⊥ x⊥
ξi |ai,1 | a>
.
.
i,⊥
m i=1
m
2
In addition, one can use the Cauchy-Schwarz inequality to deduce that
m
1 X > t >
t,sgn
t
ai,⊥ x⊥ ai,⊥ xt,sgn
− xt⊥ ξisgn |ai,1 | a>
i,⊥ x⊥ − x⊥
⊥
m i=1
v
v
u
m
m
2 h
i2 u
h
i2
u1 X
u1 X
2
t,sgn
t,sgn
t
>
t
t
t
a>
x
a
x
−
x
|ai,1 | a>
≤t
i,⊥ ⊥
i,⊥
⊥
⊥
i,⊥ x⊥ − x⊥
m i=1
m i=1
v
v
u
u
m
m
2
u 1 X
1 X
2u
2
t,sgn
t
>
t
>
t
≤
ai,⊥ x⊥ ai,⊥ ai,⊥ x⊥ − x⊥ 2 t
|ai,1 | ai,⊥ a>
xt,sgn
− xt⊥
i,⊥
⊥
m i=1
m i=1
. xt⊥ − xsgn
⊥
2
2
,
where the last inequality comes from Lemma 14. Combine the preceding bounds to reach
s
n log3 m
2
xt⊥ − xsgn
+ xt⊥ − xsgn
.
|θ2 | .
⊥
⊥
2
2
m
2
2
Applying the similar arguments as above we get
m
2
3 X sgn
3
t,sgn
ξi − ξi |ai,1 | a>
i,⊥ x⊥
m i=1
r
s
r
2
log
m
log
m
n log3 m
t,sgn
t
. xt,sgn
+
x
x
+
xt⊥ − xt,sgn
⊥
⊥
⊥
k
2
2
m
m
m
r
s
3
2
log
m
n
log
m
,
xt⊥ 2 +
xt⊥ − xt,sgn
. xt,sgn
⊥
k
2
m
m
xt,sgn
k
where the last line follows from the triangle inequality xt,sgn
≤ kxt⊥ k2 + xt⊥ − xt,sgn
⊥
⊥
2
q
q
log m
n log3 m
that
. Putting the above results together yields
m ≤
m
s
kr1 k2 .
3
2
2
and the fact
s
r
log m
n log3 m
+ xt⊥ 2 +
xt,sgn
xt⊥ − xsgn
⊥
⊥
2
m
m
r
s
2
log m
n log3 m
2
t,sgn
t
,
+
x
x
xt⊥ − xsgn
+
⊥ 2
⊥
k
2
2
m
m
n log m t,sgn
+
xk
m
+ xt⊥ − xsgn
⊥
which can be further simplified to
s
r
n log3 m t
log m
kr1 k2 .
xk +
xt⊥
m
m
s
2
+
n log3 m
xt − xsgn
m
• Combine all of the above estimates to reach
Z 1
t+1
t+1,sgn
2
I −η
x
−x
≤
∇ f (x̃(τ )) dτ xt − xt,sgn
2
0
n
≤ 1 + 3η 1 − xt
2
2
+ ηφ2
o
xt − xt,sgn
2
+ xt − xsgn
2
2
2
.
+ η ∇f sgn xt,sgn − ∇f xt,sgn 2
2
s
!
r
log m
n log3 m t
xt⊥ 2 + η
xk
+O η
2
m
m
for some |φ2 | log1 m . Here the second inequality follows from the fact (63b). Substitute the induction
hypotheses into this bound to reach
s
t
n
o
1
n log5 m
t+1
t+1,sgn
t 2
x
−x
≤ 1 + 3η 1 − x 2 + ηφ2 αt 1 +
C3
2
log m
m
s
r
log m
n log3 m
+η
βt + η
αt
m
m
s
t
o
(i) n
1
n log5 m
2
≤ 1 + 3η 1 − xt 2 + ηφ3 αt 1 +
C3
log m
m
s
t+1
(ii)
1
n log5 m
C3
,
≤ αt+1 1 +
log m
m
for some |φ3 |
1
log m ,
where (ii) follows the same reasoning as in (116) and (i) holds as long as
r
log m
1
1
βt
αt 1 +
m
log m
log m
s
t
C3
n log5 m
,
m
(132a)
s
t
n log3 m
1
1
C3
αt
αt 1 +
m
log m
log m
s
n log5 m
.
m
(132b)
Here the first condition (132a) results from (see Lemma 1)
p
βt . n log m · αt ,
and the second one is trivially true with the proviso that C3 > 0 is sufficiently large.
G Proof of Lemma 7
Consider any l (1 ≤ l ≤ m). According to the gradient update rules (3), (29), (30) and (31), we have
xt+1 − xt+1,(l) − xt+1,sgn + xt+1,sgn,(l)
i
h
= xt − xt,(l) − xt,sgn + xt,sgn,(l) − η ∇f xt − ∇f (l) xt,(l) − ∇f sgn xt,sgn + ∇f sgn,(l) xt,sgn,(l) .
It then boils down to controlling the gradient difference, i.e. ∇f (xt ) − ∇f (l) xt,(l) − ∇f sgn (xt,sgn ) +
∇f sgn,(l) xt,sgn,(l) . To this end, we first see that
∇f xt − ∇f (l) xt,(l) = ∇f xt − ∇f xt,(l) + ∇f xt,(l) − ∇f (l) xt,(l)
Z 1
1 > t,(l) 2
> \ 2
t,(l)
2
t
t,(l)
al x
− al x
=
∇ f (x (τ )) dτ
x −x
+
al a>
,
l x
m
0
(133)
where we denote x(τ ) := xt + τ xt,(l) − xt and the last identity results from the fundamental theorem of
calculus [Lan93, Chapter XIII, Theorem 4.2]. Similar calculations yield
∇f sgn xt,sgn − ∇f sgn,(l) xt,sgn,(l)
Z 1
1 sgn> t,sgn,(l) 2 sgn> \ 2 sgn sgn> t,sgn,(l)
al
x
=
∇2 f sgn (x̃ (τ )) dτ
xt,sgn − xt,sgn(l) +
− al
x
al al
x
m
0
(134)
with x̃(τ ) := xt,sgn + τ xt,sgn,(l) − xt,sgn . Combine (133) and (134) to arrive at
∇f xt − ∇f (l) xt,(l) − ∇f sgn xt,sgn + ∇f sgn,(l) xt,sgn,(l)
Z 1
Z 1
2
t
t,(l)
2 sgn
=
∇ f x(τ ) dτ x − x
−
∇ f
x̃(τ ) dτ xt,sgn − xt,sgn,(l)
0
{z
}
| 0
:=v1
1 > t,(l) 2
1 sgn> t,sgn,(l) 2 sgn> \ 2 sgn sgn> t,sgn,(l)
\ 2
> t,(l)
+
al x
− a>
x
a
a
x
−
a
x
−
a
x
al al
x
.
l l
l
l
l
m
m
{z
}
|
:=v2
(135)
In what follows, we shall control v1 and v2 separately.
\
• We start with the simpler term v2 . In light of the fact that a>
l x
one can decompose v2 as
h
2 i
t,(l) 2
t,(l)
mv2 = a>
− asgn>
xt,sgn,(l)
al a>
l x
l x
l
|
{z
}
:=θ1
58
2
= asgn>
x\
l
2
= al,1
2
(see (37)),
+
h
asgn>
xt,sgn,(l)
l
2
− |al,1 |
2
i
sgn> t,sgn,(l)
t,(l)
.
al a>
− asgn
x
l x
l al
{z
}
|
:=θ2
First, it is easy to see from (56) and the independence between asgn
and xt,sgn,(l) that
l
asgn>
xt,sgn,(l)
l
2
2
≤ asgn>
xt,sgn,(l)
l
− |al,1 |
2
. log m · xt,sgn,(l)
+ |al,1 |
2
2
2
+ log m . log m
(136)
with probability at least 1−O m−10 , where the last inequality results from the norm condition xt,sgn,(l)
1 (see (61c)). Regarding the term θ2 , one has
sgn sgn>
sgn>
θ2 = al a>
−
a
a
xt,(l) + asgn
xt,(l) − xt,sgn,(l) ,
l
l
l
l al
2
which together with the identity (127) gives
"
#
t,(l)
a>
x
sgn
sgn>
l,⊥ ⊥
θ2 = (ξl − ξl ) |al,1 |
+ asgn
xt,(l) − xt,sgn,(l) .
t,(l)
l al
xk al,⊥
In view of the independence between al and xt,(l) , and between asgn
and xt,(l) − xt,sgn,(l) , one can again
l
apply standard Gaussian concentration results to obtain that
p
p
t,(l)
t,(l)
. log m x⊥
a>
and
asgn>
xt,(l) − xt,sgn,(l) . log m xt,(l) − xt,sgn,(l)
l,⊥ x⊥
l
2
2
with probability exceeding 1 − O m−10 . Combining these two with the facts (56) and (57) leads to
t,(l)
t,(l)
sgn>
kal,⊥ k2 + kasgn
xt,(l) − xt,sgn,(l)
+ xk
kθ2 k2 ≤ |ξl − ξlsgn | |al,1 | a>
l,⊥ x⊥
l k2 al
p
p
p
√
t,(l)
t,(l)
. log m
+ n log m xt,(l) − xt,sgn,(l)
log m x⊥
+ n xk
2
2
p
t,(l)
t,(l)
t,(l)
t,sgn,(l)
+ x
−x
.
(137)
. log m x⊥
+ n log m xk
2
2
We now move on to controlling θ1 . Use the elementary identity a2 − b2 = (a − b) (a + b) to get
sgn> t,sgn,(l)
t,(l)
t,sgn,(l)
> t,(l)
t,(l)
θ1 = a>
− asgn>
x
a
x
+
a
x
al a>
.
l x
l
l x
l
l
(138)
The constructions of asgn
requires that
l
t,(l)
t,(l)
a>
− asgn>
xt,sgn,(l) = ξl |al,1 | xk
l x
l
t,sgn,(l)
− ξlsgn |al,1 | xk
t,(l)
Similarly, in view of the independence between
al,⊥ and x⊥
that with probability at least 1 − O m−10
t,(l)
t,sgn,(l)
t,(l)
+ a>
− x⊥
l,⊥ x⊥
t,sgn,(l)
− x⊥
.
, and the fact (56), one can see
t,sgn,(l)
t,sgn,(l)
t,(l)
t,(l)
a>
− asgn>
xt,sgn,(l) ≤ |ξl | |al,1 | xk
+ |ξlsgn | |al,1 | xk
+ a>
− x⊥
l x
l,⊥ x⊥
l
p
t,(l)
t,sgn,(l)
t,(l)
t,sgn,(l)
+ xk
+ x⊥ − x⊥
. log m xk
2
p
t,(l)
t,(l)
t,sgn,(l)
+ x
. log m xk
−x
,
(139)
2
t,sgn,(l)
where the last inequality results from the triangle inequality xk
Substituting (139) into (138) results in
t,(l)
≤ xk
+ xt,(l) − xt,sgn,(l)
t,(l)
t,(l)
t,(l)
− asgn>
+ asgn>
xt,sgn,(l) kal k2 a>
kθ1 k2 = a>
xt,sgn,(l) a>
l x
l x
l x
l
l
59
2
.
.
p
p
√ p
t,(l)
+ xt,(l) − xt,sgn,(l) 2 · log m · n · log m
log m xk
q
t,(l)
+ xt,(l) − xt,sgn,(l) 2 ,
n log3 m xk
.
(140)
where the second line comes from the simple facts (57),
t,(l)
a>
+ asgn>
xt,sgn,(l) ≤
l x
l
p
log m
t,(l)
.
a>
l x
and
p
log m.
Taking the bounds (136), (137) and (140) collectively, we can conclude that
2
1
2
t,sgn,(l)
x
kθ
k
kv2 k2 ≤
−
|a
|
kθ1 k2 + asgn>
2 2
l,1
l
m
p
log2 m
n log3 m t,(l)
t,(l)
+ xt,(l) − xt,sgn,(l)
x⊥
+
xk
.
m
m
2
2
.
• To bound v1 , one first observes that
∇2 f (x (τ )) xt − xt,(l) − ∇2 f sgn (x̃ (τ )) xt,sgn − xt,sgn,(l)
= ∇2 f (x (τ )) xt − xt,(l) − xt,sgn + xt,sgn,(l) + ∇2 f (x (τ )) − ∇2 f (x̃ (τ )) xt,sgn − xt,sgn,(l)
{z
}
|
{z
} |
:=w2 (τ )
:=w1 (τ )
+ ∇2 f (x̃ (τ )) − ∇2 f sgn (x̃ (τ )) xt,sgn − xt,sgn,(l) .
{z
}
|
:=w3 (τ )
– The first term w1 (τ ) satisfies
t
x −x
t,(l)
t,sgn
−x
+x
t,sgn,(l)
Z
−η
1
w1 (τ )dτ
0
=
Z
I −η
Z
1
∇2 f (x (τ )) dτ
2
xt − xt,(l) − xt,sgn + xt,sgn,(l)
0
1
2
t
2
t,(l)
t,sgn
t,sgn,(l)
−x
+x
∇ f (x (τ )) dτ · x − x
≤ I −η
2
0
1
2
≤ 1 + 3η 1 − xt 2 + O η
+ ηφ1
xt − xt,(l) − xt,sgn + xt,sgn,(l)
log m
for some |φ1 |
1
log m ,
,
2
where the last line follows from the same argument as in (112).
– Regarding the second term w2 (τ ), it is seen that
∇2 f (x (τ )) − ∇2 f (x̃ (τ )) =
m
2
2 i
3 Xh >
ai x(τ ) − a>
x̃(τ
)
ai a>
i
i
m i=1
m
≤ max
1≤i≤m
2
2
a>
− a>
i x(τ )
i x̃(τ )
3 X
ai a>
i
m i=1
m
>
≤ max a>
i (x (τ ) − x̃ (τ )) max ai (x (τ ) + x̃ (τ ))
1≤i≤m
1≤i≤m
. max a>
i (x (τ ) − x̃ (τ ))
1≤i≤m
p
3 X
ai a>
i
m i=1
log m,
where the last line makes use of Lemma 13 as well as the incoherence conditions
p
>
>
max a>
log m.
i (x (τ ) + x̃ (τ )) ≤ max ai x (τ ) + max ai x̃ (τ ) .
1≤i≤m
1≤i≤m
60
1≤i≤m
(141)
(142)
Note that
h
i
x(τ ) − x̃(τ ) = xt + τ xt,(l) − xt − xt,sgn + τ xt,sgn,(l) − xt,sgn
= (1 − τ ) xt − xt,sgn + τ xt,(l) − xt,sgn,(l) .
This implies for all 0 ≤ τ ≤ 1,
t,(l)
t
t,sgn
− xt,sgn,(l) .
+ a>
≤ a>
a>
i x −x
i x(τ ) − x̃(τ )
i x
Moreover, the triangle inequality together with the Cauchy-Schwarz inequality tells us that
t
t,sgn
t
t,sgn
t,(l)
− xt,(l) + xt,sgn,(l)
+ a>
a>
− xt,sgn,(l) ≤ a>
i x −x
i x −x
i x
t
t,sgn
+ kai k2 xt − xt,sgn − xt,(l) + xt,sgn,(l)
≤ a>
i x −x
2
and
t
t,sgn
a>
i x −x
t,(i)
− xt,sgn,(i)
≤ a>
i x
+ a>
xt − xt,sgn − xt,(i) + xt,sgn,(i)
i
t,(i)
≤ a>
− xt,sgn,(i)
i x
+ kai k2 xt − xt,sgn − xt,(i) + xt,sgn,(i)
2
.
Combine the previous three inequalities to obtain
>
t
t,sgn
max a>
i (x (τ ) − x̃ (τ )) ≤ max ai x − x
1≤i≤m
1≤i≤m
a>
i
t,(i)
t,sgn,(i)
t,(l)
+ max a>
− xt,sgn,(l)
i x
1≤i≤m
x
−x
+ 3 max kai k2 max xt − xt,sgn − xt,(l) + xt,sgn,(l)
≤ 2 max
1≤i≤m
1≤i≤m
1≤l≤m
p
√
t,(i)
t,sgn,(i)
−x
+ n max xt − xt,sgn − xt,(l) + xt,sgn,(l) 2 ,
. log m max x
2
1≤i≤m
2
1≤l≤m
where the last inequality follows from the independence between ai and xt,(i) − xt,sgn,(i) and the fact
(57). Substituting the above bound into (141) results in
∇2 f (x (τ )) − ∇2 f (x̃ (τ ))
p
xt,(i) − xt,sgn,(i) + n log m max xt − xt,sgn − xt,(l) + xt,sgn,(l)
1≤i≤m
1≤l≤m
2
p
t
t,sgn
t
. log m x − x
+ n log m max x − xt,sgn − xt,(l) + xt,sgn,(l) 2 .
2
. log m max
2
1≤l≤m
Here, we use the triangle inequality
≤ xt − xt,sgn
xt,(i) − xt,sgn,(i)
2
and the fact log m ≤
√
2
+ xt − xt,sgn − xt,(i) + xt,sgn,(i)
2
n log m. Consequently, we have the following bound for w2 (τ ):
kw2 (τ )k2 ≤ ∇2 f (x (τ )) − ∇2 f (x̃ (τ )) · xt,sgn − xt,sgn,(l) 2
p
. log m xt − xt,sgn 2 + n log m max xt − xt,sgn − xt,(l) + xt,sgn,(l)
1≤l≤m
2
xt,sgn − xt,sgn,(l)
– It remains to control w3 (τ ). To this end, one has
w3 (τ ) =
m
2
i
1 Xh
> \ 2
t,sgn
3 a>
x̃
(τ
)
−
a
x
ai a>
− xt,sgn,(l)
i
i
i x
m i=1 |
{z
}
:=ρi
m
2
2 i sgn sgn> t,sgn
1 Xh
−
x\
ai ai
x
− xt,sgn,(l) .
3 asgn>
x̃ (τ ) − asgn>
i
i
m i=1 |
{z
}
:=ρsgn
i
61
2
.
We consider the first entry of w3 (τ ), i.e. w3,k (τ ), and the 2nd through the nth entries, w3,⊥ (τ ),
separately. For the first entry w3,k (τ ), we obtain
m
m
1 X
1 X sgn sgn
t,sgn
w3,k (τ ) =
ρi ξi |ai,1 | a>
− xt,sgn,(l) −
ρ ξ |ai,1 | asgn>
xt,sgn − xt,sgn,(l) . (143)
i x
i
m i=1
m i=1 i i
Use the expansions
t,sgn,(l)
t,sgn,(l)
t,sgn
t,sgn
t,sgn
t,sgn,(l)
−
x
+ a>
− x⊥
a>
x
−
x
=
ξ
|a
|
x
i
i,1
i,⊥ x⊥
i
k
k
t,sgn,(l)
t,sgn,(l)
t,sgn
asgn>
− xk
+ a>
− x⊥
xt,sgn − xt,sgn,(l) = ξisgn |ai,1 | xt,sgn
i,⊥ x⊥
i
k
to further obtain
m
w3,k (τ ) =
m
1 X
1 X
t,sgn,(l)
t,sgn,(l)
2
t,sgn
sgn
>
− x⊥
xkt,sgn − xk
(ρi − ρsgn
+
(ρi ξi − ρsgn
i ) |ai,1 |
i ξi ) |ai,1 | ai,⊥ x⊥
m i=1
m i=1
m
1 X
t,sgn,(l)
2
=
xkt,sgn − xk
(ρi − ρsgn
i ) |ai,1 |
m i=1
{z
}
|
:=θ1 (τ )
+
1
m
|
m
X
t,sgn,(l)
t,sgn
sgn
>
(ρi − ρsgn
− x⊥
i ) (ξi + ξi ) |ai,1 | ai,⊥ x⊥
i=1
{z
}
:=θ2 (τ )
m
m
1 X sgn
1 X
t,sgn,(l)
t,sgn,(l)
t,sgn
t,sgn
ρi ξi |ai,1 | a>
ρi ξisgn |ai,1 | a>
−
− x⊥
− x⊥
+
i,⊥ x⊥
i,⊥ x⊥
m i=1
m i=1
{z
} |
{z
}
|
:=θ3 (τ )
:=θ4 (τ )
The identity (122) reveals that
ρi − ρsgn
= 6 (ξi − ξisgn ) |ai,1 | x̃k (τ ) a>
i,⊥ x̃⊥ (τ ) ,
i
and hence
m
6 X
t,sgn,(l)
3
t,sgn
(ξi − ξisgn ) |ai,1 | a>
− xk
,
θ1 (τ ) = x̃k (τ ) ·
i,⊥ x̃⊥ (τ ) xk
m i=1
which together with (130) implies
m
t,sgn,(l)
|θ1 (τ )| ≤ 6 x̃k (τ ) xt,sgn
− xk
k
1 X
3
(ξi − ξisgn ) |ai,1 | a>
i,⊥
m i=1
s
n log3 m
t,sgn,(l)
x̃k (τ ) xt,sgn
− xk
kx̃⊥ (τ )k2
k
m
s
n log3 m
t,sgn,(l)
x̃k (τ ) xt,sgn
− xk
,
k
m
.
.
kx̃⊥ (τ )k2
where the penultimate inequality arises from (130) and the last inequality utilizes the fact that
kx̃⊥ (τ )k2 ≤ xt,sgn
⊥
t,sgn,(l)
2
+ x⊥
2
. 1.
Again, we can use (144) and the identity (ξi − ξisgn ) (ξi + ξisgn ) = 0 to deduce that
θ2 (τ ) = 0.
62
(144)
t,sgn,(l)
t,sgn
|ai,1 | a>
− x⊥
When it comes to θ3 (τ ), we exploit the independence between ξi and ρsgn
i
i,⊥ x⊥
and apply
the Bernstein inequality (see Lemma 11) to obtain that with probability exceeding 1 −
O m−10
1 p
|θ3 (τ )| .
V1 log m + B1 log m ,
m
where
V1 :=
m
X
2
2
t,sgn,(l)
t,sgn
>
− x⊥
(ρsgn
i ) |ai,1 | ai,⊥ x⊥
2
t,sgn,(l)
t,sgn
>
− x⊥
and B1 := max |ρsgn
i | |ai,1 | ai,⊥ x⊥
1≤i≤m
i=1
Combine the fact |ρsgn
i | . log m and Lemma 14 to see that
t,sgn,(l)
− x⊥
V1 . m log2 m xt,sgn
⊥
In addition, the facts |ρsgn
i | . log m, (56) and (57) tell us that
q
t,sgn,(l)
B1 . n log3 m xt,sgn
− x⊥
⊥
Continue the derivation to reach
s
p
3
5
log m
n log m t,sgn
t,sgn,(l)
|θ3 (τ )| .
+
x⊥ − x⊥
m
m
s
2
.
2
.
2
2
.
log3 m t,sgn
t,sgn,(l)
x⊥ − x⊥
m
2
,
(145)
provided that m & n log2 m. This further allows us to obtain
|θ4 (τ )| =
≤
m
2
i sgn
1 Xh
t,sgn,(l)
t,sgn
\ 2
3 a>
− a>
ξi |ai,1 | a>
− x⊥
i x̃ (τ )
i x
i,⊥ x⊥
m i=1
m
o
2
1 Xn
t,(l)
2
t
3 a>
x
(τ
)
−
|a
|
ξisgn |ai,1 | a>
i,1
i
i,⊥ x⊥ − x⊥
m i=1
+
m
2
2 o sgn
1 Xn
t,sgn,(l)
t,sgn
3 a>
− 3 a>
ξi |ai,1 | a>
− x⊥
i x̃ (τ )
i x (τ )
i,⊥ x⊥
m i=1
+
m
o
2
1 Xn
t,(l)
t,sgn,(l)
2
t
3 a>
− |ai,1 | ξisgn |ai,1 | a>
− xt,sgn
+ x⊥
i x (τ )
i,⊥ x⊥ − x⊥
⊥
m i=1
s
.
p
log3 m
t,(l)
t,sgn,(l)
xt⊥ − x⊥
− x⊥
+ log m xt,sgn
⊥
m
2
1
t,(l)
t,sgn(l)
+
xt⊥ − x⊥ − xt,sgn
+ x⊥
.
⊥
3/2
2
log m
2
kx (τ ) − x̃ (τ )k2
(146)
To justify the last inequality,
we first use similar bounds as in (145) to show that with probability
exceeding 1 − O m−10 ,
m
o
2
1 Xn
t,(l)
2
t
3 a>
− |ai,1 | ξisgn |ai,1 | a>
.
i,⊥ x⊥ − x⊥
i x(τ )
m i=1
s
log3 m
t,(l)
xt⊥ − x⊥
m
In addition, we can invoke the Cauchy-Schwarz inequality to get
m
2
2 o sgn
1 Xn
t,sgn,(l)
t,sgn
3 a>
− 3 a>
ξi |ai,1 | a>
− x⊥
i x̃ (τ )
i x (τ )
i,⊥ x⊥
m i=1
63
.
2
.
v
u
u
≤t
m
o2
2
1 Xn
2
> x (τ ) 2
3 a>
x̃
(τ
)
−
3
a
|ai,1 |
i
i
m i=1
!
m
1 X > t,sgn
t,sgn,(l)
ai,⊥ x⊥ − x⊥
m i=1
v
u
m n
u1 X
o2
2
t,sgn,(l)
2
> x (τ ) 2
.t
− x⊥
|ai,1 | xt,sgn
a>
x̃
(τ
)
−
a
i
i
⊥
m i=1
2
!
,
2
where the last line arises from Lemma 13. For the remaining term in the expression above, we have
v
v
u
u
m n
m
o2
u1 X
u1 X
2
2 >
2
2
2
2
>
>
t
t
ai (x (τ ) + x̃ (τ ))
|ai,1 | a>
|ai,1 | =
ai x̃ (τ ) − ai x (τ )
i (x (τ ) − x̃ (τ ))
m i=1
m i=1
v
u
m
(i) u log m X
2
2
.t
|ai,1 | a>
i (x (τ ) − x̃ (τ ))
m i=1
(ii)
.
p
log m kx (τ ) − x̃ (τ )k2 .
Here, (i) makes use of the incoherence condition (142), whereas (ii) comes from Lemma 14. Regarding
the last line in (146), we have
m
o
2
1 Xn
t,(l)
t,sgn,(l)
2
t
3 a>
− |ai,1 | ξisgn |ai,1 | a>
− xt,sgn
+ x⊥
i x (τ )
i,⊥ x⊥ − x⊥
⊥
m i=1
≤
m
o
2
1 Xn
2
3 a>
x
(τ
)
−
|a
|
ξisgn |ai,1 | a>
i,1
i
i,⊥
m i=1
t,(l)
xt⊥ − x⊥
t,sgn,(l)
− xt,sgn
+ x⊥
⊥
.
2
2
n
o
2
2
Since ξisgn is independent of 3 a>
x
(τ
)
−
|a
|
|ai,1 | a>
i,1
i
i,⊥ , one can apply the Bernstein inequality
(see Lemma 11) to deduce that
m
o
2
1 Xn
2
3 a>
− |ai,1 | ξisgn |ai,1 | a>
i x (τ )
i,⊥
m i=1
.
2
1 p
V2 log m + B2 log m ,
m
where
V2 :=
m n
o2
X
2
2
2
3
3 a>
x
(τ
)
−
|a
|
|ai,1 | a>
i,1
i
i,⊥ ai,⊥ . mn log m;
i=1
2
√
2
B2 := max 3 a>
− |ai,1 | |ai,1 | kai,⊥ k2 . n log3/2 m.
i x (τ )
1≤i≤m
This further implies
m
o
2
1 Xn
2
3 a>
− |ai,1 | ξisgn |ai,1 | a>
i x (τ )
i,⊥
m i=1
s
.
2
n log4 m
+
m
√
n log5/2 m
1
.
,
m
log3/2 m
as long as m n log7 m. Take the previous bounds on θ1 (τ ), θ2 (τ ), θ3 (τ ) and θ4 (τ ) collectively to
arrive at
s
s
n log3 m
log3 m
t,sgn,(l)
t,sgn,(l)
t,sgn
− xk
+
w3,k (τ ) .
x̃k (τ ) xk
xt,sgn
− x⊥
⊥
m
m
2
s
p
log3 m
t,(l)
t,sgn,(l)
+
xt⊥ − x⊥
+ log m xt,sgn
− x⊥
kx (τ ) − x̃ (τ )k2
⊥
m
2
2
64
+
1
log
3/2
t,(l)
xt⊥ − x⊥
m
t,sgn(l)
− xt,sgn
+ x⊥
⊥
2
s
.
n log3 m
t,sgn,(l)
− xk
x̃k (τ ) xt,sgn
k
m
s
p
log3 m
t,(l)
t,sgn,(l)
+
xt⊥ − x⊥
+ log m xt,sgn
− x⊥
⊥
m
2
1
t,(l)
t,sgn(l)
xt⊥ − x⊥ − xt,sgn
,
+
+ x⊥
⊥
3/2
2
log m
2
kx (τ ) − x̃ (τ )k2
where the last inequality follows from the triangle inequality
t,sgn,(l)
xt,sgn
− x⊥
⊥
q
t,(l)
2
≤ xt⊥ − x⊥
t,(l)
2
+ xt⊥ − x⊥
t,sgn(l)
− xt,sgn
+ x⊥
⊥
2
3
log m
1
and the fact that
for m sufficiently large. Similar to (143), we have the following
≤ log3/2
m
m
identity for the 2nd through the nth entries of w3 (τ ):
m
m
1 X
1 X sgn
t,sgn
t,sgn,(l)
ρi ai,⊥ a>
ρi ai,⊥ asgn>
x
−
x
−
xt,sgn − xt,sgn,(l)
i
i
m i=1
m i=1
m
2
2
3 X
t,sgn,(l)
sgn
sgn>
>
ai x̃ (τ ) ξi − ai
x̃ (τ ) ξi
|ai,1 | ai,⊥ xt,sgn
−
x
=
k
k
m i=1
w3,⊥ (τ ) =
m
3 X
t,sgn,(l)
2
|ai,1 | (ξi − ξisgn ) |ai,1 | ai,⊥ xt,sgn
−
x
k
k
m i=1
m
2
2 sgn>
3 X
t,sgn,(l)
t,sgn
+
a>
x̃
(τ
)
−
a
x̃
(τ
)
ai,⊥ a>
.
− x⊥
i
i,⊥ x⊥
i
m i=1
+
It is easy to check by Lemma 14 and the incoherence conditions a>
i x̃ (τ ) .
√
sgn>
ai
x̃ (τ ) . log m kx̃ (τ )k2 that
√
log m kx̃ (τ )k2 and
s
m
3
2
1 X >
n
log
m
,
a x̃ (τ ) ξi |ai,1 | ai,⊥ = 2x̃1 (τ ) x̃⊥ (τ ) + O
m i=1 i
m
and
s
m
3
2
n
log
m
1 X sgn>
.
a
x̃ (τ ) ξisgn |ai,1 | ai,⊥ = 2x̃1 (τ ) x̃⊥ (τ ) + O
m i=1 i
m
Besides, in view of (130), we have
s
m
3 X
2
|ai,1 | (ξi − ξisgn ) |ai,1 | ai,⊥
m i=1
We are left with controlling
3
m
Pm
i=1
.
2
n log3 m
.
m
2
2
t,sgn,(l)
sgn>
t,sgn
>
a>
x̃
(τ
)
−
a
x̃
(τ
)
a
a
x
−
x
i,⊥ i,⊥
i
i
⊥
⊥
. To
2
this end, one can see from (144) that
m
3 X
m i=1
2
2 sgn>
t,sgn,(l)
t,sgn
a>
x̃
(τ
)
−
a
x̃
(τ
)
ai,⊥ a>
− x⊥
i
i,⊥ x⊥
i
2
65
m
= x̃k (τ ) ·
6 X
t,sgn,(l)
t,sgn
>
− x⊥
(ξi − ξisgn ) |ai,1 | ai,⊥ a>
i,⊥ x̃⊥ (τ ) ai,⊥ x⊥
m i=1
m
1 X
ai,⊥ a>
i,⊥
m i=1
≤ 12 max |ai,1 | x̃k (τ ) max a>
i,⊥ x̃⊥ (τ )
1≤i≤m
. log m x̃k (τ )
1≤i≤m
t,sgn,(l)
xt,sgn
− x⊥
⊥
t,sgn,(l)
xt,sgn
− x⊥
⊥
2
,
2
where the last relation arises from (56), the incoherence condition max1≤i≤m a>
i,⊥ x̃⊥ (τ ) .
and Lemma 13. Hence the 2nd through the nth entries of w3 (τ ) obey
s
n log3 m t,sgn
t,sgn,(l)
t,sgn,(l)
− x⊥
+ log m x̃k (τ ) xt,sgn
xk
− xk
kw3,⊥ (τ )k2 .
⊥
m
√
log m
.
2
Combine the above estimates to arrive at
kw3 (τ )k2 ≤ w3,k (τ ) + kw3,⊥ (τ )k2
s
n log3 m t,sgn
t,sgn,(l)
t,sgn,(l)
−
x
+
≤ log m x̃k (τ ) xt,sgn
xk
− xk
⊥
⊥
m
2
s
p
log3 m
t,(l)
t,sgn,(l)
+
xt⊥ − x⊥
+ log m xt,sgn
− x⊥
kx (τ ) − x̃ (τ )k2
⊥
m
2
2
1
t,(l)
t,sgn(l)
xt⊥ − x⊥ − xt,sgn
+ x⊥
.
+
⊥
3/2
2
log m
• Putting together the preceding bounds on v1 and v2 (w1 (τ ), w2 (τ ) and w3 (τ )), we can deduce that
xt+1 − xt+1,(l) − xt+1,sgn + xt+1,sgn,(l)
2
= xt − xt,(l) − xt,sgn + xt,sgn,(l) − η
≤ xt − xt,(l) − xt,sgn + xt,sgn,(l) − η
1
Z
Z
w1 (τ ) dτ +
Z
0
1
1
Z
w2 (τ ) dτ +
0
w3 (τ ) dτ
0
− ηv2
2
+ η sup kw (τ )k2 + η sup kw3 (τ )k2 + η kv2 k2
w1 (τ ) dτ
0
1
2
0≤τ ≤1
0≤τ ≤1
n
o
2
≤ 1 + 3η 1 − xt 2 + ηφ1 xt − xt,(l) − xt,sgn + xt,sgn,(l)
2
p
t
t,sgn
t,(l)
t,sgn,(l)
t
t,sgn
n log m max x − x
−x
+x
+ log m x − x
xt,sgn − xt,sgn,(l)
+O η
2
2
1≤l≤m
s
3
n
log
m
t,sgn,(l)
+ O η log m sup x̃k (τ ) xt,sgn − xt,sgn,(l)
+ O η
xt,sgn
− xk
k
m
2
0≤τ ≤1
s
3
p
log
m
t,(l)
+ O η log m xt,sgn − xt,sgn,(l)
+ O η
xt⊥ − x⊥
sup
kx
(τ
)
−
x̃
(τ
)k
2 .
⊥
⊥
m
2
2 0≤τ ≤1
!
p
n log3 m t,(l)
log2 m
t,(l)
t,(l)
t,sgn,(l)
x⊥
xk
+O η
+O η
+ x
−x
.
(147)
m
m
2
2
To simplify the preceding bound, we first make the following claim, whose proof is deferred to the end of
this subsection.
Claim 1. For t ≤ T0 , the following inequalities hold:
p
n log m xt,sgn − xt,sgn,(l)
2
66
1
;
log m
2
t
t,sgn
log m sup x̃k (τ ) + log m x − x
2
0≤τ ≤1
+
p
p
n log3 m
log m sup kx (τ ) − x̃ (τ )k2 +
. αt log m;
m
0≤τ ≤1
1
.
αt log m
log m
Armed with Claim 1, one can rearrange terms in (147) to obtain for some |φ2 |, |φ3 |
xt+1 − xt+1,(l) − xt+1,sgn + xt+1,sgn,(l)
2
o
n
t 2
≤ 1 + 3η 1 − x 2 + ηφ2 max xt − xt,(l) − xt,sgn + xt,sgn,(l)
1≤l≤m
s
3
2
log m log m t
+ ηO log m · αt +
x − xt,(l)
+
m
m
2
s
p
3
log2 m
n
log
m
n log3 m t
t,(l)
+η
+ ηO
xk − xk
xt⊥ 2
+
m
m
m
!
p
n log3 m t
+ ηO
xk + xt − xt,sgn 2
m
n
o
2
≤ 1 + 3η 1 − xt 2 + ηφ3 xt − xt,(l) − xt,sgn + xt,sgn,(l)
2
t
t,(l)
+ O (η log m) · αt x − x
2
s
3
log2 m
n log m t
t,(l)
+ O η
+O η
xk − xk
xt⊥
m
m
!
p
n log3 m t
xk + xt − xt,sgn 2 .
+O η
m
2
Substituting in the hypotheses (40), we can arrive at
xt+1 − xt+1,(l) − xt+1,sgn + xt+1,sgn,(l)
2
p
t
n
o
n log9 m
1
t 2
≤ 1 + 3η 1 − x 2 + ηφ3 αt 1 +
C4
log m
m
t p
5
n log m
1
C1
+ O (η log m) αt βt 1 +
log m
m
s
p
t
log3 m
1
n log5 m
+ O η
βt 1 +
C1
m
log m
m
s
t p
3
n
log
m
1
n log12 m
αt 1 +
+ O η
C2
m
log m
m
!
p
log2 m
n log3 m
+O η
βt + O η
αt
m
m
s
!
p
t
n log3 m
1
n log5 m
+O η
αt 1 +
C3
m
log m
m
p
t
o
(i) n
1
n log9 m
t 2
≤ 1 + 3η 1 − x 2 + ηφ4 αt 1 +
C4
log m
m
67
1
log m
2
(ii)
≤ αt+1
for some |φ4 |
as long as
1
log m .
1
1+
log m
t+1
p
C4
n log9 m
m
Here, the last relation (ii) follows the same argument as in (116) and (i) holds true
p
t
n log5 m
1
1
C1
(log m) αt βt 1 +
αt 1 +
log m
m
log m
s
p
t
n log3 m
1
n log12 m
1
C2
αt 1 +
αt 1 +
m
log m
m
log m
log2 m
1
βt
αt 1 +
m
log m
s
p
t
n log3 m
1
n log5 m
1
C3
αt 1 +
αt 1 +
m
log m
m
log m
p
3
n log m
1
αt
αt 1 +
m
log m
1
log m
t
1
log m
t
1
log m
t
1
log m
t
1
log m
t
p
C4
p
C4
p
C4
p
C4
p
C4
n log9 m
;
m
(148a)
n log9 m
;
m
(148b)
n log9 m
;
m
(148c)
n log9 m
;
m
(148d)
n log9 m
,
m
(148e)
where we recall that t ≤ T0 . log n. The first condition (148a) can be checked using βt . 1 and the
assumption that C4 > 0 is sufficiently large. The second one is valid if m n log8 m. In addition, the
third condition follows from the relationship (see Lemma 1)
p
βt . αt n log m.
It is also easy to see that the last two are both valid.
Proof of Claim 1. For the first claim, it is east to see from the triangle inequality that
p
n log m xt,sgn − xt,sgn,(l)
2
p
t
t,(l)
+ xt − xt,(l) − xt,sgn + xt,sgn,(l)
≤ n log m x − x
2
2
p
t
t p
5
p
p
n log m
n log9 m
1
1
≤ n log mβt 1 +
+ n log mαt 1 +
C1
C4
log m
m
log m
m
.
n log3 m n log5 m
1
+
,
m
m
log m
as long as m n log6 m. Here, we have invoked the upper bounds on αt and βt provided in Lemma 1.
Regarding the second claim, we have
t,sgn,(l)
x̃k (τ ) ≤ xt,sgn
+ xk
k
t,sgn,(l)
≤ 2 xt,sgn
+ xt,sgn
− xk
k
k
t,(l)
≤ 2 xtk + 2 xt − xt,sgn 2 + xtk − xk
+ xt − xt,(l) − xt,sgn + xt,sgn,(l)
s
p
p
5
12
9
n log m
n log m
n log m
+
+
. αt ,
. αt 1 +
m
m
m
2
as long as m n log5 m. Similar arguments can lead us to conclude that the remaining terms on the
left-hand side of the second inequality in the claim are bounded by O(αt ). The third claim is an immediate
consequence of the fact αt log15 m (see Lemma 1).
68
H
Proof of Lemma 8
Recall from Appendix C that
xt+1
=
1
+
3η
1 − xt
k
s
2
2
+ O η
n log3 m t
x + J2 − J4 ,
k
m
where J2 and J4 are defined respectively as
m
h
2 i 1 X
J2 := η 1 − 3 xtk
·
a3 a> xt ;
m i=1 i,1 i,⊥ ⊥
m
J4 := η ·
1 X > t 3
a x
ai,1 .
m i=1 i,⊥ ⊥
Instead of resorting to the leave-one-out sequence {xt,sgn } as in Appendix C, we can directly apply Lemma
12 and the incoherence condition (49a) to obtain
m
|J2 | ≤ η 1 − 3 xtk
2
1
1 X 3 > t
xt⊥
a a x η 6
m i=1 i,1 i,⊥ ⊥
log m
m
|J4 | ≤ η
1 X > t
1
ai,⊥ x⊥ ai,1 η 6
xt⊥
m i=1
log m
3
2
η
2
η
1
αt ;
log m
1
αt
log m
with probability at least 1 − O m−10 , as long as m n log13 m. Here, the last relations come from the
fact that αt ≥ logc5 m (see Lemma 1). Combining the previous estimates gives
n
αt+1 = 1 + 3η 1 − xt
with |ζt |
I
1
log m .
2
2
o
+ ηζt αt ,
This finishes the proof.
Proof of Lemma 9
In view of Appendix D, one has
n
≤ 1 + 3η 1 − xt
xt+1 − xt+1,(l)
2
for some |φ1 |
1
log m ,
2
2
+ ηφ1
o
xt − xt,(l)
p
n log3 m
+O η
xt
m
2
!
2
,
where we use the trivial upper bound
t,(l)
2η xtk − xk
≤ 2η xt − xt,(l)
.
2
Under the hypotheses (48a), we can obtain
xt+1 − xt+1,(l)
n
≤ 1 + 3η 1 − xt
2
2
n
≤ 1 + 3η 1 − xt
2
2
2
+ ηφ1
o
αt 1 +
1
log m
t
o
αt 1 +
1
log m
t
+ ηφ2
t+1 p
1
n log15 m
≤ αt+1 1 +
C6
,
log m
m
69
p
n log15 m
+O η
m
p
n log15 m
m
C6
C6
p
n log3 m
(αt + βt )
m
!
for some |φ2 |
1
log m ,
as long as η is sufficiently small and
p
p
t
1
1
n log3 m
n log15 m
(αt + βt )
αt 1 +
C6
.
m
log m
log m
m
This is satisfied since, according to Lemma 1,
p
p
p
p
t
n log3 m
n log3 m
n log13 m
n log15 m
1
1
(αt + βt ) .
.
αt
αt 1 +
C6
,
m
m
m
log m
log m
m
as long as C6 > 0 is sufficiently large.
J
Proof of Lemma 12
Without loss of generality, it suffices to consider all the unitPvectors z obeying kzk2 = 1. To begin with, for
m
1
any given z, we can express the quantities of interest as m
i=1 (gi (z) − G (z)) , where gi (z) depends only
on z and ai . Note that
θ 2
1
gi (z) = aθi,1
a>
i,⊥ z
for different θ1 , θ2 ∈ {1, 2, 3, 4, 6} in each of the cases considered herein. It can be easily verified from
Gaussianality that in all of these cases, for any fixed unit vector z one has
2
E gi2 (z) . (E [|gi (z)|]) ;
(149)
E [|gi (z)|] 1;
(150)
1
(151)
E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} − E [gi (z)] ≤ E [|gi (z)|] .
2
i,⊥
n
√
In addition, on the event max1≤i≤m kai k2 ≤ 6n which has probability at least 1 − O me−1.5n , one has,
for any fixed unit vectors z, z0 , that
h
i
|gi (z) − gi (z0 )| ≤ nα kz − z0 k2
(152)
forP
some parameter α = O (1) in all cases. In light of these properties, we will proceed by controlling
m
1
i=1 gi (z) − E [gi (z)] in a unified manner.
m
We start by looking at any fixed vector z independent of {ai }. Recognizing that
m
h
i
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m}
2
2
i,⊥
i,⊥
m i=1
is a sum of m i.i.d. random variables, one can thus apply the Bernstein inequality to obtain
(
)
m
h
i
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} ≥ τ
P
2
2
i,⊥
i,⊥
m i=1
τ 2 /2
≤ 2 exp −
,
V + τ B/3
where the two quantities V and B obey
m
i
1
1 X h 2
1 2
2
√
E
g
(z)
1
>
i
{|ai,⊥ z|≤βkzk2 ,|ai,1 |≤5 log m} ≤ m E gi (z) . m (E [|gi (z)|]) ;
m2 i=1
n
o
1
B :=
max |gi (z)| 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} .
2
i,⊥
m 1≤i≤m
V :=
70
(153)
(154)
Here the penultimate relation of (153) follows from (149). Taking τ = E [|gi (z)|], we can deduce that
m
i
h
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} ≤ E [|gi (z)|] (155)
2
2
i,⊥
i,⊥
m i=1
n
o
i (z)|]
with probability exceeding 1 − 2 min exp −c1 m2 , exp − c2 E[|g
for some constants c1 , c2 > 0. In
B
particular, when m2 /(n log n) and E [|gi (z)|] /(Bn log n) are both sufficiently large, the inequality (155)
holds with probability exceeding 1 − 2 exp (−c3 n log n) for some constant c3 > 0 sufficiently large.
We then move on to extending
this result to a uniform bound. Let Nθ be a θ-net of the unit sphere with
n
cardinality |Nθ | ≤ 1 + θ2 such that for any z on the unit sphere, one can find a point z0 ∈ Nθ such that
kz − z0 k2 ≤ θ. Apply the triangle inequality to obtain
m
i
h
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m}
2
2
i,⊥
i,⊥
m i=1
m
≤
h
i
1 X
gi (z0 ) 1{|a> z0 |≤βkz0 k ,|ai,1 |≤5√log m} −E gi (z0 ) 1{|a> z0 |≤βkz0 k ,|ai,1 |≤5√log m}
2
2
i,⊥
i,⊥
m i=1
|
{z
}
:=I1
1
m
+
m h
X
√
i,⊥ z |≤βkzk2 ,|ai,1 |≤5 log m}
gi (z) 1{|a>
−gi (z0 ) 1{|a>
i,⊥ z0
|≤βkz0 k2 ,|ai,1 |≤5
√
i
log m}
,
i=1
|
{z
}
:=I2
where the second line arises from the fact that
h
i
h
E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} = E gi (z0 ) 1{|a>
i,⊥
√
i,⊥ z0 |≤βkz0 k2 ,|ai,1 |≤5 log m}
2
i
.
n
With regard to the first term I1 , by the union bound, with probability at least 1−2 1 + θ2 exp (−c3 n log n),
one has
I1 ≤ E [|gi (z0 )|] .
n
o
√
It remains to bound I2 . Denoting Si = z | a>
z
≤
β
kzk
,
|a
|
≤
5
log
m
, we have
i,1
i,⊥
2
m
I2 =
1 X
gi (z) 1{z∈Si } −gi (z0 ) 1{z0 ∈Si }
m i=1
m
≤
m
m
≤
m
1 X
1 X
1 X
(gi (z) − gi (z0 )) 1{z∈Si ,z0 ∈Si } +
gi (z) 1{z∈Si ,z0 ∈S
gi (z0 ) 1{z∈S
/ i} +
/ i ,z0 ∈Si }
m i=1
m i=1
m i=1
m
X
1 X
1
|gi (z) − gi (z0 )| +
max gi (z) 1{z∈Si } ·
1{z∈Si ,z0 ∈S
/ i}
m i=1
m 1≤i≤m
i=1
m
+
X
1
max gi (z0 ) 1{z0 ∈Si } ·
1{z∈S
/ i ,z0 ∈Si } .
m 1≤i≤m
i=1
For the first term in (156), it follows from (152) that
m
1 X
|gi (z) − gi (z0 )| ≤ nα kz − z0 k2 ≤ nα θ.
m i=1
For the second term of (156), we have
1{z∈Si ,z0 ∈S
/ i } ≤ 1{|a> z |≤β,|a> z0 |≥β }
i,⊥
i,⊥
71
(156)
= 1{|a>
i,⊥ z |≤β }
= 1{|a>
i,⊥ z
1{|a> z0 |≥β+√6nθ} + 1{β≤|a> z0 |<β+√6nθ}
i,⊥
i,⊥
√
|≤β } 1{β≤|a>
i,⊥ z0 |≤β+
≤ 1{β≤|a>
i,⊥ z0
√
|≤β+
6nθ }
(157)
6nθ }
.
Here, the identity (157) holds due to the fact that
1{|a> z|≤β } 1{|a> z0 |≥β+√6nθ} = 0;
i,⊥
i,⊥
√
in fact, under the condition a>
6nθ one has
i,⊥ z0 ≥ β +
√
√
√
>
>
a>
6nθ − kai,⊥ k2 kz − z0 k2 > β + 6nθ − 6nθ ≥ β,
i,⊥ z ≥ ai,⊥ z0 − ai,⊥ (z − z0 ) ≥ β +
which is contradictory to a>
i,⊥ z ≤ β. As a result, one can obtain
m
X
1{z∈Si ,z0 ∈S
/ i} ≤
i=1
m
X
1{β≤|a>
i,⊥ z0
√
6nθ }
|≤β+
≤ 2Cn log n,
i=1
2
with probability at least 1 − e− 3 Cn log n for a sufficiently large constant C > 0, where the last inequality
follows from the Chernoff bound (see Lemma 10). This together with the union bound reveals that with
2
n
probability exceeding 1 − 1 + θ2 e− 3 Cn log n ,
m
X
1
max gi (z) 1{z∈Si } ·
1{z∈Si ,z0 ∈S
/ i } ≤ B · 2Cn log n
m 1≤i≤m
i=1
with B defined in (154). Similarly, one can show that
m
X
1
max gi (z0 ) 1{z0 ∈Si } ·
1{z∈S
/ i ,z0 ∈Si } ≤ B · 2Cn log n.
m 1≤i≤m
i=1
Combine the above bounds to reach that
I1 + I2 ≤ E [|gi (z0 )|] + nα θ + 4B · Cn log n ≤ 2 E [|gi (z)|] ,
as long as
E [|gi (z)|]
and
4B · Cn log n ≤ E [|gi (z)|] .
2
2
In view of the fact (150), one can take θ n−α to conclude that
nα θ ≤
m
h
i
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} ≤ 2 E [|gi (z)|] (158)
2
2
i,⊥
i,⊥
m i=1
holds for all z ∈ Rn with probability at least 1 − 2 exp (−c4 n log n) for some constant c4 > 0, with the proviso
that ≥ n1 and that E [|gi (z)|] / (Bn log√
n) sufficiently large.
Further, we note that {maxi |ai,1 | ≤ 5 log m} occurs with probability at least 1 − O(m−10 ). Therefore,
on an event of probability at least 1 − O(m−10 ), one has
m
m
1 X
1 X
gi (z) =
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m}
2
i,⊥
m i=1
m i=1
(159)
for all z ∈ Rn−1 obeying maxi a>
i,⊥ z ≤ β kzk2 . On this event, one can use the triangle inequality to obtain
m
m
1 X
1 X
gi (z) − E [gi (z)] =
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E [gi (z)]
2
i,⊥
m i=1
m i=1
72
≤
m
i
h
1 X
gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} −E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m}
2
2
i,⊥
i,⊥
m i=1
h
i
+ E gi (z) 1{|a> z|≤βkzk ,|ai,1 |≤5√log m} − E [gi (z)]
2
i,⊥
1
≤ 2 E [|gi (z)|] + E [|gi (z)|]
n
≤ 3 E [|gi (z)|] ,
as long as >P1/n, where the penultimate line follows from (151). This leads to the
desired uniform upper
m
1
−10
bound for m
g
(z)
−
E
[g
(z)],
namely,
with
probability
at
least
1
−
O
m
,
i
i=1 i
m
1 X
gi (z) − E [gi (z)] ≤ 3 E [|gi (z)|]
m i=1
holds uniformly for all z ∈ Rn−1 obeying maxi a>
i,⊥ z ≤ β kzk2 , provided that
m2 /(n log n) and E [|gi (z)|] / (Bn log n)
are both sufficiently large (with B defined in (154)).
To finish up, we provide the bounds on B and the resulting sample complexity conditions for each case
as follows.
n
o
3
5
1
1
1
2
2
• For gi (z) = a3i,1 a>
i,⊥ z, one has B . m β log m, and hence we need m max 2 n log n, βn log m ;
3
• For gi (z) = ai,1 a>
, one has B .
i,⊥ z
1 3
mβ
1
log 2 m, and hence we need m max
2
• For gi (z) = a2i,1 a>
z
, we have B .
i,⊥
1 2
mβ
log m, and hence m max
2
• For gi (z) = a6i,1 a>
z
, we have B .
i,⊥
1 2
mβ
log3 m, and hence m max
6
, one has B .
• For gi (z) = a2i,1 a>
i,⊥ z
1 6
mβ
log m, and hence m max
4
• For gi (z) = a2i,1 a>
, one has B .
i,⊥ z
1 4
mβ
log m, and hence m max
n
3
1 3
1
2
2 n log n, β n log
2
1 2
2 n log n, β n log
1
m ;
4
1 2
2 n log n, β n log
1
2
1 6
2 n log n, β n log
1
2
1 4
2 n log n, β n log
1
o
m
m ;
m ;
m .
Given that can be arbitrary quantity above 1/n, we establish the advertised results.
K
Proof of Lemma 14
Note that if the second claim (59) holds, we can readily use it to justify the first one (58) by observing that
p
\
\
max a>
i x ≤ 5 log m x 2
1≤i≤m
holds with probability at least 1 − O m−10 . As a consequence, the proof is devoted to justifying the second
claim in the lemma.
First, notice that it suffices to consider all z’s with unit norm, i.e. kzk2 = 1. We can then apply the
triangle inequality to obtain
m
m
1 X > 2
1 X > 2
>
>
√
ai z ai a>
≤
ai z ai a>
−
β
I
+
β
zz
1
n
2
i − In − 2zz
i 1{|a>
z
≤c
log
m
}
| 2
i
m i=1
m i=1
|
{z
}
:=θ1
73
+ β1 In + β2 zz > − In + 2zz > ,
|
{z
}
:=θ2
where
i
h
β1 := E ξ 2 1{|ξ|≤c2 √log m}
h
i
β2 := E ξ 4 1{|ξ|≤c2 √log m} − β1
and
with ξ ∼ N (0, 1).
• For the second term θ2 , we can further bound it as follows
θ2 ≤ kβ1 In − In k + β2 zz > − 2zz >
≤ |β1 − 1| + |β2 − 2| ,
which motivates us to bound |β1 − 1| and |β2 − 2|. Towards this end, simple calculation yields
r
√
p
c2
2
c2 log m
2 log m
1 − β1 =
· c2 log me− 2 + erfc
π
2
r
2
(i)
p
c2 log m
c2
2
1
2
2 log m
√
· c2 log me− 2 + √
e− 4
≤
π
π c2 log m
(ii)
≤
1
,
m
2
where (i) arises from the fact that for all x > 0, erfc (x) ≤ √1π x1 e−x and (ii) holds as long as c2 > 0 is
sufficiently large. Similarly, for the difference |β2 − 2|, one can easily show that
h
i
2
(160)
|β2 − 2| ≤ E ξ 4 1{|ξ|≤c2 √log m} − 3 + |β1 − 1| ≤ .
m
Take the previous two bounds collectively to reach
θ2 ≤
3
.
m
• With regards to θ1 , we resort to the standard covering argument. First, fix some x, z ∈ Rn with kxk2 =
kzk2 = 1 and notice that
m
2
1 X > 2 > 2
ai z
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
i
m i=1
is a sum of m i.i.d. random variables with bounded sub-exponential norms. To see this, one has
2 > 2
2
a>
ai x 1{|a> z|≤c2 √log m}
≤ c22 log m a>
≤ c22 log m,
i z
i x
i
ψ1
ψ1
where k · kψ1 denotes the sub-exponential norm [Ver12]. This further implies that
2 > 2
2
a>
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
≤ 2c22 log m.
i z
i
ψ1
Apply the Bernstein’s inequality to show that for any 0 ≤ ≤ 1,
P
!
m
1 X > 2 > 2
2
a z
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
≥ 2c22 log m ≤ 2 exp −c2 m ,
i
m i=1 i
where c > 0 is some absolute constant. Taking
1 − 2 exp (−c10 n log m) for some c10 > 0, one has
q
n log m
m
reveals that with probability exceeding
m
2
1 X > 2 > 2
a z
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
. c22
i
m i=1 i
74
s
n log3 m
.
m
(161)
n
One can then apply the covering argument to extend the above result to
all unit vectors x, z ∈ R . Let
2 n
Nθ be a θ-net of the unit sphere, which has cardinality at most 1 + θ . Then for every x, z ∈ R with
unit norm, we can find x0 , z0 ∈ Nθ such that kx − x0 k2 ≤ θ and kz − z0 k2 ≤ θ. The triangle inequality
reveals that
m
2
1 X > 2 > 2
ai z
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
i
m i=1
m
≤
2
1 X > 2 > 2
a z0
ai x0 1{|a> z0 |≤c2 √log m} −β1 − β2 z0> x0 + β2
i
m i=1 i
|
|
{z
}
2
2
z > x − z0> x0
{z
}
:=I1
+
:=I2
m
i
2 > 2
1 X h > 2 > 2
√
z
a
x
1
ai z
ai x 1{|a> z|≤c2 √log m} − a>
>
0
i 0
i
{|ai z0 |≤c2 log m} .
i
m i=1
|
{z
}
:=I3
Regarding I1 , one sees from (161) and the union bound that with probability at least 1−2(1+ θ2 )2n exp (−c10 n log m),
one has
s
n log3 m
.
m
I1 . c22
For the second term I2 , we can deduce from (160) that β2 ≤ 3 and
2
2
z > x − z0> x0
= z > x − z0> x0 z > x + z0> x0
>
= (z − z0 ) x + z0 (x − x0 ) z > x + z0> x0
≤ 2 (kz − z0 k2 + kx − x0 k2 ) ≤ 2θ,
where the last line arises from the Cauchy-Schwarz inequality and the fact that x, z, x0 , z0 are all unit
norm vectors. This further implies
I2 ≤ 6θ.
Now we move on to control the last term I3 . Denoting
n
o
p
Si := u | a>
i u ≤ c2 log m
allows us to rewrite I3 as
m
I3 =
i
2 > 2
1 X h > 2 > 2
ai z
ai x 1{z∈Si } − a>
ai x0 1{z0 ∈Si }
i z0
m i=1
m
≤
2 > 2 i
1 X h > 2 > 2
ai z
ai x − a>
ai x0
1{z∈Si ,z0 ∈Si }
i z0
m i=1
+
m
m
1 X > 2 > 2
1 X > 2 > 2
ai z
ai x 1{z∈Si ,z0 ∈S
a z0
ai x0 1{z0 ∈Si ,z∈S
/ i} +
/ i} .
m i=1
m i=1 i
(162)
Here the decomposition is similar to what we have done in (156). For the first term in (162), one has
m
m
2 > 2 i
2 > 2
2 > 2
1 X h > 2 > 2
1 X
ai z
ai x − a>
ai x0
1{z∈Si ,z0 ∈Si } ≤
a>
ai x − a>
ai x0
i z0
i z
i z0
m i=1
m i=1
≤ nα θ,
for some α = O(1). Here the last line follows from the smoothness of the function g (x, z) = a>
i z
Proceeding to the second term in (162), we see from (157) that
√
√
√
1{z∈Si ,z0 ∈S
/ i } ≤ 1{c2 log m≤|a> z0 |≤c2 log m+ 6nθ } ,
i
75
2
2
a>
i x .
which implies that
m
m
2
1 X > 2 > 2
1 X > 2
>
a z
ai x 1{z∈Si ,z0 ∈S
1{z∈Si }
a x 1{z∈Si ,z0 ∈S
/ i } ≤ max ai z
/ i}
1≤i≤m
m i=1 i
m i=1 i
m
≤ c22 log m
1 X > 2
a x 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} .
i
m i=1 i
With regard to the above quantity, we have the following claim.
Claim 2. With probability at least 1 − c2 e−c3 n log m for some constants c2 , c3 > 0, one has
r
m
1 X > 2
n log m
a x 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} .
i
m i=1 i
m
for all x ∈ Rn with unit norm and for all z0 ∈ Nθ .
With this claim in place, we arrive at
s
m
1 X > 2 > 2
2
a z
ai x 1{z∈Si ,z0 ∈S
/ i } . c2
m i=1 i
n log3 m
m
with high probability. Similar arguments lead us to conclude that with high probability
s
m
1 X > 2 > 2
n log3 m
2
.
ai z0
ai x0 1{z0 ∈Si ,z∈S
.
c
/ i}
2
m i=1
m
Taking the above bounds collectively and setting θ m−α−1 yield with high probability for all unit vectors
z’s and x’s
s
m
1 X > 2 > 2
n log3 m
2
ai z
ai x 1{|a> z|≤c2 √log m} −β1 − β2 z > x
. c22
,
i
m i=1
m
which is equivalent to saying that
s
θ1 . c22
n log3 m
.
m
The proof is complete by combining the upper bounds on θ1 and θ2 , and the fact
1
m
=o
q
n log3 m
m
.
Proof of Claim 2. We first apply the triangle inequality to get
m
m
1 X > 2
1 X > 2
ai x 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} ≤
a x0 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ}
i
i
m i=1
m i=1 i
{z
}
|
:=J1
m
2 i
1 X h > 2
+
ai x − a>
1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} ,
i x0
i
m i=1
|
{z
}
:=J2
where x0 ∈ Nθ and kx − x0 k2 ≤ θ. The second term can be controlled as follows
m
2
2
1 X
J2 ≤
≤ nO(1) θ,
a>
− a>
i x
i x0
m i=1
76
2
where we utilize the smoothness property of the function h (x) = a>
i x . It remains to bound J1 , for which
we first fix x0 and z0 . Take the Bernstein inequality to get
!
m
h
i
1 X > 2
2
P
ai x0 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} −E a>
1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} ≥ τ
i x0
i
i
m
i=1
≤ 2e−cmτ
2
for some constant c > 0 and any sufficiently small τ > 0. Taking τ
exceeding 1 − 2e
−Cn log m
q
n log m
m
reveals that with probability
for some large enough constant C > 0,
J1 . E
h
2
a>
i x0
i
r
1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ} +
i
n log m
.
m
Regarding the expectation term, it follows from Cauchy-Schwarz that
r h
i r h
i
h
4 i
2
>
>
√
√
√
E 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ}
E ai x0 1{c2 log m≤|a> z0 |≤c2 log m+ 6nθ} ≤ E ai x0
i
i
h
i
E 1{c2 √log m≤|a> z0 |≤c2 √log m+√6nθ}
i
≤ 1/m,
as long as θ is sufficiently small. Combining the preceding bounds with the union bound, we can see that
2n −Cn log m
with probability at least 1 − 2 1 + θ2
e
r
J1 .
1
n log m
+ .
m
m
Picking θ m−c1 for some large enough constant c1 > 0, we arrive at with probability at least 1−c2 e−c3 n log m
r
m
1 X > 2
n log m
√
√
√
a x 1{c2 log m≤|a> z0 |≤c2 log m+ 6nθ} .
i
m i=1 i
m
for all unit vectors x’s and for all z0 ∈ Nθ , where c2 , c3 > 0 are some absolute constants.
L
Proof of Lemma 15
Recall that the Hessian matrix is given by
m
∇2 f (z) =
2
i
1 Xh
\ 2
3 a>
− a>
ai a>
i z
i x
i .
m i=1
Lemma 14 implies that with probability at least 1 − O m−10 ,
s
2
∇2 f (z) − 6zz > − 3 kzk2 In + 2x\ x\> + x\
2
I .
2 n
n
n log3 m
2
max kzk2 , x\
m
2
2
o
(163)
√
holds simultaneously for all z obeying max1≤i≤m a>
i z ≤ c0 log m kzk2 , with the proviso that m
n log3 m. This together with the fact x\ 2 = 1 leads to
s
3
n
o
n
log
m
2
2
−∇2 f (z) −6zz > − 3 kzk2 − 1 + O
max kzk2 , 1 In
m
77
s
3
n
o
n
log
m
2
2
− 9 kzk2 − 1 + O
max kzk2 , 1 In .
m
As a consequence, if we pick 0 < η <
c2
max{kzk22 ,1}
for c2 > 0 sufficiently small, then In − η∇2 f (z) 0. This
combined with (163) gives
s
In − η∇2 f (z) −
n
2
1 − 3η kzk2 + η In + 2ηx\ x\> − 6ηzz >
o
.
n
o
n log3 m
2
max kzk2 , 1 .
m
Additionally, it follows from (163) that
s
2
>
∇ f (z) ≤ 6zz +
2
3 kzk2
\
\>
In + 2x x
s
≤ 9kzk22 + 3 + O
+
2
x\ 2
In + O
n
n log3 m
2
max kzk2 , x\
m
2
o
2
n
o
n log3 m
2
max kzk2 , 1
m
≤ 10kzk22 + 4
as long as m n log3 m.
M
Proof of Lemma 16
Note that when t . log n, one naturally has
1
1+
log m
t
. 1.
(164)
Regarding the first set of consequences (61), one sees via the triangle inequality that
max
1≤l≤m
xt,(l)
2
≤ xt
2
+ max
1≤l≤m
xt − xt,(l)
2
p
t
(i)
n log5 m
1
≤ C5 + βt 1 +
C1 η
log m
m
!
p
5
(ii)
n log m
≤ C5 + O
m
(iii)
≤ 2C5 ,
where (i) follows from the induction hypotheses (40a) and (40e).
p The second inequality (ii) holds true since
βt . 1 and (164). The last one (iii) is valid as long as m n log5 m. Similarly, for the lower bound, one
can show that for each 1 ≤ l ≤ m,
t,(l)
x⊥
2
t,(l)
≥ xt⊥
2
− xt⊥ − x⊥
≥ xt⊥
− max
2
2
xt − xt,(l) 2
1≤l≤m
p
t
1
n log3 m
c5
≥ c5 − βt 1 +
C1 η
≥ ,
log m
m
2
p
as long as m n log5 m. Using similar arguments (αt . 1), we can prove the lower and upper bounds for
xt,sgn and xt,sgn,(l) .
78
For the second set of consequences (62), namely the incoherence consequences, first notice that it is
t
sufficient to show that the inner product (for instance |a>
l x |) is upper bounded by C7 log m in magnitude
for some absolute constants C7 > 0. To see this, suppose for now
p
t
(165)
max a>
l x ≤ C7 log m.
1≤l≤m
One can further utilize the lower bound on kxt k2 to deduce that
C7 p
log m xt
c5
t
max a>
l x ≤
1≤l≤m
2
.
This justifies the claim that we only need to obtain bounds as in (165).
Once again we can invoke the triangle
inequality to deduce that with probability at least 1 − O m−10 ,
t
>
t
t,(l)
t,(l)
max a>
+ max a>
l x ≤ max al x − x
l x
1≤l≤m
1≤l≤m
1≤l≤m
(i)
t,(l)
xt − xt,(l) 2 + max a>
l x
1≤l≤m
p
t
(ii) √
n log5 m p
1
. nβt 1 +
C1 η
+ log m xt,(l)
log m
m
p
p
n log5/2 m
.
+ C5 log m . C5 log m.
m
≤ max kal k2 max
1≤l≤m
1≤l≤m
2
Here, the first relation (i) results from the Cauchy-Schwarz inequality and (ii) utilizes the induction
√ hypothesis
t,(l)
(40a), the fact (57) and the standard Gaussian concentration, namely, max1≤l≤m a>
x
.
log m xt,(l) 2
l
with probability at least 1 − O m−10 . The last line is a direct consequence of the fact (61a) established
above and (164). In regard to the incoherence w.r.t. xt,sgn , we resort to the leave-one-out sequence xt,sgn,(l) .
Specifically, we have
>
t,sgn
t
t,sgn
− xt
a>
≤ a>
l x + al x
l x
t
>
t,sgn
t,sgn,(l)
≤ a>
− xt − xt,sgn,(l) + xt,(l) + a>
− xt,(l)
l x + al x
l x
t p
p
√
1
n log9 m p
. log m + nαt 1 +
C4
+ log m
log m
m
p
. log m.
The remaining incoherence conditions can be obtained through similar arguments. For the sake of conciseness, we omit the proof here.
With regard to the third set of consequences (63), we can directly use the induction hypothesis and obtain
p
t
1
n log3 m
t
t,(l)
max x − x
≤
β
1
+
C
t
1
2
1≤l≤m
log m
m
p
n log3 m
1
.
.
,
m
log m
p
as long as m n log5 m. Apply similar arguments to get the claimed bound on kxt − xt,sgn k2 . For the
remaining one, we have
t,(l)
max xk
1≤l≤m
t,(l)
≤ max xtk + max xk
− xtk
p
t
n log12 m
1
C2 η
≤ αt + αt 1 +
log m
m
1≤l≤m
1≤l≤m
≤ 2αt ,
with the proviso that m
p
n log12 m.
79
| 7 |
Preventing Atomicity Violations with Contracts
Diogo G. Sousa
Ricardo J. Dias
Carla Ferreira
João M. Lourenço
arXiv:1505.02951v1 [cs.DC] 12 May 2015
NOVA LINCS — NOVA Laboratory for Computer Science and Informatics
Departamento de Informática, Faculdade de Ciências e Tecnologia
Universidade NOVA de Lisboa, Portugal
May 13, 2015
Abstract
Software developers are expected to protect concurrent accesses to shared regions of
memory with some mutual exclusion primitive that ensures atomicity properties to a
sequence of program statements. This approach prevents data races but may fail to provide all necessary correctness properties. The composition of correlated atomic operations
without further synchronization may cause atomicity violations. Atomicity violations may be
avoided by grouping the correlated atomic regions in a single larger atomic scope. Concurrent programs are particularly prone to atomicity violations when they use services
provided by third party packages or modules, since the programmer may fail to identify
which services are correlated. In this paper we propose to use contracts for concurrency,
where the developer of a module writes a set of contract terms that specify which methods are correlated and must be executed in the same atomic scope. These contracts are
then used to verify the correctness of the main program with respect to the usage of the
module(s). If a contract is well defined and complete, and the main program respects
it, then the program is safe from atomicity violations with respect to that module. We
also propose a static analysis based methodology to verify contracts for concurrency that
we applied to some real-world software packages. The bug we found in Tomcat 6.0 was
immediately acknowledged and corrected by its development team.
1    Introduction
The encapsulation of a set of functionalities as services of a software module offers strong
advantages in software development, since it promotes the reuse of code and eases maintenance
efforts. If a programmer is unacquainted with the implementation details of a particular set
of services, she may fail to identify correlations that exist across those services, such as data
and code dependencies, leading to an inappropriate usage. This is particularly relevant in a
concurrent setting, where it is hard to account for all the possible interleavings between threads
and the effects of these interleaved calls to the module’s internal state.
One of the requirements for the correct behavior of a module is to respect its protocol,
which defines the legal sequences of invocations to its methods. For instance, a module that
offers an abstraction to deal with files typically will demand that the programmer start by
calling the method open(), followed by an arbitrary number of read() or write() operations,
and concluding with a call to close(). A program that does not follow this protocol is incorrect
and should be fixed. A way to enforce a program to conform to such well defined behaviors is to
use the design by contract methodology [21], and specifying contracts that regulate the module
usage protocol. In this setting, the contract not only serves as useful documentation, but may
also be automatically verified, ensuring the client’s program obeys the module’s protocol [8,15].
The development of concurrent programs brings new challenges on how to define the protocol of a module. Not only is it important to respect the module’s protocol, but it is also necessary
void schedule () {
    Task t = taskQueue.next();
    if (t.isReady())
        t.run();
}
Figure 1: Program with an atomicity violation.
to guarantee the atomic execution of sequences of calls that are susceptible to causing atomicity violations. These atomicity violations are possible even when the individual methods
in the module are protected by some concurrency control mechanism. Figure 1 shows part
of a program that schedules tasks. The schedule() method gets a task, verifies if it is ready
to run, and executes it if so. This program contains a potential atomicity violation, since the
method may execute a task that is not marked as ready. This may happen when another
thread concurrently schedules the same task, despite the fact that the methods of Task are atomic.
In this case the isReady() and run() methods should be executed in the same atomic context
to avoid atomicity violations. Atomicity violations are one of the most common sources of bugs
in concurrent programming [20] and are particularly likely to occur when composing calls
to a module, as the developer may not be aware of the implementation and internal state of
the module.
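One possible repair is sketched below; it assumes that Task instances may be used as their own locks and that run() tolerates executing while that lock is held, neither of which is stated in Figure 1.

    void schedule () {
        Task t = taskQueue.next();
        synchronized (t) {          // isReady() and run() now execute in the same atomic context
            if (t.isReady())
                t.run();
        }
    }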
In this paper we propose to extend module usage protocols with a specification of the
sequences of calls that should be executed atomically. We will also present an efficient static
analysis to verify these protocols.
The contributions of this paper can be summarized as:
1. A proposal of contracts for concurrency addressing the issue of atomicity violations;
2. A static analysis methodology to extract the behavior of a program with respect to the
sequence of calls it may execute;
3. A static analysis to verify if a program conforms to a module’s contract, hence that the
module’s correlated services are correctly invoked in the scope of an atomic region.
The remainder of this paper is organized as follows. In Section 2 we provide a specification
and the semantics for the contract. Section 3 contains the general methodology of the analysis.
Section 4 presents the phase of the analysis that extracts the behavior of the client program
while Section 5 shows how to verify a contract based on the extracted information. Section 9
follows with the presentation and discussion of the results of our experimental validation. The
related work is presented in Section 10, and we conclude with the final remarks in Section 11.
2    Contract Specification
The contract of a module must specify which sequences of calls of its non-private methods must
be executed atomically, as to avoid atomicity violations in the module’s client program. In the
spirit of the programming by contract methodology, we assume the definition of the contract,
including the identification of the sequences of methods that should be executed atomically is
a responsibility of the module’s developer.
Definition 1 (Contract). The contract of a module with public methods m1 , · · ·, mn is of the
form,
1. e1
2. e2
   ...
k. ek ,
where each clause i is described by ei , a star-free regular expression over the alphabet {m1 , · · ·, mn }.
Star-free regular expressions are regular expressions without the Kleene star, using only the
alternative (|) and the (implicit) concatenation operators.
Each sequence defined in ei must be executed atomically by the program using the module,
otherwise there is a violation of the contract. The contract specifies a finite number of sequences
of calls, since it is the union of star-free languages. Therefore, it is possible to have the same
expressivity by explicitly enumerating all sequences of calls, i.e., without using the alternative
operator. We chose to offer the alternative operator so the programmer can group similar
scenarios under the same clause. Our verification analysis assumes the contract defines a finite
number of call sequences.
Example Consider the array implementation offered by Java standard library, java.util.ArrayList.
For simplicity we will only consider the methods add(obj), contains(obj), indexOf(obj), get(idx),
set(idx, obj), remove(idx), and size().
The following contract defines some of the clauses for this class.
1. contains indexOf
2. indexOf (remove | set | get)
3. size (remove | set | get)
4. add indexOf.
Clause 1 of ArrayList’s contract denotes the execution of contains() followed by indexOf()
should be atomic, otherwise the client program may confirm the existence of an object in the
array, but fail to obtain its index due to a concurrent modification. Clause 2 represents a
similar scenario where, in addition, the position of the object is modified. In clause 3 we deal
with the common situation where the program verifies if a given index is valid before accessing
the array. To make sure the size obtained by size() is valid when accessing the array we
should execute these calls atomically. Clause 4 represents scenarios where an object is added
to the array and then the program tries to obtain information about that object by querying
the array.
Another relevant clause is contains indexOf (set | remove), but the contract’s semantics already enforce the atomicity of this clause as a consequence of the composition of clauses 1 and 2,
as they overlap in the indexOf() method.
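As a concrete illustration, consider the hypothetical client below (it is not taken from any real package): looking up an index and then removing it violates clause 2 unless both calls run in the same atomic scope, since a concurrent modification may shift or remove the element between the two calls. The sketch assumes that every access to the list goes through methods synchronized on the same object.

    import java.util.ArrayList;

    class Inventory {
        private final ArrayList<String> items = new ArrayList<>();

        // Violates clause 2 (indexOf followed by remove): the index may be stale.
        void discardUnsafe(String item) {
            int idx = items.indexOf(item);
            if (idx >= 0)
                items.remove(idx);
        }

        // Respects the contract: both calls execute in one atomic scope.
        synchronized void discard(String item) {
            int idx = items.indexOf(item);
            if (idx >= 0)
                items.remove(idx);
        }
    }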
3    Methodology
The proposed analysis verifies statically if a client program complies with the contract of a
given module, as defined in Section 2. This is achieved by verifying that the threads launched
by the program always execute atomically the sequence of calls defined by the contract.
This analysis has the following phases:
1. Determine the entry methods of each thread launched by the program.
2. Determine which of the program’s methods are atomically executed. We say that a
method is atomically executed if it is atomic¹ or if the method is always called by atomically executed methods (illustrated in the sketch after this list).
¹ An atomic method is a method that explicitly applies a concurrency control mechanism to enforce atomicity.
3. Extract the behavior of each of the program’s threads with respect to the usage of the
module under analysis.
4. For each thread, verify that its usage of the module respects the contract as defined in
Section 2.
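The notion of an atomically executed method can be illustrated with the hypothetical fragment below: incrementTwice() is atomic because it is synchronized, and bump() is atomically executed because its only caller is an atomically executed method.

    class Counter {
        private int value;

        synchronized void incrementTwice() {   // atomic method (explicit lock)
            bump();                            // bump() is only called from incrementTwice(),
            bump();                            // so it is atomically executed
        }

        private void bump() {
            value++;
        }
    }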
In Section 4 we introduce the algorithm that extracts the program’s behavior with respect
to the module’s usage. Section 5 defines the methodology that verifies whether the extracted
behavior complies with the contract.
4    Extracting the Behavior of a Program
The behavior of the program with respect to the module usage can be seen as the individual
behavior of any thread the program may launch. The usage of a module by a thread t of a
program can be described by a language L over the alphabet m1 , · · ·, mn , the public methods
of the module. A word m1 · · · mn ∈ L if some execution of t may run the sequence of calls
m1 , · · ·, mn to the module.
To extract the usage of a module by a program, our analysis generates a context-free
grammar that represents the language L of a thread t of the client program, which is represented
by its control flow graph (CFG) [1]. The CFG of the thread t represents every possible path the
control flow may take during its execution. In other words, the analysis generates a grammar
Gt such that, if there is an execution path of t that runs the sequence of calls m1 , · · ·, mn , then
m1 · · · mn ∈ L(Gt ). (The language represented by a grammar G is denoted by L(G).)
A context-free grammar is especially suitable to capture the structure of the CFG since
it easily captures the call relations between methods that cannot be captured by a weaker
class of languages such as regular languages. The first example below will show how this
is done. Another advantage of using context-free grammars (as opposed to another static
analysis technique) is that we can use efficient algorithms for parsing to explore the language
it represents.
Definition 2 (Program’s Thread Behavior Grammar). The grammar Gt = (N, Σ, P, S) is
built from the CFG of the client’s program thread t.
We define,
• N , the set of non-terminals, as the set of nodes of the CFG. Additionally we add nonterminals that represent each method of the client’s program (represented in calligraphic
font);
• Σ, the set of terminals, as the set of identifiers of the public methods of the module under
analysis (represented in bold);
• P , the set of productions, as described below by rules 1–5;
• S, the grammar initial symbol, as the non-terminal that represents the entry method of
the thread t.
For each method f() that thread t may run we add to P the productions respecting the
rules 1–5. Method f() is represented by F . A CFG node is denoted by α : ⟦v⟧, where α is the
non-terminal that represents the node and v its type. We distinguish the following types of
nodes: entry, the entry node of method F ; mod.h(), a call to method h() of the module mod
under analysis; g(), a call to another method g() of the client program; and return, the return
point of method F . The succ : N → P(N ) function is used to obtain the successors of a given
CFG node.
if α : ⟦entry⟧,       {F → α} ∪ {α → β | β ∈ succ(α)} ⊂ P                             (1)
if α : ⟦mod.h()⟧,     {α → h β | β ∈ succ(α)} ⊂ P                                     (2)
if α : ⟦g()⟧,         {α → G β | β ∈ succ(α)} ⊂ P, where G represents g()             (3)
if α : ⟦return⟧,      {α → ǫ} ⊂ P                                                     (4)
if α : ⟦otherwise⟧,   {α → β | β ∈ succ(α)} ⊂ P                                       (5)

No more productions belong to P.
Rules 1–5 capture the structure of the CFG in the form of a context-free grammar. Intuitively this grammar represents the control flow of the thread t of the program, ignoring
everything not related to the module’s usage. For example, if f g ∈ L(Gt ) then the thread t
may invoke method f(), followed by g().
Rule 1 adds a production that relates the non-terminal F , representing method f(), to the
entry node of the CFG of f(). Calls to the module under analysis are recorded in the grammar
by the Rule 2. Rule 3 handles calls to another method g() of the client program (method g() will
have its non-terminal G added by Rule 1). The return point of a method adds an ǫ production
to the grammar (Rule 4). All other types of CFG nodes are handled uniformly, preserving
the CFG structure by making them reducible to the successor non-terminals (Rule 5). Notice
that only the client program code is analyzed.
The Gt grammar may be ambiguous, i.e., offer several different derivations to the same
word. Each ambiguity in the parsing of a sequence of calls m1 · · · mn ∈ L(Gt ) represents
different contexts where these calls can be executed by thread t. Therefore we need to allow
such ambiguities so that the verification of the contract can cover all the occurrences of the
sequences of calls in the client program.
The language L(Gt ) contains every sequence of calls the program may execute, i.e., it
produces no false negatives. However L(Gt ) may contain sequences of calls the program does
not execute (for instance calls performed inside a block of code that is never executed), which
may lead to false positives.
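The construction of Definition 2 can be sketched in code roughly as follows; CfgNode, NodeKind, Production and the accessor methods are hypothetical names used only for illustration, not part of any existing tool.

    // Sketch of Rules 1-5 for one client method f(); assumes java.util.List and ArrayList are imported.
    // A Production's right-hand side mixes terminals (module methods) and non-terminals (CFG nodes/methods).
    List<Production> buildProductions(ClientMethod f) {
        List<Production> p = new ArrayList<>();
        p.add(new Production(f.nonTerminal(), List.of(f.entryNode().nonTerminal())));   // Rule 1: F -> entry
        for (CfgNode a : f.cfgNodes()) {
            if (a.kind() == NodeKind.RETURN) {                                          // Rule 4: alpha -> epsilon
                p.add(new Production(a.nonTerminal(), List.of()));
                continue;
            }
            for (CfgNode b : a.successors()) {
                switch (a.kind()) {
                    case MODULE_CALL:                                                   // Rule 2: alpha -> h beta
                        p.add(new Production(a.nonTerminal(), List.of(a.moduleMethod(), b.nonTerminal())));
                        break;
                    case CLIENT_CALL:                                                   // Rule 3: alpha -> G beta
                        p.add(new Production(a.nonTerminal(), List.of(a.clientMethod().nonTerminal(), b.nonTerminal())));
                        break;
                    default:                                                            // Rules 1 and 5: alpha -> beta
                        p.add(new Production(a.nonTerminal(), List.of(b.nonTerminal())));
                }
            }
        }
        return p;
    }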
Examples Figure 2 (left) shows a program that consists of two methods that call each other
mutually. Method f() is the entry point of the thread and the module under analysis is
represented by object m. The control flow graphs of these methods are shown in Figure 2 (right).
According to Definition 2, we construct the grammar G1 = (N1 , Σ1 , P1 , S1 ), where
N1 = {F , G, A, B, C, D, E, F, G, H, I, J, K, L, M },
Σ1 = {a, b, c, d},
S1 = F ,
and P1 has the following productions:
F → A      A → B      B → a C      C → D | E      D → G E      E → b F      F → ǫ
G → G      G → H      H → c I      I → J | M      J → G K      K → d L      L → F M      M → ǫ.
void f () {
    m.a();
    if (cond)
        g();
    m.b();
}

void g () {
    m.c();
    if (cond) {
        g();
        m.d();
        f();
    }
}

[CFG (right panel): f() has nodes A (entry), B (m.a()), C (cond), D (g()), E (m.b()) and F (return); g() has nodes G (entry), H (m.c()), I (cond), J (g()), K (m.d()), L (f()) and M (return).]

Figure 2: Program with recursive calls using the module m (left) and respective CFG (right).
A second example, shown in Figure 3, exemplifies how Definition 2 handles control
flow with loops. In this example we have a single function f(), which is assumed to be the
entry point of the thread. We have G2 = (N2 , Σ2 , P2 , S2 ), with
N2 = {F , A, B, C, D, E, F, G, H},
Σ2 = {a, b, c, d},
S2 = F .
The set of productions P2 is,
F → A      A → B      B → a C | a G      C → D | E      D → b F
E → c F    F → B      G → d H            H → ǫ.
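For instance (this derivation is ours, added for illustration), the call trace a b a c a d, in which the loop body runs twice, belongs to L(G2 ):

    F ⇒ A ⇒ B ⇒ a C ⇒ a D ⇒ a b F ⇒ a b B ⇒ a b a C ⇒ a b a E ⇒ a b a c F
      ⇒ a b a c B ⇒ a b a c a G ⇒ a b a c a d H ⇒ a b a c a d.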
5    Contract Verification
The verification of a contract must ensure all sequences of calls specified by the contract are
executed atomically by all threads the client program may launch. Since there is a finite
number of call sequences defined by the contract we can verify each of these sequences to check
if the contract is respected.
The idea of the algorithm is to generate a grammar that captures the behavior of each thread
with respect to the module usage. Any sequence of calls contained in the contract can then
be found by parsing the word (i.e. the sequence of calls) in that grammar. This will create a
parsing tree for each place where the thread can execute that sequence of calls. The parsing
tree can then be inspected to determine the atomicity of the sequence of calls discovered.
Algorithm 1 presents the pseudo-code of the algorithm that verifies a contract against a
client’s program. For each thread t of a program P , it is necessary to determine if (and where)
any of the sequences of calls defined by the contract w = m1 , · · ·, mn occur in P (line 4). To do
so, each of these sequences is parsed in the grammar G′t (line 5), which includes all words
and sub-words of Gt . Sub-words must be included since we want to take into account partial
void f () {
    while (m.a()) {
        if (cond)
            m.b();
        else
            m.c();
        count++;
    }
    m.d();
}

[CFG (right panel): A (entry), B (m.a()), C (cond), D (m.b()), E (m.c()), F (count++), G (m.d()), H (return).]

Figure 3: Program using the module m (left) and respective CFG (right).
Algorithm 1 Contract verification algorithm.
Require: P , client’s program; C, module contract (set of allowed sequences).
 1: for t ∈ threads(P ) do
 2:     Gt ← make_grammar(t)
 3:     G′t ← subword_grammar(Gt )
 4:     for w ∈ C do
 5:         T ← parse(G′t , w)
 6:         for τ ∈ T do
 7:             N ← lowest_common_ancestor(τ , w)
 8:             if ¬run_atomically(N ) then
 9:                 return ERROR
10: return OK
traces of the execution of thread t; i.e., if we have a program m.a(); m.b(); m.c(); m.d(); we
are able to verify the word b c by parsing it in G′t . Notice that G′t may be ambiguous. Each
distinct parsing tree represents a different location where the sequence of calls m1 · · · mn may
occur in thread t. Function parse() returns the set of these parsing trees. Each parsing tree
contains information about the location of each method call of m1 · · · mn in program P (since
non-terminals represent CFG nodes). Additionally, by going upwards in the parsing tree, we
can find the node that represents the method under which all calls to m1 · · · mn are performed.
This node is the lowest common ancestor of the terminals m1 · · · mn in the parsing tree (line 7).
We therefore have to check that the lowest common ancestor is always executed atomically (line
8) to make sure the whole sequence of calls is executed under the same atomic context. Since
it is the lowest common ancestor, we are sure to require the minimal synchronization from
the program. A parsing tree also contains information about the location in the program where a
contract violation may occur, so we can offer detailed instructions to the programmer
on where this violation occurs and how to fix it.
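As an illustration of lines 7–8 of Algorithm 1, the sketch below finds the lowest node of a parsing tree that covers all terminals of the contract word and checks whether that node is executed under an atomic context. The ParseTreeNode class and its atomicContext flag are hypothetical names, not the prototype's API.

import java.util.*;

class ParseTreeNode {
    final String symbol;                   // non-terminal (CFG node / method) or terminal
    final boolean terminal;                // true for a contract-word terminal (a call to the module)
    final boolean atomicContext;           // assumed flag: the enclosing method is marked atomic
    final List<ParseTreeNode> children;

    ParseTreeNode(String symbol, boolean terminal, boolean atomicContext, List<ParseTreeNode> children) {
        this.symbol = symbol; this.terminal = terminal;
        this.atomicContext = atomicContext; this.children = children;
    }

    // Number of contract terminals covered by the subtree rooted here.
    int coveredTerminals() {
        if (terminal) return 1;
        int n = 0;
        for (ParseTreeNode c : children) n += c.coveredTerminals();
        return n;
    }

    // Lowest node whose subtree covers all `total` terminals of the contract word (line 7).
    ParseTreeNode lowestCommonAncestor(int total) {
        for (ParseTreeNode c : children)
            if (c.coveredTerminals() == total) return c.lowestCommonAncestor(total);
        return this;
    }

    // Line 8: the whole sequence is safe only if its lowest common ancestor runs atomically.
    static boolean runAtomically(ParseTreeNode root, int wordLength) {
        return root.lowestCommonAncestor(wordLength).atomicContext;
    }
}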
Grammar Gt can use the full expressive power of context-free languages. For this reason
it is not sufficient to use the LR(·) parsing algorithm [17], since it does not handle ambiguous
grammars. To deal with the full class of context-free grammars a GLR (Generalized LR)
parser must be used. GLR parsers explore all the ambiguities that can generate different
derivation trees for a word. A GLR parser was introduced by Tomita in [24]. Tomita presents
void atomic run () {
    f ();
    m . c ();
}

void f () {
    m . a ();
    g ();
}

void g () {
    while ( cond )
        m . b ();
}

Simplified grammar (R, F and G are the non-terminals of run(), f() and g()):
R → F c
F → a G
G → A
A → B | ε
B → b A

[The parsing tree of a b b c (right panel) has root R, deriving a b b through F, G, A and B, with c as the last child of R.]
Figure 4: Program (left), simplified grammar (center) and parsing tree of a b b c (right).
a non-deterministic version of the LR(0) parsing algorithm with some optimizations in the
representation of the parsing stack that improve the time and space complexity of the
parsing phase.
Another important point is that the number of parsing trees may be infinite. This is due
to loops in the grammar, i.e., derivations from a non-terminal to itself (A ⇒ · · · ⇒ A), which
often occur in Gt (every loop in the control flow graph will yield a corresponding loop in the
grammar). For this reason the parsing algorithm must detect and prune parsing branches that
would lead to redundant loops, ensuring that a finite number of parsing trees is returned. To achieve
this the parsing algorithm must detect a loop in the list of reductions it has applied in the
current parsing branch, and abort the branch if that loop did not contribute to parsing a new terminal.
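A minimal sketch of this pruning rule (hypothetical names, not the actual parser implementation): a parsing branch records the non-terminals it has reduced since the last terminal was shifted, and is aborted when a reduction repeats without a new terminal having been consumed in between.

import java.util.*;

// Tracks reductions performed on one GLR parsing branch since the last shifted terminal.
class LoopGuard {
    private final Set<String> reducedSinceLastShift = new HashSet<>();

    // Call when the branch shifts a terminal: progress was made, so reset the record.
    void onShift() { reducedSinceLastShift.clear(); }

    // Call when the branch reduces to `nonTerminal`. Returns false if this reduction
    // closes an unproductive loop, in which case the branch should be aborted.
    boolean onReduce(String nonTerminal) {
        return reducedSinceLastShift.add(nonTerminal);
    }
}

A branch would typically be discarded as soon as onReduce returns false, which keeps the set of parsing trees finite.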
Examples. Figure 4 shows a program (left) that uses the module m. The method run() is the
entry point of the thread t and is atomic. In the center of the figure we show a simplified
version of the Gt grammar. (The G′t grammar is not shown for the sake of brevity.) The
run(), f(), and g() methods are represented in the grammar by the non-terminals R, F , and
G respectively. If we apply Algorithm 1 to this program with the contract C = {a b b c}, the
resulting parsing tree, denoted by τ (line 6 of Algorithm 1), is represented in Figure 4 (right).
To verify that all calls represented in this tree are executed atomically, the algorithm determines
the lowest common ancestor of a b b c in the parsing tree (line 7), in this example R. Since
R is always executed atomically (atomic keyword), the program complies with the contract of the module.
Figure 5 exemplifies a situation where the generated grammar is ambiguous. In this case
the contract is C = {a b}. The figure shows the two distinct ways to parse the word a b
(right). Both of these trees will be obtained by our verification algorithm (line 5 of Algorithm 1).
The first tree (top) has F as the lowest common ancestor of a b. Since F corresponds to the
method f(), which is executed atomically, this tree respects the contract. The second tree
(bottom) has R as the lowest common ancestor of a b, corresponding to the execution of the
else branch of method run(). This non-terminal (R) does not correspond to an atomically
executed method, therefore the contract is not met and a contract violation is detected.
6 Analysis with Points-to
In an object-oriented programming language the module is often represented as an object,
in which case we should differentiate between the instances of the module's class. This section
explains how the analysis is extended to handle multiple instances of the module by using
points-to information.
To extend the analysis with points-to information, a different grammar is generated for each allocation site
of the module. Each allocation site represents an instance of the module, and the verification
void run () {
    if (...)
        f ();
    else {
        m . a ();
        g ();
    }
}

void atomic f () {
    m . a ();
    g ();
}

void atomic g () {
    m . b ();
}

Simplified grammar:
R → a G | F
F → a G
G → b

[The two parsing trees of a b (right panel): one derives a b through F (R → F → a G → b), the other directly through R (R → a G → b).]
Figure 5: Program (left), simplified grammar (center) and parsing tree of a b (right).
Algorithm 2 Contract verification algorithm with points-to information.
Require: P , client’s program;
         C, module contract (set of allowed sequences).
 1: for t ∈ threads(P ) do
 2:     for a ∈ mod_alloc_sites(t) do
 3:         Gta ← make_grammar(t, a)
 4:         G′ta ← subword_grammar(Gta )
 5:         for w ∈ C do
 6:             T ← parse(G′ta , w)
 7:             for τ ∈ T do
 8:                 N ← lowest_common_ancestor(τ, w)
 9:                 if ¬run_atomically(N ) then
10:                     return ERROR
11: return OK
algorithm verifies the contract words for each allocation site and thread (whereas the previous
algorithm verified the contract words for each thread). The new algorithm is shown in Algorithm 2. It generates the grammar Gta for a thread t and module instance a. This grammar
can be seen as the behavior of thread t with respect to the module instance a, ignoring every
other instance of the module.
To generate the grammar Gta we adapt Definition 2 to take into account only the
instance a. The grammar generation is extended in the following way:
Definition 3 (Program’s Thread Behavior Grammar with points-to). The grammar Gta =
(N, Σ, P, S) is built from the CFG of the client’s program thread t and an object allocation
site a, which represents an instance of the module.
We define N , Σ, P and S in the same way as in Definition 2.
The rules remain the same, except for rule 2, which becomes:
void replace ( int o , int n )
{
    if ( array . contains ( o ))
    {
        int idx = array . indexOf ( o );
        array . set ( idx , n );
    }
}
Figure 6: Example of atomicity violations with data dependencies.
{α → h β | β ∈ succ(α)} ⊂ P                                   if α : ⟦mod.h()⟧ and mod can only point to a    (6)

{α → h β | β ∈ succ(α)} ⊂ P
{α → β | β ∈ succ(α)} ⊂ P                                     if α : ⟦mod.h()⟧ and mod can point to a         (7)

{α → β | β ∈ succ(α)} ⊂ P                                     if α : ⟦mod.h()⟧ and mod cannot point to a      (8)
Here we use the points-to information to generate the grammar, taking into account the places
where each variable may point to. If a variable may point to our instance a but possibly also to other instances,
we consider both possibilities in Rule 7 of Definition 3.
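The rule selection of Definition 3 could be sketched as follows. The PointsTo interface and the helper below are hypothetical; in the prototype this information would come from Soot's points-to analysis rather than this interface.

import java.util.*;

interface PointsTo {
    boolean canOnlyPointTo(String var, String allocSite);
    boolean mayPointTo(String var, String allocSite);
}

class PointsToRules {
    // Emits the production bodies for a CFG node α that performs mod.h(), following rules (6)-(8).
    static List<List<String>> bodiesFor(String mod, String h, List<String> successors,
                                        String allocSite, PointsTo pt) {
        List<List<String>> bodies = new ArrayList<>();
        boolean may = pt.mayPointTo(mod, allocSite);
        boolean only = pt.canOnlyPointTo(mod, allocSite);
        for (String succ : successors) {
            if (may) bodies.add(List.of(h, succ));    // rules (6) and (7): α -> h β
            if (!only) bodies.add(List.of(succ));     // rules (7) and (8): α -> β
        }
        return bodies;
    }
}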
7 Contracts with Parameters
Frequently, contract clauses can be refined by considering the flow of data across calls to the
module. For instance, Figure 6 shows a procedure that replaces an item in an array by another.
This code contains two atomicity violations: the element might not exist when indexOf() is
called; and the index obtained might be outdated when set() is executed. Naturally, we can
define a clause that forces the atomicity of this sequence of calls as contains indexOf set,
but this can be substantially refined by explicitly requiring that a correlation exists between
the indexOf() and set() calls. To do so we extend the contract specification to capture the
arguments and return values of the calls, which allows the user to establish the relation of
values across calls.
The contract can therefore be extended to accommodate these relations; in this case the
clause might be
contains(X) Y=indexOf(X) set(Y,_).
This clause contains variables (X, Y) that must satisfy unification for the clause to be applicable. The underscore symbol (_) represents a variable that will not be used (and therefore
requires no binding). Algorithm 1 can easily be modified to filter out the parsing trees that
correspond to calls that do not satisfy the unification required by the clause in question.
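A possible sketch of this filter (the CallOccurrence class and the term representation are hypothetical; the paper does not show the prototype's internal data structures): each clause variable must be bound consistently across all matched calls, with _ acting as a wildcard.

import java.util.*;

// One call matched by a clause: its clause argument pattern (variables, "_" or constants)
// and the concrete program terms found at that call site.
class CallOccurrence {
    final List<String> pattern;
    final List<String> terms;
    CallOccurrence(List<String> pattern, List<String> terms) { this.pattern = pattern; this.terms = terms; }
}

class Unification {
    // Returns true if one consistent binding of the clause variables (X, Y, ...) covers all calls.
    static boolean unifies(List<CallOccurrence> calls) {
        Map<String, String> binding = new HashMap<>();
        for (CallOccurrence c : calls) {
            for (int i = 0; i < c.pattern.size(); i++) {
                String p = c.pattern.get(i), t = c.terms.get(i);
                if (p.equals("_")) continue;                     // wildcard: no binding required
                if (Character.isUpperCase(p.charAt(0))) {        // clause variable
                    String bound = binding.putIfAbsent(p, t);
                    if (bound != null && !bound.equals(t)) return false;
                } else if (!p.equals(t)) {                       // constant term must match exactly
                    return false;
                }
            }
        }
        return true;
    }
}

Parsing trees whose calls do not unify under any binding would simply be discarded before the atomicity check.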
In our implementation we require an exact match between the terms of the program to satisfy
the unification, since this was sufficient for most scenarios. It can however be advantageous to
generalize the unification relation. For example, the calls
array . contains ( o );
idx = array . indexOf ( o +1);
array . set ( idx , n );
also imply a data dependency between the first two calls. We would then say that A unifies
with B if, and only if, the value of A depends on the value of B, which can occur due to value
manipulation (data dependency) or control-flow dependency (control dependency). This can
be obtained by an information flow analysis, such as the one presented in [5], which can statically infer
the variables that influenced the value a variable holds at a specific point of the program.
This extension of the analysis can be a great advantage for some types of modules. As
an example we rewrite the contract for the Java standard library class java.util.ArrayList,
presented in Section 2:
1. contains(X) indexOf(X)
2. X=indexOf(_) (remove(X) | set(X,_) | get(X))
3. X=size() (remove(X) | set(X,_) | get(X))
4. add(X) indexOf(X).
This contract captures in detail the relations between calls that may be problematic, and
excludes from the contract sequences of calls that do not constitute atomicity violations.
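As an illustration, the refined contract could be attached to a module class roughly as below. Figure 7 in the appendix only shows @Contract(clauses = ...), so the clause string format and the minimal annotation declaration here are assumptions, not the tool's documented syntax.

import java.lang.annotation.*;

// Minimal stand-in for the tool's @Contract annotation (assumed shape).
@Retention(RetentionPolicy.CLASS)
@interface Contract { String[] clauses(); }

// Hypothetical wrapper module around java.util.ArrayList carrying the refined contract.
@Contract(clauses = {
    "contains(X) indexOf(X)",
    "X=indexOf(_) (remove(X) | set(X,_) | get(X))",
    "X=size() (remove(X) | set(X,_) | get(X))",
    "add(X) indexOf(X)"
})
class ContractedArrayList<E> {
    private final java.util.ArrayList<E> delegate = new java.util.ArrayList<>();

    public boolean contains(Object o) { return delegate.contains(o); }
    public int indexOf(Object o)      { return delegate.indexOf(o); }
    public E get(int i)               { return delegate.get(i); }
    public E set(int i, E e)          { return delegate.set(i, e); }
    public E remove(int i)            { return delegate.remove(i); }
    public boolean add(E e)           { return delegate.add(e); }
    public int size()                 { return delegate.size(); }
}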
8 Prototype
A prototype was implemented to evaluate our methodology. This tool analyses Java programs
using Soot [25], a Java static analysis framework. This framework directly analyses Java
bytecode, allowing us to analyse a compiled program, without requiring access to its source
code. In our implementation a method can be marked atomic with a Java annotation. The
contract is also defined as an annotation of the class representing the module under analysis.
The prototype is available in https://github.com/trxsys/gluon.
8.1 Optimizations
To achieve a reasonable time performance we implemented a few optimizations. Some of
these optimizations reduced the analysis run time by a few orders of magnitude in some cases,
without sacrificing precision.
A simple optimization was applied to the grammar to reduce its size. When constructing the
grammar, most control flow graph nodes will have a single successor. Rule 5 (Definition 2) will
always be applied to this kind of node, since it represents an instruction that does not call
any function. This creates redundant ambiguities in the grammar due to the multiple control
flow paths that never use the module under analysis. To avoid exploration of redundant parsing
branches we rewrite the grammar to transform productions of the form A → βBδ, B → α into
A → βαδ, if no other rule with head B exists. For example, an if-else that does not use the
module will create the productions A → B, A → C, B → D, C → D. This transformation will
reduce them to A → D, leaving no ambiguity for the parser to explore here. This optimization
reduced the analysis time by at least one order of magnitude in the majority of the
tests we performed. For instance, the Elevator test could not be analyzed in a reasonable time
prior to this optimization.
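A sketch of this simplification (hypothetical grammar representation, not the prototype's code): a non-terminal B with a single rule B → α is inlined into every occurrence A → βBδ, yielding A → βαδ.

import java.util.*;

class Simplifier {
    // productions: head -> list of alternative bodies (each body is a mutable list of symbols).
    static void inlineSingleRuleHeads(Map<String, List<List<String>>> productions) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, List<List<String>>> e : new ArrayList<>(productions.entrySet())) {
                String b = e.getKey();
                if (e.getValue().size() != 1) continue;      // B must have exactly one rule
                List<String> alpha = e.getValue().get(0);
                if (alpha.contains(b)) continue;             // sketch: skip directly self-recursive rules
                for (List<List<String>> bodies : productions.values()) {
                    for (List<String> body : bodies) {
                        int i;
                        while ((i = body.indexOf(b)) >= 0) { // A -> β B δ  becomes  A -> β α δ
                            body.remove(i);
                            body.addAll(i, alpha);
                            changed = true;
                        }
                    }
                }
            }
        }
    }
}

On the if-else example above, the chain A → B, A → C, B → D, C → D collapses to a single A → D.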
Another optimization was applied during the parsing phase. Since the GLR parser builds
the derivation tree bottom-up, we can be sure to find the lowest common ancestor of the terminals as early as possible. The lowest common ancestor will be the first non-terminal in the
tree covering all the terminals of the parsed word. This is easily determined if we propagate,
bottom-up, the number of terminals each node of the tree covers. Whenever a lowest common
ancestor is determined we do not need further parsing and can immediately verify whether the corresponding calls are in the same atomic context. This avoids completing the rest of the tree,
which can contain ambiguities, so a possibly large number of new branches is avoided.
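A sketch of this early-stop check (hypothetical node structure): each node built during a reduction stores how many terminals of the word it covers, and the first node covering all of them is the lowest common ancestor, so the branch needs no further parsing.

import java.util.*;

// Hypothetical node built during bottom-up GLR reductions.
class BottomUpNode {
    final List<BottomUpNode> children;
    final int coveredTerminals;            // terminals of the parsed word covered by this subtree

    BottomUpNode(List<BottomUpNode> children, boolean terminal) {
        this.children = children;
        int n = terminal ? 1 : 0;
        for (BottomUpNode c : children) n += c.coveredTerminals;
        this.coveredTerminals = n;
    }

    // The first node (going bottom-up) covering the whole word is the lowest common ancestor;
    // once it appears, the atomicity check can be run immediately and parsing of the branch stopped.
    boolean isLowestCommonAncestor(int wordLength) {
        if (coveredTerminals < wordLength) return false;
        for (BottomUpNode c : children)
            if (c.coveredTerminals >= wordLength) return false;   // the LCA is deeper in the tree
        return true;
    }
}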
Another key aspect of the parsing algorithm implementation is the loop detection. To
achieve good performance we should prune parsing branches that generate unproductive
loops as soon as possible. Our implementation guarantees that the same non-terminal never appears
twice in a parsing branch without contributing to the recognition of a new terminal.
Table 1: Optimization Improvements.

Optimization             Improvement
Grammar Simplification   428%
Stop Parsing at LCA      ?
Subword Parser           3%
To achieve better performance we also do not explicitly compute the subword grammar (G′t ). We have modified our GLR parser to parse subwords as described in [23]. This
greatly improves the parser performance because constructing G′t introduces many irrelevant
ambiguities that the parser would have to explore.
Table 1 shows how much each of the optimizations improves the analysis performance.
These results were obtained from a test designed to stress the performance of gluon, but they are consistent
with real applications. The Improvement column shows how much improvement each
particular optimization contributes to the analysis. The Stop Parsing at LCA optimization causes an improvement that we were not able to measure, since the test could not complete in reasonable
time without it.
8.2 Class Scope Mode
Gluon normally analyzes the entire program, taking into account any sequence of calls that may
spread across the whole program (as long as they are consecutive calls to a module). However
this is infeasible for very large programs, so for these programs we run the analysis separately for
each class, ignoring calls to other classes. This will detect contract violations where the control
flow does not escape the class, which is reasonable since code locality indicates a stronger
correlation between calls.
This mode of operation can be useful to analyze large programs, as they might have very
complex control flow graphs and thus be infeasible to analyze with the scope of the whole
program.
In this mode the grammar is built for each class instead of each thread. The methods of
the class will create non-terminals F1 , . . . , Fn , just as before. The only change in creating this
grammar is that we create the productions S → F1 | · · · | Fn as the starting production of
the grammar (S being the initial symbol). This means that we consider the execution of all
methods of the class being analyzed.
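The only grammar change in this mode is the new start production; a compact sketch (hypothetical representation, kept deliberately simple):

import java.util.*;

class ClassScopeGrammar {
    // Builds the start productions S -> F1 | ... | Fn over the method non-terminals of a class.
    static List<String> startProductions(String startSymbol, List<String> methodNonTerminals) {
        List<String> productions = new ArrayList<>();
        for (String f : methodNonTerminals)
            productions.add(startSymbol + " -> " + f);
        return productions;
    }
}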
9 Validation and Evaluation
To validate the proposed analysis we analyzed a few real-world programs (Tomcat, Lucene,
Derby, OpenJMS and Cassandra) as well as small programs known to contain atomicity violations. These small programs were adapted from the literature [2–4, 16, 19, 22, 27] and are
typically used to evaluate atomicity violation detection techniques. We modified these small
programs to employ a modular design and we wrote contracts to enforce the correct atomic
scope of calls to that module. Some additional clauses were added that may represent atomicity violations in the context of the module usage, even if the program does not violate those
clauses.
For the large benchmarks we aimed to discover new, previously unknown atomicity violations. To do so we had to create contracts in an automated manner, since the code base was too
large. To automate the generation of contracts we employ a very simplistic approach that tries
to infer the contract’s clauses based on what is already synchronized in the code. The idea is
that most sequences of calls that should be atomic were already used correctly somewhere. With this
in mind, we look for sequences of calls to a module that are performed atomically in at least two
points of the program. If a sequence of calls is performed atomically in two places of the code, this
might indicate that these calls are correlated and should be atomic. We used these sequences
Table 2: Validation results.

Benchmark              Clauses  Contract    False      Potential  Real   SLOC   Time (s)  ICFinder  ICFinder
                                Violations  Positives  AV         AV                       Static    Final
Allocate Vector [16]      1         1          0          0         1      183    0.120
Coord03 [2]               4         1          0          0         1      151    0.093
Coord04 [3]               2         1          0          0         1       35    0.039
Jigsaw [27]               1         1          0          0         1      100    0.044
Local [2]                 2         1          0          0         1       24    0.033
Knight [19]               1         1          0          0         1      135    0.219
NASA [2]                  1         1          0          0         1       89    0.035
Store [22]                1         1          0          0         1      621    0.090
StringBuffer [3]          1         1          0          0         1       27    0.032
UnderReporting [27]       1         1          0          0         1       20    0.029
VectorFail [22]           2         1          0          0         1       70    0.048
Account [27]              4         2          0          0         2       42    0.041
Arithmetic DB [19]        2         2          0          0         2      243    0.272
Connection [4]            2         2          0          0         2       74    0.058
Elevator [27]             2         2          0          0         2      268    0.333
OpenJMS 0.7               6        54         10         28         4     163K      148       126
Tomcat 6.0                9       157         16         47         3     239K     3070       365
Cassandra 2.0             1        60         24         15         2     192K      246       122
Derby 10.10               1        19          5          7         1     793K      522       391
Lucene 4.6                3       136         21         76         0     478K      151        15
as our contracts, after manually filtering out a few irrelevant ones. This is a very simple way
to generate contracts; ideally contracts should be written by the module’s developer to capture
common cases of atomicity violations, so we can expect contracts to be more fine-tuned to
better target atomicity violations if they are part of the regular project development.
Since these programs load classes dynamically, it is impossible to obtain complete points-to
information, so we are pessimistic and assume every module instance could be referenced
by any variable that is type-compatible. We also used the class scope mode described in
Section 8.2, because it would be impractical to analyze such large programs with the scope of
the whole program. These restrictions did not apply to the small programs analyzed.
Table 2 summarizes the results that validate the correctness of our approach. The table
contains both the micro benchmarks and the large, real-world benchmarks. The columns
represent the number of clauses of the contract (Clauses); the number of violations of those
clauses (Contract Violations); the number of false positives, i.e., sequences of calls that the
program will in fact never execute (False Positives); the number of potential atomicity violations,
i.e., atomicity violations that could happen if the object were concurrently accessed by multiple
threads (Potential AV); the number of real atomicity violations that can in fact occur and
compromise the correct execution of the program (Real AV); the number of lines of code of the
benchmark (SLOC); and the time it took for the analysis to complete (the analysis run time
excludes the Soot initialization time, which was always less than 179s per run).
Our tool was able to detect all violations of the contract by the client program in the
microbenchmarks, so no false negatives occurred, which supports the soundness of the analysis.
Since some tests include additional contract clauses with call sequences not present in the test
programs, we also show that, in general, the analysis does not detect spurious violations, i.e.,
false positives.2 A corrected version of each test was also verified, and the prototype correctly
detected that all of the contract’s call sequences in the client program were now atomically executed.
Correcting a program is trivial since the prototype pinpoints the methods that must be made
atomic, and ensures the synchronization required has the finest possible scope, since it is the
method that corresponds to the lowest common ancestor of the terminals in the parsing tree.
The large benchmarks show that gluon can be applied to large-scale programs with good
results. Even with a simple automated contract generation we were able to detect 10 atomicity
violations in real-world programs. Six of these bugs were reported (Tomcat 3 , Derby 4 ,
2 In these tests no false positives were detected. However, it is possible to create situations where false
positives occur. For instance, the analysis assumes a loop may iterate an arbitrary number of times, which
makes it consider execution traces that may not be possible.
3 https://issues.apache.org/bugzilla/show_bug.cgi?id=56784
4 https://issues.apache.org/jira/browse/DERBY-6679
Cassandra 5 ), with two bugs already fixed in Tomcat 8.0.11. The false positives reported
by gluon were all due to conservative points-to information, since the program loads
and calls classes and methods dynamically (leading to an incomplete points-to graph).
ICFinder [18] uses a static analysis to detect two types of common incorrect composition
patterns, which are then filtered with a dynamic analysis. Of the atomicity violations detected
by gluon, none was captured by ICFinder, since they failed to match the definition of
its patterns.
It is hard to compare the two tools directly, since they follow very different approaches.
Loosely speaking, in Table 2 the ICFinder Static column corresponds to the Contract Violations column, since they both represent the static part of each approach. The ICFinder Final
column cannot be directly compared with the Real AV column because “ICFinder Final” may
contain scenarios that do not represent atomicity violations (in particular if ICFinder does not correctly identify the atomic sets). ICFinder Final also cannot be directly compared with “Potential AV”,
since “Potential AV” is manually obtained from “Contract Violations”, whereas “ICFinder Final” is
the dynamic filtering of “ICFinder Static”. In the end the number of bugs reported by gluon
was 6, with 2 bugs confirmed and with fixes already applied, 1 bug considered highly unlikely,
and 3 bugs pending confirmation; ICFinder has 3 confirmed and fixed bugs on Tomcat. 6
The performance results show that our tool can run efficiently. For larger programs we have to
use class scope mode, sacrificing precision for performance, but we can still capture interesting
contract violations. The performance of the analysis depends greatly on the number of branches
the parser explores; a high number of parsing branches reflects the complexity of the
control flow of the program, which offers a huge number of distinct control flow paths. In general
the parsing phase will dominate the time complexity of the analysis, so the analysis run time
will be proportional to the number of explored parsing branches. Memory usage is not a
problem for the analysis, since the asymptotic space complexity is determined by the size of
the parsing table and the largest parsing tree. Memory usage is not affected by the number
of parsing trees because our GLR parser explores the parsing branches depth-first instead of
breadth-first. Depth-first exploration is possible because we never have parsing trees of infinite height,
due to our detection of unproductive loops.
9.1 ICFinder
ICFinder tries to automatically infer what a module is, as well as incorrect compositions of pairs of
calls to modules.
Two patterns are used to detect potential atomicity violations in method call compositions:
• USE: Detects stale value errors. This pattern detects data or control flow dependencies
between two calls to the module.
• COMP: If a call to method a() dominates b() and b() post-dominates a() in some place,
that is captured by this pattern. This means that, for each piece of code involving two
calls to the module (a() and b()), if a() is always executed before b() and b() is always
executed after a(), it is a COMP violation.
Both of these patterns are extremely broad and produce many false positives. To deal with this,
the authors filter these results with a dynamic analysis that only considers violations as defined
in [26]. This analysis assumes that the notion of atomic set was correctly inferred by ICFinder.
10 Related Work
The methodology of design by contract was introduced by Meyer [21] as a technique to write
robust code, based on contracts between programs and objects. In this context, a contract
5 https://issues.apache.org/jira/browse/CASSANDRA-7757
6 https://issues.apache.org/bugzilla/show_bug.cgi?id=53498
specifies the necessary conditions the program must meet in order to call the object’s methods,
whose semantics is ensured if those pre-conditions are met.
Cheon et al. propose the use of contracts to specify protocols for accessing objects [8].
These contracts use regular expressions to describe the sequences of calls that can be executed
for a given Java object. The authors present a dynamic analysis for the verification of the
contracts. This contrasts with our analysis, which statically validates the contracts. Beckman et
al. introduce a methodology based on typestate that statically verifies whether the protocol of an object
is respected [4]. This approach requires the programmer to explicitly unpack objects before
they can be used. Hurlin [15] extends the work of Cheon to support protocols in concurrent
scenarios. The protocol specification is extended with operators that allow methods to be
executed concurrently, and with pre-conditions that have to be satisfied before the execution of a
method. This analysis is statically verified by a theorem prover. Theorem proving, in general,
is of limited applicability since automated theorem proving tends to be inefficient.
Peng Liu et al. developed a way to detect atomicity violations caused by method composition [18], much like the ones we describe in this paper. They define two patterns that
are likely to cause atomicity violations, one capturing stale value errors and the other
trying to infer a correlation between method calls by analyzing the control flow graph (if a() is
executed before b() and b() is executed after a()). These patterns are captured statically and
then filtered with a dynamic analysis.
Many works on atomicity violations can be found in the literature. Artho et al. [2] define the notion of
high-level data races, which characterize sequences of atomic operations that should be executed
atomically to avoid atomicity violations. The definition of high-level data races does not fully
capture the violations that may occur in a program. Praun and Gross [27] extend Artho’s
approach to detect potential anomalies in the execution of methods of an object, and increase
the precision of the analysis by distinguishing between read and write accesses to variables shared
between multiple threads. An additional refinement of the notion of high-level data races was
introduced by Pessanha [10], relaxing the properties defined by Artho, which results in a
higher precision of the analysis. Farchi et al. [11] propose a methodology to detect atomicity
violations in the usage of modules based on the definition of high-level data races. Another
common type of atomicity violation that arises when sequencing several atomic operations
is the stale value error. This type of anomaly is characterized by the usage of values obtained
atomically across several atomic operations; these values can be outdated and compromise
the correct execution of the program. Various analyses were developed to detect these types
of anomalies [3, 6, 10]. Several other analyses to verify atomicity can be found in the
literature, based on access patterns to shared variables [19, 26], type systems [7], semantic
invariants [9], and other specific methodologies [12–14].
11 Concluding Remarks
In this paper we present the problem of atomicity violations that arise when using a module, even when
its methods are individually synchronized by some concurrency control mechanism. We
propose a solution based on the design by contract methodology: our contracts define which
call sequences to a module should be executed in an atomic manner.
We introduce a static analysis to verify these contracts. The proposed analysis extracts the
behavior of the client’s program with respect to the module usage, and verifies whether the
contract is respected.
A prototype was implemented, and the experimental results show that the analysis is highly
precise and runs efficiently on real-world programs.
References
[1] Frances E. Allen. Control flow analysis. SIGPLAN Not., 5(7):1–19, July 1970.
[2] Cyrille Artho, Klaus Havelund, and Armin Biere. High-level data races. Software Testing,
Verification and Reliability, 13(4):207–227, December 2003.
[3] Cyrille Artho, Klaus Havelund, and Armin Biere. Using block-local atomicity to detect
stale-value concurrency errors. Automated Technology for Verification and Analysis, pages
150–164, 2004.
[4] Nels E. Beckman, Kevin Bierhoff, and Jonathan Aldrich. Verifying correct usage of atomic
blocks and typestate. SIGPLAN Not., 43(10):227–244, October 2008.
[5] Jean-Francois Bergeretti and Bernard A Carré. Information-flow and data-flow analysis of
while-programs. ACM Transactions on Programming Languages and Systems (TOPLAS),
7(1):37–61, 1985.
[6] M. Burrows and K.R.M. Leino. Finding stale-value errors in concurrent programs. Concurrency and Computation: Practice and Experience, 16(12):1161–1172, 2004.
[7] Luís Caires and João C. Seco. The type discipline of behavioral separation. SIGPLAN
Not., 48(1):275–286, January 2013.
[8] Yoonsik Cheon and Ashaveena Perumandla. Specifying and checking method call sequences of java programs. Software Quality Control, 15(1):7–25, March 2007.
[9] R. Demeyer and W. Vanhoof. A framework for verifying the application-level race-freeness
of concurrent programs. In 22nd Workshop on Logic-based Programming Environments
(WLPE 2012), page 10, 2012.
[10] Ricardo J. Dias, Vasco Pessanha, and João M. Lourenço. Precise detection of atomicity violations. In Hardware and Software: Verification and Testing, Lecture Notes in Computer
Science. Springer Berlin / Heidelberg, November 2012. HVC 2012 Best Paper Award.
[11] Eitan Farchi, Itai Segall, João M. Lourenço, and Diogo Sousa. Using program closures to
make an application programming interface (api) implementation thread safe. In Proceedings of the 2012 Workshop on Parallel and Distributed Systems: Testing, Analysis, and
Debugging, PADTAD 2012, pages 18–24, New York, NY, USA, 2012. ACM.
[12] Cormac Flanagan and Stephen N Freund. Atomizer: a dynamic atomicity checker for
multithreaded programs. SIGPLAN Not., 39(1):256–267, January 2004.
[13] Cormac Flanagan and Stephen N. Freund. FastTrack: efficient and precise dynamic race
detection. Commun. ACM, 53(11):93–101, November 2010.
[14] Cormac Flanagan, Stephen N. Freund, and Jaeheon Yi. Velodrome: a sound and complete
dynamic atomicity checker for multithreaded programs. SIGPLAN Not., 43(6):293–303,
June 2008.
[15] Clément Hurlin. Specifying and checking protocols of multithreaded classes. In Proceedings
of the 2009 ACM symposium on Applied Computing, SAC ’09, pages 587–592, New York,
NY, USA, 2009. ACM.
[16] IBM’s Concurrency Testing Repository.
[17] Donald E Knuth. On the translation of languages from left to right. Information and
control, 8(6):607–639, 1965.
[18] Peng Liu, Julian Dolby, and Charles Zhang. Finding incorrect compositions of atomicity.
In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering,
pages 158–168. ACM, 2013.
[19] J. Lourenço, D. Sousa, B. Teixeira, and R. Dias. Detecting concurrency anomalies in
transactional memory programs. Computer Science and Information Systems/ComSIS,
8(2):533–548, 2011.
[20] Shan Lu, Soyeon Park, Eunsoo Seo, and Yuanyuan Zhou. Learning from mistakes: a
comprehensive study on real world concurrency bug characteristics. SIGPLAN Not.,
43(3):329–339, March 2008.
[21] Bertrand Meyer. Applying "design by contract". Computer, 25(10):40–51, October 1992.
[22] Vasco Pessanha. Verificação prática de anomalias em programas de memória transaccional
(Practical verification of anomalies in transactional memory programs). Master’s thesis,
Universidade Nova de Lisboa, 2011.
[23] Jan Rekers and Wilco Koorn. Substring parsing for arbitrary context-free grammars.
ACM Sigplan Notices, 26(5):59–66, 1991.
[24] Masaru Tomita. An efficient augmented-context-free parsing algorithm. Comput. Linguist., 13(1-2):31–46, January 1987.
[25] Raja Vallée-Rai, Phong Co, Etienne Gagnon, Laurie Hendren, Patrick Lam, and Vijay
Sundaresan. Soot - a java bytecode optimization framework. In Proceedings of the 1999
conference of the Centre for Advanced Studies on Collaborative research, CASCON ’99,
pages 13–. IBM Press, 1999.
[26] M. Vaziri, F. Tip, and J. Dolby. Associating synchronization constraints with data in an
object-oriented language. In ACM SIGPLAN Notices, volume 41, pages 334–345. ACM,
2006.
[27] C. Von Praun and T.R. Gross. Static detection of atomicity violations in object-oriented
programs. Journal of Object Technology, 3(6):103–122, 2004.
12 Appendix
12.1 Grammar Example
@Contract ( clauses =...)
class Module
{
    public Module () { }
    public void a () { }
    public void b () { }
    public void c () { }
}

public class Main
{
    private static Module m ;

    private static void f ()
    {
        m . c ();
    }

    @Atomic
    private static void g ()
    {
        m . a ();
        m . b ();
        f ();
    }

    public static void main ( String [] args )
    {
        m = new Module ();
        for ( int i =0; i < 10; i ++)
            if ( i %2 == 0)
                m . a ();
            else
                m . b ();
        f ();
        g ();
    }
}
Figure 7: Program.
Start: A
A -> B
D -> E F
E -> X
F -> G
G -> H
B -> C
C -> D
L -> M
L -> N
M -> O
N -> R
O -> a P
H -> I
I -> K
I -> J
J -> L
K -> T U
BF -> BG
U -> V W
BG -> b BH
T -> BA
BH -> T BI
W -> epsilon
BI -> epsilon
V -> BD
BB -> c BC
Q -> S
BC -> epsilon
P -> Q
BD -> BE
S -> H
BE -> a BF
R -> b Q
Y -> Z
X -> Y
Z -> epsilon
BA -> BB
Figure 8: Non-optimized grammar.
Start: A’
A’ -> A
A -> I
L -> b I
L -> a I
I -> L
I -> T V
T -> c
V -> a b T
Figure 9: Optimized grammar.
SYMMETRIC AUTOMORPHISMS OF FREE GROUPS,
BNSR-INVARIANTS, AND FINITENESS PROPERTIES
arXiv:1607.03043v1 [] 11 Jul 2016
MATTHEW C. B. ZAREMSKY
Abstract. The BNSR-invariants of a group G are a sequence Σ1 (G) ⊇ Σ2 (G) ⊇ · · ·
of geometric invariants that reveal important information about finiteness properties of
certain subgroups of G. We consider the symmetric automorphism group ΣAutn and
pure symmetric automorphism group PΣAutn of the free group Fn , and inspect their
BNSR-invariants. We prove that for n ≥ 2, all the “positive” and “negative” character
classes of PΣAutn lie in Σn−2 (PΣAutn ) \ Σn−1 (PΣAutn ). We use this to prove that for
n ≥ 2, Σn−2 (ΣAutn ) equals the full character sphere S 0 of ΣAutn but Σn−1 (ΣAutn ) is
empty, so in particular the commutator subgroup ΣAut′n is of type Fn−2 but not Fn−1 .
Our techniques involve applying Morse theory to the complex of symmetric marked cactus
graphs.
Introduction
The Bieri–Neumann–Strebel–Renz (BNSR) invariants Σm (G) of a group G are a sequence of geometric invariants Σ1 (G) ⊇ Σ2 (G) ⊇ · · · that encode a large amount of
information about the subgroups of G containing the commutator subgroup G′ . For example if G is of type Fn and m ≤ n then Σm (G) reveals precisely which such subgroups
are of type Fm . Recall that a group is of type Fn if it admits a classifying space with
compact n-skeleton; these finiteness properties are an important class of quasi-isometry
invariants of groups. The BNSR-invariants are in general very difficult to compute; a complete description is known for the class of right-angled Artin groups [MMV98, BG99], but
not many other substantial families of groups. A complete picture also exists for the generalized Thompson groups Fn,∞ [BGK10, Koc12, Zar15], and the first invariant Σ1 is also
known for some additional classes of groups, e.g., one-relator groups [Bro87], pure braid
groups [KMM15] and pure symmetric automorphism groups of right-angled Artin groups
[OK00, KP14], among others.
In this paper, we focus on the groups ΣAutn and PΣAutn of symmetric and pure symmetric automorphisms of the free group Fn . An automorphism of Fn is symmetric if it
takes each basis element to a conjugate of a basis element, and pure symmetric if it takes
each basis element to a conjugate of itself. These are also known as the (pure) loop braid
groups, and are the groups of motions of n unknotted unlinked oriented loops in 3-space;
Date: July 12, 2016.
2010 Mathematics Subject Classification. Primary 20F65; Secondary 20F28, 57M07.
Key words and phrases. Symmetric automorphism, BNSR-invariant, finiteness properties.
an element describes these loops moving around and through each other, ending up back
where they started, either individually in the pure case or just as a set in the non-pure
case (but preserving orientation in both cases). Other names for these and closely related
groups include welded braid groups, permutation-conjugacy automorphism groups, braidpermutation groups and more. See [Dam16] for a discussion of the many guises of these
groups. Some topological properties known for PΣAutn include that it has cohomological
dimension n − 1 [Col89], that it is a duality group [BMMM01], and that its cohomology ring has been
computed [JMM06].
The first invariant Σ1 (PΣAutn ) was fully computed by Orlandi–Korner [OK00] (she
denotes the group by P Σn ). Koban and Piggott subsequently computed Σ1 (G) for G the
group of pure symmetric automorphisms of any right-angled Artin group [KP14]. One
reason that the question of BNSR-invariants is interesting for PΣAutn is that PΣAutn is
similar to a right-angled Artin group, for instance it admits a presentation in which the
relations are all commutators (see Section 2), but for n ≥ 3 it is not a right-angled Artin
group [KP14], and it is not known whether it is a CAT(0) group [BMMM01, Question 6.4].
The BNSR-invariants are completely known for right-angled Artin groups, but the Morse
theoretic proof of this fact in [BG99] made essential use of the CAT(0) geometry of the
relevant complexes.
Our approach here is to use Morse theory applied to the complex of symmetric marked
cactus graphs ΣKn to prove the following main results:
Theorem A. For n ≥ 2, if χ is a positive or negative1 character of PΣAutn then [χ] ∈
Σn−2 (PΣAutn ) \ Σn−1 (PΣAutn ).
Theorem B. For n ≥ 2, we have Σn−2 (ΣAutn ) = S(ΣAutn ) = S 0 and Σn−1 (ΣAutn ) = ∅.
In particular the commutator subgroup ΣAut′n is of type Fn−2 but not Fn−1 .
For example this shows that ΣAut′n is finitely generated if and only if n ≥ 3, and finitely
presentable if and only if n ≥ 4. It appears that these are already new results (except for
the fact that ΣAut′2 is not finitely generated, which is easy to see since ΣAut2 ≅ F2 ⋊ S2 ).
Theorem B also provides what could be viewed as the first examples for m ≥ 2 of “naturally
occurring” groups G of type F∞ such that Σm−1 (G) = S(G) but Σm (G) = ∅, and of groups
of type F∞ whose commutator subgroups have arbitrary finiteness properties. (One can
also construct more ad hoc examples: we have noticed that taking a semidirect product of
F2n with the Coxeter group of type Bn = Cn also produces a group with these properties.)
As a remark, contrasting the loop braid group ΣAutn with the classical braid group Bn , it
is easy to see that Σm (Bn ) = S(Bn ) = S 0 for all m and n, and Bn′ is of type F∞ for all n.
For the case n = 3 we can actually get a full computation of Σm (PΣAut3 ); Orlandi–
Korner already computed Σ1 (PΣAut3 ) (it is dense in S(PΣAut3 ); see Citation 2.2), and
we prove that Σ2 (PΣAut3 ) = ∅ (see Theorem 4.23). We tentatively conjecture that
Σn−2 (PΣAutn ) is always dense in S(PΣAutn ) and Σn−1 (PΣAutn ) is always empty, but for
n ≥ 4 it seems this cannot be proved using our techniques, as discussed in Remark 4.24.
1For
the definitions of “positive” and “negative” consult Definition 2.4.
As a remark, there is a result of Pettet involving finiteness properties of some other
normal subgroups of PΣAutn . Namely she found that the kernel of the natural projection
PΣAutn → PΣAutn−1 is finitely generated but not finitely presentable when n ≥ 3 [Pet10].
This is in contrast to the pure braid situation, where the kernel of the “forget a strand”
map P Bn → P Bn−1 is of type F∞ (in fact it is the free group Fn−1 ).
This paper is organized as follows. In Section 1 we recall the background on BNSRinvariants and Morse theory. In Section 2 we discuss the groups of interest, and in Section 3
we discuss the complex ΣKn . We prove Theorem A in Section 4, and with Theorem A in
hand we quickly prove Theorem B in Section 5.
Acknowledgments. I am grateful to Alex Schaefer for a helpful conversation about graph
theory that in particular helped me figure out how to prove that Σ2 (PΣAut3 ) = ∅ (see
Subsection 4.3), to Celeste Damiani for pointing me toward the paper [Sav96], and to
Robert Bieri for enlightening discussions about the novelty of the behavior of the BNSRinvariants found in Theorem B.
1. BNSR-invariants and Morse theory
In this rather technical section we recall the definition of the BNSR-invariants, and set
up the Morse theoretic approach that we will use. The results in Subsection 1.3 are general
enough that we expect they should be useful in the future to compute BNSR-invariants of
other interesting groups.
1.1. BNSR-invariants. A CW-complex Z is called a classifying space for G, or K(G, 1),
if π1 (Z) ≅ G and πk (Z) = 0 for all k ≠ 1. We say that G is of type Fn if it admits a
K(G, 1) with compact n-skeleton. For example G is of type F1 if and only if it is finitely
generated, and of type F2 if and only if it is finitely presentable. If G is of type Fn for all
n we say it is of type F∞ . If G acts properly and cocompactly on an (n − 1)-connected
CW-complex, then G is of type Fn .
Definition 1.1 (BNSR-invariants). Let G be a group acting properly and cocompactly on
an (n − 1)-connected CW-complex Y (so G is of type Fn ). Let χ : G → R be a character
of G, i.e., a homomorphism to R. There exists a map hχ : Y → R, which we will call a
character height function, such that hχ (g.y) = χ(g) + hχ (y) for all g ∈ G and y ∈ Y . For
t ∈ R let Yχ≥t be the full subcomplex of Y supported on those 0-cells y with hχ (y) ≥ t.
Let [χ] be the equivalence class of χ under scaling by positive real numbers. The character
sphere S(G) is the set of non-trivial character classes [χ]. For m ≤ n, the mth BNSRinvariant Σm (G) is defined to be
Σm (G) := {[χ] ∈ S(G) | (Yχ≥t )t∈R is essentially (m − 1)-connected}.
Recall that (Yχ≥t )t∈R is said to be essentially (m − 1)-connected if for all t ∈ R there
exists −∞ < s ≤ t such that the inclusion of Yχ≥t into Yχ≥s induces the trivial map in πk
for all k ≤ m − 1.
It turns out Σm (G) is well defined up to the choice of Y and hχ (see for example [Bux04,
Definition 8.1]). As a remark, the definition there used the filtration by the sets h_χ^{−1}([t, ∞)), t ∈ R,
but thanks to cocompactness this filtration is essentially (m − 1)-connected if and only if
our filtration (Yχ≥t )t∈R is.
One important application of BNSR-invariants is the following:
Citation 1.2. [BR88, Theorem B and Remark 6.5] Let G be a group of type Fm . Let
G′ ≤ H ≤ G. Then H is of type Fm if and only if for every non-trivial character χ of G
such that χ(H) = 0, we have [χ] ∈ Σm (G).
For example, if H = ker(χ) for χ a discrete character of G, i.e., one with image Z, then
H is of type Fm if and only if [±χ] ∈ Σm (G). Also note that G′ itself is of type Fm if and
only if Σm (G) = S(G).
Other important classical properties of the Σm (G) are that they are all open subsets of
S(G) and that they are invariant under the natural action of Aut(G) on S(G) [BNS87,
BR88].
1.2. Morse theory. Bestvina–Brady Morse theory can be a useful tool for computing
BNSR-invariants. In this section we give the relevant definitions and results from Morse
theory, in the current level of generality needed.
Let Y be an affine cell complex (see [BB97, Definition 2.1]). The star stY v of a 0-cell v
in Y is the subcomplex of Y consisting of cells that are faces of cells containing v. The link
lkY v of v is the simplicial complex stY v of directions out of v into stY v. We will suppress
the subscript Y from the notation when it is clear from context. If v and w are distinct
0-cells sharing a 1-cell we will call v and w adjacent and write v adj w.
In [BB97], Bestvina and Brady defined a Morse function on an affine cell complex Y to
be a map Y → R that is affine on cells, takes discretely many values on the 0-cells, and is
non-constant on 1-cells. When using Morse theory to compute BNSR-invariants though,
these last two conditions are often too restrictive. The definition of Morse function that
will prove useful for our purposes is as follows.
Definition 1.3 (Morse function). Let Y be an affine cell complex and let h : Y → R and
f : Y → R be functions that are affine on cells. We call (h, f ) : Y → R × R a Morse
function if the set {h(v) − h(w) | v, w ∈ Y (0) , v adj w} does not have 0 as a limit point
(we will call it discrete near 0), the set {f (v) | v ∈ Y (0) } is finite2, and if v, w ∈ Y (0) with
v adj w and h(v) = h(w) then f (v) ≠ f (w).
For example if h takes discrete values on 0-cells and distinct values on adjacent 0-cells,
then (taking f to be constant and ignoring it) we recover Bestvina and Brady’s notion of
“Morse function”.
2In
what follows it will be clear that “finite” could be replaced with “well ordered” but for our present
purposes we will just assume it is finite.
Using the usual order on R and the lexicographic order on R × R, it makes sense to
compare (h, f ) values of 0-cells. On a given cell c, since h and f are affine on c it is clear
that (h, f ) achieves its maximum and minimum values at unique faces of c, and the last
assumption in Definition 1.3 ensures these will be 0-cells.
Definition 1.4 (Ascending star/link). Given a Morse function (h, f ) on an affine cell
complex Y , define the ascending star st↑ v of a 0-cell v in Y to be the subcomplex of st v
consisting of all faces of those cells c for which the unique 0-face of c where (h, f ) achieves
its minimum is v. Define the ascending link lk↑ v to be the subcomplex of lk v consisting
of directions into st↑ v. Note that lk↑ v is a full subcomplex of lk v, since h and f are affine
on cells.
For Y an affine cell complex, (h, f ) a Morse function on Y and t ∈ R, denote by Yh≥t
the subcomplex of Y supported on those 0-cells v with h(v) ≥ t.
Lemma 1.5 (Morse Lemma). Let Y be an affine cell complex and (h, f ) : Y → R × R
a Morse function. Let t ∈ R and s ∈ [−∞, t). If for all 0-cells v with h(v) ∈ [s, t)
the ascending link lk↑ v is (m − 1)-connected, then the inclusion Yh≥t → Yh≥s induces an
isomorphism in πk for k ≤ m − 1 and an epimorphism in πm .
Proof. The essential parts of the proof are the same as in [BB97]. Choose ε > 0 such that
for any v adj w, |h(v) − h(w)| ∉ (0, ε) (this is possible since the set of values h(v) − h(w) for
v adj w is discrete near 0). We can assume by induction (and by compactness of spheres if
s = −∞) that t−s ≤ ε. In particular if adjacent 0-cells v and w both lie in Yh≥s \Yh≥t , then
h(v) = h(w) and f (v) 6= f (w). To build up from Yh≥t to Yh≥s , we need to glue in the 0-cells
of Yh≥s \ Yh≥t along their relative links in some order such that upon gluing in v, all of lk↑ v
is already present, but nothing else in lk v, so the relative link is precisely the ascending
link. To do this, we put any order we like on each set Fi := {v ∈ Y_{h≥s}^{(0)} \ Y_{h≥t}^{(0)} | f (v) = i} for
i ∈ f (Y^{(0)} ), and then extend these to an order on Y_{h≥s}^{(0)} \ Y_{h≥t}^{(0)} by declaring that everything
in Fi comes after everything in Fj whenever i < j. Now when we glue in v, for w ∈ lk v we
have w ∈ lk↑ v if and only if either h(w) > h(v), in which case h(w) ≥ t and w is already
present, or h(w) = h(v) and f (w) > f (v), in which case w ∈ Ff (w) is also already present.
Since the relevant ascending links are (m − 1)-connected by assumption, the result follows
from the Seifert–van Kampen, Mayer–Vietoris and Hurewicz Theorems.
As a corollary to the proof, we have:
Corollary 1.6. With the same setup as the Morse Lemma, if additionally for all 0-cells
v with s ≤ h(v) < t we have H̃_{m+1}(lk↑ v) = 0, then the inclusion Yh≥t → Yh≥s induces an
injection in H̃_{m+1}.
Proof. In the proof of the Morse Lemma, we saw that Yh≥s is obtained from Yh≥t by coning
off the ascending links of 0-cells v with s ≤ h(v) < t, so this is immediate from the
Mayer–Vietoris sequence.
For example if Y is (m + 1)-dimensional, so the links are at most m-dimensional, then
this additional condition will always be satisfied.
When dealing with BNSR-invariants, the following is particularly useful:
Corollary 1.7. Let Y be an (m − 1)-connected affine cell complex with a Morse function
(h, f ). Suppose there exists q such that, for every 0-cell v of Y with h(v) < q, lk↑ v is (m − 1)-connected. Then the filtration (Yh≥t )t∈R is essentially (m − 1)-connected. Now assume
additionally that H̃_{m+1}(Y ) = 0 and for every 0-cell v of Y with h(v) < q, H̃_{m+1}(lk↑ v) = 0,
and that for all p there exists a 0-cell v with h(v) < p such that H̃_m(lk↑ v) ≠ 0. Then the
filtration (Yh≥t )t∈R is not essentially m-connected.
Proof. By the Morse Lemma, for any r ≤ q the inclusion Yh≥r → Y = Yh≥−∞ induces an
isomorphism in πk for k ≤ m − 1. Since Y is (m − 1)-connected, so is Yh≥r . Now for any
t ∈ R we just need to choose s = min{q, t} and we get that the inclusion Yh≥t → Yh≥s
induces the trivial map in πk for k ≤ m − 1, simply because Yh≥s is (m − 1)-connected.
For the second claim, suppose that (Yh≥t )t∈R is essentially m-connected. Say t < q, and
choose s ≤ t such that the inclusion Yh≥t → Yh≥s induces the trivial map in πk for k ≤ m.
Also, since t < q, this inclusion induces a surjection in these πk by the Morse Lemma, so in
fact Yh≥s itself is m-connected, as are all Yh≥r for r ≤ s (for the same reason). Now choose
v such that h(v) < s and H̃_m(lk↑ v) ≠ 0. Since H̃_m(Yh≥r ) = 0 for all r ≤ s, Mayer–Vietoris
and Corollary 1.6 say that H̃_{m+1}(Yh≥q ) ≠ 0 for any q ≤ h(v). But this includes q = −∞,
which contradicts our assumption that H̃_{m+1}(Y ) = 0.
1.3. BNSR-invariants via Morse theory. We now return to the situation in Definition 1.1, so Y is an (n − 1)-connected CW-complex on which G acts properly and cocompactly (and, we assume, cellularly), χ is a character of G, and hχ is a character height
function on Y . The goal of this subsection is to establish a Morse function on Y using hχ .
Let us make two additional assumptions. First, assume Y is simplicial (this is just to
ensure that any function on Y (0) can be extended to a function on Y that is affine on cells).
Second, assume that no adjacent 0-simplicies in Y share a G-orbit (if this is not the case,
it can be achieved by subdividing). Let f : Y (0) /G → R be any function that takes distinct
values on adjacent 0-cells, where the cell structure on Y /G is induced from Y . (Just to give
some examples, one could construct f by randomly assigning distinct values to the 0-cells
in Y /G, or one could take the barycentric subdivision and have f read the dimension.)
Define f : Y (0) → R via f (v) := f (G.v), and extend f to a map (also called f ) on all of Y
by extending affinely to each simplex.
Lemma 1.8. With Y , hχ and f as above, (hχ , f ) : Y → R × R is a Morse function.
Proof. The functions hχ and f are affine on cells by construction. The set {f (v) | v ∈ Y (0) }
equals the set {f (G.v) | G.v ∈ Y (0) /G}, which is finite since Y /G is compact. For any
g ∈ G we have hχ (g.v) − hχ (g.w) = χ(g) + hχ (v) − χ(g) − hχ (w) = hχ (v) − hχ (w), so
by compactness of Y /G the set {hχ (v) − hχ (w) | v, w ∈ Y (0) , v adj w} is finite (and hence
discrete near 0). Finally, since f takes distinct values on adjacent 0-cells in Y /G, and no
adjacent 0-cells in Y share an orbit, we see f takes distinct values on adjacent 0-cells in
Y.
In particular Corollary 1.7 can now potentially be used to prove that (Yχ≥t )t∈R is or is
not essentially (m − 1)-connected, and hence that [χ] is or is not in Σm (G).
While any f constructed as above will make (hχ , f ) a Morse function, this does not mean
every f may be useful, for instance if the ascending links are not as highly connected as
one would hope. In fact it seems likely that situations exist where every choice of f yields
a “useless” Morse function. Hence, in practice one hopes to have a concrete space Y with
a natural choice of f that produces nice ascending links.
2. (Pure) symmetric automorphism groups
We now turn to our groups of interest.
Let Fn be the free group with basis S := {x1 , . . . , xn }. An automorphism α ∈ Aut(Fn )
is called symmetric if for each i ∈ [n] := {1, . . . , n}, (xi )α is conjugate to xj for some
j; if each (xi )α is conjugate to xi we call α pure symmetric 3. Note that in some texts,
“symmetric” allows for (xi )α to be conjugate to some x_j^{−1}, but we do not allow that here.
Denote by ΣAutn the group of all symmetric automorphisms of Fn , and by PΣAutn the
group of pure symmetric automorphisms. The abelianization Fn → Z^n induces a surjection
Aut(Fn ) → GLn (Z), and the restriction of this map to ΣAutn yields a splitting
ΣAutn ≅ PΣAutn ⋊ Sn .
An equivalent description of ΣAutn (and PΣAutn ) is as the (pure) loop braid group,
i.e., the group of (pure) motions of n unknotted, unlinked oriented circles in 3-space.
The subgroup of ΣAutn consisting of those automorphisms taking x1 · · · xn to itself is
isomorphic to the classical braid group Bn , and the intersection of this with PΣAutn is
the classical pure braid group P Bn [Sav96]. Other names for ΣAutn and closely related
groups include welded braid groups, permutation-conjugacy automorphism groups, braidpermutation groups and more. Details on the various viewpoints for these groups can be
found for example in [Dam16].
In [McC86], McCool found a (finite) presentation for PΣAutn . The generators are the
automorphisms αi,j (i ≠ j) given by (xi )αi,j = x_j^{−1} x_i x_j and (xk )αi,j = xk for k ≠ i, and
the defining relations are [αi,j , αk,ℓ ] = 1, [αi,j , αk,j ] = 1 and [αi,j αk,j , αi,k ] = 1, for distinct
i, j, k, ℓ. (In particular note that this implies PΣAut2 ≅ F2 .) It will also be convenient
later to consider automorphisms αI,j , defined via
αI,j := ∏_{i∈I} αi,j
3Automorphisms
will be acting on the right here, so we will reflect this in the notation.
for I ⊆ [n] \ {j}, where the product can be taken in any order thanks to the relation
[αi,j , αk,j ] = 1. Following Collins [Col89] we call these symmetric Whitehead automorphisms.
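For concreteness, the following LaTeX snippet writes out McCool's presentation in the smallest interesting case n = 3, a sketch derived from the three relation families above. Note that there are no relations of the form [α_{i,j}, α_{k,ℓ}] = 1 here, since those require four distinct indices.

\[
P\Sigma\mathrm{Aut}_3 \;=\; \Big\langle \alpha_{1,2},\alpha_{1,3},\alpha_{2,1},\alpha_{2,3},\alpha_{3,1},\alpha_{3,2} \;\Big|\;
\begin{aligned}
  &[\alpha_{2,1},\alpha_{3,1}]=[\alpha_{1,2},\alpha_{3,2}]=[\alpha_{1,3},\alpha_{2,3}]=1,\\
  &[\alpha_{1,2}\alpha_{3,2},\alpha_{1,3}]=[\alpha_{1,3}\alpha_{2,3},\alpha_{1,2}]=1,\\
  &[\alpha_{2,1}\alpha_{3,1},\alpha_{2,3}]=[\alpha_{2,3}\alpha_{1,3},\alpha_{2,1}]=1,\\
  &[\alpha_{3,1}\alpha_{2,1},\alpha_{3,2}]=[\alpha_{3,2}\alpha_{1,2},\alpha_{3,1}]=1
\end{aligned}
\Big\rangle.
\]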
Since the defining relations in McCool’s presentation are commutators, we immediately
see that PΣAutn has abelianization Z^{n(n−1)}, with basis {αi,j | i ≠ j}. Since Sn acts
transitively on the αi,j , we also quickly compute that ΣAutn abelianizes to Z × (Z/2Z)
for all n ≥ 2. A natural basis for the vector space Hom(PΣAutn , R) ≅ R^{n(n−1)} is the dual
of {αi,j | i ≠ j}. This dual basis has a nice description that we will now work up to.
For α ∈ PΣAutn let wi,α ∈ Fn be the elements such that (xi )α = x_i^{w_{i,α}}. For each i ≠ j
define χi,j : PΣAutn → Z by sending α to ϕj (wi,α ), where ϕj : Fn → Z are the projections
sending xj to 1 and the other generators to 0.
Lemma 2.1. Each χi,j is a homomorphism.
Proof. Let α, β ∈ PΣAutn and i ∈ [n]. Write w_{i,α} = x_{k_1}^{ε_1} · · · x_{k_r}^{ε_r} for k1 , . . . , kr ∈ [n] and
ε1 , . . . , εr ∈ {±1}, so we have
(xi )α ◦ β = (x_{k_r}^{−ε_r})β · · · (x_{k_1}^{−ε_1})β w_{i,β}^{−1} xi w_{i,β} (x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β.
In particular
w_{i,α◦β} = w_{i,β} (x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β.
Note that ϕj ((x_{k_1}^{ε_1})β · · · (x_{k_r}^{ε_r})β) = ϕj (x_{k_1}^{ε_1} · · · x_{k_r}^{ε_r}), so ϕj (w_{i,α◦β} ) = ϕj (w_{i,α} ) + ϕj (w_{i,β} ), as
desired.
Clearly χi,j (αk,ℓ ) = δ(i,j),(k,ℓ) (the Kronecker delta), so {χi,j | i ≠ j} is the basis of
Hom(PΣAutn , R) dual to {αi,j | i ≠ j}. Since Hom(PΣAutn , R) ≅ R^{n(n−1)} we know that
the character sphere S(PΣAutn ) is S(PΣAutn ) = S^{n(n−1)−1}.
For the group ΣAutn , Hom(ΣAutn , R) ≅ R for all n, so to find a basis we just need a
non-trivial character. We know ΣAutn = PΣAutn ⋊ Sn , so the most natural candidate is
the character reading 1 on each αi,j and 0 on Sn . Note that S(ΣAutn ) = S 0 for all n ≥ 2.
Writing an arbitrary character of PΣAutn as χ = ∑_{i≠j} ai,j χi,j , we recall Orlandi–Korner’s computation of Σ1 (PΣAutn ):
Citation 2.2. [OK00] We have [χ] ∈ Σ1 (PΣAutn ) unless either
(i) there exist distinct i and j such that ap,q = 0 whenever {p, q} ⊈ {i, j}, or
(ii) there exist distinct i, j and k such that ap,q = 0 whenever {p, q} ⊈ {i, j, k} and
moreover ap,q = −ap′ ,q whenever {p, p′ , q} = {i, j, k}.
In these cases [χ] ∉ Σ1 (PΣAutn ).
For example, Σ1 (PΣAut2 ) is empty (which we know anyway since PΣAut2 ≅ F2 ) and
Σ1 (PΣAut3 ) is a 5-sphere with three 1-spheres and one 2-sphere removed, so in particular
Σ1 (PΣAut3 ) is dense in S(PΣAut3 ).
The groups ΣAutn and PΣAutn are of type F∞ (this can be seen for example after work
of Collins [Col89])4, so one can ask what Σm (ΣAutn ) and Σm (PΣAutn ) are, for any m and
n. One thing we know, which we will use later, is that the invariants are all closed under
taking antipodes:
Observation 2.3. If [χ] ∈ Σm (G) for G = ΣAutn or PΣAutn then [−χ] ∈ Σm (G).
Proof. The automorphism of Fn taking each xi to xi−1 induces an automorphism of ΣAutn
and PΣAutn under which each character χ maps to −χ. The result now follows since
Σm (G) is invariant under Aut(G).
We can now state our main results.
Definition 2.4 (Positive/negative character). Call χ = ∑i≠j ai,j χi,j positive if ai,j > 0 for
all i, j, and negative if ai,j < 0 for all i, j.
Theorem A. For n ≥ 2, if χ is a positive or negative character of PΣAutn then [χ] ∈
Σn−2 (PΣAutn ) \ Σn−1 (PΣAutn ).
As a remark, thanks to Observation 2.3 the negative character classes lie in a given
Σm (PΣAutn ) if and only if the positive ones do, so we only need to prove Theorem A for
positive characters.
Theorem B. For n ≥ 2, we have Σn−2 (ΣAutn ) = S(ΣAutn ) = S 0 and Σn−1 (ΣAutn ) = ∅.
In particular the commutator subgroup ΣAut′n is of type Fn−2 but not Fn−1 .
The commutator subgroups ΣAut′n and PΣAut′n are easy to describe; see for example
Lemmas 4 and 5 of [Sav96]. The commutator subgroup PΣAut′n consists of those automorphisms taking each xi to wi−1xi wi for some wi ∈ Fn′ . In other words, PΣAut′n is just
the intersection of all the ker(χi,j ). Note that for n ≥ 2 the commutator subgroup PΣAut′n
is not finitely generated, since it surjects onto PΣAut′2 ≅ F2′ . The commutator subgroup
ΣAut′n consists of those automorphisms taking xi to wi−1 xπ(i) wi for some even permutation π ∈ Sn (i.e., π ∈ An ) and satisfying ϕ(w1 · · · wn ) = 0 where ϕ : Fn → Z is the map
taking each basis element to 1. As a remark, the abelianization map ΣAutn → Z splits,
for instance by sending Z to ⟨α1,2 ⟩, so we have ΣAutn = ΣAut′n ⋊ Z.
In Section 5 we will be able to deduce Theorem B from Theorem A quickly by using the
next lemma. If we write BBn for the kernel of the character ∑i≠j χi,j of PΣAutn taking
each αi,j to 1, so BBn is the "Bestvina–Brady-esque" subgroup of PΣAutn , then we have:
Lemma 2.5. ΣAut′n = BBn ⋊ An .
Proof. When we restrict the map ΣAutn → Sn to ΣAut′n , by the above description we know
that the image is An . This map splits, and the kernel of this restricted map is the kernel of
the original map, which is PΣAutn , intersected with ΣAut′n . The above description tells us
that this consists of all pure symmetric automorphisms α such that ϕ(w1,α · · · wn,α ) = 0,
and from the definition of the χi,j it is clear that ϕ(w1,α · · · wn,α ) = ∑i≠j χi,j (α), so we are
done.
4 Actually, PΣAutn is even of "type F", meaning it has a compact classifying space, but we will not need this fact.
In particular BBn has finite index in ΣAut′n , so they have the same finiteness properties.
To prove Theorems A and B we need a complex on which the groups act nicely, and to
understand ascending links. We discuss the complex in Section 3 and the ascending links
in Section 4.
3. The complex of symmetric marked cactus graphs
In [Col89], Collins found a contractible simplicial complex ΣKn on which the “Outer”
versions of ΣAutn and PΣAutn act properly and cocompactly, described by symmetric
marked cactus graphs. We will use the obvious analog of this complex for our groups.
Thanks to the action being proper and cocompact, and the complex being contractible, it
can be used to “reveal” the BNSR-invariants of the groups, as per Definition 1.1. In this
section we recall the construction of ΣKn and set up the character height functions that
will then be used in the following sections to prove our main results.
Terminology: By a graph we will always mean a connected finite directed graph with
one vertex specified as the basepoint, such that the basepoint has degree at least two and
all other vertices have degree at least three. We will use the usual terminology of initial
and terminal endpoints of an edge, paths, cycles, reduced paths, simple cycles, subtrees,
subforests and spanning trees.
Our graphs will always be understood to have rank n, unless otherwise specified.
Let Rn be the n-petaled rose, that is the graph with one vertex ∗ (which is necessarily
the basepoint) and n edges. Then π1 (Rn ) ≅ Fn , and we identify Aut(Fn ) with the group
of basepoint-preserving self-homotopy equivalences of Rn , modulo homotopy.
Definition 3.1 (Cactus graph, cladode, base, above, projection, before/after, between).
A graph Γ is called a cactus graph if every edge is contained in precisely one simple cycle.
We will refer to the simple cycles, viewed as subgraphs, as cladodes. For example the petals
of the rose Rn are precisely its cladodes. We will assume the orientations of the edges are
such that each cladode is a directed cycle, that is, no distinct edges of a cladode share an
origin (or terminus). If C is a cladode of a cactus graph Γ with basepoint p, there is a
unique vertex bC of C closest to p, which we call the base of C. Note that every vertex is
the base of at least one cladode. We say that a cladode C is above a cladode D if every
path from bC to p must pass through an edge of D. If C is above D there is a unique vertex
projD (C) of D closest to bC , which we will call the projection of C onto D. Given two
distinct points x and y in a common cladode C, with x, y 6= bC , there is a unique reduced
path from x to y in C \ {bC }; if this path follows the orientation of C we say x is before
y, and otherwise we say x is after y. Within C it also makes sense to say that an edge is
before or after another edge, or that an edge is before or after a point not in the interior
of that edge. We say a point or edge is between two points or edges if it is before one and
after the other.
See Figure 1 for an example illustrating the many definitions in Definition 3.1.
Figure 1. A cactus graph, with its cladodes numbered for reference. To illustrate the definitions in Definition 3.1 with some examples, we note: cladodes 3 and 4 have the same base; cladode 7 is above cladodes 2 and 1 but no others; the projection of cladode 8 onto cladode 1 is the vertex that is the base of cladode 3; and the base of cladode 3 is after the base of cladode 2 and before the base of cladode 12, and hence is between them.
Given a graph Γ and a subforest F , we will write Γ/F for the graph obtained by quotienting each connected component of F to a point. The quotient map d : Γ → Γ/F is
called a forest collapse or forest blow-down. It is a homotopy equivalence, and a homotopy
inverse of a forest blow-down is called a forest blow-up, denoted u : Γ/F → Γ.
Definition 3.2 (Marking). A marking of a basepointed graph Γ is a homotopy equivalence
ρ : Rn → Γ from the n-petaled rose to Γ, taking basepoint to basepoint.
A marking of the rose itself represents an automorphism of Fn , thanks to our identification of Aut(Fn ) with the group of basepoint-preserving self-homotopy equivalences of
Rn , modulo homotopy. If a marking α : Rn → Rn even represents a (pure) symmetric
automorphism, then it makes sense to call the marking itself (pure) symmetric. More
generally:
Definition 3.3 ((Pure) symmetric marking). A marking ρ : Rn → Γ is called (pure) symmetric if there exists a forest collapse d : Γ → Rn such that d ◦ ρ : Rn → Rn is (pure)
symmetric.
Definition 3.4 (Symmetric marked cactus graph). A symmetric marked cactus graph is a
triple (Γ, p, ρ) where Γ is a cactus graph with basepoint p and ρ is a symmetric marking.
Two such triples (Γ, p, ρ), (Γ′ , p′ , ρ′ ) are considered equivalent if there is a homeomorphism
φ : Γ → Γ′ taking p to p′ such that φ◦ρ ≃ ρ′ . We will denote equivalence classes by [Γ, p, ρ],
and will usually just refer to [Γ, p, ρ] as a symmetric marked cactus graph.
We note that under this equivalence relation, every symmetric marked cactus graph is
equivalent to one where the marking is even pure symmetric. This is just because the
markings of the rose that permute the petals are all equivalent to the trivial marking.
Moreover, these are the only markings equivalent to the trivial marking, so the map α 7→
[Rn , ∗, α] is in fact a bijection between PΣAutn and the set of symmetric marked roses.
Definition 3.5 (Partial order). We define a partial order ≤ on the set of symmetric
marked cactus graphs as follows. Let [Γ, p, ρ] be a symmetric marked cactus graph and F
a subforest of Γ, with d : Γ → Γ/F the forest collapse. Let pF := d(p) and let ρF := d ◦ ρ.
We declare that [Γ, p, ρ] ≤ [Γ/F, pF , ρF ]. It is easy to check that the relation ≤ is well
defined up to equivalence of triples, and that it is a partial order.
Definition 3.6 (Complex of symmetric marked cactus graphs). The complex of symmetric
marked cactus graphs ΣKn is the geometric realization of the partially ordered set of
symmetric marked cactus graphs.
Note that ΣAutn and PΣAutn act (on the right) on ΣKn^(0) via [Γ, p, ρ].α := [Γ, p, ρ ◦ α],
and this extends to an action on ΣKn since for any forest collapse d : Γ → Γ/F , we have
(d ◦ ρ) ◦ α = d ◦ (ρ ◦ α), i.e., ρF ◦ α = (ρ ◦ α)F .
Citation 3.7. [Col89, Proposition 3.5, Theorem 4.7] The complex ΣKn is contractible
and (n − 1)-dimensional, and the actions of ΣAutn and PΣAutn on ΣKn are proper and
cocompact.
Technically Collins considers the “Outer” version where we do not keep track of basepoints, but it is straightforward to get these results also in our basepointed “Auter” version.
In particular, we have the requisite setup of Definition 1.1.
Remark 3.8. One can similarly consider the complex of all marked basepointed graphs,
and get the well studied spine of Auter space, which is contractible and on which Aut(Fn )
acts properly and cocompactly (see [CV86, HV98]). This is not relevant for our present
purposes though, since the abelianization of Aut(Fn ) is finite, and hence its character
sphere is empty.
The next step is to take a character χ of PΣAutn and induce a character height function
hχ on ΣKn . First recall that we equivocate between symmetric markings of roses and
elements of PΣAutn . Hence for 0-simplices in ΣKn of the form [Rn , ∗, α], we can just
define hχ ([Rn , ∗, α]) := χ(α). In general we define hχ ([Γ, p, ρ]) as follows:
Definition 3.9 (The character height function hχ ). Let [Γ, p, ρ] be a symmetric marked
cactus graph. Define hχ ([Γ, p, ρ]) to be
hχ ([Γ, p, ρ]) := max{χ(α) | [Γ, p, ρ] ∈ st([Rn , ∗, α])}.
Extend this affinely to the simplices of ΣKn , to get hχ : ΣKn → R.
Observation 3.10. hχ is a character height function.
Proof. We need to show that hχ ([Γ, p, ρ].α) = hχ ([Γ, p, ρ]) + χ(α) for all [Γ, p, ρ] ∈ ΣKn^(0)
and α ∈ PΣAutn . We know that [Γ, p, ρ].α = [Γ, p, ρ◦α], and clearly [Γ, p, ρ] ∈ st[Rn , ∗, β] if
and only if [Γ, p, ρ◦α] ∈ st[Rn , ∗, β◦α], so this follows simply because χ(β◦α) = χ(β)+χ(α)
for all α, β ∈ PΣAutn .
Now we need a “tiebreaker” function f as in Lemma 1.8. As discussed before and
after that lemma, any randomly chosen injective f̄ : ΣKn^(0)/G → R could serve to induce
a tiebreaker f : ΣKn → R, but we want to be more clever than this. In particular our
tiebreaker will yield tractable ascending links that will actually reveal parts of the BNSR-invariants.
The 0-cells in the orbit space ΣKn / PΣAutn are homeomorphism classes of cactus graphs,
so "number of vertices" is a well defined measurement on these 0-cells. Let f̄ : ΣKn^(0)/G →
R be the function taking a graph to the negative of its number of vertices. In particular
since we are using the negative, the rose has the largest f̄ value of all cactus graphs. Let
f : ΣKn → R be the extension of f̄ described before Lemma 1.8, so f ([Γ, p, ρ]) equals
the negative number of vertices of Γ, and consider the function (hχ , f ) : ΣKn → R × R.
Since ΣKn is simplicial and adjacent 0-simplices in ΣKn cannot share a PΣAutn -orbit (for
instance since they necessarily have different f values), the following is immediate from
Lemma 1.8:
Corollary 3.11. For any χ, (hχ , f ) is a Morse function on ΣKn .
It is clear from the definition of hχ that ΣKn^{hχ≥t} is the union of the stars of those [Rn , ∗, α]
with χ(α) ≥ t. It is a common phenomenon when working in Auter space and its relatives
to encounter important subcomplexes that are unions of stars of marked roses. Another
example arises in [BBM07], where Bestvina, Bux and Margalit use “homology markings” of
roses to prove that for n ≥ 3 the kernel of Out(Fn ) → GLn (Z) has cohomological dimension
2n − 4 and is not of type F2n−4 (when n ≥ 4 it remains open whether or not this kernel is
of type F2n−5 ).
We record here a useful technical lemma that gives information on how hχ can differ
between “nearby” symmetric marked roses. Let [Γ, p, ρ] be a symmetric marked cactus
graph and let T be a spanning tree in Γ. Since T is spanning, collapsing T yields a
symmetric marked rose. The marking ρ provides the cladodes of Γ with a numbering from
1 to n; let Ci,ρ be the ith cladode. Since T is a spanning tree, it meets Ci,ρ at all but
one edge; write Ei,T for the single-edge subforest of Ci,ρ that is not in T . In particular,
intuitively, upon collapsing T , Ei,T becomes the ith petal of Rn . Note that T is completely
determined by the set {E1,T , . . . , En,T }, namely it consists of all the edges of Γ not in any
Ei,T .
Lemma 3.12 (Change of spanning tree). Let [Γ, p, ρ] be a symmetric marked cactus graph
and let T be a spanning tree in Γ. Suppose U is another spanning tree such that Ej,T ≠ Ej,U
but Ei,T = Ei,U for all i ≠ j (so U differs from T only in the jth cladode). Suppose that
Ej,T is before Ej,U (in the language of Definition 3.1). Let ∅ ≠ I ⊊ [n] be the set of indices
i such that the projection projCj,ρ (Ci,ρ ) lies between Ej,T and Ej,U (so in particular j ∉ I).
Then for any χ = ∑i≠j ai,j χi,j we have hχ ([Γ/T, pT , ρT ]) − hχ ([Γ/U, pU , ρU ]) = ∑i∈I ai,j .
Proof. By collapsing the subforest T ∩ U we can assume without loss of generality that
T = Ej,T and U = Ej,U are each a single edge, so Γ has two vertices, the basepoint p
and another vertex q. The set I indexes those cladodes whose base is q, so [n] \ I indexes
those cladodes whose base is p. Up to the action of PΣAutn we can assume that Γ/U
is the trivially marked rose, so we need to show that hχ ([Γ/T, pT , ρT ]) = ∑i∈I ai,j . In
fact the procedure of blowing up the trivial rose to get Γ and then blowing down T is a
Whitehead move (see [CV86, Section 3.1]) that corresponds to the symmetric Whitehead
automorphism αI,j . In other words, viewed as an element of PΣAutn , we have ρT = αI,j .
This means that hχ ([Γ/T, pT , ρT ]) = χ(αI,j ) = ∑i∈I ai,j , as desired.
Ascending links: In the next section we will need to understand ascending links lk↑ v
with respect to (hχ , f ), for v = [Γ, p, ρ] a 0-simplex in ΣKn , so we discuss this a bit here.
Since lk↑ v is a full subcomplex of lk v we just need to understand which 0-simplices of
lk v lie in lk↑ v. First note that lk v is a join, of the down-link lkd v, spanned by those
0-simplices of lk v obtained from forest blow-downs of Γ, and its up-link lku v, spanned by
those 0-simplices of lk v corresponding to forest blow-ups of Γ. The ascending link lk↑ v
similarly decomposes as the join of the ascending down-link lk↑d v and ascending up-link
lk↑u v, which are just defined to be lk↑d v := lkd v ∩ lk↑ v and lk↑u v := lku v ∩ lk↑ v.
Since 0-simplices in lkd v have larger f value than v (i.e., the graphs have fewer vertices than Γ) and cannot have strictly larger hχ value, given a subforest F ⊆ Γ we see
that [Γ/F, pF , ρF ] ∈ lk↑d v if and only if hχ ([Γ/F, pF , ρF ]) ≥ hχ ([Γ, p, ρ]) if and only if
hχ ([Γ/F, pF , ρF ]) = hχ ([Γ, p, ρ]). Similarly if [Γ̃, p̃, ρ̃] ∈ lku v then it lies in lk↑u v if and only
if it has strictly larger hχ value than v (since it has smaller f value).
4. Topology of ascending links
Throughout this section, χ is a non-trivial character of PΣAutn with character height
function hχ on ΣKn , and lk↑ [Γ, p, ρ] means the ascending link of the 0-simplex [Γ, p, ρ] with
respect to χ.
We will analyze the topology of lk↑ [Γ, p, ρ] = lk↑d [Γ, p, ρ] ∗ lk↑u [Γ, p, ρ] by inspecting
lk↑d [Γ, p, ρ] and lk↑u [Γ, p, ρ] individually. First we focus on lk↑d [Γ, p, ρ].
4.1. Ascending down-link. The first goal is to realize lk↑d [Γ, p, ρ] as a nice combinatorial
object, the complex of ascending forests.
Definition 4.1 (Complex of forests). The complex of forests F (Γ) for a graph Γ is the
geometric realization of the partially ordered set of non-empty subforests of Γ, with partial
order given by inclusion.
We will not really need to know the homotopy type of F (Γ) in what follows, but it is
easy to compute so we record it here for good measure.
Observation 4.2. For Γ a cactus graph with V vertices, F (Γ) ≃ S^{V−2} .
Proof. For each cladode C let FC (Γ) be the complex of subforests of Γ contained in C.
Clearly FC (Γ) ≃ S^{VC−2} , where VC is the number of vertices of C. Since F (Γ) is the join
of all the FC (Γ), we get F (Γ) ≃ S^{d−1} for d = ∑C (VC − 1). Now, VC − 1 is the number of
non-base vertices of C, and every vertex of Γ except for the basepoint is a non-base vertex
of a unique cladode, so ∑C (VC − 1) = V − 1.
Definition 4.3 (Complex of ascending forests). The complex of ascending forests F ↑ (Γ, p, ρ)
for a symmetric marked cactus graph [Γ, p, ρ] is the full subcomplex of F (Γ) supported on
those 0-simplices F such that [Γ/F, pF , ρF ] ∈ lk↑ [Γ, p, ρ].
Observation 4.4. F ↑ (Γ, p, ρ) ≅ lk↑d [Γ, p, ρ].
Proof. The isomorphism is given by F ↦ [Γ/F, pF , ρF ].
For certain characters, F ↑ (Γ, p, ρ) is guaranteed to be contractible. We call these characters decisive:
Definition 4.5 (Decisive). Call a character χ of PΣAutn decisive if every [Γ, p, ρ] lies in
a unique star of a symmetric marked rose with maximal χ value.
Observation 4.6. Let χ be decisive. Then for any Γ 6= Rn , F ↑ (Γ, p, ρ) is contractible.
Proof. Every ascending forest is contained in an ascending spanning tree, and we are assuming there is a unique ascending spanning tree, so F ↑ (Γ, p, ρ) is just the star in F ↑(Γ, p, ρ)
of this unique ascending spanning tree.
Proposition 4.7 (Positive implies decisive). Positive characters of PΣAutn are decisive.
Proof. Let T be the spanning tree in Γ such that, using the notation from Lemma 3.12,
for each cladode Cj,ρ, the origin of the edge in Ej,T is the base of Cj,ρ; see Figure 2 for
an example. Note that the edge of Ej,T is before all the other edges of Cj,ρ. We claim
that ρT has larger χ value than ρU for any other spanning tree U. First we prove this in
the case when U differs from T only in one cladode, say Cj,ρ. Let Ij be the set of i such
that the projection projCj,ρ (Ci,ρ ) lies between Ej,U and Ej,T . Since Ej,T is before Ej,U , by
Lemma 3.12 we get hχ ([Γ/T, pT , ρT ]) − hχ ([Γ/U, pU , ρU ]) = ∑i∈Ij ai,j > 0. Now suppose U
differs from T in more than one cladode, say Cj1,ρ , . . . , Cjr,ρ . By changing Ejk,T to Ejk,U
one k at a time, we get hχ ([Γ/T, pT , ρT ]) − hχ ([Γ/U, pU , ρU ]) = ∑_{k=1}^{r} ∑i∈Ijk ai,jk > 0. We
conclude that hχ ([Γ/T, pT , ρT ]) > hχ ([Γ/U, pU , ρU ]), as desired.
By a parallel argument, negative characters are also decisive.
Figure 2. A cactus graph, with the tree T from the proof of Proposition 4.7
marked in bold.
It turns out “most” characters are decisive, in the following sense:
Definition 4.8 (Generic). Call a character χ = ∑i≠j ai,j χi,j of PΣAutn generic if for every
choice of εi,j ∈ {−1, 0, 1} we have ∑i≠j εi,j ai,j = 0 only if εi,j = 0 for all i, j (said another
way, the ai,j have no non-trivial linear dependencies using coefficients from {−1, 0, 1}).
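As a quick illustration (a hedged sketch of ours, not taken from the paper; the function name is_generic and the use of a floating-point tolerance are assumptions of the sketch), genericity can be tested directly from this definition by brute force over all choices of εi,j; the test is exponential in the number of coefficients and only intended for small examples.

from itertools import product

def is_generic(a, tol=1e-9):
    # a: dict mapping ordered pairs (i, j) with i != j to the coefficients a_{i,j}
    pairs = list(a)
    for eps in product((-1, 0, 1), repeat=len(pairs)):
        if any(eps) and abs(sum(e * a[p] for e, p in zip(eps, pairs))) < tol:
            return False
    return True

assert is_generic({(1, 2): 1.0, (2, 1): 2.0, (1, 3): 4.5})
assert not is_generic({(1, 2): 1.0, (2, 1): 2.0, (1, 3): 3.0})   # 1 + 2 - 3 = 0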
Observation 4.9. The set {[χ] ∈ S(PΣAutn ) | χ is generic} is dense in S(PΣAutn ).
Proof. Given a non-trivial linear dependence ∑i≠j εi,j ai,j = 0 with εi,j ∈ {−1, 0, 1}, the complement
of the set of character classes satisfying this dependence is open and dense in S(PΣAutn ).
Since there are only finitely many choices for the εi,j , the set of generic character classes
is also (open and) dense in S(PΣAutn ).
Proposition 4.10 (Generic implies decisive). Generic characters of PΣAutn are decisive.
Proof. Let T and U be two different spanning trees in Γ, so [Γ, p, ρ] lies in the stars of
the symmetric marked roses [Γ/T, pT , ρT ] and [Γ/U, pU , ρU ]. We claim that for χ generic,
these symmetric marked roses have different hχ values, from which the result will follow.
Using the notation from Lemma 3.12, suppose the cladodes in which T and U differ are
Cj1 ,ρ , . . . , Cjr ,ρ , so Ejk ,T 6= Ejk ,U for 1 ≤ k ≤ r, but Ej,T = Ej,U for all j 6∈ {j1 , . . . , jr }. Let
U = T0 , T1 , . . . , Tr = T be spanning trees such that we obtain Tk from Tk−1 by replacing
Ejk ,U with Ejk ,T . For each 1 ≤ k ≤ r let Ik be the set of i such that the projection
projCjk,ρ (Ci,ρ ) lies between Ejk,T and Ejk,U . By Lemma 3.12, we know that
hχ ([Γ/Tk , pTk , ρTk ]) − hχ ([Γ/Tk−1 , pTk−1 , ρTk−1 ]) = ±∑i∈Ik ai,jk
for each 1 ≤ k ≤ r (with the plus or minus depending on whether Ejk,T is before or
after Ejk,U ). This implies that hχ ([Γ/T, pT , ρT ]) − hχ ([Γ/U, pU , ρU ]) = (±∑i∈I1 ai,j1 ) +
(±∑i∈I2 ai,j2 ) + · · · + (±∑i∈Ir ai,jr ). Since χ is generic, this cannot be zero.
Remark 4.11. If χ is not decisive, then F ↑ (Γ, p, ρ) is still somewhat understandable, it
just might not be contractible. A forest is ascending if and only if it lies in an ascending
spanning tree, so F ↑ (Γ, p, ρ) is the union of the stars of the ascending spanning trees. Also,
a non-empty intersection of some of these stars is again a (contractible) star, namely the
star of the forest that is the intersection of the relevant spanning trees. Hence F ↑(Γ, p, ρ)
is homotopy equivalent to the nerve of its covering by stars of ascending spanning trees.
This is isomorphic to the simplicial complex whose 0-simplices are the ascending spanning
trees, and where k of them span a (k − 1)-simplex whenever the trees have a non-empty
intersection. In theory it should be possible to compute the homotopy type of this complex,
but we are not currently interested in the non-decisive characters (since they will be totally
intractable when we study the ascending up-link in the next subsection), so we leave further
analysis of this for the future.
Since lk↑ v = lk↑d v ∗ lk↑u v and lk↑d [Γ, p, ρ] ≅ F ↑ (Γ, p, ρ), we get:
Corollary 4.12. If χ is a decisive character of PΣAutn and v = [Γ, p, ρ] for Γ not a rose,
then lk↑ v is contractible.
4.2. Ascending up-link. Thanks to Corollary 4.12, for decisive characters of PΣAutn
the only 0-simplices of ΣKn that can have non-contractible ascending links are those of the
form [Rn , ∗, α]. These have empty down-link, so the ascending link equals the ascending
up-link. It turns out that the ascending up-link lk↑u [Rn , ∗, α] is homotopy equivalent to
a particularly nice complex In↑ (χ), the complex of ascending ideal edges, which we now
describe.
Let E(∗) be the set of half-edges of Rn incident to ∗. Since we identify π1 (Rn ) with Fn ,
the petals of Rn are naturally identified with the basis S = {x1 , . . . , xn } of Fn . We will
write i for the half-edge in the petal xi with ∗ as its origin, and ī for the half-edge in xi
with ∗ as its terminus, so E(∗) = {1, 1̄, 2, 2̄, . . . , n, n̄}.
Definition 4.13 ((Symmetric) ideal edges). A subset A of E(∗) such that |A| ≥ 2 and
|E(∗) \ A| ≥ 1 is called an ideal edge. We say an ideal edge A splits {i, ī} if {i, ī} ∩ A and
{i, ī} \ A are both non-empty. We call A symmetric if there exists precisely one i ∈ [n]
such that A splits {i, ī}. In this case we call {i, ī} the split pair of A.
Intuitively, an ideal edge A describes a way of blowing up a new edge at ∗, with the
half-edges in E(∗) \ A becoming incident to the new basepoint and the half-edges in A
becoming incident to the new non-basepoint vertex; a more rigorous discussion can be
found for example in [Jen02]. The conditions in the definition ensure that blowing up
a symmetric ideal edge results in a cactus graph. See Figure 3 for an example. The
asymmetry between the conditions |A| ≥ 2 and |E(∗) \ A| ≥ 1 arises because the basepoint
of a cactus graph must have degree at least 2, whereas other vertices must have degree at
least 3. In fact every vertex of a cactus graph has even degree, so in practice |A| ≥ 2 is
equivalent to |A| ≥ 3 for symmetric ideal edges.
Figure 3. The symmetric ideal edge {1̄, 2, 2̄} and the non-symmetric ideal
edge {1, 2}, together with the blow-ups they produce. The former yields a
cactus graph and the latter does not.
Definition 4.14 (Ascending symmetric ideal edge). Let χ = ∑i≠j ai,j χi,j be a character
of PΣAutn and let A be a symmetric ideal edge. Suppose {j, j̄} is the split pair of A and
let I = (A ∩ [n]) \ {j}. We call A ascending (with respect to χ) if either
(i) j ∈ A and ∑i∈I ai,j > 0 or
(ii) j̄ ∈ A and ∑i∈I ai,j < 0.
For example, the symmetric ideal edge in Figure 3 is ascending if and only if a2,1 < 0. If
χ is positive (respectively negative) then A is ascending if and only if j ∈ A (respectively
j̄ ∈ A), for {j, j̄} the split pair of A. If χ is generic then for any set I of pairs {i, ī} and
any j ∈ [n] \ I, ∑i∈I ai,j ≠ 0, so one of A = {j} ∪ I or A = {j̄} ∪ I is an ascending ideal
edge.
Definition 4.15 (Compatible). Two ideal edges A and A′ are called compatible if any of
A ⊆ A′ , A′ ⊆ A or A ∩ A′ = ∅ occur.
Definition 4.16 (Complex of (ascending) symmetric ideal edges). Let In be the simplicial
complex whose 0-simplices are the symmetric ideal edges, and where a collection of symmetric ideal edges span a simplex if and only if they are pairwise compatible. Let In↑ (χ)
be the subcomplex of In spanned by the ascending symmetric ideal edges.
It is a classical fact that In is homotopy equivalent to lku [Rn , ∗, α] (for any α). More
precisely, the barycentric subdivision In′ is isomorphic to lku [Rn , ∗, α]. It turns out a similar
thing happens when restricting to ascending ideal edges:
Lemma 4.17. For any character χ of PΣAutn and any 0-simplex of the form [Rn , ∗, α],
lk↑u [Rn , ∗, α] is homotopy equivalent to In↑ (χ).
Proof. Since the barycentric subdivision In′ of In is isomorphic to lku [Rn , ∗, α], we have
that lk↑u [Rn , ∗, α] is isomorphic to a subcomplex In′ (asc) of In′ . This is the subcomplex
spanned by those 0-simplices in In′ , i.e, those collections of pairwise compatible symmetric
ideal edges, whose corresponding tree blow-up makes hχ go up. Note that In↑ (χ)′ is a
subcomplex of In′ (asc), since as soon as one ideal edge in a collection corresponds to an
ascending edge blow-up the whole collection corresponds to an ascending tree blow-up.
Given a 0-simplex σ = {A1 , . . . , Ak } of In′ (asc), let φ(σ) := {Ai | Ai ∈ In↑ (χ)}. We
claim that φ(σ) is non-empty, and hence φ : In′ (asc) → In′ (asc) is a well defined map
whose image is In↑ (χ). Let [Γ, p, ρ] be the result of blowing up the ideal tree given by σ.
Let U be the spanning tree in Γ such that [Γ/U, pU , ρU ] = [Rn , ∗, α]. Since the blow-up
is ascending, the blow-down reversing it cannot be ascending, so U is not an ascending
spanning tree in Γ. Choose an ascending spanning tree T in Γ, so U 6= T . Similar to the
proof of Proposition 4.10, we can turn U into T by changing one edge at a time, and from
Lemma 3.12 we get
hχ ([Γ/T, pT , ρT ]) − hχ ([Γ/U, pU , ρU ]) = (±∑i∈I1 ai,j1 ) + (±∑i∈I2 ai,j2 ) + · · · + (±∑i∈Ir ai,jr )
where the Ik and jk are as in the proof of Proposition 4.10. Since T is ascending but U is
not, this quantity is positive. Hence there exists k such that ±∑i∈Ik ai,jk > 0 (with the
“±” determined by whether Ejk ,T is before or after Ejk ,U ). Write j = jk for brevity.
Now let T ′ be the spanning tree (T \ Ej,U ) ∪ Ej,T (keep in mind that Ej,U lies in T and
not U, and Ej,T lies in U and not T ), so, roughly, T ′ is the result of changing only the part
of T in Cj,ρ to look like U. Let F := T \ Cj,ρ and consider [Γ/F, pF , ρF ]. Let Ēj,U and Ēj,T
be the images of Ej,U and Ej,T in Γ/F . The difference between the hχ values obtained by
blowing down Ēj,U versus Ēj,T is the positive value ±∑i∈Ik ai,jk from before; hence Ēj,U
is ascending in Γ/F and Ēj,T is not ascending. Now, the blow-up of [Rn , ∗, α] resulting
in [Γ/F, pF , ρF ] corresponds to one of the Ai , and Ēj,T is the new edge blown up. This is
an ascending blow-up, since the reverse is a non-ascending blow-down. This shows that at
least one of the Ai is indeed ascending, so φ(σ) 6= ∅.
Having shown that φ : In′ (asc) → In′ (asc) is well defined, it is easily seen to be a
poset retraction (à la [Qui78, Section 1.3]) onto its image In↑ (χ)′ , so we conclude that
lk↑u [Rn , ∗, α] ≅ In′ (asc) ≃ In↑ (χ)′ ≅ In↑ (χ).
It is clear from Definition 4.14 that for χ positive, the complex In↑ (χ) is independent of
χ. We will write In↑ (pos) for In↑ (χ) in this case. In Proposition 4.19 we will determine the
connectivity properties of In↑ (pos). First we need the following useful lemma, which was
proved in [WZ16].
Lemma 4.18 (Strong Nerve Lemma). Let Y be a simplicial complex covered by subcomplexes Y1 , . . . , Yn . Suppose that whenever an intersection Yj1 ∩ · · · ∩ Yjk of k of them (1 ≤ k ≤ n)
is non-empty, it is (n−k −2)-connected. If the nerve N of the covering is (n−3)-connected
then so is Y . If the nerve of the covering is (n − 3)-connected but not (n − 2)-acyclic, then
so is Y .
Proof. That Y is (n − 3)-connected follows from the usual Nerve Lemma, e.g., [BLVŽ94,
Lemma 1.2], but this usual Nerve Lemma is not enough to show Y is not (n − 2)-acyclic if
the nerve is not. In [WZ16, Proposition 1.21] it was shown using spectral sequences that
indeed these hypotheses ensure that Y is not (n − 2)-acyclic.
Proposition 4.19. The complex In↑ (pos) is (n − 3)-connected but not (n − 2)-acyclic (and
hence so are lk↑u [Rn , ∗, α] and lk↑ [Rn , ∗, α] for any positive character of PΣAutn ).
Proof. We will prove that In↑ (pos) is (n − 3)-connected by using induction to prove a more
general statement, and then afterwards we will prove that In↑ (pos) is not (n − 2)-acyclic
by applying Lemma 4.18. Call a subset P ⊆ E(∗) positive if for each 1 ≤ i ≤ n we have
that ī ∈ P implies i ∈ P . Define the defect d(P ) to be the number of i ∈ P with ī ∉ P .
Define the weight w(P ) of P to be the number of pairs {j, j̄} contained in P , plus one
if the defect is non-zero. For example the sets {1, 1̄, 2, 2̄}, {1, 2, 2̄} and {1, 2, 3, 4, 5, 5̄} all
have weight two (and defect zero, one and four, respectively). Also note that P = E(∗)
itself is positive and has defect zero and weight n. Let I ↑ (P ; pos) be the subcomplex of
In↑ (pos) supported on those 0-simplices A such that A ⊆ P . We now claim that I ↑ (P ; pos)
is (w(P ) − 3)-connected, so In↑ (pos) being (n − 3)-connected is a special case of this.
We induct on w(P ). For the base case we can use w(P ) = 1, and the result holds
vacuously since every set is (−2)-connected. Now let w(P ) ≥ 2. Let D be the set of all
i ∈ P with ī ∉ P , so d(P ) = |D|. Within this induction on w(P ) we now additionally
begin an induction on d(P ). For the base case we assume D = ∅, i.e., d(P ) = 0. Consider
the 0-simplices of I ↑ (P ; pos) of the form P \ {i}, for each i ∈ P ∩ [n]. Call these hubs, and
denote P \ {i} by Θi . Note that a symmetric ideal edge in I ↑ (P ; pos) is compatible with
a given hub if and only if it is contained in it (i.e., it cannot properly contain it nor be
disjoint from it). Any collection of pairwise compatible symmetric ideal edges in I ↑ (P ; pos)
lies in the star of some hub, so I ↑ (P ; pos) is covered by the contractible stars of the Θi .
The intersection of the stars of any k hubs, say Θi1 , . . . , Θik , is isomorphic to the complex
I ↑ (P \ {i1 , . . . , ik }; pos). For k = 1 this is contractible, being a star, and for k > 1 we
have w(P \ {i1 , . . . , ik }) = w(P ) − k + 1 < w(P ), so by induction on w(P ) we know this is
(w(P ) − k − 2)-connected. Finally, the nerve of the covering of I ↑ (P ; pos) by these stars is
the boundary of a (w(P ) − 1)-simplex, i.e., a (w(P ) − 2)-sphere, so by the first statement
in Lemma 4.18 we conclude that I ↑ (P ; pos) is (w(P ) − 3)-connected.
Now suppose D 6= ∅, so d(P ) > 0. Without loss of generality we can write D =
{1, . . . , d}. We will build up to I ↑ (P ; pos) from a subcomplex with a known homotopy
type, namely the contractible star of the 0-simplex {1} ∪ (P \ D). The 0-simplices of
I ↑ (P ; pos) missing from this star are those A containing an element of {2, . . . , d} (if d = 1
there is nothing to do, so assume d ≥ 2), so to obtain I ↑ (P ; pos) from this star we will
attach these missing 0-simplices, in some order, along their relative links lkrel A. If we
can do this in an order such that the relative links are always (w(P ) − 4)-connected,
then we can conclude that I ↑ (P ; pos) is (w(P ) − 3)-connected. The order is as follows:
first glue in the A containing 2 in order of decreasing size, then the A containing 3 in
order of decreasing size, and so forth. The relative link lkrel A of A decomposes into the
join of its relative in-link lk^in_rel A and relative out-link lk^out_rel A. The relative in-link of A
is defined to be the subcomplex supported on those B in lkrel A such that B ⊆ A. The
relative out-link is defined to be the subcomplex supported on those B in lkrel A that
satisfy either A ⊆ B or A ∩ B = ∅. These options encompass all the ways a symmetric
ideal edge can be compatible with A, and clearly everything in lk^in_rel A is compatible with
everything in lk^out_rel A, so indeed lkrel A = lk^in_rel A ∗ lk^out_rel A. To show that lkrel A is (w(P ) − 4)-connected
for every A containing an element of {2, . . . , d}, we will consider lk^in_rel A and lk^out_rel A separately.
Let {iA} := A ∩ D, so iA ∈ {2, . . . , d}, and let A♭ := A \ {iA}. The
0-simplices B in lk^in_rel A must lie in I↑(A♭ ; pos), since for B to come before A in our order
while having smaller cardinality than A, it must not contain iA (so such B are actually
already in the star of {1} ∪ (P \ D)). Hence lk^in_rel A is isomorphic to I↑(A♭ ; pos), and
w(A♭ ) = w(A) − 1 < w(P ), so by induction on w(P ) we know lk^in_rel A is (w(A♭ ) − 3)-connected.
Next, the 0-simplices B in lk^out_rel A must be disjoint from {iA + 1, . . . , d} and either
properly contain A or be disjoint from A. The map B ↦ B \ A♭ induces an isomorphism
from lk^out_rel A to I↑(P \ (A♭ ∪ {iA + 1, . . . , d}); pos); the inverse map sends C to itself if iA ∉ C
and to C ∪ A♭ if iA ∈ C. Since w(P \ (A♭ ∪ {iA + 1, . . . , d})) = w(P ) − w(A♭ ) < w(P ), by
induction on w(P ) we know lk^out_rel A is (w(P ) − w(A♭ ) − 3)-connected. We conclude that
lkrel A = lk^in_rel A ∗ lk^out_rel A is (((w(A♭ ) − 3) + (w(P ) − w(A♭ ) − 3)) + 2)-connected, which is
to say (w(P ) − 4)-connected, as desired.
This finishes the inductive proof, which in particular shows In↑ (pos) is (n − 3)-connected.
Now to see that it is not (n − 2)-acyclic, consider the covering of In↑ (pos) by the stars of
Θ1 , . . . , Θn , as above. The intersection of any k of these stars is (n − k − 2)-connected, as
was deduced during the inductive proof, and the nerve of the covering is an (n − 2)-sphere,
so Lemma 4.18 says I ↑ (P ; pos) is not (n − 2)-acyclic.
A parallel argument shows that In↑ (χ) is also (n − 3)-connected but not (n − 2)-acyclic
for χ a negative character of PΣAutn .
Remark 4.20. If χ is neither positive nor negative then In↑ (χ) is much more complicated,
for example as discussed in Remark 4.24 below, one can find examples of generic χ for
which I4↑ (χ) has non-trivial π1 and H2 . Hence, we have focused only on positive and
negative characters of PΣAutn in Theorem A, but in Subsection 4.3 we will show that
generic characters are also tractable at least when n = 3.
We can now prove Theorem A.
Proof of Theorem A. By Corollary 4.12 and Proposition 4.19, all the ascending links of
0-simplices in ΣKn are (n − 3)-connected, so Corollary 1.7 says the filtration (ΣKnχ≥t )t∈R
is essentially (n − 3)-connected, and so [χ] ∈ Σn−2 (PΣAutn ).
To prove the negative statement, note that Proposition 4.19 says that there exist ascending links of 0-simplices that are not (n − 2)-acyclic, with arbitrary hχ value. Also,
every ascending link has trivial (n − 1)st homology since it is (n − 2)-dimensional. By
Corollary 1.7 then, the filtration (ΣKnχ≥t )t∈R is not essentially (n − 2)-connected, and so
[χ] 6∈ Σn−1 (PΣAutn ).
Remark 4.21. Using the natural split epimorphisms PΣAutn → PΣAutm for m < n,
we also now can see that if χ = ∑i≠j ai,j χi,j is a character of PΣAutn induced from this
epimorphism by a positive or negative character of PΣAutm (so ai,j is positive for 1 ≤
i, j ≤ m or negative for all 1 ≤ i, j ≤ m, and is zero if either i or j is greater than m) then
[χ] ∈ Σm−2 (PΣAutn ). However, we cannot immediately tell whether [χ] 6∈ Σm−1 (PΣAutn )
(which we suspect is the case), since Pettet showed the kernels of PΣAutn → PΣAutm
have bad finiteness properties [Pet10].
As an immediate consequence of Theorem A, Citation 1.2 and Observation 2.3, we have
the following result, which will provide the crucial step in proving Theorem B in Section 5.
Corollary 4.22. For n ≥ 2, if χ is a discrete positive character of PΣAutn , then the kernel
ker(χ) is of type Fn−2 but not Fn−1 . In particular the “Bestvina–Brady-esque” subgroup
BBn , i.e., the kernel of the character sending each standard generator αi,j to 1 ∈ Z, is of
type Fn−2 but not Fn−1 .
4.3. The n = 3 case. When n = 3 the 0-simplex links in ΣK3 are graphs, and using some
graph theoretic considerations we can actually prove the analog of Proposition 4.19 for
generic characters, which leads to the following:
Theorem 4.23. Σ2 (PΣAut3 ) = ∅.
Proof. Since Σ2 (PΣAut3 ) is open and the generic character classes are dense in the character sphere (Observation 4.9), it suffices to prove the analog of Proposition 4.19 for all
generic χ. Since I3↑ (χ) is 1-dimensional, i.e., a graph, we need to prove that it is connected
but not a tree.
First we collect some facts about I3 . It is a graph with eighteen vertices, namely there
are twelve vertices corresponding to the symmetric ideal edges A ⊆ E(∗) = {1, 1̄, 2, 2̄, 3, 3̄}
with |A| = 3 (three choices of which {i, ī} to split, times two choices of which of i or ī to
include in A, times two choices of which {j, j̄} (j ≠ i) to include in A) and six vertices for
the symmetric ideal edges with |A| = 5 (six choices of which element of E(∗) to leave out
of A). Call the former vertices depots and the latter hubs. There is an edge connecting a
depot to a hub whenever the depot is contained in the hub, and there is an edge connecting
two depots whenever they are disjoint. Each depot has degree four and each hub has degree
six.
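These counts and degrees are easy to confirm by brute force; the short Python check below is a verification aid of ours (the encoding of half-edges as pairs, with (i, 0) playing the role of i and (i, 1) the role of ī, and the helper names are assumptions of the sketch, not notation from the paper).

from itertools import combinations

n = 3
E_star = [(i, s) for i in range(1, n + 1) for s in (0, 1)]   # {1, 1bar, 2, 2bar, 3, 3bar}

def splits(A, i):
    pair = {(i, 0), (i, 1)}
    return bool(pair & A) and bool(pair - A)

def is_symmetric_ideal_edge(A):
    if len(A) < 2 or len(set(E_star) - A) < 1:
        return False
    return sum(1 for i in range(1, n + 1) if splits(A, i)) == 1

vertices = [frozenset(A) for k in range(2, len(E_star))
            for A in combinations(E_star, k) if is_symmetric_ideal_edge(set(A))]
depots = [A for A in vertices if len(A) == 3]
hubs = [A for A in vertices if len(A) == 5]
assert (len(vertices), len(depots), len(hubs)) == (18, 12, 6)

def compatible(A, B):
    return A <= B or B <= A or not (A & B)

degree = {A: sum(1 for B in vertices if B != A and compatible(A, B)) for A in vertices}
assert all(degree[A] == 4 for A in depots)   # each depot has degree four
assert all(degree[A] == 6 for A in hubs)     # each hub has degree six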
Now, since χ is generic, for each pair of depots of the form {i, j, j̄} and {ī, j, j̄}, precisely
one of them is ascending, with a similar statement for hubs. Hence I3↑ (χ) is a full subgraph
of I3 spanned by six depots and three hubs. We claim that each ascending hub has degree
at least three in I3↑ (χ). Consider the hub {i′ , j, j̄, k, k̄}, where i′ ∈ {i, ī} and i, j, k are
distinct. If this is ascending then at least one of the depots {i′ , j, j̄} or {i′ , k, k̄} must be
as well, since if aj,i + ak,i is positive (respectively negative) then at least one of aj,i or ak,i
must be as well. The other two edges incident to our hub come from the fact that one of
the depots {j, k, k̄} or {j̄, k, k̄} is ascending, as is one of {j, j̄, k} or {j, j̄, k̄}.
Having shown that each hub in I3↑ (χ) has degree at least three in I3↑ (χ), this tells us
I3↑ (χ) has at least nine edges, since hubs cannot be adjacent. Then since I3↑ (χ) also has
nine vertices we conclude that it contains a non-trivial cycle. It remains to prove it is
connected. Say the three hubs in I3↑ (χ) are u, v and w, and suppose that u has no adjacent
depots in I3↑ (χ) in common with either v or w (since otherwise we are done). Since there
are only six depots, this implies that v and w have the same set of adjacent depots in
I3↑ (χ). But this is impossible since the intersection of the stars of two different ascending
hubs can contain at most two ascending depots.
Combining this with Orlandi-Korner’s computation of Σ1 (PΣAut3 ), we get in particular
that Σ1 (PΣAut3 ) is dense in S(PΣAut3 ) = S^5 but Σ2 (PΣAut3 ) is already empty. (Note
since PΣAut2 ≅ F2 we also know that Σ1 (PΣAut2 ) = ∅.)
Remark 4.24. Unfortunately the analogous result for arbitrary n, i.e., that Σn−2 (PΣAutn )
is dense in S(PΣAutn ) but Σn−1 (PΣAutn ) is empty, cannot be deduced using our methods when n > 3 (though we do suspect it is true). One would hope that the analog of
Proposition 4.19 always holds for all generic χ, but it does not. For example, when n = 4
one can find a generic character χ such that I4↑ (χ) is not simply connected. One example
we found is to take χ with a1,2 = a2,1 = a3,4 = a4,3 = 3 and all other ai,j = −1 (adjusted
slightly by some tiny ε > 0 to be generic). This non-simply connected ascending link
also has non-trivial second homology, so this does not necessarily mean that [χ] is not
in Σ2 (PΣAut4 ) (and we believe that it actually is), it is just inconclusive. In general we
tentatively conjecture that Σn−2 (PΣAutn ) is dense in S(PΣAutn ) and Σn−1 (PΣAutn ) is
empty, but for now our Morse theoretic approach here seems to only be able to handle the
positive and negative characters for arbitrary n, and also the generic characters for n = 3.
5. Proof of Theorem B
We can now use our results about PΣAutn to quickly prove Theorem B, about ΣAutn .
Proof of Theorem B. Since BBn is of type Fn−2 but not Fn−1 by Corollary 4.22, and has
finite index in ΣAut′n by Lemma 2.5, we know that ΣAut′n is of type Fn−2 but not Fn−1 .
Also, Observation 2.3 says that for any m either Σm (ΣAutn ) is all of S 0 or else is empty.
The result now follows from Citation 1.2.
References
[BB97] M. Bestvina and N. Brady, Morse theory and finiteness properties of groups, Invent. Math. 129 (1997), no. 3, 445–470.
[BBM07] M. Bestvina, K.-U. Bux, and D. Margalit, Dimension of the Torelli group for Out(Fn), Invent. Math. 170 (2007), no. 1, 1–32.
[BG99] K.-U. Bux and C. Gonzalez, The Bestvina-Brady construction revisited: geometric computation of Σ-invariants for right-angled Artin groups, J. London Math. Soc. (2) 60 (1999), no. 3, 793–801.
[BGK10] R. Bieri, R. Geoghegan, and D. H. Kochloukova, The Sigma invariants of Thompson's group F, Groups Geom. Dyn. 4 (2010), no. 2, 263–273.
[BLVŽ94] A. Björner, L. Lovász, S. T. Vrećica, and R. T. Živaljević, Chessboard complexes and matching complexes, J. London Math. Soc. (2) 49 (1994), no. 1, 25–39.
[BMMM01] N. Brady, J. McCammond, J. Meier, and A. Miller, The pure symmetric automorphisms of a free group form a duality group, J. Algebra 246 (2001), no. 2, 881–896.
[BNS87] R. Bieri, W. D. Neumann, and R. Strebel, A geometric invariant of discrete groups, Invent. Math. 90 (1987), no. 3, 451–477.
[BR88] R. Bieri and B. Renz, Valuations on free resolutions and higher geometric invariants of groups, Comment. Math. Helv. 63 (1988), no. 3, 464–497.
[Bro87] K. S. Brown, Trees, valuations, and the Bieri-Neumann-Strebel invariant, Invent. Math. 90 (1987), no. 3, 479–504.
[Bux04] K.-U. Bux, Finiteness properties of soluble arithmetic groups over global function fields, Geom. Topol. 8 (2004), 611–644 (electronic).
[Col89] D. J. Collins, Cohomological dimension and symmetric automorphisms of a free group, Comment. Math. Helv. 64 (1989), no. 1, 44–61.
[CV86] M. Culler and K. Vogtmann, Moduli of graphs and automorphisms of free groups, Invent. Math. 84 (1986), no. 1, 91–119.
[Dam16] C. Damiani, A journey through loop braid groups, arXiv:1605.02323, 2016.
[HV98] A. Hatcher and K. Vogtmann, Cerf theory for graphs, J. London Math. Soc. (2) 58 (1998), no. 3, 633–655.
[Jen02] C. A. Jensen, Contractibility of fixed point sets of auter space, Topology Appl. 119 (2002), no. 3, 287–304.
[JMM06] C. Jensen, J. McCammond, and J. Meier, The integral cohomology of the group of loops, Geom. Topol. 10 (2006), 759–784.
[KMM15] N. Koban, J. McCammond, and J. Meier, The BNS-invariant for the pure braid groups, Groups Geom. Dyn. 9 (2015), no. 3, 665–682.
[Koc12] D. H. Kochloukova, On the Σ2-invariants of the generalised R. Thompson groups of type F, J. Algebra 371 (2012), 430–456.
[KP14] N. Koban and A. Piggott, The Bieri-Neumann-Strebel invariant of the pure symmetric automorphisms of a right-angled Artin group, Illinois J. Math. 58 (2014), no. 1, 27–41.
[McC86] J. McCool, On basis-conjugating automorphisms of free groups, Canad. J. Math. 38 (1986), no. 6, 1525–1529.
[MMV98] J. Meier, H. Meinert, and L. VanWyk, Higher generation subgroup sets and the Σ-invariants of graph groups, Comment. Math. Helv. 73 (1998), no. 1, 22–44.
[OK00] L. A. Orlandi-Korner, The Bieri-Neumann-Strebel invariant for basis-conjugating automorphisms of free groups, Proc. Amer. Math. Soc. 128 (2000), no. 5, 1257–1262.
[Pet10] A. Pettet, Finiteness properties for a subgroup of the pure symmetric automorphism group, C. R. Math. Acad. Sci. Paris 348 (2010), no. 3-4, 127–130.
[Qui78] D. Quillen, Homotopy properties of the poset of nontrivial p-subgroups of a group, Adv. in Math. 28 (1978), no. 2, 101–128.
[Sav96] A. G. Savushkina, On a group of conjugating automorphisms of a free group, Mat. Zametki 60 (1996), no. 1, 92–108, 159.
[WZ16] S. Witzel and M. C. B. Zaremsky, The Basilica Thompson group is not finitely presented, Submitted. arXiv:1603.01150, 2016.
[Zar15] M. C. B. Zaremsky, On the Σ-invariants of generalized Thompson groups and Houghton groups, Submitted. arXiv:1502.02620, 2015.
Department of Mathematical Sciences, Binghamton University, Binghamton, NY 13902
E-mail address: [email protected]
| 4 |
EFFICIENT COMPRESSION OF PROLOG PROGRAMS
Alin-Dumitru Suciu
Technical University of Cluj-Napoca, Department of Computer Science
26-28, George Bariţiu St., RO-3400, Cluj-Napoca, Romania
Phone/Fax: +40-64-194491, Email: [email protected]
Kálmán Pusztai
Technical University of Cluj-Napoca, Department of Computer Science
26-28, George Bariţiu St., RO-3400, Cluj-Napoca, Romania
Phone/Fax: +40-64-194491, Email: [email protected]
Abstract
We propose a special-purpose class of compression algorithms for efficient compression
of Prolog programs. It is a dictionary-based compression method, specially designed for the
compression of Prolog code, and therefore we name it PCA (Prolog Compression Algorithm).
According to the experimental results this method provides better compression than state-of-the-art general-purpose compression algorithms. Since the algorithm works with Prolog syntactic
entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is straightforward and
very easy to use in any Prolog application that needs compression. Although the algorithm is
designed for Prolog programs, the idea can be easily applied for the compression of programs
written in other (logic) languages.
1. Introduction
The need for compression of Prolog programs naturally occurs in practice whenever we
need to send Prolog code over the network (e.g. in distributed logic frameworks, mobile logic
agents etc.) or to store large libraries of Prolog modules. Note that compression could be used
also as a form of protection if the adversary does not know the compression algorithm.
Whenever the need for such compression appears one can choose to use a general-purpose
compression algorithm [4] (e.g. Huffman, LZW, etc.) or to design and use a special-purpose
compression algorithm that best suits for one’s needs. Although designing a new compression
algorithm requires some effort it is however expected from a special-purpose compression
algorithm to achieve a better compression ratio that will justify the effort.
We will present a special-purpose class of compression algorithms for efficient
compression of Prolog programs. It is a dictionary-based compression method, specially designed
for the compression of Prolog code, and therefore we name it PCA (Prolog Compression
Algorithm). According to the experimental results this method provides better compression and is
faster than state-of-the-art general-purpose compression algorithms. Since the algorithm works
with Prolog syntactic entities (e.g. atoms, terms, etc.) the implementation of a Prolog prototype is
straightforward and very easy to use in any Prolog application that needs compression. Although
the algorithm is designed for Prolog programs, the idea can be easily applied for the compression
of programs written in other (logic) languages. There are two natural questions to answer when
we talk about the compression of Prolog code:
1. Do we need the comments any further?
2. Do we need the variable names any further?
According to the logic programming theory [3] and the Prolog Standard [2] if we remove
the comments and rename the variables this will lead to a perfectly equivalent, although less
readable, Prolog program. This action is known to every Prolog programmer by means of the
“listing” predicate. It is a common situation in practice, which can greatly benefit from the
compression algorithm we propose. According to the answer to the above questions, the table
below summarizes the PCA family of compression algorithms:
                        Rename variables    Don't rename variables
Remove comments         PCA0                PCA2
Don't remove comments   PCA1                PCA3
Table 1: The PCA family of compression algorithms
Although from the information theory viewpoint the above algorithms are lossy
(comments, original formatting and original variable names may be lost), the Prolog program
obtained after compression and decompression is equivalent to the original one. It is obvious that
removing the comments will provide a better compression and it is expectable that renaming the
usually long names of the variables with shorter ones (e.g. A-Z) will also provide a better
compression. Therefore it is expectable that PCA0 will provide the best compression while PCA3
will provide the worst compression.
We will concentrate on the PCA0 algorithm which is detailed in section 2 while in section
3 the experimental results are shown; finally we draw some conclusions and present some further
work issues.
2. The PCA0 Compression Algorithm
From a logical point of view and for an easier understanding we can split the algorithm in
five sequential steps, STEP 0..4, as one can see in Figure 1. The original Prolog program (PP) is
successively transformed in a series of normal forms, NF0..4, (NF0 is the equivalent of a “listing”
command); each normal form has an associated dictionary, D1..4, and header, H1..4. The five steps
of the algorithm are detailed below.
Figure 1: The PCA0 Compression Algorithm (PP → NF0 → NF1 → NF2 → NF3 → NF4, with the dictionaries D1–D4 and headers H1–H4 produced along the way)
STEP 0 – comments removal and variable renaming
In: Prolog Program (PP)
Out: Normal Form NF0
Action:
This step is the equivalent of a “listing” command upon the original Prolog program.
Comments and white spaces are removed, variables from each clause or directive are renamed
from A to Z; in case we need more variables we use the naming convention A1, ..., A9, B1, ...,
Z9, .... This is the only transformation of the algorithm that induces a loss of information and is
therefore irreversible; at decompression the original program will not be recoverable, but the
normal form NF0 is perfectly equivalent from a computational viewpoint and can be used instead.
In case someone needs the original variable names and/or the comments one may use the other
compression algorithms from the PCA family.
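As an illustration of the comment-removal part of this step, the Python sketch below is our own simplified helper (not the prototype's code): it strips % line comments and /* ... */ block comments while leaving quoted atoms and strings untouched, and it deliberately ignores corner cases such as 0'% character codes or doubled quotes inside atoms.

def strip_comments(src):
    # remove % line comments and /* ... */ block comments; keep quoted text verbatim
    out, i, n = [], 0, len(src)
    while i < n:
        c = src[i]
        if c in "'\"":                       # skip over a quoted atom or string
            j = i + 1
            while j < n and src[j] != c:
                j += 2 if src[j] == '\\' else 1
            out.append(src[i:j + 1])
            i = j + 1
        elif c == '%':                       # line comment: drop up to the newline
            while i < n and src[i] != '\n':
                i += 1
        elif src.startswith('/*', i):        # block comment: drop up to the closing */
            end = src.find('*/', i + 2)
            i = n if end == -1 else end + 2
        else:
            out.append(c)
            i += 1
    return ''.join(out)

print(strip_comments("p(X) :- q(X). % a clause\n/* block */ r('%not a comment')."))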
STEP 1 – building the dictionary and applying the substitution
In: Normal Form NF0
Out: Normal Form NF1, Dictionary D1, Header H1
Action:
This is the essential transformation of the algorithm in which the first dictionary (D1) is
built; every lexical entity of the program (atom, functor, number, variable, constant, etc) has an
entry in the dictionary; each entry consists of the name, arity and type (prefix, infix, postfix) of
the lexical entity. The first header (H1) is also built in this phase and contains information about
the compression method that is needed at decompression (e.g. the size of the dictionary). The
normal form NF1 is obtained by replacing every lexical entity by its index in the dictionary and
removing all the parentheses and commas. Thus the term p(a,B,f(c,d,e)) becomes
&p&a&B&f&c&d&e where &x denotes the index of entity “x”. Starting with this point the
normal form becomes a binary representation of the program; indeed, suppose that for the
example above the indexes are 102 56 42 79 100 101 27; if we maintain an ASCII representation
then we get in fact an expansion (23 bytes from the original 15 bytes); on the contrary, if we
switch to a binary representation, every index takes one byte so we get 7 bytes from the original
15 bytes. Depending on the size of the dictionary, the index will take one or more bytes; for a
regular Prolog program two bytes should generally suffice.
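A minimal Python sketch of this step is shown below; it is our own toy version, not the prototype's implementation: the regular expression only recognizes alphanumeric tokens, whereas a real implementation would also handle operators, numbers, quoted atoms and clause terminators, and would record the arity and type of each dictionary entry as described above.

import re

def step1(nf0):
    # build the dictionary of lexical entities in order of first appearance and
    # replace each entity by its index, discarding parentheses and commas
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+", nf0)
    dictionary, indices = [], []
    for tok in tokens:
        if tok not in dictionary:
            dictionary.append(tok)
        indices.append(dictionary.index(tok))
    return dictionary, indices

dictionary, indices = step1("p(a,B,f(c,d,e))")
print(dictionary)   # ['p', 'a', 'B', 'f', 'c', 'd', 'e']
print(indices)      # [0, 1, 2, 3, 4, 5, 6] -- 7 one-byte indices versus the original 15 bytes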
STEP 2 – more compact representation of indexes
In: NF1, D1, H1
Out: NF2, D2, H2
Action:
The dictionary remains the same (D2 = D1). NF2 is obtained in a similar way as NF1 but
the representation of the indexes is compacted; if the dictionary has N entries then we only need
⌈log2 N⌉ bits to represent an index. The binary representations of the indexes are appended and
grouped in bytes; the last byte is padded with zero. The number N is added to the header.
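The bit-packing can be sketched in a few lines of Python (again our own illustration; pack_indices and its exact byte layout are assumptions of the sketch, not the prototype's on-disk format).

from math import ceil, log2

def pack_indices(indices, dict_size):
    # each index is written on ceil(log2(N)) bits; the bit-stream is grouped into
    # bytes and the last byte is padded with zeros
    width = max(1, ceil(log2(dict_size)))
    bits = ''.join(format(i, '0{}b'.format(width)) for i in indices)
    bits += '0' * (-len(bits) % 8)
    return bytes(int(bits[k:k + 8], 2) for k in range(0, len(bits), 8))

packed = pack_indices([0, 1, 2, 3, 4, 5, 6], dict_size=7)
print(len(packed))   # 3 bytes: 7 indices of 3 bits each give 21 bits, padded to 24

Unpacking reverses the process using the same bit width, which is exactly why the dictionary size N must be stored in the header.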
STEP 3 – optimizing the dictionary and the header
In: NF2, D2, H2
Out: NF3, D3, H3
Action:
The program representation remains the same (NF3 = NF2). The dictionary is split in three
parts, one for each column:
Part N (Names)
Part A (Arities)
Part T (Types)
Part N contains the names of the lexical entities separated by a white space. We can
replace some frequently used operators or built in predicates with ASCII codes 0-31 or 128-255.
An additional optimization is obtained if all the variables are at the beginning of the dictionary; if
NVAR is the number of the variables then the first NVAR entries in Part N can be removed and
NVAR is added to the header H3.
If AMAX is the maximal arity, then Part A can be compressed by representing the arities
on log2AMAX bits and adding AMAX to the header H3. Part T contains only three codes:
0=prefix, 1=infix, 2=postfix. Postfix operators are rare in practice so we can use a flag TF to
compress Part T even more; if TF = 0 then there are no postfix operators and types will be
represented on one bit; if TF=1 two bits will be used.
STEP 4 – additional compression
In: NF3, D3, H3
Out: NF4, D4, H4
Action:
In case one needs an even better compression, the output of STEP 3 (H3+D3+NF3) can be
further compressed using traditional compression algorithms or even external compression
programs.
Finally we mention that an efficient implementation of the algorithm will combine and
rearrange actions from different steps and will not generate all the intermediate normal forms but
only the final result.
3. Experimental Results
We developed a prototype implementation of the algorithm PCA0 in SICStus Prolog that
allowed the study of the compression ratio for a set of test examples; compression time was not
the focus of our research yet, as this will require a more efficient implementation. As test
examples we used some of the source files from the SICStus Prolog Library.
Tables 2 and 3 below show the evolution of compression ratio during the steps of the
algorithm (Ratio x corresponds to STEP x). In Table 2 the original size of the logic program PP is
the reference while in Table 3 the normal form NF0 (i.e. the original program with comments
striped and the variables renamed) is the reference. As expected, the first table shows better
results, the extra compression obtained is due to the comments removal.
File         Ratio 0   Ratio 1   Ratio 2   Ratio 3   Ratio 4
arrays.pl    0.606     0.555     0.472     0.225     0.203
assoc.pl     0.453     0.422     0.432     0.184     0.135
db.pl        0.654     0.660     0.544     0.262     0.149
atts.pl      0.767     0.736     0.703     0.280     0.227
heaps.pl     0.463     0.448     0.391     0.188     0.147
objects.pl   0.465     0.598     0.407     0.262     0.209
sockets.pl   0.781     0.812     0.613     0.318     0.175
Table 2: Compression ratio with respect to the size of PP
File         Ratio 1   Ratio 2   Ratio 3   Ratio 4
arrays.pl    0.916     0.778     0.371     0.335
assoc.pl     0.931     0.954     0.406     0.298
db.pl        1.008     0.832     0.401     0.228
atts.pl      0.958     0.916     0.365     0.295
heaps.pl     0.968     0.843     0.405     0.318
objects.pl   1.285     0.874     0.564     0.450
sockets.pl   1.039     0.784     0.407     0.224
Table 3: Compression ratio with respect to the size of NF0
File          Winzip    Winrar    WinAce    PCA0
arrays.pl     0.344     0.337     0.343     0.203
assoc.pl      0.282     0.278     0.280     0.135
db.pl         0.254     0.252     0.254     0.149
atts.pl       0.292     0.287     0.291     0.227
heaps.pl      0.332     0.329     0.331     0.147
objects.pl    0.375     0.369     0.377     0.209
sockets.pl    0.247     0.242     0.247     0.175
Table 4: Comparison with other programs (ratio with respect to the size of PP)
File          Winzip    Winrar    WinAce    PCA0
arrays.pl     0.568     0.555     0.567     0.335
assoc.pl      0.622     0.613     0.619     0.298
db.pl         0.388     0.386     0.388     0.228
atts.pl       0.381     0.374     0.380     0.295
heaps.pl      0.717     0.709     0.715     0.318
objects.pl    0.805     0.793     0.810     0.450
sockets.pl    0.316     0.310     0.316     0.224
Table 5: Comparison with other programs (ratio with respect to the size of NF0)
Tables 4 and 5 above compare the compression ratio obtained by PCA0 with that of other
popular compression programs; we used Winzip 6.2, Winrar 2.04 and WinAce 0.96. As before, in
Table 4 the original program PP is the reference, while in Table 5 the normal form NF0 is the
reference. One can see that PCA0 outperforms all the other compression programs on all the test
examples. It is interesting that the gap between PCA0 and the other programs is even larger when
there are no comments in the original program, which shows that the strength of the PCA0 method
lies not in the removal of comments but in the other original ideas used in the algorithm.
4. Conclusions and Further Work
We presented a class of special-purpose compression algorithms for Prolog programs
named PCA (Prolog Compression Algorithm). The best algorithm of this class, PCA0, provides
better compression than state-of-the-art general-purpose compression algorithms.
Possible applications are in practice whenever we need to send Prolog code over the
network (e.g. in distributed logic frameworks, mobile logic agents etc.) or to store large libraries
of Prolog modules.
The prototype implementation we developed allowed us to study the performance of the
algorithm PCA0 only in terms of compression ratio; a more efficient implementation, which will
also allow the study of compression and decompression times, is under development.
Further work includes the implementation of the remaining algorithms of the PCA family and
the development of similar special-purpose compression algorithms for other (logic) languages.
Further refinements of the algorithms are also possible.
F -SINGULARITIES VIA ALTERATIONS
arXiv:1107.3807v4 [math.AG] 5 May 2014
MANUEL BLICKLE, KARL SCHWEDE, KEVIN TUCKER
Abstract. We give characterizations of test ideals and F -rational singularities via (regular) alterations. Formally, the descriptions are analogous to standard characterizations of
multiplier ideals and rational singularities in characteristic zero via log resolutions. Lastly,
we establish Nadel-type vanishing theorems (up to finite maps) for test ideals, and further
demonstrate how these vanishing theorems may be used to extend sections.
1. Introduction
In this paper we define an ideal, in arbitrary equal characteristic, which coincides with
the multiplier ideal over C, and coincides with the test ideal in characteristic p > 0. This
justifies the maxim:
The test ideal and multiplier ideal are morally equivalent.
We state our main theorem.
Main Theorem (Theorem 4.6, Corollary 4.8, Theorem 8.2). Suppose that X is a normal
algebraic variety over a perfect field and ∆ is a Q-divisor on X such that KX + ∆ is
Q-Cartier. Consider the ideal

J := ⋂_{π : Y −→ X} Image( π∗ OY (⌈KY − π ∗ (KX + ∆)⌉) −−Trπ−−→ K(X) ).
Here the intersection runs over all generically finite proper separable maps π : Y −→ X
where Y is regular (or equivalently just normal), and the map to the function field K(X)
is induced by the Grothendieck trace map Trπ : π∗ ωY −→ ωX (if KY = π ∗ KX + Ramπ over
the locus where π is finite, then Trπ : π∗ OY (KY ) −→ OX (KX ) is induced by the field trace
TrK(Y )/K(X) : K(Y ) −→ K(X)). We obtain the following:
(a) If X is of equal characteristic zero, then J = J (X; ∆), the multiplier ideal of (X, ∆).
(b) If X is of equal characteristic p > 0, then J = τ (X; ∆), the test ideal of (X, ∆).
Furthermore, in either case, the intersection defining J stabilizes: in other words, there
is always a generically finite separable proper map π : Y −→ X with Y regular such that
J = Image( π∗ OY (⌈KY − π ∗ (KX + ∆)⌉) −−Trπ−−→ K(X) ).
In fact, we prove a number of variants on the above theorem in further generality, i.e.
for various schemes other than varieties over a perfect field.
2010 Mathematics Subject Classification. 14F18, 13A35, 14F17, 14B05, 14E15.
Key words and phrases. Test ideal, multiplier ideal, alteration, rational singularities, F -rational singularities, Nadel vanishing.
The first author was partially supported by a DFG Heisenberg Fellowship and the DFG research grant
SFB/TRR45.
The second author was partially supported by an NSF Postdoctoral Fellowship #0703505, the NSF grant
#1064485, the NSF FRG grant #1265261, the NSF CAREER grant #1252860 and a Sloan Fellowship.
The third author was partially supported by an NSF Postdoctoral fellowship #1004344 and NSF grant
#1303077.
Of course, there are two different statements here. In characteristic zero, this statement
can be viewed as a generalization of the transformation rule for multiplier ideals under generically finite proper dominant maps, see [Laz04, Theorem 9.5.42] or [Ein97, Proposition 2.8].
In positive characteristic, a basic case of the theorem is the following characterization of
F -rational singularities – which is interesting in its own right. Recall that an alteration is
a proper and generically finite map π : Y −
→ X, it is called a regular alteration if Y is a
regular scheme [dJ96].
Corollary (Corollary 3.6, cf. [HL07, HY11]). An F -finite ring R of characteristic p > 0 is
F -rational if and only if it is Cohen-Macaulay and for every alteration (equivalently every
regular separable alteration if R is of finite type over a perfect field, equivalently every finite
dominant map with Y normal) f : Y −→ Spec R, the map f∗ ωY −→ ωR is surjective.
The proof of this special case is in fact the key step in the proof of the main theorem in
positive characteristic. The central ingredients in its proof are the argument of K. Smith
[Smi97b] that F -rational singularities are pseudo-rational, and the work of C. Huneke and
G. Lyubeznik on annihilating local cohomology using finite covers [HL07] (cf. [HH92, HY11,
SS12]). The proof of the Main Theorem additionally utilizes transformation rules for test
ideals under finite morphisms [ST10].
In this paper, we also give a transformation rule for test ideals under proper dominant
(and in particular proper birational) maps between varieties of the same dimension. More
precisely, for any normal (but not necessarily proper) variety Y and Q-divisor Γ, we define
a canonical submodule T 0 (Y, Γ) ⊆ H 0 (Y, OY (⌈KY + Γ⌉)). We use this submodule to obtain
a transformation rule for test ideals under alterations.
Theorem 1 (Theorem 6.8). Suppose that π : Y −→ X = Spec R is a proper dominant generically finite map of normal varieties over a perfect (or even F -finite) field of characteristic
p > 0. Further suppose that ∆ is a Q-divisor on X such that KX + ∆ is Q-Cartier.
Consider the canonically determined submodule (see Definition 6.1)
T 0 (Y, −π ∗ (KX + ∆)) ⊆ H 0 (Y, OY (⌈KY − π ∗ (KX + ∆)⌉))
of sections which are in the image of the trace map for any alteration of Y . Then the global
sections of τ (X; ∆) coincide with the image of T 0 (Y, −π ∗ (KX + ∆)) under the map
Tr
H 0 (Y, OY (⌈KY − π ∗ (KX + ∆)⌉)) −−−π→ K(X)
which is induced by the trace Trπ : π∗ ωY −→ ωX .
We also prove a related transformation rule for multiplier ideals under arbitrary proper
dominant maps in Theorem 8.3.
Perhaps the most sorely missed tools in positive characteristic birational algebraic geometry (in comparison to characteristic zero) are vanishing theorems for cohomology. Indeed,
Kodaira vanishing fails in positive characteristic [Ray78]. However, if X is projective in characteristic p > 0 and L is a “positive” line-bundle, cohomology classes z ∈ H i (X, L −1 ) can
often be killed by considering their images in H i (Y, f ∗ L −1 ) for finite covers f : Y −
→ X.
For example, if i ≥ 0 and L big and semi-ample, it was shown in [Bha12, Bha10] that
there exists such a cover killing any cohomology class η ∈ H i (X, L −1 ) for i < dim X (cf.
[HH92, Smi97c, Smi97a]). When we combine our main result with results from [Bha10], we
obtain the following variant of a Nadel-type vanishing theorem in characteristic p > 0 (and
a relative version). Notably, we need not require a W2 lifting hypothesis.
Theorem 2 (Theorem 5.5). Let X be a normal proper algebraic variety of finite type over
a perfect (or even F -finite) field of characteristic p > 0, L a Cartier divisor, and ∆ ≥ 0
a Q-divisor such that KX + ∆ is Q-Cartier. Suppose that L − (KX + ∆) is a big and
semi-ample Q-divisor. Then there exists a finite surjective map f : Y −→ X such that:
(a) The natural map f∗ OY (⌈KY + f ∗ (L − KX − ∆)⌉) −→ OX (L), induced by the trace
map, has image τ (X; ∆) ⊗ OX (L).
(b) The induced map on cohomology
H i (Y, OY (⌈KY + f ∗ (L − KX − ∆)⌉)) −→ H i (X, τ (X; ∆) ⊗ OX (L))
is zero for all i > 0.
Applying the vanishing theorem above, we obtain the following extension result.
Theorem 3 (Theorem 7.6). Let X be a normal algebraic variety which is proper over
a perfect (or even F -finite) field of characteristic p > 0 and D is a Cartier divisor on
X. Suppose that ∆ is a Q-divisor having no components in common with D and such
that KX + ∆ is Q-Cartier. Further suppose that L is a Cartier divisor on X such that
L − (KX + D + ∆) is big and semi-ample. Consider the natural restriction map
γ : H 0 (X, OX (⌈L − ∆⌉)) −→ H 0 (D, OD (L|D − ⌊∆⌋|D )) .
Then
T 0 (D, L|D − (KD + ∆|D )) ⊆ γ( T 0 (X, D + L − (KX + D + ∆)) ),
noting that T 0 (D, L|D − (KD + ∆|D )) ⊆ H 0 (D, OD (⌈L|D − ⌊∆⌋|D ⌉)). In particular, if
T 0 (D, L|D − (KD + ∆|D )) ≠ 0, then H 0 (X, OX (⌈L − ∆⌉)) ≠ 0.
Finally, let us remark that many of the results contained herein can be extended to
excellent (but not necessarily F -finite) local rings with dualizing complexes; in fact, this is
the setting of C. Huneke and G. Lyubeznik in [HL07]. However, moving beyond the local
case is then difficult essentially because we do not know the existence of test elements. For
this reason, and also because our inspiration comes largely from (projective) geometry, we
restrict ourselves to the F -finite setting throughout (note that any scheme of finite type
over a perfect field is automatically F -finite).
Acknowledgements: The authors would like to thank Bhargav Bhatt, Christopher Hacon,
Mircea Mustaţă and Karen Smith for valuable conversations. The authors would also like
to thank all the referees for many very useful comments. Finally, the authors worked on
this paper while visiting the Johannes Gutenberg-Universität Mainz during the summers
of 2010 and 2011. These visits were funded by the SFB/TRR45 Periods, moduli, and the
arithmetic of algebraic varieties.
2. The trace map, multiplier ideals and test ideals
Multiplier ideals and test ideals are prominent tools in the study of singularities of algebraic varieties. Later in this section, we will briefly review their constructions together with
those of certain variants – the multiplier and test module, respectively – related to various
notions of rational singularities. In doing so, we emphasize a viewpoint that relies heavily
on the use of the trace map of Grothendieck-Serre duality. In fact, the whole paper (particularly Sections 5, 6, and 7) relies on some of the more subtle properties of this theory.
First however we give a brief introduction to this theory necessary to understand the main
results of this paper that does not rely on any of these more subtle aspects.
2.1. Maps derived from the trace map. This section is designed to be a friendly and
brief introduction to the trace map at the level we will apply it for our main theorem.
Therefore, in this subsection we only deal with algebraic varieties of finite type over a perfect
field (although everything can be immediately generalized to F -finite integral schemes).
More general versions will be discussed later in Section 2.3. We will assume that the reader
is familiar with canonical and dualizing modules at the level of [KM98, Section 5.5] and
[Har77, Chapter III, Section 7].
Suppose that π : Y −
→ X is a proper generically finite map of varieties of finite type over
a field k. A key tool in this paper is a trace map
Trπ : π∗ ωY −
→ ωX .
Here ωY and ωX denote suitable canonical modules on Y and X (which we assume exist).
We will explain the origin of this map explicitly. Since any generically finite map can be
factored into a composition of a finite and proper birational map, it suffices to deal with
these cases separately:
Example 2.1 (Trace for proper birational morphism). Suppose that π : Y −
→ X is a proper
birational map between normal varieties. In this case, the trace map Trπ : π∗ ωY −
→ ωX
can be described in the following manner. Fix a canonical divisor KY on Y and set KX =
π∗ KY (in other words, recall by definition that ωY ≅ OY (KY ), and requiring that π∗ KY =
KX simply means that KX is the divisor on X that agrees with KY wherever π is an
isomorphism). Then π∗ OY (KY ) is a torsion-free sheaf whose reflexification is just OX (KX ),
since π is an isomorphism outside a codimension 2 set of X. The trace map is simply the
natural (reflexification) map π∗ OY (KY ) ֒→ OX (KX ).
Example 2.2 (Trace for finite morphism). Suppose that π : Y −
→ X is a finite surjective map
of varieties. The trace map Trπ : π∗ ωY −
→ ωX is then identified with the evaluation-at-1
map, π∗ ωY := H omOX (π∗ OY , ωX ) −
→ ωX (the neophyte reader should take on faith that
π∗ ωY ≅ H omOX (π∗ OY , ωX ), or see Section 2.3 and Remark 2.17 for additional discussion).
Assuming additionally that π : Y −
→ X is a finite separable map of normal varieties with
ramification divisor Ramπ , we fix a canonical divisor KX on X and set KY = π ∗ KX +Ramπ .
Then the field-trace map
TrK(Y )/K(X) : K(Y ) −
→ K(X)
restricts to a map π∗ OY (KY ) −
→ OX (KX ) which can be identified with the Grothendieck
trace map (cf. [ST10]).
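As a concrete illustration of the finite separable case (a small worked example of ours, not taken from the paper; it assumes char k ≠ 2):
\[
\pi\colon \operatorname{Spec} k[t] \longrightarrow \operatorname{Spec} k[x], \qquad x \mapsto t^2 .
\]
Here K(Y) = k(t) is free over K(X) = k(x) with basis {1, t}, and multiplication by a + bt has matrix
\[
\begin{pmatrix} a & bx \\ b & a \end{pmatrix},
\qquad\text{so}\qquad
\operatorname{Tr}_{K(Y)/K(X)}(a+bt) = 2a ,
\]
while the ramification divisor is Ramπ = div(t), concentrated over x = 0.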
Below in subsection 2.3 we will explain that this construction of a trace map is just an
instance of a much more general theory contained in Grothendieck-Serre duality. We do
not need this generality for our main theorem however.
We now mention two key properties that we will use repeatedly in this basic context.
Lemma 2.3 (Compatibilities of the trace map). Suppose that π : Y −→ X is a proper
generically finite dominant morphism between varieties (or integral schemes). Fix Trπ :
π∗ ωY −→ ωX to be the trace map as above.
(a) If additionally, ρ : Z −→ Y is another proper generically finite dominant morphism
and Trρ : ρ∗ ωZ −→ ωY is the associated trace map, then Trπ ◦(π∗ Trρ ) = Trπ◦ρ .
(b) Additionally, if U ⊆ X is open and W = π −1 (U ), then Trπ|W = Trπ |U (here Trπ :
π∗ ωY −→ ωX is a map of sheaves on X and so can be restricted to an open set). In
other words, the trace map is compatible with open immersions.
Proof. These properties follow directly from the definition given.
Because much of our paper is devoted to studying singularities defined by Frobenius,
utilizing Lemma 2.3 we specialize Example 2.2 to the case where π is the Frobenius.
Example 2.4 (Trace of Frobenius). Suppose that X is a variety of finite type over a perfect
field of characteristic p > 0. Then consider the absolute Frobenius map F : X −
→ X, this
map is not a map of varieties over k, but it is still a map of schemes. Using the fact (cf.
Example 2.15) that H omOX (F∗ OX , ωX ) ≅ F∗ ωX , and applying Example 2.2, we obtain the
evaluation-at-1 trace map,
TrF : F∗ ωX −
→ ωX .
Because of the importance of this map in what follows, we will use the notation ΦX to
denote TrF . As an endomorphism of X one can compose the Frobenius with itself and
obtain the e-iterated Frobenius F e . It follows from Lemma 2.3(a) that TrF e then coincides
with the composition of TrF with itself e-times (appropriately pushed forward). Because of
this, we use ΦeX to denote TrF e .
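For a completely explicit example (ours, included only as an illustration): on the affine line over a perfect field, ΦX is, up to the standard identification ωX ≅ OX · dx, the classical Cartier operator:
\[
X = \mathbb{A}^1_k = \operatorname{Spec} k[x], \qquad
F_*\omega_X = \bigoplus_{i=0}^{p-1} \mathcal{O}_X \cdot F_*(x^i\,dx),
\]
\[
\Phi_X\big(F_*(x^i\,dx)\big) =
\begin{cases}
x^{(i+1)/p - 1}\,dx & \text{if } p \mid i+1,\\
0 & \text{otherwise.}
\end{cases}
\]
In particular ΦX (F∗ (x^{p−1} dx)) = dx, while ΦX (F∗ (x^i dx)) = 0 for 0 ≤ i < p − 1.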
Now we come to a compatibility statement for images of trace maps that will be absolutely
crucial later in the paper. This is essentially the dual statement to a key observation
from [Smi97b]. We will generalize this later in Proposition 2.21 and also in the proof of
Proposition 4.2.
Proposition 2.5. If π : Y −→ X is a proper dominant generically finite map of varieties,
then the image of the trace map
Jπ := Trπ (π∗ ωY ) ⊆ ωX
satisfies ΦX (F∗ Jπ ) ⊆ Jπ .
Proof. Consider the commutative diagram

         F
     Y ------> Y
     |         |
   π |         | π
     v    F    v
     X ------> X

where the horizontal maps are the Frobenius on X and Y respectively. It follows from
Lemma 2.3(a) that there is a commutative diagram

                 π∗ ΦY
    F∗ π∗ ωY ----------> π∗ ωY
        |                   |
 F∗ Trπ |                   | Trπ
        v         ΦX        v
     F∗ ωX  ---------->    ωX .

The claimed result follows immediately.
2.2. Pairs. The next step is to extend the trace map to incorporate divisors. Suppose that
X is a normal integral scheme. A Q-divisor Γ on X is a formal linear combination of prime
Weil divisors with coefficients in Q. Writing Γ = Σ bi Bi where the Bi are distinct prime
divisors, we use ⌈Γ⌉ = Σ ⌈bi⌉ Bi and ⌊Γ⌋ = Σ ⌊bi⌋ Bi to denote the round up and round
down of Γ, respectively. We say Γ is Q-Cartier if there exists an integer n > 0 such that
nΓ is an integral (i.e. having integer coefficients) Cartier divisor, and the smallest such n
is called the index of Γ. An integral divisor KX with OX (KX ) ≅ ωX is called a canonical
divisor.
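For instance (a simple illustrative example of the rounding and index conventions just introduced; the divisors B1 , B2 are hypothetical):
\[
\Gamma = \tfrac{3}{2}B_1 - \tfrac{1}{4}B_2
\quad\Longrightarrow\quad
\lceil \Gamma \rceil = 2B_1, \qquad \lfloor \Gamma \rfloor = B_1 - B_2 ,
\]
and if 4Γ = 6B1 − B2 is Cartier while no smaller positive multiple of Γ is, then Γ is Q-Cartier of index 4.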
Definition 2.6. A pair (X, ∆) is the combined data of a normal integral scheme X together
with a Q-divisor ∆ on X. The pair (X, ∆) is called log-Q-Gorenstein if KX +∆ is Q-Cartier.
We emphasize that log-Q-Gorenstein pairs need not be Cohen-Macaulay.
Convention 2.7. For X normal and integral let ∆ be a Q-divisor on X. The choice of a
rational section s ∈ ωX gives a canonical divisor KX = KX,s = div s and also a map
ωX ⊆ ωX ⊗ K(X) −→ K(X), s ↦ 1. Then the image of the inclusion ωX (−KX,s − ⌊∆⌋) ֒→ K(X) (again induced by s ↦ 1)
is the subsheaf OX (−⌊∆⌋) ⊆ K(X). Note the image is independent of the choice of s but
the inclusion maps for different sections may differ by multiplication with a unit of OX .
Hence, every OX -submodule of ωX (−⌊KX +∆⌋) corresponds uniquely to an OX -submodule
of OX (−⌊∆⌋) (or even ⊆ OX when ∆ is effective). As such, we have chosen to accept certain abuses of notation in order to identify such submodules. For example, we may write
ωX (−⌊KX + ∆⌋) ⊆ K(X) (or ⊆ OX when ∆ is effective); however, while it is canonical as
a subset (and equals OX (−⌊∆⌋)), the actual inclusion map involves the choice of a section
(and is well defined only up to a multiplication by a unit of OX ).
We now state a result incorporating divisors into the trace map.
Proposition 2.8. Suppose that π : Y −→ X is a proper dominant generically finite morphism between normal varieties, and let ∆ be a Q-divisor on X such that KX + ∆ is
Q-Cartier. Then the trace map of π induces a non-zero map
Trπ : π∗ ωY (−⌊π ∗ (KX + ∆)⌋) −→ ωX (−⌊KX + ∆⌋) .
Proof. This result is simple based upon Examples 2.1 and 2.2 and so we leave it to the reader
to verify. We carefully prove more general results in Propositions 2.13 and 2.18 below.
2.3. Duality and the trace map. This section restates the results of the previous section
in the more general language of dualizing complexes. While these results will be important
for generalizations of our main theorem and for some of the Kodaira-type vanishing theorems, they are not needed for the main result stated in the introduction. Therefore, we
invite the reader to skip the next section and instead read ahead to Section 2.4.
From now on we assume that all schemes X are Noetherian, excellent, separated and
possess a dualizing complex ω•X . This is a relatively mild condition since, for example,
all Noetherian schemes that are of finite type over a local Gorenstein ring of finite Krull
dimension have a dualizing complex [Har66, Chapter V, Section 10]. By definition [Har66,
Chapter V, Section 2], a dualizing complex on X is an object in D^b_coh (X) which has finite
injective dimension and such that the canonical map OX −→ RH omX (ω•X , ω•X ) is an
isomorphism in D^b_coh (X).
Since dualizing complexes are defined by properties in the derived category, they are only
unique up to quasi-isomorphism. But even worse, if ω• is a dualizing complex, then so is
ω• ⊗ L [n] for any integer n and line bundle L . But this is all the ambiguity there is for a
connected scheme: if Ω• is another dualizing complex then there is a unique line bundle L
and a unique shift n such that Ω• is quasi-isomorphic to ω• ⊗ L [n]; see [Har66, Theorem
V.3.1]. The ambiguity with respect to the shift is the least serious. For this, we say that a
dualizing complex on an integral scheme (or a local scheme) is normalized if the
first non-zero cohomology of ω•X lies in degree (− dim X).
A canonical module ωX on a reduced and connected scheme X is a coherent OX -module
that agrees with the first non-zero cohomology of a dualizing complex ω•X . In particular,
for a normalized dualizing complex ω•X , its (− dim X)-th cohomology ωX := h− dim X ω•X is
a canonical module; since it is the first non-zero cohomology there is a natural inclusion
ωX [dim X] ֒→ ω•X . If X is S1 , i.e. satisfies Serre's first condition, then ωX is S2 by [Har07,
Lemma 1.3]. Also see [KM98, Corollary 5.69] where it is shown that any (quasi-)projective
scheme over a field has an S2 canonical module. We also note that if X is integral, then
ωX can be taken to be the S2 -module agreeing with the dualizing complex on the CohenMacaulay locus of X.
We shall make extensive use of the trace map from Grothendieck-Serre duality, see
[Con00, Har66]. For S a base scheme, Noetherian, excellent, and separated, we consider the
category SchS of S-schemes (essentially) of finite type over S, with S-morphisms between
q
them. We assume as before that S has a dualizing complex ω•S . Then Grothendieck duality
theory provides us with a functor f ! for every S-morphism f : Y −→ X with the properties:
(a) ( )! is compatible with composition, i.e. if g : Z −→ Y is a further S-morphism,
then there is a natural isomorphism of functors (f ◦ g)! ≅ g ! ◦ f ! which is compatible
with triple composition.
(b) If f is of finite type and ω•X is a dualizing complex on X, then f ! ω•X is a dualizing
complex on Y . If additionally f is a dominant morphism of integral schemes and
ω•X is normalized, then f ! ω•X is also normalized.
(c) If f is a finite map, then f ! ( ) = RH omOX (f∗ OY , ) viewed as a complex of
OY -modules. Note that the right hand side is defined for any finite morphism, not
just for an S-morphism.
Therefore we may define for each S-scheme X with structural map πX : X −→ S the
dualizing complex ω•X := πX! ω•S . After this choice of dualizing complexes on SchS , the
compatibility with composition now immediately implies that for any S-morphism f : Y −→ X
we have a canonical isomorphism f ! ω•X ≅ ω•Y .
q
Remark 2.9. In the remainder of the paper, the base scheme S will typically either be a
field, or it will be the scheme X we are interested in. Note that in either case the absolute
Frobenius map F : X −
→ X is not a map of S-schemes with the obvious choice of (the
same) structural maps. However, using the composition F : X −
→X −
→ S, we do obtain a
new S-scheme structure for X and so we view F : X −
→ X as a map of different S-schemes.
A key point in the construction of ( )! is that for f : Y −→ X proper there is a natural
transformation of functors
Rf∗ f ! −→ idX ,
called the trace map, which induces a natural isomorphism of functors in the derived category
Rf∗ RH omY (M • , f ! N • ) −→ RH omX (Rf∗ M • , N • )
for any bounded above complex of quasi-coherent OY -modules M • and bounded below
complex of coherent OX -modules N • . This statement, which expresses that f ! is right
adjoint to Rf∗ for proper f , is the duality theorem in its general form.
adjoint to Rf∗ for proper f , is the duality theorem in its general form.
q
Applying the trace map to the dualizing complex ω•X we obtain
(2.10)    Trf • : Rf∗ ω•Y ≅ Rf∗ f ! ω•X −→ ω•X ,
which we also refer to as the trace map and denote by Trf • . Equivalently, by the duality
theorem, the trace map is Grothendieck-Serre dual to the corresponding map of structure
sheaves f : OX −→ Rf∗ OY . The key properties of the trace relevant for us are
(a) Compatibility with composition, i.e. if g : Z −→ Y is another proper S-morphism
then Tr(f ◦g) • = Trf • ◦ Rf∗ Trg • .
(b) The trace map is compatible with certain base changes. In general this is a difficult
and subtle issue (see [Con00]); however, we will only need this for open inclusions
U ⊆ X and localization at a point, where it is not problematic.
(c) In the case that f is finite, Trf • is locally given by evaluation at 1,
Trf • : Rf∗ f ! ω•X = Rf∗ RH omOX (f∗ OY , ω•X ) −→ ω•X .
For a proper dominant morphism π : Y −→ X of integral schemes, the trace gives rise to
maps on canonical modules as well (not just dualizing complexes). Taking the (− dim X)-th
cohomology of Trπ • : Rπ∗ ω•Y −→ ω•X gives
(2.11)    Trπ • : h− dim X (Rπ∗ ω•Y ) −→ h− dim X ω•X =: ωX ,
which we will also denote by Trπ • . Further composing with the inclusion ωY [dim Y ] −→ ω•Y
then gives
(2.12)    Trπ : hdim Y −dim X Rπ∗ ωY −→ h− dim X Rπ∗ ω•Y −−Trπ •−−→ ωX ,
which we also refer to as the trace map and now denote by Trπ . Note that the above
construction remains compatible with localization on the base scheme, which we make use
of below in showing under mild assumptions that this trace map is non-zero.
Proposition 2.13. If π : Y −→ X is a proper dominant morphism of integral schemes, then
the trace map
Trπ : hdim Y −dim X Rπ∗ ωY −→ ωX
is non-zero.
Proof. To show the statement we may assume that X = Spec K for K a field. The
q
q
q
map H0 (Y, ωY ) = h0 Rπ∗ ωY −
→ ωK ∼
= K is non-zero as it is Grothendieck-Serre dual
to the natural inclusion K −
→ H 0 (Y, OY ). Hence it is sufficient to show that the map
q
hdim Y (Rπ∗ ωY ) −
→ h0 (Rπ∗ ωY ) is surjective. If Y is Cohen-Macaulay, i.e. ωY [dim Y ] =
q
ωY , we are done. More generally consider the hypercohomology spectral sequence E2 =
q
q
q
q
H i (Y, hj ωY ) ⇒ Hi+j (Y, ωY ). The only terms contributing to H0 (Y, ωY ) are H i (Y, hj ωY )
q
with i + j = 0. In the next lemma, it is shown that dim supp hj ωY < −j for j > − dim Y ,
which hence implies that H dim Y (Y, ωY ) = hdim Y (Rπ∗ ωY ) is the only non-vanishing term
q
among them and thus surjects onto h0 (Rπ∗ ωY ) as claimed.
Lemma 2.14. Let (R, m) be an equidimensional local S1 ring of dimension n with normalq
ized dualizing complex ωR . Then
dim supp h−j ωR < j
q
for j < n.
Proof. By local duality h0 ωR is Matlis dual to Γm (R) which by the S1 condition is zero.
This shows the lemma for j = 0, and hence in particular for n = 1, so that we may proceed
by induction on n.
q
Assuming that 1 ≤ j < n, if dim supp h−j ωR = 0 we are done. Otherwise, we have
q
c = dim supp h−j ωR > 0 and can take p 6= m to be a minimal prime in the support
q
q
of h−j ωR with dim R/p = c. Since dim Rp = n − c < n, we have that (ωR )p [−c] is a
normalized dualizing complex for Rp . Thus, by the induction hypothesis it follows
q
0 = dim supp h−j (ωR )p = dim supp h−j+c (ωR )p [−c] < j − c
q
so that c = dim supp h−j ωR < j as desired.
q
q
We address now a particularly subtle issue surrounding the upper shriek functor and
Frobenius.
Example 2.15 (Trace of Frobenius and behavior of dualizing complexes). A particularly
important setting in this paper is that of a scheme X essentially of finite type over an F finite base scheme S of positive characteristic p (e.g. a perfect field of characteristic p > 0).
This means simply that the (absolute) Frobenius or p-th power map F : X −
→ X is a finite
morphism, and hence proper. Thus we have the “evaluation at 1” trace map
q ∼
q
q
TrF • : F∗ F ! ω•X ≅ F∗ RH omX (F∗ OX , ω•X ) −→ ω•X .
However, note that since the Frobenius F is generally not an S-morphism we do not have,
a priori, that F ! ω•X ≅ ω•X . This needs an additional assumption, namely that this property
holds for the base scheme S. Using a fixed isomorphism F ! ω•S ≅ ω•S , the compatibility of
( )! with composition in the commutative diagram

         F
     X ------> X
     |         |
   π |         | π
     v    F    v
     S ------> S

shows that indeed
F ! ω•X ≅ F ! π ! ω•S ≅ π ! F ! ω•S ≅ π ! ω•S ≅ ω•X .
Convention 2.16. For simplicity, we will always assume that all our base schemes S of
q
q
positive characteristic p are F -finite and satisfy F ! ωS ∼
= ωS for the given choice of dualizing
q
complex ωS . This is automatic if S is the spectrum of a local ring (e.g. a field).
q ∼ q
Remark 2.17. The assumption F ! ωS =
ωS is convenient but could nonetheless be avoided.
q
q
!
Notice that regardless, F ωS is a dualizing complex, and so it already agrees with ωS up to
a shift and up to tensoring with an invertible sheaf. It is easy to see that the shift is zero
q
q
since F is a finite map, thus we have F ! ωS ∼
= ωS ⊗OS L for some invertible sheaf L . One
option would be to carefully keep track of L throughout all constructions and arguments
– this we chose to avoid. In any case, if one is willing to work locally over the base S (as is
q
q
the case with most of our main theorems) one may always assume that F ! ωS = ωS simply
by restricting to charts where L is trivial.
Proposition 2.8 and Proposition 2.13 can be combined as follows.
Proposition 2.18. Suppose that π : Y −→ X is a proper dominant morphism between
normal integral schemes, and let ∆ be a Q-divisor on X such that KX + ∆ is Q-Cartier.
Then the trace map of π induces a non-zero map
Trπ : hdim Y −dim X Rπ∗ ωY (−⌊π ∗ (KX + ∆)⌋) −→ ωX (−⌊KX + ∆⌋) .
Similarly, if additionally π ∗ (KX + ∆) is a Cartier divisor, then we have another non-zero
map
q
Trπ q : h− dim X Rπ∗ ωY (−⌊π ∗ (KX + ∆)⌋) −→ ωX (−⌊KX + ∆⌋) .
Proof. The difficulty here lies in that ⌊KX + ∆⌋ need not be Q-Cartier. To overcome this,
i
let U ֒−→ X be the regular locus of X, then we have the trace map
Tr
hdim Y −dim X Rπ∗ ωπ−1 (U ) −−−π−→ ωU
q
which is just the restriction of (2.12) to U . Tensoring this map by the invertible sheaf
OU (−⌊(KX + ∆)⌋), we have by the projection formula
hdim Y −dim X Rπ∗ ωπ−1 (U ) (−π ∗ ⌊(KX + ∆)⌋) −
→ ωU (−⌊(KX + ∆)⌋) .
Since −π ∗ ⌊KX + ∆⌋ ≥ −⌊π ∗ (KX + ∆)⌋ on U (where again ⌊KX + ∆⌋ is Cartier), we have
an induced map
(2.19)
hdim Y −dim X Rπ∗ ωπ−1 (U ) (−⌊π ∗ (KX + ∆)⌋) −
→ ωU (−⌊(KX + ∆)⌋) .
Applying i∗ ( ) to (2.19) and composing with the restriction map
hdim Y −dim X Rπ∗ ωY (−⌊π ∗ (KX + ∆)⌋) −
→ i∗ hdim Y −dim X Rπ∗ ωπ−1 (U ) (−⌊π ∗ (KX + ∆)⌋)
now gives the desired first map
Trπ : h− dim X Rπ∗ ωY (−⌊π ∗ (KX + ∆)⌋) −
→ ωX (−⌊KX + ∆⌋)
by noting that i∗ ωU (−⌊(KX + ∆)⌋) = ωX (−⌊(KX + ∆)⌋). To see that Trπ is non-zero,
localize to the generic point of X (where KX + ∆ is trivial) and apply Proposition 2.13.
The construction of the second map Trπ q is similar, rather starting from (2.11) in place
of (2.12). Notice that we require that π ∗ (KX + ∆) to be Cartier so that we have a means
q
of interpreting ωY (−⌊π ∗ (KX + ∆)⌋). Since Trπ q agrees with Trπ generically by the proof
of Proposition 2.13, this map is again non-zero.
In the previous Proposition, we studied trace maps twisted by Q-divisors. In the next
Lemma, we study a special case of this situation which demonstrates that sometimes this
trace map can be re-interpreted as generating a certain module of homomorphisms.
Lemma 2.20. Let X be a normal integral F -finite scheme. Suppose that ∆ is a Q-divisor
such that (pe − 1)(KX + ∆) = div c for some e > 0 and 0 ≠ c ∈ K(X). If ΦeX : F∗e ωX −→ ωX
is the trace of the e-iterated Frobenius, then the homomorphism φ( ) = ΦeX (F∗e c · )
generates HomOX (F∗e OX (⌈(pe − 1)∆⌉), OX ) as an F∗e OX -module.
Proof. Essentially by construction (and the definition of ωX ), we have that ΦeX generates
HomOX (F∗e ωX , ωX ) as an F∗e OX -module. Using the identification ωX = OX (KX ) we may
consider ΦeX to generate
q
HomOX (F∗e OX ((1 − pe )KX ), OX ) = HomOX (F∗e ωX , ωX ) .
·c
But then multiplication by c induces an isomorphism OX ((pe − 1)∆) −−→ OX ((1 − pe )KX )
(note (pe − 1)∆ is integral), so that ΦX (F∗e c · ) generates HomOX (F∗e OX (⌈(pe − 1)∆⌉), OX )
as an F∗e OX -module as well.
A main technique in this paper is the observation that the images of the various trace
maps Trπ are preserved under the trace of the Frobenius. We will show this now for
Trπ : hdim Y −dim X Rπ∗ ωY −
→ ωX . We will obtain a partial generalization involving Qdivisors within the proof of Proposition 4.2.
Proposition 2.21. If π : Y −→ X is a proper dominant map of integral schemes, the image
of the trace map
Jπ := Trπ (hdim Y −dim X Rπ∗ ωY ) ⊆ ωX
satisfies ΦX (F∗ Jπ ) ⊆ Jπ .
Proof. Since the Frobenius commutes with any map, we get the following diagram, for which
we consider the corresponding commutative diagram of structure sheaves:

         π                                   π
     Y ------> X                Rπ∗ OY <------- OX
     ^         ^                    |              |
   F |         | F            Rπ∗ F |              | F
     |         |                    v     F∗ π     v
     Y ------> X             F∗ Rπ∗ OY <------- F∗ OX .
         π

Applying duality now gives the following commutative diagram of trace maps

(2.22)
                     Trπ •
        Rπ∗ ω•Y ------------> ω•X
            ^                    ^
 Rπ∗ TrF •  |                    | TrF •
            |                    |
     F∗ Rπ∗ ω•Y ------------> F∗ ω•X .
                   F∗ Trπ •

Taking the (− dim X)-th cohomology and composing with the inclusion ωY [dim Y ] −→ ω•Y
then gives

(2.23)
  hdim Y −dim X Rπ∗ ωY -----> h− dim X Rπ∗ ω•Y ---Trπ •---> ωX
            ^                          ^                     ^
            | hdim Y −dim X Rπ∗ ΦY     |                     | ΦX
            |                          |                     |
  hdim Y −dim X F∗ Rπ∗ ωY ---> h− dim X F∗ Rπ∗ ω•Y ---F∗ Trπ •---> F∗ ωX

where the left vertical map exists since F is finite and hence F∗ is exact. The horizontal
composition on the top is Trπ and the image Trπ (hdim Y −dim X Rπ∗ ωY ) is Jπ ⊆ ωX . By the
exactness of F∗ , the horizontal composition on the bottom is F∗ Trπ and we get that F∗ Jπ
is the image, and the result now follows.
2.4. Multiplier ideals and pseudo-rationality.
Definition 2.24. [LT81] We say that a reduced connected scheme X is pseudo-rational if
(a) X is Cohen-Macaulay, and
(b) π∗ ωY = ωX for every proper birational map π : Y −
→ X.
Furthermore, if there exists a resolution of singularities π : Y −
→ X, then it is sufficient to
check (b) for this one map π.
If X is of characteristic zero, this coincides via Grauert-Riemenschneider vanishing [GR70]
with the classical definition of rational singularities, meaning there exists a resolution of
singularities π : Y −
→ X such that OX ∼
= π∗ OY and hi Rπ∗ OY = 0 for all i. In positive or
mixed characteristic, it is a distinct notion.
Remark 2.25. It was remarked in [GR70] that if π : Y −
→ X is a resolution of singularities
in characteristic zero, then the subsheaf π∗ ωY ⊆ ωX is independent of the choice of resolution of singularities. This subsheaf should be viewed as an early version of the multiplier
ideal. Compare with the definition of the multiplier module below and the parameter test
submodule in Definition 2.33.
Going back to ideas in K. Smith’s thesis and [Smi95], the natural object to deal with
rational singularities of pairs is the multiplier module, cf. [Bli04, ST08].
Definition 2.26. Given a pair (X, Γ) with Γ a Q-Cartier Q-divisor, then the multiplier module
is defined as
J (ωX ; Γ) := ⋂_{π : Y −→ X} π∗ ωY (⌈−π ∗ Γ⌉)
where π ranges over all proper birational maps with normal Y .
Note that from this definition it is not clear that J (ωX ; Γ) is even quasi-coherent, as the
infinite intersection of coherent subsheaves need not be quasi-coherent in general. However, if there is a theory of resolution of singularities available (for example over a field
of characteristic zero [Hir64], or in dimension ≤ 2 [Lip78]), it is straightforward to check
coherence by showing that the above intersection stabilizes. Recall that a log resolution of
the pair (X, Γ) is a proper birational map π : Y −
→ X with Y regular and exceptional set
E of pure codimension one such that Supp(E) ∪ π −1 (Supp(Γ)) is a simple normal crossings
divisor. Assuming every normal proper birational modification can be dominated by a log
resolution, one can in fact show
J (ωX ; Γ) = π∗ ωY (⌈−π ∗ Γ⌉)
for any single log resolution π : Y −
→ X, which is in particular coherent. Note that, for
effective Γ it is a subsheaf of ωX via the natural inclusion π∗ ωY ⊆ ωX as in Example 2.1.
Immediately from this definition it follows that X is pseudo-rational if and only if X is
Cohen-Macaulay and J (ωX ) := J (ωX ; 0) = ωX . Hence one defines:
Definition 2.27 ([ST08]). A pair (X, Γ) with Γ ≥ 0 a Q-Cartier Q-divisor is called pseudorational if X is Cohen-Macaulay and J (ωX ; Γ) = ωX . Note that this implies that ⌊Γ⌋ = 0.
The classical notion is of course that of multiplier ideals, which have been defined primarily in characteristic zero. See [Laz04] for a complete treatment in this setting. Historically,
while multiplier ideals first appeared in more analytic contexts and were originally defined
using integrability conditions, one facet of their pre-history was defined for any normal
integral scheme – J. Lipman’s adjoint ideals [Lip94]. The definition we give here (which
makes sense in arbitrary characteristic) is a slight generalization of Lipman’s definition to
the modern setting of pairs.
Definition 2.28 ([Laz04, Lip94]). Given a log-Q-Gorenstein pair (X, ∆) then the multiplier
ideal is defined as
J (X; ∆) := ⋂_{π : Y −→ X} π∗ OY (⌈KY − π ∗ (KX + ∆)⌉)
where π ranges over all proper birational maps with normal Y and, for each individual π,
we have that KX and KY agree wherever π is an isomorphism.
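As a basic example of this definition (a standard illustration, ours rather than the paper's, and assuming enough resolutions exist so that the intersection stabilizes as discussed below): if X is regular and ∆ = c·D for a nonsingular prime divisor D and c ≥ 0, the pair is already simple normal crossings and one expects
\[
\mathcal{J}(X;\, c\cdot D) \;=\; \mathcal{O}_X\big(\lceil K_X - (K_X + cD) \rceil\big) \;=\; \mathcal{O}_X(-\lfloor c \rfloor D),
\]
so that, for example, J (X; c·D) = OX for 0 ≤ c < 1 and J (X; c·D) = OX (−D) for 1 ≤ c < 2.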
As above, in a non-local setting, for J (X; ∆) to be quasi-coherent one needs a good
theory of resolution of singularities. In this situation,
J (X; ∆) = π∗ OY (⌈KY − π ∗ (KX + ∆)⌉)
for any log resolution π : Y −
→ X of the pair (X, ∆). In general, J (X; ∆) depends heavily on ∆ and not simply the corresponding linear or Q-linear equivalence class; a similar
observation holds for the multiplier module as well.
If (X, ∆) is a pair, strictly speaking the object J (ωX ; KX + ∆) is ambiguous as KX is
not uniquely determined (and represents a linear equivalence class of divisors). Nonetheless,
for each choice of KX we have that J (ωX ; KX + ∆) ⊆ ωX (−⌊KX + ∆⌋), and is thereby
identified with a submodule of OX (−⌊∆⌋) using Convention 2.7. This construction is in fact
independent of the choice of KX , and allows one to relate multiplier ideals and multiplier
modules in general.
Lemma 2.29. If (X, ∆) is a log-Q-Gorenstein pair, then J (ωX ; (KX + ∆)) = J (X; ∆).
Proof. Suppose π : Y −
→ X is a proper birational map, Y is normal, and KY and KX agree
wherever π is an isomorphism. Making full use of Convention 2.7, we have
=
OX (−⌊∆⌋)
⊆
⊆
ωX (−⌊KX + ∆⌋)
π∗ ωY (−⌊π ∗ (KX + ∆)⌋) = π∗ OY (⌈KY − π ∗ (KX + ∆)⌉)
and the desired conclusion now follows immediately from the definitions.
Definition 2.30. A log-Q-Gorenstein pair (X, ∆) with ∆ ≥ 0 effective is called Kawamata
log terminal if J (X; ∆) = OX .
2.5. The parameter test submodule and F -rationality. We now turn to the characteristic p > 0 notion of F -rationality, extensively studied in [FW89] and [Smi97b], which is
central to our investigations.
Definition 2.31. Suppose that X is reduced, connected, and F -finite (and satisfies Convention 2.17). We say that X is F -rational if
(a) X is Cohen-Macaulay.
(b) There is no proper submodule M ⊆ ωX , non-zero on every irreducible component of
X where ωX is non-zero, such that ΦX (F∗ M ) ⊆ M where ΦX : F∗ ωX −
→ ωX is the
trace of Frobenius as in Example 2.4.
Remark 2.32. While the preceding characterization of F -rationality differs from the definition used historically throughout the literature, it is nonetheless readily seen to be equivalent. Indeed, when X = Spec R for a local ring R with maximal ideal m, the characterization of the tight closure of zero in Hmdim R (R) found in [Smi97b] implies that condition (b)
is Matlis dual to the statement 0∗H dim R (R) = 0.
m
Definition 2.33 ([Smi95, Bli04, ST08]). Suppose that X is reduced, connected, and F -finite
(and satisfies Convention 2.17). The parameter test submodule τ (ωX ) is the unique smallest
subsheaf M of ωX , non-zero on every irreducible component of X where ωX is nonzero,
such that ΦX (F∗ M ) ⊆ M where ΦX : F∗ ωX −
→ ωX is trace of Frobenius.
For X normal and integral, the parameter test submodule τ (ωX ; Γ) of a pair (X, Γ)
with Γ ≥ 0 is the unique smallest non-zero subsheaf M of ωX such that φ(F∗e M ) ⊆ M
for all local sections φ ∈ H omOX (F∗e ωX (⌈(pe − 1)Γ⌉), ωX ) and all e > 0, noting that
ωX ⊆ ωX (⌈(pe − 1)Γ⌉). The further observation φ(F∗ ωX (⌈−Γ⌉)) ⊆ ωX (⌈−Γ⌉) gives that
τ (ωX ; Γ) ⊆ ωX (⌈−Γ⌉).
Once again, the preceding definition is separate from but equivalent to that which is
commonly used throughout the literature. Moreover, standard arguments on the existence
of test elements are required to show the (non-obvious) fact that τ (ωX ) and τ (ωX ; Γ) (as
above) are well-defined. See, for example, [Sch11, Proposition 3.21] (cf. [Sch10, Lemma
2.17]), or more generally [BB11, Bli13].
Lemma 2.34. Suppose that X is reduced, connected, and F -finite. Then X is F -rational
if and only if it is Cohen-Macaulay and τ (ωX ) = ωX . Furthermore, any F -rational scheme
is normal.
Proof. The first statement is an immediate consequence of the definitions, so we need only
show the second. Without loss of generality, we may assume that X = Spec R where R is
an F -rational local ring. If RN is the normalization of R, we will show the inclusion map
i:R−
→ RN is an isomorphism. As R is already assumed Cohen-Macaulay it is S2 , and so
by Serre’s criterion for normality we simply need to check that R is regular in codimension
1. Thus, by localizing we may assume that R is one dimensional (and thus so is RN , which
now must be regular). Consider the following commutative diagram of rings together with
its corresponding Grothendieck-Serre dual (all rings in question are Cohen-Macaulay)
RO N
F
F∗ i
i
R
/ F∗ (RN )
O
F
/ F∗ R
ωRN o
ΦR N
F∗ ωRN
F∗ (i∨ )
i∨
ωR o
ΦR
F∗ ωR
.
As in Example 2.2, i∨ is identified with the evaluation-at-1 map HomR (RN , ωR ) −
→ ωR .
Since i is birational, it is easy to see i∨ is injective. In particular, i∨ is non-zero and
thus also surjective by the definition of F -rationality. It follows that i∨ and hence i are
isomorphisms, whence R is normal as desired.
In order to consider pairs (X, ∆) with ∆ not necessarily effective, we need to recall the
following lemma.
Lemma 2.35. [ST08, Proposition 7.10(3)][ST10, Lemma 6.12] Suppose that (X, ∆) is a pair
with ∆ ≥ 0, and that D ≥ 0 an integral Cartier divisor on X = Spec R, then τ (ωX ; ∆+D) =
τ (ωX ; ∆) ⊗ OX (−D).
Definition 2.36. Suppose that (X, Γ) is a pair. Fix a Cartier divisor D on X such that
Γ + D is effective (these always exist on affine charts). Then the parameter test module of
(X, Γ) is defined as
τ (ωX ; Γ) := τ (ωX ; Γ + D) ⊗ OX (D) .
It is also easy to verify that this definition is independent of the choice of D, hence our local
definition globalizes. It is straightforward to check that τ (ωX ; Γ) ⊆ ωX (⌈−Γ⌉).
In defining the test ideal of a pair below, we handle the non-effective case analogously.
Definition 2.37. Suppose that X is reduced and F -finite. The test ideal τ (X) is the
unique smallest ideal J of OX , non-zero on every irreducible component of X, such that
φ(F∗e J) ⊆ J for every local section φ ∈ H omOX (F∗e OX , OX ) and all e > 0.
If (X, ∆) is a pair with ∆ ≥ 0, the test ideal τ (X; ∆) is the unique smallest non-zero ideal
J of OX such that φ(F∗e J) ⊆ J for any local section φ ∈ H omOX (F∗e OX (⌈(pe − 1)∆⌉), OX )
and all e > 0, noting that J ⊆ OX ⊆ OX (⌈(pe − 1)∆⌉).
As with the parameter test module, one has for any effective integral Cartier Divisor D
the equality τ (X; ∆ + D) = τ (X; ∆) ⊗ OX (−D), which allows one to extend the definition
to the non-effective case as above (see [ST08] for further details). Furthermore, the same
subtle albeit well-known arguments are again required to show these ideals exist [Sch11,
Proposition 3.21] (see also [ST12a]).
Remark 2.38. As before, the preceding definition is non-standard; rather, what we have
just defined is an alternative yet equivalent characterization of the big or non-finitistic
test ideal, commonly denoted in the literature by τb (X, ∆) or τ̃ (X, ∆). However, in many
situations (and conjecturally in general) the big test ideal agrees with the classically defined
or finitistic test ideal. Indeed, these two notions are known to coincide whenever KX + ∆
is Q-Cartier [Tak04, BSTZ10] – the only setting considered in this paper. For this reason,
as well as our belief that the big test ideal is the correct object of study in general, we will
drop the adjective big from the terminology throughout.
Strictly speaking the object τ (ωX ; KX + ∆) is ambiguous as KX is not uniquely determined. Indeed supposing ∆ ≥ 0, as seen from Lemma 2.35, different choices of KX give rise
to different submodules τ (ωX ; KX + ∆) of ωX . However, we in fact have τ (ωX ; KX + ∆) ⊆
ωX (−⌊KX + ∆⌋), so that τ (ωX ; KX + ∆) is identified with a submodule of OX (−⌊∆⌋)
using Convention 2.7. As was the case with the multiplier module, this construction is
independent of the choice of KX .
Lemma 2.39. If X is an F -finite and (X, ∆) is a pair, then τ (ωX ; KX + ∆) = τ (X; ∆).
Proof. Choose a rational section s ∈ ωX determining a canonical divisor KX = KX,s = div s
s7→1
and the embedding ωX ⊆ ωX ⊗ K(X) −−−−→ K(X). We will write ωX = OX (KX ) for the
duration of the proof without further remark. Working locally, it is harmless to assume KX
and ∆ are both effective by Definition 2.36, so that τ (ωX ; KX + ∆) ⊆ ωX (−⌊KX + ∆⌋) =
OX (−⌊∆⌋) gives in particular τ (ωX ; KX + ∆) ⊆ OX ⊆ ωX .
Next, observe for all e > 0 that
H omOX (F∗e ωX (⌈(pe − 1)(KX + ∆)⌉), ωX ) = H omOX (F∗e OX (⌈(pe − 1)∆⌉), OX )
in a very precise sense; they are equal after mapping to the corresponding stalks at the
generic point of X, which are explicitly identified with one another. Said another way, the
image of F∗e OX (⌈(pe −1)∆⌉) ⊆ F∗e ωX (⌈(pe −1)(KX +∆)⌉) under every local homomorphism
φ ∈ H omOX (F∗e ωX (⌈(pe − 1)(KX + ∆)⌉), ωX ) satisfies φ(F∗e OX (⌈(pe − 1)∆⌉)) ⊆ OX , giving
rise to a commutative diagram
F∗e OX
ωX
/
⊆
F∗e OX (⌈(pe − 1)∆⌉)
⊆
φ
⊆
⊆ F∗e ωX (⌈(pe − 1)(KX + ∆)⌉)
⊆
F∗e ωX
φ
/
OX
and uniquely accounting for every local homomorphism in H omOX (F∗e OX (⌈(pe −1)∆⌉), OX ).
The desired conclusion is now an exercise in manipulating definitions. As τ (X; ∆) ⊆ ωX
is preserved under the local homomorphisms in H omOX (F∗e ωX (⌈(pe − 1)(KX + ∆)⌉), ωX ),
we must have τ (ωX ; KX + ∆) ⊆ τ (X; ∆) by the definition of τ (ωX ; KX + ∆). Conversely,
as τ (ωX ; KX + ∆) ⊆ OX is preserved under H omOX (F∗e OX (⌈(pe − 1)∆⌉), OX ), we must
have τ (X; ∆) ⊆ τ (ωX ; KX + ∆) and the statement follows.
2.6. Transformation behavior of test ideals and multiplier ideals. One of the contributions of this paper is a further clarification of the transformation behavior of test and
multiplier ideals. Let us summarize what is known so far for alterations, which can always
be viewed as compositions of proper birational maps and finite dominant maps.
Let us first consider the classical (and transparent) case of the multiplier ideal in characteristic zero [Laz04, 9.5.42]. Essentially by definition, the multiplier ideal of a logQ-Gorenstein pair (X, ∆X ) is well-behaved under a proper birational morphism π : Y −
→X
and satisfies
(2.40)
π∗ J (Y, ∆Y ) = J (X, ∆X ) with ∆Y := π ∗ (KX + ∆X ) − KY
where we have arranged that π∗ KY = KX as usual. If rather π : Y −→ X is a finite dominant
map, one sets KY = π ∗ KX + Ramπ where Ramπ is the ramification divisor, and has the
transformation rule
(2.41)
π∗ J (Y, ∆Y ) ∩ K(X) = J (X, ∆X ) with ∆Y := π ∗ (KX + ∆X ) − KY = π ∗ ∆X − Ramπ .
In Section 8, we further generalize this rule to incorporate the trace map Example 2.2
(2.42)
Trπ (π∗ J (Y, ∆Y )) = J (X, ∆X )
in the process of showing our main theorem in characteristic zero.
In characteristic p > 0, the transformation rule (2.40) for the multiplier ideal under proper
birational maps once again follows immediately from the definition. However, the behavior
of the multiplier ideal for finite maps is more complicated and not fully understood in general
– even for finite separable maps. In Section 8, we will show that both (2.41) and (2.42) hold
for separable finite maps of degree prime to p (and more generally when Trπ (π∗ OY ) = OX ).
However, Examples 3.10, 6.13, and 7.12 in [ST10] together show that neither formula is
valid for arbitrary separable finite maps in general.
In contrast, the last two authors in [ST10] have completely described the behavior of
the test ideal of a pair (X, ∆X ) under arbitrary finite dominant maps π : Y −
→ X. In the
separable case, one again has
Trπ (π∗ τ (Y, ∆Y )) = τ (X, ∆X )
with ∆Y := π ∗ (KX + ∆X ) − KY = π ∗ ∆X − Ramπ .
More generally, when π is not necessarily separable, a similar description holds after reinterpretation of the ramification divisor (via the Grothendieck trace). However, a formula
as simple as (2.40) relating test ideals under a birational map cannot hold. Indeed, if
π: Y −
→ X is a log resolution of (X, ∆X ), then the multiplier and test ideal of (Y, ∆Y )
will agree while those of (X, ∆X ) may not (cf. [Tak04, Theorems 2.13, 3.2]). Nonetheless
in Theorem 6.8 we will give a transformation rule – albeit far more complex – for the test
ideal under alterations in general, so in particular for birational morphisms.
In summary we observe that the test ideal behaves well under finite maps (and may
be computed using either finite maps or alterations by Theorem 4.6), whereas its behavior
under birational maps is much more subtle. On the other hand, the multiplier ideal in
characteristic zero behaves well under finite and birational maps, however finite maps will
not suffice for its computation. The multiplier ideal in positive characteristic is still well
behaved under birational transformations, but its behavior under finite maps is more subtle.
3. F -rationality via alterations
In this section, we will characterize F -rationality and, more generally, the parameter test
submodule in terms of alterations. This is – at the same time – a special case of our Main
Theorem as well as a key ingredient in its proof. The full proof of our Main Theorem in
positive characteristic consists of a reduction to the cases treated here and will follow in the
next section.
The crux of the argument to follow is based on a version of the equational lemma of [HH92]
in the form that is found in [HL07]. In fact, we require a variant with an even stronger
conclusion; namely, that the guaranteed finite extension may be assumed separable. This
generalization follows from a recent result of A. Sannai and A. Singh [SS12].
Lemma 3.1 (equational lemma). Consider a domain R with characteristic p > 0. Let K
be the fraction field of R, K̄ an algebraic closure of K, and I an ideal of R. Suppose i ≥ 0
and α ∈ HIi (R) is such that α, αp , αp² , . . . belong to a finitely generated R-submodule of
HIi (R). Then there exists a separable R-subalgebra R′ of K̄ that is a finite R-module such
that the induced map HIi (R) −→ HIi (R′ ) sends α to zero.
Proof. The statement of the equational lemma in [HL07, Lemma 2.2] is the same as above
without the desired separability. Applying this weaker version we therefore have a finite
extension R ⊆ R′ such that HIi (R) −
→ HIi (R′ ) maps α to zero. Let Rs be the separable
′
closure of R in R , that is all elements of R′ that are separable over R. The extension
Rs ⊆ R′ is then purely inseparable, i.e. some power of the Frobenius has the property that
F i (R′ ) ⊆ Rs . Applying the functor HIi ( ) yields the diagram
HIi (R)
/ H i (Rs )
I
/ H i (R′ )
I
tt
t
t
F i ttt
ztt F i
HIi (Rs )
Denoting the image of α in HIi (Rs ) by a, this element is mapped to zero HIi (R′ ), hence,
under the Frobenius F i it is mapped also to zero in HIi (Rs ). But this shows that F i (a) = 0
in HIi (Rs ) by the above diagram. Now [SS12, Theorem 1.3(1)] states that there is a module
finite separable (even Galois with solvable Galois group) extension Rs ⊆ R′′ such that a is
mapped to zero in HIi (R′′ ). But this means that the finite separable extension R ⊆ R′′ is
as required: the image of α ∈ HIi (R) in HIi (R′′ ) is zero.
The main result of this section, immediately below, is a straightforward application of
the method of C. Huneke and G. Lyubeznik in [HL07]. Since making this observation, we
have been informed that a Matlis dual version of the theorem below has also been obtained
by M. Hochster and Y. Yao (in a non-public preprint [HY11]).
Theorem 3.2. Suppose X is an integral F -finite scheme.
(a) For all proper dominant maps π : Y −→ X with Y integral, the image of the trace
map (2.12) contains the parameter test module, i.e.
τ (ωX ) ⊆ Image(hdim Y −dim X Rπ∗ ωY −−Trπ−−→ ωX ).
(b) There exists a finite separable map π : Y −→ X with Y integral such that the image
of the trace map equals the parameter test module, i.e.
τ (ωX ) = Image(hdim Y −dim X Rπ∗ ωY −−Trπ−−→ ωX ).
Remark 3.3. In the above result, it is possible to work with equidimensional reduced (rather
than integral) schemes of finite type over a field at the expense of removing “separable”
from the conclusion in (b).
An immediate Corollary of this statement is a characterization of the test submodule. Recall that an alteration is a generically finite proper dominant morphism of integral schemes.
Corollary 3.4. Suppose X is an integral F -finite scheme. Then
τ (ωX ) = ⋂_{π : Y −→ X} Image(hdim Y −dim X Rπ∗ ωY −−Trπ−−→ ωX ),
where π ranges over all of the maps from an integral scheme Y to X that are either:
◦ (separable) finite dominant maps, or
◦ (separable) alterations, or
◦ (separable) proper dominant maps.
Additionally, if X is a variety over a perfect field, then we may allow π to range over
◦ all regular separable alterations to X, or
◦ all separable proper dominant maps with Y regular.
Furthermore, in this case, there always exists a separable regular alteration π : Y −→ X such
that Image(π∗ ωY −−Trπ−−→ ωX ) is equal to the parameter test submodule of X.
Proof. This follows immediately from Theorem 3.2 and from the existence of regular alterations [dJ96].
Proof of Theorem 3.2 (a). The statement follows immediately from Propositions 2.21 and
2.13 as well as the definition of τ (ωX ).
The proof of (b) follows closely the strategy of [HL07]; note that a local version of the
statement is also related to the result of K. Smith that “plus closure equals tight closure
for parameter ideals” [Smi94]. The proof comes down to Noetherian induction once we
establish the following lemma.
Lemma 3.5. Let R ⊆ S be a module finite inclusion of domains and denote the image of
the trace map by JS = Image(ωS −→ ωR ). If τ (ωR ) ( JS , then there is a separable finite
extension of domains S ⊆ S ′ such that the support of JS ′ /τ (ωR ) is strictly contained in the
support of JS /τ (ωR ), where JS ′ = Image(ωS ′ −→ ωR ).
Proof. Choose η ∈ Spec R to be the generic point of a component of the support of JS /τ (ωR )
and set d = dim Rη . By construction (JS /τ (ωR ))η = JSη /τ (ωRη ) has finite length, and hence
so also must its Matlis dual
(JS /τ (ωR ))∨
η = Hom(JSη /τ (ωRη ), E(Rη /η)) .
By Matlis duality applied to the sequence ωSη −
→
→(JSη /τ (ωRη )) ֒−
→ ωRη /τ (ωRη ) one obd (S ) is identified with the image of the tight closure of zero
serves that (JS /τ (ωR ))∨
⊆
H
η
η
η
0∗H d (Rη ) = (ωRη /τ (ωRη ))∨ in Hηd (Sη ). By Proposition 2.21 we have for ΦR : F∗ ωR −
→ ωR
η
that ΦR (JS ) ⊆ JS . Hence the dual of JS /τ (ωR ) is stable under the action of the Frobenius
∨
on Hηd (Sη ). i.e. F ((JS /τ (ωR ))∨
η ) ⊆ (JS /τ (ωR ))η (phrased differently: the tight closure
of zero is Frobenius stable, hence so is its image). This implies that for any element
p
p2
∨
α ∈ (JS /τ (ωR ))∨
η the powers α, α , α , . . . must also lie in the finite length (JSη /τ (ωRη )) .
∨
Applying Lemma 3.1 to α ∈ (JSη /τ (ωRη )) repeatedly (e.g. for a finite set of generators) we obtain a separable integral extension Sη ⊆ T such that Hηd (Sη ) −
→ Hηd (T ) maps
∨
′
(JSη /τ (ωRη )) to zero. By taking S to be the normalization of S in the total field of fractions of T we see that T = Sη′ and that the finite extension R ⊆ S ′ is separable. Translating
this back via Matlis duality this means that the map ωSη′ −
→ ωRη sends JSη′ into τ (ωRη ). In
particular η 6∈ Supp(JS ′ /τ (ωR )).
Proof of Theorem 3.2 (b). Without loss of generality, we may assume that X = Spec R is
affine. Starting with the identity R = S0 we successively produce, using Lemma 3.5, a
sequence of separable finite extensions R = S0 ⊆ S1 ⊆ S2 ⊆ S3 . . . such that the support
of JSi+1 /τ (ωR ) is strictly smaller than the support of JSi /τ (ωR ) until JSi = τ (ωR ) by
Noetherian induction.
The following important corollary (which can also be proven directly from the equational
lemma without reference to the above results) should be viewed in the context of the
definition of pseudo-rationality (see Section 2.4), as well as the Kempf-criterion for rational
singularity [KKMSD73, p. 50] in characteristic zero.
Corollary 3.6. For an F -finite Cohen-Macaulay domain R the following are equivalent.
(a) R is F -rational, i.e. there is no non-trivial submodule M ⊆ ωR which is stable under
ΦR : F∗ ωR −→ ωR .
(b) For all finite extensions R −→ S (which may be taken to be separable if desired) the
induced map ωS −→ ωR is surjective.
(c) For all alterations π : Y −→ X = Spec R (π may be taken to be separable and or
regular if R is of finite type over a perfect field) the induced map π∗ ωY −→ ωX is
surjective.
In fact, utilizing local duality and [HL07] once again to obtain a further finite cover which
annihilates the local cohomology modules below the dimension, we obtain the following
characterization of F -rationality without the Cohen-Macaulay hypothesis.
Corollary 3.7. For an F -finite domain R, the following are equivalent:
(a) R is F -rational.
F -SINGULARITIES VIA ALTERATIONS
19
(b) For each integer i ∈ Z, and every (separable) finite extension of rings R −→ S, the
q
q
induced map hi ωS −→ hi ωR is surjective.
(c) For each integer i ∈ Z and every (separable regular, if R is of finite type over a
q
q
perfect field) alteration π : Y −→ X = Spec R , the induced map hi Rπ∗ ωY −→ hi ωX
is surjective.
Remark 3.8. The main result of the paper [HL07] which inspired our proof is that for a local
ring (R, m) of dimension d that is a homomorphic image of a Gorenstein ring, there is a
module finite extension R ⊆ S such that the induced map Hmi (R) −
→ Hmi (S) is zero for i < d.
q
q
A local dual statement to this is that the induced map on dualizing complexes ωS −
→ ωR
q
q
is zero on cohomology hi (ωS ) −
→ hi (ωR ) for i 6= − dim R. What we accomplish here is
that we clarify the case i = − dim R. With d = dim R we just showed that one can also
q
q
achieve that the image h−d (ωS ) −
→ h−d (ωR ) ∼
= ωR is the parameter test submodule τ (ωR ).
Of course, the dual statement thereof is: the tight closure of zero 0∗H d (R) is mapped to zero
m
→ Hmd (S). It is exactly this statement, and further generalizations,
under the map Hmd (R) −
which are first and independently by M. Hochster and Y. Yao in [HY11].
4. Test ideals via alterations
In this section, we explore the behavior of test ideals under proper dominant maps and
prove our main theorem in characteristic p > 0 in full generality. First we show that various
images are compatible with the Φ from Example 2.4, and so they contain the test ideal.
Proposition 4.1. Suppose that f : Y −→ X is a proper dominant generically finite map of
normal F -finite varieties and that (X, ∆) is a log-Q-Gorenstein pair. Then the test ideal is
contained in the image of the trace map, i.e.
Trf
∗
τ (X; ∆) ⊆ Image f∗ ωY (−⌊f (KX + ∆)⌋) −−−→ ωX (−⌊KX + ∆⌋) ⊆ K(X)
where Trf is the map induced by trace as in Proposition 2.8.
g
h
Proof. This follows easily by Stein factorization. Factor f as Y −
→ Z −
→ X where g is
birational and h is finite. Then it follows from [ST10, Theorem 6.25] (cf. [ST12b, Lemma
4.4(a)]) that
Trh (h∗ τ (Z; −KZ + h∗ (KX + ∆))) = τ (X; ∆).
On the other hand, it follows from the argument that the test ideal is contained in the
multiplier ideal (since the test ideal is the unique smallest ideal satisfying a certain property),
that
Trg (g∗ ωY (−⌊f ∗ (KX + ∆)⌋)) ⊇ τ (Z; −KZ + h∗ (KX + ∆)).
See [Tak04] or see [ST12a, Theorem 4.17] for a sketch of a simpler version of this argument
(or see immediately below for a generalization).
This completes the proof since Trh ◦h∗ Trg = Trf .
We also prove a more general statement whose proof also partially generalizes Proposition 2.21,
and which uses heavily the material from the preliminary section on Duality, Section 2.3.
The reader who has so far avoided that section, can skip Proposition 4.2 and rely on
Proposition 4.1 instead.
Proposition 4.2. Suppose that f : Y −→ X is a proper dominant map of normal integral
F -finite schemes and that (X, ∆) is a log-Q-Gorenstein pair. Then the test ideal is contained
F -SINGULARITIES VIA ALTERATIONS
20
in the image of the trace map, i.e.
Trf
dim Y −dim X
∗
τ (X; ∆) ⊆ Image h
Rf∗ ωY (−⌊f (KX + ∆)⌋) −−−→ ωX (−⌊KX + ∆⌋) ⊆ K(X)
where Trf is the map induced by trace as in Proposition 2.18. Similarly, if additionally
f ∗ (KX + ∆) is a Cartier divisor, then
Trf q
q ∗
− dim X
τ (X; ∆) ⊆ Image h
Rf∗ ωY (f (KX + ∆)) −−−−→ ωX (−⌊KX + ∆⌋) ⊆ K(X)
where again Trf q is the map induced by trace as in Proposition 2.18.
Proof. The statement is local (if it fails to hold, then it fails to hold locally), so we assume
that X is the spectrum of a local ring. Without loss of generality, as in Definition 2.37, we
may assume that ∆ ≥ 0. Let n = dim Y − dim X.
→ OX such that ∆φ ≥ ∆, where ∆φ is defined as in
Fix an OX -linear map φ : F∗e OX −
[Sch09, Theorem 3.11, 3.13] or [ST12a, Subsection 4.4]. Recall that (pe − 1)(KX + ∆φ ) ∼ 0,
so in particular we may write (pe − 1)(KX + ∆) = div c for some c ∈ K(X). For the first
statement, it is sufficient to show that
φ F∗e Trf (hn Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋)) ⊆ Trf hn Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋)
and by Lemma 2.20, we may assume φ( ) = ΦeX (F∗e c ·
φ(F∗e
Trf ( )) =
ΦeX
((F∗e c)
· F∗e Trf (
)) =
ΦeX (F∗e Trf (c
). Now we have
·
)) = Trf (hn Rf∗ ΦeY (F∗e c ·
))
where we have used that Trf is OX -linear, and that ΦeX (F∗e Trf ( )) = Trf (hn Rf∗ ΦeY (F∗e ))
as shown in (2.23). Thus, it is enough to show
ΦeY F∗e (c · ωY (−⌊f ∗ (KX + ∆)⌋)) ⊆ ωY (−⌊f ∗ (KX + ∆)⌋) .
Since
−⌊f ∗ (KX + ∆)⌋ + (1 − pe )f ∗ (KX + ∆φ ) = −⌊f ∗ (pe KX + (pe − 1)∆φ + ∆)⌋
and ∆φ ≥ ∆, we have
c · ωY (−⌊f ∗ (KX + ∆)⌋) ⊆ ωY (−⌊pe f ∗ (KX + ∆)⌋) = ωY (−⌊(F e )∗ f ∗ (KX + ∆)⌋) .
But then, according to Proposition 2.18, we have
ΦeY (F∗e ωY (−⌊(F e )∗ f ∗ (KX + ∆)⌋)) ⊆ ωY (−⌊f ∗ (KX + ∆)⌋)
which completes the proof of the first statement. For the second statement, simply notice
that Image(Trf q ) ⊇ Image(Trf ).
Lemma 4.3. Suppose that (X = Spec R, Γ) is a pair such that Γ = t div(g) for some g ∈ R
and t ∈ Q≥0 . Fix c ∈ R such that Rc is regular and that Supp(Γ) = V (g) ⊆ V (c). Then
there exists a power of cN of c such that
X
e
τ (ωX ; Γ) =
Φe (F∗e cN g ⌈t(p −1)⌉ ωR ).
e≥0
where
ΦeR
:
F∗e ωR
−→ ωR denotes the trace of the e-iterated Frobenius, see Section 2.3.
Proof. By the usual theory of test elements (cf. [Sch11, proof of Theorem 3.18]), we have
for some power cn of c that
XX
τ (ωX ; Γ) =
φ(F∗e cn ωR )
e≥0 φ
F -SINGULARITIES VIA ALTERATIONS
21
where the inner sum ranges over all φ ∈ HomR (F∗e ωR (⌈(pe − 1)Γ⌉), ωR ). Furthermore, note
it is harmless to replace n by any larger integer n + k. The reason the statement does not
follow immediately is that, as div(g) may not be reduced, we may have ⌈t(pe − 1)⌉ div(g) ≥
e
⌈t(pe − 1) div(g)⌉. Choose k such that div(ck ) + (pe − 1)Γ ≥ div(g⌈t(p −1)⌉ ) for all e ≥ 0,
and set N = n + k.
e
Now, the map ψ( ) = ΦeR (g⌈t(p −1)⌉ · ) ∈ HomR (F∗e ωR (⌈(pe − 1)Γ⌉), ωR ) appears
in the sum above. This implies the containment ⊇ for our desired equality. Furthermore, for any φ ∈ HomR (F∗e ωR (⌈(pe − 1)Γ⌉), ωR ), it is clear then that φ(F∗e cN ωR ) ⊆
e
F∗e Φe (F∗e cn g⌈(p −1)⌉ ωR ). Thus, we have the containment ⊆ as well.
We now describe the transformation rule for the parameter test module under finite maps.
Proposition 4.4 (cf. [ST10]). Given a finite map f : Y −→ X of normal F -finite integral
schemes and a Q-Cartier Q-divisor Γ on X, we have
Trf f∗ τ (ωY ; f ∗ Γ) = τ (ωX ; Γ) .
Proof. Without loss of generality, we may assume that X = Spec R and Y = Spec S are
affine and that Γ = t div(g) for some g ∈ R.
→ ωS be the corresponding traces of Frobenius
→ ωR and ΦeS : F∗e ωS −
Let ΦeR : F∗e ωR −
and set JS = Image(Trf : f∗ ωS −
→ ωR ) ⊆ ωR . Now, Lemma 4.3 above implies that there
exists an element c ∈ R such that both
X
e
τ (ωR ; Γ) =
ΦeR (F∗e cg⌈t(p −1)⌉ JS )
e≥0
τ (ωS ; f ∗ Γ) =
X
ΦeS (F∗e cg⌈t(p
e −1)⌉
ωS ) .
e≥0
The idea for the remainder of the proof is to apply Trf ( ) to τ (ωS , f ∗ Γ), noting that
Trf (f∗ ΦeS ( )) = ΦeR (F∗e Trf ( )) since trace is compatible with composition (shown precisely in (2.23)). Therefore,
X
e
Trf (f∗ τ (ωS ; f ∗ Γ)) = Trf f∗
ΦeS (F∗e cg⌈t(p −1)⌉ ωS )
=
X
e≥0
=
X
e≥0
=
X
e≥0
Trf f∗ ΦeS (F∗e cg⌈t(p
e −1)⌉
f∗ ωS )
e
ΦeR F∗e Trf (cg⌈t(p −1)⌉ f∗ ωS )
ΦeR (F∗e cg⌈t(p
e −1)⌉
JS )
e≥0
= τ (ωR ; Γ)
which completes the proof.
To reduce the main theorem of this paper to Theorem 3.2, we need a variant of the cyclic
covering construction, cf. [TW92] or [KM98, Section 2.4].
Lemma 4.5. Suppose that X is a normal integral scheme and Γ is a Q-Cartier Q-divisor
on X. Then there exists a finite separable map g : W −→ X from a normal integral scheme
W such that g ∗ Γ is a Cartier divisor.
F -SINGULARITIES VIA ALTERATIONS
22
Proof. We may assume that X = Spec R is affine and that nΓ = divX (f ) for some non-zero
non-unit f ∈ R. We view f ∈ K = K(R) and suppose that α is a root of the polynomial
xn + f x + f in some separable finite field extension L of K. Let S be the normalization of R
inside L so that we have a module-finite inclusion R ⊆ S. Set π : Y = Spec S −
→ Spec R =
n = −f (α + 1) we
X. Further observe
that
S
contains
α
since
α
is
integral
over
R.
Since
α
p
have α, α + 1 ∈ hα + 1i, so that α + 1 is a unit. Thus, n divY (α) = divY (f ) = π ∗ nΓ, and
so π ∗ Γ = divY (α) is Cartier as desired.
Theorem 4.6. Given a normal integral F -finite scheme X with a Q-divisor ∆ such that
KX + ∆ is Q-Cartier, there exists a finite separable map f : Y −→ X from a normal F -finite
integral scheme Y such that
Trf
∗
(4.7)
τ (X; ∆) = Image f∗ ωY (−⌊f (KX + ∆)⌋) −−→ K(X) .
Alternatively, if X is of finite type over a F -finite (respectively perfect) field, one may take
f : Y −→ X to be a regular (respectively separable) alteration.
Before proving this theorem, we state several corollaries.
Corollary 4.8. Assume that X is a normal variety over an F -finite field k. If (X, ∆) is
a log-Q-Gorenstein pair, then using the images of the trace map as in Proposition 2.18, we
have
!
\
Tr
f
(4.9) τ (X; ∆) =
Image hdim Y −dim X Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋) −−→ K(X)
f :Y −
→X
where f : Y −→ X ranges over all maps from a normal variety Y that are either:
(a) finite dominant maps
(b) finite separable dominant maps
(c) alterations (i.e. generically finite proper dominant maps)
(d) regular alterations
(e) proper dominant maps
(f) proper dominant maps from regular schemes
or, if additionally k is perfect,
(g) regular separable alterations.
Furthermore, in all cases the intersection stabilizes (i.e. is equal to one of its members).
Corollary 4.10. Assume X is a normal integral F -finite scheme with a Q-divisor ∆ such
that KX + ∆ is Q-Cartier. Then using the images of the trace map as in Proposition 2.18,
we have
!
\
q
Tr
f
q
(4.11)
τ (X; ∆) =
Image h− dim X Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋) −−−→ K(X)
f :Y −
→X
where the intersection runs over all proper dominant maps f : Y −→ X from a normal integral scheme Y such that f ∗ (KX + ∆) is Cartier. Furthermore, once again, the intersection
stabilizes.
Proof of Theorem 4.6. We make use the identification of τ (X; ∆) = τ (ωX ; KX + ∆) from
Lemma 2.39 and will prove the statement for the latter. For (4.7), first take a separable finite
map f ′ : X ′ −
→ X such that Γ := f ′∗ (KX +∆) is Cartier by Lemma 4.5. By Proposition 4.4,
Trf ′ f∗′ τ (ωX ′ ; Γ) = τ (ωX ; KX + ∆) = τ (X; ∆).
F -SINGULARITIES VIA ALTERATIONS
23
Now, using Theorem 3.2 we may fix a separable finite map h : Y −
→ X ′ such that we
→ ωX ′ ). The projection formula then gives the equality
have τ (ωX ′ ) = Image(Trh : h∗ ωY −
→ ωX ′ ), and (4.7) now follows after applying Trf ′ .
τ (ωX ′ ; Γ) = Image(Trh (h∗ ωY (−h∗ Γ) −
For the remaining statement when X, if we are given a composition of alterations f ◦ g :
g
f
Z−
→Y −
→ X, it follows that
Trf ◦g
Image(f∗ g∗ ωZ (−⌊g∗ f ∗ (KX + ∆)⌋) −−−−→ K(X))
Trf
⊆ Image(f∗ ωY (−⌊f ∗ (KX + ∆)⌋) −−−→ K(X)) .
Thus, the second statement immediately follows from the first by taking a further regular
(separable) alteration using [dJ96, Theorem 4.1].
Proof of Corollary 4.8. For equation (4.9), we have the containment τ (X; ∆) ⊆ from Proposition 4.1
or Proposition 4.2. The result then follows from equation (4.7).
Proof of Corollary 4.10. Finally, for equation (4.11) we still have the containment τ (X; ∆) ⊆
from Proposition 4.2. On the other hand, if Y has the same dimension as X, then it is readily seen from the spectral sequence argument used in the proof of Proposition 2.13 that
q
Trf q h− dim X Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋) = Trf hdim Y −dim X Rf∗ ωY (−⌊f ∗ (KX + ∆)⌋) .
so that equality holds in (4.11) for f : Y −
→ X as in Theorem 4.6.
Remark 4.12. If R is an F -finite Q-Gorenstein splinter ring (i.e. any module-finite extension
R ⊆ S splits as a map of R-modules), then Corollary 4.8 above gives that τ (X) = OX ,
implying that R is strongly F -regular (since by τ (X) we always mean the big test ideal).
This recovers the main result of [Sin99].
5. Nadel-type vanishing up to finite maps
Among the most sorely missed tools in positive characteristic birational algebraic geometry (in comparison to characteristic zero) are powerful cohomology vanishing theorems.
Strong additional assumptions (e.g. lifting to the second Witt vectors) are required to
recover the most basic version of Kodaira vanishing, and even under similar assumptions
the most powerful variants (e.g. Kawamata-Viehweg or Nadel-type vanishing) cannot be
proven. By applying the results and ideas of B. Bhatt’s dissertation (see also [Bha12]), we
derive here variants of Nadel-type vanishing theorems. These are strictly weaker than what
one would hope for as we only obtain the desired vanishing after a finite covering. Notably
however, we need not require a W2 lifting hypothesis.
Before continuing, we recall the following well known Lemma.
Lemma 5.1 (cf. [Bha10]). Suppose that X is a Noetherian scheme and that we have a
q
q
q
≥0
map of objects f : A −→ B within Dcoh
(X). Then f factors through h0 B if and only if
τ >0 (f ) is the zero map.
Proof. Certainly if f factors through h0 B , then τ >0 (f ) = 0 since τ >0 (h0 B ) = 0. For
the converse direction, suppose τ >0 (f ) = 0. Consider the diagram
q
h0 A
q
/ Aq
q
q
q
/ τ >0 A q
+1
/
0
h0 B
/B
/ τ >0 B q
+1
/
F -SINGULARITIES VIA ALTERATIONS
Since A −
→ B
≥0
Dcoh
(X)
q
q
−
→ τ >0 B
24
is zero, we also have the following diagram of objects in
q
A
id
q
h0 B
q
+1
/ Aq
/0
/Bq
/ τ >0 B q
.
/
+1
/
We begin with a lemma which can be viewed as a kind of “Grauert-Riemenschneider
vanishing [GR70] up to finite maps.” Using the notation from below, note that in the
special case where W is smooth (or a tame quotient singularity) it has been shown only
recently by A. Chatzistamatiou and K. Rülling [CR11] that hi Rπ∗ ωX is zero for i > 0.
Lemma 5.2. Suppose that π : X −→ W is an alteration between integral schemes of characteristic p > 0. Then there exists a finite map g : U −→ X such that the trace map
(5.3)
τ >− dim X R(π ◦ g)∗ ωU −→ τ >− dim X Rπ∗ ωW
q
q
is zero and furthermore that the trace map
(5.4)
τ >0 R(π ◦ g)∗ ωU −→ τ >0 Rπ∗ ωW
is zero. As a consequence hi R(π ◦ g)∗ ωU −→ hi Rπ∗ ωX is zero for all i > 0.
Proof. It is harmless to assume that π is birational (simply take the normalization of W
inside the fraction field of X) and also that W is affine.
q
q
→ ωX factors through
First, choose a finite cover a : X ′ −
→ X such that a : ωX ′ −
ωX [dim X] by [Bha10, Proposition 5.4.2]. Set X ′ −
→ W′ −
→ W to be the Stein factorization
′
of π ◦ a (thus ℓ : W −
→ W is finite). By [Bha10, Theorem 5.0.1], there exists a finite cover
→ R(π ◦ a ◦ b)∗ OX factors through
→ X ′ from a normal X such that R(π ◦ a)∗ OX ′ −
b:X −
→ W to be the Stein-factorization of π ◦ a ◦ b (thus m : W −
→ W′
(π ◦ a ◦ b)∗ OX . Set X −
f−
is finite and (π ◦ a ◦ b)∗ OX = (l ◦ m)∗ OW ). Choose a further cover n : W
→ W such that
q
q
f
→ τ >− dim X (ωW ) is zero by [Bha10, Proposition 5.4.2]. By making W
τ >− dim X (n∗ ω f ) −
W
e−
larger if necessary, we additionally assume that c : X
→ X, the normalization of X in the
f
fraction field of W , satisfies the condition that R(π ◦ a ◦ b)∗ OX −
→ R(π ◦ a ◦ b ◦ c)∗ OXe factors
again
using
[Bha10,
Theorem 5.0.1].
through (π ◦ a ◦ b ◦ c)∗ OXe ∼
= (l ◦ m ◦ n)∗ OW
f
e
X
c
b
/ X′
a
/X
π
f
W
/X
n
/W
m
/ W′
ℓ
/W
Putting this all together, we have the following factorization of Rπ∗ OX −
→ Rπ∗ OXe :
→ (π ◦ a ◦ b)∗ OX
Rπ∗ OX −
→ R(π ◦ a)∗ OX ′ −
−
→ R(π ◦ a ◦ b)∗ OX −
→ R(π ◦ a ◦ b ◦ c)∗ OXe .
→ (π ◦ a ◦ b ◦ c)∗ OXe −
We now note that the term R(π ◦ a ◦ b)∗ OX in the factorization can be removed yielding:
→ (π ◦ a ◦ b)∗ OX −
Rπ∗ OX −
→ R(π ◦ a)∗ OX ′ −
→ R(π ◦ a ◦ b ◦ c)∗ OXe .
→ (π ◦ a ◦ b ◦ c)∗ OXe −
which by factoring along the lower part of the above diagram yields
→ (ℓ ◦ m ◦ n)∗ OW
→ R(π ◦ a ◦ b ◦ c)∗ OXe .
→ (ℓ ◦ m)∗ OW −
Rπ∗ OX −
→ R(π ◦ a)∗ OX ′ −
f −
F -SINGULARITIES VIA ALTERATIONS
25
Now we apply the functor RH omX ( , ωW ) to this factorization and obtain:
q
α
− (ℓ ◦ m)∗ ωW ←−− (ℓ ◦ m ◦ n)∗ ωW
Rπ∗ ωX ←
− R(π ◦ a)∗ ωX ′ ←
− R(π ◦ a ◦ b ◦ c)∗ ωXe
f ←
q
q
q
q
q
e the first statement
Note we don’t need R on ℓ, m and n since they are finite. Setting U = X,
of the theorem, (5.3) now follows since τ >− dim W (α) = 0 based on our choice of n. However,
Rπ∗ ωX ←
− R(π ◦ a)∗ ωX ′
q
q
factors through Rπ∗ ωX [dim X]. Precomposing with the natural map R(π ◦ a ◦ b ◦ c)∗ ω e ←
X
R(π ◦ a ◦ b ◦ c)∗ ωXe [dim X] and taking cohomology yields (5.4).
q
Theorem 5.5. Suppose that π : X −→ S is a proper morphism of F -finite integral schemes
of characteristic p > 0 with X normal. Further suppose that L is a Cartier divisor on
X and that ∆ is a Q-divisor on X such that L − (KX + ∆) is a π-big and π-semi-ample
Q-Cartier Q-divisor on X. Then there exists a finite surjective map f : Y −→ X from a
normal integral F -finite scheme Y such that:
(a) The natural trace map
f∗ OY (⌈KY + f ∗ (L − (KX + ∆))⌉) −→ OX (⌈KX + L − (KX + ∆)⌉)
has image τ (X; ∆) ⊗ OX (L).
(b) The induced map on cohomology
hi R(π ◦ f )∗ OY (⌈KY + f ∗ (L − (KX + ∆))⌉) −→ hi Rπ∗ τ (X; ∆) ⊗ OX (L)
is zero for all i > 0.
Proof (cf. [Bha10]). Certainly by Theorem 4.6 we can assume that (a) holds for some surjective finite map f ′ : Y ′ −
→ X (and every further finite map). On Y ′ , we may also as′∗
sume that f ∆ is integral and f ′∗ (KX + ∆) is Cartier, and we may further assume that
→W
OY ′ (f ′∗ (L − (KX + ∆))) is the pull-back of a line bundle L via some map π : Y ′ −
over S such that L is ample over S. Note, f ′∗ (L − (KX + ∆)) is still big so we may assume
that W has the same dimension as Y ′ (and thus also the same dimension as X).
Y′
f′
/X
π
W
ρ
/S
Since we only need now to prove (b), replacing X by Y ′ we may assume that KX + ∆ is
Cartier and that OX (L − (KX + ∆)) is the pull-back of some relatively ample line bundle
L via an alteration g : X −
→ W over S with structural map ρ : W −
→ S.
Y
h
/ X := Y ′
❍❍
❍❍ π
❍❍
g
❍❍
❍❍
$/
W
ρ
S
By Lemma 5.2, there exists a finite cover h : Y −
→ X such that R(g ◦ h)∗ ωY −
→ Rg∗ ωX
factors through g∗ ωX . Choose n0 > 0 such that hi Rρ∗ (g∗ ωX ⊗ L n ) = 0 for all i > 0 and
all n > n0 (by Serre vanishing). Also choose an integer e > 0 such that pe > n0 . Consider
F -SINGULARITIES VIA ALTERATIONS
now Y
ρ
h
/X
Fe
g
/W
Fe
h
/W
ρ
26
g
h
/ S which can also be expressed as Y −−
→ X −−→
g
Fe
ρ
W −−→ S −−−→ S and Y −−→ X −−→ W −−−→ W −−→ S. Finally consider the factorization:
e
hi R(ρ ◦ F e ◦ g ◦ h)∗ (ωY ⊗ h∗ g∗ (F e )∗ L ) = hi R(ρ ◦ F e )∗ L p ⊗ R(g ◦ h)∗ ωY
e
−
→ F∗e hi R(ρ)∗ (L p ⊗ g∗ ωX )
e
e
−
→ F∗e hi R(ρ)∗ (L p ⊗ Rg∗ ωX ) = hi (Rρ∗ )F∗e (L p ⊗ Rg∗ ωX )
−
→ hi ((Rρ∗ )(L ⊗ Rg∗ F∗e ωX ))
−
→ hi ((Rρ∗ )(L ⊗ Rg∗ ωX )) = hi Rπ∗ (g∗ L ⊗ ωX ) .
This map is zero since the second line is zero by construction.
Corollary 5.6. Let X be a projective variety over an F -finite field k. Suppose that L is
a Cartier divisor on X, and that ∆ is a Q-divisor such that L − (KX + ∆) is a big and
semi-ample Q-Cartier Q-divisor. Then there exists a finite surjective map f : Y −→ X from
a normal variety Y such that:
(a) The natural trace map
f∗ OY (⌈KY + f ∗ (L − (KX + ∆)⌉) −→ OX (⌈KX + L − (KX + ∆)⌉)
has image τ (X; ∆) ⊗ OX (L).
(b) The induced map on cohomology
H i Y, OY (⌈KY + f ∗ (L − (KX + ∆))⌉) −→ H i X, τ (X; ∆) ⊗ OX (L)
is zero for all i > 0.
6. Transformation rules for test ideals under alterations
The fact that the test ideal can be computed via alterations suggests a transformation
rule for test ideals under alterations. We derive this transformation rule in this section. We
first state a definition.
Definition 6.1. Suppose that X is a normal variety over a field k and Γ is a Q-Cartier
Q-divisor. For example, one might take Γ = L − (KX + ∆) where L is a Cartier divisor and
KX + ∆ is Q-Cartier. We define
\
Trf
T 0 (X, Γ) :=
Image H 0 (Y, OY ⌈KY + f ∗ Γ⌉) −−−→ H 0 X, OX (⌈KX + Γ⌉)
f: Y−
→X
where f runs over all finite dominant maps f : Y −
→ X such that Y is normal and equidimensional.
Alternately, if X is any (not necessarily normal or even reduced) F -finite d-dimensional
equidimensional scheme of finite type over a field k and L is any line bundle on X, then
we define
\
Trf
T 0 (X, L ) :=
Image H 0 Y, ωY ⊗ f ∗ L −−−→ H 0 X, ωX ⊗ L
f: Y−
→X
where f runs over all finite dominant maps f : Y −
→ X such that Y is normal and equidimensional of dimension d. In both cases the maps are induced by the trace map as described
in Proposition 2.18.
Remark 6.2. We expect that the reader has noticed the upper-script 0 in the definition of
T 0 (X, Γ). We include this for two reasons:
F -SINGULARITIES VIA ALTERATIONS
27
(i) It serves to remind the reader that T 0 (X, Γ) is a submodule of H 0 (X, OX (⌈KX + Γ⌉)).
(ii) In the future, it might be reasonable to extend this definition to higher cohomology
groups. For example, Theorem 5.5 is a vanishing theorem for appropriately defined
higher T i (X, Γ).
First we make a simple observation:
Lemma 6.3. For any finite dominant map f : Y −→ X between proper normal varieties
over a field k with Γ a Q-Cartier Q-divisor on X, then T 0 (Y, f ∗ Γ) is sent onto T 0 (X, Γ)
via the trace map
β : H 0 Y, OY (⌈KY + f ∗ Γ⌉) −→ H 0 X, OX (⌈KX + Γ)⌉) .
Proof. If Y is proper over a field k, then H 0 Y, OY (⌈KY + f ∗ Γ⌉) is a finite dimensional
vector space, and so T 0 (Y, f ∗ Γ) is the image of a single map
→ H 0 Y, OY (⌈KY + f ∗ Γ⌉) ,
H 0 Y ′ , OY ′ (⌈KY ′ + g∗ f ∗ Γ⌉) −
0 X, O (⌈K + Γ⌉) yields
for some finite cover g : Y ′ −
→ Y . Composing
with
the
map
to
H
X
X
the inclusion T 0 (X, Γ) ⊆ β T 0 (Y, f ∗ Γ) .
For the other inclusion, we simply notice that given any finite dominant map h : W −
→ X,
we can find a finite dominant map U −
→ X which factors through both h and f .
Lemma 6.4. Suppose that X is as in Definition 6.1. Observe that H 0 (X, ωX ⊗ L ) ∼
=
q
H0 (X, ωX [− dim X] ⊗ L ). Furthermore, T 0 (X, ωX ⊗ L ) is:
\
Trf q
q
q
0
∗
0
Image H (Y, ωY [− dim X] ⊗ f L ) −−−−→ H (X, ωX [− dim X] ⊗ L ) .
f :Y −
→X
where the intersection runs over all finite dominant maps f : Y −→ X such that Y is normal
and equidimensional.
Proof. First note that the isomorphism H 0 (X, ωX ⊗ L ) ∼
= H0 (X, ωX [− dim X] ⊗ L ) follows
q
from analyzing the spectral sequence computing H0 (X, ωX [− dim X] ⊗ L ). The second
statement then follows immediately.
q
Mimicking Lemma 6.3, we also obtain:
Lemma 6.5. For any finite dominant map f : Y −→ X between proper equidimensional
schemes of finite type over a field k with L a line bundle on X, then T 0 (Y, ωY ⊗ f ∗ L ) is
sent onto T 0 (X, ωX ) via the trace map
q
q
H0 Y, ωY [− dim X] ⊗ f ∗ L −→ H0 X, ωX [− dim X] ⊗ L .
Proof. The proof is identical to the proof of Lemma 6.3.
T 0 (X, L
By Theorem 4.6, if X = Spec R is affine and L = 0, then
− (KX + ∆)) is just
the global sections of τ (X; ∆). Inspired by this, we demonstrate that we may also use
alterations in order to compute T 0 (X, L − (KX + ∆)).
First, suppose that f : Y −
→ X is an alteration and recall from Proposition 2.18 that we
have a map
Trf : f∗ ωY (−⌊f ∗ (KX + ∆)⌋) −
→ ωX (−⌊KX + ∆⌋) .
Twisting by a Cartier divisor L and taking cohomology leads us to a map
(6.6)
Ψ : H 0 (Y, ωY (⌈f ∗ (L − KX − ∆)⌉)) = H 0 (X, f∗ ωY (⌈f ∗ (L − KX − ∆)⌉))
−
→ H 0 (X, ωX (⌈L − KX − ∆⌉)) .
F -SINGULARITIES VIA ALTERATIONS
28
Theorem 6.7. Suppose that X is a F -finite normal variety over k and that ∆ is a Q-divisor
such that KX + ∆ is Q-Cartier. Finally set L to be any Cartier divisor. Then
T 0 (X, L − KX − ∆)
\
Trf
=
Image H 0 (Y, ωY (⌈f ∗ (L − KX − ∆)⌉)) −−−→ H 0 (X, ωX (⌈L − KX − ∆⌉))
f :Y −
→X
where f runs over all alterations f : Y −→ X and the maps in the intersection are as in (6.6).
T
Proof. Certainly we have the containment T 0 (X, L − (KX + ∆)) ⊇ f : Y −
→X (. . .) since
this latter intersection intersects more modules than the one defining T 0 (X, L − (KX + ∆)).
We need to prove the reverse containment.
β
α
Fix an alteration f : Y −
→ X. Set Y −−→ W = Spec(f∗ OY ) −−→ X to be the Stein
factorization of f . Certainly
τ (ωW , β ∗ (L − KX − ∆)) ⊆ α∗ OY (⌈KY + f ∗ (L − KX − ∆)⌉).
Choose a finite cover h : U −
→ W such that Trh : OU (⌈KU − h∗ β ∗ (L − KX − ∆)⌉) −
→
OW (⌈KW − β ∗ (L − KX − ∆)⌉) has image τ (ωW , β ∗ (L − KX − ∆)).
Y ❇
❇
❇❇ f
❇❇
❇❇
/X
/W
α
U
h
β
We now have the following factorization:
H 0 (U, OU (⌈KU + h∗ β ∗ (L − KX − ∆)⌉)) −
→ H 0 (X, β∗ τ (ωW , β ∗ (L − KX − ∆)))
−
→ H 0 (X, β∗ α∗ OY (⌈KY + f ∗ (L − KX − ∆)⌉))
= H 0 (X, f∗ OY (⌈KY + f ∗ (L − KX − ∆)⌉))
= H 0 (Y, ωY (⌈f ∗ (L − KX − ∆))⌉)
−
→ H 0 (X, ωX (⌈L − KX − ∆⌉))
And so we obtain the desired inclusion.
Theorem 6.8. Suppose that f : Z −→ X is an alteration between normal F -finite varieties
over a field k, ∆ is a Q-divisor on X such that KX + ∆ is Q-Cartier and L is any Cartier
divisor on X. Additionally suppose that either X is affine or proper over k.
Then
0
∗
Ψ T Z, f (L − (KX + ∆)) = T 0 X, L − (KX + ∆)
where Ψ is the map described in (6.6).
In particular, if f is proper and birational and X is affine, then this yields a transformation rule for the test ideal under proper birational morphisms.
Proof. Certainly T 0 (X, L − (KX + ∆)) ⊇ Ψ T 0 (Z, f ∗ (L − (KX + ∆))) in either case by
Theorem 6.7.
If X is proper over k, then so is Z and so H 0 (Z, OZ (⌈KZ + f ∗ (L − (KX + ∆))⌉)) is a
finite dimensional k-vector space. It follows that T 0 (Z, f ∗ (L − (KX + ∆))) is the image of
a single map
Trg
H 0 Z ′ , OZ ′ (⌈KZ ′ + g∗ f ∗ (L − KX − ∆)⌉) −−−→ H 0 (Z, OZ (⌈KZ + f ∗ (L − KX − ∆)⌉))
F -SINGULARITIES VIA ALTERATIONS
29
for some finite cover g : Z ′ −
→ Z. However, f ◦ g : Z ′ −
→ X is also an alteration, and it
0
follows that T (X, L − (KX + ∆)) is contained in the image of
Trf ◦g
H 0 Z ′ , OZ ′ (⌈KZ ′ + g ∗ f ∗ (L − KX − ∆)⌉) −−−−→ H 0 (X, OX (⌈KX + L − KX − ∆⌉))
whose image is clearly Trf T 0 (Z, f ∗ (L − (KX + ∆))) .
On the other hand, suppose X = Spec R is affine and observe that, without loss of
generality, we may assume that KX + ∆ is effective. Set S = H 0 (Y, OY ) and define
α
β
Y −−→ X ′ = Spec S −−→ X to be the Stein factorization of f . Now, the global sections H 0 (Z, OZ (⌈KZ + f ∗ (L − (KX + ∆))⌉)) can be identified with elements of ωS (L)
because we assumed that KX + ∆ ≥ 0. In particular, we have τ (ωS ,β ∗ (L − KX − ∆)) ⊆
H 0 (Z, OZ (⌈KZ −f ∗ (L−(KX +∆))⌉)). But Trβ τ (ωS , β ∗ (L−KX −∆)) = τ (R, L−KX −∆)
and the other containment follows.
Remark 6.9. In order to generalize the above result to arbitrary schemes, it would be helpful
to know that the intersection defining T 0 (X, Γ) stabilized in general.
7. Surjectivities on cohomology
In this section we show how the vanishing statements obtained in [Bha10] and in Section 5,
combined with the ideas of Section 6, can be used to construct global sections of adjoint line
bundles. We are treating this current section as a proof-of-concept. In particular, many of
the statements can be easily generalized. We leave the statements simple however in order
to demonstrate the main ideas. Consider the following prototypical application of Kodaira
vanishing.
Example 7.1. Suppose that X is a smooth projective variety in characteristic zero and that
D is an effective Cartier divisor on X. Set L to be an ample line bundle on X. We have
the following short exact sequence
0−
→ ωX ⊗ L −
→ ωX (D) ⊗ L −
→ ωD ⊗ L |D −
→0.
Taking cohomology gives us
0−
→ H 0 (X, ωX ⊗ L ) −
→ H 0 (X, ωX (D) ⊗ L ) −
→ H 0 (D, ωD ⊗ L |D ) −
→ H 1 (X, ωX ⊗ L ).
Kodaira vanishing implies that H 1 (X, ωX ⊗ L ) is zero and so
H 0 (X, ωX (D) ⊗ L ) −
→ H 0 (D, ωD ⊗ L |D )
is surjective.
Consider now the same example (in characteristic zero) but do not assume that X is
smooth.
Example 7.2. Suppose that X is a normal projective variety in characteristic zero and that
D is a reduced Cohen-Macaulay Cartier divisor on X. These conditions are enough to imply
that the natural map ωX (D) −
→ ωD is surjective.
e −
Set L to be an ample (or even big and nef) line bundle on X. Choose π : X
→ X to
e
e
be a log resolution of (X, D) and set D to be the strict transform of D on X. We have the
F -SINGULARITIES VIA ALTERATIONS
b (X):
following diagram of exact triangles in Dcoh
/ π∗ ω e (D)
e ⊗ π∗L
/ π∗ ω e ⊗ π ∗ L
0
X
X
Rπ∗ ωXe ⊗ π ∗ L
0
/ ωX ⊗ L
/ Rπ∗ ω e (D)
e ⊗ π∗L
X
/ ωX (D) ⊗ L
30
/ π∗ ω e ⊗ π ∗ L |D
D
/ Rπ∗ ω e ⊗ π ∗ L |D
D
/ ωD ⊗ L |D
/0
+1
/
/0
The vertical equalities and the top right surjection are due Grauert-Riemenschneider vanishing [GR70]. Because X is not smooth, H 1 (X, ωX (L)) is not necessarily zero, see for example
e ω e ⊗ π ∗ L ) is zero by Kawamata-Viehweg
[AJ89]. However, H 1 (X, (π∗ ωXe ) ⊗ L ) = H 1 (X,
X
∗
vanishing since π L is big and nef, [Kaw82, Vie82]. Thus we have the surjection:
e ⊗L) −
→ H 0 (X, (π∗ ωDe ) ⊗ L |D ) ,
H 0 (X, (π∗ ωXe (D))
between submodules of H 0 (X, ωX (D) ⊗ L ) and H 0 (D, ωD ⊗ L |D ).
Interestingly, π∗ ωDe is independent of the choice of embedding of D into X since π∗ ωDe is
the multiplier module for D by definition. Even more, H 0 (D, π∗ ωDe ⊗ L |D ) only depends
on the pair (D, L |D ).
Furthermore, using the method of proof of Theorem 8.3 below, it is easy to see that
\
Image H 0 (E, ωE ⊗ f ∗ L |D ) −
→ H 0 (D, ωD ⊗ L |D )
H 0 (D, π∗ ωDe ⊗ L |D ) =
f : E−
→D
where the intersection runs over all regular alterations f : E −
→ D. In light of Theorem 6.7
0
the subspace H (D, π∗ ωDe ⊗ L |D ) may be viewed as an analog of T 0 (D, ωDe ⊗ L |D ). This
inspires the remainder of the section.
In characteristic p > 0, Kodaira vanishing does not hold even on smooth varieties [Ray78].
However, we have the following corollary of [Bha10], cf. Corollary 5.6.
Theorem 7.3. Suppose that D is a Cartier divisor on a normal proper d-dimensional
variety X and L is a big and semi-ample line bundle on X. Using the natural map
γ : H 0 (X, ωX (D) ⊗ L ) −→ H 0 (D, ωD ⊗ L |D )
one has an inclusion
T 0 (D, ωD ⊗ L |D ) ⊆ γ(T 0 (X, ωX ⊗ L (D))).
In particular, if T 0 (D, ωD ⊗ L |D ) 6= 0, then H 0 (X, ωX (D) ⊗ L ) 6= 0. And if T 0 (D, ωD ⊗
L |D ) = H 0 (D, ωD ⊗ L |D ) then γ is surjective.
Proof. Using Lemma 6.4, set f : Y −
→ X to be a finite cover of X such that T 0 (X, ωX ⊗
L (D)) is equal to
Trf q
q
q
∗
0
0
Image H (Y, ωY [−d] ⊗ f L (D)) −−−−→ H (X, ωX [−d] ⊗ L (D))
q
noting that H0 (X, ωX [−d] ⊗ L (D)) ∼
= H 0 (X, ωX ⊗ L (D)). By [Bha10, Proposition 5.5.3],
there exists a finite cover g : Z −
→ Y such that H d−1 (Y, f ∗ L −1 ) −
→ H d−1 (Z, g∗ f ∗ L −1 ) is
q
q
1−d
∗
∗
the zero map. Therefore the dual map H (Z, g f L ⊗ ωY ) −
→ H1−d (Y, f ∗ L ⊗ ωX ) is
zero.
F -SINGULARITIES VIA ALTERATIONS
31
Set DY = f ∗ D and DZ = g∗ f ∗ D. Note that DY or DZ may not be normal or even
reduced, even if D is. They are however equidimensional. Then there is a map between
long exact sequences:
0
/
q
H−d (Z, g∗ f ∗ L ⊗ ωZ )
/
/
q
H−d (Z, g∗ f ∗ L ⊗ ωZ (DZ ))
H1−d (DZ , g∗ f ∗ L |D ⊗ ωD )
/
q
Z
H1−d (Z, g∗ f ∗ L ⊗ ωZ )
q
β
0
/
q
H−d (Y, f ∗ L ⊗ ωY )
/
/
q
H−d (Y, f ∗ L ⊗ ωY (DY ))
H1−d (DY , f ∗ L |D ⊗ ωD )
q
Y
ν
0
0
/
q
/
H−d (X, L ⊗ ωX )
H 0 (X, L ⊗ ωX )
/
/
H1−d (Y, f ∗ L ⊗ ωY )
q
/
q
/
δ
α
0
H−d (X, L ⊗ ωX (D))
γ
H 0 (X, L ⊗ ωX (D))
γ
H1−d (D, L |D ⊗ ωD )
q
/
/
H1−d (X, L ⊗ ωX )
q
H 0 (D, L |D ⊗ ωD )
where the vertical equalities are obtained from the spectral sequences computing the middle
lines. Note we identify f with its restriction f |DY and g with g|DZ
Choose the h : E −
→ DZ to be normalization of the (DZ )red and notice we have a map
H 0 (E, ωE ⊗ h∗ g∗ f ∗ L |D ) = H1−d (E, ωE ⊗ h∗ g∗ f ∗ L |D )
q
−
→ H1−d (DZ , ωDZ ⊗ g ∗ f ∗ L |D )
q
α◦β
−−−→ H1−d (D, ωD ⊗ L |D ) = H 0 (D, ωD ⊗ L |D ).
q
The image of this map contains T 0 (D, ωD ⊗ L |D ), and thus the image of α ◦ β also contains
T 0 (D, ωD ⊗L |D ). Therefore, if we view d ∈ T 0 (D, ωD ⊗M ) as an element of H1−d (D, L |D ⊗
q
q
ωD ), it must have some pre-image z ∈ H1−d (DZ , f ∗ L |D ⊗ ωDZ ). The diagram implies that
q
δ(β(z)) = 0. Thus β(z) is an image of some element in y ∈ H 0 (X, f ∗ L ⊗ ωY (DY )). It
follows that ν(y) ∈ T 0 (X, ωX ⊗ L (D)) and so d = γ(ν(y)) which completes the proof.
Remark 7.4. If we knew that the intersection defining T 0 (X, L − KX − ∆) stabilized, then
the previous result could be generalized to arbitrary equidimensional schemes (not just
those which are of finite type over a field). Even without this hypothesis, the argument
above still implies that T 0 (D, ωD ⊗ L |D ) ⊆ γ(H 0 (X, ωX (D) ⊗ L )). The same statement
holds for Theorem 7.6.
Remark 7.5. We expect that one can obtain more precise surjectivities involving characteristic p > 0 analogs of adjoint ideals. In particular, T 0 (X, ωX ⊗ L ) is not the right analog
e appearing in Example 7.2 above.
of the term H 0 (X, π∗ ωXe (D))
We also show that this method can be generalized with Q-divisors.
Theorem 7.6. Suppose X is a normal F -finite variety which is proper over a field k and
that D is a Cartier divisor on X. Additionally, suppose that ∆ is a Q-divisor on X with
no common components with D and such that KX + ∆ is Q-Cartier. Finally suppose that
L is a Cartier divisor on X such that L − (KX + D + ∆) is big and semi-ample. Then the
natural map
H 0 (X, OX (⌈KX + D + L − (KX + D + ∆)⌉) = H 0 (X, OX (⌈L − ∆⌉)
γ
−−→ H 0 (D, OD (⌈KD + L|D − (KD + ∆|D )⌉)) = H 0 (D, OD (L|D − ⌊∆⌋|D ).
yields an inclusion
T 0 (D, L|D − (KD + ∆|D )) ⊆ γ T 0 (X, D + L − (KX + D + ∆))
F -SINGULARITIES VIA ALTERATIONS
32
noting that T 0 (D, L|D − (KD + ∆|D )) ⊆ H 0 (D, OD (⌈L|D − ⌊∆⌋|D ⌉)).
Proof. Let us first point out that γ is induced from the restriction map
OX (⌈KX + D + L − (KX + D + ∆)⌉)
−
→ OD (⌈KX + D + L − (KX + D + ∆)⌉|D ) = OD (KD + L|D − (KD + ⌊∆⌋|D )).
Now choose a finite cover h : W −
→ X, with W normal, such that h∗ (KX + ∆) is an integral
∗
Cartier divisor and set DW = h D. Note DW is not necessarily normal or even reduced (it
is however unmixed and thus equidimensional). We have the diagram
/ h∗ ωD h∗ (L − (KX + D + ∆))
h∗ ωW h∗ (D + L − (KX + D + ∆))
W
ξ
ωX (⌈D + L − (KX + D + ∆)⌉)
/ ωD (⌈L|D − (KD + ∆)⌉|D )
of which we take global sections
and then apply the method of Lemmas
6.3 and 6.5 to
0
∗
conclude that the image ξ T DW , ωDW ⊗ OW (h (L − (KX + D + ∆))) is equal to
T 0 D, L|D − (KD + ∆|D ) ⊆ H 0 D, OD (KD + L|D − (KD + ∆|D ))
(7.7)
(7.8)
⊆ H 0 D, ωD (⌈L|D − (KD + ∆)⌉|D ) .
Likewise T 0 (X, D + L − (KX + D + ∆)) is the image of T 0 (W, h∗ (D + L − (KX + D + ∆))).
Thus it is sufficient to show that via the map
H 0 W, ωW (h∗ (D + L − (KX + D + ∆))) −
→ H 0 DW , ωDW (h∗ (L − (KX + D + ∆))) ,
each element of T 0 DW , ωDW ⊗ OW (h∗ (L − (KX+ D + ∆))) is the image of an element
of T 0 W, ωW ⊗ OW (h∗ (D + L − (KX + D + ∆))) . We have just reduced to the setting of
Theorem 7.3 and the result follows.
8. Transformation rules for multiplier ideals
It still remains to be proven that the multiplier ideal in characteristic zero can be characterized as in our Main Theorem, for which we need to explore the behavior of multiplier
ideals under proper dominant maps. We further analyze the behavior of multiplier ideals
under alterations in arbitrary characteristic, which leads to an understanding of when (and
why) the classical characteristic zero transformation rule (2.41) for the multiplier ideal under
finite maps may fail in positive characteristic.
Suppose that A is a normal ring in arbitrary characteristic and that (X = Spec A, ∆)
is an affine pair such that KX + ∆ is Q-Cartier. As discussed in Section 2.4, J (X; ∆) is
only known to be quasi-coherent assuming a theory of resolution of singularities is at hand.
Nonetheless, it is always a sheaf of (fractional) ideals, and in this section we will use the
notation J (A; ∆) := H 0 (X, J (X; ∆)) to denote the corresponding ideal of global sections.
An important and useful perspective, largely in the spirit of [Lip94], is to view Definition 2.28
as a collection of valuative conditions for membership in the multiplier ideal J (A; ∆).
Specifically, suppose E is a prime divisor on a normal proper birational modification
θ: Z −
→ X. After identification of the function fields K = Frac(A) = K(X) = K(Z),
E gives rise to a valuation ordE centered on X. The valuation ring of ordE is simply the
local ring OZ,E . Thus, we have that J (A; ∆) can be described as the fractional ideal inside
of K(X) given by
F -SINGULARITIES VIA ALTERATIONS
J (A; ∆) =
\
OZ,E (⌈KZ − θ ∗ (KX + ∆)⌉)
θ : Z−
→X
Prime E on Z
=
(
33
f ∈K
ordE (f ) ≥ ordE (⌊θ ∗ (KX + ∆) − KZ ⌋)
for all θ : Z −
→ X and all prime E on Z
)
We now show how to generalize the characteristic zero transformation rule for multiplier
ideals under finite maps (2.41) so as to incorporate the trace map as in (2.42). Furthermore,
by working in arbitrary characteristic, we also recover both of these transformation rules in
positive characteristic for separable finite maps of degree prime to the characteristic.
Theorem 8.1. Suppose that π : Spec B = Y −→ X = Spec A is a finite dominant map of
normal integral schemes of any characteristic and that (X, ∆X ) is a pair such that KX +∆X
is Q-Cartier. Define ∆Y = ∆X − Ramπ . Then
Tr(J (B; ∆Y )) ⊆ J (A; ∆X ) .
Furthermore, if the field trace map Tr : Frac(B) −→ Frac(A) satisfies Tr(B) = A (e.g. the
degree of π is prime to the characteristic), then
Tr(J (B; ∆Y )) = J (B; ∆Y ) ∩ K(A) = J (A; ∆X ).
Proof. Suppose f ∈ J (B; ∆Y ). Fix a prime divisor E on a normal proper birational modification θ : Z −
→ X. Consider the discrete valuation ring R = OZ,E , viewed as a subring
of K(A), and let r ∈ K(A) be a uniformizer for R. Denote by S the integral closure of R
inside of K(B). Then S can also be realized in the following manner. Let W be the normal
scheme fitting into a commutative diagram
W
η
/Y
ρ
π
Z
θ
/X
where ρ is finite and η is birational (that is, take W to be the normalization of the relevant
irreducible component of Y ×X Z). Let E1 , . . . , Ek be the prime divisors on W mapping
T
onto E. Then we have S = ki=1 OW,Ei , where again we have considered each OW,Ei as a
subring of K(B). In particular, for g ∈ K(B), we have g ∈ S if and only if ordEi (g) ≥ 0 for
all i = 1, . . . , k.
Let Φ be a generator for the rank one free S-module HomR (S, R). If we write Tr : S −
→R
as Tr( ) = Φ(c · ), we know from [ST10, Proposition 4.8] that divW (c) = Ramρ so that
ordEi (c) = ordEi (KW − ρ∗ KZ ) .
Let λE = ordE (⌈KZ − θ ∗ (KX + ∆X )⌉) and consider g = cr λE f . Since f ∈ J (B; ∆Y ), it
follows that ordEi (f ) + ⌈KW − η ∗ (KY + ∆Y )⌉ ≥ 0 and hence
ordEi (g) ≥ ordEi (KW − ρ∗ KZ + ρ∗ ⌈KZ − θ ∗ (KX + ∆X )⌉ − ⌈KW − η ∗ (KY + ∆Y )⌉)
= ordEi (ρ∗ ⌈−θ ∗ (KX + ∆X )⌉ − ⌈−η ∗ (KY + ∆Y )⌉)
= ordEi (ρ∗ ⌈−θ ∗ (KX + ∆X )⌉ − ⌈−η ∗ (KY + π ∗ ∆X − (KY − π ∗ KX ))⌉)
= ordEi (ρ∗ ⌈−θ ∗ (KX + ∆X )⌉ − ⌈ρ∗ (−θ ∗ (KX + ∆X ))⌉)
≥ 0.
F -SINGULARITIES VIA ALTERATIONS
34
It now follows that g ∈ S, and thus Φ(g) = r λE Tr(f ) ∈ R. In other words, we have
ordE (Φ(g)) = ordE (Tr(f )) + ordE (r λE )
= ordE (Tr(f )) + ordE (⌈KZ − θ ∗ (KX + ∆X ⌉) ≥ 0
and we conclude that Tr(f ) ∈ J (A; ∆X ) and thus Tr(J (B; ∆Y )) ⊆ J (B; ∆X ) as desired.
Note that every divisorial valuation ν : K(B) \ {0} −
→ Z centered on Y can be realized as
ν = ordEi as in the setup above. Indeed, the restriction ν to K(A) gives rise to a discrete
valuation ring whose residue field has transcendence degree (dim(Y ) − 1) = (dim(X) − 1)
over Λ; see [Bou98, Chapter VI, Section 8]. By Proposition 2.45 in [KM98], this valuation
ring can be realized as OZ,E for some prime divisor E on θ : Z −
→ X as above, so that
ν = ordEi for some i.
Let us now argue that J (A; ∆X ) · π∗ OY ⊆ π∗ J (B; ∆Y ). Suppose h ∈ J (A; ∆X ). We
have by assumption
0 ≤ ordE (h) + ordE (⌈KZ − θ ∗ (KX + ∆X )⌉)
whence it follows from [Har77, Chapter IV, Proposition 2.2], that
0 ≤ ordEi (h) + ordEi (ρ∗ ⌈KZ − θ ∗ (KX + ∆X )⌉)
≤ ordEi (h) + ordEi (Ramρ ) + ordEi (⌈ρ∗ (KZ − θ ∗ (KX + ∆X ))⌉)
= ordEi (h) + ordEi (⌈KW − η ∗ (KY + ∆Y )⌉) .
Thus, we conclude h ∈ J (B; ∆Y ) and hence J (A; ∆X ) · π∗ OY ⊆ π∗ J (B; ∆Y ).
Now assume in addition that the trace map is surjective, i.e. Tr(B) = A. We have, using
the surjectivity of trace at the third inequality, that
J (A; ∆X ) ⊆ (J (A; ∆X )·π∗ OY )∩OX ⊆ π∗ J (B; ∆Y )∩OX ⊆ Tr (π∗ J (B; ∆Y )) ⊆ J (A; ∆X ).
This necessitates equality throughout, which completes the proof.
We now complete the proof of our main theorem.
Corollary 8.2. Suppose that (X, ∆) is a pair in characteristic zero. Then
\
Tr
J (X; ∆) =
Image π∗ OY (⌈KY − π ∗ (KX + ∆)⌉) −−−→ K(X)
π:Y −
→X
where π ranges over all alterations with Y normal, Tr : K(Y ) −→ K(X) is the field trace,
and we have that KY = π ∗ KX + Ramπ wherever π is finite.
Proof. Because of the existence of resolution of singularities in characteristic zero, we may
assume that each Y is smooth and that KY − π ∗ (KX + ∆) is supported on a simple normal
crossings divisor on Y . First consider a finite map f : W −
→ X with W normal (note that
we are in characteristic zero, so the map is separable). If we pick a representative KX such
that OX (KX ) = ωX and also pick KW = f ∗ KX + Ramf , then we recall from Example 2.2
that the field trace Tr : K(W ) −
→ K(X) induces a map Tr : OW (KW ) −
→ OX (KX ) which is
identified with the Grothendieck trace Trf : f∗ ωW −
→ ωX .
Fix any proper dominant map π : Y −
→ X. Factor π through a birational map ρ : Y −
→W
and a finite map f : W −
→ X with W normal via Stein factorization. Thus π∗ ωY −
→ ωX
factors through Tr : f∗ ωW −
→ ωX . It follows that
π∗ OY (KY − π ∗ (KX + ∆)) −
→ K(X)
factors through Tr : K(W ) −
→ K(X). Furthermore, ρ∗ OY (KY −π ∗ (KX +∆)) is by definition
the multiplier ideal J (W ; ∆W ) where ∆W = f ∗ ∆ − Ramf , since Y is smooth. Thus,
F -SINGULARITIES VIA ALTERATIONS
35
Image π∗ OY (KY − π ∗ (KX + ∆)) −
→ K(X) is simply Tr(ρ∗ J (W ; ∆W )). But that is equal
to J (X; ∆) by Theorem 8.1.
Finally, we turn our attention to the behavior of the multiplier ideal under proper dominant maps in characteristic zero.
Theorem 8.3. If (X, ∆) is a log-Q-Gorenstein pair in characteristic zero, then
\
Tr
J (X; ∆) =
Image hdim Y −dim X Rπ∗ OY (⌈KY − π ∗ (KX + ∆)⌉) −−−π→ K(X)
π:Y −
→X
where the intersection runs over all proper dominant maps from normal varieties Y . Note
the map to K(X) is induced from the trace map as in Proposition 2.18.
Proof. We may restrict our maps π : Y −
→ X to those maps where Y is regular and which
factor through a fixed regular alteration f : Z −
→ X such that f ∗ (KX + ∆) is an integral
Cartier divisor. Thus by Corollary 8.2, it is sufficient to show that
hdim Y −dim X R(f ◦ ρ)∗ OY (KY − π ∗ (KX + ∆)) −
→ h0 Rf∗ ωZ (−f ∗ (KX + ∆))
is surjective on global sections. However, by the statement of [Kov00, Theorem 2], first
correctly proved in full generality in [Bha10, Theorem 4.1.3], the natural map OZ −
→ Rρ∗ OY
has a left inverse in the derived category, and thus so does
Rf∗ Rρ∗ (ωY [dim Y ] ⊗ OY (π ∗ (KX + ∆))) −
→ Rf∗ (ωZ [dim Z] ⊗ OZ (f ∗ (KX + ∆))) .
In particular, taking −dth cohomology yields the desired surjection on global sections.
9. Further questions
This theory suggests a large number of potential directions for further inquiry. We
highlight a few of the more obvious ones below.
Question 9.1 (Mixed characteristic). What can be said in mixed characteristic? In particular, does the intersection from our Main Theorem stabilize for schemes in mixed characteristic?
We have learned that M. Hochster and W. Zhang have made progress on this question
in low dimensions for isolated singularities.
Question 9.2 (Adjoint ideals). Can one develop a characteristic theory analogous to the
theory of adjoint ideals, cf. [Laz04, 9.3.E] or [Tak08], described via alterations or finite
covers?
The characterization of test ideals, as well as F -regular and F -rational singularities suggests the following:
Question 9.3 (F -pure singularities). Can F -pure (or F -injective) singularities likewise be
described by alterations?
Finally, we consider the following question:
Question 9.4 (Effectivity of covers and alterations). Given a pair (X, ∆), how can we identify
finite covers (or alterations) π : Y −
→ X such that
Tr
τ (X; ∆) = Image ⌈π∗ OY (KY − π ∗ (KX + ∆)⌉) −−−π→ K(X) ?
In other words, can we determine when the intersection from our Main Theorem stabilizes?
F -SINGULARITIES VIA ALTERATIONS
36
We do not have a good answer to this question. The key point in our construction is
repeated use of the Equational Lemma [HH92, HL07, SS12]. The procedure in that Lemma
is constructive. However, this is not very satisfying. It would be very satisfying and likely
useful if one had a different geometric or homological criterion for identifying π : Y −
→X
as in the question above.
References
[AJ89]
[Bha10]
[Bha12]
[Bli04]
[Bli13]
[BB11]
[BSTZ10]
[Bou98]
[CR11]
[Con00]
[dJ96]
[Ein97]
[FW89]
[GR70]
[Har66]
[Har77]
[Har07]
[Hir64]
D. Arapura and D. B. Jaffe: On Kodaira vanishing for singular varieties, Proc. Amer.
Math. Soc. 105 (1989), no. 4, 911–916. MR952313 (89h:14013)
B. Bhatt: Derived direct summands, ProQuest LLC, Ann Arbor, MI, 2010, Thesis (Ph.D.)–
Princeton University. 2753219
B. Bhatt: Derived splinters in positive characteristic, Compos. Math. 148 (2012), no. 6,
1757–1786. 2999303
M. Blickle: Multiplier ideals and modules on toric varieties, Math. Z. 248 (2004), no. 1,
113–121. MR2092724 (2006a:14082)
M. Blickle: Test ideals via algebras of p−e -linear maps, J. Algebraic Geom. 22 (2013), no. 1,
49–83. 2993047
M. Blickle and G. Böckle: Cartier modules: finiteness results, J. Reine Angew. Math. 661
(2011), 85–123. 2863904
M. Blickle, K. Schwede, S. Takagi, and W. Zhang: Discreteness and rationality of
F -jumping numbers on singular varieties, Math. Ann. 347 (2010), no. 4, 917–949. 2658149
N. Bourbaki: Commutative algebra. Chapters 1–7, Elements of Mathematics (Berlin),
Springer-Verlag, Berlin, 1998, Translated from the French, Reprint of the 1989 English translation. MR1727221 (2001g:13001)
A. Chatzistamatiou and K. Rülling: Higher direct images of the structure sheaf in positive
characteristic, Algebra Number Theory 5 (2011), no. 6, 693–775. 2923726
B. Conrad: Grothendieck duality and base change, Lecture Notes in Mathematics, vol. 1750,
Springer-Verlag, Berlin, 2000. MR1804902 (2002d:14025)
A. J. de Jong: Smoothness, semi-stability and alterations, Inst. Hautes Études Sci. Publ.
Math. (1996), no. 83, 51–93. 1423020 (98e:14011)
L. Ein: Multiplier ideals, vanishing theorems and applications, Algebraic geometry—Santa
Cruz 1995, Proc. Sympos. Pure Math., vol. 62, Amer. Math. Soc., Providence, RI, 1997,
pp. 203–219. MR1492524 (98m:14006)
R. Fedder and K. Watanabe: A characterization of F -regularity in terms of F -purity,
Commutative algebra (Berkeley, CA, 1987), Math. Sci. Res. Inst. Publ., vol. 15, Springer, New
York, 1989, pp. 227–245. MR1015520 (91k:13009)
H. Grauert and O. Riemenschneider: Verschwindungssätze für analytische Kohomologiegruppen auf komplexen Räumen, Invent. Math. 11 (1970), 263–292. MR0302938 (46 #2081)
R. Hartshorne: Residues and duality, Lecture notes of a seminar on the work of A.
Grothendieck, given at Harvard 1963/64. With an appendix by P. Deligne. Lecture Notes
in Mathematics, No. 20, Springer-Verlag, Berlin, 1966. MR0222093 (36 #5145)
R. Hartshorne: Algebraic geometry, Springer-Verlag, New York, 1977, Graduate Texts in
Mathematics, No. 52. MR0463157 (57 #3116)
R. Hartshorne: Generalized divisors and biliaison, Illinois J. Math. 51 (2007), no. 1, 83–98
(electronic). MR2346188
H. Hironaka: Resolution of singularities of an algebraic variety over a field of characteristic
zero. I, II, Ann. of Math. (2) 79 (1964), 109–203; ibid. (2) 79 (1964), 205–326. MR0199184 (33
#7333)
[HY11]
[HH92]
[HL07]
[Kaw82]
M. Hochster and Y. Yao: A weak embedding theorem (and test exponents) for modules of
finite phantom projective dimension, work in progress, 2011.
M. Hochster and C. Huneke: Infinite integral extensions and big Cohen-Macaulay algebras,
Ann. of Math. (2) 135 (1992), no. 1, 53–89. 1147957 (92m:13023)
C. Huneke and G. Lyubeznik: Absolute integral closure in positive characteristic, Adv. Math.
210 (2007), no. 2, 498–504. 2303230 (2008d:13005)
Y. Kawamata: A generalization of Kodaira-Ramanujam’s vanishing theorem, Math. Ann. 261
(1982), no. 1, 43–46. MR675204 (84i:14022)
F -SINGULARITIES VIA ALTERATIONS
37
[KKMSD73] G. Kempf, F. F. Knudsen, D. Mumford, and B. Saint-Donat: Toroidal embeddings. I,
Lecture Notes in Mathematics, Vol. 339, Springer-Verlag, Berlin, 1973. MR0335518 (49 #299)
[KM98]
J. Kollár and S. Mori: Birational geometry of algebraic varieties, Cambridge Tracts in
Mathematics, vol. 134, Cambridge University Press, Cambridge, 1998, With the collaboration of C. H. Clemens and A. Corti, Translated from the 1998 Japanese original. MR1658959
(2000b:14018)
[Kov00]
[Laz04]
S. J. Kovács: A characterization of rational singularities, Duke Math. J. 102 (2000), no. 2,
187–191. MR1749436 (2002b:14005)
R. Lazarsfeld: Positivity in algebraic geometry. II, Ergebnisse der Mathematik und ihrer
Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], vol. 49,
Springer-Verlag, Berlin, 2004, Positivity for vector bundles, and multiplier ideals. MR2095472
(2005k:14001b)
[Lip78]
[Lip94]
[LT81]
[Ray78]
[SS12]
[Sch09]
[Sch10]
[Sch11]
[ST08]
[ST10]
[ST12a]
[ST12b]
[Sin99]
[Smi94]
J. Lipman: Desingularization of two-dimensional schemes, Ann. Math. (2) 107 (1978), no. 1,
151–207. 0491722 (58 #10924)
J. Lipman: Adjoints of ideals in regular local rings, Math. Res. Lett. 1 (1994), no. 6, 739–755,
With an appendix by Steven Dale Cutkosky. MR1306018 (95k:13028)
J. Lipman and B. Teissier: Pseudorational local rings and a theorem of Briançon-Skoda about
integral closures of ideals, Michigan Math. J. 28 (1981), no. 1, 97–116. MR600418 (82f:14004)
M. Raynaud: Contre-exemple au “vanishing theorem” en caractéristique p > 0, C. P.
Ramanujam—a tribute, Tata Inst. Fund. Res. Studies in Math., vol. 8, Springer, Berlin, 1978,
pp. 273–278. 541027 (81b:14011)
A. Sannai and A. K. Singh: Galois extensions, plus closure, and maps on local cohomology,
Adv. Math. 229 (2012), no. 3, 1847–1861.
K. Schwede: F -adjunction, Algebra Number Theory 3 (2009), no. 8, 907–950.
K. Schwede: Centers of F -purity, Math. Z. 265 (2010), no. 3, 687–714. 2644316
K. Schwede: Test ideals in non-Q-Gorenstein rings, Trans. Amer. Math. Soc. 363 (2011),
no. 11, 5925–5941. 2817415 (2012c:13011)
K. Schwede and S. Takagi: Rational singularities associated to pairs, Michigan Math. J. 57
(2008), 625–658.
K. Schwede and K. Tucker: On the behavior of test ideals under finite morphisms,
arXiv:1003.4333, to appear in J. Algebraic Geom.
K. Schwede and K. Tucker: A survey of test ideals, Progress in commutative algebra 2,
Walter de Gruyter, Berlin, 2012, pp. 39–99. 2932591
K. Schwede and K. Tucker: Test ideals of non-principal ideals: Computations, jumping numbers, alterations and division theorems, arXiv:1212.6956, to appear in Journal de
Mathématiques Pures et Appliquées.
A. K. Singh: Q-Gorenstein splinter rings of characteristic p are F-regular, Math. Proc. Cambridge Philos. Soc. 127 (1999), no. 2, 201–205. 1735920 (2000j:13006)
K. E. Smith: Tight closure of parameter ideals, Invent. Math. 115 (1994), no. 1, 41–60.
MR1248078 (94k:13006)
[Smi95]
K. E. Smith: Test ideals in local rings, Trans. Amer. Math. Soc. 347 (1995), no. 9, 3453–3472.
MR1311917 (96c:13008)
[Smi97a]
[Smi97b]
[Smi97c]
[Tak04]
[Tak08]
[TW92]
K. E. Smith: Erratun to vanishing, singularities and effective bounds via prime characteristic
local algebra, http://www.math.lsa.umich.edu/~kesmith/santaerratum.ps, 1997.
K. E. Smith: F -rational rings have rational singularities, Amer. J. Math. 119 (1997), no. 1,
159–180. MR1428062 (97k:13004)
K. E. Smith: Vanishing, singularities and effective bounds via prime characteristic local algebra, Algebraic geometry—Santa Cruz 1995, Proc. Sympos. Pure Math., vol. 62, Amer. Math.
Soc., Providence, RI, 1997, pp. 289–325. MR1492526 (99a:14026)
S. Takagi: An interpretation of multiplier ideals via tight closure, J. Algebraic Geom. 13
(2004), no. 2, 393–415. MR2047704 (2005c:13002)
S. Takagi: A characteristic p analogue of plt singularities and adjoint ideals, Math. Z. 259
(2008), no. 2, 321–341. MR2390084 (2009b:13004)
M. Tomari and K. Watanabe: Normal Zr -graded rings and normal cyclic covers,
Manuscripta Math. 76 (1992), no. 3-4, 325–340. 1185023 (93j:13002)
F -SINGULARITIES VIA ALTERATIONS
[Vie82]
E. Viehweg:
38
Vanishing theorems, J. Reine Angew. Math. 335 (1982), 1–8. MR667459
(83m:14011)
Institut für Mathematik, Johannes Gutenberg-Universität Mainz, 55099 Mainz, Germany
E-mail address: [email protected]
Department of Mathematics, The Pennsylvania State University, University Park, PA,
16802, USA
E-mail address: [email protected]
Department of Mathematics, University of Utah, Salt Lake City, UT, 84112, USA
E-mail address: [email protected]
| 0 |
1
L ATTICE E RASURE C ODES OF L OW R ANK
WITH
N OISE M ARGINS
Vinay A. Vaishampayan
arXiv:1801.04466v1 [] 13 Jan 2018
Dept. of Engineering Science and Physics
City University of New York-College of Staten Island
Staten Island, NY USA
Abstract
We consider the following generalization of an (n, k) MDS code for application to an erasure
channel with additive noise. Like an MDS code, our code is required to be decodable from any k
received symbols, in the absence of noise. In addition, we require that the noise margin for every
allowable erasure pattern be as large as possible and that the code satisfy a power constraint. In this
paper we derive performance bounds and present a few designs for low rank lattice codes for an additive
noise channel with erasures.
Index terms Lattices, Erasure Codes, MDS Codes, Compound Channel.
I. I NTRODUCTION
YS1
Z
EncoderModulator
X
Y
+
Erasure
Network
YS2
YSi
YSm
Fig. 1. Coding and Modulation for the Erasure Network.
We consider low rank lattice codes for transmission over a noisy erasure channel as illustrated
in Fig. 1. In this figure k information symbols are mapped by an encoder/modulator to a vector
x = (x1 , x2 , . . . , xn ) ∈ Λ, where Λ is a rank-k lattice in Rn . The output of the additive noise
channel is y = x + z, where z = (z1 , z2 , . . . , zn ) is a noise vector independent of x and with
independent components. Components of y are then erased by an erasure network, whose outputs
2
are obtained by retaining only those symbols of y indexed by subsets S ⊂ {1, 2, . . . , n} in a
given sub-collection of subsets; thus yS coincides with y is the positions identified by S. As
an example, with n = 4, S = {2, 4} and y = (a, b, c, d), yS = (b, d). A decoder estimates
the source symbols based on yS with a probability of error denoted Pe (S). The objective is to
minimize Pe (S) for each S by designing a single codebook which satisfies a power constraint
E[X t X] ≤ nP , where E denotes expectation with respect to a uniform distribution on the
codebook. Here we consider as our sub-collection S, all k-subsets of {1, 2, . . . , n}. Our paper is
organized as follows. Prior work and lattice background is in Sec. II. Two performance bounds
are presented in Sec. III, constructions for codes in dimension n = 4 are presented and compared
to the derived bounds in Sec. IV. A summary is in Sec. V.
We use the acronym w.l.o.g to mean ‘without loss of generality’.
II. P RIOR W ORK AND R EVIEW OF L ATTICE T ERMINOLOGY
The problem considered here may be viewed as a code design problem for a special case of
the compound channel, see e.g. [1], [5]. This work was motivated by a study on cross layer
coding that appeared in [4]. For prior contributions on the Gaussian erasure channel, please refer
to [8] and the references therein.We now develop notation and some basic definitions for low
rank lattices in Rn . Let {φi , i = 1, 2, . . . , k} be a collection of k ≤ n orthonormal column
vectors in Rn and let Φ = (φi , i = 1, 2, . . . , k) denote the associated n × k orthonormal matrix.
Let V denote a k × k generator matrix of full rank for a lattice ΛV = V Zk := {V u, u ∈ Zk }.
We will refer to ΛV as the mother lattice. Let
Λ = ΦΛV := ΦV Zk .
(1)
Λ is a rank-k lattice in Rn . Let G(ΛV ) = V t V denote the Gram matrix of ΛV (t is the transpose
operator).
The determinant of a lattice det Λ is defined in terms of the determinant of its Gram matrix by
det Λ := det(G(Λ)). Let ρ(Λ) denote the radius of the largest inscribed sphere in a Voronoi cell
of Λ. The packing density of Λ is defined in terms of Vk the volume of a unit-radius Euclidean
ball in Rk by
√
∆k (Λ) = Vk ρk / det Λ.
(2)
3
We denote by ∆k (opt), the largest packing density that can be acheived by any lattice in Rk . The
problem of finding lattices that maximize the packing density is a classical problem in number
theory and geometry, with several excellent references [3], [6].
The following definitions are from [6]. A body captures the notion of a solid subset of Rn ,
specifically, B ⊂ Rn is a body if it has nonempty interior and is contained in the closure
of its interior [6]. A body B ⊂ Rn is said to be centrally symmetric if B = −B, where
−B = {−x : x ∈ B}. A closed body B with the property that for any x ∈ B, the point
λx ∈ B for every 0 ≤ λ < 1 is called a star body. While convex bodies are star bodies, the
converse is not true. A simple example, and one directly relevant to us is the star body formed by
the union of centrally symmetric ellipsoids in Rn . A lattice Λ is said to be admissible for B ⊂ Rn ,
p
or B-admissible, if no non-zero point in Λ lies in B. The greatest lower bound of ( det Λ)
over all B−admissible lattices is called the lattice constant of B, denoted ∆(B) (which is set
to ∞ is there are no B− admissible lattices). A B-admissible lattice Λ with det Λ = ∆(B)2 is
said to be a critical lattice for B.
A lattice Λ is said to be a packing lattice for a body B if the sets B and B + λ are disjoint for
all non-zero λ ∈ Λ. It is known, Thm.1, Ch. 3, Sec 20 [6], that Λ is a packing lattice for centrally
symmetric, convex body B if and only if it is admissible for 2B. Thus, for a convex body, the
problem of finding a packing lattice for B is equivalent to that of finding an admissible lattice
for 2B. The connection between packings and admissibility for non-convex bodies is messier.
The distinction arises because for a centrally symmetric body B, Λ is a lattice packing of B
if and only if it is admissible for B + B, where + denotes the set sum or Minkowski sum. If
the centrally symmetric body is also convex, then B + B = 2B and thus packing problems and
admissibility problems are closely related. On the other hand, if B is centrally symmetric but
non-convex, in order to solve a packing problem for B one must solve an admissibility problem
for B + B, and this set may not be as easily described as B.
In our application, we need to index body B by subset S in a given sub-collection of subsets
S
and our problem is one of packing S∈S B(S), which is non-convex. While admissibility for
non convex centrally symmetric body 2C says nothing in general about packings for C, it turns
S
out that our design problem is equivalent to finding a critical lattice for 2 S∈S B(S) because the
decoder knows S. Thus it is possible to draw on the theory of admissible lattices for star bodies.
This theory provides several key ingredients to help find a solution to this problem. Most notably,
in the chapter on Mahler’s compactness theorem [2], Theorem VII states that every critical lattice
4
for a bounded star body S has n linearly independent points on the boundary of S.
III. B OUNDS
Let In = {1, 2, . . . , n}. For S ⊂ In , |S| = k let ΛS be the lattice obtained be retaining
only those coordinates that are in S or equivalently ΛS is the projection of Λ into the subspace
CS := Span{ei , i ∈ S} where ei = (0, ..., 0, 1, 0, ..., 0)t is the ith unit vector in Rn . For any
k-subset S ⊂ {1, 2, . . . , n}, we denote by ΦS the k × k submatrix obtained by extracting from Φ
the k rows identified by S. The generator matrix for ΛS is ΦS V and its Gram matrix G(ΛS ) =
V t ΦtS ΦS V .
Define the (packing volume) contraction ratio
βS = (ρ(ΛS )/ρ(ΛV ))k
(3)
let ρmin = minS ρ(ΛS ) and let βmin = minS βS .
A. Determinant Upper Bound
We will use symbols x̄, x# to denote the arithmetic mean and geometric mean, respectively,
of the real numbers xi over some index set I. When a k-dim mother lattice ΛV is set in IRn
using a basis Φ, the projections on the nk subsets S, cannot all be simultaneously good. There
are two important factors that measure the ‘goodness’ of the projections—the packing density
and the scale of the lattices ΛS . The following theorem develops one of two bounds presented
in this paper.
Theorem 1. (Determinant Bound) Given a mother lattice ΛV and orthonormal basis Φ, let
β # and ∆# be respectively, the geometric mean of the volume contraction ratios and packing
densities of the child lattices ΛS , taken over all k-subsets of {1, 2, . . . , n}. Then
(β # ∆(ΛV ))2 ≤
(∆# )2
.
n
(4)
k
Equality holds if and only if all child lattices have equal determinants.
Proof. The packing densities of the mother lattice ΛV and child lattice ΛS are related by the
following identity
∆2 (ΛV )βS2 = ∆2 (ΛS )
det ΛS
.
det ΛV
(5)
5
Compute the geometric mean of both sides over the collection of k-subsets S to get
! 1
Y det ΛS (nk)
∆2 (ΛV )(β # )2 = (∆# )2
.
det ΛV
S
(6)
From the arithmetic-geometric mean inequality it follows that
∆2 (ΛV )(β # )2 ≤ (∆# )2
1
n
k
X det ΛS
det ΛV
S
!
(7)
and equality holds if and only if det ΛS is a constant with respect to S. However
X
X
det ΛS =
det(G(ΛS ))
S
S
=
X
det((ΦS V )t ) det(ΦS V )
S
(a)
= det((ΦV )t (ΦV ))
= det ΛV ,
(8)
where in (a) we have used the Cauchy-Binet formula, see e.g. [7]. The remainder of the proof
follows directly.
The following corollary is immediate.
Corollary 1. Given mother lattice ΛV and orthonormal basis Φ, let βmin be the minimum volume
contraction ratio of the child lattices ΛS , taken over all k-subsets of {1, 2, . . . , n}, and ∆# the
geometric mean of the packing densities. Then
(βmin ∆(ΛV ))2 ≤
(∆# )2
∆k (opt)2
.
≤
n
n
k
(9)
k
Equality holds in the left inequality iff all the contraction ratios are equal and all the child lattices
have equal determinants. Equality holds in the right inequality iff all child lattices achieve the
optimal packing density in dimension k.
B. Trace Upper Bound
Theorem 2. (Trace Bound) For an (n, k) code, the compaction ratio is bounded as
2/k
2/k
βmin ≤ βS
≤
k
.
n
(10)
Equality holds if the shortest vector of each lattice ΛS is the image of the shortest vector in ΛV .
6
Proof. Upon summing over all k-subsets S we obtain
X
X
G(ΛS ) =
V t ΦtS ΦS V
S
S
!
X
= Vt
ΦtS ΦS
V
S
n−1 t
Φ ΦV
= V
k−1
n−1
=
V t V.
k−1
t
(11)
By definition the smallest packing radius of any child lattice ρmin satisfies
ρmin 2 ≤ ρ(ΛS )2 ≤ (1/2)ut G(ΛS )u
(12)
for any non-zero u ∈ Zk and any k-subset S. Upon averaging over subsets S we obtain the
upper bound
ρ(ΛS )2 ≤
=
2
1 X t
u G(ΛS )u
n
k
S
1
2
t
n u
k
X
G(ΛS )u
S
n−1
k−1
2 nk
=
ut G(ΛV )u.
(13)
Equality holds if ΦS V u is the shortest vector in ΛS for all S. Thus
ρ(ΛS )2 ≤
k 2
ρ (ΛV )
n
(14)
and (10) follows immediately.
IV. A NALYSIS OF S OME (4, k) C ODES
We construct Φ for n = 4 for various values of k and various mother lattices ΛV . Numerical
2/k
results for the (4, k), k = 2, 3 are presented in Fig. 2, in which βmin is plotted as a function of
the packing density of the mother lattice. We have plotted the determinant bound using both the
optimal and the cubic lattice for the child lattices. We have also plotted the trace bound.
Observe that in the (n, k) = (4, 2) case there is a significant gap between the best possible
construction and the upper bounds. In the (4, 3) case performance close to the determinant bound
is achieved by setting the mother lattice to be the cubic lattice. Also with the cubic lattice as
the mother lattice, since the trace bound is lower than the determinant bound, this is proof that
7
it is impossible to simultaneously achieve the packing density of D3 when the mother lattice is
the cubic lattice.
(n,k)=(4,3)
(n,k)=(4,2)
0.9
0.6
0.8
0.5
0.7
0.6
- 2/k
min
- 2/k
min
0.4
0.3
0.5
0.4
0.3
0.2
Det. Bound (A 2)
Trace Bound
Det Bound (Z 2)
$ M=Z2
0.1
$ M=A2
Lower Bound
0
0.75
0.8
0.85
0.9
0.2
Det. Bound: D 3
Trace Bound
0.1
Det Bound: Z 3
Z3
FCC
BCC
0
0.4
0.95
0.5
0.6
"($ M)
0.7
0.8
"($ M)
Fig. 2. Bounds on β 2/k and values obtained from the construction.
4
4
4
2
2
2
0
0
0
-2
-2
-2
-4
-5
0
5
-4
-5
0
5
-4
-5
4
4
4
2
2
2
0
0
0
-2
-2
-2
-4
-5
0
5
-4
-5
0
5
0
0.5
0
-4
-5
5
0
5
(a)
2
1.5
1
0.5
0
-0.5
-1
-1.5
-2
-2
-1.5
-1
-0.5
1
1.5
2
(b)
Fig. 3. Packings (a) and the star body (b) derived from noise spheres in each of the six subspaces for the (4, 2) code with
ΛV = Z2 as described in the text.
Geometrically, the ability to decode correctly, post-erasure, with iid Gaussian noise is determined by the largest noise sphere which can be packed by the projected (or child) lattice ΛS
in each of the nk subspaces CS . When each noise sphere is projected back onto the subspace
spanned by the columns of Φ, a noise sphere is transformed into an ellipsoid. To see this
consider the noise sphere kxk2 ≤ r2 in subspace CS . Setting x = ΦS y, this leads to the noise
8
ellipsoid BS (r) = {y : kΦS yk2 ≤ r2 }. The packing of the noise ellipsoids BS (ρmin ), (recall
that ρmin is the half the length of the shortest non-zero vector in ΛS ), by the mother lattice
ΛV is shown in the six panels in Fig. 3(a), one for each subspace. Fig. 3(b) shows the star
S
body B(2ρmin ) = s∈S BS (2ρmin ). This illustrates the star body, which is the union of the six
ellipses, two of which are circles. Also shown are points of the lattice ΛV , which in this case
is an admissible lattice for this star body and illustrates also the interpretation as a code design
problem for the compound channel. Observe that Z2 is simultaneously good as a packing for
all six erasure configurations, i.e. for each of the bodies BS (ρmin ). Also, ΛV is simultaneously
critical for five of the six bodies BS (2ρmin ) (notice that one circle does not touch any of the
lattice points).
A. (4, 1)
t
2
= 1/4. The trace and determinant upper
Let Φ = a a a a , a = 1/2. We obtain βmin
bounds yield β 2 ≤ 1/4. Hence this construction is optimal.
B. (4, 2)
√
With ΛV = Z2 , a = 1/ 3 and
Φt =
a
a
a 0
a −a 0 a
we obtain six child lattices with Gram matrices
2a2 0
a2 a2
,
,
2
2
2
0 2a
a 2a
2
2
2
2
a
−a
2a −a
,
,
2
2
2
2
−a 2a
−a
a
2a2 a2
a
2
a2
0
2
,
a
0
.
2
a
(15)
All six child lattices are similar to Z2 . The first one has shortest vector of square length 2a2 and all
2/k
the others have square length a2 . This code achieves βmin = βmin = 1/3, β # = 21/6 /3 = 0.374.
The trace upper bound is βmin ≤ βS ≤ 1/2, regardless of the mother lattice, while the
√
√
determinant upper bound depends on the mother lattice, and is 2/3 and 1/ 6 = 0.408 for
ΛV = A2 (hexagonal lattice) and Z2 , resp. Thus the determinant bound is tighter than the trace
bound but greater than 1/3. This construction does not meet the trace bound or the determinant
9
bound with equality. However with ΛV = Z2 , the following theorem shows this to be the best
βmin possible. A computer-based search has also failed to reveal any improvements in β # .
Theorem 3. Let Φ be a 4 × 2 matrix with orthonormal columns, so that Φt Φ = I. Let ΛV = Z2 .
Let ΛS = ΦS ΛV where S is a 2-subset of {1, 2, 3, 4}. Let rS be the length of the shortest non-zero
vector in ΛS . Then rS2 ≤ 1/3 for at least one of the six 2-subsets of {1, 2, 3, 4}.
Proof. For this proof we will write
x 1 x2 x3 x4
.
Φt =
y1 y2 y3 y4
(16)
Our proof is by contradiction. Suppose that the shortest non-zero vector in all ΛS has square
length r2 > 1/3. Now x2i < r2 /2, in at most one position i, else the shortest would be smaller
than r2 . The same is true for yi2 , (xi − yi )2 and (xi + yi )2 . Hence there exists one position i,
w.l.o.g. i = 1, such that x2i ≥ r2 /2 , yi2 ≥ r2 /2 and (xi − yi )2 ≥ r2 /2. It follows that (i) x1 and
y1 must be of opposite sign and (ii) (x1 + y1 )2 < r2 /2. (i) is true because if x1 , y1 are of the
√
√
√
same sign and (x1 − y1 )2 ≥ r2 /2 then |y1 | ≥ 2r or |x1 | ≥ 2r. Assuming x1 ≥ 2r, we have
r2 ≤ x23 + x24 = 1 − x21 − x22 ≤ 1 − 2r2 which contradicts the hypothesis that r2 > 1/3. (ii) is
√
√
true for if not |x1 | ≥ 2r or |y1 | ≥ 2r, and by the same argument as in (i) r2 cannot exceed
1/3.
Since (x1 + y1 )2 < r2 /2, it follows that (xi + yi )2 ≥ r2 /2 for positions i = 2, 3, 4. In at least
one of these positions, say i = 2, x2i ≥ r2 /2 and yi2 ≥ r2 /2. Now x2 and y2 must be of the
same sign and (x2 − y2 )2 < r2 /2, by a proof similar to that used before. Thus for i = 3, 4,
(xi + yi )2 ≥ r2 /2 and (xi − yi )2 ≥ r2 /2. Again for i = 3, 4, both x2i ≥ r2 /2 and yi2 ≥ r2 /2
cannot hold for the same i, hence, either x23 < r2 /2, y32 ≥ r2 /2 and x24 ≥ r2 /2, y42 < r2 /2 or
x24 < r2 /2, y42 ≥ r2 /2 and x23 ≥ r2 /2, y32 < r2 /2. We assume the first case. The proof for the
other case is similar.
We have already proved that x2 and y2 are of the same sign. Assume they are both positive
(if not reverse signs of all elements of Φ). Further, assume that y2 > x2 and consider positions
i = 2, 3 (if x2 > y2 , then the same proof applies but for positions i = 2, 4). We now break up
the proof into two cases:
Case 1: (y3 and x3 of the same sign): Either (a) (y3 − x3 )2 ≥ r2 or (b) (y3 − x3 )2 < r2 . If (a)
then |y3 | − |x3 | ≥ r and y22 + y32 ≥ x22 + (|x3 | + r)2 ≥ x22 + x23 + r2 ≥ 2r2 . Thus r2 ≤ y12 + y42 =
10
1 − (y22 + y33 ) ≤ 1 − 2r2 , hence r2 ≤ 1/3.
If (b) then let (y3 − x3 )2 = r2 − 2 (0 < 2 ≤ r2 /2). Thus |y3 | − |x3 | =
√
r2 − 2 and it follows
that y32 ≥ x23 + r2 − 2 . Since (y2 − x2 )2 + (y3 − x3 )2 ≥ r2 it follows that (y2 − x2 )2 ≥ 2 and
that |y2 | ≥ |x2 | + and thus y22 ≥ x22 + 2 . Thus y22 + y32 ≥ x22 + 2 + x23 + r2 − 2
= x22 + x23 + r2 ≥ 2r2 . But then r2 ≤ y12 + y42 = 1 − (y22 + y32 ) ≤ 1 − 2r2 and it follows that
r2 ≤ 1/3.
Case 2: (y3 and x3 are of opposite signs): Either (a) (y3 + x3 )2 ≥ r2 or (b) (y3 + x3 )2 < r2 .
If (a) then |y3 + x3 | ≥ r and due to opposite signs |y3 | − |x3 | ≥ r from which y32 ≥ x23 + r2 . It
follows that y22 + y32 ≥ x22 + x23 + r2 ≥ 2r2 . This implies r2 ≤ 1/3. If (b) then let
(y3 + x3 )2 = r2 − 2 . Since y3 and x3 have opposite signs (|y3 | − |x3 |)2 = r2 − 2 and thus
y32 ≥ x23 + r2 − 2 . Since (y2 + x2 )2 + (y3 + x3 )2 ≥ r2 it follows that (y2 + x2 )2 + r2 − 2 ≥ r2
and hence (y2 + x2 )2 ≥ 2 which implies that y22 ≥ x22 + 2 . Thus
y22 + y32 ≥ x22 + 2 + x23 + r2 − 2 ≥ 2r2 . Once again this means r2 ≤ 1/3.
C. (4, 3)
It is checked by direct evaluation that for
a
t
Φ = a
a
the four child lattices have Gram matrices
1/4
3/4 1/4
1/4 3/4 −1/4 ,
1/4 −1/4 3/4
3/4 1/4 −1/4
1/4 3/4 1/4
−1/4 1/4 3/4
a = 1/2, ΛV = Z3 and
−a −a −a
a −a a ,
−a a
a
(17)
3/4 −1/4 1/4
−1/4 3/4 1/4
1/4
1/4 3/4
3/4 −1/4 −1/4
−1/4 3/4 −1/4 .
−1/4 −1/4 3/4
√
All child lattices are congruent to the body-centered cubic lattice with packing density π 3/8.
This construction achieves the trace bound r2 = 3/4 and is optimal.
11
Set ΛV to be the face-centered cubic lattice, the densest lattice packing in R3 , packing density
√
∆V = π/ 18 = 0.7408, so that det ΛV = 4,
0
−1 1
2 0 1
V = −1 −1 1 , G(ΛV ) = 0 2 1 .
(18)
0
0 −1
1 1 2
With Φ such that
1
0 0
2/3 1 1
Φ∗V =
,
−2/3 1 0
1/3 0 1
(19)
we get β 2/k = (ρmin /ρ(ΛV ))2 = 0.5. Each child lattice has unit determinant and shortest vector
of length equal to unity. All four child lattices have an identical packing density of π/6, which
is the packing density of the cubic lattice Z3 . Also observe that the determinant bound (4) holds
with equality, which implies that the selected Φ is optimal among rotations for which all child
lattices achieve the packing density of the cubic lattice.
√
Let ΛV the body-centered cubic lattice, ∆(ΛV ) = π 3/8, ρ2 = 3/4,
3 −1 −1
1 −1 1
V = −1 1 1 , G(ΛV ) = −1 3 −1 .
−1 −1 3
−1 −1 1
√
√
Upon setting a = 1/2, b = a + a, c = a − a and
√
a a
0
−c c
b
√
0
−c b
a − a
c
Φ=
,Φ ∗ V =
√
0
−b c
a
a
b
√
a −a
0
b −b −c
we obtain child lattices with Gram matrices given in terms of d = 2c2 + b2 , e = 2b2 + c2 ,
f = −c2 − 2bc, g = c2 + 2bc, h = 3bc, i = −b2 − 2bc,
d f −g d −g f
f d h , −g e −h ,
−g h e
f −h d
12
e
−g
i
−g d −h ,
i −h e
e
−i
−g
−i e −h .
−g −h d
(20)
√
Each child lattice has determinant 4 and shortest vector with square length d = (9/4 − 1/ 2) =
1.5429 and packing density ∆ = πd3/2 /12 = 0.5017. This results in β 2/k = d/3 = 0.5143.
V. S UMMARY
We considered the design of power-constrained rank-k lattice codes in Rn , with the property
that error free recovery is possible for any k code symbols and the minimum distance for each of
the nk lattices obtained by projection onto k-dim subspaces spanned by k coordinate vectors are
bounded from below. The potential application is to erasure channels with additive noise. Bounds
on the performance are derived and the performance of specific constructions are investigated
for n = 4.
R EFERENCES
[1] D. Blackwell, L. Breiman, and A. Thomasian. The capacity of a class of channels. The Annals of Mathematical Statistics,
pages 1229–1241, 1959.
[2] J. W. S. Cassels. An introduction to the geometry of numbers. Springer Science & Business Media, 2012.
[3] J. H. Conway and N. J. A. Sloane. Sphere Packings, Lattices and Groups. Springer-Verlag, New York, 1988.
[4] T. A. Courtade and R. D. Wesel. Optimal allocation of redundancy between packet-level erasure coding and physical-layer
channel coding in fading channels. IEEE Transactions on Communications, 59(8):2101–2109, 2011.
[5] I. Csiszár and P. Narayan. Capacity of the gaussian arbitrarily varying channel. IEEE Transactions on Information Theory,
37(1):18–26, 1991.
[6] P. M. Gruber and C. G. Lekkerkerker. Geometry of numbers. North-Holland, 1987.
[7] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge university press, 2012.
[8] A. Ozçelikkale, S. Yuksel, and H. M. Ozaktas. Unitary precoding and basis dependency of mmse performance for gaussian
erasure channels. Information Theory, IEEE Transactions on, 60(11):7186–7203, 2014.
| 7 |
Standard detectors aren’t (currently) fooled by physical adversarial stop signs
arXiv:1710.03337v2 [] 26 Oct 2017
Jiajun Lu, Hussein Sibai, Evan Fabry, David Forsyth
University of Illinois at Urbana Champaign
{jlu23, sibai2, efabry2, daf}@illinois.edu
Abstract
classified) or not [20, 8, 21, 12, 2, 4]. Current procedures to
build adversarial examples for deep networks appear to subvert the feature construction implemented by the network to
produce odd patterns of activation in late stage RELU’s; this
can be exploited to build one form of defence [10]. There
is some evidence that other feature constructions admit adversarial attacks, too [12]. The success of these attacks can
be seen as a warning not to use very highly non-linear feature constructions without having strong mathematical constraints on what these constructions can do; but taking that
position means one cannot use methods that are largely accurate and effective.
An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. If adversarial examples existed that could
fool a detector, they could be used to (for example) wreak
havoc on roads populated with smart vehicles. Recently, we
described our difficulties creating physical adversarial stop
signs that fool a detector. More recently, Evtimov et al. produced a physical adversarial stop sign that fools a proxy
model of a detector. In this paper, we show that these physical adversarial stop signs do not fool two standard detectors
(YOLO and Faster RCNN) in standard configuration. Evtimov et al.’s construction relies on a crop of the image to the
stop sign; this crop is then resized and presented to a classifier. We argue that the cropping and resizing procedure
largely eliminates the effects of rescaling and of view angle. Whether an adversarial attack is robust under rescaling and change of view direction remains moot. We argue
that attacking a classifier is very different from attacking a
detector, and that the structure of detectors – which must
search for their own bounding box, and which cannot estimate that box very accurately – likely makes it difficult to
make adversarial patterns. Finally, an adversarial pattern
on a physical object that could fool a detector would have
to be adversarial in the face of a wide family of parametric
distortions (scale; view angle; box shift inside the detector;
illumination; and so on). Such a pattern would be of great
theoretical and practical interest. There is currently no evidence that such patterns exist.
It is important to distinguish between a classifier and a
detector to understand the current state of the art. A classifier accepts an image and produces a label. Classifiers
are scored on accuracy. A detector, like FasterRCNN [18],
identifies image boxes that are “worth labelling”, and then
generates labels (which might include background) for
each. The final label generation step employs a classifier.
However, the statistics of how boxes span objects in a detector are complex and poorly understood. Some modern
detectors like YOLO 9000 [17] predict boxes and labels
using features on a fixed grid, resulting in fairly complex
sampling patterns in the space of boxes, and meaning that
pixels outside a box may participate in labelling that box.
One cannot have too many boxes, because too many boxes
means much redundancy; worse, it imposes heavy demands
on the accuracy of the classifier. Too few boxes chances
missing objects. Detectors are scored on a composite score,
taking into account both the accuracy with which the detector labels the box and the accuracy of the placement of the
box.
1. Introduction
It is usual to attack classifiers, and all the attacks of
which we are aware are attacks on classifiers. However,
for many applications, classifiers are not themselves useful. Road signs are a good example. A road sign classifier
would be applied to images that consist largely of road sign
(e.g. those of [22]). But there is little application need for
a road-sign classifier except as a component of a road sign
detector, because one doesn’t usually have to deal with images that consist largely of road sign. Instead, one deals
An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. Adversarial examples are of interest only
because the adjustments required seem to be very small and
are easy to obtain [23, 7, 5]. Numerous search procedures
generate adversarial examples [14, 16, 15]. There is fair evidence that it is hard to tell whether an example is adversarial
(and so (a) evidence of an attack and (b) likely to be mis1
with images that contain many things, and must find and
label the road sign. It is quite natural to study road sign
classifiers (e.g. [19]) because image classification remains
difficult and academic studies of feature constructions are
important. But there is no particular threat posed by an attack on a road sign classifier. An attack on a road sign detector is an entirely different matter. For example, imagine
possessing a template that, with a can of spray paint, could
ensure that a detector read a stop sign as a yield sign (or
worse!). As a result, it is important to know whether (a)
such examples could exist and (b) how robust their adversarial property is in practice.
Printing adversarial images then photographing them can
retain their adversarial property [9, 1], which suggests adversarial examples might exist in the physical world. Adversarial examples in the physical world could cause a great
deal of mischief. In earlier work, we showed that it was
difficult to build physical examples that fooled a stop-sign
detector [11]. In particular, if one actually takes video of
adversarial stop-signs out of doors, the adversarial pattern
does not appear to affect the performance of the detector by
much. We speculated that this might be because adversarial
patterns were disrupted by being viewed at different scales,
rotations, and orientations. This generated some discussion.
OpenAI demonstrated a search procedure that could produce an image of a cat that was misclassified when viewed
at multiple scales [1]. There is some blurring of the fur texture on the cat, but this would likely be imperceptible to
most observers. OpenAI also demonstrated a search procedure that could produce an image of a cat that was misclassified when viewed at multiple scales and orientations [1].
However, there are significant visible artifacts on that image; few would feel that it had not obviously been tampered
with.
Recently, Evtimov et al. have demonstrated several
physical stop-signs that are misclassified [3]. Their attack
is demonstrated on stop-signs that are cropped from images
and presented to a classifier. By cropping, they have proxied the box-prediction process in a detector; however, their
attack is not intended as an attack on a detector (the paper
does not use the word “detector”, for example). In this paper, we show that standard off-the-shelf detectors that have
not seen adversarial examples in training detect their stop
signs rather well, under a variety of conditions. We explain
(a) why their result is puzzling; (b) why their result may
have to do with specific details of their pipeline model, particularly the classifier construction and (c) why the distinction between a classifier and a detector means their work has
not put the core issue – can one build physical adversarial
stop-signs? – to rest.
2. Experimental Results
Evtimov et. al have demonstrated a construction of physical adversarial stop signs [3]. They demonstrate poster attacks (the stop sign is covered with a poster that looks like
a faded stop sign) and sticker attacks (the attacker makes
stickers placed on particular locations on a stop sign), and
conclude that one can make physical adversarial stop signs.
There are two types of tests: stationary tests, where the sign
is imaged from a variety of orientations and directions; and
drive-by tests, where the sign is viewed from a camera based
on a car.
We obtained two standard detectors (the MS-COCO pretrained standard YOLO [17]; Faster RCNN [18], pretrained
version available on github) and applied them to the images
and videos from their paper. First, we applied both detectors
on the images shown in the paper (reproduced as Figure 1
for reference). All adversarial stop-signs are detected by
both detectors (Figure 2 and Figure 3).
We downloaded videos provided by the authors
at
https://iotsecurity.eecs.umich.edu/
#roadsigns, and applied the detectors to those videos.
We find:
• YOLO detects the adversarial stop signs produced by
poster attacks about as well as the true stop signs (figure 4, and the videos we provide at https://www.
youtube.com/watch?v=EfbonX1lE5s);
• YOLO detects the adversarial stop signs produced by
sticker attacks about as well as the true stop signs (figure 5, and the videos we provide at https://www.
youtube.com/watch?v=GOjNKQtFs64);
• Faster RCNN detects the adversarial stop signs
produced by poster attacks about as well as the
true stop signs (figure 6, and the videos we provide at https://www.youtube.com/watch?
v=x53ZUROX1q4);
• Faster RCNN detects the adversarial stop signs
produced by sticker attacks about as well as the
true stop signs (figure 7, and the videos we provide at https://www.youtube.com/watch?
v=p7wwvWdn2pA);
• Faster RCNN detects stop signs rather more accurately
than YOLO;
• both YOLO and Faster RCNN detect small stop signs
less accurately; as the sign shrinks in the image, YOLO
fails significantly earlier than Faster RCNN.
These effects are so strong that there is no point in significance testing, etc.
Distance/Angle
Subtle Poster
Camoutflage Graffiti
100%
66.67%
Camoutflage Art
(LISA-CNN)
Camoutflage Art
(GTSRB*-CNN)
5’ 0°
5’ 15°
10’ 0°
10’ 30°
40’ 0°
Targeted-Attack Success
100%
80%
Figure 1: Table IV of [3], reproduced for the readers’ convenience. This table shows figures of different adversarial constructions, from different distances and viewed at different angles.
Video can be found at:
• https://www.youtube.com/watch?v=
afIr6_cvoqY (YOLO; poster);
• https://www.youtube.com/watch?v=
nCcoJBQ8C3c (YOLO; sticker);
• https://www.youtube.com/watch?v=
10DDFs73_6M (FasterRCNN; poster);
• https://www.youtube.com/watch?v=
rqLhTZZ0U2w) (YOLO; poster);
• https://www.youtube.com/watch?v=
KQyzQtuyZxc (FasterRCNN; poster);
• https://www.youtube.com/watch?v=
Ep-aE8T3Igs (YOLO; sticker);
• https://www.youtube.com/watch?v=
FRDyz7tDVdM (FasterRCNN; sticker);
Distance/Angle
Subtle Poster
Camoutflage Graffiti
Camoutflage Art
(LISA-CNN)
Camoutflage Art
(GTSRB*-CNN)
5’ 0°
5’ 15°
10’ 0°
10’ 30°
40’ 0°
Targeted-Attack Success
0%
0%
0%
0%
Figure 2: YOLO detection results on the stop signs of figure 1.
• https://www.youtube.com/watch?v=
F-iefz8jGQg (FasterRCNN; sticker).
At our request, the authors kindly provided full resolution versions of the videos at https://iotsecurity.
eecs.umich.edu/#roadsigns. We applied YOLO
and Faster RCNN detectors to those videos. We find:
• YOLO detects the adversarial stop signs produced by
poster attacks well (figure 8);
• YOLO detects the adversarial stop signs produced by
sticker attacks (figure 9);
• Faster RCNN detects the adversarial stop signs produced by poster attacks very well (figure 10);
• Faster RCNN detects the adversarial stop signs produced by sticker attacks very well (figure 11);
• Faster RCNN detects stop signs rather more accurately
than YOLO;
• YOLO works better on higher resolution video;
• Faster RCNN detect even far and small stop signs accurately.
Distance/Angle
Subtle Poster
Camoutflage Graffiti
Camoutflage Art
(LISA-CNN)
Camoutflage Art
(GTSRB*-CNN)
0%
0%
0%
0%
5’ 0°
5’ 15°
10’ 0°
10’ 30°
40’ 0°
Targeted-Attack Success
Figure 3: Faster RCNN detection results on the stop signs of figure 1.
These effects are so strong that there is no point in significance testing, etc.
3. Classifiers and Detectors are Very Different
Systems
The details of the system attacked are important in assessing the threat posed by Evtimov et al.’s stop signs. Their
process is: acquire image (or video frame); crop to sign;
then classify that box. This process is seen in earlier road
sign literature, including [22, 19]. The attack is on the clas-
sifier. There are two classifiers, distinguished by architecture and training details. LISA-CNN consists of three convolutional layers followed by a fully connected layer ( [3],
p5, c1), trained to classify signs into 17 classes ( [3], p4, c2),
using the LISA dataset of US road signs [13]. The other is
a publicly available implementation (from [24]) of a classifier demonstrated to work well at road signs (in [19]); this
is trained on the German Traffic Sign Recognition Benchmark ([22]), with US stop signs added. Both classifiers are
accurate ( [3], p5, c1). Each classifier is applied to 32 × 32
images ( [3], p4, c2). However, in both stationary and drive
Figure 4: In relatively low resolution, YOLO detects printed poster physical adversarial stop sign and real stop sign similarly.
Figure 5: In relatively low resolution, YOLO detects sticker physical adversarial stop sign and real stop sign similarly.
by tests, the image is cropped and resized ( [3], p8, c2).
An attack on a road sign classifier is of no particular interest in and of itself, because no application requires classification of close cropped images of road signs. An attack
on a road sign detector is an entirely different matter. We
interpret Evtimov et al.’s pipeline as a proxy model of a
detection system, where the cropping procedure spoofs the
process in a detector that produces bounding boxes. This is
our interpretation of the paper, but it is not an unreasonable
interpretation; for example, table VII of [3] shows boxes
placed over small road signs in large images, which suggests authors have some form of detection process in mind.
We speculate that several features of this proxy model make
it a relatively poor model of a modern detection system.
These features also make the classifier that labels boxes relatively vulnerable to adversarial constructions.
The key feature of detection systems is that they tend not
to get the boxes exactly right (for example, look at the boxes
in Figure 12), because it is extremely difficult to do. Localization of boxes is measured using the intersection over
Figure 6: In relatively low resolution, Faster RCNN detects printed poster physical adversarial stop sign and real stop sign
similarly.
Figure 7: In relatively low resolution, Faster RCNN detects sticker physical adversarial stop sign and real stop sign similarly.
union score; one computes AI /AU , where AI is the area
of intersection between predicted and true box, and AU is
the area of the union of these boxes. For example, YOLO
has a mean Average Precision of 78.6% at an IOU score of
.5 – this means that only boxes with IOU with respect to
ground truth of .5 or greater are counted as a true detection. Even with very strong modern detectors, scores fall
fast with increasing IOU threshold. How detection systems
predict boxes depends somewhat on the architecture. Faster
RCNN predicts interesting boxes, then classifies them [18].
YOLO uses a grid of cells, where each cell uses features
computed from much of the image to predict boxes and labels near that cell, with confidence information [17]. One
should think of this architecture as an efficient way of predicting interesting boxes, then classifying them. All this
means that, in modern detector systems, boxes are not centered cleanly on objects. We are not aware of any literature
on the statistics of box locations with respect to a root coor-
Figure 8: In higher resolution video, YOLO detects printed poster physical adversarial stop sign well. YOLO works better
on higher resolution than lower resolution video.
Figure 9: In higher resolution video, YOLO detects sticker physical adversarial stop sign well. YOLO works better on higher
resolution than lower resolution video.
dinate system for the detected object.
There are several reasons that Evtimov et al.’s attack on
a classifier makes a poor proxy of a detection system.
Close cropping can remove scale and translation effects: The details of the crop and resize procedure are not
revealed in [3]. However, these details matter. We believe
their results are most easily explained by assuming the sign
was cropped reasonably accurately to its bounding box, then
resized (Table VII of [3], shown for the reader’s convenience here as Figure 12). If the sign is cropped reasonably accurately to its bounding box, then resized, the visual
effects of slant and scale are largely removed. In particular, isotropic resizing removes effects of scale other than
loss of spatial precision in the sampling grid This means the
claim that the adversarial construction is invariant to slant
and scale is moot. Close cropping is not a feature of modern
detection systems, and would make the proxy model poor.
Low resolution boxes: Almost every pixel in an accurately cropped box will testify to the presence of a stop sign.
Thus, very low resolution boxes may mean that fewer pixels
need to be modified to confuse the underlying classifier. In
contrast to the 32x32 boxes of [3], YOLO uses a 7x7 grid
on a 448x448 dimension image; each grid cell predicts pre-
dict bounding box extents and labels. This means that each
prediction in YOLO observes at least 64x64 pixels. The relatively low resolution of the classifier used makes the proxy
model poor.
Cropping and variance: Detection systems like YOLO
or Faster RCNN cannot currently produce accurate bounding boxes. Producing very accurate boxes requires searching a larger space of boxes, and so creates problems with
false positives. While there are post-processing methods to
improve boxes [6], this tension is fundamental (for example, see figure 2 and 3). In turn, this means that the classification procedure within the detector must cope with a
range of shifts between box and object. We speculate that,
in a detection system, this could serve to disrupt adversarial patterns, because the pattern might be presented to the
classification process inside the detector in a variety of locations relative to the bounding box. In other words, the
adversarial property of the pattern would need to be robust
to shifts and rescales within the box. At the very least, this
effect means that one cannot draw conclusions from the experiments of [3].
Cropping and context: The relatively high variance of
bounding boxes around objects in detector systems has an-
Figure 10: In higher resolution video, Faster RCNN detects printed poster physical adversarial stop sign very well.
Figure 11: In higher resolution video, Faster RCNN detects sticker physical adversarial stop sign very well.
other effect. The detector system sees object context information that may have been hidden in the proxy model. For
example, cells in YOLO do not distinguish between pixels
covered by a box and others when deciding (a) where the
box is and (b) what is in it. While the value of this information remains moot, its absence means the proxy model is a
poor model.
4. Discussion
We do not claim that detectors are necessarily immune
to physical adversarial examples. Instead, we claim that
there is no evidence as of writing that a physical adversarial
example can be constructed that fools a detector. In earlier
work, we said we had not produced such examples. The
main point of this paper is to point out that others have not,
too; and that fooling a detector is a very different business
from fooling a classifier.
There is a tension between the test-time accuracy of a
classifier, and the ability to construct adversarial examples
that are “like” and “close to” real images but are misclassified. In particular, if there are lots of such things, why is
the classifier accurate on test? How does the test procedure
“know” not to produce adversarial examples? The usual,
and natural, explanation is that the measure of the space
of adversarial examples A under the distribution of images
P (I) is “small”. Notice that A is interesting only if P (A)
is small and for most u ∈ A, P (u) is “big” (i.e. there is not
much point in an adversarial example that doesn’t look like
an image) and there there is at least some of A “far” from
true classifier boundaries (i.e. there is not much point in replacing a stop sign with a yield sign, then complaining it is
mislabelled). This means that A must have small volume,
too. If A has small volume, but it is easy for an optimization
process to find an adversarial example close to any particular example, then there must also be a piece of A quite close
to most examples (one can think of “bubbles” or “tubes” of
bad labels threading through the space of images). In this
view, Evtimov et al.’s paper presents an important puzzle.
If one can construct an adversarial pattern that remains adversarial for a three dimensional range of views (two angles
and a scale), this implies that close to any particular pattern
there is a three parameter “sheet” inside A – but how does
the network know to organize its errors into a form that is
consistent with nuisance viewing parameters?
One answer is that it is trained to do so because it is
trained on different views of objects, meaning that A has internal structure learned from training examples. While this
can’t be disproved, it certainly hasn’t been proved. This
answer would imply that, in some way, the architecture of
the network can generalize across viewing parameters better
Figure 12: Table VII of [3], reproduced for the reader’s convenience. Note the close crops of stop signs, shown as yellow
boxes. The whole image could not be passed to a stop sign classifier; therefore, some form of box must be produced. In
that paper, the box is produced by cropping and resizing the crop to a standard size. In the text, we argue that this cropping
suppresses the effects of scale and slant. However, it is a poor model of the boxes produced by modern detectors, because it
is placed accurately round the sign.
than it generalizes across labels (after all, the existence of
an adversarial example is a failure to generalize labels correctly). Believing this requires fairly compelling evidence.
Ockham’s razor suggests another answer: Evtimov et al.,
by cropping closely to the stop sign, removed most of the
effect of slant and scale, and so the issue does not arise.
Whether physical adversarial examples exist that fool a
detector is a question of the first importance. Here are quite
good reasons they might not. An adversarial pattern on a
physical object that could fool a detector would have to be
adversarial in the face of a wide family of parametric distortions (scale; view angle; box shift inside the detector;
illumination; and so on). While it is quite possible that the
box created by the detector reduces the effects of view angle
and scaling, at least for plane objects, the box shift is an important effect. There is no evidence that adversarial patterns
exist that can fool a detector. Finding such patterns (or disproving their existence) is an important technical challenge.
More likely to exist, but significantly less of a nuisance, is
a pattern that, viewed under the right circumstances (and so
just occasionally) would fool a detector.
Acknowledgements
We are particularly grateful to Ivan Evtimov, Kevin
Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li,
Atul Prakash, Amir Rahmati, and Dawn Song, the authors
of [3], who have generously shared data and have made
comments on this manuscript which lead to improvements.
References
[1] A. Athalye and I. Sutskever. Synthesizing robust adversarial
examples. arXiv preprint arXiv:1707.07397, 2017. 2
[2] N. Carlini and D. Wagner. Defensive distillation is not robust
to adversarial examples. 1
[3] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li,
A. Prakash, A. Rahmati, and D. Song. Robust physicalworld attacks on machine learning models. arXiv preprint
arXiv:1707.08945, 2017. 2, 3, 5, 6, 8, 10
[4] A. Fawzi, O. Fawzi, and P. Frossard. Analysis of classifiers’ robustness to adversarial perturbations. arXiv preprint
arXiv:1502.02590, 2015. 1
[5] A. Fawzi, S. Moosavi-Dezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. CoRR,
abs/1608.08967, 2016. 1
[6] S. Gidaris and N. Komodakis. Locnet: Improving localization accuracy for object detection. arXiv preprint
arXiv:1511.07763, 2015. 8
[7] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint
arXiv:1412.6572, 2014. 1
[8] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. CoRR, abs/1412.5068,
2014. 1
[9] A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2016.
2
[10] J. Lu, T. Issaranon, and D. Forsyth. Safetynet: Detecting
and rejecting adversarial examples robustly. arXiv preprint
arXiv:1704.00103, 2017. 1
[11] J. Lu, H. Sibai, E. Fabry, and D. Forsyth. No need to
worry about adversarial examples in object detection in au-
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
tonomous vehicles. arXiv preprint arXiv:1707.03501, 2017.
2
J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff.
On detecting adversarial perturbations. arXiv preprint
arXiv:1702.04267, 2017. 1
A. Mogelmose, M. M. Trivedi, and T. B. Moeslund. Visionbased traffic sign detection and analysis for intelligent driver
assistance systems: Perspectives and survey. IEEE Transactions on Intelligent Transportation Systems, 13(4):1484–
1497, 2012. 5
S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool:
a simple and accurate method to fool deep neural networks.
CoRR, abs/1511.04599, 2015. 1
S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and
P. Frossard. Universal adversarial perturbations. arXiv
preprint arXiv:1610.08401, 2016. 1
A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks
are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015. 1
J. Redmon and A. Farhadi. Yolo9000: better, faster, stronger.
arXiv preprint arXiv:1612.08242, 2016. 1, 2, 7
S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards
real-time object detection with region proposal networks. In
Advances in neural information processing systems, pages
91–99, 2015. 1, 2, 7
P. Sermanet and Y. LeCun. Traffic sign recognition with
multi-scale convolutional networks. In Neural Networks
(IJCNN), The 2011 International Joint Conference on, pages
2809–2813. IEEE, 2011. 2, 5
U. Shaham, Y. Yamada, and S. Negahban. Understanding adversarial training: Increasing local stability of neural nets through robust optimization.
arXiv preprint
arXiv:1511.05432, 2015. 1
M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-ofthe-art face recognition. In Proceedings of the 2016 ACM
SIGSAC Conference on Computer and Communications Security, CCS ’16, pages 1528–1540, New York, NY, USA,
2016. ACM. 1
J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man
vs. computer: Benchmarking machine learning algorithms
for traffic sign recognition. Neural networks, 32:323–332,
2012. 1, 5
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan,
I. Goodfellow, and R. Fergus. Intriguing properties of neural
networks. arXiv preprint arXiv:1312.6199, 2013. 1
V. Yadav. p2-trafficsigns. https://github.com/vxy10/p2TrafficSigns, 2016. 5
| 2 |
ARTINIAN LEVEL ALGEBRAS OF SOCLE DEGREE 4
arXiv:1701.03180v1 [] 11 Jan 2017
SHREEDEVI K. MASUTI AND M. E. ROSSI
A BSTRACT. In this paper we study the O-sequences of the local (or graded) K-algebras of socle degree 4. More precisely, we prove that an O-sequence h = (1, 3, h2 , h3 , h4 ), where h4 ≥ 2, is the h-vector
of a local level K-algebra if and only if h3 ≤ 3h4 . We also prove that h = (1, 3, h2 , h3 , 1) is the h-vector
of a local Gorenstein K-algebra if and only if h3 ≤ 3 and h2 ≤ (h32+1) + (3 − h3 ). In each of these cases
we give an effective method to construct a local level K-algebra with a given h-vector. Moreover we
refine a result by Elias and Rossi by showing that if h = (1, h1 , h2 , h3 , 1) is an unimodal Gorenstein
O-sequence, then h forces the corresponding Gorenstein K-algebra to be canonically graded if and
only if h1 = h3 and h2 = (h12+1), that is the h-vector is maximal.
1. I NTRODUCTION
Let ( A, m) be an Artinian local or graded K-algebra where K is an algebraically closed field of
characteristic zero. Let Soc( A) = (0 : m) be the socle of A. We denote by s the socle degree of
A, that is the maximum integer j such that m j 6= 0. The type of A is τ := dimK Soc( A). Recall
that A is said to be level of type τ if Soc( A) = ms and dimK ms = τ. If A has type 1, equivalently dimK Soc( A) = 1, then A is Gorenstein. In the literature local rings with low socle degree,
also called short local rings, have emerged as a testing ground for properties of infinite free resolutions (see [AIS08], [AHS13], [HS11], [Sjo79], [Roos05], [CRV01]). They have been also extensively studied in problems related to the irreducibility and the smoothness of the punctual Hilbert
scheme Hilbd (P nK ) parameterizing zero-dimensional subschemes P nK of degree d, see among others [Poo08], [EV08], [CEVV09], [CENR13]. In this paper we study the structure of level K-algebras
of socle degree 4, hence m5 = 0. One of the most significant information on the structure is given
by the Hilbert function.
By definition, the Hilbert function of A, hi = hi ( A) = dimK mi /mi+1 , is the Hilbert function of
the associated graded ring grm ( A) = ⊕i≥0 mi /mi+1 . We also say that h = (h0 , h1 , . . . , hs ) is the hvector of A. In [Mac27] Macaulay characterized the possible sequences of positive integers hi that
can occur as the Hilbert function of A. Since then there has been a great interest in commutative
algebra in determining the h-vectors that can occur as the Hilbert function of A with additional
properties (for example, complete intersection, Gorenstein, level, etc). A sequence of positive
h ii
integers h = (h0 , h1 , . . . , hs ) satisfying Macaulay’s criterion, that is h0 = 1 and hi+1 ≤ hi for
i = 1, . . . , s − 1, is called an O-sequence. A sequence h = (1, h1 , . . . , hs ) is said to be a level (resp.
Gorenstein) O-sequence if h is the Hilbert function of some Artinian level (resp. Gorenstein) Kalgebra A. Remark that h1 is the embedding dimension and, if A is level, hs is the type of A.
Notice that a level O-sequence is not necessarily the Hilbert function of an Artinian level graded
Date: January 13, 2017.
2010 Mathematics Subject Classification. Primary: 13H10; Secondary: 13H15, 14C05.
Key words and phrases. Macaulay’s inverse system, Hilbert functions, Artinian Gorenstein and level algebras, canonically graded algebras.
The first author was supported by INdAM COFOUND Fellowships cofounded by Marie Curie actions, Italy. The
second author was partially supported by PRIN 2014 “Geometria delle varieta’ algebriche”.
1
2
MASUTI AND ROSSI
K-algebra. This is because the Hilbert function of the level ring ( A, m) is the Hilbert funtion of
grm ( A) which is not necessarily level. From now on we say that h is a graded level O-sequence if h
is the Hilbert function of a level graded standard K-algebra. For instance it is well known that the
h-vector of a Gorenstein graded K-algebra is symmetric, but this is no longer true for a Gorenstein
local ring. Characterize level O-sequences is a wide open problem in commutative algebra. The
problem is difficult and very few is known even in the graded case as evidenced by [GHMY07].
In the following table we give a summary of known results:
Characterization of
Graded
Local
Gorenstein O-sequences with h1 = 2
[Sta78, Theorem 4.2]
[Ber09, Theorem 2.6]
level O-sequences with h1 = 2
[Iar84, Theorem 4.6A], [Iar04] [Ber09, Theorem 2.6]
Gorenstein O-sequences with h1 = 3
[Sta78, Theorem 4.2]
Open
Open. In [GHMY07] authors
level O-sequences with h1 = 3
gave a complete list with
and τ ≥ 2
s ≤ 5 or s = 6 and τ = 2
Open
level O-sequences with s ≤ 3
[Ste14], [GHMY07]
[Ste14, Theorem 4.3]
In this paper we fill the above table by characterizing the Gorenstein and level O-sequences with
a particular attention to socle degree 4 and embedding dimension h1 = 3.
In our setting we can write A = R/I where R = K [[ x1 , . . . , xr ]] (resp. K [ x1 , . . . , xr ]) is the formal
power series ring (resp. the polynomial ring with standard grading) and I an ideal of R. We
say that A is graded when it can be presented as R/I where I is an homogeneous ideal in R =
K [ x1 , . . . , xr ]. Without loss of generality we assume that h1 = dimK m/m2 = r.
Recall that the socle type of A = R/I is the sequence E = (0, e1 , . . . , es ), where
ei := dimK ((0 : m) ∩ mi )/(0 : m) ∩ mi+1 ).
It is known that for all i ≥ 0,
(1.1)
hi ≤ min{dimK Ri , ei dimK R0 + ei+1 dimK R1 + · · · + es dimK Rs−i }
(see [Iar84]). Hence a necessary condition for h to be a level O-sequence is that hs−1 ≤ h1 hs being
es = hs and ei = 0 otherwise. In the following theorem we prove that this condition is also
sufficient for h = (1, 3, h2 , h3 , h4 ) to be a level O-sequence, provided h4 ≥ 2. However, if h4 = 1,
we need an additional assumption for h to be a Gorenstein O-sequence. We remark that the result
can not be extended to h1 > 3 (see Example 3.7).
Theorem 1. Let h = (1, 3, h2 , h3 , h4 ) be an O-sequence.
(a) Let h4 = 1. Then h is a Gorenstein O-sequence if and only if h3 ≤ 3 and h2 ≤ (h32+1) + (3 −
h3 ) .
(b) Let h4 ≥ 2. Then h is a level O-sequence if and only if h3 ≤ 3h4 .
The proof of the above result is effective in the sense that in each case we construct a local level
K-algebra with a given h-vector verifying the necessary conditions (see Theorem 3.6).
Combining Theorem 1(b) and results in [GHMY07] we show that there are level O-sequences
which are not admissible in the graded case (see Section 3, Table 1). A similar behavior was
observed in [Ste14] for socle degree 3. Theorem 1(a) is a consequence of the following more general
result which holds for any embedding dimension:
3
Theorem 2.
(1.2)
(a) If (1, h1 , . . . , hs−2 , hs−1 , 1) is a Gorenstein O-sequence, then
hs −1 + 1
hs−1 ≤ h1 and hs−2 ≤
+ ( h1 − h s − 1 ) .
2
(b) If h = (1, h1 , h2 , h3 , 1) is an unimodal O-sequence satisfying (1.2), then h is a Gorenstein
O-sequence.
The bound (1.2) improves the bound for hs−2 given in (1.1).
If A is a Gorenstein local K-algebra with symmetric h-vector, then grm ( A) is Gorenstein, see
[Iar94]. It is a natural question to ask, in this case, whether A is analytically isomorphic to grm ( A).
Accordingly with the definition given in [Ems78, Page 408] and in [ER12], recall that an Artinian
local K-algebra ( A, m) is said to be canonically graded if there exists a K-algebra isomorphism between A and its associated graded ring grm ( A).
For instance J. Elias and M. E. Rossi in [ER12] proved that every Gorenstein K-algebra with symmetric h-vector and m4 = 0 (s ≤ 3) is canonically graded. A local K-algebra A of socle type E is said
to be compressed if equality holds in (1.1) for all 1 ≤ i ≤ s, equivalently the h-vector is maximal (see
[Iar84, Definition 2.3]). In [ER15, Theorem 3.1] Elias and Rossi proved that if A is any compressed
Gorenstein local K-algebra of socle degree s ≤ 4, then A is canonically graded. In Section 4 we
prove that if the socle degree is 4, then the assumption can not be relaxed. More precisely only the
maximal h-vector forces every corresponding Gorenstein K-algebra to be canonically graded. We
prove that if h is unimodal and not maximal, then there exists a Gorenstein K-algebra with Hilbert
function h which is not canonically graded. To prove that a local K-algebra is not canonically
graded is in general a very difficult task. See also [Jel16] for interesting discussions.
Theorem 3. Let h = (1, h1 , h2 , h3 , 1) be a Gorenstein O-sequence with h2 ≥ h1 . Then every local
Gorenstein K-algebra with Hilbert function h is necessarily canonically graded if and only if h1 =
h3 and h2 = (h12+1).
The main tool of the paper is Macaulay’s inverse system [Mac1916] which gives a one-to-one
correspondence between ideals I ⊆ R such that R/I is an Artinian local ring and finitely generated
R-submodules of a polynomial ring. In Section 2 we gather preliminary results needed for this
purpose. We prove Theorems 1 and 2 in Section 3, and Theorem 3 in Section 4.
We have used Singular [Singular], [Eli15] and CoCoA [CoCoA] for various computations and
examples.
A CKNOWLEDGEMENTS
We thank Juan Elias for providing us the updated version of I NVERSE - SYST. LIB for computations.
2.
PRELIMINARIES
2.1. Macaulay’s Inverse System. In this subsection we recall some results on Macaulay’s inverse
system which we will use in the subsequent sections. This theory is well-known in the literature,
especially in the graded setting (see for example [Mac1916, Chapter IV] and [IK99]). However, the
local case is not so well explored. We refer the reader to [Ems78], [Iar94] for an extended treatment.
4
MASUTI AND ROSSI
It is known that the injective hull of K as R-module is isomorphic to a divided power ring
P = K [ X1 , . . . , Xr ] which has a structure of R-module by means of the following action
◦ : R × P −→
( f , g) −→
P
f ◦ g = f (∂X1 , . . . , ∂Xr )( g)
where ∂Xi denotes the partial derivative with respect to Xi . For the sake of simplicity from now
on we will use xi instead of the capital letters Xi . If { f 1 , . . . , f t } ⊆ P is a set of polynomials, we
will denote by h f 1 , . . . , f t i R the R-submodule of P generated by f1 , . . . , f t , i.e., the K-vector space
generated by f 1 , . . . , f t and by the corresponding derivatives of all orders. We consider the exact
pairing of K-vector spaces:
h , i : R × P −→
K
( f , g) −→ ( f ◦ g)(0).
For any ideal I ⊂ R we define the following R-submodule of P called Macaulay’s inverse system:
I ⊥ : = { g ∈ P | h f , g i = 0 ∀ f ∈ I }.
Conversely, for every R-submodule M of P we define
AnnR ( M ) := { g ∈ R | h g, f i = 0 ∀ f ∈ M }
which is an ideal of R. If M is generated by polynomials f := f1 , . . . , f t , with f i ∈ P, then we will
write AnnR ( M ) = AnnR (f) and Af = R/ AnnR (f).
By using Matlis duality one proves that there exists a one-to-one correspondence between ideals
I ⊆ R such that R/I is an Artinian local ring and R-submodules M of P which are finitely generated. More precisely, Emsalem in [Ems78, Proposition 2] and Iarrobino in [Iar94, Lemma 1.2]
proved that:
Proposition 2.1. There is a one-to-one correspondence between ideals I such that R/I is a level local ring
of socle degree s and type τ and R-submodules of P generated by τ polynomials of degree s having linearly
independent forms of degree s. The correspondence is defined as follows:
I ⊆ R such that R/I
M ⊆ P submodule generated by
1− 1
is a level local ring of
τ polynomials of degree
←→
socle degree s and type τ
s with l.i. forms of degree s
I
AnnR ( M )
−→
←−
I⊥
M
By [Ems78, Proposition 2(a)], the action h , i induces the following isomorphism of K-vector
spaces:
(2.2)
( R/I )∗ ≃ I ⊥ ,
(where ( R/I )∗ denotes the dual with respect to the pairing h , i induced on R/I). Hence dimK R/I =
dimK I ⊥ . As in the graded case, it is possible to compute the Hilbert function of A = R/I via the
inverse system. We define the following K-vector space:
( I ⊥ )i :=
I ⊥ ∩ P≤i + P<i
.
P<i
Then, by (2.2), it is known that
(2.3)
hi ( R/I ) = dimK ( I ⊥ )i .
5
2.2. Q-decomposition. It is well-known that the Hilbert function of an Artinian graded Gorenstein K-algebra is symmetric, which is not true in the local case. The problem comes from the fact
that, in general, the associated algebra G := grm ( A) of a Gorenstein local algebra A is no longer
Gorenstein. However, in [Iar94] Iarrobino proved that the Hilbert function of a Gorenstein local
K-algebra A admits a “symmetric” decomposition. To be more precise, consider a filtration of G
by a descending sequence of ideals:
G = C (0) ⊇ C (1) ⊇ · · · ⊇ C (s) = 0,
where
C ( a) i : =
( 0 : m s + 1− a − i ) ∩ m i
.
( 0 : m s + 1− a − i ) ∩ m i + 1
Let
Q( a) = C ( a)/C ( a + 1).
Then
{Q( a) : a = 0, . . . , s − 1}
is called Q-decomposition of the associated graded ring G. We have
s −1
hi ( A) = dimK Gi =
∑ dimK Q(a)i .
a=0
Iarrobino [Iar94, Theorem 1.5] proved that if A = R/I is a Gorenstein local ring then for all
a = 0, . . . , s − 1, Q( a) is a reflexive graded G-module, up to a shift in degree: HomK ( Q( a)i , K ) ∼
=
Q( a)s− a−i . Hence the Hilbert function of Q( a) is symmetric about s−2 a . Moreover, he showed that
Q(0) = G/C (1) is the unique (up isomorphism) socle degree s graded Gorenstein quotient of G.
Let f = f [s] + lower degree terms... be a polynomial in P of degree s where f [s] is the homogeneous part of degree s and consider A f the corresponding Gorenstein local K-algebra. Then,
Q ( 0) ∼
= R/ AnnR ( f [s]) (see [Ems78, Proposition 7] and [Iar94, Lemma 1.10]).
3. C HARACTERIZATION
OF LEVEL
O- SEQUENCES
In this section we characterize Gorenstein and level O-sequences of socle degree 4 and embedding dimension 3. First if h = (1, h1 , . . . , hs ) is a Gorenstein O-sequence (in any embedding
dimension), we give an upper bound on hs−2 in terms of hs−1 which improves the already known
inequality given in (1.1).
Theorem 3.1.
(3.2)
(a) If (1, h1 , . . . , hs−2 , hs−1 , 1) is a Gorenstein O-sequence, then
hs −1 + 1
hs−1 ≤ h1 and hs−2 ≤
+ ( h1 − h s − 1 ) .
2
(b) If h = (1, h1 , h2 , h3 , 1) is an O-sequence such that h2 ≥ h3 and it satisfies (3.2), then h is a
Gorenstein O-sequence.
Proof. (a): From (1.1) it follows that hs−1 ≤ h1 . Let A be a Gorenstein local K-algebra with the
Hilbert function h and {Q( a) : a = 0, . . . , s − 1} be a Q-decomposition of grm ( A). Since Q(i )s−1 =
0 for i > 0, dimK Q(0)s−1 = hs−1 . Hence dimK Q(0)1 = dimK Q(0)s−1 = hs−1 and dimK Q(0)s−2 =
h 1i
dimK Q(0)2 ≤ hs−1 . This in turn implies that dimK Q(1)s−2 = dimK Q(1)1 ≤ h1 − hs−1 . Therefore
hs−2 = dimK Q(0)s−2 + dimK Q(1)s−2
hs −1 + 1
≤
+ ( h1 − h s − 1 ) .
2
6
MASUTI AND ROSSI
(b): Suppose h2 ≤ h1 . Set
f = x14 + · · · + x4h3 + x3h3 +1 + · · · + x3h2 + x2h2 +1 + · · · + x2h1 .
Then A f has the Hilbert function h. Now assume h2 > h1 . Denote h3 := n and define monomials
gi ∈ K [ x1 , . . . , xn ] as follows:
2
if 1 ≤ i ≤ n
xi
gi = xi−n xi−n+1 if n + 1 ≤ i ≤ 2n − 1
x n x1
if i = 2n.
For i = (i1 , . . . , in ) ∈ N n , let xi := x1i1 . . . xnin . Let T be the set of monomials xi of degree 2 in
K [ x1 , . . . , xn ] such that
/ { gi : 1 ≤ i ≤ 2n}.
xi ∈
1
n +1
Then | T | = (n+
2 ) − 2n. We write T = { gi : 2n < i ≤ ( 2 )}. Define
h2 − h1
n
2
x
g
+
∑ x2i gn+i + x3n+1 + · · · + x3h1
∑ i i
i=1
f = i=n 1
h2 − h1 + n
n
∑ x2 gi + ∑ x2 gn+i + ∑ g2 + x3n+1 + · · · + x3
i
i
i
h1
i=1
Then h3 = dimK
if h2 − h1 > n.
2n +1
i=1
∂f
∂f
∂x1 , . . . , ∂xn
if h2 − h1 ≤ n
= n and
h2 = dimK { gi : 1 ≤ i ≤ h2 − h1 + n}
Thus A f has the Hilbert function (1, h1 , h2 , h3 , 1).
S
·
{ x2n+1 , . . . , x2h1 } .
Remark 3.3. If h3 = h1 , then f in the proof of Theorem 3.1(b) is homogeneous and hence A f is a
graded Gorenstein K-algebra.
If the socle degree is 4 and h1 ≤ 12, then the converse holds in (a).
Corollary 3.4. An O-sequence h = (1, h1 , h2 , h3 , 1) with h1 ≤ 12 is a Gorenstein O-sequence if and only
if h satisfies (3.2).
Proof. By Theorem 3.1, it suffices to show that h2 ≥ h3 if h1 ≤ 12. Let A be a local Gorenstein Kalgebra with the Hilbert function h. Considering the symmetric Q-decomposition of grm ( A), we
observe that in this case Q(0) has the Hilbert function (1, h3 , h2 − m, h3 , 1), for some non-negative
integer m. Since h1 ≤ 12, by (3.2) h3 ≤ 12. As Q(0) is a graded Gorenstein K-algebra, by [MZ16,
Theorem 3.2] we conclude that h2 − m ≥ h3 since Q(0) has unimodal Hilbert function, hence
h2 ≥ h3 .
Remark 3.5. A Gorenstein O-sequence (1, h1 , h2 , h3 , 1) does not necessarily satisfy h2 ≥ h3 . For
example, consider the the sequence h = (1, 13, 12, 13, 1). By [Sta78, Example 4.3] there exists a
graded Gorenstein K-algebra with h as h-vector.
In the following theorem we characterize the h-vector of local level algebras of socle degree 4
and embedding dimension 3.
Theorem 3.6. Let h = (1, 3, h2 , h3 , h4 ) be an O-sequence.
(a) Let h4 = 1. Then h is a Gorenstein O-sequence if and only if h3 ≤ 3 and h2 ≤ (h32+1) + (3 − h3 ).
(b) Let h4 ≥ 2. Then h is a level O-sequence if and only if h3 ≤ 3h4 .
7
Proof. (a): Follows from Corollary 3.4.
(b): The “only if” part follows from (1.1). The converse is constructive and we prove it by inductive
steps on h4 . First we consider the cases h4 = 2, 3, 4 and then h4 ≥ 5. In each case we define the
polynomials f := f1 , . . . , f h4 ∈ P = K [ x1 , x2 , x3 ] of degree 4 such that Af has the Hilbert function h.
We set g1′ = x33 , g2′ = x22 x3 , g3′ = x12 x2 , g4′ = x1 x32 .
For short, in this proof we use the following notation: m := h2 and n := h3 .
Case 1: h4 = 2.
In this case n ≤ 6 as h3 ≤ 3h4 by assumption. Suppose m = 2. Then h is an O-sequence implies
that n = 2. In this case, let f1 = x14 + x32 and f2 = x24 . Then Af has the Hilbert function (1, 3, 2, 2, 2).
Now assume that m ≥ 3.
′
x4−i gi if 1 ≤ i ≤ n − 2
Subase 1: m ≥ n. We set gi = gi′
if n − 1 ≤ i ≤ m − 2
0
if m − 1 ≤ i ≤ 4.
(Here x0 = x3 ). Define
f1 = x14 + g1 + g2 and f 2 = x24 + g3 + g4 .
Then
h3 = dimK { x13 , x23 , g1′ , . . . , gn′ −2 } = n and
g′
h2 = dimK { x12 , x22 , i : 1 ≤ i ≤ m − 2} = m
x4 − i
and hence Af has the required Hilbert function h.
Subcase 2: m < n. The only possible ordered tuples (m, n) with m < n ≤ 6 such that h is an Osequence are {(3, 4), (4, 5), (5, 6)}. For each 2-tuple (m, n) we define f1 , f2 as:
a.(m, n) = (3, 4) : f1 = x14 + x12 x22 + x32 ; f2 = x24 + x12 x22 .
b.(m, n) = (4, 5) : f1 = x14 + x12 x22 + x34 ; f2 = x24 + x12 x22 .
c.(m, n) = (5, 6) : f1 = x14 + x12 x22 + x34 ; f2 = x24 + x12 x22 + x23 x3 .
Case 2: h4 = 3.
In this case n ≤ 9. We consider the following subcases:
Subcase 1: n ≤ 6. Let f′ = f 1 , f 2 be polynomials defined as in Case 1 such that Af′ has the Hilbert
(
x34
if m ≥ n
function (1, 3, m, n, 2). Now define f3 =
2
2
x1 x2 if m < n.
Then Af has the required Hilbert function h.
Subcase 2: 7 ≤ n ≤ 9. Let f′ = f1 , f 2 be polynomials defined as in Case 1 such that Af′ has the
2 2
2 2
2 2
Hilbert function (1, 3, m, 6, 2). We set p1 = x(
2 x3 , p2 = x1 x2 and p3 = x1 x3 . Since h is an O-sequence
n −6
∑i=1 pi if m = 6
and n ≥ 7, we get m ≥ 5. Now define f 3 =
if m = 5.
x22 x32
Then Af has the required Hilbert function h.
Case 3: h4 = 4.
8
MASUTI AND ROSSI
Since h is an O-sequence, n ≤ 10. We consider the following subcases:
Subcase 1: n ≤ 9. Let f′ = f1 , f 2 , f 3 be polynomials defined as in Case 2 such that Af′ has the
Hilbert function (1, 3, m, n, 3). Define
(
x23 x3 if {m ≥ n and n ≤ 6} OR {n ≥ 7 and m = 6}
f4 =
x13 x2 if {m < n ≤ 6} OR {(m, n) = (5, 7)}.
Then Af has the Hilbert function (1, 3, m, n, 4).
Subcase 2: n = 10. As h is an O-sequence, we conclude that m = 6. Let f′ = f 1 , f2 , f 3 be polynomials defined as in Case 2 such that Af′ has the Hilbert function (1, 3, 6, 9, 3). Define f4 = x12 x2 x3 .
Then Af has the Hilbert function (1, 3, 6, 10, 4).
Case 4: h4 ≥ 5.
Since h is an O-sequence, n ≤ 10 and h4 ≤ 15
Subcase 1: n ≥ h4 OR h4 ≥ 11. Let f′ = f 1 , f 2 , f 3 , f 4 be defined as in Case 3 such that Af′ has the
Hilbert function (1, 3, m, n, 4). For 5 ≤ i ≤ 15, define f i as follows:
(
x13 x2 if {m ≥ n and n ≤ 6} OR {n ≥ 7 and m = 6}
f5 =
x1 x23 if {m < n ≤ 6} OR {(m, n) = (5, 7)},
(
x1 x33 if {m ≥ n and n ≤ 6} OR {n ≥ 7 and m = 6}
f6 =
x34
if {m < n ≤ 6} OR {(m, n) = (5, 7)},
(
if n ≥ 7 and m = 6
x34
f7 =
3
x2 x3 if {(m, n) = (5, 7)}.
(Note that in the last case, h4 ≥ 7 implies that n ≥ 7). If h4 ≥ 8, then n ≥ 8 which implies that
m = 6. We set
f8 = x12 x22 , f 9 = x12 x32 , f 10 = x2 x33 , f 11 = x1 x23 , f 12 = x13 x3 , f 13 = x23 x3 , f 14 = x1 x22 x3 , f 15 = x1 x2 x32 .
Now Af has the Hilbert function (1, 3, m, n, h4 ).
Subcase 2: n < h4 ≤ 10. The smallest ordered tuple (n, h4 ) such that h is an O-sequence and n <
h4 is (4, 5). (Here smallest ordered tuple means smallest with respect to the order ≤ defined as:
(n1 , n2 ) ≤ (m1 , m2 ) if and only if n1 ≤ m1 and n2 ≤ m2 ). Let
(
(
(
x32 if m = 3
0
if m < 5
0
if m < 6
q1 =
q2 =
and q3 =
x33 if m ≥ 4,
x22 x3 if m ≥ 5
x1 x32 if m = 6.
We define
f1 = x14 + q1 + q2 , f 2 = x24 + q3 , f3 = x13 x2 , f 4 = x1 x23 , f5 = x12 x22 .
Then Af has the Hilbert function (1, 3, m, 4, 5).
Let h4 ≥ 6. We set
(
f6 =
x34 , f 7
=
x23 x3 , f 8
=
x2 x33 , f 9
=
x22 x32
x1 x33
Then Af has the Hilbert function (1, 3, m, n, h4 ).
if n = 7
f 10 =
if n ≥ 8,
(
x22 x32
x13 x3
if n = 8
if n = 9.
9
Using [GHMY07, Appendix D] and Theorem 3.6(b) we list in the following table all the Osequences which are realizable for local level K-algebras with h4 ≥ 2, but not for graded level
K-algebras.
TABLE 1
(1, 3, 2, 2, 2)
(1, 3, 6, 3, 2)
(1, 3, 6, 4, 3)
(1, 3, 5, 4, 5)
(1, 3, 6, 7, 9)
(1, 3, 3, 2, 2)
(1, 3, 3, 4, 2)
(1, 3, 3, 4, 4)
(1, 3, 6, 4, 5)
(1, 3, 4, 2, 2)
(1, 3, 4, 3, 3)
(1, 3, 5, 4, 4)
(1, 3, 6, 5, 5)
(1, 3, 5, 2, 2)
(1, 3, 5, 3, 3)
(1, 3, 6, 4, 4)
(1, 3, 5, 5, 6)
(1, 3, 6, 2, 2)
(1, 3, 6, 3, 3)
(1, 3, 3, 4, 5)
(1, 3, 6, 5, 6)
(1, 3, 5, 3, 2)
(1, 3, 3, 4, 3)
(1, 3, 4, 4, 5)
(1, 3, 6, 6, 7)
Analogously, by Theorem 3.6(a), the following O-sequences are Gorenstein sequences, but they
are not graded Gorenstein sequences since they are not symmetric. If an O-sequence h is symmetric with h1 = 3 then h is also a graded Gorenstein O-sequence by Remark 3.3.
TABLE 2
(1, 3, 1, 1, 1) (1, 3, 2, 1, 1) (1, 3, 3, 1, 1) (1, 3, 2, 2, 1) (1, 3, 3, 2, 1) (1, 3, 4, 2, 1)
The following example shows that Theorem 3.6(b) can not be extended to h1 ≥ 4 because the
necessary condition h3 ≤ h1 hs is not longer sufficient for characterizing level O-sequences of socle
degree 4.
Example 3.7. The O-sequence h = (1, 4, 9, 2, 2) is not a level O-sequence.
Proof. Let A = R/I be a local level K-algebra with the Hilbert function h. The lex-ideal L ∈ P =
K [ x1 , . . . , x4 ] with the Hilbert function h is
L = ( x12 , x1 x22 , x1 x2 x3 , x1 x2 x4 , x1 x32 , x1 x3 x4 , x1 x42 , x23 , x22 x3 , x22 x4 , x2 x32 , x2 x3 x4 , x2 x42 , x33 , x32 x4 , x3 x44 , x45 ).
A minimal graded P-free resolution of P/L is:
0 −→ P(−6)7 ⊕ P(−8)2 −→ P(−5)26 ⊕ P(−7)6 −→ P(−4)33 ⊕ P(−6)6 −→ P(−2) ⊕ P(−3)14 ⊕ P(−5)2.
By [RS10, Theorem 4.1] the Betti numbers of A can be obtained from the Betti numbers of P/L by
a sequence of negative and zero consecutive cancellations. This implies that β4 ( A) ≥ 3 and hence
A has type at least 3, which leads to a contradiction.
4. C ANONICALLY
GRADED ALGEBRAS WITH
m5 = 0.
It is clear that a necessary condition for a Gorenstein local K-algebra A being canonically graded
is that the Hilbert function of A must be symmetric. Hence we investigate whether a Gorenstein Kalgebra A with the Hilbert function (1, h1 , h2 , h1 , 1) is necessarily canonically graded. If h2 = (h12+1)
(equiv. A is compressed), then by [ER15, Theorem 3.1] A is canonically graded. In this section we
prove that if h = (1, h1 , h2 , h1 , 1) is an O-sequence with h1 ≤ h2 < (h12+1), then there exists a
polynomial F of degree 4 such that A F has the Hilbert function h and it is not canonically graded.
Theorem 4.1. Let h = (1, h1 , h2 , h3 , 1) be a Gorenstein O-sequence with h2 ≥ h1 . Then every local
Gorenstein K-algebra with Hilbert function h is necessarily canonically graded if and only if h1 = h3 and
h2 = (h12+1).
10
MASUTI AND ROSSI
Proof. The assertion is clear for h1 = 1. Hence we assume h1 > 1. The “if” part of the theorem
follows from [ER15, Theorem 3.1]. We prove the converse, that is we show that if h2 < (h12+1),
then there exists a polynomial G of degree 4 such that A G has the Hilbert function h and it is not
canonically graded. We may assume h3 = h1 , otherwise the result is clear. For simplicity in the
notation we put h1 := n and h2 := m.
First we prove the assertion for n ≤ 3. We define
3
x1 x2
x4 + x4 + x3 x
2 3
2
1
F=
4 + x4 + x3 x + x3 x
x
2
2 3
1 2
14
x1 + x24 + x23 x3 + x13 x2 + x1 x22 x3
if n
if n
if n
if n
ϕ ( x n ) = u 1 x1 + · · · + u n x n +
∑
=m=2
= 3 and m = 3
= 3 and m = 4
= 3 and m = 5.
Let G = F + x3n . It is easy to check that A G has the Hilbert function h. We claim that A G is not
canonically graded. Suppose that A G is canonically graded. Then A F ∼
= grm ( A) ∼
= A G . Let
2
2
ϕ : A F −→ A G be a K-algebra automorphism. Since xn ◦ F = 0, xn ∈ AnnR ( F ). This implies that
ϕ( xn )2 ∈ AnnR ( G ) and hence ϕ( xn )2 ◦ G = 0. For i = (i1 , . . . , in ) ∈ N n , let |i | = i1 + · · · + in .
Suppose
i ∈N n ,
ai x i .
|i|≥2
Comparing the coefficients of the monomials of degree ≤ 2 in ϕ( xn )2 ◦ G = 0, it is easy to verify that u1 = · · · = un = 0. This implies that ϕ( xn ) has no linear terms and thus ϕ is not an
automorphisms, a contradiction.
Suppose n > 3. First we define a homogeneous polynomial F ∈ P of degree 4 such that A F
has the Hilbert function h and x2n does not divide any monomial in F (in other words, if xi is a
monomial that occurs in F with nonzero coefficient, then in ≤ 1).
Let T be a monomials basis of P2 . We split the set T \ { x2n } into a disjoint union of monomials as
follows. We set
2
xi
x x
2 n
pi =
x
i − n x i + 1− n
x1 x n
for 1 ≤ i ≤ n − 1
for i = n
for n + 1 ≤ i < 2n
for i = 2n.
Let E = { pi : 1 ≤ i ≤ n}, B = { pi : n + 1 ≤ i ≤ 2n}, C = { xi x j : 1 ≤ i < j < n such that j − i > 1}
and D := { xi xn : 3 ≤ i ≤ n − 2}. Then
T \ { x2n } = E
S S S
·
B
·
C
·
D.
1
Denote by | · | the cardinality, then |C | = (n+
2 ) − 2n − ( n − 4) − 1 and | D | = n − 4. Hence we write
1
n +1
n +1
C = { pi : 2n < i ≤ (n+
2 ) − ( n − 4) − 1} and D = { pi : ( 2 ) − ( n − 4) − 1 < i ≤ ( 2 ) − 1}. We
11
set
Define
4
xi
x23 xn
2
xi− n pi
gi = x22 x32
x1 x22 xn
p2
x2i 2
xn pi
for 1 ≤ i ≤ n − 1
for i = n
for n + 1 ≤ i < 2n and i 6= n + 2
for i = n + 2
for i = 2n
1
for 2n < i ≤ (n+
2 ) − ( n − 4) − 1
1
n +1
for (n+
2 ) − ( n − 4) − 1 < i < ( 2 ) .
m
F=
∑ gi .
i=1
Since m ≥ n, dimK (h F i R )i = n for i = 1, 3. Also, dimK (h F i R )2 = dimK { pi : 1 ≤ i ≤ m} = m.
Hence A F has the Hilbert function h.
Let G = F + x3n . We prove that A G is not canonically graded. Suppose that A G is canonically
graded. Then, as before, A G ∼
= A F . Let ϕ : A F −→ A G be a K-algebra automorphism. Since F does
not contain a monomial multiple of x2n , x2n ◦ F = 0 and hence x2n ∈ AnnR ( F ) which implies that
ϕ( xn )2 ∈ AnnR ( G ). Let
ϕ ( x n ) = u 1 x1 + · · · + u n x n +
∑
ai x i .
i∈N n , |i|≥2
We claim that u1 = · · · = un−1 = 0.
Case 1: m = n. Comparing the coefficients of x12 , x2 xn , x32 , . . . , x2n−1 in ϕ( xn )2 ◦ G = 0, we get u1 =
· · · = un−1 = 0.
Case 2: m = n + 1 OR m = n + 2. Compare the coefficients of x1 x2 , x2 xn , x32 , . . . , x2n−1 in ϕ( xn )2 ◦
G = 0, to get u1 = · · · = un−1 = 0.
Case 3: n + 2 < m < 2n. Comparing the coefficients of x1 x2 , x2 xn , x3 x4 , . . . , xm−n xm−n+1 , x2m−n+1,
x2m−n+2 . . . , x2n−1 in ϕ( xn )2 ◦ G = 0, we get u1 = · · · = un−1 = 0.
Case 4: m ≥ 2n. Comparing the coefficients of x1 x2 , x1 xn , x3 x4 , . . . , xn−1 xn in ϕ( xn )2 ◦ G = 0, we
get u1 = · · · = un−1 = 0.
This proves the claim. Now, comparing the coefficients of xn in ϕ( xn )2 ◦ G = 0, we get un = 0
(since F does not contain a monomial divisible by x2n ). This implies that ϕ( xn ) has no linear terms
and hence ϕ is not an automorphism, a contradiction.
We expect that the Theorem 4.1 holds true without the assumption h2 ≥ h1 . The problem is
that, as far as we know, the admissible Gorenstein not unimodal h-vectors are not classified even
if s = 4. However, starting from an example by Stanley, we are able to construct a not canonically
graded Gorenstein K-algebra with (not unimodal) h-vector (1, 13, 12, 13, 1).
Corollary 4.2. Let h = (1, h1 , h2 , h1 , 1) where h1 ≤ 13 be a Gorenstein O-sequence. Then every Gorenstein K-algebra with the Hilbert function h is necessarily canonically graded if and only if h2 = (h12+1).
Proof. If a local Gorenstein K-algebra A has Hilbert function h = (1, h1 , h2 , h1 , 1), then by considering Q-decomposition of grm ( A) we conclude that grm ( A) ∼
= Q(0). This implies that h is also the
Hilbert function of a graded Gorenstein K-algebra. By [MZ16, Theorem 3.2] if h1 ≤ 12, then the
Hilbert function of a graded Gorenstein K-algebra is unimodal. Hence by Theorem 4.1 the result
follows.
12
MASUTI AND ROSSI
If h1 = 13 and h is unimodal, then the assertion follows from Theorem 4.1. Now, by [MZ16, Theorem 3.2] the only nonunimodal graded Gorenstein O-sequence with h1 = 13 is h = (1, 13, 12, 13, 1).
In this case we write P = [ x1 , . . . , x10 , x, y, z]. Let
10
F=
∑ xi µi ,
i=1
3 .
where µ = { x3 , x2 y, x2 z, xy2 , xyz, xz2 , y3 , y2 z, yz2 , z3 } = {µ1 , . . . , µ10 }. Let G = F + x13 + · · · + x10
Then A G has the Hilbert function h. We claim that A G is not canonically graded. Suppose A G is
canonically graded. Then A G ∼
= A F . Let ϕ : A F −→ A G be a K-algebra automorphism. Since
x12 ∈ AnnR ( F ), ϕ( x1 )2 ∈ AnnR ( G ). Let
ϕ( x1 ) = u1 x1 + · · · + u10 x10 + u11 x + u12 y + u13 z +
∑
ai x i .
i∈N n , |i|≥2
Comparing the coefficients of x1 x, x7 y, x10 z in ϕ( x1 )2 ◦ G = 0, we get u11 = u12 = u13 = 0. Now,
comparing the coefficients of x1 , . . . , x10 in ϕ( x1 )2 ◦ G = 0, we get u1 = · · · = u10 = 0. This implies
that ϕ( x1 ) has no linear terms and thus ϕ is not an automorphism, a contradiction.
R EFERENCES
[AHS13]
[AIS08]
[Ber09]
[CEVV09]
[CENR13]
[CoCoA]
[CRV01]
[Singular]
[Ste14]
[Eli15]
[ER12]
[ER15]
[EV08]
[Ems78]
[GHMY07]
[HS11]
[Iar84]
[Iar94]
[Iar04]
L. L. Avramov, I. B. Henriques and L. M. Şega, Quasi-complete intersection homomorphisms, Pure and Applied
Math. Quarterly 9 no 4 (2013), 579-612. 1
L. L. Avramov, S. B. Iyengar and L. M. Şega, Free resolutions over short local rings, J. Lond. Math. Soc. 78
(2008), 459-476. 1
V. Bertella, Hilbert function of local Artinian level rings in codimension two, J. Algebra 321 (2009), 1429-1442. 2
D. A. Cartwright, D. Erman, M. Velasco and B. Viray, Hilbert schemes of 8 points, Algebra Number Theory 3
(2009), 763-795. 1
G. Casnati, J. Elias, R. Notari and M. E. Rossi, Poincaré series and deformations of Gorenstein local algebras,
Comm. Algebra 41 (2013), 1049-1059. 1
CoCoATeam, CoCoA: a system for doing Computations in Commutative Algebra, available at
http://cocoa.dima.unige.it. 3
A. Conca, M. E. Rossi and G. Valla, Gröbner flags and Gorenstein algebras, Compositio Math. 129 (2001),
95-121. 1
W. Decker, G.-M. Greuel, G. Pfister and H. Schönemann, S INGULAR 4-1-0 — A computer algebra system
for polynomial computations (2016), available at http://www.singular.uni-kl.de. 3
A. De Stefani, Artinian level algebras of low socle degree, Comm. Algebra 42 (2014), 729-754. 2
J.
Elias,
I NVERSE - SYST. LIB –Singular library
for
computing
Macaulay’s
inverse
systems,
http://www.ub.edu/C3A/elias/inverse-syst-v.5.2.lib, 2015. 3
J. Elias and M. E. Rossi, Isomorphism classes of short Gorenstein local rings via Macaulay’s inverse system, Trans.
Amer. Math. Soc. 364 (2012), 4589-4604. 3
J. Elias and M. E. Rossi, Analytic isomorphisms of compressed local algebras, Proc. Amer. Math. Soc. 143 (2015),
973-987. 3, 9, 10
J. Elias and G. Valla, Structure theorems for certain Gorenstein ideals, Special volume in honor of Melvin
Hochster, Michigan Math. J. 57 (2008), 269-292. 1
J. Emsalem, Géométrie des points épais, Bull. Soc. Math. France 106 (1978), no. 4, 399–416. 3, 4, 5
A. V. Geramita, T. Harima, J. C. Migliore and Y. S. Shin, The Hilbert function of a level algebra, Mem. Amer.
Math. Soc. 186 (2007), no. 872, vi+139. 2, 9
I. B. Henriques and L. M. Şega, Free resolutions over short Gorenstein local rings, Math. Z. 267 (2011), 645-663.
1
A. Iarrobino, Compressed algebras: Artin algebras having given socle degrees and maximal length, Trans. Amer.
Math. Soc. 285 (1984), 337-378. 2, 3
A. Iarrobino, Associated graded algebra of a Gorenstein Artin algebra, Mem. Amer. Math. Soc. 107 (1994),
no. 514, viii+115. 3, 4, 5
A. Iarrobino, Ancestor ideals of vector spaces of forms, and level algebras, J. Algebra 272 (2004), 530-580. 2
13
A. Iarrobino and V. Kanev, Power sums, Gorenstein algebras, and determinantal loci, Appendix C by Iarrobino
and Steven L. Kleiman, Lecture Notes in Mathematics, vol. 1721, Springer-Verlag, Berlin, 1999. 3
[Jel16]
J. Jelisiejew, Classifying local Artinian Gorenstein algebras, to appear in J. Collect. Math. (2016), available at
arXiv:1511.08007. 3
[Mac1916] F. S. Macaulay, The algebraic theory of modular systems, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1994. Revised reprint of the 1916 original; With an introduction by Paul Roberts.
3
[Mac27]
F. S. Macaulay, Some properties of enumeration in the theory of modular systems, Proc. Lond. Math. Soc. 26
(1927), 531-555. 1
[MZ16]
J. Migliore and F. Zanello, Stanley’s nonunimodal Gorenstein h-vector is optimal, Proc. Amer. Math. Soc. 145
(2017), 1-9. 6, 11, 12
[Poo08]
B. Poonen, Isomorphism types of commutative algebras of finite rank over an algebraically closed field. Computational arithmetic geometry, Contemp. Math. 463, Amer. Math. Soc., Providence, RI, 2008, 111-120. 1
[Roos05]
J.-E. Roos, Good and bad Koszul algebras and their Hochschild homology, J. Pure Appl. Algebra 201 (2005), 295327. 1
[RS10]
M. E. Rossi and L. Sharifan, Consecutive cancellations in Betti numbers of local rings, Proc. Amer. Math. Soc.
138 (2010), 61-73. 9
[Sjo79]
G. Sjödin, The Poincaré series of modules over a local Gorenstein ring with m3 = 0, Mathematiska Institutionen,
Stockholms Universitet, 1979. 1
[Sta78]
R. Stanley, Hilbert functions of graded algebras, Adv. in Math. 28 (1978), 57-83. 2, 6
[IK99]
D IPARTIMENTO DI M ATEMATICA , U NIVERSIT À DI G ENOVA , V IA D ODECANESO 35, 16146 G ENOVA , I TALY
E-mail address: [email protected]
D IPARTIMENTO DI M ATEMATICA , U NIVERSIT À DI G ENOVA , V IA D ODECANESO 35, 16146 G ENOVA , I TALY
E-mail address: [email protected]
| 0 |
Reconfigurable Antenna Multiple Access for 5G
mmWave Systems
Mojtaba Ahmadi Almasi∗ , Hani Mehrpouyan∗, David Matolak∗∗ , Cunhua Pan†, Maged Elkashlan†
∗ Department of Electrical and Computer Engineering, Boise State University, {mojtabaahmadialm,hanimehrpouyan}@boisestate.edu
∗∗ Department of Electrical Engineering, University of South Carolina, {matolak}@cec.sc.edu
arXiv:1803.09918v1 [] 27 Mar 2018
† School of Electronic Engineering and Computer Science, Queen Mary University of London, {c.pan,maged.elkashlan}@qmul.ac.uk
Abstract—This paper aims to realize a new multiple access
technique based on recently proposed millimeter-wave reconfigurable antenna architectures. To this end, first we show that integration of the existing reconfigurable antenna systems with the
well-known non-orthogonal multiple access (NOMA) technique
causes a significant degradation in sum rate due to the inevitable
power division in reconfigurable antennas. To circumvent this
fundamental limit, a new multiple access technique is proposed.
The technique which is called reconfigurable antenna multiple
access (RAMA) transmits only each user’s intended signal at
the same time/frequency/code, which makes RAMA an inter-user
interference-free technique. Two different cases are considered,
i.e., RAMA with partial and full channel state information (CSI).
In the first case, CSI is not required and only the direction
of arrival for a specific user is used. Our analytical results
indicate that with partial CSI and for symmetric channels,
RAMA outperforms NOMA in terms of sum rate. Further, the
analytical result indicates that RAMA for asymmetric channels
achieves better sum rate than NOMA when less power is assigned
to users that experience better channel quality. In the second
case, RAMA with full CSI allocates optimal power to each user
which leads to higher achievable rates compared to NOMA
for both symmetric and asymmetric channels. The numerical
computations demonstrate the analytical findings.
I. I NTRODUCTION
The rapid growth of global mobile data traffic is expected
to be satisfied by exploiting a plethora of new technologies,
deemed as 5th generation (5G) networks. To meet this demand,
millimeter-wave (mmWave) communications operating in the
30 − 300 GHz range is emerging as one of the most promising solutions [1]. The existence of a large communication
bandwidth at mmWave frequencies represents the potential for
significant throughput gains. Indeed, the shorter wavelengths
at the mmWave band allow for the deployment of a large
number of antenna elements in a small area, which enables
mmWave systems to potentially support more higher degrees
of beamforming gain and multiplexing [1]. However, significant path loss, channel sparsity, and hardware limitations are
major obstacles for the deployment of mmWave systems. In
order to address these obstacles, several mmWave systems
have been proposed to date [2]–[4].
An analog beamforming mmWave system is designed in [2]
which uses one radio frequency (RF) chain and can support
only one data stream. Subsequent work considers a hybrid
beamforming mmWave system to transmit multiple streams
This project is supported in part by the NSF ERAS grant award numbers
1642865.
by exploiting several RF chains [3]. In [4], the concept of
beamspace multi-input multi-output (MIMO) is introduced
where several RF chains are connected to a lens antenna array
via switches. In addition to these systems, non-orthogonal
multiple access (NOMA) has been also considered as another
promising enabling technique for 5G to enhance spectral
efficiency in multi-user scenarios [5], [6]. Unlike orthogonal
multiple access (OMA) techniques that are realized in time,
frequency, or code domain, NOMA is realized in the power
domain [7]. In fact, NOMA performs superposition coding
(SC) in the power domain at the transmitter. Subsequently,
successive interference cancellation (SIC) is applied at the
receiver [8], [9].
In order to serve more users in 5G wireless communications, recently, the integration of NOMA in mmWave
systems, i.e., mmWave-NOMA, has been studied [10]–[14].
The work in [10] designs a random beamforming technique
for mmWave-NOMA systems. The base station (BS) randomly
radiates a directional beam toward paired users. In [11], it
is shown that mismatch between the users’ channel vector
and finite resolution analog beamforming1 simplifies utilizing
NOMA in mmWave-MIMO systems. The work in [12], proposes the combination of beamspace MIMO and NOMA, to
ensure more users can be served with a limited number of RF
chains. As a result, the number of served users is more than the
number of RF chains [12]. In [13], NOMA is studied for hybrid mmWave-MIMO systems. A power allocation algorithm
has been provided in order to maximize energy efficiency.
Newly, a joint power allocation and beamforming algorithm
for NOMA in the analog mmWave systems has been proposed
in [14]. Although the works [10]–[14] have all resulted in
maximizing bandwidth efficiency, this gain has come at the
costs of higher complexity at the receiver and the use of
multiple RF chains or one RF chain along with a large number
of phase shifters and power amplifiers which can be costly at
mmWave frequencies. Hence, in contrast to the prior works,
here, we will propose a new multiple access scheme that takes
advantage of reconfigurable antennas to outperform NOMA,
while at the same requiring one RF chain at the transmitter
and simple receiver structure.
Recently, there has been a new class of reconfigurable
antennas that can support the transmission of multiple radiation beams with one RF chain [15], [16]. The unique
1 Finite resolution analog beamforming is due to the use of a finite number
of phase shifters.
II. S YSTEM M ODEL AND NOMA
In this section, first we introduce the reconfigurable antenna
systems and their properties. Then, NOMA technique for a
BS and multiple users is described. Finally, NOMA for the
reconfigurable antennas is investigated.
A. Reconfigurable antenna systems
A reconfigurable antenna can support multiple reconfigurable orthogonal radiation beams. To accomplish this, a
spherical dielectric lens is fed with multiple tapered slot
antennas (TSAs), as shown in Fig. 1. The combination of each
TSA feed and the lens produces highly directive beams in far
field [15], [16]. That is, each TSA feed generates a beam in
a given direction in the far field. Therefore, a reconfigurable
antenna system is a multi-beam antenna capable of generating
M ≫ 1 independent beams where M is the number of TSA
feeds. Only the feed antennas that generate the beams in the
desired directions need to be excited. To this end, the output
of the RF chain is connected to a beam selection network (see
Fig. 1). The network has one input port that is connected to
the RF chain and M output ports that are connected to the M
TSA feeds [15]. For instance, in Fig. 1, the network selects
(a) RF Transceiver Chain
Spherical
Dielectric Lens
RF Transciever Chain
Mixer
PA
Duplexer
Mixer
LNA
Far-Field
Pencil Beams
Beam Selection
Network
property that distinguishes the system in [15], [16] from prior
art is that the proposed reconfigurable antenna architecture
can support multiple simultaneous orthogonal reconfigurable
beams via one RF chain. Inspired by this class of antennas, this
paper proposes a fresh multiple access technique for mmWave
reconfigurable antenna systems which is called reconfigurable
antenna multiple access (RAMA). We consider a scenario in
which a single BS is equipped with a mmWave reconfigurable
antenna and each beam of the antenna serves one user where
the users are not aligned with the same direction. Given
that the limitation on the RF circuitry of the antenna [15]
results in the division of the transmitted power amongst the
beams, the current state-of-the-art in mmWave-NOMA would
not operate efficiently in such a setting. To enhance the
performance of multiple access schemes in the mmWave band
and also overcome this fundamental limit for reconfigurable
antennas, unlike NOMA, RAMA aims to transmit only the
intended signal of each user. To accommodate this technique,
we will consider two cases, RAMA with partial channel state
information (CSI) and RAMA with full CSI. In the first case,
channel gain information is not required and only the direction
of arrival (DoA) for a specific user is used. Our results
indicate that with partial CSI and for symmetric channels,
RAMA outperforms NOMA in terms of sum rate. Further, the
analytical result indicates that RAMA for asymmetric channels
achieves better sum rate than NOMA when less power is
assigned to a user that experiences a better channel quality. In
the second case, RAMA with full CSI allocates optimal power
to each user which leads to higher achievable rates compared
to NOMA for both symmetric and asymmetric channels. Our
extensive numerical computations demonstrate the analytical
findings.
√
Notations: Hereafter, j = −1. Also, E[·] and | · | denote
the expected value and amplitude value of (·), respectively.
Tapered Slot
Antenna Feeds
Fig. 1. Schematic of the reconfigurable antenna steering multiple beams. The
antenna is composed of a spherical lens fed with a number of tapered slot
antenna feeds. Each feed generates a beam in a given direction in the far
field [15].
only five outputs that are connected to the input ports of five
TSA feeds. Accordingly, the network divides power of the
output signal of the RF chain equally or unequally amongst
five TSA feeds.
It is mentioned that the reconfigurable antennas steer reconfigurable independent beams. This steering is achieved
by selecting the appropriate TSA feed. Also, recall that we
assume that the transmitter has knowledge of the DoA of
the users. Accordingly, by appropriately steering the beams,
the reconfigurable antenna can manipulate the phase of the
received signal at the user terminal. Therefore, steering multiple reconfigurable independent beams and routing the power
amongst those beams are two properties of the reconfigurable
antennas.
B. Review of NOMA
In this paper, we consider the downlink of a single communication cell with a BS in the center that serves multiple users.
The BS and users are provided with a signal omnidirectional
antennas. For simplicity, the number of users is restricted to
two where the users are not aligned with the same direction,
i.e., there is an angle gap between the users.
Let the BS have signals si (i = 1, 2) for User i, where
E[|si |2 ] = 1, with power transmission pi . The sum of pi s, for
i = 1, 2, equals to p. According to the principle of the NOMA
downlink, at the transmitter, s1 and s2 are superposition coded
as
√
√
(1)
x = p 1 s1 + p 2 s2 .
Hence, the received signal at the ith user, for i = 1, 2, is
given by
√
√
yi = xhi + ni ≡ ( p1 s1 + p2 s2 )hi + ni ,
(2)
where hi is the complex channel gain between the BS and User
i, and ni denotes the additive white Gaussian noise with power
σi2 . At the receiver, each user performs the SIC process to
decode the desired signal. The optimal decoding order depends
on the channel gain. Without loss of generality, let us assume
that User 1 have better channel gain, i.e., |h1 |2 /σ12 ≥ |h2 |2 /σ22 ,
which gives p1 ≤ p2 .
After applying SIC, the achievable rate for NOMA for User
i can be determined as
RN = log (1 + p1 |h21 |2 ),
1
2
σ1
(3)
2
2
R2N = log2 (1 + p2 |h22| /σ2 2 ).
p1 |h2 | /σ +1
2
Allocated power to user 1
Power
Allocated power to user 2
Time/Frequency/Code
...,s2, s1
Beam Selection Network
This result indicates that power allocation greatly affects the
achievable rate for each user. For example, an improper power
allocation does now allow User 1 to decode s2 correctly, which
in turn does not allow for the interference from User 2 to be
successfully eliminated.
RF
Im
Re
C. NOMA for the reconfigurable antennas
8-PSK constellation
Phase
Detector
ej
User 1
0
s1
.5p
0.5p
s2
User 2
TSA feed
Suppose that a BS is equipped with the described recon2. Schematic of the BS for reconfigurable antenna multiple access
figurable antenna system and aims to simultaneously serve Fig.
technique with partial CSI and equal power division amongst the TSA feeds.
two users by using NOMA. The reconfigurable antenna steers It is assumed that the signals s1 and s2 are selected form 8-PSK constellation
two beams by feeding two TSAs. Each TSA serves one user
which is equipped with a single omnidirectional antenna. The
superposition coding of si with allocated power pi is defined A. RAMA with Partial CSI
in (1). Users 1 and 2 receive the following signals as
Let us assume that the BS utilizes a reconfigurable antenna
(
√ √
√
√
and
has partial CSI, i.e., knows the DoA of users. Thus, power
z1 = αxh1 + n1 ≡ α( p1 s1 + p2 s2 )h1 + n1 ,
√
√
√
√
is allocated equally for each user, i.e.,
z2 = 1 − αxh2 + n2 ≡ 1 − α( p1 s1 + p2 s2 )h2 + n2 ,
(4)
p1 = p2 = 0.5p.
(6)
respectively, where factor α ∈ (0, 1) is due to the power
division in the reconfigurable antennas. When α = 0.5,
it means the power is divided equally between two TSAs.
Thus, each user receives only a portion of the power of the
transmitted signal x.
Consider the same order for the channel gains, i.e,
|h1 |2 /σ12 ≥ |h2 |2 σ22 . An error-free SIC process results in the
achievable rate for each user as
RNR = log (1 + αp1 |h2 1 |2 ),
1
2
σ1
(5)
2
2
R2NR = log2 (1 + (1−α)p2 |h22| /σ2 2 ).
(1−α)p1 |h2 | /σ2 +1
Obviously, signal to noise ratio (SNR) for User 1 and signal
to interference plus noise ratio (SINR) for User 2 in (5)
is less than that of the NOMA given in (3). As a result,
combining NOMA with reconfigurable antennas will reduce
the achievable user rate when considering the same channel
gains, power allocation, and noise power. It is noteworthy
that for reconfigurable antenna-NOMA the definition of power
allocation is different from power division. Power allocation
is a strategy which is used in NOMA. While, power division
is one of the properties of the reconfigurable antennas that
divides power of the superposition coded signal among two
TSAs.
In brief, when users are not aligned in the same direction,
the reconfigurable antennas cannot harvest the benefits of
NOMA due to the power division property. Whereas, when
users are located on the same direction, the reconfigurable antenna steers only one beam to serve users which means power
division is not required. In this case, reconfigurable antennaNOMA and NOMA in Subsection II.B achieve identical sum
rate performance.
III. R ECONFIGURABLE A NTENNA M ULTIPLE ACCESS
In this section, a novel multiple access technique for the
reconfigurable antenna systems is proposed. The technique
which we call RAMA takes advantage of the reconfigurable
antenna and directional transmission in mmWave bands.
Our main objective is to suppress inter-user interference. To
this end, we aim to transmit only the intended signal for each
user at the same time/frequency/code blocks. The intended
signals for Users 1 and 2 are s1 and s2 , respectively. Let us
assume that si , for i = 1, 2, is drawn from a phase shift
keying (PSK) constellation and E[|si |2 ] = 1. Accordingly, s2
can be expressed in terms of s1 as
s2 = s1 ej∆θ ,
(7)
where ∆θ denotes the difference between the phases of s1 and
s2 . Similar to (1), the power of the transmitted signal, x, is
assumed to be p.
Unlike NOMA, in RAMA only one of the signals, say s1 ,
is upconverted by the RF chain block and the whole power
p is allocated to that signal before the power division step.
Therefore, x is given by
√
x = ps1 .
(8)
It is clear that the superposition coded signal in (1) and the
signal in (8) carry the same average power p. The proposed
multiple access technique for the signal in (8) is shown in
Fig. 2. The phase detector block calculates the phase difference
between s1 and s2 , ej∆θ . Moreover, as shown in Fig. 2, the
beam selection network selects two TSA feeds as highlighted
with black color based on the users’ DoA. For simplicity, we
call them TSA 1 and TSA 2. The network divides the power
equally between TSA√1 and TSA 2. That is, the signal of TSAs
1 and 2 is given by 0.5ps1 .
The signal intended for TSA 1 is the desired signal for
User 1. To transmit the signal intended for User 2 via TSA
2 we take advantage of the reconfigurable antennas. Thanks
to properties of reconfigurable antennas, signal at each TSA
can independently be rotated with an arbitrary angle. Using
this proper, the beam selection network multiplies the signal
j∆θ
in corresponding to TSA 2 with e√
. This results√ in the
transmitted signal via TSA 2 to be 0.5ps1 ej∆θ = 0.5ps2
which is the desired signal for User 2. Since transmission
is directional in mmWave bands, each user receives only its
intended signal as
(
√
z1 = 0.5ps1 h1 + n1 ,
(9)
√
z2 = 0.5ps2 h2 + n2 .
It is noteworthy that, here, we assume that there is no
interference that is imposed from the signal intended for User
1 on User 2 and vice versa. This assumption is very well
justified since the structure of the proposed lens based slotted
reconfigurable antenna results in very directional beams with
very limited sidelobes. Moreover, due to significant pathloss
and shadowing at mmWave frequencies we do not expect the
signals from the sidelobes to reach the unintended user2 . We
also highlight that in contrast to NOMA, full CSI and the
SIC process are not required at the receiver. Furthermore, in
RAMA power allocation and power division carry the same
concept, such that the routed power for TSA 1 (or 2) is the
same as the allocated power for User 1 (or 2).
The achievable rate of RAMA for each user under equal
power allocation is obtained as
RR,I = log (1 + p|h12|2 ),
1
2
2σ1
(10)
2
R2R,I = log2 (1 + p|h22| ).
2σ2
Let us denote RAMA with partial CSI by RAMA-I. It is
valuable to compare the sum rate of NOMA and RAMA-I. To
this end, we consider two extreme cases as follows. By definiN
tion, sum rate for NOMA and RAMA-I are Rsum
= R1N + R2N
R,I
R,I
R,I
and Rsum = R1 + R2 , respectively, where R1N and R2N
are defined in (3) and R1R,I and R2R,I are defined in (10).
Case I: p|h1 |2 /σ12 = p|h2 |2 /σ22 . In his paper is case is
called symmetric channel [5]. That is, the two users have the
same SNR. In this case, RAMA-I always achieves higher sum
rate than NOMA.
To show this, we calculate the sum rate for NOMA and
RAMA. For NOMA, the sum rate can be calculated as
p2 |h2 |2 /σ22
p1 |h1 |2
N
+ log2 1 +
Rsum
= log2 1 +
2
σ1
p1 |h2 |2 /σ22 + 1
p1 |h1 |2
p2 |h2 |2 /σ22
(a)
= log2 1 +
1
+
σ12
p1 |h2 |2 /σ22 + 1
2
p2 |h2 |2
p1 |h1 |
+
= log2 1 +
σ12
σ22
2
p|h|
(b)
= log2 1 + 2 .
(11)
σ
The (a) follows from log2 a + log2 b = log2 ab and the (b)
follows the assumption that |h|2 /σ 2 = |h1 |2 /σ12 = |h2 |2 /σ22
and p1 + p2 = p.
Also, for RAMA-I, we follow the same steps as in NOMA.
Hence, it is obtained as
R,I
Rsum
= log2 1 +
p2 |h|4
p|h|2
+
.
2
σ
4σ 4
(12)
R,I
N
Since p2 |h|4 /4σ 4 > 0, it gives Rsum
≤ Rsum
.
2 Detail analysis of the impact of sidelobes on inter-user interference in
RAMA is subject of future research.
Case II: p|h1 |2 /σ12 ≥ p|h2 |2 /σ22 . Here, this case is called
asymmetric channel [5]. That is, we assume that the channel
gain for User 1 is stronger than User 2. It can be shown that
for asymmetric channels, RAMA-I achieves higher sum rate
than NOMA when the power isproperly allocated for Users 1
and 23 . To proof this claim, we have
p2 |h2 |2 /σ22
p1 |h1 |2
) + log2 (1 +
)
2
σ1
p1 |h2 |2 /σ22 + 1
p1 |h1 |2
p2 |h2 |2 /σ22
(a)
= log2 (1 +
)(1 +
)
2
σ1
p1 |h2 |2 /σ22 + 1
(b)
p2 |h2 |2
p1 |h1 |2
)(1 +
) ,
(13)
≤ log2 (1 +
2
σ1
σ22
N
Rsum
= log2 (1 +
where p1 and p2 are allocated power for Users 1 and 2, respectively. Also, the (a) follows from log2 a + log2 b = log2 ab,
and the (b) is a result of p1 |h2 |2 /σ22 > 0 . Using (13) and
R,I
N
(10), the inequality Rsum
≤ Rsum
holds when the following
condition follows
(1 +
p1 |h1 |2
p2 |h2 |2
p|h1 |2
p|h2 |2
)(1
+
)
≤
(1
+
)(1
+
).
σ12
σ22
2σ12
2σ22
(14)
Obviously, for p1 /p ∈ (0, 0.5] the inequality holds which
indicates that User 1 should have lower power than User
2. Although this range is not tight, it gives a considerable
insight. This result implies that with proper power allocation in
NOMA, RAMA-I attains higher sum rate. However, RAMA-I
may not achieve user fairness when channel gain of one of
the users is significantly greater than that of other user. In this
case, the allocated power for user with strong channel gain
should be far less than other user and equal power allocation
would not lead to user fairness.
B. RAMA with full CSI
Assume that full CSI is available at the BS. Furthermore, the
BS can unequally allocate the power between two users. For
signals s1 and s2 that are chosen from a QAM constellation
and E[|si |2 ] = 1, the relationship between two arbitrary signals
selected from the constellation is given by
s2 = s1 s̄ej∆θ
(15)
where s̄ denotes |s2 |/|s1 |. For RAMA with full CSI, the
transmitted signal is defined as
p
p
(16)
x = p′ s1 ≡ ( p1 + p2 s̄2 )s1 ,
which obviously has the same average power as the signal in
(1). It is assumed that p1 and p2 are the allocated power for
User 1 and User 2, respectively. For simplicity, we consider
that our power allocation strategy is exactly the same as
NOMA. Fig. 3 depicts the schematic of the reconfigurable
antenna for the RAMA. The phase detector and s̄ calculator
blocks calculate ej∆θ and s̄, respectively. The beam selection
network first selects two suitable TSA feeds, TSA 1 and TSA
2 which are highlighted with black color in Fig. 3, based on
3 To achieve user fairness in NOMA, when |h |2 /σ 2 ≥ |h |2 /σ 2 we have
1
2
2
1
p1 ≤ p2 [5].
•
Power
Allocated power to User 1
Allocated power to User 2
...,s2, s1
Beam Selection Network
Time/Frequency/Code
RF
Im
Re
16-QAM constellation
Calculator
Phase
Detector
User 1
s1
√p 1
√p2 s
2
User 2
TSA feed
ejΔθ
RAMA technique is not an alternative for NOMA. When
users are aligned in the same direction, RAMA would
be combined with one of OMA or NOMA. Accordingly,
the users located on the same direction are considered
as a cluster. Each cluster will be served via RAMAOMA or RAMA-NOMA. Integration of RAMA with
other multiple access techniques is beyond the scope of
this paper.
Fig. 3. Schematic of the BS for reconfigurable antenna multiple access
technique regarding full CSI and unequal power allocation. It is assumed
that signals s1 and s2 are chosen from 16-QAM constellation.
σ2
respectively.
It is straightforward to show that RAMA with full CSI,
denoted by RAMA-II, achieves higher sum rate than NOMA
R,II
N
irrespective of their channel condition. That is, Rsum
≤ Rsum
R,II
R,II
R,II
where Rsum = R1 +R2 . Further, RAMA-II considers user
fairness the same as NOMA.
The following remarks are in order:
•
•
•
RAMA-I significantly reduces system overhead. In highly
dense networks where the number of users is much larger
than two, it is reasonable to serve users with RAMAI. This is because RAMA-I does not require the BS to
know full CSI and only users’ direction information is
necessary.
RAMA-I implemented by PSK constellation is very
simple in practice. The beam selection network divides
the power equally by using a simple power divider and
the received signal is interference-free. The later is also
preserved for RAMA-II. Indeed, RAMA-I with QAM
constellations is operational if power is divided properly.
RAMA-II needs an optimal power allocation strategy. In
Subsection III.B it is pointed out that power allocation
strategy for NOMA is adopted for RAMA-II. However, this strategy may not be efficient in reconfigurable
antenna systems with full CSI. It is because NOMA
considers the interference and the minimum achievable
rate for each user to allocate the power [5]. Whereas,
for RAMA-II the interference is removed which leads to
designing a better power allocation strategy.
20
20
NOMA
RAMA
10
5
0
-10
NOMA, p 1 /p = 1/4
NOMA, p 1 /p = 1/2
15
Sum rate (bits/s/Hz)
respectively. Accordingly, the achievable rate for user 1 and
user 2 is obtained as
RR,II = log (1 + p1 |h21 |2 ),
1
2
σ1
(18)
2
R2R,II = log2 (1 + p2 |h22 | ),
This section evaluates the performance of the proposed multiple access technique by using numerical computations, where
the analytical findings will also be verified. Transmission in
mmWave bands can be done through both line-of-sight and
non line-of-sight paths. Here, for the sake of simplicity, we
will consider Rayleigh fading channels.
Figure 4.(a) represents sum rate versus symmetric channel
plot for NOMA and RAMA-I. It is clear that RAMA-I
achieves better sum rate than NOMA. This is because the user
achievable rate for NOMA is limited due to the interference
from other users, which degrades the sum rate. At symmetric
channels, the interference in NOMA is severe and as a result
leads to a considerable sum rate gap compared to the RAMA
technique which is an inter-user interference-free technique.
This result verifies our claim in Case I Subsection III.A.
Figure 4.(b) illustrates sum rate performance versus asymmetric channel gains for RAMA-I and various power allocation schemes for NOMA. The aim of this simulation is to support our analytical finding in Case II Subsection III.A. When (p|h1 |2 /σ12 )/(p|h2 |2 /σ22 ) is not large
enough, RAMA-I outperforms NOMA for all power allocation
schemes since the channel is similar to a symmetric channel.
By increasing (p|h1 |2 /σ12 )/(p|h2 |2 /σ22 ) channel satisfies the
condition in Case II Subsection III.A. At high region of
(p|h1 |2 /σ12 )/(p|h2 |2 /σ22 ), for p1 /p ≤ 1/2, e.g., p1 /p = 1/2
and p1 /p = 1/4, sum rate of RAMA-I is always better than
NOMA which is consistent with Case II. For p1 /p = 3/4 and
large channel gain difference, NOMA has a little better sum
rate. This is because much more power is allocated to User 1
which can nearly achieve maximum sum rate. However, in this
condition NOMA does not consider user fairness. In contrast,
Sum rate (bits/s/Hz)
the users’ DoA information. Then, it divides the signal in (16)
√
√
into p1 s1 and p2 s1 s̄ regarding the obtained CSI and the
√
power allocation strategy. The signal for User 1, p1 s1 , is
ready to transmit from TSA 1. For User 2, the intended signal
√
√
is built by multiplying p2 s1 s̄ by ej∆θ which yields p2 s2 .
Hence, the interference-free received signals for Users 1 and
2 are attained as
(
√
z 1 = p 1 s1 + n1 ,
(17)
√
z 2 = p 2 s2 + n2 ,
IV. N UMERICAL C OMPUTATIONS
NOMA, p 1 /p = 3/4
15
RAMA-I, p 1 /p = 1/2
10
5
0
0
10
2
20
2
30
0
20
40
|h 1 | = |h 2 | (dB)
(p|h 1 | 2 / 21 )/(p|h 2 | 2 / 22 ) (dB)
(a)
(b)
60
Fig. 4. Sum rate comparison between NOMA and RAMA-I for (a) symmetric
channel and (b) symmetric channel.
4
3
2
OMA
NOMA
RAMA-II
1
0
0
2
4
Achievable rate for User 1 (bits/s/Hz)
Achievable rate for User 2 (bits/s/Hz)
Achievable rate for User 2 (bits/s/Hz)
5
1
0.8
0.6
0.4
0.2
OMA
NOMA
RAMA-II
0
0
5
10
Achievable rate for User 1 (bits/s/Hz)
(a)
(b)
Fig. 5. Achievable rate region of user 1 and 2 for (a) symmetric channel
with p|hi |2 /σi2 = 15 dB for i = 1, 2 and (b) asymmetric channel with
p|h1 |2 /σ12 = 30 dB and p|h2 |2 /σ22 = 0 dB.
equal power is allocated for the users in RAMA-I and they
cannot exploit maximum sum rate.
Figure 5 shows achievable rate region of two users for
OMA, NOMA, and RAMA-II. OMA in downlink transmission
is assumed to be implemented by OFDMA technique where
R1O = βlog2 (1 + p1 |h1 |2 /βσ12 ) and R2O = (1 − β)log2 (1 +
p2 |h2 |2 /(1 − β)σ22 ) are achievable rate for Users 1 and 2 with
the bandwidth of β Hz assigned to User 1 and (1 − β) Hz
assigned to User 2 [5].
In Fig. 5.(a), channel is assumed to be symmetric where its
gain is set to p|hi |2 /σi2 = 15 dB for i = 1, 2. The achievable
rate region for OMA and NOMA are identical. The region
for RAMA-II is much wider than that for OMA and NOMA
because RAMA-II neither suffers from inter-user interference
nor divides the bandwidth among users. For instance, when
User 2 achieves rate 2.5 bits/s/Hz, achievable rate of User 1
for RAMA-II with channel gain information is approximately
twice higher than that for OMA and NOMA. Notice that when
total power is allocated to either user, all three techniques are
able to achieve maximum rate for that user.
The achievable sum rate of asymmetric channel for
p|h1 |2 /σ12 = 30 dB and p|h2 |2 /σ22 = 0 dB has been
represented in Fig. 5.(b). From the figure it is clear that for
OMA, NOMA and RAMA-II only User 1 achieves maximum
sum rate when whole power is allocated to that user. However,
NOMA achieves wider rate region than OMA as expected.
Interestingly, the achievable rate region for RAMA-II is greater
than NOMA. As an example, when we want that User 1
to achieve 8 bits/s/Hz, User 1 can reach 0.68 bits/s/Hz and
0.8 bits/s/Hz with NOMA and RAMA-II, respectively. This is
because the achievable rate of User 2 in NOMA is affected by
the interference term from User 1 due to using superposition
coding at transmitter and SIC process at receiver. Whereas, in
RAMA-II, User 1 does not impart interference on User 2. In
other words, the gap between NOMA and RAMA-II reflects
the impact of inter-user interference on NOMA.
V. C ONCLUSION
In this paper, we propose a new multiple access technique
for mmWave reconfigurable antennas in order to simultaneously support two users by using a single BS in downlink.
First, we show that NOMA is not suitable technique for
serving the users for the reconfigurable antenna systems. Then,
by wisely using the properties of reconfigurable antennas, a
novel multiple access technique called RAMA is designed
by assuming partial CSI and full CSI. The proposed RAMA
provides mmWave reconfigurable antenna system with an
inter-user interference-free user serving. That is, the users with
higher allocated power do not required to decode signals of
other users. It is shown that for symmetric channels RAMAI outperforms NOMA for an arbitrary p1 in terms of sum
rate. Also, for asymmetric channels RAMA-I demonstrates
better sum rate performance if approximately more than half
of the power is allocated to User 2. Further, RAMA-II always
achieves higher sum rate than NOMA. Numerical computations support our analytical investigations.
Ensemble Estimation of Mutual Information
Kevin R. Moon1 , Kumar Sricharan2 , and Alfred O. Hero III3,*
arXiv:1701.08083v1 [] 27 Jan 2017
1 Department of Genetics, Yale University — 2 Xerox PARC — 3 EECS Department, University of Michigan
Abstract—We derive the mean squared error convergence rates
of kernel density-based plug-in estimators of mutual information
measures between two multidimensional random variables X and
Y for two cases: 1) X and Y are both continuous; 2) X is continuous and Y is discrete. Using the derived rates, we propose an
ensemble estimator of these information measures for the second
case by taking a weighted sum of the plug-in estimators with
varied bandwidths. The resulting ensemble estimator achieves the
1/N parametric convergence rate when the conditional densities
of the continuous variables are sufficiently smooth. To the best of
our knowledge, this is the first nonparametric mutual information
estimator known to achieve the parametric convergence rate for
this case, which frequently arises in applications (e.g. variable
selection in classification). The estimator is simple to implement
as it uses the solution to an offline convex optimization problem
and simple plug-in estimators. A central limit theorem is also
derived for the ensemble estimator. Ensemble estimators that
achieve the parametric rate are also derived for the first case (X
and Y are both continuous) and another case 3) X and Y may
have any mixture of discrete and continuous components.
I. INTRODUCTION
Mutual information (MI) estimation has many applications in machine learning, including fMRI data processing [1], structure learning [2], independent subspace analysis [3], forest density estimation [4], clustering [5], neuron classification [6], and intrinsically motivated reinforcement
learning [7], [8]. Another particularly common application
is feature selection or extraction where features are chosen
to maximize the MI between the chosen features X and
the outcome variables Y [9]–[12]. In many of these applications, the predictor labels have discrete components (e.g.
classification labels) while the input variables have continuous
components. To the best of our knowledge, there are currently
no nonparametric MI estimators that are known to achieve the
parametric mean squared error (MSE) convergence rate 1/N
when X and/or Y contain discrete components. Also, while
many nonparametric estimators of MI exist, most can only
be applied to specific information measures (e.g. Shannon or
Rényi information). In this paper, we provide a framework
for nonparametric estimation of a large class of MI measures
where we only have available a finite population of i.i.d.
samples. We separately consider three cases: 1) X and Y
are both continuous; 2) X is continuous and Y is discrete; 3)
X and Y may have any mixture of discrete and continuous
components. We focus primarily on the second case which
includes the problem of feature selection in classification. We
derive an MI estimator for this case that achieves the parametric MSE rate when the conditional densities of the continuous variables are sufficiently smooth. We also show how these estimators are extended to the first and third cases.
∗ This research was partially supported by ARO MURI grant W911NF-15-1-0479 and DOE grant DE-NA0002534.
Our estimation method applies to other MI measures in
addition to Shannon information, which have been the focus
of much interest. The authors of [9] defined an information measure based on a quadratic divergence that could be
estimated more efficiently than Shannon information. A MI
measure based on the Pearson divergence was considered in
[13] for computational efficiency and numerical stability. The
authors of [14] and [3] used minimal spanning tree and generalized nearest-neighbor graph approaches, respectively, to estimate Rényi information.
A. Related Work
Many estimators for Shannon MI between continuous random variables have been developed. A popular k-nn-based
estimator was proposed in [15] which is a modification of
the entropy estimator derived in [16]. However, these estimators only achieve the parametric convergence rate when
the dimension of each of the random variables is less than 3
[17]. Similarly, the Rényi information estimator in [3] does not
achieve the parametric rate. Some other estimators are based
on maximum likelihood estimation of the likelihood ratio [18]
and minimal spanning trees [19].
Recent work has focused on nonparametric divergence estimation for purely continuous random variables. One approach
[20]–[23] uses an optimal kernel density estimator (KDE) to
achieve the parametric convergence rate when the densities are
at least d [22], [23] or d/2 [20], [21] times differentiable where
d is the dimension of the data. These optimal KDEs require
knowledge of the density support boundary and are difficult
to construct near the boundary. Numerical integration may
also be required for estimating some divergence functionals
under this approach, which can be computationally expensive.
In contrast, our approach to MI estimation does not require numerical integration and can be performed without knowledge
of the support boundary.
More closely related work [24]–[28] uses an ensemble
approach to estimate entropy or divergence functionals. These
works construct an ensemble of simple plug-in estimators by
varying the neighborhood size of the density estimators. They
then take a weighted average of the estimators where the
weights are chosen to decrease the bias with only a small
increase in the variance. The parametric rate of convergence is
achieved when the densities are either d [24]–[26] or (d+1)/2
[27], [28] times differentiable. These approaches are simple to
implement as they only require simple plug-in estimates and
the solution of an offline convex optimization problem. These
estimators have also performed well in various applications
[29]–[33].
Finally, the authors of [34] showed that k-nn or KDE based
approaches underestimate the MI when the MI is large. As
MI increases, the dependencies between random variables increase which results in less smooth densities. Thus a common
approach to overcome this issue is to require the densities to
be smooth [20]–[28].
B. Contributions
In the context of this related work, we make the following
novel contributions in this paper: (1) For continuous random
variables (case 1), we extend the asymptotic bias and variance
results for divergence estimators [27], [28] to kernel density
plug-in MI estimators without boundary correction [35] by
incorporating machinery to handle the dependence between
the product of marginal density estimators (Section II), (2) we
extend the theory to handle discrete random variables in the
mixed cases (cases 2 and 3) by reformulating the densities
as a mixture of the conditional density of the continuous
variables given the discrete variables (Section III), and (3) we
leverage this theory for the mixed cases in conjunction with
the generalized theory of ensemble estimators [27], [28] to
derive, to the best of our knowledge, the first non-parametric
estimator that achieves a parametric rate of MSE convergence
of O (1/N ) for the mixed cases (Section IV), where N is
the number of samples available from each distribution. We
also derive a central limit theorem for the ensemble estimators
(Section IV-C). We verify the theory through experiments
(Section V).
II. CONTINUOUS RANDOM VARIABLES
In this section, we obtain MSE convergence rates of plug-in MI estimators when X and Y are continuous (case 1 in
Section I). This will enable us to derive the MSE convergence
rates of plug-in MI estimators when X is continuous and
Y is discrete and when X and Y may have any mixture
of continuous and discrete components (respectively, cases
2 and 3 in Section I). These rates can then be used to
derive ensemble estimators that achieve the parametric MSE
rate. Let $f_X(x)$, $f_Y(y)$, and $f_{XY}(x,y)$ be $d_X$-, $d_Y$-, and $(d_X + d_Y) = d$-dimensional densities. Let $g(t_1, t_2) = g\!\left(\frac{t_1}{t_2}\right)$ (e.g. $g(t_1, t_2) = \log(t_1/t_2)$ for Shannon information). We define a family of MIs as
$$G_1(\mathbf{X}; \mathbf{Y}) = \int g\!\left(\frac{f_X(x)\, f_Y(y)}{f_{XY}(x,y)}\right) f_{XY}(x,y)\, dx\, dy. \qquad (1)$$

A. The KDE Plug-in Estimator

When both X and Y are continuous with marginal densities $f_X$ and $f_Y$, the MI functional $G_1(\mathbf{X};\mathbf{Y})$ can be estimated using KDEs. Assume that $N$ i.i.d. samples $\{Z_1, \ldots, Z_N\}$ are available from the joint density $f_{XY}$ with $Z_i = (X_i, Y_i)^T$. Let $M = N - 1$ and let $h_X$, $h_Y$ be kernel bandwidths. Let $K_X(\cdot)$ and $K_Y(\cdot)$ be kernel functions with $\|K_X\|_\infty, \|K_Y\|_\infty < \infty$ where $\|K\|_\infty = \sup_x |K(x)|$. The KDE for $f_X$ is
$$\tilde{f}_{X,h_X}(X_j) = \frac{1}{M h_X^{d_X}} \sum_{\substack{i=1 \\ i \neq j}}^{N} K_X\!\left(\frac{X_j - X_i}{h_X}\right). \qquad (2)$$
The KDEs $\tilde{f}_{Y,h_Y}(Y_j)$ and $\tilde{f}_{Z,h_Z}(X_j, Y_j)$ (where $h_Z = (h_X, h_Y)$) for estimating $f_Y$ and $f_{XY}$, respectively, are defined similarly using $K_Y$ and the product kernel $K_X \cdot K_Y$. Then $G_1(\mathbf{X}; \mathbf{Y})$ is estimated as
$$\tilde{G}_{h_X, h_Y} = \frac{1}{N} \sum_{i=1}^{N} g\!\left(\frac{\tilde{f}_{X,h_X}(X_i)\, \tilde{f}_{Y,h_Y}(Y_i)}{\tilde{f}_{Z,h_Z}(X_i, Y_i)}\right). \qquad (3)$$
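As a concrete illustration of (2) and (3), the following is a minimal Python sketch of the plug-in estimator with a product uniform kernel; the helper names (`uniform_kde`, `plugin_mi`) and the default choice $g(t) = \sqrt{t}$ (the Rényi-α integrand with α = 0.5 used later in Section V) are illustrative assumptions rather than a reference implementation from the paper.

```python
import numpy as np

def uniform_kde(points, h):
    """Leave-one-out KDE at each of the N points with the product uniform kernel
    K(u) = 1{|u_k| <= 1/2 for all k}, i.e. Eq. (2) with M = N - 1."""
    N, d = points.shape
    diffs = (points[:, None, :] - points[None, :, :]) / h
    inside = np.all(np.abs(diffs) <= 0.5, axis=2).astype(float)
    np.fill_diagonal(inside, 0.0)          # exclude the point itself (i != j)
    return inside.sum(axis=1) / ((N - 1) * h ** d)

def plugin_mi(X, Y, h_X, h_Y, g=lambda t: np.sqrt(t)):
    """Plug-in estimator of G_1(X;Y) in Eq. (3); g defaults to the Renyi-alpha
    integrand with alpha = 0.5."""
    d_X, d_Y = X.shape[1], Y.shape[1]
    fX = uniform_kde(X, h_X)
    fY = uniform_kde(Y, h_Y)
    # Joint KDE with the product kernel K_X * K_Y: rescale each block by its
    # bandwidth so a single call with bandwidth 1 applies, then renormalize.
    fZ = uniform_kde(np.hstack([X / h_X, Y / h_Y]), 1.0) / (h_X ** d_X * h_Y ** d_Y)
    # Small floor on fZ is a numerical guard against empty kernel neighborhoods.
    return np.mean(g(fX * fY / np.maximum(fZ, 1e-12)))
```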
B. Convergence Rates
To derive the convergence rates of G̃hX ,hY we assume that
1) fX , fY , fXY , and g are smooth; 2) fX and fY have
bounded support sets SX and SY ; 3) fX , fY , and fXY are
strictly lower bounded on their support sets. More specifically,
we assume that the densities belong to the bounded Hölder
class Σ(s, H) (the precise definition is included in the appendices) which implies that the densities are r = ⌊s⌋ times
differentiable. These assumptions are comparable to those
in similar studies on asymptotic convergence analysis [20]–
[26], [28]. To derive the convergence rates without boundary
corrections, we also assume that 4) the boundary of the support
set is smooth with respect to the corresponding kernels. The
full assumptions are
• (A.0): The kernels KX and KY are symmetric product
kernels with bounded support.
• (A.1): There exist constants ǫ0 , ǫ∞ such that 0 < ǫ0 ≤
fX (x) ≤ ǫ∞ < ∞, ∀x ∈ SX , ǫ0 ≤ fY (y) ≤ ǫ∞ , ∀y ∈
SY , and ǫ0 ≤ fXY (x, y) ≤ ǫ∞ , ∀(x, y) ∈ SX × SY .
• (A.2): Each of the densities belong to Σ(s, H) in the
interior of their support sets with s ≥ 2.
• (A.3): g (t1 /t2 ) has an infinite number of mixed derivatives wrt t1 and t2 .
• (A.4): $\frac{\partial^{k+l} g(t_1,t_2)}{\partial t_1^k \partial t_2^l}/(k!\,l!)$, $k, l = 0, 1, \ldots$, are strictly upper bounded for $\epsilon_0 \leq t_1, t_2 \leq \epsilon_\infty$.
• (A.5): Let $K$ be either $K_X$ or $K_Y$, $S$ either $S_X$ or $S_Y$, and $h$ either $h_X$ or $h_Y$. Let $p_x(u): \mathbb{R}^d \to \mathbb{R}$ be a polynomial in $u$ of order $q \leq r = \lfloor s \rfloor$ whose coefficients are a function of $x$ and are $r - q$ times differentiable. For any positive integer $t$,
$$\int_{x \in S} \left( \int_{u:\, K(u) > 0,\ x + uh \notin S} K(u)\, p_x(u)\, du \right)^t dx = v_t(h),$$
where $v_t(h)$ admits the expansion
$$v_t(h) = \sum_{i=1}^{r-q} e_{i,q,t}\, h^i + o\!\left(h^{r-q}\right),$$
for some constants $e_{i,q,t}$.
Assumption (A.5) states that the support of the density is
smooth with respect to the kernel K in the sense that the
expectation with respect to any random variable u of the area
of the kernel that falls outside the support S is a smooth
function of the bandwidth h provided that the distribution
function px (u) of u is smooth (e.g. s ≥ 2). The inner integral
captures this expectation while the outer integral averages
this inner integral over all points near the boundary of the
support. The vt (h) term captures the fact that the smoothness
of this expectation is proportional to the smoothness of the
function px (u). As an example, this smoothness assumption
is satisfied when the support is rectangular and the kernel is the
uniform rectangular kernel [27], [28]. Note that this boundary
assumption does not result in parametric convergence rates for
the plug-in estimator G̃hX ,hY , which is in contrast with the
boundary assumptions in [20]–[23]. However, the estimators in
[20]–[23] perform boundary correction, which requires knowledge of the density support boundary and complex calculations
at the boundary in addition to the boundary assumptions, to
achieve the parametric convergence rates. In contrast, we use
ensemble methods to improve the resulting convergence rates
of G̃hX ,hY without boundary correction.
Theorem 1. (Bias) Under assumptions A.0 − A.5 and for
general g, the bias of G̃hX ,hY is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_Y}\right] = \sum_{\substack{i,j=0 \\ i+j \neq 0}}^{r} c_{10,i,j}\, h_X^i h_Y^j + \frac{c_{11}}{N h_X^{d_X} h_Y^{d_Y}} + O\!\left(h_X^s + h_Y^s + \frac{1}{N h_X^{d_X} h_Y^{d_Y}}\right). \qquad (4)$$
If $g(t_1, t_2)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_1^j \partial t_2^l}$ that depend on $t_1$ and $t_2$ only through $t_1^\alpha t_2^\beta$ for some $\alpha, \beta \in \mathbb{R}$ for each $1 \leq k, l \leq \lambda$, the bias of $\tilde{G}_{h_X,h_Y}$ is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_Y}\right] = \sum_{m,n=0}^{\lfloor \lambda/2 \rfloor} \sum_{\substack{i,j=0 \\ i+j+m+n \neq 0}}^{r} c_{11,j,i,m,n}\, \frac{h_X^i h_Y^j}{\left(N h_X^{d_X}\right)^m \left(N h_Y^{d_Y}\right)^n} + \sum_{m=1}^{\lfloor \lambda/2 \rfloor} \sum_{i=0}^{r} \sum_{j=0}^{r} c_{13,m,n,j}\, \frac{h_X^i h_Y^j}{\left(N h_X^{d_X} h_Y^{d_Y}\right)^m} + O\!\left(h_X^s + h_Y^s + \frac{1}{\left(N h_X^{d_X} h_Y^{d_Y}\right)^{\lambda/2}}\right). \qquad (5)$$
The constants in both (4) and (5) depend only on the densities and their derivatives, the functional g and its derivatives,
and the kernels. They are independent of N, hX , and hY .
The purpose of Theorem 1 is two-fold. First, we use
Theorem 1 to derive the bias expressions for the MI plugin estimators when X and Y may have a mixture of discrete
and continuous components (cases 2 and 3) in Section III.
Second, in conjunction with Theorem 2 which follows, the
results in Theorem 1 can be used to derive MI ensemble
estimators in Appendix B-A that achieve the parametric MSE
convergence rate when the densities are sufficiently smooth.
The expression in (5) enables us to achieve the parametric rate
under less restrictive smoothness assumptions on the densities
(s > d/2 for (5) compared to s ≥ d for (4)). The extra
condition required on the mixed derivatives of g to obtain the
expression in (5) is satisfied, for example, for Shannon and
Renyi information measures.
Theorem 2. (Variance) If the functional g is Lipschitz continuous in both of its arguments with Lipschitz constant Cg ,
then the variance of G̃hX ,hY is
$$\mathbb{V}\left[\tilde{G}_{h_X,h_Y}\right] \leq \frac{22\, C_g^2\, \|K_X \cdot K_Y\|_\infty^2}{N}.$$
Similar to Theorem 1, Theorem 2 is used to derive variance
expressions for the MI plug-in estimators under cases 2 and
3. Theorem 2 is also necessary to derive optimally weighted
ensemble estimators. The proofs of Theorems 1 and 2 are
similar to the proofs of the bias and variance results for
the divergence functional estimators in [27]. The primary
difference is in handling certain products of the marginal
KDEs that appear in the expansion of the MSE. See Appendix
C and D for details.
Theorems 1 and 2 indicate that for the MSE of the plug-in
estimator to go to zero for case 1, we require hX , hY → 0
and N hdXX hdYY → ∞. The Lipschitz assumption on g is
comparable to other nonparametric estimators of distributional
functionals [20]–[23], [27]. Specifically, assumption A.1 ensures that functionals such as those for Shannon and Renyi
informations are Lipschitz on the space ǫ0 to ǫ∞ .
III. MIXED RANDOM VARIABLES
In this section, we extend the results of Section II to MI
estimation when X and Y may have a mixture of discrete and
continuous components. For simplicity, we focus primarily on
the important case when X is continuous and Y is discrete
(case 2 in Section I). The more general case when X and Y
may have any mixture of continuous and discrete components
(case 3 in Section I) is discussed in Section III-B. As an
example of the former case, if Y is a predictor variable (e.g.
classification labels), then the MI between X and Y indicates
the value of X as a predictor of Y. Although Y is discrete,
fXY = fZ is also a density. Let SX be the support of the
density fX and SY be the support of the probability mass
function fY . The MI is
$$G_2(\mathbf{X};\mathbf{Y}) = \sum_{y \in S_Y} \int g\!\left(\frac{f_X(x)\, f_Y(y)}{f_{XY}(x,y)}\right) f_{XY}(x,y)\, dx \qquad (6)$$
$$\phantom{G_2(\mathbf{X};\mathbf{Y})} = \sum_{y \in S_Y} f_Y(y) \int g\!\left(\frac{f_X(x)}{f_{X|Y}(x|y)}\right) f_{X|Y}(x|y)\, dx.$$
Let $N_y = \sum_{i=1}^{N} 1_{\{Y_i = y\}}$ where $y \in S_Y$. Let $\tilde{f}_{X,h_X}$ be as in (2) and define $\mathcal{X}_y = \{X_i \in \{X_1, \ldots, X_N\} \,|\, Y_i = y\}$. Then if $X_i \in \mathcal{X}_y$, the KDE of $f_{X|Y}(x|y)$ is
$$\tilde{f}_{X|y,h_{X|y}}(X_i) = \frac{1}{(N_y - 1)\, h_{X|y}^{d_X}} \sum_{\substack{X_j \in \mathcal{X}_y \\ j \neq i}} K_X\!\left(\frac{X_i - X_j}{h_{X|y}}\right).$$
We define the plug-in estimator $\tilde{G}_{h_X,h_{X|Y}}$ of (6) as
$$\tilde{G}_{h_X,h_{X|y}} = \frac{1}{N_y} \sum_{X \in \mathcal{X}_y} g\!\left(\tilde{f}_{X,h_X}(X) \big/ \tilde{f}_{X|y,h_{X|y}}(X)\right), \qquad \tilde{G}_{h_X,h_{X|Y}} = \sum_{y \in S_Y} \frac{N_y}{N}\, \tilde{G}_{h_X,h_{X|y}}. \qquad (7)$$
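To make (7) concrete, here is a short Python sketch of the conditional plug-in estimator for the mixed case; it reuses the `uniform_kde` helper from the earlier sketch, and the interface is again an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def plugin_mi_mixed(X, y, h_X, h_cond, g=lambda t: np.sqrt(t)):
    """Plug-in estimator of Eq. (7).

    X: (N, d_X) continuous samples, y: (N,) discrete labels,
    h_X: marginal bandwidth, h_cond: dict mapping each label to h_{X|y}."""
    N = X.shape[0]
    fX = uniform_kde(X, h_X)                           # marginal KDE from Eq. (2)
    estimate = 0.0
    for label in np.unique(y):
        mask = (y == label)
        Ny = int(mask.sum())
        f_cond = uniform_kde(X[mask], h_cond[label])   # KDE of f_{X|Y}(x | y)
        G_y = np.mean(g(fX[mask] / np.maximum(f_cond, 1e-12)))
        estimate += (Ny / N) * G_y                     # weight by N_y / N as in (7)
    return estimate
```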
A. Convergence Rates
To apply the theory of optimally weighted ensemble estimation to G̃hX ,hX|Y , we need to know its MSE as a function
of the bandwidths and the sample size.
Theorem 3. (Bias) Assume that assumptions A.0−A.5 apply to the functional $g$, the kernel $K_X$, and the densities $f_X$ and $f_{X|Y}$. Assume that $h_{X|y} = l N_y^{-\beta}$ with $0 < \beta < \frac{1}{d_X}$ and $l$ a positive number. Then the bias of $\tilde{G}_{h_X,h_{X|Y}}$ is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_{X|Y}}\right] = \sum_{\substack{i,j=0 \\ i+j \neq 0}}^{r} c_{13,i,j}\, h_X^i l^j N^{-j\beta} + \frac{c_{14,X}}{N h_X^{d_X}} + \frac{c_{14,Y}}{l^{d_X} N^{1-\beta d_X}} + O\!\left(h_X^s + N^{-s\beta} + \frac{1}{N h_X^{d_X}} + \frac{1}{N^{1-\beta d_X}}\right). \qquad (8)$$
If $g(t_1, t_2)$ also has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_1^j \partial t_2^l}$ that depend on $t_1$ and $t_2$ only through $t_1^\alpha t_2^\beta$ for some $\alpha, \beta \in \mathbb{R}$ for each $1 \leq j, l \leq \lambda$, then the bias is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_{X|Y}}\right] = \sum_{m,n=0}^{\lfloor \lambda/2 \rfloor} \sum_{\substack{i,j=0 \\ i+j+m+n \neq 0}}^{r} c_{14,j,i,m,n}\, \frac{h_X^i l^j N^{-j\beta}}{\left(N h_X^{d_X}\right)^m \left(l^{d_X} N^{1-\beta d_X}\right)^n} + O\!\left(h_X^s + N^{-s\beta} + \frac{1}{\left(N h_X^{d_X}\right)^{\lambda/2}} + \frac{1}{\left(N^{1-\beta d_X}\right)^{\lambda/2}}\right). \qquad (9)$$
Proof: We focus on (8) as (9) follows similarly. It can be shown that
$$\mathbb{B}\left[\tilde{G}_{h_X,h_{X|Y}}\right] = \mathbb{E}\left[\sum_{y \in S_Y} \frac{N_y}{N}\, \mathbb{B}\left[\tilde{G}_{h_X,h_{X|y}} \,\Big|\, Y_1, \ldots, Y_N\right]\right].$$
The conditional bias of $\tilde{G}_{h_X,h_{X|y}}$ given $Y_1, \ldots, Y_N$ can then be obtained from Theorem 1 as
$$\mathbb{B}\left[\tilde{G}_{h_X,h_{X|y}} \,\Big|\, Y_1, \ldots, Y_N\right] = \sum_{\substack{i,j=0 \\ i+j \neq 0}}^{r} c_{10,i,j}\, h_X^i h_{X|y}^j + O\!\left(h_X^s + h_{X|y}^s + \frac{1}{N_y h_X^{d_X}} + \frac{1}{N_y h_{X|y}^{d_X}}\right). \qquad (10)$$
Then given that $h_{X|y} \propto N_y^{-\beta}$, (7) gives terms of the form of $N_y^{1-\gamma}$ with $\gamma > 0$. $N_y$ is a binomial random variable with parameter $f_Y(y)$, $N$ trials, and mean $N f_Y(y)$. Thus we need to compute the fractional moments of a binomial random variable. By the generalized binomial theorem, we have that
$$N_y^{\alpha} = \left(N_y - N f_Y(y) + N f_Y(y)\right)^{\alpha} = \sum_{i=0}^{\infty} \binom{\alpha}{i} \left(N f_Y(y)\right)^{\alpha - i} \left(N_y - N f_Y(y)\right)^{i},$$
$$\implies \mathbb{E}\left[N_y^{\alpha}\right] = \sum_{i=0}^{\infty} \binom{\alpha}{i} \left(N f_Y(y)\right)^{\alpha - i}\, \mathbb{E}\left[\left(N_y - N f_Y(y)\right)^{i}\right].$$
From [36], the $i$-th central moment of $N_y$ has the form of
$$\mathbb{E}\left[\left(N_y - N f_Y(y)\right)^{i}\right] = \sum_{n=0}^{\lfloor i/2 \rfloor} c_{n,i}(f_Y(y))\, N^{n}.$$
Thus $\mathbb{E}\left[N_y^{1-\gamma}\right]$ has terms proportional to $N^{1-\gamma-i+n} \leq N^{1-\gamma-\lfloor i/2 \rfloor}$ for $i = 0, 1, \ldots$ since $n \leq \lfloor i/2 \rfloor$. Then since there is an $N$ in the denominator of (7), this leaves terms of the form of $N^{-\gamma}$ when $i = 0, 1$ and $N^{-1}$ for $i \geq 2$. This completes the proof for the bias. See Appendix E for more details.
Theorem 4. If the functional g is Lipschitz continuous in
both of its arguments and SY is finite, then the variance of
G̃hX ,hX|Y is O(1/N ).
Proof: By the law of total variance, we have
$$\mathbb{V}\left[\tilde{G}_{h_X,h_{X|Y}}\right] = \mathbb{E}\left[\mathbb{V}\left[\tilde{G}_{h_X,h_{X|Y}} \,\Big|\, Y_1, \ldots, Y_N\right]\right] + \mathbb{V}\left[\mathbb{E}\left[\tilde{G}_{h_X,h_{X|Y}} \,\Big|\, Y_1, \ldots, Y_N\right]\right].$$
Given all of the $Y_i$'s, the estimators $\tilde{G}_{h_X,h_{X|y}}$ are all independent since they use different sets of $X_i$'s for each $y$. From Theorem 2, we know that $\mathbb{V}\left[\tilde{G}_{h_X,h_{X|Y}} \,|\, Y_1, \ldots, Y_N\right] = O\!\left(\sum_{y \in S_Y} N_y / N^2\right)$. Taking the expectation then yields $O(1/N)$.
For the second term, we know from the proof of Theorem 3 that $\mathbb{E}\left[\tilde{G}_{h_X,h_{X|Y}} \,|\, Y_1, \ldots, Y_N\right]$ yields a sum of terms of the form of $N_y^{\gamma}/N$ for $0 < \gamma \leq 1$. Taking the variance of the sum of these terms yields a sum of terms of the form $\mathbb{V}\left[N_y^{\gamma}\right]/N^2$ (the covariance terms can be bounded by the Cauchy-Schwarz inequality to yield similar terms). Then $\mathbb{V}\left[N_y^{\gamma}\right]$ can be bounded by taking a Taylor series expansion of the functions $N_y^{\gamma}$ and $N_y^{2\gamma}$ at the point $N f_Y(y)$, which yields an expression that depends on the central moments of $N_y$. From this, we obtain $\mathbb{V}\left[N_y^{\gamma}\right] = O(N)$, which completes the proof. See Appendix E for details.
Theorems 3 and 4 provide exact expressions for the bias and
bounds on the variance of the plug-in MI estimator, respectively. It is shown in Section IV that the MSE of the plug-in
estimator converges very slowly to zero under this setting.
However, Theorems 3 and 4 provide us with the necessary
information for applying the theory of optimally weighted
ensemble estimation to obtain estimators with improved rates.
This is done in Section IV.
B. Extension to Other Cases
The results in Section III-A can be extended to the case
where X and/or Y may have a mixture of continuous and
discrete components (case 3 in Section I). This scenario can be
divided further into three different cases: A) X is continuous
and Y has a mixture of discrete and continuous components;
B) X and Y both have a mixture of discrete and continuous
components; C) Y is discrete and X has a mixture of discrete
and continuous components. Consider case A first. Denote the
discrete and continuous components of Y as Y1 and Y2 ,
respectively. Denote the respective support sets as SY1 and
SY2 . We can then write
$$G_{3A}(\mathbf{X};\mathbf{Y}) = \sum_{y_1 \in S_{Y_1}} \int g\!\left(\frac{f_X(x)\, f_Y(y_1, y_2)}{f_{XY}(x, y_1, y_2)}\right) f_{XY}(x, y_1, y_2)\, dx\, dy_2$$
$$\phantom{G_{3A}(\mathbf{X};\mathbf{Y})} = \sum_{y_1 \in S_{Y_1}} f_{Y_1}(y_1) \int g\!\left(\frac{f_X(x)\, f_{Y_2|Y_1}(y_2|y_1)}{f_{X Y_2|Y_1}(x, y_2|y_1)}\right) f_{X Y_2|Y_1}(x, y_2|y_1)\, dx\, dy_2. \qquad (11)$$
The subscript 3A indicates that we are considering case
A under the third case described in the introduction. The
expression in (11) is very similar to the expression in (6).
After plugging in KDEs for the corresponding densities and
conditional densities, a nearly identical procedure to that in
Section III-A can be followed to derive the bias and variance
of the corresponding plug-in estimator.
Now consider case B. Denote the discrete and continuous
components of X as X1 and X2 , respectively. Then if Y1 is
the discrete component of Y, then the expression inside the
g functional in (11) includes fX1 (x1 )fY1 (y1 )/fX1 Y1 (x1 , y1 ).
Thus the plug-in estimator must include estimators for
$f_{X_1}(x_1)$, $f_{Y_1}(y_1)$, and $f_{X_1 Y_1}(x_1, y_1)$. Define $N_{y_1} = \sum_{i=1}^{N} 1_{\{Y_{1,i} = y_1\}}$ where $Y_{1,i}$ is the discrete component of $Y_i$.
Then the estimator we use for fY1 (y1 ) is Ny1 /N . The estimators for fX1 (x1 ) and fX1 Y1 (x1 , y1 ) are defined similarly.
The bias and variance expressions of this plug-in estimator can
then be derived with some slight modifications of Theorems 1
and 2. See Appendix E-C for an expression for G3B (X; Y) in
this case and a sketch of these modifications. Case C follows
similarly as the expression inside the g functional in (11)
includes fX1 (x1 )fY (y)/fX1 Y (x1 , y) where all the terms are
probability mass functions.
The resulting bias and variance expressions in these settings
are analogous to those in Theorems 1, 2, and 3 as the variance
will be O(1/N ) and the bias will depend on expansions of the
bandwidths for the various KDEs. Ensemble methods can then
be applied to improve the MSE convergence rates as described
in the next section.
IV. ENSEMBLE ESTIMATION OF MI
A. Mixed Random Variables
We again focus on the case where X is continuous and
Y is discrete (case 2 in Section I). If no bias correction
is performed, then Theorem 3 shows that the optimal bias
rate of the plug-in estimator $\tilde{G}_{h_X,h_{X|Y}}$ is $O\!\left(1/N^{1/(d_X+1)}\right)$,
which converges very slowly to zero when dX is not small.
We use the theory of optimally weighted ensemble estimation
to improve this rate. An ensemble of estimators is formed by
choosing different bandwidth values. Consider first the case
where (8) applies. Let L be a set of real positive numbers with
|L| = L. This set will parameterize the bandwidths for f̃X,hX
and f̃X|y,hX|y resulting in L estimators in the ensemble. While
different parameter sets for f̃X,hX and f̃X|y,hX|y can be chosen,
we only use one set here for simplicity of exposition. To ensure that the final terms in (8) are $O(1/\sqrt{N})$ when $s \geq d$, for each estimator in the ensemble we choose $h_X(l) = l N^{-1/(2 d_X)}$ and $h_{X|y}(l) = l N_y^{-1/(2 d_X)}$ where $l \in \mathcal{L}$. Define $w$ to be a weight vector parameterized by $l \in \mathcal{L}$ with $\sum_{l \in \mathcal{L}} w(l) = 1$ and define
$$\tilde{G}_{w,1} = \sum_{l \in \mathcal{L}} w(l) \sum_{y \in S_Y} \frac{N_y}{N}\, \tilde{G}_{h_X(l),\, h_{X|y}(l)}. \qquad (12)$$
From Theorem 3, the bias of $\tilde{G}_{w,1}$ is
$$\mathbb{B}\left[\tilde{G}_{w,1}\right] = \sum_{l \in \mathcal{L}} \sum_{i=1}^{r} \theta\, w(l)\, l^i N^{\frac{-i}{2 d_X}} + O\!\left(\sqrt{L}\, \|w\|_2\, N^{\frac{-s}{2 d_X}} + N^{-\frac{1}{2}}\right), \qquad (13)$$
where we use θ notation to omit the constants.
We use the general theory of optimally weighted ensemble
estimation in [28] to improve the MSE convergence rate of
the plug-in estimator by using the weights to cancel the lower order terms in (13). The theory is as follows. Let $\{\hat{E}_l\}_{l \in \mathcal{L}}$ be an indexed ensemble of estimators with the weighted ensemble estimator $\hat{E}_w = \sum_{l \in \mathcal{L}} w(l)\, \hat{E}_l$ satisfying:
• C.1. Let $c_i$ be constants depending on the underlying density, $J = \{i_1, \ldots, i_I\}$ a finite index set with $I < L$, $\psi_i(l)$ basis functions depending only on the parameter $l$ and not on $N$, and $\phi_i(N)$ functions of the sample size $N$ that are independent of $l$. Assume the bias is
$$\mathbb{B}\left[\hat{E}_l\right] = \sum_{i \in J} c_i\, \psi_i(l)\, \phi_i(N) + O\!\left(\frac{1}{\sqrt{N}}\right).$$
• C.2. Assume the variance is
$$\mathbb{V}\left[\hat{E}_l\right] = c_v\, \frac{1}{N} + o\!\left(\frac{1}{N}\right).$$
Theorem 5. [28] If conditions C.1 and C.2 hold for an ensemble of estimators $\{\hat{E}_l\}_{l \in \mathcal{L}}$, then there exists a weight vector $w_0$ such that the MSE of $\hat{E}_{w_0}$ attains the parametric rate of convergence of $O(1/N)$. The weight $w_0$ is the solution to the offline convex optimization problem
$$\begin{aligned} \min_{w}\ & \|w\|_2 \\ \text{subject to}\ & \sum_{l \in \mathcal{L}} w(l) = 1, \\ & \gamma_w(i) = \sum_{l \in \mathcal{L}} w(l)\, \psi_i(l) = 0,\ i \in J. \end{aligned} \qquad (14)$$
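The optimization in (14) has a simple closed-form solution when the constraints are feasible: it is the minimum-norm solution of a small linear system. The sketch below computes it with numpy for the polynomial basis $\psi_i(l) = l^i$ used later for $\tilde{G}_{w,1}$; the function name and interface are illustrative assumptions.

```python
import numpy as np

def optimal_weights(bandwidth_params, powers):
    """Solve (14): min ||w||_2 subject to sum_l w(l) = 1 and
    sum_l w(l) * psi_i(l) = 0 for i in J, with psi_i(l) = l**i."""
    l = np.asarray(bandwidth_params, dtype=float)
    # First constraint row: weights sum to one.
    # Remaining rows: cancel each lower-order bias basis function l**i.
    A = np.vstack([np.ones_like(l)] + [l ** i for i in powers])
    b = np.zeros(A.shape[0])
    b[0] = 1.0
    # Minimum-norm solution of the underdetermined consistent system A w = b.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: 40 parameters linearly spaced in [1.2, 3], cancelling terms i = 1, ..., 4.
w0 = optimal_weights(np.linspace(1.2, 3.0, 40), range(1, 5))
```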
To apply Theorem 5 to an ensemble of estimators, all $\phi_i(N)$ functions that converge to zero slower than $1/\sqrt{N}$ and the
corresponding ψi (l) functions must be known for the base
estimator. Otherwise, Theorem 5 can only be guaranteed to
improve the bias up to the slowest unknown bias rate. This
theorem was applied in [28] to the problem of divergence
functional estimation where the plug-in estimator has slowly
converging bias but the resulting ensemble estimator achieves
the parametric rate for sufficiently smooth densities.
We apply Theorem 5 to the ensemble estimator G̃w,1 as
conditions C.1 and C.2 are satisfied with φi (N ) = N −i/(2dX )
and ψi (l) = li for i ∈ {1, . . . r} as seen in (8) and (13). If
s ≥ dX , then the MSE of the optimally weighted estimator
G̃w0 ,1 is O(1/N ). A similar approach can be used for the
case where X contains a mixture of continuous and discrete
components and Y is discrete (or vice versa). To the best of
our knowledge, these are the first nonparametric estimators
to achieve the MSE parametric rate in this setting of mixed
random variables.
If the mixed derivatives of the functional g satisfy the
extra condition required for (9), we can define an ensemble
estimator G̃w0 ,2 that achieves the parametric MSE rate if
s > dX /2. For simplicity, we focus primarily on G̃w0 ,1 . See
Appendix B-B for details on G̃w0 ,2 .
In practice, the optimization problem in (14) typically
results in a very large increase in variance. Thus we follow
the lead of [24]–[27] and use a relaxed version of (14):
$$\begin{aligned} \min_{w}\ & \epsilon \\ \text{subject to}\ & \sum_{l \in \mathcal{L}} w(l) = 1, \\ & \left|\gamma_w(i)\, N^{\frac{1}{2}} \phi_i(N)\right| \leq \epsilon,\ i \in J, \\ & \|w\|_2^2 \leq \eta. \end{aligned} \qquad (15)$$
As shown in [24]–[27], the ensemble estimator G̃w0 ,1 using
the resulting weight vector from the optimization problem in
(15) still achieves the parametric MSE convergence rate under
the same assumptions as described previously. It was also
shown in [27] that the heuristic of setting η = ǫ works well
in practice. Algorithm 1 summarizes the estimator G̃w0 ,1 .
A similar approach can be used to derive an ensemble
estimator $\tilde{G}^{\mathrm{cont}}_{w_0,1}$ for the case when X and Y are continuous
(case 1 in Section I). See Appendix B-A for details. The
case where X and Y both contain a mixture of discrete and
continuous components follows similarly.
Algorithm 1 Optimally weighted ensemble MI estimator $\tilde{G}_{w_0,1}$
Input: $L$ positive real numbers $\mathcal{L}$, samples $\{Z_1, \ldots, Z_N\}$ from $f_{XY}$, dimension $d_X$, function $g$, kernel $K_X$
Output: The optimally weighted MI estimator $\tilde{G}_{w_0,1}$
1: Solve for $w_0$ using (15) with basis functions $\psi_i(l) = l^i$, $\phi_i(N) = N^{-i/(2 d_X)}$, $l \in \mathcal{L}$, and $0 \leq i \leq d_X$.
2: for all $l \in \mathcal{L}$ and $y \in S_Y$ do
3:   $N_y \leftarrow \sum_{i=1}^{N} 1_{\{Y_i = y\}}$
4:   $h_X(l) \leftarrow l N^{-1/(2 d_X)}$, $h_{X|y}(l) \leftarrow l N_y^{-1/(2 d_X)}$
5:   for $X_i \in \mathcal{X}_y$ do
6:     Calculate $\tilde{f}_{X,h_X(l)}(X_i)$ and $\tilde{f}_{X|y,h_{X|y}(l)}(X_i)$ as described in the text
7:   end for
8:   $\tilde{G}_{h_X(l),h_{X|y}(l)} \leftarrow \frac{1}{N_y} \sum_{X \in \mathcal{X}_y} g\!\left(\tilde{f}_{X,h_X(l)}(X) \big/ \tilde{f}_{X|y,h_{X|y}(l)}(X)\right)$
9: end for
10: $\tilde{G}_{w_0,1} \leftarrow \sum_{l \in \mathcal{L}} w_0(l) \sum_{y \in S_Y} \frac{N_y}{N}\, \tilde{G}_{h_X(l),h_{X|y}(l)}$
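Putting the pieces together, the following Python sketch mirrors Algorithm 1 using the `optimal_weights` and `plugin_mi_mixed` helpers from the earlier sketches; note that it solves the exact program (14) rather than the relaxed program (15), which is a simplifying assumption.

```python
import numpy as np

def ensemble_mi_mixed(X, y, params, g=lambda t: np.sqrt(t)):
    """Sketch of Algorithm 1: the optimally weighted ensemble estimator."""
    N, d_X = X.shape
    w0 = optimal_weights(params, range(1, d_X + 1))      # step 1 (offline weights)
    estimate = 0.0
    for w_l, l in zip(w0, params):
        h_X = l * N ** (-1.0 / (2 * d_X))                # step 4: h_X(l)
        h_cond = {label: l * np.sum(y == label) ** (-1.0 / (2 * d_X))
                  for label in np.unique(y)}             # step 4: h_{X|y}(l)
        estimate += w_l * plugin_mi_mixed(X, y, h_X, h_cond, g=g)  # steps 5-10
    return estimate

# Example usage with the rule-of-thumb parameter set from Section V:
# G_hat = ensemble_mi_mixed(X, y, np.linspace(1.2, 3.0, 40))
```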
B. Parameter Selection

In theory, the theoretical results of the previous sections hold for any choice of the bandwidth vectors as determined by $\mathcal{L}$. In practice, we find that the following rules-of-thumb for tuning the parameters lead to high-quality estimates in the finite sample regime.
1) Select the minimum and maximum bandwidth parameters to produce density estimates that satisfy the following: first, the minimum bandwidth should not lead to a zero-valued density estimate at any sample point; second, the maximum bandwidth should be smaller than the diameter of the support.
2) Ensure the bandwidths are sufficiently distinct. Similar bandwidth values lead to a negligible decrease in bias, and many bandwidth values may increase $\|w_0\|_2$, resulting in an increase in variance [24].
3) Select $L = |\mathcal{L}| > |J| = I$ to obtain a feasible solution for the optimization problems in (14) and (15). We find that choosing a value of $30 \leq L \leq 60$ and setting $\mathcal{L}$ to be $L$ linearly spaced values between the minimum and maximum values described above works well in practice.
The resulting ensemble estimators are robust in the sense that they are not sensitive to the exact choice of the bandwidths or the number of estimators as long as the rough rules-of-thumb given above are followed. Moon et al. [27], [28] give more details on ensemble estimator parameter selection for continuous divergence estimation. These details also apply to the continuous parts of the mixed cases for MI estimation in this paper.
Since the optimal weight $w_0$ can be calculated offline, the computational complexity of the estimators is dominated by the construction of the KDEs, which has a complexity of $O(N^2)$ using the standard implementation. For very large datasets, more efficient KDE implementations (e.g. [37]) can be used to reduce the computational burden.
C. Central Limit Theorem
We finish this section with central limit theorems for the
ensemble estimators. This enables us to perform hypothesis
testing on the mutual information.
Theorem 6. Let $\tilde{G}^{\mathrm{cont}}_{w}$ be a weighted ensemble estimator when X and Y are continuous with bandwidths $h_X(l_X)$ and $h_Y(l_Y)$ for each estimator in the ensemble. Assume that the functional $g$ is Lipschitz in both arguments with Lipschitz constant $C_g$ and that $h_X(l_X), h_Y(l_Y) = o(1)$, $N \to \infty$, and $N h_X^{d_X}(l_X), N h_Y^{d_Y}(l_Y) \to \infty$ for each $l_X \in \mathcal{L}_X$ and $l_Y \in \mathcal{L}_Y$. Then for fixed $\mathcal{L}_X$ and $\mathcal{L}_Y$, and if $S$ is a standard normal random variable,
$$\Pr\left(\left(\tilde{G}^{\mathrm{cont}}_{w} - \mathbb{E}\left[\tilde{G}^{\mathrm{cont}}_{w}\right]\right) \Big/ \sqrt{\mathbb{V}\left[\tilde{G}^{\mathrm{cont}}_{w}\right]} \leq t\right) \to \Pr(S \leq t).$$
The proof is based on an application of Slutsky's Theorem preceded by an application of the Efron-Stein inequality (see Appendix F).
If the space $S_Y$ is finite, then the ensemble estimators for the mixed component case also obey a central limit theorem. The proof follows by an application of Slutsky's Theorem combined with Theorem 6.
Corollary 7. Let $\tilde{G}_{w}$ be a weighted ensemble estimator when X is continuous and Y is discrete with bandwidths $h_X(l)$ and $h_{X|y}(l)$ for each estimator in the ensemble. Assume that the functional $g$ is Lipschitz in both arguments and that $h_X, h_{X|y} = o(1)$, $N \to \infty$, and $N h_X^{d_X}, N h_{X|y}^{d_X} \to \infty$ for each $l \in \mathcal{L}$ and $\forall y \in S_Y$ with $S_Y$ finite. Then for fixed $\mathcal{L}$,
$$\Pr\left(\left(\tilde{G}_{w} - \mathbb{E}\left[\tilde{G}_{w}\right]\right) \Big/ \sqrt{\mathbb{V}\left[\tilde{G}_{w}\right]} \leq t\right) \to \Pr(S \leq t).$$
→ Pr (S ≤ t) .
V. E XPERIMENTAL VALIDATION
In this section, we validate our theory by estimating the
Rényi-α MI integral (i.e. g(x) = xα in (6); see [38]) where X
is a mixture of truncated Gaussian random variables restricted
to the unit cube and Y is a categorical random variable. We
choose Rényi MI as it has received recent interest (e.g. [3]) and
the estimation problem does not reduce to entropy estimation
in contrast with Shannon MI. Thus this is a clear case where
there are no other nonparametric estimators that are known to
achieve the parametric MSE rate.
We consider two cases. In the first case, Y has three
possible outcomes (i.e. |SY | = 3) and respective probabilities
Pr(Y = 0) = Pr(Y = 1) = 2/5 and Pr(Y = 2) = 1/5.
The conditional covariance matrices are all 0.1 × Id and
the conditional means are, respectively, µ̄0 = 0.25 × 1̄d ,
µ̄1 = 0.75 × 1̄d , and µ̄2 = 0.5 × 1̄d , where Id is the d × d
identity matrix and 1̄d is a d-dimensional vector of ones.
This experiment can be viewed as the problem of estimating
MI (e.g. for feature selection or Bayes error bounds) of a
classification problem where each discrete value corresponds
to a distinct class, the distribution of each class overlaps
slightly with others, and the class probabilities are unequal. We
use α = 0.5. We set L to be 40 linearly spaced values between
1.2 and 3. The bandwidth in the KDE plug-in estimator is also
set to 2.1N −1/(2d).
The top three plots in Figure 1 shows the MSE (200 trials) of
the plug-in KDE estimator of the MI integral using a uniform
Y is discrete and X is continuous. We also showed how
convergence rates can be obtained for the case when X and/or
Y contain a mixture of discrete and continuous components.
Using these rates, we defined ensemble estimators that achieve
an MSE rate of O(1/N ) when the densities are sufficiently
smooth and showed that a central limit theorem also holds. To
the best of our knowledge, this is the first nonparametric MI
estimator that achieves the MSE convergence rate of O(1/N )
in this setting of mixed random variables (i.e. X and Y are
not both purely discrete or purely continuous).
R EFERENCES
[1] B. Chai, D. Walther, D. Beck, and L. Fei-Fei, “Exploring functional
connectivities of the human brain using multivariate information analysis,” in Advances in neural information processing systems, pp. 270–278,
2009.
[2] K. R. Moon, M. Noshad, S. Y. Sekeh, and A. O. Hero III, “Information
theoretic structure learning with confidence,” in Proc IEEE Int Conf
Acoust Speech Signal Process, 2017.
[3] D. Pál, B. Póczos, and C. Szepesvári, “Estimation of rényi entropy and
mutual information based on generalized nearest-neighbor graphs,” in
Advances in Neural Information Processing Systems, pp. 1849–1857,
2010.
[4] H. Liu, L. Wasserman, and J. D. Lafferty, “Exponential concentration for
mutual information estimation with application to forests,” in Advances
in Neural Information Processing Systems, pp. 2537–2545, 2012.
[5] J. Lewi, R. Butera, and L. Paninski, “Real-time adaptive informationtheoretic optimization of neurophysiology experiments,” in Advances in
Neural Information Processing Systems, pp. 857–864, 2006.
[6] E. Schneidman, W. Bialek, and M. J. B. II, “An information theoretic
approach to the functional classification of neurons,” Advances in Neural
Information Processing Systems, vol. 15, pp. 197–204, 2003.
[7] S. Mohamed and D. J. Rezende, “Variational information maximisation
for intrinsically motivated reinforcement learning,” in Advances in
Neural Information Processing Systems, pp. 2116–2124, 2015.
[Figure 1 appears here: log-log plots of MSE versus sample size N for four panels, |S_Y| = 3 with d = 4, d = 6, and d = 9, and |S_Y| = 6 with d = 6; each panel compares the "Kernel" and "Weighted" estimators.]
Figure 1. MSE log-log plots as a function of sample size for the uniform
kernel plug-in MI estimator ("Kernel") and the proposed optimally weighted
ensemble estimator G̃w0 ,1 ("Weighted") for the distributions described in the
text. The top three plots each correspond to the first case where |SY | = 3
and the bottom plot corresponds to the second case where |SY | = 6. The
ensemble estimator outperforms the kernel plug-in estimator, especially for
larger sample sizes. Note also that as the dimension increases, the performance
gap between the two estimators increases.
[8] C. Salge, C. Glackin, and D. Polani, “Changing the environment based
on empowerment as intrinsic motivation,” Entropy, vol. 16, no. 5,
pp. 2789–2819, 2014.
[9] K. Torkkola, “Feature extraction by non parametric mutual information
maximization,” The Journal of Machine Learning Research, vol. 3,
pp. 1415–1438, 2003.
[10] J. R. Vergara and P. A. Estévez, “A review of feature selection methods
based on mutual information,” Neural Computing and Applications,
vol. 24, no. 1, pp. 175–186, 2014.
[11] H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information criteria of max-dependency, max-relevance, and minredundancy,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 8, pp. 1226–1238, 2005.
[12] N. Kwak and C.-H. Choi, “Input feature selection by mutual information
based on parzen window,” Pattern Analysis and Machine Intelligence,
IEEE Transactions on, vol. 24, no. 12, pp. 1667–1671, 2002.
[13] M. Sugiyama, “Machine learning with squared-loss mutual information,”
Entropy, vol. 15, no. 1, pp. 80–112, 2012.
[14] J. A. Costa and A. O. Hero, “Geodesic entropic graphs for dimension
and entropy estimation in manifold learning,” IEEE Transactions on
Signal Processing, vol. 52, no. 8, pp. 2210–2221, 2004.
[15] A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating mutual
information,” Physical review E, vol. 69, no. 6, p. 066138, 2004.
[16] L. Kozachenko and N. N. Leonenko, “Sample estimate of the entropy
of a random vector,” Problemy Peredachi Informatsii, vol. 23, no. 2,
pp. 9–16, 1987.
[17] W. Gao, S. Oh, and P. Viswanath, “Demystifying fixed k-nearest neighbor information estimators,” arXiv preprint arXiv:1604.03006, 2016.
[18] T. Suzuki, M. Sugiyama, J. Sese, and T. Kanamori, “Approximating
mutual information by maximum likelihood density ratio estimation.,”
FSDM, vol. 4, pp. 5–20, 2008.
[19] S. Khan, S. Bandyopadhyay, A. R. Ganguly, S. Saigal, D. J. Erickson III,
V. Protopopescu, and G. Ostrouchov, “Relative performance of mutual
information estimation methods for quantifying the dependence among
short and noisy data,” Physical Review E, vol. 76, no. 2, p. 026209,
2007.
[20] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. Wasserman, “Nonparametric estimation of renyi divergence and friends,” in Proceedings of
The 31st International Conference on Machine Learning, pp. 919–927,
2014.
[21] K. Kandasamy, A. Krishnamurthy, B. Poczos, L. Wasserman, and
J. Robins, “Nonparametric von mises estimators for entropies, divergences and mutual informations,” in Advances in Neural Information
Processing Systems, pp. 397–405, 2015.
[22] S. Singh and B. Póczos, “Exponential concentration of a density
functional estimator,” in Advances in Neural Information Processing
Systems, pp. 3032–3040, 2014.
[23] S. Singh and B. Póczos, “Generalized exponential concentration inequality for rényi divergence estimation,” in Proceedings of the 31st
International Conference on Machine Learning (ICML-14), pp. 333–
341, 2014.
[24] K. Sricharan, D. Wei, and A. O. Hero, “Ensemble estimators for
multivariate entropy estimation,” Information Theory, IEEE Transactions
on, vol. 59, no. 7, pp. 4374–4388, 2013.
[25] K. R. Moon and A. O. Hero, “Ensemble estimation of multivariate
f-divergence,” in Information Theory (ISIT), 2014 IEEE International
Symposium on, pp. 356–360, IEEE, 2014.
[26] K. R. Moon and A. O. Hero, “Multivariate f-divergence estimation with
confidence,” in Advances in Neural Information Processing Systems,
pp. 2420–2428, 2014.
[27] K. R. Moon, K. Sricharan, K. Greenewald, and A. O. Hero, “Nonparametric ensemble estimation of distributional functionals,” arXiv preprint
arXiv:1601.06884v2, 2016.
[28] K. R. Moon, K. Sricharan, K. Greenewald, and A. O. Hero, “Improving
convergence of divergence functional ensemble estimators,” in 2016
IEEE International Symposium on Information Theory (ISIT), 2016.
[29] Z. Szabó and A. Lőrincz, “Distributed high dimensional information
theoretical image registration via random projections,” Digital Signal
Processing, vol. 22, no. 6, pp. 894–902, 2012.
[30] S. V. Gliske, K. R. Moon, W. C. Stacey, and A. O. Hero, “The intrinsic
value of HFO features as a biomarker of epileptic activity,” in IEEE
International Conference on Acoustics, Speech, and Signal Processing,
pp. 6290–6294, 2016.
[31] K. Moon, V. Delouille, and A. O. Hero, “Meta learning of bounds on
the Bayes classifier error,” in IEEE Signal Processing and SP Education
Workshop, pp. 13–18, IEEE, 2015.
[32] K. R. Moon, J. J. Li, V. Delouille, R. De Visscher, F. Watson, and
A. O. Hero, “Image patch analysis of sunspots and active regions. I.
Intrinsic dimension and correlation analysis,” Journal of Space Weather
and Space Climate, vol. 6, no. A2, 2016.
[33] K. R. Moon, V. Delouille, J. J. Li, R. De Visscher, F. Watson, and
A. O. Hero, “Image patch analysis of sunspots and active regions. II.
Clustering via matrix factorization,” Journal of Space Weather and Space
Climate, vol. 6, no. A3, 2016.
[34] S. Gao, G. Ver Steeg, and A. Galstyan, “Efficient estimation of mutual information for strongly dependent variables,” in Proceedings of
the Eighteenth International Conference on Artificial Intelligence and
Statistics, pp. 277–286, 2015.
[35] R. J. Karunamuni and T. Alberts, “On boundary correction in kernel
density estimation,” Statistical Methodology, vol. 2, no. 3, pp. 191–212,
2005.
[36] J. Riordan, “Moment recurrence relations for binomial, poisson and
hypergeometric frequency distributions,” The Annals of Mathematical
Statistics, vol. 8, no. 2, pp. 103–111, 1937.
[37] V. C. Raykar, R. Duraiswami, and L. H. Zhao, “Fast computation of
kernel estimators,” Journal of Computational and Graphical Statistics,
vol. 19, no. 1, pp. 205–220, 2010.
[38] J. C. Principe, Information theoretic learning: Renyi’s entropy and kernel
perspectives. Springer Science & Business Media, 2010.
[39] B. Efron and C. Stein, “The jackknife estimate of variance,” The Annals
of Statistics, pp. 586–596, 1981.
[40] R. Durrett, Probability: Theory and Examples. Cambridge University
Press, 2010.
[41] A. Gut, Probability: A Graduate Course. Springer Science & Business
Media, 2012.
APPENDIX A
HÖLDER CLASS

We derive MSE convergence rates for the plug-in estimators in terms of the smoothness of the densities, which we characterize by the Hölder class.
Definition 1 (Hölder Class). Let $\mathcal{X} \subset \mathbb{R}^d$ be a compact space. For $r = (r_1, \ldots, r_d)$, $r_i \in \mathbb{N}$, define $|r| = \sum_{i=1}^{d} r_i$ and $D^r = \frac{\partial^{|r|}}{\partial x_1^{r_1} \cdots \partial x_d^{r_d}}$. The Hölder class $\Sigma(s, H)$ of functions on $L_2(\mathcal{X})$ consists of the functions $f$ that satisfy
$$\left|D^r f(x) - D^r f(y)\right| \leq H \|x - y\|^{s - r},$$
for all $x, y \in \mathcal{X}$ and for all $r$ s.t. $|r| \leq \lfloor s \rfloor$.
For notation, let EZ denote the conditional expectation given Z.
APPENDIX B
MI ENSEMBLE ESTIMATION EXTENSIONS

A. Continuous Random Variables
We can also apply Theorem 5 to obtain MI estimators that achieve the parametric rate for the case when X and Y are continuous. For general $g$, (4) in the main paper indicates that we need $h_X^{d_X} h_Y^{d_Y} \propto N^{-1/2}$ for the $O(1/(N h_X^{d_X} h_Y^{d_Y}))$ terms to be $O(1/\sqrt{N})$. We consider the more general case where the parameters may differ for $h_X$ and $h_Y$. Let $\mathcal{L}_X$ and $\mathcal{L}_Y$ be sets of real, positive numbers with $|\mathcal{L}_X| = L_X$ and $|\mathcal{L}_Y| = L_Y$. For each estimator in the ensemble, choose $l_X \in \mathcal{L}_X$ and $l_Y \in \mathcal{L}_Y$ and set $h_X(l_X) = l_X N^{-1/(2(d_X + d_Y))}$ and $h_Y(l_Y) = l_Y N^{-1/(2(d_X + d_Y))}$. Define the matrix $w$ s.t. $\sum_{l_X \in \mathcal{L}_X,\, l_Y \in \mathcal{L}_Y} w(l_X, l_Y) = 1$. From Theorems 1 and 2, conditions C.1 and C.2 are satisfied if $s \geq d_X + d_Y$ with $\psi_{i,j}(l_X, l_Y) = l_X^i l_Y^j$ and $\phi_{i,j}(N) = N^{-(i+j)/(2(d_X + d_Y))}$ for $0 \leq i, j \leq d_X + d_Y$ s.t. $0 < i + j \leq d_X + d_Y$. The optimal weight $w_0$ is calculated using (14) in the main paper. The resulting estimator
$$\tilde{G}^{\mathrm{cont}}_{w_0,1} = \sum_{l_X \in \mathcal{L}_X,\, l_Y \in \mathcal{L}_Y} w_0(l_X, l_Y)\, \tilde{G}_{h_X(l_X),\, h_Y(l_Y)}$$
achieves the parametric MSE rate when $s \geq d_X + d_Y$.
Again, if the mixed derivatives of the functional g also satisfy the extra condition required for (5) in the main paper, then
we can define an estimator that achieves the parametric MSE rate under less strict smoothness assumptions. See Appendix
B-B2.
B. The ODin2 Estimators
The estimators G̃w0 ,1 and G̃cont
w0 ,1 are analogous to the ODin1 estimators in [27], [28]. In this section, we derive ensemble
estimators of MI that achieve the parametric rate under less strict smoothness assumptions on the densities. These estimators
are analogous to the ODin2 estimators in [27], [28].
1) Mixed Random Variables: We first consider the case where X is continuous and Y is discrete. Recall that if $h_{X|y} = l N_y^{-\beta}$ with $0 < \beta < \frac{1}{d_X}$ and $l$ a positive number, and if $g(t_1, t_2)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_1^j \partial t_2^l}$ that depend on $t_1$ and $t_2$ only through $t_1^\alpha t_2^\beta$ for some $\alpha, \beta \in \mathbb{R}$ for each $1 \leq j, l \leq \lambda$, then the bias of the plug-in estimator for this case is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_{X|Y}}\right] = \sum_{m,n=0}^{\lfloor \lambda/2 \rfloor} \sum_{\substack{i,j=0 \\ i+j+m+n \neq 0}}^{r} c_{14,j,i,m,n}\, \frac{h_X^i l^j N^{-j\beta}}{\left(N h_X^{d_X}\right)^m \left(l^{d_X} N^{1-\beta d_X}\right)^n} + O\!\left(h_X^s + N^{-s\beta} + \frac{1}{\left(N h_X^{d_X}\right)^{\lambda/2}} + \frac{1}{\left(N^{1-\beta d_X}\right)^{\lambda/2}}\right). \qquad (16)$$
Choose $\mathcal{L}$ to be a set of real positive numbers and let $\delta > 0$. For each estimator in the ensemble, set $h_X(l) = l N^{-1/(d_X + \delta)}$ and $h_{X|y}(l) = l N_y^{-1/(d_X + \delta)}$ where $l \in \mathcal{L}$. This ensures that the final terms in (29) are $O(1/\sqrt{N})$ if $s \geq (d_X + \delta)/2$ and $\lambda \geq d_X/\delta + 1$. Define $\tilde{G}_{w,2}$ as in (12) in the main paper with the chosen values of $h_X(l)$ and $h_{X|y}(l)$. Theorem 5 can be applied in this case as conditions C.1 and C.2 are satisfied with $\phi_{i,m}(N) = N^{\frac{-i - m\delta}{d_X + \delta}}$ and $\psi_{i,m}(l) = l^{i - m d_X}$ for $i \in \{0, \ldots, r\}$, $m \in \{0, \ldots, \lfloor \lambda/2 \rfloor\}$, and $0 \leq i + m\delta \leq (d_X + \delta)/2$. Then if $s \geq (d_X + \delta)/2$ and $\lambda \geq d_X/\delta + 1$, the MSE of the optimally weighted estimator $\tilde{G}_{w_0,2}$ is $O(1/N)$. Then since $\delta$ can be chosen arbitrarily close to zero, the parametric rate can be achieved theoretically as long as $s > d_X/2$.
The analogous divergence functional estimators for G̃w0 ,1 and G̃w0 ,2 in [27], [28] were referred to as the ODin1 and
ODin2 estimators, respectively, where ODin stands for Optimally Weighted Distributional Functional estimators. The ODin2
estimator has better statistical properties as the parametric rate is guaranteed under less restrictive smoothness assumptions on
the densities. On the other hand, the number of parameters required for the optimization problem in (14) in the main paper
is larger for the ODin2 estimator than the ODin1 estimator. In theory, this could lead to larger variance although this wasn’t
necessarily true in practice according to the experiments in [27].
2) Continuous Random Variables: We now consider the case where both X and Y are continuous. Again, if $g(t_1, t_2)$ has $j,l$-th order mixed derivatives $\frac{\partial^{j+l}}{\partial t_1^j \partial t_2^l}$ that depend on $t_1$ and $t_2$ only through $t_1^\alpha t_2^\beta$ for some $\alpha, \beta \in \mathbb{R}$ for each $1 \leq k, l \leq \lambda$, then the bias of $\tilde{G}_{h_X,h_Y}$ is
$$\mathbb{B}\left[\tilde{G}_{h_X,h_Y}\right] = \sum_{m,n=0}^{\lfloor \lambda/2 \rfloor} \sum_{\substack{i,j=0 \\ i+j+m+n \neq 0}}^{r} c_{11,j,i,m,n}\, \frac{h_X^i h_Y^j}{\left(N h_X^{d_X}\right)^m \left(N h_Y^{d_Y}\right)^n} + \sum_{m=1}^{\lfloor \lambda/2 \rfloor} \sum_{i=0}^{r} \sum_{j=0}^{r} c_{13,m,n,j}\, \frac{h_X^i h_Y^j}{\left(N h_X^{d_X} h_Y^{d_Y}\right)^m} + O\!\left(h_X^s + h_Y^s + \frac{1}{\left(N h_X^{d_X} h_Y^{d_Y}\right)^{\lambda/2}}\right). \qquad (17)$$
Set $\delta > 0$ and choose $h_X(l_X) = l_X N^{-1/(d_X + d_Y + \delta)}$ and $h_Y(l_Y) = l_Y N^{-1/(d_X + d_Y + \delta)}$. Then conditions C.1 and C.2 are satisfied if $s \geq (d_X + d_Y + \delta)/2$ and $\lambda \geq (d_X + d_Y + \delta)/\delta$ with $\psi_{1,i,j,m,n}(l_X, l_Y) = l_X^{i - m d_X} l_Y^{j - n d_Y}$ and $\phi_{1,i,j,m,n}(N) = N^{-\frac{i + j + m(d_Y + \delta) + n(d_X + \delta)}{d_X + d_Y + \delta}}$ for $0 < i + j + m(d_Y + \delta) + n(d_X + \delta) \leq \frac{d_X + d_Y + \delta}{2}$, and the terms $\psi_{2,i,j,m}(l_X, l_Y) = l_X^{i - m d_X} l_Y^{j - m d_Y}$ and $\phi_{2,i,j,m}(N) = N^{-\frac{i + j + m\delta}{d_X + d_Y + \delta}}$ for $m \geq 1$ and $i + j + m\delta \leq \frac{d_X + d_Y + \delta}{2}$. The optimal weight $w_0$ is again calculated using (14) in the main paper and the resulting estimator $\tilde{G}^{\mathrm{cont}}_{w_0,2}$ achieves the parametric MSE convergence rate when $s \geq (d_X + d_Y + \delta)/2$. Since $\delta$ can be chosen arbitrarily close to zero, the parametric rate can be achieved theoretically as long as $s > (d_X + d_Y)/2$. $\tilde{G}^{\mathrm{cont}}_{w_0,2}$ is the ODin2 estimator for continuous random variables.
APPENDIX C
PROOF OF THEOREM 1 (BIAS)

The proof of the bias results in Theorem 1 shares some similarities with the proof of the bias results for the divergence functional estimators in [27]. The primary differences deal with the product of the marginal KDEs that appear in the expansion of the bias terms.
The bias of $\tilde{G}_{h_X,h_Y}$ can be expressed as
$$\mathbb{B}\left[\tilde{G}_{h_X,h_Y}\right] = \mathbb{E}\left[ g\!\left(\frac{\tilde{f}_{X,h_X}(X)\, \tilde{f}_{Y,h_Y}(Y)}{\tilde{f}_{Z,h_Z}(X,Y)}\right) - g\!\left(\frac{f_X(X)\, f_Y(Y)}{f_{XY}(X,Y)}\right) \right]$$
$$= \mathbb{E}\left[ g\!\left(\frac{\tilde{f}_{X,h_X}(X)\, \tilde{f}_{Y,h_Y}(Y)}{\tilde{f}_{Z,h_Z}(X,Y)}\right) - g\!\left(\frac{\mathbb{E}_X\!\left[\tilde{f}_{X,h_X}(X)\right] \mathbb{E}_Y\!\left[\tilde{f}_{Y,h_Y}(Y)\right]}{\mathbb{E}_{X,Y}\!\left[\tilde{f}_{Z,h_Z}(X,Y)\right]}\right) \right] + \mathbb{E}\left[ g\!\left(\frac{\mathbb{E}_X\!\left[\tilde{f}_{X,h_X}(X)\right] \mathbb{E}_Y\!\left[\tilde{f}_{Y,h_Y}(Y)\right]}{\mathbb{E}_{X,Y}\!\left[\tilde{f}_{Z,h_Z}(X,Y)\right]}\right) - g\!\left(\frac{f_X(X)\, f_Y(Y)}{f_{XY}(X,Y)}\right) \right], \qquad (18)$$
where X and Y are drawn jointly from fXY . We can view these terms as a variance-like component (the first term) and a
bias-like component, where the respective Taylor series expansions depend on variance-like or bias-like
terms of the KDEs.
We first consider the bias-like term, i.e. the second term in (18). The Taylor series expansion of $g\!\left(\frac{\mathbb{E}_X[\tilde{f}_{X,h_X}(X)]\, \mathbb{E}_Y[\tilde{f}_{Y,h_Y}(Y)]}{\mathbb{E}_{X,Y}[\tilde{f}_{Z,h_Z}(X,Y)]}\right)$ around $f_X(X) f_Y(Y)$ and $f_{XY}(X,Y)$ gives an expansion with terms of the form of
$$\mathbb{B}^i_{Z}\!\left[\tilde{f}_{X,h_X}(X)\, \tilde{f}_{Y,h_Y}(Y)\right] = \left( \mathbb{E}_X\!\left[\tilde{f}_{X,h_X}(X)\right] \mathbb{E}_Y\!\left[\tilde{f}_{Y,h_Y}(Y)\right] - f_X(X)\, f_Y(Y) \right)^{i},$$
$$\mathbb{B}^i_{Z}\!\left[\tilde{f}_{Z,h_Z}(X,Y)\right] = \left( \mathbb{E}_{X,Y}\!\left[\tilde{f}_{Z,h_Z}(X,Y)\right] - f_{XY}(X,Y) \right)^{i}. \qquad (19)$$
Since we are not doing boundary correction, we need to consider separately the cases when $Z$ is in the interior of the support $S_X \times S_Y$ and when $Z$ is close to the boundary of the support. For precise definitions, a point $Z = (X, Y) \in S_X \times S_Y$ is in the interior of $S_X \times S_Y$ if for all $Z' \notin S_X \times S_Y$, $K_X\!\left(\frac{X - X'}{h_X}\right) K_Y\!\left(\frac{Y - Y'}{h_Y}\right) = 0$, and a point $Z \in S_X \times S_Y$ is near the boundary of the support if it is not in the interior.
It can be shown by Taylor series expansions of the probability densities that for $Z = (X, Y)$ drawn from $f_{XY}$ in the interior of $S_X \times S_Y$, then
$$\mathbb{E}_X\!\left[\tilde{f}_{X,h_X}(X)\right] = f_X(X) + \sum_{j=1}^{\lfloor s/2 \rfloor} c_{X,j}(X)\, h_X^{2j} + O(h_X^s),$$
$$\mathbb{E}_Y\!\left[\tilde{f}_{Y,h_Y}(Y)\right] = f_Y(Y) + \sum_{j=1}^{\lfloor s/2 \rfloor} c_{Y,j}(Y)\, h_Y^{2j} + O(h_Y^s), \qquad (20)$$
$$\mathbb{E}_{X,Y}\!\left[\tilde{f}_{Z,h_Z}(Z)\right] = f_{XY}(X,Y) + \sum_{\substack{i,j=0 \\ i+j \neq 0}}^{\lfloor s/2 \rfloor} c_{XY,i,j}(X,Y)\, h_X^{2i} h_Y^{2j} + O(h_X^s + h_Y^s).$$
For a point near the boundary of the support, we extend the expectation beyond the support of the density. As an example
if X is near the boundary of SX , then we get
Z
h
i
1
X−V
EX f̃i,hi (X) − fi (X) =
fX (V )dV − fX (X)
K
X
hX
hdXX V :V ∈SX
"
#
Z
1
X−V
=
KX
fX (V )dV − fX (X)
hX
hdXX V :KX X−V
>0
hX
#
"
Z
X−V
1
fX (V )dV
− dX
KX
hX
hX V :V ∈S
/ X
=
T1,X (X) − T2,X (X).
(21)
We only evaulate the density fX and its derivatives at points within the support when we take its Taylor series expansion.
Thus the exact manner in which we define the extension of fX does not matter as long as the Taylor series remains the same
and as long as the extension is smooth. Thus the expected value of T1,X (X) gives an expression of the form of (20). For the
T2,X (X) term, we can use multi-index notation on the expansion of fX to show that
#
"
Z
X−V
1
fX (V )dV
KX
T2,X (X) =
hX
hdXX V :V ∈S
/ X
Z
KX (u)fX (X + hX u)du
=
u:hX u+X∈S
/ X ,KX (u)>0
X h|α| Z
X
=
KX (u)Dα fX (X)uα du + o(hrX ).
α! u:hX u+X∈S
/ X ,KX (u)>0
|α|≤r
Then since the |α|th derivative of fX is r − |α| times differentiable, we apply the condition in assumption A.5 to obtain
E [T2,X (X)] =
r
X
ei hiX + o (hrX ) .
i=1
Similar expressions can be found for f̃Y,hY and f̃Z,hZ and for when (21) is raised to a power t. Applying this result gives for
the second term in (18),
r
r
X
X
c10,i,j hiX hjY + O (hsX + hsY ) .
(22)
j=0 i=0
i+j6=0
For the first term in (18), a Taylor series expansion of g
f̃X,hX (X)f̃Y,hY (Y)
f̃Z,hZ (X,Y)
h
h
i
i
around EX f̃X,hX (X) EY f̃Y,hY (Y) and
EX,Y f̃Z,hZ (X, Y) gives an expansion with terms of the form of
h
iq
ẽqZ,hZ (Z) =
f̃Z,hZ (Z) − EZ f̃Z,hZ (Z)
,
h
h
iq
i
ẽqXY,hX ,hY (Z) =
.
f̃X,hX (X)f̃Y,hY (Y) − EX f̃X,hX (X) EY f̃Y,hY (Y)
(23)
We can take the expected value of these expressions to obtain terms of the form of
1
1
1
1
,
,
,
N hdXX N hdYY N 2 hdXX hdYY N hdXX hdYY
(24)
and their respective powers. This can be seen for ẽqXY,hX ,hY (Z) as follows. Define
Xi − X
Xi − X
Yj − Y
Yj − Y
Vi,j (Z) = KX
KY
− EX KX
EY KY
hX
h
hX
hY
h Y′
i
= ηij (Z) − EX [ηi (X)] EY ηj (Y) .
We can then write
ẽXY,hX ,hY (Z) =
N
N X
X
1
N 2 hdXX hdYY
The binomial theorem then gives
k
h ′
ik−l
X
l
k
.
=
(Z) EX [ηi (X)] EY ηj (Y)
EZ ηij
l
k
(Z)
EZ Vi,j
Vi,j (Z).
i=1 j=1
(25)
l=0
By using a similar Taylor series analysis as before, for Z in the interior,
⌊s/2⌋
X
l
2dX dY
dX 2dY
2n
(Z) = hdXX hdYY
EZ ηij
cXY,2,m,n,l (Z)h2m
h
+
O
h
h
+
h
h
.
X
Y
X
Y
X Y
m,n=0
Combining this with (20) and (25) gives
⌊s/2⌋
X
k
dX dY
2dX dY
dX 2dY
2n
EZ Vi,j (Z) = hX hY
cXY,3,m,n,k (X)h2m
,
X hY + O hX hY + hX hY
(26)
m,n=0
where the constants depend on the densities, their derivatives, and the moments of the kernels. As an example, let q = 2. Then
due to the independence between the Zi samples,
EZ ẽ2XY,hX ,hY (Z) =
=
=
N
X
1
X 2dY
N 4 h2d
X hY
1
E
Z
X 2dY
N 2 h2d
X hY
⌊s/2⌋
1
N 2 hdXX hdYY
EZ [Vi,j (Z)Vm,n (Z)]
i,j,m,n=1
X
m,n=0
2
Vi,j
(Z) +
(N − 1)
X 2dY
N 2 h2d
X hY
EZ [Vi,j (Z)Vi,n (Z)]
⌊s/2⌋
2n
cXY,3,m,n,2 (X)h2m
X hY
+
X
1
X
m,n=0 i,j=0
i+j6=0
cXY,4,m,n,i,j (X)
2n
h2m
X hY
X jdY
N hid
X hY
+O
1
N
,
where the last step follows from (26) and a similar analysis of EZ [Vi,j (Z)Vi,n (Z)]. For q > 2, it can be shown that if n(q)
is the set of integer divisors of q including 1 but excluding q, then
i ⌊s/2⌋
h
X
X
cXY,6,i,j,q,m,n (Z)
1
X cXY,5,i,j,q,n (Z)
2i 2j
q
EZ ẽXY,hX ,hY (Z) =
.
h
h
+
O
+
q−n
q−n
q−m
X Y
N
d
d
d
d
X
X
Y
Y
i,j=0 n∈n(q) N 2 hX hY
N hY
m∈n(q)∪{q} N hX
n∈n(q)∪{q}
m+n6=2q
i
h
A similar procedure can be used to find the expression for EZ ẽqZ,hZ (Z) . When Z is near the boundary of the supposrt, we
n
can obtain similar expressions by following a similar procedure as in the derivation of (22). This results in powers of hm
X hY
2m 2n
instead of hX hY .
h
h
i
i
For general functionals g, we can only guarantee that the mixed derivatives of g evaluated at EX f̃X,hX (X) EY f̃Y,hY (Y)
and EX,Y f̃Z,hZ (X, Y) converge to the mixed derivative evaluated at fX (X)fY (Y) and fXY (X, Y) at some rate o(1). Thus
we are left with the following terms in the bias:
!
1
1
o
+
N hdXX
N hdYY
j+l
β
However, if we know that g (t1 , t2 ) has j, l-th order mixed derivatives ∂t∂j ∂tl that depend on t1 and t2 only through tα
1 t2 for
1
2
some α, β ∈ R, then by the generalized binomial theorem, we find that
m
⌊s/2⌋
∞
α
X
X
α α−m
s
ci,j (X)h2j
fX (X)
.
EX f̃X,hX (X) =
X + O (hX )
m
m=0
j=1
α
α
A similar result holds for EY f̃Y,hY (Y)
and EZ f̃Z,hZ (Z) . Combining these expressions with 24 completes the proof.
APPENDIX D
PROOF OF THEOREM 2 (VARIANCE)
As for the bias, the proof of the variance result in Theorem 2 is similar to the proof of the variance result in [27] and so we
do not present all of the details. The primary differences again deal with the product of the marginal KDEs. The proof uses
the Efron-Stein inequality [39]:
Lemma 8. (Efron-Stein Inequality) Let $X_1, \ldots, X_n, X_1', \ldots, X_n'$ be independent random variables on the space $\mathcal{S}$. Then if $f: \mathcal{S} \times \cdots \times \mathcal{S} \to \mathbb{R}$, we have that
$$\mathbb{V}\left[f(X_1, \ldots, X_n)\right] \leq \frac{1}{2} \sum_{i=1}^{n} \mathbb{E}\left[\left(f(X_1, \ldots, X_n) - f(X_1, \ldots, X_i', \ldots, X_n)\right)^2\right].$$
In this case we consider the samples {Z1 , . . . , ZN } and
′
G̃hX ,hY . By the triangle inequality,
′
G̃hX ,hY − G̃hX ,hY
′
Z1 , Z2 . . . , ZN
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
1
g
N
≤
n
N2
1 X
g
+
N j=2
!
o
and the respective estimators G̃hX ,hY and
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
′
′
f̃Z,hZ (X1 , Y1 )
−g
f̃X,hX (Xj )f̃Y,hY (Yj )
f̃Z,hZ (Xj , Yj )
!
′
−g
!
′
f̃X,hX (Xj )f̃Y,hY (Y1 )
′
f̃Z,hZ (X1 , Y1 )
!
.
By the Lipschitz condition on g, the first term in (27) can be decomposed into terms of the form of
′
f̃Z,hZ (Z1 ) − f̃Z,hZ (Z1 ) ,
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 ) − f̃X,hX (X1 )f̃Y,hY (Y1 ) .
By making a substitution in the expectation, it can be shown that
E
′
f̃Z,hZ (Z1 ) − f̃Z,hZ (Z1 )
2
≤ 2||KX · KY ||2∞ .
X1 − Xi
hX
For the product of the marginal KDEs, we have that
f̃X,hX (X1 )f̃Y,hY (Y1 ) =
=
1
M 2 hdXX hdYY
N X
N
X
i=2 j=2
KX
1
1
f̃Z,hZ (Z1 ) +
d
2
M
M hXX hdYY
Y1 − Yj
KY
hY
X
Y1 − Yj
X1 − Xi
KY
.
KX
hX
hY
i6=j
(27)
By applying the triangle inequality, Jensen’s inequality, and similar substitutions, we get
E
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 ) − f̃X,hX (X1 )f̃Y,hY (Y1 )
2
2
′
2
f̃Z,hZ (Z1 ) − f̃Z,hZ (Z1 )
E
M2
2(M − 1)
+
×
X 2dY
M 3 h2d
hY
X
X
X1 − Xi
Y1 − Yj
E KX
KY
hX
hY
i6=j
!
!!2
′
′
Y1 − Yj
X1 − Xi
KY
−KX
hX
hY
≤
4 + 2(M − 1)2
||KX · KY ||2 .
M2
≤
For the second term in (27), it can be shown that
E
′
f̃Z,hZ (Zi ) − f̃Z,hZ (Zi )
2
Y1 − Yj
X1 − Xi
KY
E KX
X 2dY
hX
hY
M 2 h2d
X hY
!
!!2
′
′
Y1 − Yj
X1 − Xi
KY
−KX
hX
hY
1
=
2||KX · KY ||2∞
.
M2
≤
By a similar approach,
′
′
f̃X,hX (Xi )f̃Y,hY (Yi ) − f̃X,hX (Xi )f̃Y,hY (Yi )
=
1
′
f̃Z,hZ (Zi ) − f̃Z,hZ (Zi ) +
M 2 hdXX hdYY
+
X
KX
n=2
n6=i
=⇒ E
Xi − Xn
hX
KY
X
KY
n=2
n6=i
Yi − Y1
hY
Yi − Yn
hY
′
Yi − Y1
hY
− KY
′
′
f̃X,hX (Xi )f̃Y,hY (Yi ) − f̃X,hX (Xi )f̃Y,hY (Yi )
2
KX
!!
Xi − X1
hX
′
− KX
Xi − X1
hX
,
≤ 6||KX · KY ||2∞
1
(M − 2)2
+
2
M
M4
We can then apply the Cauchy Schwarz inequality to bound the square of the second term in (27) to get
N2
X
E
g
j=2
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
!
! 2
f̃X,hX (X1 )f̃Y,hY (Y1 )
2
2
≤ 14Cg ||KX · KY ||∞ .
′
f̃Z,hZ (X1 , Y1 )
′
−g
′
Applying Jensen’s inequality in conjunction with these results gives
E
′
G̃hX ,hY − G̃hX ,hY
Applying the Efron-Stein inequality finishes the proof.
2
≤
44Cg2 ||KX · KY ||2∞
.
N2
!!
T HEORY
FOR
A PPENDIX E
M IXED R ANDOM VARIABLES
A. Proof of Theorem 3 (Bias)
1
dX .
Let hX|y = lN−β
for some positive l and 0 < β <
y
bias of the plug-in estimator G̃hX ,hX|Y
i
h
=
B G̃hX ,hX|Y
r
r
X
X
Under assumptions A.0 − A.5, we prove that for general g, the
c13,i,j hiX lj N −jβ +
j=0 i=0
i+j6=0
+O
hsX
+N
Furthermore, if g (t1 , t2 ) has j, l-th order mixed derivatives
α, β ∈ R, then for any positive integer λ ≥ 2, the bias is
i
h
=
B G̃hX ,hX|Y
r
r
X
X
c13,i,j hiX lj N −jβ +
j=0 i=0
i+j6=0
λ/2
+
r
X
r
X
−sβ
+
1
N hdXX
∂ j+l
∂tj1 ∂tl2
c14,X
c14,y
+ dX 1−βdX
dX
l N
N hX
+
1
N 1−βdX
1
+
N
!
.
β
that depend on t1 and t2 only through tα
1 t2 for some
λ/2 λ/2 r
r
XXX
X
n −nβ
hm
Xl N
c14,j,i,m,n
j
i
j=1 i=1 m=0 n=0
N hdXX (ldX N 1−βdX )
n −nβ
hm
hm ln N −nβ
Xl N
c14,m,n,j,X X
j + c14,m,n,j,Y d
j
X N 1−βdX )
dX
(l
j=1 m=0 n=0
N hX
X
+O hsX + N −sβ +
1
N hdXX
(28)
λ/2 +
1
λ/2
(N 1−βdX )
+
1
.
N
(29)
We only prove (28) as the proof of (29) is identical. The bias of G̃hX ,hX|Y is
i
h
B G̃hX ,hX|Y
i
h
= E G̃hX ,hX|Y − G(X; Y)
X Ny
fX (X)
= E
G̃hX ,hX|y − g
N
fX|Y (X|Y)
y∈SY
X Ny
f
(X)
X
= E E
Y, Y1 , . . . , YN
G̃hX ,hX|y − g
N
fX|Y (X|Y)
y∈SY
X Ny
f
(X)
X
Y, Y1 , . . . , YN
E
= E
G̃hX ,hX|y − g
N
fX|Y (X|Y)
y∈SY
i
X Ny h
B G̃hX ,hX|y Y1 , . . . , YN ,
= E
N
y∈SY
where we use the law of total expectation and the fact that
P
y∈SY
Ny
N
= 1. Let hX|y = lN−β
for some positive l and
y
0<β<
1
dX .
From Theorem 1, the conditional bias of G̃hX ,hX|y given Y1 , . . . , YN is
i
h
=
B G̃hX ,hX|y Y1 , . . . , YN
r
r
X
X
c10,i,j hiX hjX|y +
j=0 i=0
i+j6=0
c11,X
c11,y
+
dX
X
N y hX
Ny hdX|y
1
1
+
+O hsX + hsX|y +
dX
X
N y hX
Ny hdX|y
=
r
r
X
X
c10,i,j hiX lj N−jβ
+
y
j=0 i=0
i+j6=0
+O
hsX
+
N−sβ
y
!
c11,y
c11,X
+
dX
X
d
X
N y hX
l N1−βd
y
1
1
+
+ 1−βdX
Ny hdXX
Ny
!
.
(30)
Ny is a binomial random variable Multiplying (30) by Ny results in terms of the form of N1−γ
with γ ≥ 0. Ny is a binomial
y
random variable with parameter fY (y),N trials, and mean N fY (y). We can compute the fractional moments of a binomial
random variable by using the generalized binomial theorem to obtain (see the main paper)
∞
i
h
X
α
α
(N fY (y))α−i E (NY − N fY (y))i
=
E Ny
i
i=0
=
⌊i/2⌋
∞
X
X
α
α−i
(N fY (y))
cn,i (fY (y))N n
i
n=0
i=0
=
⌊i/2⌋
∞
X
X
α
α−i
fY (y)
cn,i (fY (y))N α−i+n ,
i
n=0
i=0
where we use the following expression for the i-th central moment of a binomial random variable derived by Riordan [36]:
h
i ⌊i/2⌋
X
i
E (NY − N fY (y)) =
cn,i (fY (y))N n .
n=0
If α = 1 − γ, then dividing by N results in terms of the form of N −γ−i+n . Since n ≤ ⌊i/2⌋, −γ − i + n is always less than
zero and is only greater than −1 if i = 0. This completes the proof.
B. Proof of Theorem 4 (Variance)
As for the bias, we assume that hX|y = lNy−β for some positive l and 0 < β < d1X . By the law of total variance, we have
ii
ii
h h
i
h h
h
(31)
V G̃hX ,hX|Y = E V G̃hX ,hX|Y Y1 , . . . , YN + V E G̃hX ,hX|Y Y1 , . . . , YN .
Note that given all of the Yi ’s, the estimators G̃hX ,hX|y are all independent since they use different sets of Xi ’s for each y.
By Theorem 2, we have
i
h
X N2y 1
= O
V G̃hX ,hX|Y Y1 , . . . , YN
·
N 2 Ny
y∈SY
X Ny
.
= O
N2
y∈SY
1
Taking the expectation wrt Y1 , . . . YN then gives O N for the first term in (31).
For the second term in (31), from (30) we have that for general g
r
i
h
X
1
X
+ N−sβ
+ N1−βd
= O
N−jβ
+
E G̃hX ,hX|y Y1 , . . . , YN
y
y
y
N
y
j=0
= O (f (Ny )) .
′
By the Efron-Stein inequality, we have that if Ny is an independent and identically distributed realization of Ny , then
′ 2
X Ny
′
1 X
f (Ny )
≤
V
E Ny f (Ny ) − Ny f Ny
N
2N 2
y∈SY
y∈SY
′ 2
′
1
E Ny f (Ny ) − Ny f Ny
= O
N2
1
V [Ny f (Ny )] ,
= O
N2
(32)
′
where the second step follows from the fact that SY is finite and the last step follows from the fact that Ny and Ny are iid.
The expression V [Ny f (Ny )] is simply a sum of terms of the form of V Nγy where 0 < γ ≤ 1. Even the covariance terms
can be bounded by the square root of the product of these terms by the Cauchy Schwarz inequality.
Let py = fY (y). Consider the Taylor series expansion of the function h(x) = xγ at the point N py . This is
h(x)
=
γ
γ−1
(N py ) + γ (N py )
(x − N py ) +
γ(γ − 1)
γ−2
2
(N py )
(x − N py )
2
∞
X
γ(γ − 1) . . . (γ − k + 1)
γ−k
2
(N py )
(x − N py ) .
(33)
k!
k=3
From Riordan
[36], we know that the ith central moment of Ny is O N ⌊i/2⌋ . Then since γ ≤ 1, the last terms in (33) are
O N −1 when x = Ny and we take the expectation. Thus
+
E Nγy
2
=⇒ E Nγy
γ(γ − 1)
γ−1
(N py )
(1 − py ) + O N −1
2
2
γ(γ − 1)
2γ−2
2γ
2γ−1
(N py )
= (N py ) + γ(γ − 1)(1 − py ) (N py )
+
2
+O N −1 .
γ
= (N py ) +
By a similar Taylor series expansion, we have that
2γ
2γ−1
= (N py ) + γ(2γ − 1)(1 − py ) (N py )
+ O N −1 .
E N2γ
y
Combining these results gives
V Nγy
2
− E Nγy
= E N2γ
y
= O N 2γ−1 + N 2γ−2 + N −1
= O (N ) ,
where the last step follows from the fact that γ ≤ 1. Combining this result with (32) gives
ii
h h
1
V E G̃hX ,hX|Y Y1 , . . . , YN = O
.
N
i
h
By the law of total variance, V G̃hX ,hX|Y = O N1 .
C. Extension to the Generalized Case
In this section, we sketch the theory for the case where both X and Y have a mixture of discrete and continuous components.
Denote the discrete and continuous components of X as X1 and X2 , respectively. Similarly, denote the discrete and continuous
components of Y as Y1 and Y2 , respectively. Then the generalized mutual information is
X Z fX (x1 , x2 )fY (y1 , y2 )
fXY (x1 , x2 , y1 , y2 )dx2 dy2
G(X; Y) =
g
fXY (x1 , x2 , y1 , y2 )
y1 ∈SY1
x1 ∈SX1
=
X
y1 ∈SY1
x1 ∈SX1
fX1 Y1 (x1 , y1 )
Z
g
fX1 (x1 )fY1 (y1 )fX2 |X1 (x2 |x1 )fY2 |Y1 (y2 |y1 )
fX1 Y1 (x1 , y1 )fX2 Y2 |X1 Y1 (x2 , y2 |x1 y1 )
fX2 Y2 |X1 Y1 (x2 , y2 |x1 y1 )dx2 dy2 .
P
Define Ny1 = N
i=1 1{Y1,i =y1 } where Y1,i is the discrete component of Yi . Then the estimator we use for fY1 (y1 ) is Ny1 /N .
The estimators for fX1 (x1 ) and fX1 Y1 (x1 , y1 ) are defined similarly with Nx1 and Nz1 .
We first consider the conditional bias of the resulting plug-in estimator where we condition on the discrete random variables.
Recall that by Taylor series expansions, we decompose the bias into “variance-like” terms in (23) and “bias-like” terms in (19).
For the bias-like term, if we condition on the discrete random variables, then
in (20) in this case is
the equivalent expression
i
N
multiplied by Nx1 /N . This results in terms of the form of, for example, Nz1 − fZ1 (x1 , y1 ) . The expected value of these
expressions is the ith central moment of a binomial random variable divided by N i which is O(1/N ) for i ≥ 1. Thus these
i
N
terms contribute O(1/N ) to the bias. In all other cases, the expected value of Nz1
is O(1). Thus only the constants are
affected by these terms in the equivalent expression in (22). Similar results hold for the estimators of fX1 (x1 ) and fY1 (y1 ).
For the “variance-like” terms, we can simply factor out the estimators for fX1 (x1 ), fY1 (y1 ), and fZ1 (z1 ). The expected
value of these estimators is again O(1) so they only affect the constants.
For the variance, the law of total variance can again be used by conditioning on the discrete components. For the conditional
variance, the Lipschitz conditions on g in this case simply scales the resulting terms by the square of the estimators for
fX1 (x1 ), fY1 (y1 ), and fZ1 (z1 ). Then since the expected value of the square of these estimators is O(1), the expected value of
the conditional variance is still O(1/N ). Then by similar arguments given above for the bias and in Section E-B, the variance
of the conditional expectation of the estimator is also O(1/N ). Thus the total variance is O(1/N ).
P ROOF
A PPENDIX F
T HEOREM 6 (CLT)
OF
This proof shares some similarities with the CLT proof for the divergence functional estimators in [27], [28]. The primary
differences again deal with handling products of marginal density estimators and with handling two of the terms in the
Efron-Stein inequality. We will first find the asymptotic distribution of
"
!
!#!
N
i
h
√
f̃X,hX (Xi )f̃Y,hY (Yi )
1 X
f̃X,hX (Xi )f̃Y,hY (Yi )
g
=√
N G̃hX ,hY − E G̃hX ,hY
− EZi g
N i=1
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
"
!#
"
!#!
N
f̃X,hX (Xi )f̃Y,hY (Yi )
f̃X,hX (Xi )f̃Y,hY (Yi )
1 X
−E g
.
EZi g
+√
N i=1
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
By the standard central limit theorem [40], the second term converges in distribution to a Gaussian random variable with
variance
!##
" "
f̃X,hX (X)f̃Y,hY (Y)
.
V EZ g
f̃Z,hZ (X, Y)
All that remains is to show that the first term converges in probability to zero as Slutsky’s theorem [41] can then be applied.
Denote this first term as WN and note that E [WN ] = 0.
We will use Chebyshev’s n
inequality combined
o with the Efron-Stein inequality to bound ′ the variance of WN . Consider the
′
samples {Z1 , . . . , ZN } and Z1 , Z2 , . . . , ZN and the respective sequences WN and WN . This gives
"
!
!#!
′
1
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃X,hX (X1 )f̃Y,hY (Y1 )
WN − WN = √
− EZ1 g
g
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
N
!
"
!#!
′
′
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
1
f̃X,hX (X1 )f̃Y,hY (Y1 )
− EZ′ g
+√
g
′
′
′
′
1
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
N
!
!!
′
′
N
f̃X,hX (Xi )f̃Y,hY (Yi )
1 X
f̃X,hX (Xi )f̃Y,hY (Yi )
+√
−g
.
(34)
g
′
N i=2
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
Note that
E g
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
!
"
− EZ1 g
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
!#!2
"
"
= E VX1 g
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
!##
.
h
i
f̃
(X )f̃
(Y1 )
Y
We will use the Efron-Stein inequality to bound VX1 g X,hf̃X 1(XY,h
. We thus need to bound the conditional
1 ,Y1 )
Z,hZ
expectation of the term
!
!2
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃X,hX (X1 )f̃Y,hY (Y1 )
−g
,
g
′
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
′
where Zi is replaced with Zi in the KDEs for some i 6= 1. Using similar steps as in Section D, we have that
!
! 2
′
′
f̃
(X
)
f̃
(Y
)
1
(Y
)
(X
)
f̃
f̃
1
1
1
1
Y,h
X,h
X,h
Y,h
Y
X
X
Y
.
−g
=O
E g
′
N2
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
h
i
f̃
(X )f̃
(Y1 )
Y
Then by the Efron-Stein inequality, VX1 g X,hf̃X 1(XY,h
=O
,Y )
Z,hZ
E
1
N
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
g
A similar result holds for the g
For the third term in (34),
E
′
!
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
′
′
f̃Z,hZ (X1 ,Y1 )
N
X
g
i=2
1
1
"
. Therefore
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃Z,hZ (X1 , Y1 )
− EZ1 g
1
N
!#!2
=O
1
N2
.
term in (34).
f̃X,hX (Xi )f̃Y,hY (Yi )
f̃Z,hZ (Xi , Yi )
!
′
′
f̃X,hX (Xi )f̃Y,hY (Yi )
−g
′
f̃Z,hZ (Xi , Yi )
! !2
!
!
′
′
f̃X,hX (Xi )f̃Y,hY (Yi )
f̃X,hX (Xi )f̃Y,hY (Yi )
−g
E g
=
′
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
i,j=2
!
!#
′
′
f̃X,hX (Xj )f̃Y,hY (Yj )
f̃X,hX (Xj )f̃Y,hY (Yj )
−g
× g
′
f̃Z,hZ (Xj , Yj )
f̃Z,hZ (Xj , Yj )
N
X
"
For the N − 1 terms where i = j, we know from Section D that
!
! 2
′
′
f̃
(X
)
f̃
(Y
)
(Y
)
(X
)
f̃
f̃
1
i
i
i
i Y,hY
X,hX
X,hX
Y,hY
.
E g
−g
=
O
′
N2
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
Thus these terms contribute O(1/N ). For the N 2 − N terms where i 6= j, we can do multiple substitutions of the form
X −X
uj = jhX 1 resulting in
"
!
!
′
′
f̃X,hX (Xi )f̃Y,hY (Yi )
f̃X,hX (Xi )f̃Y,hY (Yi )
E g
−g
′
f̃Z,hZ (Xi , Yi )
f̃Z,hZ (Xi , Yi )
!
!#
!
′
′
X 2dY
f̃X,hX (Xj )f̃Y,hY (Yj )
h2d
f̃X,hX (Xj )f̃Y,hY (Yj )
X hY
−g
× g
=O
.
′
N2
f̃Z,hZ (Xj , Yj )
f̃Z,hZ (Xj , Yj )
Since hdXX hdYY = o(1),
E
N
X
i=2
g
f̃X,hX (Xi )f̃Y,hY (Yi )
f̃Z,hZ (Xi , Yi )
!
′
−g
′
f̃X,hX (Xi )f̃Y,hY (Yi )
′
f̃Z,hZ (Xi , Yi )
! !2
= o(1).
Combining all of these results with Jensen’s inequality gives
"
!
!#!2
2
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃X,hX (X1 )f̃Y,hY (Y1 )
3
− EZ1 g
E WN − WN
≤ E g
N
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
!
"
!#!2
′
′
′
′
f̃X,hX (X1 )f̃Y,hY (Y1 )
f̃X,hX (X1 )f̃Y,hY (Y1 )
3
− EZ′ g
g
+ E
′
′
′
′
1
N
f̃Z,hZ (X1 , Y1 )
f̃Z,hZ (X1 , Y1 )
!
!!!2
′
′
N
f̃X,hX (Xi )f̃Y,hY (Yi )
3 X
f̃X,hX (Xi )f̃Y,hY (Yi )
+ E
−g
g
′
N
(X
,
Y
)
(X
,
Y
)
f̃
f̃
i
i
i
i
Z,h
Z
Z,hZ
i=2
1
.
=o
N
Applying the Efron-Stein inequality gives that V [WN ] = o(1). Then by ChebyShev’s inequality, WN converges to zero in
probability. This completes the proof for the plug-in estimator.
For the weighted ensemble estimator, we can write
!
N
i
h
X
√
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
1 X
N G̃w − E G̃w = √
w(lX , lY ) g
N i=1 lX ∈LX ,lY ∈LY
f̃Z,hZ (l) (Xi , Yi )
"
!#!
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
−EZi g
f̃Z,hZ (l) (Xi , Yi )
!
N
X
X
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
1
EZi
w(lX , lY )g
+√
f̃Z,hZ (l) (Xi , Yi )
N i=1
lX ∈LX ,lY ∈LY
!
X
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
.
−E
w(lX , lY )g
f̃Z,hZ (l) (Xi , Yi )
lX ∈LX ,lY ∈LY
By the central limit theorem, the second term converges in distribution to a zero-mean Gaussian random variable with variance
!
X
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
.
w(lX , lY )g
V EZi
f̃
(X
,
Y
)
i
i
Z,h
(l)
Z
lX ∈LX ,lY ∈LY
From the previous results, the first term converges to zero in probability as it can be written as
!
N
X
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
1 X
w(lX , lY ) √
g
f̃Z,hZ (l) (Xi , Yi )
N i=1
lX ∈LX ,lY ∈LY
"
!#!
X
f̃X,hX (l) (Xi )f̃Y,hY (l) (Yi )
w(lX , lY )oP (1)
−EZi g
=
f̃Z,hZ (l) (Xi , Yi )
lX ∈LX ,lY ∈LY
= oP (1),
where oP (1) denotes convergence to zero in probability and we use the fact that linear combinations of random variables that
converge in probability individually to constants converge in probability to the linear combination of the constants. The proof
is finished with Slutsky’s theorem.
Note that the proof of Corollary 7 follows a similar procedure as the extension to the ensemble case.
| 10 |
End-to-end Training for Whole Image Breast
Cancer Diagnosis using An All Convolutional
Design
Li Shen
Icahn School of Medicine at Mount Sinai
New York, NY 10029
[email protected], [email protected]
Abstract
We develop an end-to-end training algorithm for whole-image breast cancer diagnosis based
on mammograms. It requires lesion annotations only at the first stage of training. After that,
a whole image classifier can be trained using only image level labels. This greatly reduced
the reliance on lesion annotations. Our approach is implemented using an all convolutional
design that is simple yet provides superior performance in comparison with the previous
methods. On DDSM, our best single-model achieves a per-image AUC score of 0.88 and
three-model averaging increases the score to 0.91. On INbreast, our best single-model
achieves a per-image AUC score of 0.96. Using DDSM as benchmark, our models compare
favorably with the current state-of-the-art. We also demonstrate that a whole image model
trained on DDSM can be easily transferred to INbreast without using its lesion annotations
and using only a small amount of training data.
Code and model availability: https://github.com/lishen/end2end-all-conv
Key Insights
•
•
•
•
1
A patch classification network can be converted into an end-to-end trainable whole image
network by modifying the input size and adding top layers.
The heatmap layer generated by the patch classifier’s output creates an information
bottleneck in the whole image network and shall be removed to improve performance.
The patch classifier is critical to the performance of the whole image classifier.
All convolutional design is superior to a mix of convolutional and fully connected layers.
Introduction
With the rapid advancement in machine learning and especially, deep learning in recent years, there is a keen
interest in the medical imaging community to apply these techniques to cancer screening. Recently, a group
of researchers, along with the Sage Bionetworks and the DREAM community organized a challenge for
competing teams to develop algorithms to improve breast cancer diagnosis using a large database of digital
mammograms (DM) [1]. The main purpose of the challenge is to predict the probability that a patient will
develop cancers within 12 months based on mammographic images. We also participated in the challenge [2]
and obtained a receiver operating characteristic curve (AUC) score of 0.65, which ranked #12 on the
leaderboard [3]. However, much can still be done to further improve the result.
Mammography based breast cancer diagnosis is a challenging problem that cannot be simply treated as a
normal image classification task. That is because the cancerous status of a whole image is determined by a
few small regions that need to be identified. For example, a mammogram is typically in the size of
4000x3000 (height by width) pixels while the cancerous lesion or the region of interest (ROI) can be as small
as 100x100 pixels. Resizing a large mammogram to 224x224 – a common choice used by the image
classification filed – will likely make the ROI hard to detect and/or classify. If a mammographic database
comes with the ROI annotations for all images, then the diagnostic problem can be conveniently solved as an
1
object detection and classification problem that has already been well studied in the computer vision field.
For example, the region-based convolutional neural network (R-CNN) approach [4] and its variants [5]–[7]
can be applied here. Many of the published works [8]–[17] assume the databases under study are fully
annotated with ROIs, thus the developed models cannot be transferred to another database that lacks ROI
annotations. It is unrealistic to require all databases to be fully annotated due to the time and monetary costs
to obtain ROI annotations, which require expertise from radiologists. For a survey of the public databases, see
[18]. If we want to build a breast cancer diagnostic system for production, we will have to consider the
situation where a few datasets are annotated with ROIs while most datasets are only annotated at whole image
level.
That is exactly the situation that the participants faced at the DM challenge [1] during the competitive phase.
Participants were allowed to use any public databases to develop their models. However, the DM challenge
database itself does not contain ROI annotations. Although the DM challenge boasts a large database with
more than 640,000 images from over 86,000 women, its dataset is highly imbalanced: with only 2000 or so
positive cases in the public train set, which significantly reduces the effective sample size. Therefore, directly
training a classification model on such a dataset from scratch is unlikely to give good performance.
In this study, we propose an approach that utilizes a fully annotated database to train a model to recognize
localized patches. After that, the model can be converted into a whole image classifier, which can be trained
end-to-end without ROI annotations. The approach is based on an idea that we independently formed (see
discussion in [2]) during the final stage of the DM competition, which we did not have time to implement
before the competition ended. Our original entry [2] to the DM challenge used a patch classification network,
built on a fully annotated public database, to predict for all the overlapping patches of a mammogram to
generate a so called heatmap. A random forest classifier is then used on top of the heatmap to produce the
final label. However, we observed a significant performance drop when we applied the model to the DM
challenge data because the patch classifier was developed on a public database that contains a different color
profile from the challenge data. It was difficult to transfer the patch classifier onto the challenge data since it
does not have ROI annotations. While the whole image classifier is not end-to-end trainable because it
involves a sliding window step. We later realized that end-to-end training can be achieved by using the
convolutional property to change the input size from patch to whole image, create larger feature maps, and
then add additional convolutional layers on top to produce the final labels. Surprisingly, we found that a
similar idea was employed by the top-performing team [19] after the final results came out. Encouraged by
their success, we want to continue to investigate this line of methods. We propose an all convolutional design
to construct the whole image classifier by simply stacking convolutional blocks on top of the patch classifier,
which provides very competitive results. We also present a pipeline to build a whole image classifier from
scratch and discuss the pros and cons of some important choices.
Notice that our method is somewhat related to two recent publications [20], [21] in breast cancer diagnosis.
They both claim to be an implementation of multi-instance learning (MIL) and use modified cost functions to
satisfy the MIL criterion. Both studies develop whole image models that can be end-to-end trained but they
completely ignore the existence of ROI annotations in databases and train models using only image level
labels. It seems they focus more on the top layers to summarize predictions from local patches than the patch
classifiers to make accurate predictions. However, we will show that the patch classifier is critical to the
performance of the whole image classifier.
The manuscript is organized as follows. In Section 2, we present our method to convert from a patch
classifier to a whole image classifier and the network training algorithm. Section 3 has two parts. In Section
3.1, we will develop whole image classifiers from scratch and evaluate them on a public database that has
ROI annotations. In Section 3.2, we will transfer the whole image models developed in Section 3.1 to another
database using only image level labels. In Section 4, we provide some discussion of the results and future
works. We hope, by virtue of this study, we can provide further insights into this promising method for whole
image classification for the medical imaging community.
2
Methods
2.1
Converting a classifier from recognizing patches to whole images
To perform classification or segmentation on large, complex images, a commonly used approach is to first
develop a classifier to recognize smaller patches, and then use the classifier to scan the whole image using a
sliding window. For example, this approach has been exploited to win the competition in automated detection
of metastatic breast cancer [22] and to segment neuronal membranes in microscopic images [23]. However,
here we want to utilize a patch classifier to initialize the weights of a whole image classification network so
2
Figure 1: The deep learning structure for converting a patch classifier into a whole image classifier by
adding convolutional layers on top. Shown is an example using two residual blocks with the same
structure of [512-512-1024] x 2. Two alternative strategies are also presented: one is to add heatmaps and
FC layers on top and the other is to add a random forest classifier on top of the heatmap.
that the latter can be trained without ROI annotations. In other words, we want the whole image classifier to
be finetuned in an end-to-end fashion without the explicit reliance on a patch image set. This may not seem to
be obvious on the surface but can actually be achieved by exploiting the properties of a convolutional neural
network (CNN). The key idea of a CNN is to learn a set of filters with small sizes and apply them to a much
larger input by using weight sharing, which greatly reduces the number of parameters to learn. Each time a
new convolutional layer is added to the network, the top most layer effectively uses all previous layers to
construct a set of new filters that perform more complex transformations (and also have larger effective
receptive fields) than its precursors. By this reasoning, we can add on top of a patch classification network a
new convolutional layer and turns the patch network itself into part of the new filters of the top layer.
Although the patch network is trained to recognize patches, the properties of convolution allow us to change
the input from patches to whole images. And by doing this modification, a convolutional operation in the top
layer becomes equivalent to applying the patch classifier on all patches of the whole image in one forward
propagation and then apply the convolutional operation on the patch classifier’s outputs. Indeed, we can
design any kind of operations on top of the patch classifier’s output and eventually connect them with image
level labels. For an illustration of this method, see Fig. 1. Notice that variable input size is a feature that is
supported by most major deep learning frameworks [24]–[27] and therefore can be easily implemented. The
advantage of this method is that we now have a network that takes whole images as input and image level
labels as output, cutting the need for a patch image set.
This training method can find many applications in medical imaging. Use breast cancer diagnosis as an
example. Databases with ROI annotations are rare and expensive to obtain. The largest public database for
mammograms – The Digital Database for Screening Mammography (DDSM) [28] – contains thousands of
images with pixel-level annotations, which can be exploited to construct a meaningful patch classifier. Once
the patch classifier is converted into a whole image classifier, it can be finetuned on another database using
only whole image level labels. This way we can significantly reduce the requirement for ROI annotations.
2.2
Network structures for top layers
For patch classification, we can use any networks – such as the 16-layer VGG networks (VGG16) [29] and
the 50-layer residual networks (Resnet50) [30] – that are already proven to be excellent for image
3
classifications. Therefore, we want to focus more on the structure of the top layers in this study. We propose
an all convolutional design that makes no use of the fully connected (FC) layers. Briefly, we first remove the
last few layers of a patch network until the last convolutional layer. We then add convolutional layers, such as
the residual and VGG blocks, on top of the last convolutional layer so that the feature map reaches a proper
size. Lastly, we add a global average pooling layer and an output to complete the network. Notice that we
deliberately discard the output layer of the patch network. That is because the patch classifier typically has a
small output, which may create an “information bottleneck” that prevents the top layers from fully utilizing
the information provided by the patch network. To understand it better, let’s use Resnet50 as an example.
Assume it is trained to recognize a patch into one of the five categories of: background, malignant mass,
benign mass, malignant calcification and benign calcification. Its last convolutional layer has a dimension of
2048. If we add a residual block of dimension of 512 after the patch classifier’s output, it creates a bottleneck
structure of “2048-5-512” in-between the patch network and the top layers. While a network without the
patch classifier’s output will have a structure of “2048-512”, allowing the top layers to use more information
from the patch network.
In contrast to our approach, the top performing team of the DM challenge utilizes the patch classifier’s output
layer to construct a so called heatmap to represent the likelihood of a patch on an input image being one of
the five categories [19]. In the first version of this manuscript, we mistakenly used softmax activation on this
heatmap to generate a probabilistic output for the top layers. This nonlinear transformation would certainly
impede gradients flow. After confirming with the top performing team (personal communications), we remove
the softmax activation from the heatmap to construct networks similar to theirs. However, as we implement
the whole image classifiers, we find the totally inactivated heatmap may cause the top layers to be badly
initialized, leading to divergence. We hypothesize that is because the fully unbounded values can make the
top layers saturate too early. Therefore, we use relu (which truncates the negative values) on the heatmaps
instead in this study and find the convergence to improve. Another important difference between our method
and the top performing team’s method is the use of FC layers. We argue that FC layers are a poor fit for
image recognition because they require a convolutional layer’s output to be flattened, which eliminates all the
spatial information. Our design is fully convolutional and preserves spatial information at every stage of the
network. We will compare the two different strategies in the following sections.
2.3
Network training
There are two parts in training a whole image classifier from scratch. The first part is to train a patch
classification network. We compare the networks that have their weights pretrained on the ImageNet [31]
database with randomly initialized ones. Pretraining may help in speedup learning as well as improving
generalization of the networks. For a pretrained network, notice that the bottom layers represent primitive
features that tend to be invariant across datasets. While the top layers represent higher representations that are
more related to the final labels and therefore need to be trained more aggressively. This demands a higher
learning rate for the top layers than the bottom layers. However, layer-wise learning rate adjustment is not
available in Keras [24] – the deep learning framework that we use for this work. Therefore, we develop a 3stage training strategy that freeze the parameter learning for all layers except the last one and progressively
unfreeze the parameter learning from top to bottom and decrease the learning rate at the same time. The
details of this training strategy is as follows:
1.
2.
3.
Set learning rate to 1e-3 and train the last layer for 3 epochs.
Set learning rate to 1e-4, unfreeze the top layers and train for 10 epochs, where the top layer number
is set to 46 for Resnet50, 11 for VGG16 and 13 for VGG19.
Set learning rate to 1e-5, unfreeze all layers and train for 37 epochs.
In the above, an epoch is defined as a sweep through the train set. The total number of epochs is 50 and early
stopping (set to 10 epochs) is used if training does not improve the validation loss. For randomly initialized
networks, we use a learning rate of 1e-3. Adam [32] is used as the optimizer and the batch size is set to 32.
We also adjust the sample weights within a batch to keep the classes balanced.
The second part is to create and train a whole image classifier from the patch classifier. It is done by first
altering the input size from patch to whole image, which proportionally increases the feature map size for
every convolutional layer. In this study, we fix the patch size to be 224x224 and the whole image size to be
1152x896. Use Resnet50 as an example. For the patch network, the last convolutional layer has a feature map
of size 7x7. While for the whole image network, the same layer’s feature map size becomes 36x28. In our all
convolutional design, we add on top two additional residual blocks each with a stride of 2 to reduce the
feature map size to 9x7 before the global average pooling and softmax output. Alternatively, 7x7 average
pooling can be applied to the convolutional layer to produce a feature map of size 30x22. The weight matrix
4
of the output layer of the patch network can then be copied and applied onto this feature map to generate the
heatmap (the same size of 30x22) but here we replace the softmax with relu to facilitate gradients flow.
Similar to what the top performing team did [19], the heatmap (after max pooling) is first flattened and then
two FC layers and a shortcut connection are added on top before output. Similar to the patch network
training, we also develop a 2-stage training strategy to avoid unlearning the important features at the patch
network. After some trial-and-errors, we decide to use smaller learning rates than usual to prevent the training
from diverging. The details of the 2-stage training are as follows:
1.
2.
Set learning rate to 1e-4, weight decay to 0.001 and train the newly added top layers for 30 epochs.
Set learning rate to 1e-6, weight decay to 0.01 and train all layers for 20 epochs.
Due to memory constraint, we use a small batch size of 2 for whole image training. The other parameters are
the same as the patch classifier training.
To make it easier to convert a patch classifier into a whole image classifier, we calculate the pixel-wise mean
for the mammograms on the train set and use this value for pixel-wise mean centering for both patch and
whole image training. No other preprocessing is applied. To compensate for the lack of sample size, we use
the same data augmentation on-the-fly for both patch and whole image training. The followings are the
random image transformations we use: horizontal and vertical flips, rotation in [-25, 25] degrees, shear in [0.2, 0.2] radians, zoom in [0.8, 1.2] ratio and channel shift in [-20, 20] pixel values.
3
Results
3.1
Developing patch and whole image classifiers on DDSM
3.1.1
Setup and processing of the dataset
The DDSM [28] contains digitized images from scanned films but uses a lossless-JPEG format that is
obsolete. We use a modernized version of the database called CBIS-DDSM [33] which contains images that
have already been converted into the standard DICOM format. Our downloaded dataset on Mar 21, 2017 from
CBIS-DDSM’s website contains the data for 1249 patients or 2584 mammograms, which represents a subset
of the original DDSM database. It includes the cranial cardo (CC) and media lateral oblique (MLO) views for
most of the screened breasts. Using both views for prediction shall provide better result than using each view
separately. However, we will treat each view as a standalone image in this study due to the limitation in
sample size. Our purpose is to predict the malignancy status for an entire mammogram. We perform an 85-15
split on the dataset into train and test sets based on the patients. For the train set, we further set aside 10% of
the patients as the validation set. The splits are done in a stratified fashion so that the positive and negative
patients have the same proportions in the train, validation and test sets. The total numbers of images in the
train, validation and test sets are: 1903, 199 and 376, respectively.
The DDSM database contains the pixel-level annotation for the ROIs and their pathology confirmed labeling:
benign or malignant. It further contains the type of each ROI, such as calcification or mass. Most
mammograms contain only one ROI while a small portion contain more than one ROIs. We first convert all
mammograms into PNG format and resize them into 1152x896. We then create several patch image sets by
sampling image patches from ROIs and background regions. All patches have the same size of 224x224. The
first dataset (referred to as S1) is from patches that are centered on each ROI and a random background patch
from the same mammogram. The second dataset (S10) is from 10 sampled patches around each ROI with a
minimum overlapping ratio of 0.9 and the same number of background patches from the same mammogram.
The third dataset (S30) is created the same way as S10 but we increase the number of patches to 30 for both
ROI and background per mammogram. According to the annotations of the ROIs, all patches are classified
into five different categories: background, calcification-benign, calcification-malignant, mass-benign and
mass-malignant.
3.1.2
Development of patch classifiers
To train a network to classify a patch into the five categories, we use three popular convolutional network
structures: the Resnet50 [30] and the VGG16 and VGG19 [29]. We train the networks on the S10 and S30 sets
using the 3-stage training strategy for 50 epochs in total. On the S1 set, we increase the total number of
epochs to 200 and adjust the numbers for each training stage accordingly. We evaluate the models using test
accuracy of the five classes. The results are summarized in Table 1. On the S1 set, both randomly initialized
and pretrained Resnet50 models achieve very high accuracies but the pretrained network converges much
faster: cutting down the number of epochs by half. Notice that the S1 set is intrinsically easier to classify than
5
Table 1: Test accuracy of the patch classifiers using the
Resnet50, VGG16, and VGG19. #Epochs indicates the epoch
when the best validation accuracy has been reached.
Model
Resnet50
Resnet50
Resnet50
Resnet50
VGG16
VGG19
Resnet50
VGG16
VGG19
Pretrained
N
Y
N
Y
Y
Y
Y
Y
Y
Patch set
S1
S1
S10
S10
S10
S10
S30
S30
S30
Accuracy
0.97
0.99
0.63
0.89
0.84
0.79
0.91
0.86
0.89
#Epochs
198
99
24
39
25
15
23
22
24
the S10 and S30 sets. On the S10 set, the pretrained Resnet50 performs much better than the randomly
initialized Resnet50: a 0.26 difference in test accuracy. We therefore conclude that pretraining can help us
train networks faster and produce better models. We use pretrained networks for the rest of the study. On both
S10 and S30 sets, Resnet50 outperforms VGG16 and VGG19. VGG16 performs better than VGG19 on the
S10 dataset but the order is reversed on the S30 set.
3.1.3
Converting patch to whole image classifiers
We now convert the patch classifiers into whole image classifiers by testing many different configurations for
the top layers. We evaluate different models using the per-image AUC scores on the test set. We first test the
conversion based on the Resnet50 patch classifiers. The results are summarized in Table 2. Notice that in the
original residual network design [30], the authors increased the dimension for each residual block from the
previous block by two times to compensate for the reduction in feature map size. This avoids creating
“bottlenecks” in computation. However, we are not able to do the same here due to memory constraint. In our
first test, we use two residual blocks with the same dimension and a bottleneck design (see [30]) of [512-5122048] without repeating the residual units. This gives an AUC score of 0.85 for the Resnet50 trained on the
S10 set and 0.63 for the Resnet50 trained on the S1 set, a large 0.22 discrepancy in score. We hypothesize
that since the S10 set is 10x larger than the S1 set, it contains much more information about the variations of
benign and malignant ROIs and their neighborhood regions. This information helps a lot in training a whole
image classifier to locate and classify cancer-related regions for diagnosis. We then focus only on patch
classifiers trained on the S10 and S30 sets for the rest of the study. We further vary the design of the residual
blocks by reducing the dimension of the last layer in each residual unit to 1024, which allows us to repeat
each residual unit twice without exceeding memory constraint, i.e. two [512-512-1024]x2 blocks. This design
slightly increases the score to 0.86. We also reduce the dimensions of the first and second blocks and find the
scores to drop by only 0.02-0.03. Therefore, we conclude that the dimensions for the newly added residual
blocks are not critical to the performance of the whole image classifiers.
We then test the conversion based on the VGG16 patch classifier trained on the S10 set. The newly added
VGG blocks all use 3x3 convolutions with batch normalization (BN). The results are summarized in Table 3.
We find that the VGG structure is more likely to suffer from overfitting than the residual structure. However,
this can be alleviated by reducing the model complexity of the newly added VGG blocks. To illustrate this,
we plot the train and validation losses for two VGG configurations during training (Fig. 2): one is two VGG
blocks with the same dimensions of 512 and 3x repetitions and the other has the same dimensions but no
repetition; the first VGG structure contains more convolutional layers and therefore has higher complexity
than the second one. It can be seen that the first VGG network has difficulty in identifying a good local
minimum and suffers badly from overfitting. While the second VGG network has smoother loss curves and
smaller differences between train and validation losses. We further reduce the dimensions of the VGG blocks
and find the scores to decrease by only a small margin. This is in line with the results on the Resnet50 based
models. Overall, the Resnet50 based whole image classifiers perform better than the VGG16 based ones
(Tables 2 & 3). In addition, the Resnet50 based models seem to achieve the best validation score earlier than
the VGG16 based models. To understand whether this performance difference is caused by the patch network
on the bottom or the newly added top layers, we add two residual blocks on top of the VGG16 patch network
to create a “hybrid” model. It gives a score of 0.81, which is in line with the VGG16 based networks but a
few points below the best Resnet50 based models. This suggests the patch network part is more important
6
Table 2: Per-image test AUC scores of the whole image classifiers using the Resnet50 as patch classifiers.
#Epochs indicates the epoch when the best validation score has been reached. → indicates score change from
using softmax to relu on heatmaps.
Patch
set
S1
S10
S10
S10
S10
S10
S30
S30
Block1
S10
S10
[512-512-1024] x 2
[64-64-256] x 2
S10
S10
S10
Heatmap pool
size
5x5
2x2
1x1
S10
[512-512-2048] x 1
[512-512-2048] x 1
[512-512-1024] x 2
[256-256-256] x 1
[256-256-256] x 3
[256-256-512] x 3
[512-512-1024] x 2
[256-256-256] x 1
Add residual blocks on top
Block2
Single-model
AUC
[512-512-2048] x 1
0.63
[512-512-2048] x 1
0.85
[512-512-1024] x 2
0.86
[128-128-128] x 1
0.84
[128-128-128] x 3
0.83
[128-128-256] x 3
0.84
[512-512-1024] x 2
0.81
[128-128-128] x 1
0.84
Add heatmap and residual blocks on top
[512-512-1024] x 2
0.80
[128-128-512] x 2
0.81
Add heatmap and FC layers on top
FC1 FC2
Augmented
AUC
NA
NA
0.88
NA
NA
NA
NA
NA
#Epochs
NA
NA
47
41
64
32
NA
0.67→0.73
512
256
NA
0.68→0.72
2048 1024
NA
0.74→0.65
Probabilistic heatmap + random forest classifier
#trees=500, max depth=9, min samples
0.73
NA
split=300
35
20
25
25
17
48
49
40
28
47
43
Table 3: Per-image test AUC scores of the whole image classifiers using the VGG16 (S10) and VGG19 (S30)
as patch classifiers. #Epochs indicates the epoch when the best validation score has been reached. → indicates
score change from using softmax to relu on heatmaps.
Add VGG blocks on top (* is residual block)
Patch
Block1
Block2
Single-model
Augmented
set
AUC
AUC
S10
512 x 3
512 x 3
0.71
NA
S10
512 x 1
512 x 1
0.83
0.86
S10
256 x 3
128 x 3
0.78
NA
S10
256 x 1
128 x 1
0.80
NA
S10
128 x 1
64 x 1
0.82
NA
S10
*[512-512-1024] x 2
*[512-512-1024] x 2
0.81, 0.851
0.881
S30
512 x 1
512 x 1
0.75
NA
S30
128 x 1
64 x 1
0.75
NA
Add heatmap and FC layers on top
Heatmap pool size FC1 FC2
S10
5x5
64
32
NA
0.66→0.71
S10
2x2
512
256
NA
0.71→0.68
S10
1x1
2048 1024
NA
0.78→0.69
1
Result obtained from extended model training (See text for more details).
than the top layers to whole image classification performance.
7
#Epochs
47
44
30
35
46
46
15
1
26
27
50
Figure 2: Train and validation loss curves of two VGG structures: one is
more complex than the other and suffers from overfitting.
Encouraged by the performance leap going from the S1 to the S10 set, we choose two residual configurations
and two VGG configurations and add them on top of the Resnet50 and the VGG19 patch classifiers trained on
the S30 set, respectively. We expect to obtain even further performance gains. Surprisingly, the performance
decreases for both Resnet50 and VGG19 based models (Tables 2 & 3). Indeed, overfitting happens for the
VGG19 based model training at early stage and the validation loss stops improving further. It is clear that a
good patch classifier is critical to the performance of a whole image classifier. However, merely increasing
the number of sampled patches does not necessarily lead to better whole image classifiers. We leave to future
work to study how to sample patches more efficiently and effectively to help building better image classifiers.
We now test the configurations that are inspired by the top performing team [19]: add a heatmap followed by
two FC layers and a shortcut connection on top of the patch classifier (Fig. 1). We use the Resnet50 and
VGG16 patch classifiers trained on the S10 set. We choose from a few different filter sizes for the max
pooling layer after the heatmap and adjust FC layer sizes accordingly so that it gradually decreases the layer
size until output. We also add a shortcut between the flattened heatmap and the output, implemented by an FC
layer. The results are summarized in Tables 2 & 3. In the first version of the manuscript, we used heatmaps
with softmax activation to generate probabilistic outputs and we found the max pooling layer to be
destructive to the heatmap features and therefore harm the scores. In this version, we use relu on heatmaps
instead and find the trend to be almost reversed: more pooling leads to better score. This could be related to
the difficulty in training when heatmap and FC layers are used. Overall, the performance scores of the two
different activations are on par with each other. Therefore, the heatmap activation is not critical to the whole
image classifier’s performance. Obviously, image classifiers from this design underperform the ones from the
all convolutional design. Notice that this implementation is not an exact replicate of the work in [19]. They
have used a modified VGG network by using 6 instead of 5 VGG blocks. Their design is more like a hybrid
of the two designs compared here by using an additional convolutional block followed by FC layers.
We also want to find out whether the heatmaps are of any use or simply create a bottleneck between the patch
network and the top layers. We add a heatmap (with relu) on top of the Resnet50 patch classifier and then add
two residual blocks with [512-512-1024]x2 design. This network gives a score of 0.80 (Table 2), which is
lower than the same design without the heatmap. To exclude the possibility that the top residual blocks may
be overfit due to the small heatmap as input, we also add two residual blocks with reduced size: [64-64256]x2 and [128-128-512]x2 on top of the heatmap, which slightly improves the score to 0.81. We conclude
that heatmaps shall be removed to facilitate information flow in training whole image networks.
Finally, we test the same strategy as [2], [22]: set a cutoff to binarize the heatmaps; extract regional features;
and train a random forest classifier based on the features (Fig. 1). The Resnet50 trained on the S10 set is used
here as the patch classifier. For heatmaps, we use softmax to obtain probabilistic outputs to help setting
cutoffs. We use four different cutoffs – 0.3, 0.5, 0.7, 0.9 and combine the features. We use a random forest
8
Figure 3: Comparison of two example mammograms from DDSM and INbreast.
classifier with 500 trees. This gives a test score of 0.73 (Table 2), which seems to be in line with our score of
0.65 in the DM challenge. There are two major reasons for the increase of the score in this study: 1. the patch
classifier in this study has been given more training time and produces better accuracy; 2. the train and test
sets are both from the same database. Clearly, this method performs worse than the end-to-end trained all
convolutional networks.
3.1.4
Augmented prediction and model averaging
We pick a few models and use inference-level augmentation by doing horizontal and vertical flips to create four
predictions per model and take an average. Three best performing models: Resnet50 with two [512-512-1024]x2
residual blocks, VGG16 with two 512x1 VGG blocks and VGG16 with two [512-512-1024]x2 residual blocks on
top (hybrid) are selected. We perform extended training on the hybrid model and improve the single-model AUC
score to 0.85. The reason of doing that will be explained in section 3.2.2. The augmented predictions for the three
models improve the AUC scores from 0.86→0.88, 0.83→0.86 and 0.85→0.88, respectively (Tables 2 & 3). The
average of the three augmented predictions gives an AUC score of 0.91. Notice that in [3], they created four network
models each with augmented predictions and predicted for both CC and MLO views for each breast and took an
average of all predictions, which is more aggressive than the model averaging used in this study. We deliberately
avoid doing too much model averaging so that we can better understand the pros and cons of different network
structures.
3.2
Transfer learning for whole image classification on INbreast
3.2.1
Setup and processing of the dataset
Once a whole image classifier has been developed, we can finetune it on another database without using ROI
annotations. This is a key advantage offered by an end-to-end trained classifier. The INbreast [18] dataset is another
public database for mammograms. It is more recent than the DDSM and contains full-field digital images as
opposed to digitized images from films. These images have different color profiles from the images of DDSM,
which can be visually confirmed by looking at two example images from the two databases (Fig. 3). Therefore, this
is an excellent source to test the transferability of a whole image classifier from one database to another. The
INbreast database contains 115 patients and 410 mammograms including both CC and MLO views. We will treat
the two views as separate samples due to sample size limitation. It includes the BI-RADS readings for the images
but lacks biopsy confirmation. Therefore, we manually assign all images with BI-RADS readings of 1 and 2 as
negative samples; 4, 5 and 6 as positive samples; and ignore BI-RADS readings of 3 since it has no clear designation
of negative or positive class. This excludes 12 patients or 23 mammograms from further analysis. Notice that the
categorization based on the BI-RADS readings makes the INbreast dataset inherently easier to classify than the
DDSM dataset. This is because two mammograms with different BI-RADS readings are already visually discernible
according to radiologists, which biases the labeling. We perform a 70-30 split on the dataset into train and validation
sets based on the patients in a stratified fashion. The total numbers of images in the train and validation sets are 280
and 107, respectively. We use exactly the same processing steps on the INbreast images as the DDSM images.
3.2.2
Effectiveness and efficiency of transfer learning
9
Table 4: Transfer learning efficiency with different train set
sizes. Shown are per-image validation AUC scores.
#Patients
#Images
Resnet50
VGG16
20
30
40
50
60
79
117
159
199
239
0.78
0.78
0.82
0.80
0.84
0.87
0.90
0.90
0.93
0.95
VGG-Resnet
Hybrid
0.89
0.90
0.93
0.93
0.91
Although the INbreast database contains ROI annotations, we simply ignore them to test the transferability of a
whole image classifier. We directly finetune the whole image networks on the train set and evaluate the model
performance using per-image validation AUC scores. Two best models – the Resnet50 + two [512-512-1024]x2
residual blocks and the VGG16 + two 512x1 VGG blocks – are used for transfer learning. We use Adam [32] as the
optimizer and set the learning rate at 1e-6; the number of epochs at 200 and the weight decay to be 0.01. The
finetuned Resnet50 based model achieves a score of 0.84. Surprisingly, the finetuned VGG16 based model achieves
a score of 0.92, better than the Resnet50 based models. Yaroslav argued in [19] that the VGG structure is better
suited than the residual structure for whole image classification because the residual networks reduce the feature
map sizes too aggressively to damage the ROI features at the first few layers. Based on this, the underperformance
of the Resnet50 based models is likely due to the few bottom most layers, not the top residual blocks. To validate
that, we use the hybrid model, which has the VGG16 patch classifier at the bottom and two residual blocks on top,
to finetune on the INbreast dataset. This hybrid model achieves a very high score of 0.95, which proves the point.
However, if the VGG structure is indeed better than the residual structure as the bottom layers, how does Resnet50
beat VGG16/19 on the DDSM data? We reason that is because the VGG networks need to be trained longer than the
residual networks to reach their full potentials. This is in line with the observation that the VGG16 based whole
image classifiers achieve the best validation scores on DDSM at later stage than the Resnet50 based ones (Tables 2
& 3). Our choice of 50 epochs on DDSM is mainly driven by computational resource limitation. Since the residual
networks are powered by BN and shortcuts to speed up training, they are able to converge better than the VGG
networks within the same 50 epochs. To prove that, we perform another run of model training for the hybrid model
on the DDSM data with 200 additional epochs. The model improves the test score from 0.81 to 0.85 (Table 3),
which is as good as the best Resnet50 based models. We did not perform additional training for the other VGG16
based models due to computational constraint.
We also want to find out how much data is required to finetune a whole image classifier to reach satisfactory
performance. This has important implications in practice since obtaining labels, even at the whole image level, can
be expensive. We sample a subset with 20, 30, …, 60 patients from the train set for finetuning and evaluate the
model performance on the same validation set (Table 4). With as little as 20 patients or 79 images, the VGG16
based and the hybrid models can achieve scores of 0.87 and 0.89, respectively. The scores seem to quickly saturate
as we increase the train set size. We hypothesize that the “hard” part of the learning is to recognize the textures of
the benign and malignant ROIs while the “easy” part is to adjust to different color profiles. This quick adjustment
can be a huge advantage for the end-to-end trained whole image networks. In future works, a whole image classifier
can be finetuned to make predictions on another database with only a small amount of training data. This greatly
reduce the burden of train set construction.
Finally, with augmented prediction, the VGG16 based model improves the validation score from 0.92→0.94 and the
hybrid model improves the score from 0.95→0.96. The average of the two augmented models gives a score of 0.96.
Including the Resnet50 based model in model averaging does not improve the score.
4
Discussion
We have shown that accurate whole-image breast cancer diagnosis can be achieved with a deep learning
model trained in an end-to-end fashion that is independent from ROI annotations. The network can be based
on an all convolutional design that is simple yet powerful. It can be seen that high-resolution mammograms
are critical to the accuracy of the diagnosis. However, large image size can easily lead to an explosion of
memory requirement. If more GPU memory becomes available in the future, we shall return to this problem
and train our models with larger image sizes or even use the original resolution without downsizing. This will
provide much more details of the ROIs and can potentially improve the performance.
10
The decrease of the scores from the models based on the S10 to the S30 set is a surprise. Yet it indicates the
intricacy of training a whole image classifier. More research is needed to make the whole image training more
robust against divergence and overfitting, especially when the train set is not large. Patch sampling can also
be made more efficient by focusing more on the difficult cases than the easy ones.
Our result supports Yaroslav’s argument [19] that the VGG networks are more suitable than the residual
networks for breast cancer diagnosis. However, we also demonstrate the superiority of the residual networks
to the VGG networks in several aspects. The only problem with the residual structure is the first few layers
that may destroy the fine details of the ROIs. In future works, we can modify the original residual network to
make the first few layers less aggressive in reducing the feature map sizes. That should lead to improved
performance for the residual networks.
Computational environment
The research in this study is carried out on a Linux workstation with 8 CPU cores and a single NVIDIA
Quadro M4000 GPU with 8GB memory. The deep learning framework is Keras 2 with Tensorflow as the
backend.
Acknowledgements
We would like to thank Gustavo Carneiro and Gabriel Maicas for providing comments on the manuscript;
Quan Chen for discussion on the use of INbreast data.
References
[1] A. D. Trister, D. S. M. Buist, and C. I. Lee, “Will Machine Learning Tip the Balance in Breast Cancer
Screening?,” JAMA Oncol, May 2017.
[2] L. Shen, “Breast cancer diagnosis using deep residual nets and transfer learning,” 2017. [Online]. Available:
https://www.synapse.org/#!Synapse:syn9773182/wiki/426912. [Accessed: 23-Aug-2017].
[3] “The Digital Mammography DREAM Challenge - Final Ranking of Validation Round.” [Online]. Available:
https://www.synapse.org/#!Synapse:syn4224222/wiki/434546. [Accessed: 23-Aug-2017].
[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and
Semantic Segmentation,” in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern
Recognition, Washington, DC, USA, 2014, pp. 580–587.
[5] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp.
1440–1448.
[6] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region
proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
[7] J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object Detection via Region-based Fully Convolutional Networks,”
arXiv:1605.06409 [cs], May 2016.
[8] A. R. Jamieson, K. Drukker, and M. L. Giger, “Breast image feature learning with adaptive deconvolutional
networks,” in Proc. SPIE, 2012, vol. 8315, pp. 831506-831506–13.
[9] J. Arevalo, F. A. González, R. Ramos-Pollán, J. L. Oliveira, and M. A. G. Lopez, “Convolutional neural
networks for mammography mass lesion classification,” in 2015 37th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBC), 2015, pp. 797–800.
[10] G. Carneiro, J. Nascimento, and A. P. Bradley, “Unregistered Multiview Mammogram Analysis with Pretrained Deep Learning Models,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI
2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, N. Navab, J.
Hornegger, W. M. Wells, and A. F. Frangi, Eds. Cham: Springer International Publishing, 2015, pp. 652–660.
[11] N. Dhungel, G. Carneiro, and A. P. Bradley, “Automated Mass Detection in Mammograms Using Cascaded
Deep Learning and Random Forests,” in 2015 International Conference on Digital Image Computing:
Techniques and Applications (DICTA), 2015, pp. 1–8.
[12] M. G. Ertosun and D. L. Rubin, “Probabilistic visual search for masses within mammography images using
deep learning,” in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2015, pp.
1310–1315.
[13] A. Akselrod-Ballin, L. Karlinsky, S. Alpert, S. Hasoul, R. Ben-Ari, and E. Barkan, “A Region Based
Convolutional Network for Tumor Detection and Classification in Breast Mammography,” in Deep Learning
and Data Labeling for Medical Applications: First International Workshop, LABELS 2016, and Second
International Workshop, DLMIA 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 21,
2016, Proceedings, G. Carneiro, D. Mateus, L. Peter, A. Bradley, J. M. R. S. Tavares, V. Belagiannis, J. P.
Papa, J. C. Nascimento, M. Loog, Z. Lu, J. S. Cardoso, and J. Cornebise, Eds. Cham: Springer International
Publishing, 2016, pp. 197–205.
[14] J. Arevalo, F. A. González, R. Ramos-Pollán, J. L. Oliveira, and M. A. Guevara Lopez, “Representation
learning for mammography mass lesion classification with convolutional neural networks,” Computer Methods
and Programs in Biomedicine, vol. 127, pp. 248–257, Apr. 2016.
[15] Daniel Lévy and A. Jain, “Breast Mass Classification from Mammograms using Deep Convolutional Neural
Networks,” arXiv preprint arXiv:1612.00542, 2016.
[16] N. Dhungel, G. Carneiro, and A. P. Bradley, “The Automated Learning of Deep Features for Breast Mass
Classification from Mammograms,” in Medical Image Computing and Computer-Assisted Intervention –
MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II, S.
Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, Eds. Cham: Springer International Publishing,
2016, pp. 106–114.
[17] A. S. Becker, M. Marcon, S. Ghafoor, M. C. Wurnig, T. Frauenfelder, and A. Boss, “Deep Learning in
Mammography: Diagnostic Accuracy of a Multipurpose Image Analysis Software in the Detection of Breast
Cancer,” Invest Radiol, Feb. 2017.
[18] I. C. Moreira, I. Amaral, I. Domingues, A. Cardoso, M. J. Cardoso, and J. S. Cardoso, “INbreast: Toward a
Full-field Digital Mammographic Database,” Academic Radiology, vol. 19, no. 2, pp. 236–248, Feb. 2012.
[19] Yaroslav Nikulin, “DM Challenge Yaroslav Nikulin (Therapixel).” [Online]. Available:
https://www.synapse.org/#!Synapse:syn9773040/wiki/426908. [Accessed: 23-Aug-2017].
[20] W. Zhu, Q. Lou, Y. S. Vang, and X. Xie, “Deep Multi-instance Networks with Sparse Label Assignment for
Whole Mammogram Classification,” arXiv:1705.08550 [cs], May 2017.
[21] Yoni Choukroun, Ran Bakalo, Rami Ben-Ari, Ayelet Askelrod-Ballin, Ella Barkan, and Pavel Kisilev,
“Mammogram Classification and Abnormality Detection from Nonlocal Labels using Deep Multiple Instance
Neural Network,” presented at the Eurographics Workshop on Visual Computing for Biology and Medicine,
2017.
[22] D. Wang, A. Khosla, R. Gargeya, H. Irshad, and A. H. Beck, “Deep Learning for Identifying Metastatic Breast
Cancer,” arXiv:1606.05718 [cs, q-bio], Jun. 2016.
[23] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, “Deep Neural Networks Segment Neuronal
Membranes in Electron Microscopy Images,” in Advances in Neural Information Processing Systems 25, F.
Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 2843–2851.
[24] F. Chollet and others, Keras. GitHub, 2015.
[25] Martín Abadi et al., TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015.
[26] Y. Jia et al., “Caffe: Convolutional Architecture for Fast Feature Embedding,” arXiv:1408.5093 [cs], Jun.
2014.
[27] T. Chen et al., “MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed
Systems,” arXiv:1512.01274 [cs], Dec. 2015.
[28] Michael Heath, Kevin Bowyer, Daniel Kopans, Richard Moore, and W. Philip Kegelmeyer, “The Digital
Database for Screening Mammography,” in Proceedings of the Fifth International Workshop on Digital
Mammography, 2001, pp. 212–218.
[29] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,”
arXiv:1409.1556 [cs], Sep. 2014.
[30] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv:1512.03385 [cs],
Dec. 2015.
[31] O. Russakovsky et al., “ImageNet Large Scale Visual Recognition Challenge,” Int J Comput Vis, vol. 115, no.
3, pp. 211–252, Dec. 2015.
[32] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980 [cs], Dec. 2014.
[33] R. S. Lee, F. Gimenez, A. Hoogi, and D. Rubin, “Curated Breast Imaging Subset of DDSM,” The Cancer
Imaging Archive, 2016.
Towards the 1G of Mobile Power Network: RF, Signal and
System Designs to Make Smart Objects Autonomous
Bruno Clerckx (Imperial College London, UK), Alessandra Costanzo (University of Bologna, Italy), Apostolos Georgiadis (Heriot-Watt University, UK), and Nuno Borges Carvalho (University of Aveiro, Portugal)
Email: [email protected], [email protected], [email protected],
[email protected]
Thanks to the quality of the technology and the existence of international standards,
wireless communication networks (based on radio-frequency (RF) radiation) nowadays underpin the global functioning of our societies. The pursuit of higher spectral efficiency has been ongoing for about four decades, with 5G expected in 2020. 5G and beyond will see the
emergence of trillions of low-power autonomous wireless devices for applications such as
ubiquitous sensing through an Internet of Things (IoT).
Wireless is however more than just communications. For very short range, wireless
power via Inductive Power Transfer is a reality with available products and standards (Wireless
Power Consortium, Power Matters Alliance, Alliance for Wireless Power, Rezence). Wireless
Power via RF (as in wireless communication) on the other hand could be used for longer range
in two different ways, commonly referred to as wireless energy harvesting (WEH) and (far-field or radiative) wireless power transfer/transmission (WPT). While WEH assumes RF
transmitters are exclusively designed for communication purposes whose ambient signals can
be harvested, WPT relies on dedicated sources designed exclusively for wireless power
delivery. Wireless Power via RF has long been regarded as a possibility for energising low-power devices, but it is only recently that it has become recognised as feasible. Indeed,
according to [Hemour:2014], at a fixed computing load, the amount of requested energy falls
by a factor of two every year and a half due to the evolution of the electrical efficiency of
computer technology. This explains why relying on wireless power to perform meaningful
computation tasks at reasonable distances only became feasible in the last few years and
justifies this recent interest in wireless power.
Recent research advocates that the future of wireless networking goes beyond
conventional communication-centric transmission. In the same way as wireless (via RF) has
disrupted mobile communications for the last 40 years, wireless (via RF) will disrupt the
delivery of mobile power. However, current wireless networks have been designed for
communication purposes only. While mobile communication has become a relatively mature
technology, currently evolving towards its fifth generation, the development of mobile power is
in its infancy and has not even reached its first generation. Not a single standard on mobile
power and far-field WPT exists.
Despite being subject to the same regulations on exposure to electromagnetic fields as wireless
communication, wireless power brings numerous new opportunities. It enables proactive and
controllable energy replenishment of devices for genuine mobility so that they no longer
depend on centralised power sources. Hence, no wires, no contact, no (or at least reduced)
batteries (and therefore smaller, lighter and compact devices), an ecological solution with no
production/maintenance/disposal of trillions of batteries, a prolonged lifetime and a perpetual,
predictable and reliable energy supply as opposed to ambient energy-harvesting technologies
(solar, thermal, vibration). This is very relevant in future networks with ubiquitous and
autonomous low-power and energy limited devices, device-to-device communications and the
Internet-of-Things (IoT) with massive connections.
Interestingly, radio waves carry both energy and information simultaneously.
Nevertheless, traditionally, energy and information have been treated separately and have
evolved as two independent fields in academia and industry, namely wireless power and
wireless communication, respectively. This separation has two consequences: 1) current
wireless networks pump RF energy into the free space (for communication purposes) but do
not make use of it for energizing devices and 2) providing ubiquitous mobile power would
require the deployment of a separate network of dedicated energy transmitters. Imagine
instead a wireless network where information and energy flow together through the wireless
medium. Wireless communication, or Wireless Information Transfer (WIT), and WPT would
refer to two extreme strategies respectively targeting communication-only and power-only. A
unified Wireless Information and Power Transfer (WIPT) design would have the ability to softly
evolve in between those two extremes to make the best use of the RF spectrum/radiations
and network infrastructure to communicate and energize, and hence outperform traditional
systems relying on a separation of communications and power.
This article reviews some recent promising approaches to make the above vision
closer to reality. In contrast with articles commonly published by the microwave community
and the communication/signal processing community that separately emphasize RF, circuit
and antenna solutions for WPT on one hand and communications, signal and system designs
for WPT on the other hand, this review article uniquely bridges RF, signal and system designs
in order to bring those communities closer to each other and get a better understanding of the
fundamental building blocks of an efficient WPT network architecture. We start by reviewing
the engineering requirements and design challenges of making mobile power a reality. We
then review the state-of-the-art in a wide range of areas spanning sensors and devices, RF
design for wireless power and wireless communications. We identify their limitations and make
critical observations before providing some fresh new look and promising avenues on signal
and system designs for WPT.
Engineering Requirements and Design Challenges of the Envisioned Network
The following are believed to be the engineering requirements and the main design
challenges: 1) Range: Deliver wireless power at distances of 5-100m for indoor/outdoor
charging of low-power devices; 2) Efficiency: Boost the end-to-end power transfer efficiency
(from a fraction of a percent up to a few percent); 3) Non-line of sight (NLoS): Support LoS and NLoS
to widen the practical applications of this network; 4) Mobility support: Support mobile
receivers, at least for those at pedestrian speed; 5) Ubiquitous accessibility: Support
ubiquitous power accessibility within the network coverage area; 6) Seamless integration of
wireless communication and wireless power: Interoperate wireless communication and
wireless power via a unified wireless information and power transfer (WIPT); 7) Safety and
health: Resolve the safety and health issues of RF systems and comply with the regulations;
8) Energy consumption: Limit the energy consumption of the energy-constrained RF powered
devices.
Power Requirements and Consumption of Sensors and Devices
The Integrated Circuit industry is moving from the traditional computing power
paradigm towards a power efficiency (lowest joule per operation) paradigm. This ultra-low
power (ULP) electronics has opened the door to numerous applications in sensor networks
and IoT that do not need nm technology with billions of gates. Sensor nodes commonly require
power for the sensor itself, the data processing circuitry and the wireless data link (e.g. a few
bits/s for temperature sensors to a few kbits/s for ECG or blood pressure monitoring). The first
two functions commonly require less power. This can be attributed to the fact that while CMOS
technology scaling has conventionally provided exponential benefits for the size and power
consumption of digital logic systems, analog RF components, necessary for the data link, have
not seen a similar power scaling. In [Ay:2011], a CMOS image sensor consumes only
14.25µW. In [ADMP801], low power microphones consume 17µW and an ADC digitizing the
microphone output consumes 33µW. Popular protocols for sensor networks include Zigbee
and low power Bluetooth, whose commercial off-the-shelf transmitters consume 35mW [CC2541]. WiFi is more power-hungry. Despite progress in the WiFi industry to design
chipsets for IoT applications by e.g. reducing power consumption in the standby mode to
20µW, an active WiFi transmission consumes around 600mW [Gainspan, CC3100MOD].
Nevertheless, in recent years, there has been significant enhancement with integrated ULP
System on Chip (SoC) and duty-cycled radio whose power consumption is nowadays in the
order of 10-100µW using custom protocols supporting 10-200kbps [Zhang:2013, Kim:2011,
Verma:2010, Pandey:2011]. The use of passive WiFi is also an alternative to generate 802.11b
transmission over distances of 10-30m (in line-of-sight and through walls) while only
consuming 10 and 60µW for 1 and 11 Mbps transmissions, respectively (3 to 4 orders of
magnitude lower than existing WiFi chipsets) [Kellogg:2016].
Observation: 10-100µW is enough to power modern wireless sensors and low-power
devices.
WPT RF Design
Since Tesla's attempt in 1899, all WPT experiments in 1960-2000 were targeting long-distance and high-power transmissions with applications such as Solar Powered Satellites and
wireless-powered aircraft [Brown:1984]. More recently, there has been a significant interest in
WPT and WEH for relatively low-power (e.g., from µW to a few W) delivery over moderate
distances (e.g., a few m to hundreds of m) [Falkenstein:2012, Popovic:2013], owing to the
fast-growing need to build reliable and convenient wireless power systems for remotely
charging various low- to medium-power devices, such as RFID tags, wireless sensors,
consumer electronics [Visser:2013, Popovic:2013a]. The interest in far-field wireless power
has spurred the creation of initiatives like COST IC1301 [Carvalho:2014] and a small number
of start-ups in recent years, namely Drayson Technologies, Powercast, Energous, Ossia.
Fig. 1: Block diagram of a conventional far-field WPT architecture.
Fig. 1 shows a generic wireless power delivery system, which consists of an RF
transmitter and an energy harvester made of a rectenna (antenna and rectifier) and a power
management unit (PMU). Since the quasi-totality of electronics requires a DC power source,
a rectifier is required to convert RF to DC. The recovered DC power then either supplies a low
power device directly, or is stored in a battery or a super capacitor for high power low dutycycle operations. It can also be managed by a DC-to-DC converter before being stored.
Referring to Fig 1, the end-to-end power transfer efficiency e can be expressed as
e = P_dc,st/P_dc,tx = (P_rf,tx/P_dc,tx) × (P_rf,rx/P_rf,tx) × (P_dc,rx/P_rf,rx) × (P_dc,st/P_dc,rx) = e2 × e3 × e4 × e5,   (1)
where P_dc,tx denotes the DC power fed to the transmitter, P_rf,tx the transmitted RF power, P_rf,rx the received RF power, P_dc,rx the rectifier DC output power and P_dc,st the stored DC power after the PMU.
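As a rough numerical illustration of equation (1), the sketch below chains the four stage efficiencies, using free-space (Friis) propagation as a stand-in for the RF-to-RF stage e3; the values chosen for e2, e4 and e5 are placeholders, not measured figures.

```python
import math

def friis_e3(distance_m, freq_hz, gain_tx_dbi=5.0, gain_rx_dbi=5.0):
    """RF-to-RF efficiency e3 = P_rf,rx / P_rf,tx under free-space (Friis) propagation."""
    wavelength = 3e8 / freq_hz
    return 10 ** ((gain_tx_dbi + gain_rx_dbi) / 10) * (wavelength / (4 * math.pi * distance_m)) ** 2

e2 = 0.70                       # DC-to-RF (power amplifier), assumed value
e3 = friis_e3(10.0, 915e6)      # RF-to-RF over 10 m at 915 MHz, line of sight
e4 = 0.25                       # RF-to-DC (rectifier) at the resulting input power, assumed value
e5 = 0.85                       # DC-to-DC (power management unit), assumed value

e = e2 * e3 * e4 * e5           # end-to-end efficiency, equation (1)
print(f"e3 = {e3:.2e}, end-to-end e = {e:.2e}")
```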
In WEH, the transmitter in Fig 1 is an RF communication transmitter, not controllable
and not optimized for power delivery purposes. Given the typical power density between 10^-3 and 10^-1 µW/cm² observed indoors and outdoors at distances from 25 to 100m from a GSM900 base station, WEH is unlikely to be sufficient for powering devices a few cm² in size
requiring 10-100µW [Visser:2013]. In WPT, the power transmitter of Fig 1 can be fully
optimized. Therefore, WPT offers more control of the design and room for enhancement of e.
We briefly review the techniques used to enhance e2 , e3 , e4 and e5 .
The DC-to-RF conversion efficiency e2 maximization can leverage a rich literature on
power amplifier (PA) design and rely on transmit signals with constrained Peak-to-Average
Power Ratio (PAPR).
The RF-to-RF conversion efficiency e3 is a bottleneck and requires highly directional
transmission. Common approaches in the RF literature rely on real-time reconfiguration of
time-modulated arrays based on localization of the power receivers [Masotti:2016], phased arrays [Takahashi:2011] or retrodirective arrays [Miyamoto:2002].
The RF-to-DC conversion efficiency e4 maximization relies on the design of efficient
rectennas. A rectenna harvests electromagnetic energy, then rectifies and filters it using a low
pass filter. Its analysis is challenging due to its nonlinearity, which in turn renders its
implementation hard and subject to several losses due to threshold and reverse-breakdown
voltage, device parasitics, impedance matching, and harmonic generation
[Yoo:1992,Strassner:2013,Valenta:2014]. In WPT, the rectenna can be optimized for the
specific operating frequencies and input power level. It is more challenging in WEH since the
rectenna is designed for a broad range of input power densities (from a few nW/cm2 to a few
µW/cm2) and spectrum (TV, WiFi, 2/3/4G) [Costanzo:2016]. In order to address the large
aggregate frequency spectrum of ambient RF signals, multiband [Masotti:2013,Pinuela:2013,
Niotaki:2014,Belo:2016] and broadband [Kimionis:2017,Song:2015,Sakaki:2014] rectifier
designs have been proposed. In the case of multiband designs one may maximize e4 over a
number of narrowband frequency regions, whereas in the case of broadband (ultra-wideband)
designs one may cover a much larger frequency band, however sacrificing the obtainable maximum efficiency. Various rectifier technologies exist, including the popular Schottky diodes
[Falkenstein:2012,Hagerty:2004], CMOS [Le:2008], active rectification [Roberg:2012],
spindiodes [Hemour:2014] and backward tunnel diodes [Lorenz:2015]. Assuming P_rf,tx = 1W, 5dBi Tx/Rx antenna gains and a continuous wave (CW) at 915MHz, e4 of state-of-the-art rectifiers
is 50% at 1m, 25% at 10m and about 5% at 30m [Hemour:2014]. This severely limits the range
of WPT. Moreover, with the current rectifier technologies, e4 drops from 80% at 10mW to 40%
at 100µW, 20% at 10µW and 2% at 1µW [Valenta:2014, Hemour:2014]. This is due to the
diode not being easily turned on at low input power. Enhancements for the very low power
regime (below 1 µW) rely on spindiodes [Hemour:2014] and backward tunnel diodes
[Lorenz:2015]. For typical input power between 1 µW and 1mW, low barrier Schottky diodes
remain the most competitive and popular technology [Hemour:2014,Costanzo:2016,
Valenta:2014]. e4 also decreases as the frequency increases due to parasitic losses
[Valenta:2014]. The rectifier topology also impacts e4 . A single diode is preferred at low power
(1-500µW) and multiple diodes (voltage doubler/diode bridge/charge pump) favoured above
500µW [Costanzo:2016, Boaventura:2013]. The efficiency is also dependent on the input
power level and the output load variations. One possibility to minimize sensitivity to output
load variation is to use a resistance compression network [Niotaki:2014], while topologies
using multiple rectifying devices each one optimized for a different range of input power levels
can enlarge the operating range versus input power variations and avoid, within the power
range of interest, the saturation effect (that creates a sharp decrease in e4 ) induced by the
diode breakdown [Sun:2013]. This can be achieved using e.g. a single-diode rectifier at low
input power and multiple diodes rectifier at higher power.
Interestingly the rectenna design is not the only factor influencing e4 . Due to the rectifier
nonlinearity, the input waveform (power and shape) also influences e4 in the low input power
regime (1µW-1mW) [Trotter:2009, Boaventura:2011, Valenta:2013, Valenta:2015, Collado:2014]. A 20dB gain (in terms of P_dc,rx) of a multisine over a CW excitation at an
average input power of -15dBm was shown in [Valenta:2013]. It is to be noted though that the
output filter is also important in relation to the tone separation in order to boost the performance
of multisine waveform [Boaventura:2014,Pan:2015]. High PAPR signals were also shown
beneficial in [Collado:2014]. It was nevertheless argued in [Blanco:2016] that the
instantaneous power variance is more accurate than PAPR to characterise the effect of
modulation on the rectifier efficiency. Suitable signals and waveforms therefore exploit the
nonlinearity to boost e4 at low input powers and extend WPT range [Boaventura:2013].
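The waveform effect can be illustrated with a toy fourth-order (Taylor-type) proxy of the rectified DC, sketched below: an N-tone in-phase multisine and a CW of the same average power are compared through k2·E[y²] + k4·E[y⁴]. The coefficients, tone count and (scaled-down) frequencies are illustrative and not fitted to any real rectifier.

```python
import numpy as np

def dc_proxy(y, k2=0.0034, k4=0.3829):
    """Truncated fourth-order proxy of harvested DC: k2*E[y^2] + k4*E[y^4] (illustrative coefficients)."""
    return k2 * np.mean(y ** 2) + k4 * np.mean(y ** 4)

fs, f0, df, n_tones = 10e6, 1e6, 1e3, 8            # frequencies scaled down for simulation
t = np.arange(0, 2e-3, 1 / fs)

cw = np.sqrt(2.0) * np.cos(2 * np.pi * f0 * t)     # single tone, average power 1
amp = np.sqrt(2.0 / n_tones)                       # in-phase multisine with the same average power
multisine = sum(amp * np.cos(2 * np.pi * (f0 + n * df) * t) for n in range(n_tones))

print("CW proxy       :", dc_proxy(cw))
print("multisine proxy:", dc_proxy(multisine))     # larger, thanks to the fourth-order term
```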
Modulation also has an impact on e4 . In [Vera:2010], QPSK modulation was shown to be
beneficial to e4 compared to a CW in the low power regime -20dBm to 0dBm.
[Fukuda:2014,Sakaki:2014] reported somewhat contradicting behaviors in the higher input
power regime of 0-20dBm. In [Fukuda:2014], PSK and QAM modulations were shown
beneficial to e4 compared to a CW, while they were shown detrimental in [Sakaki:2014].
Finally, [Bolos:2016] argues that one may or may not get an advantage from using multisines
or other modulated signals depending on the load and input power.
The DC-to-DC conversion efficiency e5 is enhanced by dynamically tracking the rectifier
optimum load, e.g. dc-to-dc switching converters dynamically track the maximum power point
(MPP) condition [Dolgov:2010,Costanzo:2012]. Due to the variable load on the rectenna, the
changes in diode impedance with power level and the rectifier nonlinearity, the input
impedance of the rectifier becomes highly variable, which renders the matching hard, not to
mention a joint optimization of the matching and load for multisine signals [Bolos:2016].
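The load-tracking idea behind e5 can be illustrated with a toy perturb-and-observe loop, sketched below; the parabolic rectenna power curve is a stand-in for a measured characteristic and the optimum load and step size are arbitrary.

```python
def rectenna_dc_power_uw(load_ohm, r_opt=1400.0, p_max_uw=50.0):
    """Toy rectenna DC output vs. load: peaks at r_opt (stand-in, not measured data)."""
    mismatch = (load_ohm - r_opt) / r_opt
    return max(p_max_uw * (1.0 - mismatch ** 2), 0.0)

def perturb_and_observe(load_ohm=500.0, step_ohm=50.0, iterations=60):
    """Classic P&O maximum power point tracking of the rectenna load."""
    p_prev = rectenna_dc_power_uw(load_ohm)
    direction = 1.0
    for _ in range(iterations):
        load_ohm += direction * step_ohm
        p_new = rectenna_dc_power_uw(load_ohm)
        if p_new < p_prev:            # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p_new
    return load_ohm, p_prev

print(perturb_and_observe())          # settles near r_opt and oscillates around the MPP
```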
Interestingly, the maximization of e is not achieved by maximizing e2, e3, e4, e5 independently of each other and simply concatenating the above techniques. This is because e2, e3, e4, e5 are coupled with each other due to the rectifier nonlinearity, especially in the input power range 1µW-1mW. Indeed, e4 is a function of the input signal shape and power
to the rectifier and therefore a function of 1) the transmit signal (beamformer, waveform, power
allocation) and 2) the wireless channel state. Similarly, e3 depends on the transmit signal and
the channel state, and so does e2, since it is a function of the transmit signal PAPR.
Some recent approaches optimize the system using numerical software tools based on
a combination of full-wave analysis and nonlinear harmonic balance techniques in order to
account for nonlinearities and electromagnetic couplings [Masotti:2016,Costanzo:2014]. This
approach would provide very high accuracy but has the drawback of holding only for an offline
optimization of the system, not for an adaptive WPT whose transmit signal is adapted every
few ms as a function of the channel state, not to mention for an entire WPT network with
multiple transmitters and receivers.
Observations:
First, the majority of the technical effort in the wireless power literature has been devoted to the design of the energy harvester, but much less emphasis has been put on signal design
for WPT.
Second, the emphasis has remained on point-to-point (single user) transmission.
Third, research has recognized the importance of non-linearity of the rectenna in WPT system
design but has focused to a great extent on decoupling the WPT design by optimizing the
transmitter and the energy harvester independently from each other.
Fourth, multipath and fast fading, critical in NLoS, have been ignored despite playing a key
role in wireless transmissions and having a huge impact on the signal shape and power at the
rectenna input. Recall indeed that multipath has for consequences that transmit and received
(at the input of the rectenna) waveforms are completely different.
Fifth, WPT design has remained very much centered around an open-loop approach with
waveform being static and beamforming relying on tags localization, not on the channel state.
Sixth, the design of the transmit signals is heuristic (with conclusions exclusively based on
observations from measurements using various predefined and standard waveforms) and
there exists no systematic approach or performance bound to design and evaluate them.
Waveform and beamformer have been studied independently, despite being part of the same
transmit signal.
To tackle the listed challenges, we need:
First, a closed-loop and adaptive WPT architecture with a reverse communication link from
the receiver to the transmitter that is used to support various functions such as channel
feedback/training, energy feedback, charging control, etc. The transmitter should be able to
flexibly adjust the transmission strategy jointly optimized across space and frequency (through
beamforming and waveform) in accordance with the channel status (commonly called Channel
State Information - CSI), and thus, renders state-of-the-art Multi-Input Multi-Output (MIMO)
processing an indispensable part of WPT. An example of a closed-loop and adaptive WPT
architecture is displayed in Fig. 2.
Fig. 2: Block diagram of a closed-loop and adaptive WPT architecture.
Second, a systematic approach to design and optimize, as a function of the channel, the
signal at the transmitter (encompassing beamforming and waveform) so as to maximize e3 ×e4
subject to transmit power and PAPR constraints. This requires capturing the rectifier
nonlinearity as part of the signal design and optimization. Such a systematic design
methodology will lead to the implementation of efficient strategies as part of the “transmission
optimization” module of Fig. 2.
Third, a link and system design approach that takes wireless power from a rectenna paradigm
to a network paradigm with multiple transmitters and/or receivers. Instances of such network
architectures may be a deployment of co-located transmit antennas delivering power to
multiple receivers or the dense and distributed deployment of well-coordinated
antennas/transmitters, as illustrated in Fig. 3.
Fig. 3: Illustration of a WPT network with co-located/distributed transmit antennas and
multiple receivers (P T/R: Power Transmitter/Receiver).
Leveraging Ideas from Wireless Communications
The fundamental limits of a communication network design lie in information and
communication theories that derive the capacity of wireless channels (point-to-point,
broadcast, multiple access, interference channel with single and multiple antennas) and
identify transmission and reception strategies to achieve it, most commonly under the
assumption of a linear communication channel with additive white Gaussian noise (AWGN)
[68-70]. From the 1970s until the early 2000s, the research emphasis was on link optimization, i.e.
maximizing the point-to-point spectral efficiency (bits/s/Hz) with advances in modulation and
waveforms, coding, MIMO, Channel State Information (CSI) feedback and link adaptation,
communication over (multipath) fading channels. CSI feedback makes it possible to dynamically adapt the transmission strategies as a function of the channel state. It leads to a drastic increase in rate and a reduction in receiver complexity. The emphasis in 4G design shifted towards
a system optimization, with a more interference-centric system design. MIMO evolved into a
multi-link/user/cell MIMO. Multiple users are scheduled in the same time-frequency resource
onto (ideally) non-interfering spatial beams. This led to significant features such as multi-user
MIMO, multi-user fairness and scheduling and multi-point cooperation. The availability of
accurate CSI at the Transmitter (CSIT) is also crucial for multi-user multi-antenna wireless
communication networks, for beamforming and interference management purposes. Some
promising technologies consist in densifying the network by adding more antennas either in a
distributed or in a co-localized manner. The distributed deployment leads to dense network
(with a high capacity backbone) requiring interference mitigation techniques, commonly
denoted as Coordinated Multi-Point transmission and reception (CoMP) in 3GPP, and
classified into joint processing (or Network MIMO) and coordinated scheduling, beamforming
and power control. Co-localized deployment leads to Massive/Large-Scale MIMO where a
base station designs pencil beams (with large beamforming gain) serving its own users, using
per-cell design rules, while simultaneously avoiding inter-cell interference. The reader is
invited to consult [Clerckx:2013] for fundamentals and designs of state-of-the-art MIMO
wireless communication networks.
Observations:
First, wireless power and communication systems share the same medium and techniques
inspired by communications, such as MIMO, closed-loop operation, CSI acquisition and
transmitter coordination are expected to be useful to WPT.
Second, existing techniques developed for wireless communications cannot be directly
applied to wireless power, due to their distinct design objectives (rate vs energy), practical
limitations (hardware and power constraints), receiver sensitivities (e.g. -30dBm for rectenna
vs -60dBm for information receivers), interference (beneficial in terms of energy harvesting vs
detrimental in communications) and models (linear wireless communication channel vs
nonlinear wireless power channel due to the rectifier).
WPT Signal and System Design
Aside from the traditional WPT RF design, a new and complementary line of research on
communications and signal design for WPT has emerged recently in the communication
literature [Zeng:2017] and is briefly reviewed in the sequel. This includes among others the
design of efficient transmit signals (including waveform, beamforming and power allocation),
CSI acquisition strategies, multiuser transmission strategies, integration with communications
and system prototyping. Importantly, the nonlinearity of the rectifier has to be captured as part
of the signal and system design and optimization as it induces coupling among the various
efficiencies.
Let us first consider a point-to-point scenario with a single transmitter and receiver. The
first systematic approach towards signal design in adaptive closed-loop WPT was proposed in
[Clerckx:2015,Clerckx:2016], where the transmit signal, accounting jointly for multisine
waveform, beamforming and power allocation, is optimized as a function of the CSI to maximize
e3 × e4 subject to optional transmit PAPR constraints. Uniquely, such a signal design resolves
some limitations of the WPT literature by optimally exploiting a beamforming gain, a frequency
diversity gain and the rectifier nonlinearity. The rectifier nonlinearity was modelled using a Taylor
expansion of the diode characteristic, which is a popular model in the RF literature
[Boaventura:2011,Boaventura:2013]. The phases of the optimized waveform can be computed
in closed-form while the magnitudes result from a non-convex optimization problem that can be
solved using convex optimization techniques, so-called Reversed Geometric Program (GP).
Multiple observations were made in [Clerckx:2015,Clerckx:2016]. First, it was observed that the
derived adaptive and optimized signals designed accounting for the nonlinearity are more
efficient than non-adaptive and non-optimized multisine signals (as used in [Trotter:2009,
Boaventura:2011, Valenta:2013, Valenta:2015]). Second, the rectifier nonlinearity was shown
essential to design efficient wireless power signals and ignoring it leads to inefficient signal
design in the low power regime. Third, the optimized waveform design favours a power allocation
over multiple frequencies and those with stronger frequency-domain channel gains are allocated
more power. Fourth, multipath and frequency-selective channels were shown to have significant
impact on DC output power and waveform design. Though multipath is detrimental to performance with non-adaptive waveforms, it is beneficial with channel-adaptive waveforms and leads to a
frequency diversity gain.
As an illustration, Fig. 4 (top) displays the magnitude of the frequency response of a given
realization of the wireless channel over a 10MHz-bandwidth. We consider a multisine waveform
with 16 sinewaves uniformly spread within the 10MHz. Assuming this channel is known at the transmitter, the magnitudes of the optimized waveform on the 16 frequencies can be
computed and are displayed on Fig. 4 (bottom). Interestingly, in contrast with the waveforms
commonly used in the RF literature [Trotter:2009, Boaventura:2011, Valenta:2013,
Valenta:2015, Collado:2014] that are non-adaptive to the channel state, the optimized adaptive
waveform has a tendency to allocate more power to frequencies exhibiting larger channel gains.
Fig. 4: Frequency response of the wireless channel and WPT waveform magnitudes (N = 16)
for 10 MHz bandwidth [Clerckx:2016].
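The behaviour in Fig. 4 can be mimicked with a few lines of numpy, sketched below: a random frequency-selective channel is drawn over N tones and a simple channel-proportional amplitude allocation (a crude stand-in for the reversed-GP optimization, not the actual algorithm) is compared with the uniform in-phase allocation through the same kind of fourth-order DC proxy; both allocations are given the benefit of perfect phase alignment at the rectenna.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tones, f0, df, fs = 16, 1e6, 1e3, 10e6          # frequencies scaled down for simulation
t = np.arange(0, 2e-3, 1 / fs)
h = np.abs(rng.standard_normal(n_tones) + 1j * rng.standard_normal(n_tones)) / np.sqrt(2)

def received_dc_proxy(amps, k2=0.0034, k4=0.3829):
    """Fourth-order DC proxy of the signal at the rectenna input (tones assumed phase-aligned)."""
    y = sum(a * g * np.cos(2 * np.pi * (f0 + n * df) * t) for n, (a, g) in enumerate(zip(amps, h)))
    return k2 * np.mean(y ** 2) + k4 * np.mean(y ** 4)

uniform = np.full(n_tones, np.sqrt(2.0 / n_tones))      # same transmit power budget for both
adaptive = np.sqrt(2.0) * h / np.linalg.norm(h)         # amplitudes proportional to |h_n|

print("uniform allocation :", received_dc_proxy(uniform))
print("adaptive allocation:", received_dc_proxy(adaptive))   # typically larger on selective channels
```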
The performance benefits of those optimized channel-adaptive multisine waveforms
over the non-adaptive design (in-phase multisine with uniform power allocation) of
[Trotter:2009, Boaventura:2011, Valenta:2013, Valenta:2015] have been validated using ADS
and PSpice simulations with a single series rectifier in a WiFi-like environment at 5.18GHz for
an average input power of about -12dBm in [Clerckx:2015] and -20dBm in [Clerckx:2016,
Clerckx:2017]. As illustrated in Fig. 5, for a single transmit antenna and a single series rectifier
subject to an average input power of -12dBm and multipath fading, the gains in terms of
harvested DC power are very significant with over 100% gains for 4 sinewaves and about
200% gain for 8 sinewaves over the non-adaptive design. Significant performance gains have
also been validated in [Clerckx:2016] at -20dBm average input power for various bandwidths
and in the presence of multiple transmit antennas where waveform and beamforming are
jointly designed. Moreover, it was interestingly shown in [Clerckx:2017] that the systematic
signal design approach of [Clerckx:2016] is actually applicable to, and provides gains (100%-200%) in, a wide range of rectifier topologies, e.g. single series, voltage doubler, diode bridge.
Details on circuit design and simulation assumptions can be found in [Clerckx:2016,
Clerckx:2017].
Fig. 5: DC power vs. number of sinewaves N for adaptive and non-adaptive waveforms.
Importantly, systematic and optimized signal designs of [Clerckx:2016] also show that
contrary to what is claimed in [Valenta:2015,Collado:2014], maximizing PAPR is not always
the right approach to design efficient wireless power signals. High PAPR is a valid metric in
WPT with multisine waveforms if the channel is frequency flat, not in the presence of multipath
and frequency selectivity. This can be inferred from Fig 5 where the non-adaptive multisine
waveform leads to a much lower DC power despite exhibiting a significantly higher transmit
PAPR compared to the adaptive waveform. Recall indeed that the adaptive waveform will
unlikely allocate power uniformly across all sinewaves, as it emphasizes the ones corresponding to strong frequency-domain channel gains. This leads to waveforms whose PAPR is
lower than the non-adaptive in-phase multisine waveform with uniform power allocation.
Results in [Clerckx:2016] also highlighted the potential of a large-scale multisine multi-antenna closed-loop WPT architecture. In [Huang:2016,Huang:2017], such a promising architecture was designed and shown to be an essential technique in enhancing e and
increasing the range of WPT for low-power devices. It enables highly efficient very far-field
wireless charging by jointly optimizing transmit signals over a large number of frequency
components and transmit antennas, therefore combining the benefits of pencil beams and
waveform design to exploit the large beamforming gain of the transmit antenna array and the
non-linearity of the rectifier at long distances. The challenge lies in the large dimensionality of the problem, which calls for a reformulation of the optimization. The new design enables an orders-of-magnitude complexity reduction in signal design compared to the Reversed GP approach.
Another low-complexity adaptive waveform design approach expressed in closed-form (hence,
suitable for practical implementation) has been proposed in [Clerckx:2017] and shown to
perform close to the optimal design. Fig 6 illustrates how the rectifier output voltage decreases
with the range for several values of the number of sinewaves N and transmit antennas M in the
multisine transmit waveform. By increasing both N and M, the range is expanded thanks to the
optimized channel-adaptive multisine waveforms that jointly exploit a beamforming gain, a
frequency diversity gain and the rectifier nonlinearity.
Fig. 6: Rectifier average output voltage as a function of the Tx-Rx distance [Huang:2017].
Note that in this recent progress on signal design, despite the presence of many transmit
antennas and sinewaves, a single receive antenna and rectifier per terminal has been assumed.
It would be interesting to understand how to extend the signal design to multiple receive
antennas. This brings the problem of RF or dc combining or mixed RF-dc combining
[Popovic:2014, Gutmann:1979, Shinohara:1998].
Discussions so far assumed deterministic multisine waveforms. It is of significant interest
to understand how modulated waveforms perform in comparison to deterministic waveforms
and how modulation could be tailored specifically for WPT to boost the end-to-end power
transfer efficiency. This would also open the way to understanding how to design unified and
efficient signals for the simultaneous transmission of information and power. A modulated
waveform exhibits randomness and this randomness has an impact on the amount of harvested
DC power. Interestingly, it was shown in [Clerckx:2017b] that for single-carrier transmission,
modulation using circularly symmetric complex Gaussian (CSCG) inputs is beneficial to the
performance compared to an unmodulated continuous wave. This gain comes from the large
fourth order moment offered by CSCG inputs which is exploited by the rectifier nonlinearity. Even
further gain can be obtained using asymmetric Gaussian inputs [Varasteh:2017]. On the other
hand, for multi-carrier transmission, modulation using CSCG inputs was shown in
[Clerckx:2017b] to be less efficient than a multisine because of the independent randomness
across carriers, which leads to random fluctuations. This contrasts with the periodic behavior of
deterministic multisine waveforms that are more suitable to turn on and off the rectifier
periodically. Interestingly, it was noted in [Clerckx:2017b] that PAPR is not an appropriate metric
to assess the suitability of a general modulated waveform for WPT. Nevertheless, despite all
this recent progress on signal design for WPT, the optimum input distribution remains unknown, and so does the optimum waveform.
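The fourth-order-moment argument for single-carrier CSCG inputs can be checked with a few lines of Monte Carlo, sketched below; it only illustrates the moment comparison, not the full analysis of [Clerckx:2017b].

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 1_000_000, 1.0                     # same average power E[|x|^2] = 1 for both signals

x_cscg = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x_cw = np.sqrt(sigma2) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # constant envelope

for name, x in [("CSCG", x_cscg), ("CW  ", x_cw)]:
    print(name, "E|x|^2 =", round(np.mean(np.abs(x) ** 2), 3),
          "E|x|^4 =", round(np.mean(np.abs(x) ** 4), 3))
# CSCG gives E|x|^4 close to 2 while the constant-envelope CW gives 1: the larger
# fourth-order moment is what the rectifier nonlinearity can exploit.
```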
We now understand that a systematic waveform design (including modulation,
beamforming, power allocation) is a key technique to jointly exploit a beamforming gain, the
channel frequency-selectivity and the rectifier nonlinearity, so as to enhance the end-to-end
power transfer efficiency and the range of WPT. One challenge is that those waveforms have
been designed assuming perfect CSI at the transmitter. In practice, this is not the case and the
transmitter should find ways to acquire the CSI. Various strategies exist, including forward-link
training with CSI feedback, reverse-link training via channel reciprocity, power probing with
limited feedback [Zeng:2017]. The first two are reminiscent of strategies used in modern
communication systems [Clerckx:2013]. The last one is more promising and tailored to WPT as
it is implementable with very low communication and signal processing requirements at the
terminal. It relies on harvested DC power measurement and on a limited number of feedback
bits for waveform selection and refinement [Huang:2017b]. In the waveform selection strategy,
the transmitter transmits over multiple timeslots, each time with a different waveform precoder from a codebook, and the receiver reports the index of the precoder in the codebook that leads
to the largest harvested energy. In the waveform refinement strategy, the transmitter sequentially
transmits two waveforms in each stage, and the receiver reports one feedback bit indicating an
increase/decrease in the harvested energy during this stage. Based on multiple one-bit
feedback, the transmitter successively refines waveform precoders in a tree-structured
codebook over multiple stages.
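A minimal sketch of the waveform selection strategy is given below; the channel, the random codebook and the harvested-DC measurement model are all illustrative stand-ins, and in practice only the selected index would be fed back to the transmitter.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tones, n_codewords = 8, 16

# Illustrative frequency-domain channel and a random unit-power precoder codebook
h = (rng.standard_normal(n_tones) + 1j * rng.standard_normal(n_tones)) / np.sqrt(2)
codebook = rng.standard_normal((n_codewords, n_tones)) + 1j * rng.standard_normal((n_codewords, n_tones))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def measured_dc(precoder, channel, k2=1.0, k4=5.0):
    """Toy harvested-DC measurement made at the receiver for one trial waveform."""
    r = np.abs(precoder * channel)                         # received tone amplitudes
    return k2 * np.sum(r ** 2) + k4 * np.sum(r) ** 4

# The transmitter tries each codeword in turn; the receiver feeds back the best index
measurements = [measured_dc(w, h) for w in codebook]
best_index = int(np.argmax(measurements))
print("selected codeword:", best_index, "DC metric:", round(measurements[best_index], 3))
```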
Wireless power networks are however not limited to a single transmitter and receiver. Let
us now consider the presence of a single transmitter and multiple users/receivers, with each
receiver equipped with one rectenna. In this multi-user deployment, the energy harvested by a
given rectenna in general depends on the energy harvested by the other rectennas. Indeed, a
given waveform may be suitable for a given rectenna but found inefficient for another rectenna.
Hence, there exists a trade-off between the energy harvested by the different rectennas. The
energy region formulates this trade-off by expressing the set of all harvested-energy levels that are simultaneously achievable at the rectennas. It is obtained mathematically by maximizing a weighted sum of harvested energies, where by changing the weights we operate on a different point of the energy region boundary. Strategies to design WPT waveforms in this multi-user/rectenna deployment were
discussed in [Clerckx:2016, Huang:2017]. Fig 7 illustrates such an energy region for a two-user
scenario with a multisine waveform spanning 20 transmit antennas and 10 frequencies. The
key message here is that by optimizing the waveform to jointly deliver power to the two users
simultaneously, we get an energy region (‘weighted sum’) that is larger than the one achieved
by doing a timesharing approach, i.e. TDMA, where the transmit waveform is optimized for a
single user at a time and each user is scheduled to receive energy during a fraction of the time.
Fig. 7: 2-user energy region with M = 20 and N = 10 [Huang:2017].
Moving towards an entire network made of many transmitters and receivers, a network
architecture needs to be defined [Zeng:2017]. This may consist in having all transmitters
cooperating to jointly design the transmit signals to multiple receivers or having a local
coordination among the transmitters such that a given receiver is served by a subset of the
transmitters or the simplest scenario where each receiver is served by a single transmitter. This
leads to different resource allocation and charging control strategies (centralized vs distributed)
and requirements in terms of CSI sharing and acquisition at the different transmitters. Results
in [Zeng:2017] show that distributing antennas in a coverage area (as in Fig 3) and enabling
cooperation among them distributes energy more evenly in space and therefore potentially
enhances the ubiquitous accessibility of wireless power, compared to a co-located deployment.
It also avoids creating strong energy beams in the direction of the users, which is desirable from
a health and safety perspective.
Demonstrating the feasibility of the aforementioned signal and system designs through
prototyping and experimentation remains a largely open challenge. It requires the
implementation of a closed-loop WPT architecture with a real-time over-the-air transmission
based on a frame structure switching between a channel acquisition phase and wireless power
transfer phase. The channel acquisition needs to be done at the millisecond level (similarly to
CSI acquisition in communication). Different blocks need to be built, namely channel estimation,
channel-adaptive waveform design and rectenna. The first prototype of a closed-loop WPT
architecture based on channel-adaptive waveform optimization and dynamic channel
acquisition, as illustrated in Fig 8, was recently reported in [Kim:2017] with further enhancements
in [Kim:2017b]. Importantly, all experimental results validate the theory developed in
[Clerckx:2016,Clerckx:2017] and fully confirm the following observations: 1) diode nonlinearity
is beneficial to WPT performance and is to be exploited in systematic waveform design, 2) the
wireless propagation channel has a significant impact on signal design and system
performance, 3) CSI acquisition and channel-adaptive waveforms are essential to boost the
performance in frequency-selective channels (as in NLoS scenarios), 4) larger bandwidths
benefit from a channel frequency diversity gain, 5) PAPR is not an accurate metric to assess
and design waveforms for WPT in general frequency-selective channels. The performance gain
of channel-adaptive multisine waveforms versus non-adaptive multisine waveform in a NLoS
deployment with a single antenna at the transmitter and receiver is illustrated in Fig 9. We note
the significant boost of the average harvested DC power at the rectenna output by 105% over
an open-loop WPT architecture with non-adaptive multisine waveform (with the same number
of sinewaves) and by 170% over a continuous wave.
Fig. 8: Prototype architecture with 3 key modules: signal optimization, channel acquisition and
energy harvester [Kim:2017,Kim:2017b].
Fig. 9: Harvested DC power with the architecture of Fig 8 in an indoor NLoS deployment as a
function of N uniformly spread within a 10MHz bandwidth [Kim:2017b].
Ultimately wireless power and wireless communications will have to be integrated. This
calls for a unified Wireless Information and Power Transfer (WIPT) paradigm. A major challenge
is to characterize the fundamental tradeoff between conveying information and energy wirelessly
[Varshney:2008, Grover:2010, Zhang:2013] and to identify corresponding transmission
strategies. Leveraging the aforementioned wireless power signal designs, it has been shown
that the rectifier nonlinearity has profound impact on the design of WIPT
[Clerckx:2016b,Clerckx:2017b, Varasteh:2017]. In contrast with the classical capacity achieving
CSCG input distribution, the rectifier nonlinearity leads to input distributions that are asymmetric Gaussian in single-carrier transmission over frequency-flat channels [Varasteh:2017] and non-zero mean Gaussian in multi-carrier transmissions [Clerckx:2016b,Clerckx:2017b].
Nevertheless, the optimal input distribution and transmit signal strategy for WIPT remain
unknown. Those refreshing results are in sharp contrast with earlier results of [Varshney:2008,
Grover:2010, Zhang:2013] that ignore the rectifier nonlinearity and therefore rely on the
conventional capacity-achieving CSCG input distribution.
Observations:
First, the above results show the huge potential in a systematic signal and system design and
optimization approach towards efficient WPT and WIPT that accounts for the unique
characteristic of wireless power, namely the non-linearity.
Second, the nonlinearity radically changes the design of WPT and WIPT: it 1) leads to a WIPT
design different from that of conventional wireless communication (whose channel is assumed
linear), 2) favours a different input distribution, signal design, transceiver architecture and use
of the RF spectrum, 3) is beneficial to increase the rectifier output DC power and enlarge the
rate-energy region.
Third, an adaptive signal design approach provides a different paradigm compared to the
traditional WPT design. It leads to an architecture where the rectenna is fixed as much as
possible (e.g. with a fixed load) but the transmit signal is adaptive in contrast with the approach
in the RF literature where the waveform is fixed and the rectenna/PMU is adaptive (e.g.
dynamic load control). Since the wireless channel changes quickly (10ms order), it can be
impractical for energy-constrained devices to dynamically compute and adjust the matching
and the load as a function of the channel. Even though both approaches are complementary,
the adaptive signal approach makes the transmitter smarter and decreases the need for
power-hungry optimization at the devices. Nevertheless, adaptation implies acquiring CSIT,
which is an important challenge to be addressed. Ultimately, it is envisioned that an entire end-to-end optimization of the system should be conducted, likely resulting in an architecture
where the transmit signals and the rectennas adapt themselves dynamically as a function of
the channel state.
Conclusions:
An integrated signal and system optimization has been introduced as the strategic approach
to realize the first generation of a mobile power network and to enable energy autonomy of
pervasive devices, such as smart objects, sensors and embedded systems in a wide range of
operating conditions.
It has been shown that the nonlinear nature of this design problem, both for the transmitter
and for the receiver sides, must be accounted for in the signal- and circuit-level design. In
this way, a new architecture of the system is foreseen enabling simultaneous WPT and WIPT
while enhancing the power transfer efficiency at ultra-low power levels. Techniques for
dynamic tracking of the channel changes need to be exploited to adaptively modify the
transmitted energy, both in terms of its waveform shape and of its intensity, with the twofold
advantage of reducing the complexity of the rectenna and of the PMU design while keeping
the rectenna itself in its own optimum operating conditions.
References
[1]
[Hemour:2014] S. Hemour and K. Wu, “Radio-Frequency Rectifier for Electromagnetic Energy Harvesting: Development
Path and Future Outlook” Proceedings of the IEEE, Vol. 102, No. 11, November 2014.
[2]
[Ay:2011] S.U. Ay, “A CMOS Energy Harvesting and Imaging (EHI) Active Pixel Sensor (APS) Imager for Retinal
EXTREMES OF THRESHOLD-DEPENDENT GAUSSIAN PROCESSES
LONG BAI, KRZYSZTOF DȨBICKI, ENKELEJD HASHORVA, AND LANPENG JI
arXiv:1701.05387v1 [math.PR] 19 Jan 2017
Abstract: In this contribution we are concerned with the asymptotic behaviour, as $u\to\infty$, of $\mathbb{P}\{\sup_{t\in[0,T]} X_u(t) > u\}$, where $X_u(t)$, $t\in[0,T]$, $u>0$ is a family of centered Gaussian processes with continuous trajectories. A key application of our findings concerns $\mathbb{P}\{\sup_{t\in[0,T]} (X(t)+g(t)) > u\}$, as $u\to\infty$, for $X$ a centered Gaussian process and $g$ some measurable trend function. Further applications include the approximation of both the ruin time and the ruin probability of the Brownian motion risk model with constant force of interest.
Key Words: Extremes; Gaussian processes; fractional Brownian motion; ruin probability; ruin time.
AMS Classification: Primary 60G15; secondary 60G70
1. Introduction
Let X(t), t ≥ 0 be a centered Gaussian process with continuous trajectories. An important problem in applied and
theoretical probability is the determination of the asymptotic behavior of
\[
(1)\qquad p(u) = \mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t) + g(t)\big) > u \Big\}, \qquad u \to \infty,
\]
for some T > 0 and g(t), t ∈ [0, T ] a bounded measurable function. For instance, if g(t) = −ct, then in the context of
risk theory p(u) has interpretation as the ruin probability over the finite-time horizon [0, T ]. Dually, in the context of
queueing theory, p(u) is related to the buffer overload problem; see e.g., [1–5].
For the special case that g(t) = 0, t ∈ [0, T ] the exact asymptotics of (1) is well-known for both locally stationary
and general non-stationary Gaussian processes, see e.g., [6–18]. Commonly, for X a centered non-stationary Gaussian
process it is assumed that the standard deviation function σ is such that t0 = argmaxt∈[0,T ] σ(t) is unique and
σ(t0 ) = 1. Additionally, if the correlation function r and the standard deviation function σ satisfy (hereafter ∼ means
asymptotic equivalence)
\[
(2)\qquad 1 - r(s,t) \sim a\,|t-s|^{\alpha}, \qquad 1 - \sigma(t_0+t) \sim b\,|t|^{\beta}, \qquad s,t \to t_0,
\]
for some $a, b, \beta$ positive and $\alpha \in (0,2]$, then we have (see [10][Theorem D.3])
\[
(3)\qquad p(u) \sim \mathcal{C}_0\, u^{\left(\frac{2}{\alpha}-\frac{2}{\beta}\right)_+}\, \mathbb{P}\{X(t_0) > u\}, \qquad u \to \infty,
\]
where $(x)_+ = \max(0,x)$ and
\[
\mathcal{C}_0 =
\begin{cases}
a^{1/\alpha} b^{-1/\beta}\, \Gamma(1/\beta + 1)\, \mathcal{H}_{\alpha}, & \text{if } \alpha < \beta,\\
\mathcal{P}_{\alpha}^{b/a}, & \text{if } \alpha = \beta,\\
1, & \text{if } \alpha > \beta.
\end{cases}
\]
Here $\Gamma(\cdot)$ is the gamma function, and
\[
\mathcal{H}_{\alpha} = \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big\{ \sup_{t\in[0,T]} e^{W(t)} \Big\},
\qquad
\mathcal{P}_{\alpha}^{b/a} = \mathbb{E}\Big\{ \sup_{t\in[0,\infty)} e^{W(t) - \frac{b}{a}|t|^{\alpha}} \Big\},
\qquad \text{with } W(t) = \sqrt{2}B_{\alpha}(t) - |t|^{\alpha},
\]
are the Pickands and Piterbarg constants, respectively, where Bα is a standard fractional Brownian motion (fBm)
with self-similarity index α/2 ∈ (0, 1], see [19–25] for properties of both constants.
The more general case with non-zero g has also been considered in the literature; see, e.g., [1, 26–30]. However, most
of the aforementioned contributions treat only restrictive trend functions g. For instance, in [26][Theorem 3] a Hölder-type condition for g is assumed, which excludes important cases of g that appear in applications. The restrictions are
Date: January 20, 2017.
often so severe that simple cases such as the Brownian bridge with drift considered in Example 3.11 below cannot be
covered.
A key difficulty when dealing with p(u) is that X + g is not a centered Gaussian process. It is however possible to get
rid of the trend function g since for any bounded function g and all u large (1) can be re-written as
\[
p_T(u) = \mathbb{P}\Big\{ \sup_{t\in[0,T]} X_u(t) > u \Big\}, \qquad X_u(t) = \frac{X(t)}{1 - g(t)/u}, \quad t \in [0,T].
\]
Here Xu is centered, however it depends on the threshold u, which complicates the analysis.
Extremes of threshold-dependent Gaussian processes $X_u(t)$, $t \in \mathbb{R}$ have already been dealt with in several contributions,
see e.g., [2, 3, 30–32]. Our principal result in Theorem 2.4 derives the asymptotics of pT (u) for quite general families
of centered Gaussian processes Xu under tractable assumptions on the variance and correlation functions of Xu . To
this end, in Theorem 2.2 we first derive the asymptotics of
\[
p_{\Delta}(u) = \mathbb{P}\Big\{ \sup_{t\in\Delta(u)} X_u(t) > u \Big\}, \qquad u \to \infty,
\]
for suitably chosen short compact intervals $\Delta(u)$.
Applications of our main results include derivation of Proposition 3.1 for a class of locally stationary Gaussian processes
with trend and that of Proposition 3.6 for a class of non-stationary Gaussian processes with trend, as well as those of
their corollaries. For instance, a direct application of Proposition 3.6 yields the asymptotics of (1) for a non-stationary
X with standard deviation function σ and correlation function r satisfying (2) with t0 = argmaxt∈[0,T ] σ(t). If further
the trend function g is continuous in a neighborhood of t0 , g(t0 ) = maxt∈[0,T ] g(t) and
\[
(4)\qquad g(t) \sim g(t_0) - c\,|t - t_0|^{\gamma}, \qquad t \to t_0,
\]
for some positive constants $c, \gamma$, then (3) holds with $\mathcal{C}_0$ specified in Proposition 3.9 and with $\beta$ and $u$ substituted by $\min(\beta, 2\gamma)$ and $u - g(t_0)$, respectively.
Complementarily, we investigate asymptotic properties of the first passage time (ruin time) of $X(t)+g(t)$ to the level $u$ on the finite-time interval $[0,T]$, given that the process has exceeded $u$ during $[0,T]$. In particular, for
\[
(5)\qquad \tau_u = \inf\{t \ge 0 : X(t) > u - g(t)\},
\]
with $\inf\{\emptyset\} = \infty$, we are interested in the approximate distribution of $\tau_u \,|\, \tau_u \le T$, as $u \to \infty$. Normal and exponential approximations for various Gaussian models have been discussed in [30, 32-35]. In this paper, we derive general results for the approximation of the conditional passage time in Propositions 3.3 and 3.10. The asymptotics of $p_{\Delta}(u)$ for short compact intervals $\Delta(u)$ displayed in Theorem 2.2 plays a key role in the derivation of these results.
Organisation of the rest of the paper: In Section 2, the tail asymptotics of the supremum of a family of centered
Gaussian processes indexed by u are given. Several applications and examples are displayed in Section 3. Finally, we
present all the proofs in Section 4 and Section 5.
2. Main Results
Let $X_u(t)$, $t \in \mathbb{R}$, $u > 0$ be a family of threshold-dependent centered Gaussian processes with continuous trajectories, variance functions $\sigma_u^2$ and correlation functions $r_u$. Our main results concern the asymptotics of slight generalizations of $p_{\Delta}(u)$ and $p_T(u)$ for families of centered Gaussian processes $X_u$ satisfying some regularity conditions on the variance and covariance functions, respectively.
Let $C_0^*(E)$ be the set of continuous real-valued functions defined on the interval $E$ such that $f(0) = 0$ and, for some $\epsilon_2 > \epsilon_1 > 0$,
\[
(6)\qquad \lim_{|t|\to\infty,\, t\in E} f(t)/|t|^{\epsilon_1} = \infty, \qquad \lim_{|t|\to\infty,\, t\in E} f(t)/|t|^{\epsilon_2} = 0,
\]
if $\sup\{x : x \in E\} = \infty$ or $\inf\{x : x \in E\} = -\infty$.
In the following Rα denotes the set of regularly varying functions at 0 with index α ∈ R, see [36–38] for details.
We shall impose the following assumptions where ∆(u) is a compact interval:
A1: For any large u, there exists a point tu ∈ R such that σu (tu ) = 1.
A2: There exists some $\lambda > 0$ such that
\[
(7)\qquad \lim_{u\to\infty} \sup_{t\in\Delta(u)} \left| \frac{\big(\frac{1}{\sigma_u(t_u+t)} - 1\big)u^2 - f(u^{\lambda} t)}{f(u^{\lambda} t) + 1} \right| = 0
\]
holds for some non-negative continuous function $f$ with $f(0) = 0$.
A3: There exists $\rho \in \mathcal{R}_{\alpha/2}$, $\alpha \in (0,2]$, such that
\[
\lim_{u\to\infty} \sup_{s,t\in\Delta(u),\, t\neq s} \left| \frac{1 - r_u(t_u+s, t_u+t)}{\rho^2(|t-s|)} - 1 \right| = 0
\]
and $\eta := \lim_{s\to 0} \rho^2(s)/s^{2/\lambda} \in [0,\infty]$, with $\lambda$ given in A2.
Remark 2.1. If $f$ satisfies $f(0) = 0$ and $f(t) > 0$ for $t \neq 0$, then
\[
\lim_{u\to\infty} \sup_{t\in\Delta(u),\, t\neq 0} \left| \frac{\frac{1}{\sigma_u(t_u+t)} - 1}{u^{-2} f(u^{\lambda} t)} - 1 \right| = 0
\]
for some $\lambda > 0$ implies that (7) is valid.
Next we introduce some further notation, starting with the Pickands-type constant defined by
\[
\mathcal{H}_{\alpha}[0,T] = \mathbb{E}\Big\{ \sup_{t\in[0,T]} e^{\sqrt{2}B_{\alpha}(t) - |t|^{\alpha}} \Big\}, \qquad T > 0,
\]
where $B_{\alpha}$ is an fBm. Further, define for $f \in C_0^*([S,T])$ with $S, T \in \mathbb{R}$, $S < T$, and a positive constant $a$
\[
\mathcal{P}^{f}_{\alpha,a}[S,T] = \mathbb{E}\Big\{ \sup_{t\in[S,T]} e^{\sqrt{2a}B_{\alpha}(t) - a|t|^{\alpha} - f(t)} \Big\},
\]
and set
\[
\mathcal{P}^{f}_{\alpha,a}[0,\infty) = \lim_{T\to\infty} \mathcal{P}^{f}_{\alpha,a}[0,T],
\qquad
\mathcal{P}^{f}_{\alpha,a}(-\infty,\infty) = \lim_{S\to-\infty,\,T\to\infty} \mathcal{P}^{f}_{\alpha,a}[S,T].
\]
The finiteness of $\mathcal{P}^{f}_{\alpha,a}[0,\infty)$ and $\mathcal{P}^{f}_{\alpha,a}(-\infty,\infty)$ is guaranteed under weak assumptions on $f$, which will be shown in the proof of Theorem 2.2; see [2, 3, 5, 7, 15, 25, 39-43] for various properties of $\mathcal{H}_{\alpha}$ and $\mathcal{P}^{f}_{\alpha,a}[0,\infty)$.
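To give a concrete feel for these constants, the following sketch (our own illustration, not part of the paper) estimates $\mathcal{H}_{\alpha}[0,T]$ by crude Monte Carlo: fBm paths are simulated on a grid via a Cholesky factorization of the covariance, and the grid supremum of $\exp(\sqrt{2}B_{\alpha}(t) - t^{\alpha})$ is averaged. The grid size, horizon and jitter are arbitrary choices, and the discretized supremum biases the estimate downward.

```python
import numpy as np

def pickands_estimate(alpha, T=5.0, n=400, n_mc=2000, seed=0):
    """Crude Monte Carlo estimate of H_alpha[0,T] = E sup_{t in [0,T]} exp(sqrt(2) B_alpha(t) - t^alpha),
    using a Cholesky factorization of the fBm covariance on a regular grid (O(n^3), so keep n small)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                 # grid on (0, T]; t = 0 is handled separately below
    s, tt = np.meshgrid(t, t, indexing="ij")
    # fBm with self-similarity index alpha/2: Cov(B(s), B(t)) = (s^alpha + t^alpha - |t-s|^alpha) / 2
    cov = 0.5 * (s**alpha + tt**alpha - np.abs(s - tt)**alpha)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    vals = np.empty(n_mc)
    for k in range(n_mc):
        b = L @ rng.standard_normal(n)           # one fBm path on the grid
        field = np.sqrt(2.0) * b - t**alpha
        vals[k] = np.exp(max(field.max(), 0.0))  # include t = 0, where the field equals 0
    return vals.mean()

if __name__ == "__main__":
    # For alpha = 1 (Brownian motion) H_1 = lim_{T -> inf} H_1[0,T] / T = 1,
    # so the ratio below should approach 1 as T grows.
    est = pickands_estimate(alpha=1.0, T=5.0)
    print(est, est / 5.0)
```

The same routine, with the term $-f(t)$ added to `field` and a shifted grid, can be used to approximate the Piterbarg-type constants $\mathcal{P}^{f}_{\alpha,a}$ under the same caveats.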
Denote by $\mathbb{I}\{\cdot\}$ the indicator function. For the regularly varying function $\rho(\cdot)$, we denote by $\overleftarrow{\rho}(\cdot)$ its asymptotic inverse (which is asymptotically unique). Throughout this paper, we set $0 \cdot \infty = 0$ and $u^{-\infty} = 0$ if $u > 0$. Let $\Psi(u) := \mathbb{P}\{N > u\}$, with $N$ a standard normal random variable.
In the next theorem we shall consider two functions $x_1(u), x_2(u)$, $u \in \mathbb{R}$, such that $x_1(\tfrac{1}{t}) \in \mathcal{R}_{\mu_1}$, $x_2(\tfrac{1}{t}) \in \mathcal{R}_{\mu_2}$ with $\mu_1, \mu_2 \ge \lambda$, and
\[
(8)\qquad \lim_{u\to\infty} u^{\lambda} x_i(u) = x_i \in [-\infty,\infty], \quad i = 1,2,
\]
with $x_1 < x_2$.
Theorem 2.2. Let $X_u(t)$, $t \in \mathbb{R}$ be a family of centered Gaussian processes with variance functions $\sigma_u^2$ and correlation functions $r_u$. If A1-A3 are satisfied with $\Delta(u) = [x_1(u), x_2(u)]$ and $f \in C_0^*([x_1, x_2])$, then for $M_u$ satisfying $M_u \sim u$, $u \to \infty$, we have
\[
(9)\qquad \mathbb{P}\Big\{ \sup_{t\in\Delta(u)} X_u(t_u+t) > M_u \Big\} \sim \mathcal{C}\,\big(u^{\lambda}\overleftarrow{\rho}(u^{-1})\big)^{-\mathbb{I}\{\eta=\infty\}}\,\Psi(M_u), \qquad u \to \infty,
\]
where
\[
(10)\qquad \mathcal{C} =
\begin{cases}
\mathcal{H}_{\alpha}\displaystyle\int_{x_1}^{x_2} e^{-f(t)}\,dt, & \text{if } \eta = \infty,\\
\mathcal{P}^{f}_{\alpha,\eta}[x_1, x_2], & \text{if } \eta \in (0,\infty),\\
\sup_{t\in[x_1,x_2]} e^{-f(t)}, & \text{if } \eta = 0,
\end{cases}
\]
and $\mathcal{P}^{f}_{\alpha,\eta}(-\infty,\infty) \in (0,\infty)$.
Remark 2.3. Let $\alpha \in (0,2]$ and $a > 0$ be given. If $f \in C_0^*([x_1,x_2])$ for $x_1, x_2, y \in \mathbb{R}$, $x_1 < x_2$, then, as shown in the Appendix, we have, with $f_y(t) := f(y+t)$, $t \in \mathbb{R}$,
\[
(11)\qquad \mathcal{P}^{f}_{\alpha,a}[x_1, x_2] = \mathcal{P}^{f_y}_{\alpha,a}[x_1 - y, x_2 - y],
\qquad
\mathcal{P}^{f}_{\alpha,a}[x_1, \infty) = \mathcal{P}^{f_y}_{\alpha,a}[x_1 - y, \infty).
\]
In particular, if $f(t) = ct$, $c > 0$, then for any $x \in \mathbb{R}$
\[
\mathcal{P}^{ct}_{\alpha,a}[x, \infty) = \mathcal{P}^{cx+ct}_{\alpha,a}[0, \infty) = e^{-cx}\,\mathcal{P}^{ct}_{\alpha,a}[0, \infty).
\]
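The shift identity above can be checked numerically in the Brownian case $\alpha = 1$, where the constants are exact expectations of path functionals. The sketch below (our own check, not from the paper) estimates both sides of $\mathcal{P}^{ct}_{1,a}[x,\infty) = e^{-cx}\mathcal{P}^{ct}_{1,a}[0,\infty)$ from the same simulated paths; the truncation horizon $T$ and grid size are assumptions, justified informally by the negative drift.

```python
import numpy as np

def piterbarg_shift_check(a=1.0, c=1.0, x=0.5, T=10.0, n=3000, n_paths=50_000, seed=4):
    """Monte Carlo check of the shift identity of Remark 2.3 for alpha = 1:
    P^{ct}_{1,a}[x, inf) should equal exp(-c x) * P^{ct}_{1,a}[0, inf),
    where P^{ct}_{1,a}[S, inf) = E sup_{t >= S} exp(sqrt(2a) B(t) - a t - c t).
    The half-line is truncated at T; the drift makes the tail contribution negligible."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    kx = int(round(x / dt))                       # grid index of the shift point x
    s0 = np.empty(n_paths)
    sx = np.empty(n_paths)
    batch, done = 2_000, 0
    while done < n_paths:
        m = min(batch, n_paths - done)
        incr = rng.standard_normal((m, n)) * np.sqrt(dt)
        B = np.concatenate([np.zeros((m, 1)), np.cumsum(incr, axis=1)], axis=1)
        field = np.sqrt(2 * a) * B - (a + c) * t  # sqrt(2a)B(t) - a|t| - ct for t >= 0
        s0[done:done + m] = np.exp(np.max(field, axis=1))
        sx[done:done + m] = np.exp(np.max(field[:, kx:], axis=1))
        done += m
    return sx.mean(), np.exp(-c * x) * s0.mean()  # the two numbers should roughly agree

if __name__ == "__main__":
    print(piterbarg_shift_check())
```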
Next, for any fixed $T \in (0,\infty)$, in order to analyse $p_T(u)$ we shall suppose that:
A1': For all large $u$, $\sigma_u(t)$ attains its maximum over $[0,T]$ at a unique point $t_u$ such that $\sigma_u(t_u) = 1$ and $\lim_{u\to\infty} t_u = t_0 \in [0,T]$.
A4: For all $u$ large enough
\[
\inf_{t\in[0,T]\setminus(t_u+\Delta(u))} \frac{1}{\sigma_u(t)} \ge 1 + \frac{p(\ln u)^q}{u^2}
\]
holds for some constants $p > 0$, $q > 1$.
A5: For some positive constants $G, \varsigma > 0$,
\[
\mathbb{E}\big(\overline{X}_u(t) - \overline{X}_u(s)\big)^2 \le G\,|t-s|^{\varsigma}
\]
holds for all $s, t \in \{x \in [0,T] : \sigma(x) \neq 0\}$, where $\overline{X}_u(t) = X_u(t)/\sigma_u(t)$.
Below we define, for $\lambda$ given in A2 and $\nu, d$ positive,
\[
(12)\qquad \Delta(u) =
\begin{cases}
[0, \delta_u], & \text{if } t_u \equiv 0,\\
[-t_u, \delta_u], & \text{if } t_u \sim d u^{-\nu} \text{ and } \nu \ge \lambda,\\
[-\delta_u, \delta_u], & \text{if } t_u \sim d u^{-\nu} \text{ or } T - t_u \sim d u^{-\nu} \text{ with } \nu < \lambda, \text{ or } t_0 \in (0,T),\\
[-\delta_u, T - t_u], & \text{if } T - t_u \sim d u^{-\nu} \text{ and } \nu \ge \lambda,\\
[-\delta_u, 0], & \text{if } t_u = T,
\end{cases}
\]
where $\delta_u = \big(\frac{(\ln u)^q}{u}\big)^{\lambda}$, with $q$ given in A4.
Theorem 2.4. Let $X_u(t)$, $t \in [0,T]$ be a family of centered Gaussian processes with variance functions $\sigma_u^2$ and correlation functions $r_u$. Assume that A1', A2-A5 are satisfied with $\Delta(u) = [c_1(u), c_2(u)]$ given in (12) and
\[
\lim_{u\to\infty} c_i(u)u^{\lambda} = x_i \in [-\infty,\infty], \quad i = 1,2, \qquad x_1 < x_2.
\]
If $f \in C_0^*([x_1,x_2])$, then for $M_u$ such that $\lim_{u\to\infty} M_u/u = 1$ we have
\[
(13)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,T]} X_u(t) > M_u \Big\} \sim \mathcal{C}\,\big(u^{\lambda}\overleftarrow{\rho}(u^{-1})\big)^{-\mathbb{I}\{\eta=\infty\}}\,\Psi(M_u), \qquad u \to \infty,
\]
where $\mathcal{C}$ is the same as in (10) if $\eta \in (0,\infty]$ and $\mathcal{C} = 1$ if $\eta = 0$.
Remark 2.5. Theorem 2.4 generalises both [26][Theorem 1] and [32][Theorem 4.1].
3. Applications
3.1. Locally stationary Gaussian processes with trend. In this section we consider the asymptotics of (1) for
X(t), t ∈ [0, T ] a centered locally stationary Gaussian process with unit variance and correlation function r satisfying
\[
(14)\qquad \lim_{h\to 0} \sup_{t\in[0,T]} \frac{1 - r(t, t+h)}{a(t)\,|h|^{\alpha}} = 1
\]
with $\alpha \in (0,2]$, $a(\cdot)$ a positive continuous function on $[0,T]$, and further
\[
(15)\qquad r(s,t) < 1, \qquad \forall\, s, t \in [0,T],\ s \neq t.
\]
We refer to e.g., [9, 10, 44-46] for results on locally stationary Gaussian processes. Extensions of this class to $\alpha(t)$-locally stationary processes are discussed in [13, 47, 48].
Regarding the continuous trend function $g$, we define $g_m = \max_{t\in[0,T]} g(t)$ and set
\[
H := \{ s \in [0,T] : g(s) = g_m \}.
\]
Set below, for any $t_0 \in [0,T]$,
\[
(16)\qquad Q_{t_0} = 1 + \mathbb{I}\{t_0 \in (0,T)\},
\qquad
w_{t_0} =
\begin{cases}
-\infty, & \text{if } t_0 \in (0,T),\\
0, & \text{if } t_0 = 0 \text{ or } t_0 = T.
\end{cases}
\]
Proposition 3.1. Suppose that (14) and (15) hold for a centered locally stationary Gaussian process $X(t)$, $t \in [0,T]$, and let $g : [0,T] \to \mathbb{R}$ be a continuous function.
i) If $H = \{t_0\}$ and (4) holds, then as $u \to \infty$
\[
(17)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t)+g(t)\big) > u \Big\} \sim C_{t_0}\, u^{\left(\frac{2}{\alpha}-\frac{1}{\gamma}\right)_+}\,\Psi(u - g_m),
\]
where (set with $a = a(t_0)$)
\[
C_{t_0} =
\begin{cases}
Q_{t_0}\, a^{1/\alpha} c^{-1/\gamma}\, \Gamma(1/\gamma + 1)\, \mathcal{H}_{\alpha}, & \text{if } \alpha < 2\gamma,\\
\mathcal{P}^{c|t|^{\gamma}}_{\alpha,a}[w_{t_0},\infty), & \text{if } \alpha = 2\gamma,\\
1, & \text{if } \alpha > 2\gamma.
\end{cases}
\]
ii) If $H = [A,B] \subset [0,T]$ with $0 \le A < B \le T$, then as $u \to \infty$
\[
\mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t)+g(t)\big) > u \Big\} \sim \mathcal{H}_{\alpha}\int_A^B (a(t))^{1/\alpha}\,dt\; u^{2/\alpha}\,\Psi(u - g_m).
\]
Remarks 3.2. i) If $H = \{t_1, \dots, t_n\}$, then, as mentioned in [10], the tail distribution of the corresponding supremum is easily obtained assuming that for each $t_i$ the assumptions of Proposition 3.1, statement i), hold, implying that
\[
\mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t)+g(t)\big) > u \Big\} \sim \sum_{j=1}^{n} C_{t_j}\, u^{\left(\frac{2}{\alpha}-\frac{1}{\gamma}\right)_+}\,\Psi(u - g_m), \qquad u \to \infty.
\]
ii) The novelty of Proposition 3.1, statement i), is that only a polynomial local behavior of the trend function $g$ around $t_0$ is assumed. In the literature so far only the case that (4) holds with $\gamma = 2$ has been considered (see [28]).
iii) By the proof of Proposition 3.1, statement i), if $g(t)$ is a measurable function which is continuous in a neighborhood of $t_0$ and smaller than $g_m - \varepsilon$, for some $\varepsilon > 0$, on the rest of $[0,T]$, then the results still hold.
We present below the approximation of the conditional passage time $\tau_u \,|\, \tau_u \le T$ with $\tau_u$ defined in (5).
Proposition 3.3. Suppose that (14) and (15) hold for a centered locally stationary Gaussian process $X(t)$, $t \in [0,T]$. Let $g : [0,T] \to \mathbb{R}$ be a continuous function, $H = \{t_0\}$ and (4) hold.
i) If $t_0 \in [0,T)$, then for any $x \in (w_{t_0},\infty)$
\[
\mathbb{P}\big\{ u^{1/\gamma}(\tau_u - t_0) \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
\dfrac{\gamma c^{1/\gamma}\int_{w_{t_0}}^{x} e^{-c|t|^{\gamma}}\,dt}{Q_{t_0}\,\Gamma(1/\gamma)}, & \text{if } \alpha < 2\gamma,\\[2mm]
\dfrac{\mathcal{P}^{c|t|^{\gamma}}_{\alpha,a}[w_{t_0},x]}{\mathcal{P}^{c|t|^{\gamma}}_{\alpha,a}[w_{t_0},\infty)}, & \text{if } \alpha = 2\gamma,\\[2mm]
\sup_{t\in[w_{t_0},x]} e^{-c|t|^{\gamma}}, & \text{if } \alpha > 2\gamma.
\end{cases}
\]
ii) If $t_0 = T$, then for any $x \in (-\infty,0)$
\[
\mathbb{P}\big\{ u^{1/\gamma}(\tau_u - t_0) \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
\dfrac{\gamma c^{1/\gamma}\int_{-x}^{\infty} e^{-c|t|^{\gamma}}\,dt}{\Gamma(1/\gamma)}, & \text{if } \alpha < 2\gamma,\\[2mm]
\dfrac{\mathcal{P}^{c|t|^{\gamma}}_{\alpha,a}[-x,\infty)}{\mathcal{P}^{c|t|^{\gamma}}_{\alpha,a}[0,\infty)}, & \text{if } \alpha = 2\gamma,\\[2mm]
e^{-c|x|^{\gamma}}, & \text{if } \alpha > 2\gamma.
\end{cases}
\]
Example 3.4. Let $X(t)$, $t \in [0,T]$ be a centered stationary Gaussian process with unit variance and correlation function $r$ that satisfies $r(t) = 1 - a|t|^{\alpha}(1+o(1))$, $t \to 0$, for some $a > 0$, $\alpha \in (0,2]$, and $r(t) < 1$ for all $t \in (0,T]$. Let $\tau_u$ be defined as in (5) with $g(t) = -ct$, $c > 0$. Then we have
\[
\mathbb{P}\Big\{ \max_{t\in[0,T]} \big(X(t) - ct\big) > u \Big\} \sim u^{\left(\frac{2}{\alpha}-1\right)_+}\,\Psi(u)
\begin{cases}
c^{-1} a^{1/\alpha}\,\mathcal{H}_{\alpha}, & \alpha \in (0,2),\\
\mathcal{P}^{ct}_{\alpha,a}[0,\infty), & \alpha = 2,
\end{cases}
\]
and for any $x$ positive
\[
\mathbb{P}\big\{ u\tau_u \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
1 - e^{-cx}, & \alpha \in (0,2),\\
\mathcal{P}^{ct}_{\alpha,a}[0,x] \big/ \mathcal{P}^{ct}_{\alpha,a}[0,\infty), & \alpha = 2.
\end{cases}
\]
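As a sanity check on Example 3.4 (our own illustration, not part of the paper), one can take the stationary Ornstein-Uhlenbeck process, whose correlation $r(t) = e^{-a|t|}$ satisfies the local condition with $\alpha = 1$, and compare a crude Monte Carlo estimate of $\mathbb{P}\{\max_{[0,T]}(X(t)-ct) > u\}$ with the asymptotic value $c^{-1}a\,\mathcal{H}_1\,u\,\Psi(u) = (a/c)u\Psi(u)$ (recall $\mathcal{H}_1 = 1$). The chosen level $u$, grid and sample size are arbitrary, and the discretized maximum makes the empirical value a slight underestimate.

```python
import numpy as np
from scipy.stats import norm

def mc_exceedance(a=1.0, c=1.0, T=1.0, u=2.5, n=2000, n_paths=200_000, seed=1):
    """Monte Carlo estimate of P{ max_{[0,T]} (X(t) - c t) > u } for a stationary
    OU process with correlation r(t) = exp(-a |t|), i.e. alpha = 1 in Example 3.4."""
    rng = np.random.default_rng(seed)
    dt = T / n
    phi = np.exp(-a * dt)          # exact one-step autocorrelation of the OU process
    sig = np.sqrt(1.0 - phi ** 2)  # innovation std keeping unit stationary variance
    x = rng.standard_normal(n_paths)            # stationary start
    running_max = x.copy()                      # max of X(t) - c t so far (t = 0)
    for k in range(1, n + 1):
        x = phi * x + sig * rng.standard_normal(n_paths)
        running_max = np.maximum(running_max, x - c * k * dt)
    return np.mean(running_max > u)

if __name__ == "__main__":
    a, c, u = 1.0, 1.0, 2.5
    print("empirical :", mc_exceedance(a=a, c=c, u=u))
    # Asymptotic approximation from Example 3.4 (alpha = 1 < 2): c^{-1} a H_1 u Psi(u), H_1 = 1.
    print("asymptotic:", (a / c) * u * norm.sf(u))
```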
Example 3.5. Let $X(t)$, $t > 0$ be a standardized fBm, i.e., $X(t) = B_{\alpha}(t)/t^{\alpha/2}$ with $B_{\alpha}$ an fBm. Let $c, T$ be positive constants. Then for any $n \in \mathbb{N}$, we have
\[
\mathbb{P}\Big\{ \max_{t\in[T,(n+1)T]} \Big(X(t) + c\sin\frac{2\pi t}{T}\Big) > u \Big\}
\sim \sum_{j=1}^{n} a_j^{1/\alpha}\,\mathcal{H}_{\alpha}\,\frac{T}{\sqrt{2c\pi}}\; u^{\frac{2}{\alpha}-\frac{1}{2}}\,\Psi(u-c),
\]
where $a_j = \frac{1}{2}\big(\frac{(4j+1)T}{4}\big)^{-\alpha}$, $j = 1,\dots,n$.
3.2. Non-stationary Gaussian processes with trend. In this section we consider the asymptotics of (1) for
$X(t)$, $t \in [0,T]$ a centered Gaussian process with non-constant variance function $\sigma^2$. Define below, whenever $\sigma(t) \neq 0$,
\[
\overline{X}(t) := \frac{X(t)}{\sigma(t)}, \qquad t \in [0,T],
\]
and set for a continuous function $g$
\[
(18)\qquad m_u(t) := \frac{\sigma(t)}{1 - g(t)/u}, \qquad t \in [0,T],\ u > 0.
\]
Proposition 3.6. Let $X$ and $g$ be as above. Assume that $t_u = \arg\max_{t\in[0,T]} m_u(t)$ is unique with $\lim_{u\to\infty} t_u = t_0$ and $\sigma(t_0) = 1$. Further, we suppose that A2-A5 are satisfied with $\sigma_u(t) = \frac{m_u(t)}{m_u(t_u)}$, $r_u(s,t) = r(s,t)$, $\overline{X}_u(t) = \overline{X}(t)$ and $\Delta(u) = [c_1(u), c_2(u)]$ given in (12). If in A2 $f \in C_0^*([x_1,x_2])$ and
\[
\lim_{u\to\infty} c_i(u)u^{\lambda} = x_i \in [-\infty,\infty], \quad i = 1,2, \qquad x_1 < x_2,
\]
then we have
\[
(19)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t)+g(t)\big) > u \Big\} \sim \mathcal{C}\,\big(u^{\lambda}\overleftarrow{\rho}(u^{-1})\big)^{-\mathbb{I}\{\eta=\infty\}}\,\Psi\Big(\frac{u - g(t_u)}{\sigma(t_u)}\Big), \qquad u \to \infty,
\]
where $\mathcal{C}$ is the same as in (10) when $\eta \in (0,\infty]$ and $\mathcal{C} = 1$ when $\eta = 0$.
Remarks 3.7. i) Proposition 3.6 extends [26][Theorem 3] and the results of [1], where (1) was analyzed for special $X$ with stationary increments and special trend functions $g$.
ii) The assumption that $\sigma(t_0) = 1$ is not essential in the proof. In fact, for the general case where $\sigma(t_0) \neq 1$ we have that (19) holds with
\[
\mathcal{C} =
\begin{cases}
\sigma_0^{-2/\alpha}\,\mathcal{H}_{\alpha}\displaystyle\int_{x_1}^{x_2} e^{-\sigma_0^{-2} f(t)}\,dt, & \text{if } \eta = \infty,\\
\mathcal{P}^{\sigma_0^{-2} f}_{\alpha,\,\sigma_0^{-2}\eta}[x_1, x_2], & \text{if } \eta \in (0,\infty),\\
1, & \text{if } \eta = 0,
\end{cases}
\qquad \sigma_0 = \sigma(t_0).
\]
Proposition 3.8. Under the notation and assumptions of Proposition 3.6, without assuming A3 and A5, if $X$ is differentiable in the mean square sense and such that
\[
r(s,t) < 1,\ s \neq t, \qquad \mathbb{E}\{X'^2(t_0)\} > \sigma'^2(t_0),
\]
and $\mathbb{E}\{X'^2(t)\} - \sigma'^2(t)$ is continuous in a neighborhood of $t_0$, then (19) holds with
\[
\alpha = 2, \qquad \rho^2(t) = \frac{1}{2}\Big(\mathbb{E}\{X'^2(t_0)\} - \sigma'^2(t_0)\Big)t^2.
\]
The next result is an extension of a classical theorem concerning the extremes of non-stationary Gaussian processes
discussed in the Introduction, see [10][Theorem D.3].
Proposition 3.9. Let $X(t)$, $t \in [0,T]$ be a centered Gaussian process with correlation function $r$ and variance function $\sigma^2$ such that $t_0 = \arg\max_{t\in[0,T]} \sigma(t)$ is unique with $\sigma(t_0) = \sigma > 0$. Suppose that $g$ is a bounded measurable function which is continuous in a neighborhood of $t_0$ and such that (4) holds. If further (2) is satisfied, then
\[
(20)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,T]} \big(X(t)+g(t)\big) > u \Big\} \sim \mathcal{C}_0\, u^{\left(\frac{2}{\alpha}-\frac{2}{\beta^*}\right)_+}\,\Psi\Big(\frac{u - g(t_0)}{\sigma}\Big),
\]
where $\beta^* = \min(\beta, 2\gamma)$,
\[
\mathcal{C}_0 =
\begin{cases}
\sigma^{-2/\alpha} a^{1/\alpha}\,\mathcal{H}_{\alpha}\displaystyle\int_{w_{t_0}}^{\infty} e^{-f(t)}\,dt, & \text{if } \alpha < \beta^*,\\
\mathcal{P}^{f}_{\alpha,\,\sigma^{-2}a}[w_{t_0},\infty), & \text{if } \alpha = \beta^*,\\
1, & \text{if } \alpha > \beta^*,
\end{cases}
\]
with $f(t) = \frac{b}{\sigma^3}|t|^{\beta}\,\mathbb{I}\{\beta=\beta^*\} + \frac{c}{\sigma^2}|t|^{\gamma}\,\mathbb{I}\{2\gamma=\beta^*\}$, and $w_{t_0}$ defined in (16).
Proposition 3.10. i) Under the conditions and notation of Proposition 3.6, for any $x \in [x_1,x_2]$ we have
\[
(21)\qquad \mathbb{P}\big\{ u^{\lambda}(\tau_u - t_u) \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
\displaystyle\int_{x_1}^{x} e^{-f(t)}\,dt \Big/ \int_{x_1}^{x_2} e^{-f(t)}\,dt, & \text{if } \eta = \infty,\\[2mm]
\mathcal{P}^{f}_{\alpha,\eta}[x_1,x] \big/ \mathcal{P}^{f}_{\alpha,\eta}[x_1,x_2], & \text{if } \eta \in (0,\infty),\\[1mm]
\sup_{t\in[x_1,x]} e^{-f(t)}, & \text{if } \eta = 0.
\end{cases}
\]
ii) Under the conditions and notation of Proposition 3.9, if $t_0 \in [0,T)$, then for $x \in (w_{t_0},\infty)$
\[
\mathbb{P}\big\{ u^{2/\beta^*}(\tau_u - t_0) \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
\displaystyle\int_{w_{t_0}}^{x} e^{-f(t)}\,dt \Big/ \int_{w_{t_0}}^{\infty} e^{-f(t)}\,dt, & \text{if } \alpha < \beta^*,\\[2mm]
\mathcal{P}^{f}_{\alpha,a}[w_{t_0},x] \big/ \mathcal{P}^{f}_{\alpha,a}[w_{t_0},\infty), & \text{if } \alpha = \beta^*,\\[1mm]
\sup_{t\in[w_{t_0},x]} e^{-f(t)}, & \text{if } \alpha > \beta^*,
\end{cases}
\]
and if $t_0 = T$, then for $x \in (-\infty,0)$
\[
\mathbb{P}\big\{ u^{2/\beta^*}(\tau_u - t_0) \le x \,\big|\, \tau_u \le T \big\} \sim
\begin{cases}
\displaystyle\int_{-x}^{\infty} e^{-f(t)}\,dt \Big/ \int_{0}^{\infty} e^{-f(t)}\,dt, & \text{if } \alpha < \beta^*,\\[2mm]
\mathcal{P}^{f}_{\alpha,a}[-x,\infty) \big/ \mathcal{P}^{f}_{\alpha,a}[0,\infty), & \text{if } \alpha = \beta^*,\\[1mm]
e^{-f(x)}, & \text{if } \alpha > \beta^*.
\end{cases}
\]
Example 3.11. Let $X(t) = B(t) - tB(1)$, $t \in [0,1]$, where $B$ is a standard Brownian motion, and suppose that $\tau_u$ is defined by (5) with $g(t) = -ct$. Then
\[
(22)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,1]} \big(X(t) - ct\big) > u \Big\} \sim e^{-2(u^2+cu)},
\qquad
\mathbb{P}\Big\{ u\Big(\tau_u - \frac{u}{c+2u}\Big) \le x \,\Big|\, \tau_u \le 1 \Big\} \sim \Phi(4x), \quad x \in (-\infty,\infty).
\]
We note that according to [49][Lemma 2.7], the result in (22) is actually exact, i.e., for any $u > 0$, $\mathbb{P}\{\sup_{t\in[0,1]}(X(t)-ct) > u\} = e^{-2(u^2+cu)}$.
Now, let $T = 1/2$. It appears that the asymptotics in this case is different, i.e.,
\[
(23)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,1/2]} \big(X(t) - ct\big) > u \Big\} \sim \Phi(c)\,e^{-2(u^2+cu)},
\]
and
\[
\mathbb{P}\Big\{ u\Big(\tau_u - \frac{u}{c+2u}\Big) \le x \,\Big|\, \tau_u \le \frac{1}{2} \Big\} \sim \frac{\Phi(4x)}{\Phi(c)}, \quad x \in (-\infty, c/4].
\]
Similarly, we have
\[
(24)\qquad \mathbb{P}\Big\{ \sup_{t\in[0,1]} \Big(X(t) + \frac{c}{2} - c\Big|t - \frac{1}{2}\Big|\Big) > u \Big\} \sim 2\Psi(c)\,e^{-2(u^2-cu)}
\]
and
\[
\mathbb{P}\Big\{ u\Big(\tau_u - \frac{1}{2}\Big) \le x \,\Big|\, \tau_u \le 1 \Big\} \sim \frac{\int_{-\infty}^{4x} e^{-\frac{(|t|+c)^2}{2}}\,dt}{2\sqrt{2\pi}\,\Psi(c)}, \quad x \in (-\infty,\infty).
\]
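A quick numerical illustration of (22) (our own check, not from the paper): simulate the Brownian bridge on a fine grid and compare the empirical exceedance frequency with the exact value $e^{-2(u^2+cu)}$. The grid resolution and level $u$ below are arbitrary assumptions; the discretized supremum biases the empirical value slightly downward.

```python
import numpy as np

def bridge_exceedance(c=0.5, u=1.0, n=2000, n_paths=100_000, seed=2):
    """Empirical P{ sup_{[0,1]} (B(t) - t B(1) - c t) > u } for a Brownian bridge,
    to be compared with the exact value exp(-2(u^2 + c u)) quoted in Example 3.11."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)
    count, batch = 0, 5_000
    for _ in range(n_paths // batch):
        incr = rng.standard_normal((batch, n)) * np.sqrt(dt)
        B = np.concatenate([np.zeros((batch, 1)), np.cumsum(incr, axis=1)], axis=1)
        bridge = B - t * B[:, [-1]]                 # B(t) - t B(1)
        count += np.sum(np.max(bridge - c * t, axis=1) > u)
    return count / n_paths

if __name__ == "__main__":
    c, u = 0.5, 1.0
    print("empirical:", bridge_exceedance(c=c, u=u))
    print("exact    :", np.exp(-2 * (u**2 + c * u)))
```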
We conclude this section with an application of Proposition 3.6 to the calculation of the ruin probability of a Brownian motion risk model with constant force of interest over an infinite-time horizon.
3.3. Ruin probability in Gaussian risk model. Consider the risk reserve process $U(t)$, with interest rate $\delta$, modeled by
\[
U(t) = u e^{\delta t} + c\int_0^t e^{\delta(t-v)}\,dv - \sigma\int_0^t e^{\delta(t-v)}\,dB(v), \qquad t \ge 0,
\]
where $c, \delta, \sigma$ are some positive constants and $B$ is a standard Brownian motion. The corresponding ruin probability over the infinite-time horizon is defined as
\[
p(u) = \mathbb{P}\Big\{ \inf_{t\in[0,\infty)} U(t) < 0 \Big\}.
\]
For this model we also define the ruin time $\tau_u = \inf\{t \ge 0 : U(t) < 0\}$. Set below
\[
h(t) = \frac{\delta}{\sigma^2}\Big(\sqrt{t + r^2} - r\Big)^2, \qquad t \in [0,\infty), \qquad r = \frac{c}{\delta}.
\]
We present next approximations of the ruin probability and the conditional ruin time τu |τu < ∞ as u → ∞.
Proposition 3.12. As $u \to \infty$,
\[
(25)\qquad p(u) \sim \mathcal{P}^{h}_{1,\delta/\sigma^2}[-r^2,\infty)\,\Psi\Big(\frac{1}{\sigma}\sqrt{2\delta u^2 + 4cu}\Big),
\]
and for $x \in (-r^2,\infty)$
\[
\mathbb{P}\Big\{ u^2\Big( e^{-2\delta\tau_u} - \Big(\frac{c}{\delta u + c}\Big)^2 \Big) \le x \,\Big|\, \tau_u < \infty \Big\}
\sim \frac{\mathcal{P}^{h}_{1,\delta/\sigma^2}[-r^2,x]}{\mathcal{P}^{h}_{1,\delta/\sigma^2}[-r^2,\infty)}.
\]
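As a rough numerical companion to Proposition 3.12 (this sketch is our own illustration and not part of the paper), the risk model can be simulated directly: ruin occurs exactly when $\sup_{t}\big(\sigma\int_0^t e^{-\delta v}dB(v) - \frac{c}{\delta}(1 - e^{-\delta t})\big)$ exceeds $u$, and the empirical frequency can be compared with the exact identity recalled in Remark 3.13 below. The time truncation, grid and parameter values are arbitrary assumptions, and discretization biases the estimate slightly downward.

```python
import numpy as np
from scipy.stats import norm

def ruin_prob_mc(u=1.0, c=1.0, delta=0.5, sigma=1.0, T=20.0, n=2000, n_paths=100_000, seed=3):
    """Monte Carlo estimate of the infinite-horizon ruin probability
    p(u) = P{ inf_{t >= 0} U(t) < 0 } for the risk model of Section 3.3.
    Ruin is equivalent to sup_t ( sigma * int_0^t e^{-delta v} dB(v) - (c/delta)(1 - e^{-delta t}) ) > u;
    the supremum is approximated on a grid truncated at T, where e^{-delta T} is negligible."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)
    # independent Gaussian increments of A(t) = int_0^t e^{-delta v} dB(v)
    var_incr = (np.exp(-2 * delta * t[:-1]) - np.exp(-2 * delta * t[1:])) / (2 * delta)
    drift = (c / delta) * (1.0 - np.exp(-delta * t))
    ruined, batch = 0, 5_000
    for _ in range(n_paths // batch):
        incr = rng.standard_normal((batch, n)) * np.sqrt(var_incr)
        A = np.concatenate([np.zeros((batch, 1)), np.cumsum(incr, axis=1)], axis=1)
        ruined += np.sum(np.max(sigma * A - drift, axis=1) > u)
    return ruined / n_paths

if __name__ == "__main__":
    u, c, delta, sigma = 1.0, 1.0, 0.5, 1.0
    r = c / delta
    exact = norm.sf(np.sqrt(2 * delta) * (u + r) / sigma) / norm.sf(np.sqrt(2) * c / (sigma * np.sqrt(delta)))
    print("empirical:", ruin_prob_mc(u, c, delta, sigma))
    print("exact    :", exact)   # exact formula (26) recalled in Remark 3.13
```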
Remark 3.13. According to [50] (see also [51]) we have
\[
(26)\qquad \mathbb{P}\Big\{ \inf_{t\in[0,\infty)} U(t) < 0 \Big\} = \Psi\Big(\frac{\sqrt{2\delta}}{\sigma}(u+r)\Big)\Big/\Psi\Big(\frac{\sqrt{2}\,c}{\sigma\sqrt{\delta}}\Big).
\]
By (25) and (11)
\[
\mathbb{P}\Big\{ \inf_{t\in[0,\infty)} U(t) < 0 \Big\}
\sim \mathbb{E}\Big\{ \sup_{t\in[-r^2,\infty)} \exp\Big( \frac{\sqrt{2\delta}}{\sigma}B(t) - \frac{\delta}{\sigma^2}\big(\sqrt{t+r^2}-r\big)^2 - \frac{\delta}{\sigma^2}|t| \Big) \Big\}\,
\Psi\Big(\frac{1}{\sigma}\sqrt{2\delta u^2+4cu}\Big)
\]
\[
\sim \mathbb{E}\Big\{ \sup_{t\in[-\frac{c^2}{\sigma^2\delta},\infty)} \exp\Big( \sqrt{2}B(t) - t + \frac{2c}{\sigma\sqrt{\delta}}\sqrt{t+\frac{c^2}{\sigma^2\delta}} - \frac{c^2}{\sigma^2\delta} - |t| \Big) \Big\}\,
\Psi\Big(\frac{\sqrt{2\delta}}{\sigma}(u+r)\Big)
\]
\[
= \mathbb{E}\Big\{ \sup_{t\in[0,\infty)} \exp\Big( \sqrt{2}B(t) - 2t + \frac{2c}{\sigma\sqrt{\delta}}\sqrt{t}\Big) \Big\}\,
\Psi\Big(\frac{\sqrt{2\delta}}{\sigma}(u+r)\Big),
\]
which combined with (26) implies that
\[
(27)\qquad \mathbb{E}\Big\{ \sup_{t\in[0,\infty)} \exp\Big( \sqrt{2}B(t) - 2t + \frac{2c}{\sigma\sqrt{\delta}}\sqrt{t}\Big) \Big\} = \Big(\Psi\Big(\frac{\sqrt{2}\,c}{\sigma\sqrt{\delta}}\Big)\Big)^{-1}.
\]
.
σ δ
4. Proofs
In the proofs presented in this section Ci , i ∈ N are some positive constants which may be different from line to line.
We first give two preliminary lemmas, which play an important role in the proof of Theorem 2.2.
Lemma 4.1. Let ξ(t), t ∈ R be a centered stationary Gaussian process with unit variance and correlation function r
satisfying
1 − r(t) ∼ aρ2 (|t|),
(28)
t → 0,
with a > 0, and ρ ∈ Rα/2 , α ∈ (0, 2]. Let f be a continuous function, Ku be a family of index sets and
Zu (t) :=
−
ξ(←
ρ (u−1 )t)
,
−
−2
1 + u f (←
ρ (u−1 )uλ t)
t ∈ [S1 , S2 ],
where λ > 0 and −∞ < S1 < S2 < ∞. If Mk (u), k ∈ Ku is such that
(29)
lim sup
u→∞ k∈Ku
then we have
1
lim sup
P
u→∞ k∈Ku Ψ(Mk (u))
(30)
where
Rfη [S1 , S2 ]
with η := limt↓0
ρ2 (t)
t2/λ
:= E
(
sup
e
(
Mk (u)
− 1 = 0,
u
sup
)
Zu (t) > Mk (u)
t∈[S1 ,S2 ]
√
2aBα (t)−a|t|α −f (η −1/α t)
t∈[S1 ,S2 ]
)
=
(
− Rfη [S1 , S2 ] = 0,
Hα [a1/α S1 , a1/α S2 ]
h
Pα,a
[S1 , S2 ]
f (·) ≡ 0,
otherwise,
∈ (0, ∞] and h(t) = f (η −1/α t) for η ∈ (0, ∞), h(t) = f (0) for η = ∞.
Proof of Lemma 4.1: We set η −1/α = 0 if η = ∞. The proof follows by checking the conditions of [52][Theorem 2.1]
where the result still holds if we omit the requirements $f(0) = 0$ and $[S_1, S_2] \ni 0$. By (29)
lim inf Mk (u) = ∞.
u→∞ k∈Ku
By continuity of f we have
(31)
lim
sup
u→∞ k∈K ,t∈[S ,S ]
u
1
2
−
Mk2 (u)u−2 f (←
ρ (u−1 )uλ t) − f (η −1/α t) = 0.
Moreover, (28) implies
−
−
−
−
ρ (u−1 )(t − t′ ) , u → ∞,
Var(ξ(←
ρ (u−1 )t) − ξ(←
ρ (u−1 )t′ )) = 2 − 2r ←
ρ (u−1 )(t − t′ ) ∼ 2aρ2 ←
holds for t, t′ ∈ [S1 , S2 ]. Thus
(32)
lim sup
sup
u→∞ k∈Ku t6=t′ ∈[S ,S ]
1
2
Mk2 (u)
−
−
Var(ξ(←
ρ (u−1 )t) − ξ(←
ρ (u−1 )t′ ))
− 1 = 0.
←
−
2
2
−1
2au ρ (| ρ (u )(t − t′ )|)
Since ρ2 ∈ Rα which satisfies the uniform convergence theorem (UCT) for regularly varying function, see, e.g., [53],
i.e.,
(33)
lim
sup
u→∞ t,t′ ∈[S ,S ]
1
2
α
−
ρ (u−1 )(t − t′ ) − |t − t′ | = 0,
u 2 ρ2 ←
and further by the Potter’s bound for ρ2 , see [53] we have
−
u 2 ρ2 ←
ρ (u−1 )(t − t′ )
α−ε1
α+ε1
(34)
< ∞,
≤
C
max
|S
−
S
|
,
|S
−
S
|
lim sup sup
1
1
2
1
2
α−ε
1
u→∞ t,t′ ∈[S1 ,S2 ]
|t − t′ |
t6=t′
where ε1 ∈ (0, min(1, α)). We know that for α ∈ (0, 2]
(35)
|t|α − |t′ |
α
≤ C2 |t − t′ |
α∧1
, t, t′ ∈ [S1 , S2 ].
By (28) for any small ǫ > 0, when u large enough
−
−
r(←
ρ (u−1 )t) ≤ 1 − ρ2 (←
ρ (u−1 ) |t|)(1 − ǫ),
(36)
−
−
r(←
ρ (u−1 )t) ≥ 1 − ρ2 (←
ρ (u−1 ) |t|)(1 + ǫ)
hold for t ∈ [S1 , S2 ], then by (29) for u large enough
− −1
−
sup
Mk2 (u)E [ξ(←
ρ (u )t) − ξ(←
ρ (u−1 )t′ )]ξ(0)
sup
k∈Ku |t−t′ |<ε,t,t′ ∈[S1 ,S2 ]
≤ C3 u 2
≤ C3
≤ C3
(37)
(38)
sup
|t−t′ |<ε,t,t′ ∈[S1 ,S2 ]
sup
|t−t′ |<ε,t,t′ ∈[S1 ,S2 ]
−
−
r(←
ρ (u−1 )t) − r(←
ρ (u−1 )t′ )
−
−
−
−
u2 ρ2 (←
ρ (u−1 ) |t|) − u2 ρ2 (←
ρ (u−1 ) |t′ |) + ǫ u2 ρ2 (←
ρ (u−1 ) |t|) + ǫ u2 ρ2 (←
ρ (u−1 ) |t′ |)
u 2 ρ2
sup
|t−t′ |<ε,t,t′ ∈[S1 ,S2 ]
α−ε1
α−ε1
+C4 ǫ |t|
+ |t′ |
←
−
ρ (u−1 )(t) − |t|α + u2 ρ2
α
α
←
−
ρ (u−1 )(t′ ) − |t′ | + |t|α − |t′ |
≤ C5 εα∧1 + C6 ǫ, u → ∞
→ 0, ε → 0, ǫ → 0,
where in (37) we use (34) and (38) follows from (33) and (35).
Hence the proof follows from [52][Theorem 2.1].
Lemma 4.2. Let Zu (s, t), (s, t) ∈ R2 be a centered stationary Gaussian field with unit variance and correlation function
rZu (·, ·) satisfying
1 − rZu (s, t) = au
(39)
−2
s
←
−
ρ (u−1 )
α/2
t
+ ←
−
ρ (u−1 )
α/2
,
(s, t) ∈ R2 ,
with a > 0, ρ2 ∈ Rα and α ∈ (0, 2]. Let Ku be some index sets. Then, for Mk (u), k ∈ Ku satisfying (29) and for any
S1 , S2 , T1 , T2 ≥ 0 such that max(S1 , S2 ) > 0, max(T1 , T2 ) > 0, we have
(
)
1
P
sup Zu (s, t) > Mk (u) − F (S1 , S2 , T1 , T2 ) = 0,
lim sup
u→∞ k∈Ku Ψ(Mk (u))
(s,t)∈D(u)
−
−
−
−
where D(u) = [−←
ρ (u−1 )S1 , ←
ρ (u−1 )S2 ] × [−←
ρ (u−1 )T1 , ←
ρ (u−1 )T2 ] and
F (S1 , S2 , T1 , T2 ) = Hα/2 [−a2/α S1 , a2/α S2 ]Hα/2 [−a2/α T1 , a2/α T2 ].
Proof of Lemma 4.2: The proof follows by checking the conditions of [35][Lemma 5.3].
For D = [−S1 , S2 ] × [−T1 , T2 ] we have
(
P
sup
)
Zu (s, t) > Mk (u)
(s,t)∈Du
=P
(
)
−
−
sup Zu (←
ρ (u−1 )s, ←
ρ (u−1 )t) > Mk (u) .
(s,t)∈D
Since by (39)
−
−
−
−
Var(Zu (←
ρ (u−1 )s, ←
ρ (u−1 )t) − Zu (←
ρ (u−1 )s′ , ←
ρ (u−1 )t′ ))
we obtain
(40)
lim sup
sup
u→∞ k∈Ku (s,t)6=(s′ ,t′ )∈D
Mk2 (u)
−
−
= 2 − 2rZu ←
ρ (u−1 )(s − s′ ), ←
ρ (u−1 )(t − t′ )
α/2
α/2
+ |t − t′ |
= au−2 |s − s′ |
−
−
−
−
Var(Zu (←
ρ (u−1 )s, ←
ρ (u−1 )t) − Zu (←
ρ (u−1 )s′ , ←
ρ (u−1 )t′ ))
− 1 = 0.
′
α/2
′
α/2
2a(|s − s |
+ |t − t | )
Further, since for α/2 ∈ (0, 1]
|t|α/2 − |t′ |
α/2
≤ C1 |t − t′ |
α/2
,
|s|α/2 − |s′ |
α/2
≤ C2 |s − s′ |
α/2
holds for t, t′ ∈ [−T1 , T2 ], s, s′ ∈ [−S1 , S2 ], we have by (39)
−
−
−
−
Mk2 (u)E [Zu (←
ρ (u−1 )s, ←
ρ (u−1 )t) − Zu (←
ρ (u−1 )s′ , ←
ρ (u−1 )t′ )]Zu (0, 0)
sup
sup
k∈Ku |(s,t)−(s′ ,t′ )|<ε
(s,t),(s′ ,t′ )∈D
≤ C3 u 2
−
−
−
−
ρ (u−1 )s′ , ←
ρ (u−1 )t′ )
ρ (u−1 )s, ←
ρ (u−1 )t) − rZu (←
rZu (←
sup
|(s,t)−(s′ ,t′ )|<ε
(s,t),(s′ ,t′ )∈D
= C3 a
≤ C3 a
sup
|(s,t)−(s′ ,t′ )|<ε
(s,t),(s′ ,t′ )∈D
sup
|(s,t)−(s′ ,t′ )|<ε
(s,t),(s′ ,t′ )∈D
|s|α/2 + |t|α/2 − |s′ |
|s|
α/2
− |s′ |
α/2
α/2
+ |t|
− |t′ |
α/2
α/2
− |t′ |
α/2
≤ C4 εα/2 → 0, u → ∞, ε → 0.
Hence the claim follows from [35][Lemma 5.3].
Proof of Theorem 2.2: We have from A3
ρ2 (t)
−
ρ (u−1 ) = η −λ/2 .
= η ∈ [0, ∞],
lim uλ ←
u→∞
t→0 t2/λ
Without loss of generality, we consider only the case tu = 0 for u large enough.
lim
By A2 for t ∈ ∆(u), for sufficiently large u,
1
(41)
Fu,+ε (t)
≤ σu (t) ≤
1
Fu,−ε (t)
Fu,±ε (t) = 1 + u−2 (1 ± ε)f (uλ t) ± ε
,
for small constant ε ∈ (0, 1). Since further
(
(42)
π(u) := P
sup Xu (t) > Mu
t∈∆(u)
we have
(
Xu (t)
π(u) ≤ P
sup
> Mu
F
t∈∆(u) u,−ε (t)
)
)
,
(
=P
sup Xu (t)σu (t) > Mu
t∈∆(u)
)
(
Xu (t)
π(u) ≥ P
sup
> Mu
F
t∈∆(u) u,+ε (t)
)
.
Set for some positive constant S
−
−
Ik (u) = [k ←
ρ (u−1 )S, (k + 1)←
ρ (u−1 )S], k ∈ Z.
Further, define
x1 (u)
− I{x1 ≤0} ,
Gu,+ε (k) = Mu sup Fu,+ε (s), N1 (u) =
−
S←
ρ (u−1 )
s∈Ik (u)
x2 (u)
Gu,−ε (k) = Mu inf Fu,−ε (s), N2 (u) =
+ I{x2 ≤0} .
−
S←
ρ (u−1 )
s∈Ik (u)
In view of [54], we can find centered stationary Gaussian processes Y±ε (t), t ∈ R with continuous trajectories, unit
variance and correlation function satisfying
r±ε (t) = 1 − (1 ± ε)ρ2 (|t|)(1 + o(1)), t → 0.
Case 1) η = ∞:
For any u positive
N2 (u)−1
(43)
X
k=N1 (u)+1
P
(
sup Xu (t) > Mu
t∈Ik (u)
where
)
N2 (u)
Λ1 (u) =
X
k=N1 (u)
and
Λ2 (u) =
X
P
−
(
2
X
i=1
Λi (u) ≤ π(u) ≤
sup Xu (t) > Mu ,
t∈Ik (u)
N1 (u)≤k,l≤N2 (u),l≥k+2
Set below
N2 (u)
P
X
P
k=N1 (u)
sup
(
Xu (t) > Mu
t∈Ik+1 (u)
(
sup Xu (t) > Mu
t∈Ik (u)
)
,
sup Xu (t) > Mu , sup Xu (t) > Mu
t∈Ik (u)
Hα
, Θ(u) = λ ←
u −
ρ (u−1 )
Z
x2
x1
t∈Il (u)
e−f (t) dtΨ(Mu ).
)
.
)
,
which is well-defined since
R x2
x1
e−f (t) dt < ∞ follows by the assumption f ∈ C0∗ ([x1 , x2 ]). By Slepian inequality (see
e.g., [55]), (42) and Lemma 4.1
)
(
N2 (u)
X
P
sup Xu (t) > Mu
k=N1 (u)
N2 (u)
≤
t∈Ik (u)
X
P
N2 (u)
≤
P
N2 (u)
=
P
k=N1 (u)
sup Xu (t) > Gu,−ε (k)
(
)
sup Y+ε (t) > Gu,−ε (k)
t∈Ik (u)
k=N1 (u)
X
)
t∈Ik (u)
k=N1 (u)
X
(
(
)
sup Y+ε (t) > Gu,−ε (k)
t∈I0 (u)
N2 (u)
∼
X
k=N1 (u)
Hα [0, (1 + ε)1/α S]Ψ(Gu,−ε (k))
N2 (u)
X
∼ Hα [0, (1 + ε)1/α S]Ψ(Mu )
Z
1/α
∼
Hα [0, (1 + ε) S]
−
Suλ ←
ρ (u−1 )
∼ Θ(u),
(44)
x2
2
−2
e−Mu u
inf s∈Ik (u) [(1−ε)f (uλ s)−ε]
k=N1 (u)
e−(1−ε)f (t)+ε dtΨ(Mu )
x1
u → ∞, S → ∞, ε → 0.
Similarly, we derive that
N2 (u)−1
X
(45)
P
k=N1 (u)+1
(
sup Xu (t) > u
t∈Ik (u)
)
≥ (1 + o(1))Θ(u), u → ∞, S → ∞, ε → 0.
Moreover,
N2 (u)
Λ1 (u) ≤
X
P
(
≤
(46)
X
sup
k=N1 (u)
sup Y+ε (t) > Gbu,−ε (k)
t∈Ik (u)∪Ik+1 (u)
N2 (u)
≤
)
t∈Ik (u)
k=N1 (u)
−P
(
+P
(
sup
t∈Ik+1 (u)
)!
)
Y+ε (t) > Gbu,−ε (k)
Y−ε (t) > G u,+ε (k)
2Hα [0, (1 + ε)1/α S] − Hα [0, 2(1 − ε)1/α S] Ψ(Gbu,−ε (k))
2Hα [0, (1 + ε)1/α S] − Hα [0, 2(1 − ε)1/α S]
= o(Θ(u)), u → ∞, S → ∞, ε → 0,
N2 (u)
X
k=N1 (u)
Ψ(Gbu,−ε (k))
where
Gbu,−ε (k) = min(Gu,−ε (k), Gu,−ε (k + 1)),
G u,+ε (k) = max(Gu,+ε (k), Gu,+ε (k + 1)).
By A3 for any (s, t) ∈ Ik (u) × Il (u) with N1 (u) ≤ k, l ≤ N2 (u), l ≥ k + 2 we have
2 ≤ V ar Xu (s) + Xu (t) = 4 − 2(1 − ru (s, t)) ≤ 4 − ρ2 (|t − s|) ≤ 4 − C1 u−2 |(l − k − 1)S|α/2
and for (s, t), (s′ , t′ ) ∈ Ik (u) × Il (u) with N1 (u) ≤ k, l ≤ N2 (u)
′
′
Xu (s) + Xu (t)
Xu (s ) + Xu (t )
1 − Cov q
, q
V ar Xu (s) + Xu (t)
V ar Xu (s′ ) + Xu (t′ )
2
′
′
Xu (s) + Xu (t)
Xu (s ) + Xu (t )
1
q
−q
= E
2
V ar X (s) + X (t)
V ar X (s′ ) + X (t′ )
u
u
u
u
n
2 o
1
E Xu (s) − Xu (s′ ) + Xu (t) − Xu (t′ )
V ar Xu (s) + Xu (t)
2
1
1
+V ar Xu (s′ ) + Xu (t′ ) q
−q
′
′
V ar Xu (s) + Xu (t)
V ar Xu (s ) + Xu (t )
o
n
o
n
n
2 o
2
2
+ 2E Xu (t) − Xu (t′ )
+ E Xu (s) − Xu (s′ ) + Xu (t) − Xu (t′ )
≤ 2E Xu (s) − Xu (s′ )
=
≤ 8(1 − ru (s, s′ ) + 1 − ru (t, t′ ))
t − t′
s − s′ α/2
−2
+
= 16u
←
−
←
−
ρ (u−1 )
ρ (u−1 )
α/2
.
In view of our assumptions, we can find centered homogeneous Gaussian random fields Zu (s, t) with correlation
!!
α/2
α/2
t
s
−2
.
+ ←
rZu (s, t) = exp −32u
←
−
−
ρ (u−1 )
ρ (u−1 )
Slepian inequality, Lemma 4.2 and (44) imply
Λ2 (u) ≤
≤
≤
≤
≤
X
P
X
P
N1 (u)≤k,l≤N2 (u),l≥k+2
N1 (u)≤k,l≤N2 (u),l≥k+2
X
(
(
sup Xu (s) > Mu , sup Xu (t) > Mu
s∈Ik (u)
t∈Il (u)
sup
(s,t)∈Ik (u)×Il (u)
)
)
e
(Xu (s) + Xu (t)) > 2Gu,−ε (k, l)
2Geu,−ε (k, l)
Zu (s, t) > p
4 − C1 u−2 |(l − k − 1)S|α/2
N1 (u)≤k,l≤N2 (u),l≥k+2
!
2
X
2Geu,−ε (k, l)
2/α
Hα/2 [0, 32 S] Ψ p
4 − C1 u−2 |(l − k − 1)S|α/2
N1 (u)≤k,l≤N2 (u),l≥k+2
!
N2 (u) N2 (u)−N1 (u)
2
X
X
2Gu,−ε (k)
2/α
Hα/2 [0, 32 S] Ψ p
2
4 − C1 u−2 (lS)α/2
l=1
k=N1 (u)
N2 (u)
≤
(
2
X
k=N1 (u)
P
sup
(s,t)∈I0 (u)×I0 (u)
)
∞
2
X
α/2
Hα/2 [0, 322/α S] Ψ (Gu,−ε (k))
e−C2 (lS)
l=1
N2 (u)
(47)
≤
2Hα/2 322/α Se−C3 S
=
o(Θ(u)),
α/2
X
k=N1 (u)
Hα/2 [0, 322/α S]Ψ (Gu,−ε (k))
u → ∞, S → ∞, ε → 0,
where $\widetilde{G}_{u,-\varepsilon}(k,l) = \min(G_{u,-\varepsilon}(k), G_{u,-\varepsilon}(l))$. Combining (43)-(46) with (47), we obtain
π(u) ∼ Θ(u),
u → ∞.
Case 2) η ∈ (0, ∞): This implies λ = 2/α.
Set for any small constant θ ∈ (0, 1) and any constant S1 > 0
(
(
(x2 − θ)η 1/α ,
−S
,
if
x
=
−∞;
1
1
(48)
S2∗ =
S1∗ =
S1 ,
(x1 + θ)η 1/α , if x1 ∈ (−∞, ∞),
(49)
S1∗∗
=
(
−S,
(x1 − θ)η 1/α ,
if x1 = −∞;
if x1 ∈ (−∞, ∞),
S2∗∗
=
(
(x2 + θ)η 1/α ,
S,
if x2 ∈ (−∞, ∞);
if x2 = ∞,
if x2 ∈ (−∞, ∞);
if x2 = ∞.
−
−
−
−
With K ∗ = [←
ρ (u−1 )S1∗ , ←
ρ (u−1 )S2∗ ] and K ∗∗ = [←
ρ (u−1 )S1∗∗ , ←
ρ (u−1 )S2∗∗ ] we have for any S1 > 0 and u large enough
(50)
π(u) ≥ P sup Xu (t) > Mu ,
t∈K ∗
π(u) ≤ P
(51)
sup Xu (t) > Mu
t∈K ∗∗
N2 (u)
+
X
P
k=N1 (u)
k6=0,−1
(
sup Xu (t) > Mu
t∈Ik (u)
)
.
Using Slepian inequality and Lemma 4.1, we have that
Y−ε (t)
≥ P sup
P sup Xu (t) > Mu
> Mu
t∈K ∗ Fu,+ε (t)
t∈K ∗
h
+ε
∼ Pα,1
[S1∗ , S2∗ ]Ψ(Mu ), u → ∞,
where h±ε (t) = (1 ± ε)f (η −1/α t) ± ε, and similarly
P sup Xu (t) > Mu
Y+ε (t)
> Mu
≤ P sup
t∈K ∗∗ Fu,−ε (t)
t∈K ∗∗
h
−ε
∼ Pα,1
[S1∗∗ , S2∗∗ ]Ψ(Mu ), u → ∞.
(52)
Moreover, in light of (6), the Slepian inequality and Lemma 4.1
(
)
(
)
N2 (u)
N2 (u)
X
X
Y+ε (t)
P
sup
≤
P
sup Xu (t) > Mu
> Mu
t∈Ik (u) Fu,−ε (t)
t∈Ik (u)
k=N1 (u)
k6=−1,0
k=N1 (u)
k6=−1,0
N2 (u)
≤
X
P
(
)
sup Y+ε (t) > Gu,−ε (k)
t∈I0 (u)
k=N1 (u)
k6=−1,0
N2 (u)
∼
X
k=N1 (u)
k6=−1,0
Hα [0, (1 + ε)1/α S]Ψ (Gu,−ε (k))
N2 (u)
(53)
X
e− inf s∈[k,k+1] ((1−ε)f (sη
∼
Hα [0, (1 + ε)1/α S]Ψ(Mu )
∼
C4 Hα Ψ(Mu )Se−C5 (η
=
o (Ψ(Mu )) , u → ∞, S → ∞, ε → 0.
−1/α
S)−ε)
k=N1 (u)
k6=−1,0
−1/α
S)ǫ1 /2 ε
e
Letting ε → 0, S1 → ∞, S → ∞, and θ → 0 we obtain
f
π(u) ∼ Pα,η
[x1 , x2 ]Ψ(Mu ),
λ
λ
Next, if we set x1 (u) = − lnuu , x2 (u) = lnuu , then
x1 = −∞,
x2 = ∞,
S1∗ = −S1 ,
u → ∞.
S2∗ = S1 ,
S1∗∗ = −S,
S2∗∗ = S.
Inserting (52), (53) into (51) and letting ε → 0 leads to
lim
u→∞
−1/α
π(u)
f
S)ǫ1 /2
≤ Pα,η
[−S, S] + C4 Hα Se−C5 (η
< ∞.
Ψ(Mu )
By (50), we have
lim
u→∞
π(u)
f
≥ Pα,η
[−S1 , S1 ] > 0.
Ψ(Mu )
Letting S1 → ∞, S → ∞ we obtain
f
Pα,η
(−∞, ∞) ∈ (0, ∞),
Case 3) η = 0: Note that
(
π(u) ≤ P
sup
t∈((I−1 (u)∪I0 (u))∩∆(u))
f
π(u) ∼ Pα,η
(−∞, ∞)Ψ(Mu ),
Xu (t)σu (t) > Mu
)
N2 (u)
+
X
k=N1 (u)
k6=−1,0
P
(
u → ∞.
sup Xu (t)σu (t) > Mu
t∈Ik (u)
)
=: J1 (u) + J2 (u).
By (41)
1
1
1
≤ σu (t) ≤
≤
Fu,+ε (t)
Fu,−ε (t)
1 + u−2 inf s∈∆(u) [(1 − ε)f (uλ s) − ε]
(54)
holds for all t ∈ ∆(u). Hence Lemma 4.1 implies
(
J1 (u) ≤
sup
P
−
−
t∈[−←
ρ (u−1 )S,←
ρ (u−1 )S]
(
)
−2
λ
inf [(1 − ε)f (u s) − ε]
Xu (t) > Mu 1 + u
s∈∆(u)
)
Y+ε (t) > Mu 1 + u−2 inf [(1 − ε)f (uλ s) − ε]
≤
P
∼
−2
λ
1/α
inf [(1 − ε)f (u s) − ε]
Hα [0, 2(1 + ε) S]Ψ Mu 1 + u
sup
−
−
t∈[−←
ρ (u−1 )S,←
ρ (u−1 )S]
s∈∆(u)
s∈∆(u)
∗
∼
Hα [0, 2(1 + ε)1/α S]Ψ (Mu ) e−(1−ε)ω
+ε
∼
Ψ (Mu ) e−ω , u → ∞, S → 0, ε → 0,
∗
where ω ∗ = inf t∈[x1 ,x2 ] f (t). Furthermore, by Lemma 4.1, for any x > 0
(
)
N2 (u)
N2 (u)
X
X
Hα [0, (1 + ε)1/α S]Ψ (Gu,−ε (k))
P
sup Y+ε (t) > Gu,−ε (k) ∼
J2 (u) ≤
k=N1 (u)
k6=−1,0
(55)
t∈I0 (u)
≤
2Hα [0, (1 + ε)1/α S]Ψ(Mu )
≤
(xS)ǫ1 /2
∞
X
k=N1 (u)
k6=−1,0
e−(1−2ε)(kxS)
ǫ1 /2
+2ε
k=1
C6 Hα Ψ(Mu )Se−C7
= o (Ψ(Mu )) , u → ∞, x → ∞, S → 0,
hence
lim
u→∞
∗
π(u)
≤ e−ω ,
Ψ(Mu )
u → ∞.
Next, since f ∈ C0∗ ([x1 , x2 ]) there exists y(u) ∈ ∆(u) satisfying
lim y(u)uλ = y ∈ {z ∈ [x1 , x2 ] : f (z) = ω ∗ }.
u→∞
Consequently, in view of (54)
π(u)
≥ P {Xu (y(u)) > Mu }
≥ P X u (y(u)) > Mu (1 + [(1 + ε)f (uλ y(u)) + ε]u−2 )
= Ψ Mu (1 + (1 + ε)[f (uλ y(u)) + ε]u−2 )
∼ Ψ (Mu ) e−f (y) , u → ∞, ε → 0,
which implies that
∗
π(u) ∼ Ψ (Mu ) e−ω ,
u→∞
establishing the proof.
Proof of Theorem 2.4: Clearly, for any u > 0
(
π(u) ≤ P
where with D(u) := [0, T ] \ (tu + ∆(u)),
(
π(u) := P
sup Xu (t) > Mu
t∈[0,T ]
sup Xu (tu + t) > Mu
t∈∆(u)
)
)
≤ π(u) + π1 (u),
, π1 (u) := P
(
Next, we derive an upper bound for π1 (u) which will finally imply that
(56)
π1 (u) = o(π(u)),
sup Xu (t) > Mu
t∈D(u)
u → ∞.
)
.
Thus by A4, A5 and Piterbarg inequality (see e.g., [10][Theorem 8.1], [56][Theorem 3] and [35][Lemma 5.1])
(
)
π1 (u) = P
sup X u (t)σu (t) > Mu
(
t∈D(u)
p(ln u)q
≤ P
sup X u (t) > Mu + C1
u
t∈D(u)
p(ln u)q
≤ C2 T Mu2/ς Ψ Mu + C1
u
(57)
)
u → ∞.
= o (Ψ (Mu )) ,
Since A1’ implies A1, by Theorem 2.2 and A2, A3, we have
R x2 −f (t)
Hα
e
dt, if η = ∞,
−
ρ (u−1 ) x1
uλ ←
f
π(u) ∼ Ψ (Mu )
(58)
Pα,η
[x1 , x2 ],
if η ∈ (0, ∞),
1,
if η = 0,
u → ∞,
where the result of case η = 0 comes from the fact that f (t) ≥ 0 for t ∈ [x1 , x2 ], f (0) = 0 and 0 ∈ [x1 , x2 ].
Consequently, it follows from (57) and (58) that (56) holds, and thus the proof is complete.
Proof of Proposition 3.1: Without loss of generality we assume that gm = g(t0 ) = 0.
1/γ
q
with some large q > 1.
i) We present first the proof for t0 ∈ (0, T ). Let ∆(u) = [−δ(u), δ(u)], where δ(u) = (lnuu)
By (4) for u large enough and some small ε ∈ (0, 1)
γ
(59)
1+
1
u − g(t + t0 )
g(t + t0 )
(1 + ε)c |t|
(1 − ε)c |t|
≤
:=
=1−
≤1+
u
σu (t + t0 )
u
u
u
holds for all t ∈ [−θ, θ], θ > 0. It follows that
(
Π(u) ≤ P
sup (X(t) + g(t)) > u
t∈[0,T ]
)
γ
≤ Π(u) + Π1 (u),
with
Π1 (u) := P
(
sup
)
(X(t) + g(t)) > u ,
t∈([0,T ]\[t0 −θ,t0 +θ]
and
Π(u) := P
(
sup
(X(t) + g(t)) > u
t∈[t0 −θ,t0 +θ]
)
(
)
u
=P
sup
X(t)
>u .
u − g(t)
t∈[t0 −θ,t0 +θ]
By (59), we may further write
(60)
lim
1
σu (t0 +t) −
cu−1 |t|γ
sup
u→∞ t∈∆(u),t6=0
1
− 1 = lim
sup
u→∞ t∈∆(u),t6=0
1
σu (t0 +t) − 1
cu−2 |u1/γ t|γ
− 1 = 0,
and
1
(1 − ε)c(ln u)q
.
≥1+
u2
t∈[−θ,θ]\∆(u) σu (t + t0 )
inf
In addition, from (14) we have that
lim
sup
u→∞ s,t∈∆(u)
t6=s
1 − r(t0 + t, t0 + s)
− 1 = 0,
a|t − s|α
and
sup
s,t∈[t0 −θ,t0 +θ]
E X(t) − X(s))2 ≤
sup
(2 − 2r(s, t)) ≤ C1 |t − s|α
s,t∈[t0 −θ,t0 +θ]
hold when θ is small enough. Therefore, by Theorem 2.4
γ
1 R∞
Hα a α wt e−c|t| dt, if α < 2γ,
0
1
2
c|t|γ
Π(u) ∼ u( α − γ )+ Ψ (u)
if α = 2γ,
Pα,a [wt0 , ∞),
1,
if α > 2γ.
Moreover, since gθ := supt∈[0,T ]\[t0 −θ,t0 +θ] g(t) < 0 we have
)
(
Z
Π1 (u) ≤ P
sup
X(t) > u − gθ ∼ Hα
t∈[0,T ]\[t0 −θ,t0 +θ]
T
0
2
1
dt u α Ψ (u − gθ ) = o(Π(u)), u → ∞,
a(t)
hence the claims follow.
For t0 = 0 and t0 = T , we just need to replace ∆(u) by ∆(u) = [0, δ(u)] and ∆(u) = [−δ(u), 0], respectively.
ii) Applying [10][Theorem 7.1] we obtain
(
)
sup (X(t) + g(t)) > u
P
(
=P
t∈[A,B]
sup X(t) > u
t∈[A,B]
)
∼
Z
P
≥
sup (X(t) + g(t)) > u
t∈[0,T ]
(
sup (X(t) + g(t)) > u
t∈[0,T ]
)
)
sup (X(t) + g(t)) > u ,
P
t∈[A,B]
≤
P
sup (X(t) + g(t)) > u + P
t∈∆ε
Since g is a continuous function and gε := supt∈[0,T ]\∆ε g(t) < 0
(
)
(
sup
P
sup (X(t) + g(t)) > u
t∈∆ε
sup
(X(t) + g(t)) > u .
)
≤
C2 u2/α Ψ(u − gε ) = o u2/α Ψ(u) ,
≤
P
∼
Z
Hence the claims follow.
sup
t∈[0,T ]\∆ε
X(t) > u − gε
Z
sup X(t) > u ∼
t∈∆ε
B
)
t∈[0,T ]\∆ε
P
Further, we have
(
≤
(X(t) + g(t)) > u
t∈[0,T ]\∆ε
P
2
(a(t))1/α dtHα u α Ψ (u) .
A
Set ∆ε = [A − ε, B + ε] ∩ [0, T ] for some ε > 0, then we have
(
)
(
P
B
B+ε
u → ∞, ε → 0.
2
1
(a(t)) α dtHα u α Ψ(u)
A−ε
1
2
(a(t)) α dtHα u α Ψ(u),
A
u → ∞, ε → 0.
Proof of Proposition 3.3: We give the proof only for t0 = 0. In this case, x ∈ (0, ∞). By definition
n
o
n
o P supt∈[0,u−1/γ x] (X(t) + g(t)) > u
n
o .
P u1/γ (τu − t0 ) ≤ x τu ≤ T =
P supt∈[0,T ] (X(t) + g(t)) > u
Set ∆(u) = [0, u−1/γ x]. For all u large
(
P
sup (X(t) + g(t)) > u
t∈∆(u)
u
and σu (t) =
Denote Xu (t) = X(t) u−g(t)
)
(
)
u
=P
sup X(t)
>u .
u − g(t)
t∈∆(u)
u
u−g(t) .
As in the proof
(
)
2
(α
− γ1 )+
P
sup (X(t) + g(t)) > u ∼ u
Ψ (u)
t∈∆(u)
of Proposition 3.1 i), by Theorem 2.2 we obtain
1
a α Hα
c|t|
γ
Rx
0
Pα,a [0, x],
1,
γ
e−c|t| dt, if α < 2γ,
if α = 2γ,
if α > 2γ.
Consequently, by Proposition 3.1 statement i), the results follow.
Proof of Proposition 3.6: Clearly, for any u > 0
(
)
P
sup (X(t) + g(t)) > u
t∈[0,T ]
(
mu (t)
u − g(tu )
>
= P sup X(t)
mu (tu )
σ(tu )
t∈[0,T ]
)
,
and A1’ is satisfied. By the continuity of σ(t), limu→∞ tu = t0 and σ(t0 ) = 1, we have that for u large enough
u − g(tu )
∼ u, u → ∞.
σ(tu )
σ(tu ) > 0, and
Set next
Xu (t) =X(t)
which has standard deviation function σu (t) =
mu (t)
,
mu (tu )
mu (tu +t)
mu (tu )
t ∈ [0, T ],
and correlation function ru (s, t) = r(s, t) satisfying assump-
tions A2–A4. Further, X u (t) = X(t) implies A5. Hence the claims follow from Theorem 2.4.
Proof of Proposition 3.8: For all u large
E [X(tu + t) − X(tu + s)]2 − [σ(tu + t) − σ(tu + s)]2
1 − r(tu + t, tu + s) =
(61)
.
2σ(tu + t)σ(tu + s)
Using that
E [X(tu + t) − X(tu + s)]2
[σ(tu + t) − σ(tu + s)]
we have, as u → ∞
= σ ′2 (tu + t)(t − s)2 + o((t − s)2 ),
E X ′2 (tu + t) − σ ′2 (tu + t)
(t − s)2 + o((t − s)2 ).
1 − r(tu + t, tu + s) =
2σ(tu + t)σ(tu + s)
E{X ′2 (t)}−σ′2 (t)
2σ(s)σ(t)
Since D(s, t) :=
2
= E X ′2 (tu + s) (t − s)2 + o((t − s)2 ),
is continuous at (t0 , t0 ), then setting D = D(t0 , t0 ) we obtain
lim
sup
u→∞ t∈∆(u),s∈∆(u)
t6=s
1 − r(tu + t, tu + s)
− 1 = 0,
D|t − s|2
which implies that A3 is satisfied. Next we suppose that σ(t) >
σ(t) ≤
1
2 },
by Borell-TIS inequality
P
sup (X(t) + g(t)) > u
t∈E1
2
E (X(t) − X(s))
for any t ∈ [0, T ], since if we set E1 = {t ∈ [0, T ] :
≤ exp −2 u − sup g(t) − C1
t∈[0,T ]
n
o
as u → ∞, where C1 = E supt∈[0,T ] X(t) < 0. Further by (61)
1
2
≤ 2 − 2r(t, s) ≤ 4
!2
= o Ψ u − g(tu )
σ(tu )
sup E X ′2 (θ) (t − s)2 − inf σ ′2 (θ)(t − s)2
θ∈[0,T ]
θ∈[0,T ]
!
,
then A5 is satisfied. Consequently, the conditions of Proposition 3.6 are satisfied and hence the claim follows.
Proof of Proposition 3.9: Without loss of generality we assume that g(t) satisfies (4) with g(t0 ) = 0.
First we present the proof for t0 ∈ (0, T ). Clearly, mu attains its maximum at the unique point t0 . Further, we have
1
g(t0 + t)
mu (t0 )
−1=
(1 − σ(t0 + t)) −
.
mu (t0 + t)
σ(t0 + t)
uσ(t0 + t)
Consequently, by (2) and (4)
mu (t0 )
c γ
β
= 1 + b |t| + |t| (1 + o(1)), t → 0
mu (t0 + t)
u
2/β ∗
q
holds for all u large. Further, set ∆(u) = [−δ(u), δ(u)], where δ(u) = (lnuu)
for some constant q > 1 with
(62)
β ∗ = min(β, 2γ), and let f (t) = b|t|β I{β=β ∗ } + c|t|γ I{2γ=β ∗ } . We have
mu (t0 )
2
2/β ∗
t)
mu (t0 +t) − 1 u − f (u
(63)
= 0.
lim
sup
u→∞ t∈∆(u),t6=0
f (u2/β ∗ t) + I{β6=2γ}
By (2)
(64)
E (X(t) − X(s))2 = E (X(t))2 + E (X(s))2 − 2E X(t)X(s) = 2 − 2r(s, t) ≤ C1 |t − s|α
holds for s, t ∈ [t0 − θ, t0 + θ], with θ > 0 sufficiently small. By (62), for any ε > 0
(ln u)q
mu (t0 )
≥ 1 + C2 (1 − ε)
mu (t0 + t)
u
(65)
holds for all t ∈ [−θ, θ] \ ∆(u). Further
(
Π(u) := P
sup
(X(t) + g(t)) > u
t∈[t0 −θ,t0 +θ]
with
Π1 (u) := P
(
)
≤P
(
sup (X(t) + g(t)) > u
t∈[0,T ]
sup
)
≤ Π(u) + Π1 (u),
)
(X(t) + g(t)) > u .
t∈([0,T ]\[t0 −θ,t0 +θ])
By(63), (2), (65), (64) which imply A2–A5 and Proposition 3.6, we have
R∞
Hα a1/α wt e−f (t) dt,
0
2
2
f
Π(u) ∼ u( α − β∗ )+ Ψ (u)
(66)
Pα,a
[wt0 , ∞),
1,
if α < β ∗ ,
if α = β ∗ ,
if α > β ∗ .
In order to complete the proof it suffices to show that
Π1 (u) = o(Π(u)).
Since σθ := maxt∈([0,T ]\[t0 −θ,t0 +θ]) σ(t) < 1 , by the Borell-TIS inequality we have
(
)
(u − C3 )2
= o(Π(u)),
Π1 (u) ≤ P
sup
X(t) > u ≤ exp −
2σθ2
t∈([0,T ]\[t0 −θ,t0 +θ])
n
o
where C3 = E supt∈[0,T ] X(t) < ∞.
For the cases t0 = 0 and t0 = T , we just need to replace ∆(u) by [0, δ(u)] and [−δ(u), 0], respectively. Hence the proof
is complete.
Proof of Proposition 3.10: i) We shall present the proof only for the case t0 ∈ (0, T ). In this case, [x1 , x2 ] = R. By
definition, for any x ∈ R
n
o
P
sup
(X(t)
+
g(t))
>
u
−λ
t∈[0,tu +u x]
n
o .
P uλ (τu − tu ) ≤ x τu ≤ T =
P supt∈[0,T ] (X(t) + g(t)) > u
For u > 0 define
Xu (t) = X(tu + t)
As in the proof of Proposition 3.6, we obtain
(
P
sup
mu (tu + t)
,
mu (tu )
(X(t) + g(t)) > u
t∈[0,tu +u−λ x]
)
σu (t) =
mu (tu + t)
.
mu (tu )
(
u − g(tu )
=P
sup
Xu (t) >
σ(tu )
t∈[0,tu +u−λ x]
)
,
and A1’, A2–A5 are satisfied with ∆(u) = [−δu , u−λ x]. Clearly, for any u > 0
(
)
u − g(tu )
π(u) ≤ P
sup
Xu (t) >
≤ π(u) + π1 (u),
σ(tu )
t∈[0,tu +u−λ x]
where
(
u − g(tu )
π(u) = P
sup
Xu (t) >
σ(tu )
−λ
t∈[tu −δ(u),tu +u x]
)
,
(
u − g(tu )
π1 (u) = P
sup
Xu (t) >
σ(tu )
t∈[0,tu −δ(u)]
Applying Theorem 2.2 we have
(67)
π(u) ∼ Ψ
u − g(tu )
σ(tu )
R x −f (t)
Hα
e
dt,
−
uλ ←
ρ (u−1 ) −∞
f
Pα,η (−∞, x],
supt∈(−∞,x] e−f (t) ,
if η = ∞,
if η ∈ (0, ∞),
if η = 0.
)
.
In view of (57)
u − g(tu )
,
π1 (u) = o Ψ
σ(tu )
hence
P
(
sup
(X(t) + g(t)) > u
t∈[0,tu +u−λ x]
)
u → ∞,
∼ π(u),
u→∞
and thus the claim follows by (67) and Proposition 3.6.
ii) We give the proof of t0 = T . In this case x ∈ (−∞, 0) implying
n
o
∗
n
o
P
sup
(X(t)
+
g(t))
>
u
−2/β
t∈[0,T +u
x]
∗
n
o
.
P u2/β (τu − T ) ≤ x τu ≤ T =
P supt∈[0,T ] (X(t) + g(t)) > u
Set δu =
(ln u)q
u
2/β ∗
for some q > 1 and let
∗
∆(u) = [−δu , u−2/β x],
σu (t) =
mu (t)
,
mu (T )
with
mu (t) =
For all u large, we have
(
π(u) ≤ P
sup
σ(t)
,
1 − g(t)/u
(X(t) + g(t)) > u
t∈[0,T +u−2/β ∗ x]
where
π(u) := P
(
Xu (t) = X(t)
)
≤ π(u) + P
sup (X(T + t) + g(T + t)) > u
t∈∆(u)
)
(
=P
mu (t)
.
mu (T )
sup
)
(X(t) + g(t)) > u ,
t∈[0,T −δu ]
(
)
sup Xu (T + t) > u .
t∈∆(u)
∗
As in the proof of Proposition 3.9 it follows that the Assumptions A2–A5 hold with ∆(u) = [−δu , u−2/β x]. Hence
an application of Theorem 2.2 yields
R∞
a1/α Hα −x e−f (t) dt, if α < β ∗ ,
2
2
f
π(u) ∼ u( α − β∗ )+ Ψ (u)
Pα,a
[−x, ∞),
if α = β ∗ ,
e−f (x) ,
if α > β ∗ .
(68)
In view of (57)
P
(
sup
(X(t) + g(t)) > u
t∈[0,T −δu ]
implying
P
(
sup
)
t∈[0,T +u−2/β ∗ x]
=P
(
sup
Xu (t) > u
t∈[0,T −δu ]
(X(t) + g(t)) > u
)
)
∼ π(u),
= o (Ψ (u)) ,
u→∞
u → ∞.
Consequently, the proof follows by (68) and Proposition 3.9.
Rt
Proof of Proposition 3.12: Set next A(t) = 0 e−δv dB(v) and define
Z t
e (t) = u + c
U
e−δv dv − σA(t), t ≥ 0.
0
Since
1
sup E [A(t)]2 =
2δ
t∈[0,∞)
e (∞) :=
implying supt∈[0,∞) E {|A(t)|} < ∞, then by the martingale convergence theorem in [57] we have that U
e (t) exists and is finite almost surely. Clearly, for any u > 0
limt→∞ U
e
p(u) = P
inf U (t) < 0
t∈[0,∞)
= P
(
(
)
Z t
−δv
σA(t) − c
e dv > u
sup
t∈[0,∞]
0
)
1
c
1
= P sup σA(− ln t) − (1 − t 2 ) > u .
2δ
δ
t∈[0,1]
The proof will follow by applying Proposition 3.6, hence we check next the assumptions therein for this specific model.
1
ln t) with variance function given by
Below, we set Z(t) = σA(− 2δ
!
Z − 2δ1 ln t
σ2
2
−δv
VZ (t) = V ar σ
(1 − t),
e dB(v) =
2δ
0
t ∈ [0, 1].
We show next that for u sufficiently large, the function
√
√σ
1−t
uVZ (t)
2δ
Mu (t) :=
,
=
c
Gu (t)
1 + δu (1 − t1/2 )
1
0 ≤ t ≤ 1,
with Gu (t) := u + δc (1 − t 2 ) attains its maximum at the unique point tu =
dMu (t)
[Mu (t)]t :=
dt
=
c
δu+c
2
. In fact, we have
"
#
1
u
dVZ2 (t)
dVZ (t)
u
VZ (t) cu − 1
ct− 2
2
− t 2 =
·
−
Gu (t) + VZ (t)
dt
Gu (t) G2u (t)
2δ
2G2u (t)Vz (t)
dt
δ
uσ 2 t−1/2 h c
c 1i
t2 .
− u+
2
4δGu (t)VZ (t) δ
δ
2
c
Letting [Mu (t)]t = 0, we get tu = δu+c
. By (69), [Mu (t)]t > 0 for t ∈ (0, tu ) and [Mu (t)]t < 0 for t ∈ (tu , 1], so tu
(69)
=
is the unique maximum point of Mu (t) over [0, 1]. Further
σ
σu
= √ (1 + o(1)), u → ∞.
Mu := Mu (tu ) = √
2
2δu + 4cu
2δ
We set δ(u) =
(ln u)q
u
2
for some q > 1, and ∆(u) = [−tu , δ(u)]. Next we check the assumption A2. It follows that
[Gu (tu + t)VZ (tu )]2 − [Gu (tu )VZ (tu + t)]2
Mu
−1=
.
Mu (tu + t)
VZ (tu + t)Gu (tu )[Gu (tu + t)VZ (tu ) + VZ (tu + t)Gu (tu )]
We further write
[Gu (tu + t)VZ (tu )]2 − [Gu (tu )VZ (tu + t)]2
h
h
i2 σ 2
c c√
c c √ i2 σ 2
= u+
−
−
(1 − tu ) − u +
(1 − tu − t)
tu + t
tu
δ
δ
2δ
δ
δ
2δ
√
c 2 σ 2
c2 σ 2
c cσ 2 √
= u+
( tu + t − tu )(1 − tu ) −
t
t−2 u+
2
δ
2δ
δ 2δ
2δ 3
√ √
√
c 2 σ 2
c 2 σ 2
t(1 − tu ) − 2 u +
(1 − tu ) tu ( tu + t − tu )
= u+
δ
2δ
δ
2δ
√ 2
c 2 c 2 √
σ2
u+
( t + tu − tu )
−
=
2δ
δ
δ
√
√
σ2
2c
2
=
u + u ( t + tu − tu )2 .
2δ
δ
Since for any $t\in\Delta(u)$
$$\sqrt{\frac{\sigma^2}{2\delta}(1-t_u-\delta(u))} \le V_Z(t_u+t) \le \sqrt{\frac{\sigma^2}{2\delta}}, \qquad u+\frac{c}{\delta}-\frac{c}{\delta}\sqrt{t_u+\delta(u)} \le G_u(t_u+t) \le u+\frac{c}{\delta},$$
we have for all large u
$$V_Z(t_u+t)G_u(t_u)\left[G_u(t_u+t)V_Z(t_u)+V_Z(t_u+t)G_u(t_u)\right] \le \frac{\sigma^2}{\delta}\left(u+\frac{c}{\delta}\right)^2$$
and
$$V_Z(t_u+t)G_u(t_u)\left[G_u(t_u+t)V_Z(t_u)+V_Z(t_u+t)G_u(t_u)\right] \ge \frac{\sigma^2}{\delta}(1-t_u-\delta(u))\left(u+\frac{c}{\delta}-\frac{c}{\delta}\sqrt{t_u+\delta(u)}\right)^2 \ge \frac{\sigma^2}{\delta}\left[\left(u+\frac{c}{\delta}\right)^2-u\right].$$
Thus as $u\to\infty$
$$\inf_{t\in\Delta(u),t\ne0}\frac{M_u/M_u(t_u+t)-1}{\frac12\left(\sqrt{t+\frac{c^2}{\delta^2}u^{-2}}-\frac{c}{\delta}u^{-1}\right)^2}-1 \ge \frac{\frac12\frac{u^2+\frac{2c}{\delta}u}{(u+\frac{c}{\delta})^2}\left(\sqrt{t+t_u}-\sqrt{t_u}\right)^2}{\frac12\left(\sqrt{t+\frac{c^2}{(\delta u)^2}}-\frac{c}{\delta u}\right)^2}-1 \ge \frac{u^2+\frac{2c}{\delta}u}{\left(u+\frac{c}{\delta}\right)^2}-1 \to 0, \qquad (70)$$
where we used the fact that for $t\in\Delta(u)$
$$\left(\sqrt{t+t_u}-\sqrt{t_u}\right)^2 \ge \left(\sqrt{t+\frac{c^2}{(\delta u)^2}}-\frac{c}{\delta u}\right)^2.$$
Furthermore, since for $t\in\Delta(u)$, $t\ne0$,
$$0 \le \frac{\sqrt{t+t_u}-\sqrt{t_u}}{\sqrt{t+\frac{c^2}{(\delta u)^2}}-\frac{c}{\delta u}}-1 = \frac{\sqrt{t+\frac{c^2}{(\delta u)^2}}+\frac{c}{\delta u}}{\sqrt{t+t_u}+\sqrt{t_u}}-1 = \frac{\frac{c^2}{(\delta u)^2}-t_u}{\left(\sqrt{t+t_u}+\sqrt{t_u}\right)\left(\sqrt{t+\frac{c^2}{(\delta u)^2}}+\sqrt{t+t_u}\right)} + \frac{\frac{c}{\delta u}-\sqrt{t_u}}{\sqrt{t+t_u}+\sqrt{t_u}} \le \sqrt{\left(1+\frac{c}{\delta u}\right)^2-1}+\frac{c}{\delta u},$$
we have as $u\to\infty$
$$\sup_{t\in\Delta(u),t\ne0}\frac{M_u/M_u(t_u+t)-1}{\frac12\left(\sqrt{t+\frac{c^2}{\delta^2}u^{-2}}-\frac{c}{\delta}u^{-1}\right)^2}-1 \le \frac{\frac12\frac{u^2+\frac{2c}{\delta}u}{(u+\frac{c}{\delta})^2-u}\left(\sqrt{t+t_u}-\sqrt{t_u}\right)^2}{\frac12\left(\sqrt{t+\frac{c^2}{(\delta u)^2}}-\frac{c}{\delta u}\right)^2}-1 \le \frac{u^2+\frac{2c}{\delta}u}{\left(u+\frac{c}{\delta}\right)^2-u}\left(1+\sqrt{\left(1+\frac{c}{\delta u}\right)^2-1}\right)^2-1 \to 0. \qquad (71)$$
Consequently, (70) and (71) imply
$$\lim_{u\to\infty}\sup_{t\in\Delta(u),t\ne0}\left|\frac{M_u/M_u(t_u+t)-1}{\frac12\left(\sqrt{t+\frac{c^2}{\delta^2}u^{-2}}-\frac{c}{\delta}u^{-1}\right)^2}-1\right| = 0. \qquad (72)$$
Since for $0\le t'\le t<1$ the correlation function of $Z(t)$ equals
$$r(t,t') = \frac{E\left\{\left(\sigma\int_0^{-\frac{1}{2\delta}\ln t}e^{-\delta v}\,dB(v)\right)\left(\sigma\int_0^{-\frac{1}{2\delta}\ln t'}e^{-\delta v}\,dB(v)\right)\right\}}{\sqrt{\frac{\sigma^2}{2\delta}(1-t)}\,\sqrt{\frac{\sigma^2}{2\delta}(1-t')}} = \frac{\sqrt{1-t}}{\sqrt{1-t'}} = 1-\frac{t-t'}{\sqrt{1-t'}\left(\sqrt{1-t'}+\sqrt{1-t}\right)},$$
we have
$$\sup_{t,t'\in\Delta(u),t'\ne t}\left|\frac{1-r(t_u+t,t_u+t')}{\frac12|t-t'|}-1\right| = \sup_{t,t'\in\Delta(u),t'\ne t}\left|\frac{2}{\sqrt{1-t'-t_u}\left(\sqrt{1-t'-t_u}+\sqrt{1-t-t_u}\right)}-1\right| \le \frac{1}{1-\left(\frac{c}{c+\delta u}\right)^2-\delta(u)}-1 \to 0, \quad u\to\infty. \qquad (73)$$
Further, for some small $\theta\in(0,1)$, we obtain (set below $\overline Z(t) = \frac{Z(t)}{V_Z(t)}$)
$$E\left(\overline Z(t)-\overline Z(t')\right)^2 = 2-2r(t,t') \le C_1|t-t'| \qquad (74)$$
for $t,t'\in[0,\theta]$. For all u large
$$\Pi(u) := P\left\{\sup_{t\in[0,\theta]}\left(Z(t)-\frac{c}{\delta}\left(1-t^{\frac12}\right)\right)>u\right\} \le p(u) \le \Pi(u)+\widetilde\Pi(u),$$
where
$$\widetilde\Pi(u) := P\left\{\sup_{t\in[\theta,1]}\left(Z(t)-\frac{c}{\delta}\left(1-t^{\frac12}\right)\right)>u\right\} \le P\left\{\sup_{t\in[\theta,1]}Z(t)>u\right\}.$$
Moreover, for all u large
$$\frac{1}{M_u(t)}-\frac{1}{M_u} \ge \frac{[G_u(t)V_Z(t_u)]^2-[G_u(t_u)V_Z(t)]^2}{2uV_Z^3(t_u)G_u(t_u)} = \frac{\frac{\sigma^2}{2\delta}\left(u^2+\frac{2c}{\delta}u\right)\left(\sqrt t-\sqrt{t_u}\right)^2}{2u\left[\frac{\sigma^2}{2\delta}(1-t_u)\right]^{3/2}\left[u+\frac{c}{\delta}\left(1-\sqrt{t_u}\right)\right]} \ge C_2\left(\sqrt t-\sqrt{t_u}\right)^2 \ge \frac{C_2\,\delta^2(u)}{\left(\sqrt{\delta(u)+t_u}+\sqrt{t_u}\right)^2} \ge C_3\frac{(\ln u)^{q}}{u^2} \qquad (75)$$
holds for any $t\in[t_u+\delta(u),\theta]$, therefore
$$\inf_{t\in[t_u+\delta(u),\theta]}\frac{M_u}{M_u(t)} \ge 1+C_3\frac{(\ln u)^q}{u^2}.$$
The above inequality combined with (72), (73), (74) and Proposition 3.6 yields
$$\Pi(u) \sim \mathcal{P}^{h}_{1,\delta/\sigma^2}\left[-\frac{c}{\delta^2},\infty\right)\Psi\left(\frac{1}{\sigma}\sqrt{2\delta u^2+4cu}\right), \quad u\to\infty.$$
Finally, since
$$\sup_{t\in[\theta,1]}V_Z^2(t) \le \frac{\sigma^2}{2\delta}(1-\theta) \quad\text{and}\quad E\left\{\sup_{t\in[\theta,1]}Z(t)\right\} \le C_4 < \infty,$$
by Borell-TIS inequality
$$\widetilde\Pi(u) \le P\left\{\sup_{t\in[\theta,1]}Z(t)>u\right\} \le \exp\left(-\frac{\delta(u-C_4)^2}{\sigma^2(1-\theta)}\right) = o(\Pi(u)), \quad u\to\infty,$$
which establishes the proof. Next, we consider that
$$P\left\{u^2\left(e^{-2\delta\tau_u}-\left(\frac{c}{\delta u+c}\right)^2\right)\le x \,\Big|\, \tau_u<\infty\right\} = \frac{P\left\{\inf_{t\in[-\frac{1}{2\delta}\ln(t_u+u^{-2}x),\,\infty)}\widetilde U(t)<0\right\}}{P\left\{\inf_{t\in[0,\infty)}\widetilde U(t)<0\right\}} = \frac{P\left\{\sup_{t\in[0,\,t_u+u^{-2}x]}\left(\sigma A\left(-\frac{1}{2\delta}\ln t\right)-\frac{c}{\delta}\left(1-t^{\frac12}\right)\right)>u\right\}}{P\left\{\sup_{t\in[0,1]}\left(\sigma A\left(-\frac{1}{2\delta}\ln t\right)-\frac{c}{\delta}\left(1-t^{\frac12}\right)\right)>u\right\}} = P\left\{u^2(\tau_u^*-t_u)\le x \,\Big|\, \tau_u^*<1\right\},$$
where
$$\tau_u^* = \inf\left\{t\in[0,1]: \sigma A\left(-\frac{1}{2\delta}\ln t\right)-\frac{c}{\delta}\left(1-t^{\frac12}\right)>u\right\}.$$
The proof follows by Proposition 3.10 i).
5. Appendix
Proof of (11): Let $\xi(t)$, $t\in\mathbb{R}$, be a centered stationary Gaussian process with unit variance and correlation function r satisfying
$$1-r(t) \sim a|t|^\alpha, \quad t\to0, \qquad a>0, \ \alpha\in(0,2].$$
In view of Theorem 2.2, for $-\infty<x_1<x_2<\infty$ and $f\in C_0^*([x_1,x_2])$ we have
$$P\left\{\sup_{t\in[u^{-2/\alpha}x_1,\,u^{-2/\alpha}x_2]}\frac{\xi(t)}{1+u^{-2}f(u^{2/\alpha}t)}>u\right\} \sim \Psi(u)\,\mathcal{P}^{f}_{\alpha,a}[x_1,x_2], \quad u\to\infty,$$
and for any $y\in\mathbb{R}$
$$\begin{aligned}
P\left\{\sup_{t\in[u^{-2/\alpha}x_1,\,u^{-2/\alpha}x_2]}\frac{\xi(t)}{1+u^{-2}f(u^{2/\alpha}t)}>u\right\}
&= P\left\{\sup_{t\in[u^{-2/\alpha}(x_1-y),\,u^{-2/\alpha}(x_2-y)]}\frac{\xi(t+yu^{-2/\alpha})(1+u^{-2}f(y))}{1+u^{-2}f(y+u^{2/\alpha}t)}>u(1+u^{-2}f(y))\right\}\\
&\sim \Psi\left(u(1+u^{-2}f(y))\right)\,\mathcal{P}^{f_y-f(y)}_{\alpha,a}[x_1-y,x_2-y]\\
&\sim \Psi(u)\,\mathcal{P}^{f_y}_{\alpha,a}[x_1-y,x_2-y].
\end{aligned}$$
Let
$$Z_u(t) = \frac{\xi(t+yu^{-2/\alpha})(1+u^{-2}f(y))}{1+u^{-2}f(y+u^{2/\alpha}t)}, \quad t\in\left[u^{-2/\alpha}(x_1-y),\,u^{-2/\alpha}(x_2-y)\right],$$
and denote its variance function by $\sigma^2_{Z_u}(t)$. Then
$$\left(\frac{1}{\sigma_{Z_u}(t)}-1\right)u^2 = \left(\frac{1+u^{-2}f(y+u^{2/\alpha}t)}{1+u^{-2}f(y)}-1\right)u^2 = \frac{f(y+u^{2/\alpha}t)-f(y)}{1+u^{-2}f(y)},$$
i.e.,
$$\lim_{u\to\infty}\sup_{t\in[u^{-2/\alpha}(x_1-y),\,u^{-2/\alpha}(x_2-y)]}\left|\frac{\left(\frac{1}{\sigma_{Z_u}(t)}-1\right)u^2}{f(y+u^{2/\alpha}t)-f(y)}-1\right| = 0.$$
Consequently, we have
$$\mathcal{P}^{f}_{\alpha,a}[x_1,x_2] = \mathcal{P}^{f_y}_{\alpha,a}[x_1-y,x_2-y].$$
Further, letting $x_2\to\infty$ yields $\mathcal{P}^{f}_{\alpha,a}[x_1,\infty) = \mathcal{P}^{f_y}_{\alpha,a}[x_1-y,\infty)$. This completes the proof.
Proof of Example 3.4: We have $t_0=0$, $\gamma=1$, $g_m=0$. Then by Proposition 3.1 statement i)
$$P\left\{\max_{t\in[0,T]}(X(t)-ct)>u\right\} \sim \Psi(u)\begin{cases} c^{-1}a^{1/\alpha}u^{2/\alpha-1}H_\alpha, & \alpha\in(0,2),\\ \mathcal{P}^{ct}_{\alpha,a}[0,\infty), & \alpha=2.\end{cases}$$
Since for all u large
$$P\left\{u\tau_u\le x \,\Big|\, \tau_u\le T\right\} = \frac{P\left\{\sup_{t\in[0,\,u^{-1}x]}(X(t)-g(t))>u\right\}}{P\left\{\sup_{t\in[0,T]}(X(t)-g(t))>u\right\}},$$
then using Proposition 3.3, we obtain for $x\in(0,\infty)$
$$P\left\{u\tau_u\le x \,\Big|\, \tau_u\le T\right\} \sim \begin{cases}\dfrac{\int_0^x e^{-ct}\,dt}{\int_0^\infty e^{-ct}\,dt}, & \alpha\in(0,2),\\[2mm] \dfrac{\mathcal{P}^{ct}_{\alpha,a}[0,x]}{\mathcal{P}^{ct}_{\alpha,a}[0,\infty)}, & \alpha=2.\end{cases}$$
Proof of Example 3.5: We have that $X(t) = \frac{B_\alpha(t)}{\sqrt{Var(B_\alpha(t))}}$ is locally stationary with correlation function
$$r_X(t,t+h) = \frac{|t|^\alpha+|t+h|^\alpha-|h|^\alpha}{2|t(t+h)|^{\alpha/2}} = 1-\frac{1}{2t^\alpha}|h|^\alpha+o(|h|^\alpha), \quad h\to0,$$
for any $t>0$. Since $g(t) = c\sin\left(\frac{2\pi t}{T}\right)$, $t\in[T,(n+1)T]$, attains its maximum at $t_j = \frac{(4j+1)T}{4}$, $j\le n$, and
$$g(t) = c-\frac{2c\pi^2}{T^2}|t-t_j|^2(1+o(1)), \quad t\to t_j, \ j\le n,$$
the claim follows by applying Remarks 3.2 statement i).
Proof of Example 3.11: First note that the variance function of $X(t)$ is given by $\sigma^2(t) = t(1-t)$ and the correlation function is given by $r(t,s) = \sqrt{\frac{s(1-t)}{t(1-s)}}$, $0\le s<t\le1$.
Case 1) The proof of (22): Clearly, $m_u(t) := \frac{\sqrt{t(1-t)}}{1+ct/u}$ attains its maximum over $[0,1]$ at the unique point $t_u = \frac{u}{c+2u}\in(0,1)$, which converges to $t_0=\frac12$ as $u\to\infty$, and $m_u^* := m_u(t_u) = \frac{1}{2}\frac{1}{\sqrt{1+c/u}}$. Furthermore, we have
$$\frac{m_u^*}{m_u(t)}-1 = \frac{u+ct}{\sqrt{t(1-t)}}\,\frac{\sqrt{t_u(1-t_u)}}{u+ct_u}-1 = \frac{(u+ct)\sqrt{t_u(1-t_u)}-(u+ct_u)\sqrt{t(1-t)}}{\sqrt{t(1-t)}\,(u+ct_u)} = \frac{(u+ct)^2t_u(1-t_u)-(u+ct_u)^2t(1-t)}{\sqrt{t(1-t)}\,(u+ct_u)\left[(u+ct)\sqrt{t_u(1-t_u)}+(u+ct_u)\sqrt{t(1-t)}\right]}.$$
Setting $\Delta(u) = \left[-\frac{(\ln u)^q}{u},\frac{(\ln u)^q}{u}\right]$, so that $(t_u+\Delta(u))\subset[0,\frac12]$ for all u large, we have
$$(u+ct)^2t_u(1-t_u)-(u+ct_u)^2t(1-t) = u^2\left[(t_u-t_u^2)-(t-t^2)\right]+2cutt_u(t-t_u)+c^2tt_u(t-t_u) \qquad (76)$$
$$= (t-t_u)^2u(u+c) \qquad (77)$$
and
$$\frac{u^4}{2\left(u+\frac{c}{2}\right)^2}-u \le 2(u+ct)^2\,t(1-t) \le \frac12\left(u+\frac{c}{2}\right)^2$$
for all $t\in(t_u+\Delta(u))$. Then
$$\lim_{u\to\infty}\sup_{t\in\Delta(u),t\ne0}\left|\frac{m_u^*/m_u(t_u+t)-1}{2(ut)^2u^{-2}}-1\right| = \lim_{u\to\infty}\sup_{t\in\Delta(u),t\ne0}\left|\frac{m_u^*/m_u(t_u+t)-1}{2t^2}-1\right| = 0. \qquad (78)$$
Furthermore, since
$$r(t,s) = \sqrt{\frac{s(1-t)}{t(1-s)}} = 1+\frac{\sqrt{s(1-t)}-\sqrt{t(1-s)}}{\sqrt{t(1-s)}} = 1-\frac{t-s}{\sqrt{t(1-s)}\left(\sqrt{s(1-t)}+\sqrt{t(1-s)}\right)}$$
and
$$\frac12-\frac1u \le \sqrt{t(1-s)}\left(\sqrt{s(1-t)}+\sqrt{t(1-s)}\right) \le \frac12+\frac1u$$
for all $s<t$, $s,t\in(t_u+\Delta(u))$, we have
$$\lim_{u\to\infty}\sup_{t,s\in\Delta(u),\,t\ne s}\left|\frac{1-r(t_u+t,t_u+s)}{2|t-s|}-1\right| = 0.$$
Next, for some small $\theta\in(0,\frac12)$, we have that
$$E\left(\overline X(t)-\overline X(s)\right)^2 = 2(1-r(t,s)) \le \frac{|t-s|}{\left(\frac12-\theta\right)^2}$$
holds for all $s,t\in[\frac12-\theta,\frac12+\theta]$ (here $\overline X(t) = X(t)/\sigma(t)$ denotes the standardized process). Moreover, by (76), (77) and
$$2(u+ct)^2\,t(1-t) \le 2\left[u+c\left(\frac12+\theta\right)\right]^2\left(\frac12+\theta\right)^2$$
for all $t\in[\frac12-\theta,\frac12+\theta]$, we have that for any $t\in[\frac12-\theta,\frac12+\theta]\setminus(t_u+\Delta(u))$
$$\frac{m_u^*}{m_u(t)}-1 \ge \frac{(\ln u)^{2q}}{2\left[u+c\left(\frac12+\theta\right)\right]^2\left(\frac12+\theta\right)^2},$$
and further
$$\frac{m_u^*}{m_u(t)} \ge 1+C_1\frac{(\ln u)^q}{u^2}, \quad t\in\left[\frac12-\theta,\frac12+\theta\right]\setminus(t_u+\Delta(u)). \qquad (79)$$
Consequently, by Proposition 3.6
$$P\left\{\sup_{t\in[t_0-\theta,t_0+\theta]}(X(t)-ct)>u\right\} \sim 8H_1u\int_{-\infty}^{\infty}e^{-8t^2}\,dt\,\Psi\left(2\sqrt{cu+u^2}\right) \sim e^{-2(u^2+cu)}.$$
In addition, since $\sigma_\theta := \max_{t\in[0,1]\setminus[t_0-\theta,t_0+\theta]}\sigma(t) < \sigma(t_0) = \frac12$, by Borell-TIS inequality
$$P\left\{\sup_{t\in[0,1]\setminus[t_0-\theta,t_0+\theta]}(X(t)-ct)>u\right\} \le P\left\{\sup_{t\in[0,1]\setminus[t_0-\theta,t_0+\theta]}X(t)>u\right\} \le \exp\left(-\frac{\left(u-E\left\{\sup_{t\in[0,1]}X(t)\right\}\right)^2}{2\sigma_\theta^2}\right) = o\left(e^{-2(u^2+cu)}\right). \qquad (80)$$
Thus, by the fact that
$$P\left\{\sup_{t\in[0,1]}(X(t)-ct)>u\right\} \le P\left\{\sup_{t\in[t_0-\theta,t_0+\theta]}(X(t)-ct)>u\right\} + P\left\{\sup_{t\in[0,1]\setminus[t_0-\theta,t_0+\theta]}(X(t)-ct)>u\right\}$$
and
$$P\left\{\sup_{t\in[0,1]}(X(t)-ct)>u\right\} \ge P\left\{\sup_{t\in[t_0-\theta,t_0+\theta]}(X(t)-ct)>u\right\},$$
we conclude that
$$P\left\{\sup_{t\in[0,1]}(X(t)-ct)>u\right\} \sim e^{-2(u^2+cu)}.$$
For any $u>0$
$$P\left\{u\left(\tau_u-\frac{u}{c+2u}\right)\le x \,\Big|\, \tau_u\le1\right\} = \frac{P\left\{\sup_{t\in[0,\,t_u+u^{-1}x]}(X(t)-ct)>u\right\}}{P\left\{\sup_{t\in[0,1]}(X(t)-ct)>u\right\}}$$
and by Theorem 2.2
$$P\left\{\sup_{t\in\left[t_u-\frac{(\ln u)^q}{u},\ t_u+u^{-1}x\right]}(X(t)-ct)>u\right\} \sim 8H_1u\int_{-\infty}^{x}e^{-8t^2}\,dt\,\Psi\left(2\sqrt{cu+u^2}\right).$$
The above combined with (79) and (80) implies that as $u\to\infty$
$$P\left\{\sup_{t\in[0,\,t_u+u^{-1}x]}(X(t)-ct)>u\right\} \sim P\left\{\sup_{t\in\left[t_u-\frac{(\ln u)^q}{u},\ t_u+u^{-1}x\right]}(X(t)-ct)>u\right\} \sim 8H_1u\int_{-\infty}^{x}e^{-8t^2}\,dt\,\Psi\left(2\sqrt{cu+u^2}\right).$$
Consequently,
$$P\left\{u\left(\tau_u-\frac{u}{c+2u}\right)\le x \,\Big|\, \tau_u\le1\right\} \sim \frac{\int_{-\infty}^{x}e^{-8t^2}\,dt}{\int_{-\infty}^{\infty}e^{-8t^2}\,dt} = \Phi(4x), \quad x\in(-\infty,\infty).$$
Case 2) The proof of (23): We have $t_u = \frac{u}{c+2u}\in(0,\frac12)$, which converges to $t_0=\frac12$ as $u\to\infty$. Since
$$\frac12-t_u \sim \frac{c}{4u}, \quad u\to\infty,$$
by Proposition 3.6
$$P\left\{\sup_{t\in[0,1/2]}(X(t)-ct)>u\right\} \sim 8H_1u\int_{-\infty}^{c/4}e^{-8t^2}\,dt\,\Psi\left(2\sqrt{cu+u^2}\right) \sim \Phi(c)e^{-2(u^2+cu)}.$$
As for the proof of Case 1) we obtain further
$$P\left\{u\left(\tau_u-\frac{u}{c+2u}\right)\le x \,\Big|\, \tau_u\le\frac12\right\} \sim \frac{\int_{-\infty}^{x}e^{-8t^2}\,dt}{\int_{-\infty}^{c/4}e^{-8t^2}\,dt} = \Phi(4x)/\Phi(c), \quad x\in(-\infty,c/4].$$
Case 3) The proof of (24): We have that $\sigma(t)$ attains its maximum over $[0,1]$ at the unique point $t_0=\frac12$, which is also the unique maximum point of $\frac{c}{2}-c\left|t-\frac12\right|$, $t\in[0,1]$. Furthermore,
$$\sigma(t) = \sqrt{t(1-t)} \sim \frac12-\left(t-\frac12\right)^2, \quad t\to\frac12,$$
and
$$r(t,s) \sim 1-2|t-s|, \quad s,t\to\frac12.$$
By Proposition 3.9, as $u\to\infty$
$$P\left\{\sup_{t\in[0,1]}\left(X(t)+\frac{c}{2}-c\left|t-\frac12\right|\right)>u\right\} \sim 8H_1u\int_{-\infty}^{\infty}e^{-(8|t|^2+4c|t|)}\,dt\,\Psi(2u-c) \sim 2\Psi(c)e^{-2(u^2-cu)},$$
and in view of Proposition 3.10 ii)
$$P\left\{u\left(\tau_u-\frac12\right)\le x \,\Big|\, \tau_u\le1\right\} \sim \frac{\int_{-\infty}^{x}e^{-(8|t|^2+4c|t|)}\,dt}{\int_{-\infty}^{\infty}e^{-(8|t|^2+4c|t|)}\,dt}, \quad u\to\infty.$$
Acknowledgments
Thanks to Swiss National Science Foundation Grant no. 200021-166274. KD acknowledges partial support by NCN
Grant No 2015/17/B/ST1/01102 (2016-2019).
References
[1] K. Dȩbicki and T. Rolski, “A note on transient Gaussian fluid models,” Queueing Syst., vol. 41, no. 4, pp. 321–342, 2002.
[2] K. Dȩbicki, “Ruin probability for Gaussian integrated processes,” Stochastic Process. Appl., vol. 98, no. 1, pp. 151–174, 2002.
[3] A. B. Dieker, “Extremes of Gaussian processes over an infinite horizon,” Stochastic Process. Appl., vol. 115, no. 2, pp. 207–248, 2005.
[4] E. Hashorva, L. Ji, and V. I. Piterbarg, “On the supremum of γ-reflected processes with fractional Brownian motion as input,”
Stochastic Process. Appl., vol. 123, no. 11, pp. 4111–4127, 2013.
[5] K. Dȩbicki, E. Hashorva, and L. Ji, “Tail asymptotics of supremum of certain Gaussian processes over threshold dependent random intervals,” Extremes, vol. 17, no. 3, pp. 411–429, 2014.
[6] J. Pickands, III, “Maxima of stationary Gaussian processes,” Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, vol. 7, pp. 190–223,
1967.
[7] V. I. Piterbarg, “On the paper by J. Pickands “Upcrossing probabilities for stationary Gaussian processes”,” Vestnik Moskov. Univ.
Ser. I Mat. Meh., vol. 27, no. 5, pp. 25–30, 1972.
[8] G. Samorodnitsky, “Probability tails of Gaussian extrema,” Stochastic Process. Appl., vol. 38, no. 1, pp. 55–84, 1991.
[9] S. M. Berman, Sojourns and extremes of stochastic processes. The Wadsworth & Brooks/Cole Statistics/Probability Series, Pacific
Grove, CA: Wadsworth & Brooks/Cole Advanced Books & Software, 1992.
[10] V. I. Piterbarg, Asymptotic methods in the theory of Gaussian processes and fields, vol. 148 of Translations of Mathematical Monographs. Providence, RI: American Mathematical Society, 1996.
[11] J. Azaïs and M. Wschebor, Level sets and extrema of random processes and fields. Hoboken, NJ: John Wiley & Sons Inc., 2009.
[12] E. Hashorva and J. Hüsler, “Extremes of Gaussian processes with maximal variance near the boundary points,” Methodology and
Computing in Applied Probability, vol. 2, no. 3, pp. 255–269, 2000.
[13] K. Dȩbicki and P. Kisowski, “Asymptotics of supremum distribution of α(t)-locally stationary Gaussian processes,” Stochastic Process.
Appl., vol. 118, no. 11, pp. 2022–2037, 2008.
[14] K. Dȩbicki and K. Tabiś, “Extremes of the time-average of stationary Gaussian processes,” Stochastic Process. Appl., vol. 121, no. 9,
pp. 2049–2063, 2011.
[15] V. I. Piterbarg, Twenty Lectures About Gaussian Processes. London, New York: Atlantic Financial Press, 2015.
[16] D. Cheng and Y. Xiao, “The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments,” Annals Appl. Probab., 2016, in press.
[17] D. Cheng and A. Schwartzman, “Distribution of the height of local maxima of Gaussian random fields,” Extremes, vol. 18, no. 2,
pp. 213–240, 2015.
[18] M. Arendarczyk, “On the asymptotics of supremum distribution for some iterated processes,” Extremes, 2016, doi:10.1007/s10687-016-0272-2.
[19] Q. M. Shao, “Bounds and estimators of a basic constant in extreme value theory of Gaussian processes,” Statistica Sinica, vol. 6,
pp. 245–258, 1996.
[20] A. J. Harper, “Bounds on the suprema of Gaussian processes, and omega results for the sum of a random multiplicative function,”
Ann. Appl. Probab., vol. 23, no. 2, pp. 584–616, 2013.
[21] A. J. Harper, “Pickands’ constant $H_\alpha$ does not equal $1/\Gamma(1/\alpha)$, for small $\alpha$,” Bernoulli, accepted, 2015.
[22] K. Dȩbicki, E. Hashorva, and L. Ji, “Parisian ruin over a finite-time horizon,” Science China Mathematics, vol. 59, no. 3, pp. 557–572,
2016.
[23] Z. Michna, “Remarks on Pickands constant,” Under revision in Probability and Mathematical Statistics, 2017.
[24] L. Bai, K. Dȩbicki, E. Hashorva, and L. Luo, “On generalised Piterbarg constants,” Comp. Meth. Appl. Prob., doi:10.1007/s11009-016-9537-0, 2017.
[25] K. Dȩbicki, S. Engelke, and E. Hashorva, “Generalized Pickands constants and stationary max-stable processes,” http://arxiv.org/abs/1602.01613, 2016.
[26] V. I. Piterbarg and V. P. Prisjažnjuk, “Asymptotic behavior of the probability of a large excursion for a nonstationary Gaussian
process,” Teor. Verojatnost. i Mat. Statist., no. 18, pp. 121–134, 183, 1978.
[27] K. Dȩbicki, “A note on LDP for supremum of Gaussian processes over infinite horizon,” Statist. Probab. Lett., vol. 44, no. 3, pp. 211–219, 1999.
[28] V. I. Piterbarg and S. Stamatovich, “On maximum of Gaussian non-centered fields indexed on smooth manifolds,” in Asymptotic
methods in probability and statistics with applications (St. Petersburg, 1998), Stat. Ind. Technol., pp. 189–203, Boston, MA: Birkhäuser
Boston, 2001.
[29] E. Hashorva and L. Ji, “Piterbarg theorems for chi-processes with trend,” Extremes, vol. 18, no. 1, pp. 37–64, 2015.
[30] J. Hüsler and V. I. Piterbarg, “A limit theorem for the time of ruin in a Gaussian ruin problem,” Stochastic Process. Appl., vol. 118,
no. 11, pp. 2014–2021, 2008.
[31] J. Hüsler and V. I. Piterbarg, “On the ruin probability for physical fractional Brownian motion,” Stochastic Process. Appl., vol. 113,
no. 2, pp. 315–332, 2004.
[32] K. Dȩbicki, E. Hashorva, and L. Ji, “Gaussian risk model with financial constraints,” Scandinavian Actuarial Journal, vol. 2015, no. 6, pp. 469–481, 2015.
[33] E. Hashorva and L. Ji, “Approximation of passage times of γ-reflected processes with FBM input,” J. Appl. Probab., vol. 51, no. 3,
pp. 713–726, 2014.
[34] K. Dȩbicki, E. Hashorva, and L. Ji, “Parisian ruin of self-similar Gaussian risk processes,” J. Appl. Probab., vol. 52, pp. 688–702, 2015.
[35] K. Dȩbicki, E. Hashorva, and P. Liu, “Ruin probabilities and passage times of γ-reflected Gaussian process with stationary increments,”
http://arXiv.org/abs/1511.09234, 2015.
[36] P. Embrechts, C. Klüppelberg, and T. Mikosch, Modelling extremal events, vol. 33 of Applications of Mathematics (New York). Berlin:
Springer-Verlag, 1997.
[37] S. I. Resnick, Heavy-tail phenomena. Springer Series in Operations Research and Financial Engineering, New York: Springer, 2007.
Probabilistic and statistical modeling.
[38] P. Soulier, Some applications of regular variation in probability and statistics. Instituto Venezolano de Investigaciones Cientcas: XXII
ESCUELA VENEZOLANA DE MATEMATICAS, 2009.
[39] J. Pickands, III, “Upcrossing probabilities for stationary Gaussian processes,” Trans. Amer. Math. Soc., vol. 145, pp. 51–73, 1969.
[40] K. Dȩbicki and K. Kosiński, “On the infimum attained by the reflected fractional Brownian motion,” Extremes, vol. 17, no. 3,
pp. 431–446, 2014.
[41] A. B. Dieker and B. Yakir, “On asymptotic constants in the theory of Gaussian processes,” Bernoulli, vol. 20, no. 3, pp. 1600–1619,
2014.
[42] K. Dȩbicki, E. Hashorva, L. Ji, and K. Tabiś, “Extremes of vector-valued Gaussian processes: Exact asymptotics,” Stochastic Process.
Appl., vol. 125, no. 11, pp. 4039–4065, 2015.
[43] A. B. Dieker and T. Mikosch, “Exact simulation of Brown-Resnick random fields at a finite number of locations,” Extremes, vol. 18,
pp. 301–314, 2015.
[44] S. M. Berman, “Sojourns and extremes of Gaussian processes,” Ann. Probab., vol. 2, pp. 999–1026, 1974.
[45] J. Hüsler, “Extreme values and high boundary crossings of locally stationary Gaussian processes,” Ann. Probab., vol. 18, no. 3,
pp. 1141–1158, 1990.
[46] D. Cheng, “Excursion probabilities of isotropic and locally isotropic Gaussian random fields on manifolds,” Extremes, 2016,
10.1007/s10687-016-0271-3.
[47] E. Hashorva and L. Ji, “Extremes of α(t)-locally stationary Gaussian random fields,” Trans. Amer. Math. Soc., vol. 368, no. 1,
pp. 1–26, 2016.
[48] L. Bai, “Extremes of α(t)-locally stationary Gaussian processes with non-constant variances,” J. Math. Anal. Appl., vol. 446, no. 1,
pp. 248–263, 2017.
[49] W. Bischoff, F. Miller, E. Hashorva, and J. Hüsler, “Asymptotics of a boundary crossing probability of a Brownian bridge with general
trend,” Methodology and Computing in Applied Probability, vol. 5, pp. 271–287, 2003.
[50] J. M. Harrison, “Ruin problems with compounding assets,” Stochastic Process. Appl., vol. 5, pp. 67–79, 1977.
[51] D. C. Emanuel, J. M. Harrison, and A. J. Taylor, “A diffusion approximation for the ruin function of a risk process with compounding
assets,” Scandinavian Actuarial Journal, vol. 4, pp. 240–247, 1975.
[52] K. Dȩbicki, E. Hashorva, and P. Liu, “Uniform tail approximation of homogenous functionals of Gaussian fields,” https://arxiv.org/abs/1607.01430, 2016.
[53] N. H. Bingham, C. M. Goldie, and J. L. Teugels, Regular variation, vol. 27. Cambridge university press, 1989.
[54] J. Hüsler and V. I. Piterbarg, “Extremes of a certain class of Gaussian processes,” Stochastic Process. Appl., vol. 83, no. 2, pp. 257–271,
1999.
[55] R. Adler and J. Taylor, Random fields and geometry. Springer Monographs in Mathematics, New York: Springer, 2007.
[56] V. I. Piterbarg, “High extrema of Gaussian chaos processes,” Extremes, vol. 19, no. 2, pp. 253–272, 2016.
[57] P. A. Meyer, Probability and Potentials. MA, Waltham: Blaisdell, 1966.
Long Bai, Department of Actuarial Science, University of Lausanne UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]
Krzysztof Dȩbicki, Mathematical Institute, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland
E-mail address: [email protected]
Enkelejd Hashorva, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]
Lanpeng Ji, Department of Actuarial Science, University of Lausanne, UNIL-Dorigny, 1015 Lausanne, Switzerland
E-mail address: [email protected]
Selection from heaps, row-sorted matrices and X + Y
using soft heaps
arXiv:1802.07041v1 [] 20 Feb 2018
Haim Kaplan
∗
László Kozma
†
Or Zamir
‡
Uri Zwick
§
Abstract
We use soft heaps to obtain simpler optimal algorithms for selecting the k-th smallest item,
and the set of k smallest items, from a heap-ordered tree, from a collection of sorted lists,
and from X + Y , where X and Y are two unsorted sets. Our results match, and in some
ways extend and improve, classical results of Frederickson (1993) and Frederickson and Johnson
(1982). In particular, for selecting the k-th smallest item, or the set of k smallest items, from a
collection of m sorted lists we obtain a new optimal “output-sensitive” algorithm that performs
only $O(m + \sum_{i=1}^{m}\log(k_i+1))$ comparisons, where $k_i$ is the number of items of the i-th list that
belong to the overall set of k smallest items.
1 Introduction
The input to the standard selection problem is a set of n items, drawn from a totally ordered
domain, and an integer 1 ≤ k ≤ n. The goal is to return the k-th smallest item in the set. A
classical result of Blum et al. [1] says that the selection problem can be solved deterministically in
O(n) time, i.e., faster than sorting the set. The number of comparisons required for selection was
reduced by Schönhage et al. [26] to 3n, and by Dor and Zwick [8, 9] to about 2.95n.
In the generalized selection problem, we are also given a partial order P known to hold for the set
of n input items. The goal is again to return the k-th smallest item. The corresponding generalized
sorting problem was extensively studied. It was shown by Kahn and Saks [20] that the problem
can be solved using only O(log e(P )) comparisons, where e(P ) is the number of linear extensions
of P . Thus, the information-theoretic lower bound is tight for generalized sorting. The algorithm
of Kahn and Saks [20] performs only O(log e(P )) comparisons, but may spend much more time on
deciding which comparisons to perform. Kahn and Kim [19] and Cardinal et al. [4] gave algorithms
that perform only O(log e(P )) comparisons and run in polynomial time.
A moment’s reflection shows that an algorithm that finds the k-th smallest item of a set, must also
identify the set of k smallest items of the set.1 Given a partial order P , let sk (P ) be the number of
∗ Blavatnik School of Computer Science, Tel Aviv University, Israel. Research supported by the Israel Science Foundation grant no. 1841-14 and by a grant from the Blavatnik Computer Science Fund. E-mail: [email protected].
† Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands. E-mail: [email protected].
‡ Blavatnik School of Computer Science, Tel Aviv University, Israel. E-mail: [email protected].
§ Blavatnik School of Computer Science, Tel Aviv University, Israel. E-mail: [email protected].
1
The information gathered by a comparison-based algorithm corresponds to a partial order which can be represented by a DAG (Directed Acyclic Graph). Every topological sort of the DAG corresponds to a total order of the
items consistent with the partial order. Suppose that e is claimed to be the k-th smallest item and suppose, for
the sake of contradiction, that the set I of the items that are incomparable with e is non-empty. Then, there is a
topological sort in which e is before all the items of I, and another topological sort in which e is after all the items
of I, contradicting the fact that e is the k-th smallest item in all total orders consistent with the partial order.
subsets of size k that may possibly be the set of k smallest items in P . Then, log2 sk (P ) is clearly
a lower bound on the number of comparisons required to select the k-th smallest item, or the set
of k smallest items. Unlike sorting, this information-theoretic lower bound for selection may be
extremely weak. For example, the information-theoretic lower bound for selecting the minimum
is only log2 n, while n − 1 comparisons are clearly needed (and are sufficient). To date, there is
no characterization of pairs (P, k) for which the information-theoretic lower bound for selection is
tight, nor an alternative general technique to obtain a tight lower bound.
Frederickson and Johnson [12, 13, 14] and Frederickson [11] studied the generalized selection problem for some interesting specific partial orders. Frederickson [11] considered the case in which the
input items are items of a binary min-heap, i.e., they are arranged in a binary tree, with each item
smaller than its two children. Frederickson [11] gave a complicated algorithm that finds the k-th
smallest item using only O(k) comparisons, matching the information-theoretic lower bound for
this case. (Each subtree of size k of the heap, containing the root, can correspond to the set of k
smallest items, and there are $\frac{1}{k+1}\binom{2k}{k}$, the k-th Catalan number, such subtrees.)
Frederickson and Johnson [12, 13, 14] considered three other interesting special cases. (i) The input
items are in a collection of sorted lists, or equivalently they reside in a row-sorted matrix; (ii) The
input items reside in a collection of matrices, where each matrix is both row- and column-sorted;
(iii) The input items are X + Y , where X and Y are unsorted sets of items.2 For each of these
cases, they present a selection algorithm that matches the information-theoretic lower bound.
We note in passing that sorting X + Y is a well studied problem. Fredman [15] showed that
X + Y , where |X| = |Y | = n, can be sorted using only O(n2 ) comparisons, but it is not known
how to do it in O(n2 ) time. (An intriguing situation!) Fredman [15] also showed that Ω(n2 )
comparisons are required, if only comparisons between items in X + Y , i.e., comparisons of the
form $x_i + y_j \le x_k + y_\ell$, are allowed. Lambert [24] and Steiger and Streinu [27] gave algorithms that
sort X + Y in $O(n^2 \log n)$ time using only $O(n^2)$ comparisons. Kane et al. [21], in a breakthrough
result, have shown recently that X + Y can be sorted using only $O(n \log^2 n)$ comparisons of the
form $(x_i + y_j) - (x_{i'} + y_{j'}) \le (x_k + y_\ell) - (x_{k'} + y_{\ell'})$, but it is again not known how to implement
their algorithm efficiently.
The median of X + Y , on the other hand, can be found in O(n log n) time, and O(n log n) comparisons of items in X + Y , as was already shown by Johnson and Mizoguchi [18] and Johnson and
Kashdan [17]. The selection problem from X + Y becomes more challenging when k = o(n2 ).
Frederickson [11] gives two applications for selection from a binary min-heap. The first is in
an algorithm for listing the k smallest spanning trees of an input graph. The second is a certain
resource allocation problem. Eppstein [10] uses the heap selection algorithm in his O(m+n log n+k)
algorithm for generating the k shortest paths between a pair of vertices in a digraph. As pointed
out by Frederickson and Johnson [13], selection from X + Y can be used to compute the HodgesLehmann [16] estimator in statistics. Selection from a matrix with sorted rows solves the problem
of
“optimum discrete distribution
of effort” with concave functions, i.e., the problem of maximizing
Pm
Pm
f
(k
)
subject
to
k
=
k,
where the fi ’s are concave and the ki ’s are non-negative integers.
i=1 i i
i=1 i
(See Koopman [23] and other references in [13].) Selection from a matrix with sorted rows is also
used by Brodal et al. [3] and Bremner et al. [2].
The O(k) heap selection algorithm of Frederickson [11] is fairly complicated. The naïve algorithm
for the problem runs in O(k log k) time. Frederickson first improves this to O(k log log k), then to
$O(k\,3^{\log^* k})$, then to $O(k\,2^{\log^* k})$, and finally to O(k).
Our first result is a very simple O(k) heap selection algorithm obtained by running the naïve
O(k log k) algorithm using an auxiliary soft heap instead of a standard heap. Soft heaps, discussed
2 By X + Y we mean the set of pairwise sums {x + y | x ∈ X, y ∈ Y}.
below, are fairly simple data structures whose implementation is not much more complicated than
the implementation of standard heaps. Our overall O(k) algorithm is thus simple and easy to
implement and comprehend.
Relying on our simple O(k) heap selection algorithm, we obtain simplified algorithms for selection
from row-sorted matrices and from X +Y . Selecting the k-th item from a row-sorted matrix with m
k
rows using our algorithms requires O(m + k) time, if k ≤ 2m, and O(m log m
) time, if k ≥ 2m,
matching the optimal results of Frederickson and Johnson [12]. Furthermore,
we obtain a new
P
optimal “output-sensitive” algorithm whose running time is O(m + m
log(k
+
1)), where ki is
i
i=1
the number of items of the i-th row that belong to the set of k smallest items in the matrix. We
also use our simple O(k) heap selection algorithm to obtain simple optimal algorithms for selection
from X + Y .
Soft heaps are “approximate” priority queues introduced by Chazelle [6]. They form a major
building block in his deterministic O(mα(m, n))-time algorithm for finding minimum spanning trees
[5], which is currently the fastest known deterministic algorithm for the problem. Chazelle [6] also
shows that soft heaps can be used to obtain a simple linear time (standard) selection algorithm. (See
the next section.) Pettie and Ramachandran [25] use soft heaps to obtain an optimal deterministic
minimum spanning algorithm with a yet unknown running time. A simplified implementation of
soft heaps is given in Kaplan et al. [22].
All algorithms considered in the paper are comparison-based, i.e., the only operations they perform
on the input items are pairwise comparisons. In the selection from X + Y problem, the algorithms
make pairwise comparisons in X, in Y and in X + Y . The number of comparisons performed by
the algorithms presented in this paper dominates the total running time of the algorithms.
The rest of the paper is organized as follows. In Section 2 we review the definition of soft heaps. In
Section 3 we describe our heap selection algorithms. In Section 3.1 we describe our basic algorithm
for selection from binary min-heaps. In Sections 3.2 and 3.3 we extend the algorithm to d-ary
heaps and then to general heap-ordered trees and forests. In Section 4 we describe our selection
algorithms from row-sorted matrices. In Section 4.1 we describe a simple O(m + k) algorithm
which is optimal if k = O(m). In Section 4.2 we build on the O(m + k) algorithm to obtain an
k
optimal O(m log m
) algorithm, for k ≥ 2m. In Sections 4.3 and 4.4 we obtain new results
that were
Pm
not obtained by Frederickson and Johnson [12]. In Section 4.3 we obtain an O(m + i=1 log ni )
algorithm, wherePni ≥ 1 is the length of the i-th row of the matrix. In Section 4.4 we obtain
the new O(m + m
i=1 log(ki + 1)) optimal output-sensitive algorithm. In Section 5 we present our
selection algorithms from X + Y . In Section 5.1 we give a simple O(m + n + k) algorithm, where
k
|X| = m, |Y | = n. In Section 5.2 we give a simple O(m log m
) algorithm, for m ≥ n and k ≥ 6m.
We conclude in Section 6 with some remarks and open problems.
2 Soft heaps
Soft heaps, invented by Chazelle [6], support the following operations:
soft-heap(ε): Create and return a new, empty soft heap with error parameter ε.
insert(Q, e): Insert item e into soft heap Q.
meld(Q1 , Q2 ): Return a soft heap containing all items in heaps Q1 and Q2 , destroying Q1 and Q2 .
extract-min(Q): Delete from the soft heap and return an item of minimum key in heap Q.
In Chazelle [6], extract-min operations are broken into find-min and delete operations. We only
need combined extract-min operations. We also do not need meld operations in this paper.
The main difference between soft heaps and regular heaps is that soft heaps are allowed to increase
the keys of some of the items in the heap by an arbitrary amount. Such items are said to become
corrupt. The soft heap implementation chooses which items to corrupt, and by how much to
increase their keys. The only constraint is that for a certain error parameter 0 ≤ ε < 1, the
number of corrupt items in the heap is at most εI, where I is the number of insertions performed
so far. The ability to corrupt items allows the implementation of soft heaps to use what Chazelle [6]
calls the “data structures equivalent of car pooling” to reduce the amortized time per operation
to O(log(1/ε)), which is the optimal possible dependency on ε. In the implementation of Kaplan et
al. [22], extract-min operations take O(log(1/ε)) amortized time, while all other operations take O(1)
amortized time. (In the implementation of Chazelle [6], insert operations take O(log(1/ε)) amortized
time while the other operations take O(1) time.)
An extract-min operation returns an item whose current, possibly corrupt, key is the smallest in
the heap. Ties are broken arbitrarily. (Soft heaps usually give many items the same corrupt key,
even if initially all keys are distinct.) Each item e thus has two keys associated with it: its original
key e.key, and its current key in the soft heap e.key′, where e.key ≤ e.key′. If e.key < e.key′,
then e is corrupt. The current key of an item may increase several times.
At first sight, it may seem that the guarantees provided by soft heaps are extremely weak. The only
bound available is on the number of corrupt items currently in the heap. In particular, all items
extracted from the heap may be corrupt. Nonetheless, soft heaps prove to be an extremely useful
data structure. In particular, they play a key role in the fastest known deterministic algorithm of
Chazelle [5] for finding minimum spanning trees.
Soft heaps can also be used, as shown by Chazelle [6], to select an approximate median of n items.
Initialize a soft heap with some error parameter ε < 1/2. Insert the n items into the soft heap and
then perform (1 − ε)n/2 extract-min operations. Find the maximum item e, with respect to the
original keys, among the extracted items. The rank of e is between (1 − ε)n/2 and (1 + ε)n/2. The
rank is at least (1 − ε)n/2 as e is the largest among (1 − ε)n/2 items. The rank is at most (1 + ε)n/2 as
the items remaining in the soft heap that are smaller than e must be corrupt, so there are at most
εn such items. For, say, ε = 1/4, the running time of the algorithm is O(n).
Using a linear time approximate median algorithm, we can easily obtain a linear time algorithm for
selecting the k-th smallest item. We first compute the true rank r of the approximate median e.
If r = k we are done. If r > k, we throw away all items larger than e. Otherwise, we throw away
all items smaller than e and replace k by k − r. We then continue recursively. In O(n) time, we
reduced n to at most (1/2 + ε)n, so the total running time is O(n).
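The approximate-median and selection procedures just described can be sketched as follows. This is only an illustration (the function names are ours), with an ordinary binary heap standing in for a soft heap, so the running time here is O(n log n) rather than O(n); with a genuine soft heap and ε = 1/4 the same structure runs in linear time.

import heapq

def approx_median(items, eps=0.25):
    """Insert all items, extract (1-eps)*n/2 of them, return the largest extracted
    original key.  With a real soft heap its rank lies in [(1-eps)n/2, (1+eps)n/2]."""
    heap = list(items)
    heapq.heapify(heap)
    m = max(1, int((1 - eps) * len(items) / 2))
    extracted = [heapq.heappop(heap) for _ in range(m)]
    return max(extracted)

def select_kth(items, k, eps=0.25):
    """Selection built on approx_median, as described in the text (assumes distinct keys):
    compute the true rank r of the approximate median, keep the relevant side, repeat."""
    items = list(items)
    while True:
        e = approx_median(items, eps)
        smaller = [x for x in items if x < e]
        r = len(smaller) + 1                  # true rank of e
        if r == k:
            return e
        if r > k:
            items = smaller                   # the k-th smallest is below e
        else:
            items = [x for x in items if x > e]
            k -= r                            # discard e and everything below it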
In this paper, we show the usefulness of soft heaps in solving generalized selection problems. We
obtain simpler algorithms than those known before, and some results that were not known before.
In Chazelle [6] and Kaplan et al. [22], soft heaps may corrupt items while performing any type of
operation. It is easy, however, to slightly change the implementation of [22] such that corruptions
only occur following extract-min operations. In particular, insert operations do not cause corruption, and an extract-min operation returns an item with a smallest current key at the beginning
of the operation. These assumptions simplify algorithms that use soft heaps, and further simplify
their analysis. The changes needed in the implementation of soft heaps to meet these assumptions
are minimal. The operations insert (and meld) are simply implemented in a lazy way. The implementation of [22] already has the property that extract-min operations cause corruptions only
after extracting an item with minimum current key.
We assume that an extract-min operation returns a pair (e, C), where e is the extracted item,
and C is a list of items that became corrupt after the extraction of e, i.e., items that were not
corrupt before the operation, but are corrupt after it. We also assume that e.corrupt is a bit that
says whether e is corrupt. (Note that e.corrupt is simply a shorthand for e.key < e.key 0 .) It is
Heap-Select(r):
  S ← ∅
  Q ← heap()
  insert(Q, r)
  for i ← 1 to k do
    e ← extract-min(Q)
    append(S, e)
    insert(Q, e.left)
    insert(Q, e.right)
  return S

Figure 1: Extracting the k smallest items from a binary min-heap with root r using a standard heap.

Soft-Select(r):
  S ← ∅
  Q ← soft-heap(1/4)
  insert(Q, r)
  append(S, r)
  for i ← 1 to k − 1 do
    (e, C) ← extract-min(Q)
    if not e.corrupt then C ← C ∪ {e}
    for e ∈ C do
      insert(Q, e.left)
      insert(Q, e.right)
      append(S, e.left)
      append(S, e.right)
  return select(S, k)

Figure 2: Extracting the k smallest items from a binary min-heap with root r using a soft heap.
again easy to change the implementation of [22] so that extract-min operations return a list C of
newly corrupt items, without affecting the amortized running times of the various operations. (In
particular, the amortized running time of an extract-min operation is still O(log(1/ε)), independent
of the length of C. As each item becomes corrupt only once, it is easy to charge the cost of adding
an item to C to its insertion into the heap.)
We stress that the assumptions we make on soft heaps in this paper can be met by minor and
straightforward modifications of the implementation of Kaplan et al. [22], as sketched above. No
complexities are hidden here. We further believe that due to their usefulness, these assumptions
will become the standard assumptions regarding soft heaps.
3 Selection from heap-ordered trees
In Section 3.1 we present our simple, soft heap-based, O(k) algorithm for selecting the k-th smallest
item, and the set of k smallest items from a binary min-heap. This algorithm is the cornerstone
of this paper. For simplicity, we assume throughout this section that the input heap is infinite. In
particular, each item e in the input heap has two children e.left and e.right. (A non-existent child
is represented by a dummy item with key +∞.) In Section 3.2 we adapt the algorithm to work for
d-ary heaps, for d ≥ 3, using “on-the-fly ternarization via heapification”. In Section 3.3 we extend
the algorithm to work on any heap-ordered tree or forest. The results of Section 3.3 are new.
3.1 Selection from binary heaps
The naı̈ve algorithm for selection from a binary min-heap is given in Figure 1. The root r of the
input heap is inserted into an auxiliary heap (priority queue). The minimal item e is extracted
from the heap and appended to a list S. The two children of e, if they exist, are inserted into the
heap. This operation is repeated k times. After k iterations, the items in S are the k smallest items
in the input heap, in sorted order. Overall, 2k + 1 items are inserted into the heap and k items are
Figure 3: Types of items in the input heap. White nodes belong to A, i.e., were not inserted yet
into the soft heap Q; black nodes belong to the barrier B; gray nodes belong to C, i.e., are corrupt;
striped nodes belong to D, i.e., were already deleted.
extracted, so the total running time is O(k log k), which is optimal if the k smallest items are to be
reported in sorted order.
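A runnable version of the naïve procedure of Figure 1, assuming the input heap is given by a small pointer-based Node class (our own convention, not part of the paper), is the following.

import heapq
import itertools

class Node:
    """Node of a binary min-heap in pointer form (node.key <= key of each child)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def heap_select_naive(root, k):
    """Return the k smallest keys of the min-heap rooted at root, in sorted order,
    in O(k log k) time."""
    tick = itertools.count()                      # tie-breaker for equal keys
    out, pq = [], [(root.key, next(tick), root)]
    while len(out) < k and pq:
        key, _, node = heapq.heappop(pq)
        out.append(key)
        for child in (node.left, node.right):
            if child is not None:
                heapq.heappush(pq, (child.key, next(tick), child))
    return out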
Frederickson [11] devised a very complicated algorithm that outputs the k smallest items, not
necessarily in sorted order, in only O(k) time, matching the information-theoretic lower bound. In
Figure 2 we give our very simple algorithm Soft-Select(r) for the same task, which also runs in
optimal O(k) time and performs only O(k) comparisons. Our algorithm is a simple modification
of the naı̈ve algorithm of Figure 1 with the auxiliary heap replaced by a soft heap. The resulting
algorithm is much simpler than the algorithm of Frederickson [11].
Algorithm Soft-Select(r) begins by initializing a soft heap Q with error parameter ε = 1/4 and by
inserting the root r of the input heap into it. Items inserted into the soft heap Q are also inserted
into a list S. The algorithm then performs k − 1 iterations. In each iteration, the operation
(e, C) ← extract-min extracts an item e with the smallest (possibly corrupt) key currently in Q,
and also returns the set of items C that become corrupt as a result of the removal of e from Q. If e
is not corrupt, then it is added to C. Now, for each item e ∈ C, we insert its two children e.left
and e.right into the soft heap Q and the list S.
Lemma 3.1 below claims that Soft-Select(r) inserts the k smallest items of the input heap into
the soft heap Q. Lemma 3.2 claims that, overall, only O(k) items are inserted into Q, and hence
into S. Thus, the k smallest items in the input heap can be found by selecting the k smallest items
in the list S using a standard selection algorithm.
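The following Python sketch mirrors Figure 2. The soft heap is again replaced by an ordinary binary heap, i.e. by a soft heap that never corrupts, so the corruption list is always empty here; the loop structure nevertheless shows where a genuine soft heap with ε = 1/4 would plug in. The Node class and all names are ours.

import heapq
import itertools

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def soft_select(root, k):
    """Sketch of Soft-Select for a binary min-heap given by Node pointers.
    Returns the k smallest keys (the final selection step is done by sorting here)."""
    tick = itertools.count()
    S = [root]                                   # every item ever inserted into Q
    Q = [(root.key, next(tick), root)]           # stand-in soft heap
    for _ in range(k - 1):
        _, _, e = heapq.heappop(Q)               # extract-min
        newly_corrupt = []                       # with a real soft heap: items corrupted now
        for f in newly_corrupt + [e]:            # e is not corrupt here, so it joins C
            for child in (f.left, f.right):
                if child is not None:
                    heapq.heappush(Q, (child.key, next(tick), child))
                    S.append(child)
    return sorted(node.key for node in S)[:k]    # stand-in for linear-time select(S, k)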
Lemma 3.1 Algorithm Soft-Select(r) inserts the k smallest items from the input binary minheap into the soft heap Q. (Some of them may subsequently be extracted from the heap.)
Proof: At the beginning of an iteration of algorithm Soft-Select, let A be the set of items of the
input binary heap that were not yet inserted into the soft heap Q; let B be the set of items that
were inserted, not yet removed and are not corrupt; let C be the set of items that were inserted,
not yet removed, and are corrupt; let D be the set of items that were inserted and already deleted
from Q. We prove below, by easy induction, the following two invariants:
(a) All strict ancestors of items in B are in C ∪ D.
(b) Each item in A has an ancestor in B.
Thus, the items in B form a barrier that separates the items of A, i.e., items that were not inserted
yet into the heap, from the items of C ∪ D, i.e., items that were inserted and are either corrupt or
were already removed from the soft heap Q. For an example, see Figure 3.
Invariants (a) and (b) clearly hold at the beginning of the first iteration, when B = {r} and the
other sets are empty. Assume that (a) and (b) hold at the beginning of some iteration. Each
iteration removes an item from the soft heap. The item removed is either a corrupt item from C, or
an item (in fact the smallest item) on the barrier B. Following the extraction, some items on the
barrier B become corrupt and move to C. The barrier is ‘mended’ by inserting to Q the children
of items on B that were extracted or became corrupt. By our simplifying assumption, insertions
do not corrupt items, so the newly inserted items belong to B and are thus part of the new barrier,
reestablishing (a) and (b).
We now make the following two additional claims:
(c) The item extracted at each iteration is smaller than or equal to the smallest item on the
barrier. (With respect to the original keys.)
(d) The smallest item on the barrier cannot decrease.
Claim (c) follows immediately from the definition of an extract-min operation and our assumption
that corruption occurs only after an extraction. All the items on the barrier, and in particular the
smallest item e on the barrier, are in the soft heap and are not corrupt. Thus, the extracted item
is either e, or a corrupt item f whose corrupt key is still smaller than e. As corruption can only
increase keys, we have f < e.
Claim (d) clearly holds as items on the barrier at the end of an iteration were either on the barrier
at the beginning of the iteration, or are children of items that were on the barrier at the beginning
of the iteration.
Consider now the smallest item e on the barrier after k − 1 iterations. As all extracted items are
smaller than it, the rank of e is at least k. Furthermore, all items smaller than e must be in C ∪ D,
i.e., inserted at some stage into the heap. Indeed, let f be an item of A, i.e., an item not inserted
into Q. By invariant (b), f has an ancestor f 0 on the barrier. By heap order and the assumption
that e is the smallest item on the barrier we indeed get e ≤ f 0 < f . Thus, the smallest k items
were indeed inserted into the soft heap as claimed.
2
The proof of Lemma 3.1 relies on our assumption that corruptions in the soft heap occur only
after extract-min operations. A slight change in the algorithm is needed if insert operations may
cause corruptions; we need to repeatedly add children of newly corrupt items until no new items
become corrupt. (Lemma 3.2 below shows that this process must end if ε < 21 . The process may not
end if ε ≥ 21 .) The algorithm, without any change, remains correct, and in particular Lemma 3.1
holds, if extract-min operations are allowed to corrupt items before extracting an item of minimum
(corrupt) key. The proof, however, becomes more complicated. (Claim (c), for example, does not
hold in that case.)
Lemma 3.2 Algorithm Soft-Select(r) inserts only O(k) items into the soft heap Q.
Proof: Let I be the number of insertions made by Soft-Select(r), and let C be the number of
items that become corrupt during the running of the algorithm. (Note that Soft-Select(r) clearly
terminates.) Let ε(= 41 ) be the error parameter of the soft heap. We have I < 2k + 2C, as each
inserted item is either the root r, or a child of an item extracted during one of the k − 1 iterations
of the algorithm, and there are at most 2k − 1 such insertions, or a child of a corrupt item, and
there are exactly 2C such insertions. We also have C < k + εI, as by the definition of soft heaps,
at the end of the process at most εI items in the soft heap may be corrupt, and as only k − 1
(possibly corrupt) items were removed from the soft heap. Combining these two inequalities we get
C < k + ε(2k + 2C), and hence (1 − 2ε)C < (1 + 2ε)k. Thus, if ε < 1/2 we get
$$C < \frac{1+2\varepsilon}{1-2\varepsilon}\,k\,, \qquad I < 2\left(1+\frac{1+2\varepsilon}{1-2\varepsilon}\right)k\,.$$
The number of insertions I is therefore O(k), as claimed. (For ε = 1/4, I < 8k.)
2
Combining the two lemmas we easily get:
Theorem 3.3 Algorithm Soft-Select(r) selects the k smallest items of a binary min-heap in O(k)
time.
Proof: The correctness of the algorithm follows from Lemmas 3.1 and 3.2. Lemma 3.2 also implies
that only O(k) operations are performed on the soft heap. As ε = 1/4, each operation takes O(1)
amortized time. The total running time, and the number of comparisons, performed by the loop of
Soft-Select(r) is thus O(k). As the size of S is O(k), the selection of the smallest k items from S
also takes only O(k) time.
2
3.2 Selection from d-ary heaps
Frederickson [11] claims, in the last sentence of his paper, that his algorithm for binary min-heaps
can be modified to yield an optimal O(dk) algorithm for d-ary min-heaps, for any d ≥ 2, but no
details are given. (In a d-ary heap, each node has (at most) d children.)
We present two simple O(dk) algorithms for selecting the k smallest items from a d-ary min-heap.
The first is a simple modification of the algorithm for the binary case. The second in a simple
reduction from the d-ary case to the binary case.
Algorithm Soft-Select(r) of Figure 2 can be easily adapted to work on d-ary heaps. We simply
insert the d children of an extracted item, or an item that becomes corrupt, into the soft heap. If
we again let I be the number of items inserted into the soft heap, and C be the number of items
that become corrupt, we get I < d(k + C) and C < k + εI, and hence
$$C < \frac{1+d\varepsilon}{1-d\varepsilon}\,k\,, \qquad I < d\left(1+\frac{1+d\varepsilon}{1-d\varepsilon}\right)k\,,$$
provided that ε < 1/d, e.g., ε = 1/(2d). The algorithm then performs O(dk) insert operations, each
with an amortized cost of O(1), and k − 1 extract-min operations, each with an amortized cost of
O(log(1/ε)) = O(log d). The total running time is therefore O(dk). (Note that it is important here to
use the soft heap implementation of [22], with an O(1) amortized cost of insert.)
An alternative O(dk) algorithm for d-ary heaps, for any d ≥ 2, can be obtained by a simple reduction
from d-ary heaps to 3-ary (or binary) heaps using a process that we call “on-the-fly ternarization
via heapification”. We use the well-known fact that an array of d items can be heapified, i.e.,
converted into a binary heap, in O(d) time. (See Williams [28] or Cormen et al. [7].) We describe
this alternative approach because we think it is interesting, and because we use it in the next section
to obtain an algorithm for general heap-ordered trees, i.e., trees in which different nodes may have
different degrees, and the degrees of the nodes are not necessarily bounded by a constant.
In a d-ary heap, each item e has (up to) d children e.child[1], . . . , e.child[d]. We construct a ternary
heap on the same set of items in the following way. We heapify the d children of e, i.e., construct
Figure 4: On-the-fly ternarization of a 7-ary heap. Thin lines represent the original 7-ary heap.
Bold arrows represent new left and right children. Dashed arrows represent new middle children.
a binary heap whose items are these d children. This gives each child f of e two new children
f.left and f.right. (Some of these new children are null.) We let e.middle be the root of the heap
composed of the children of e. Overall, this gives each item e in the original heap three new children
e.left, e.middle and e.right, some of which may be null. Note that e gets its new children e.left
and e.right when it and its siblings are heapified. (The names left, middle and right are, of course,
arbitrary.) For an example, see Figure 4.
This heapification process can be carried out on-the-fly while running Soft-Select(r) on the resulting ternary heap. The algorithm starts by inserting the root of the d-ary heap, which is also the
root of its ternarized version, into the soft heap. When an item e is extracted from the soft heap, or
becomes corrupt, we do not immediately insert its d original children into the soft heap. Instead, we
heapify its d children, in O(d) time. This assigns e its middle child e.middle. Item e already has its
left and right children e.left and e.right defined. The three new children e.left, e.middle and e.right
are now inserted into the soft heap. We call the resulting algorithm Soft-Select-Heapify(r).
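A sketch of the lazy heapification step is given below, assuming the d-ary heap is given by nodes that store their list of children (our own representation). The function is invoked only when a node is extracted from the soft heap or becomes corrupt, exactly as described above.

class Node:
    """Node of the original d-ary heap; after ternarization it also carries
    left / middle / right pointers."""
    def __init__(self, key, children=None):
        self.key = key
        self.children = children or []          # children in the d-ary heap
        self.left = self.right = self.middle = None

def ternarize_children(e):
    """Heapify the d children of e in O(d) time, link them as a binary heap through
    their left/right pointers, and make the smallest of them e.middle."""
    kids = e.children
    n = len(kids)
    if n == 0:
        return
    def sift_down(i):                           # standard bottom-up heapify step
        while True:
            l, r, smallest = 2 * i + 1, 2 * i + 2, i
            if l < n and kids[l].key < kids[smallest].key:
                smallest = l
            if r < n and kids[r].key < kids[smallest].key:
                smallest = r
            if smallest == i:
                return
            kids[i], kids[smallest] = kids[smallest], kids[i]
            i = smallest
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i)
    for i, node in enumerate(kids):             # record the implicit heap links explicitly
        node.left = kids[2 * i + 1] if 2 * i + 1 < n else None
        node.right = kids[2 * i + 2] if 2 * i + 2 < n else None
    e.middle = kids[0]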
Theorem 3.4 Algorithm Soft-Select-Heapify(r) selects the k smallest items from a d-ary heap
with root r in O(dk) time.
Proof: Algorithm Soft-Select-Heapify(r) essentially works on a ternary version of the input
d-ary heap constructed on the fly. Simple adaptations of Lemmas 3.1 and 3.2 show that the total
running time, excluding the heapifications’ cost, is O(k). As only O(k) heapifications are performed,
the cost of all heapifications is O(dk), giving the total running time of the algorithm.
2
It is also possible to binarize the input heap on the fly. We first ternarize the heap as above.
We now convert the resulting ternary tree into a binary tree using the standard first child, next
sibling representation. This converts the ternary heap into a binary heap, if the three children
of each item are sorted. During the ternarization process, we can easily make sure that the three
children of each item appear in sorted order, swapping children if necessary, so we can apply this
final binarization step.
3.3 Selection from general heap-ordered trees
Algorithm Soft-Select-Heapify(r) works, of course, on arbitrary heap-ordered trees in which
different nodes have different degrees. Algorithm Soft-Select(r), on the other hand, is not easily
adapted to work on general heap-ordered trees, as it is unclear how to set the error parameter ε
to obtain an optimal running time. To bound the running time of Soft-Select-Heapify(r) on an
arbitrary heap-ordered tree, we introduce the following definition.
Definition 3.5 (D(T, k)) Let T be a (possibly infinite) rooted tree and let k ≥ 1. Let D(T, k) be
the maximum sum of degrees over all subtrees of T of size k rooted at the root of T . (The degrees
summed are in T , not in the subtree.)
For example, if Td is an infinite d-ary tree, then D(Td , k) = dk, as the sum of degrees in each
subtree of Td containing k vertices is dk. For a more complicated example, let T be the infinite
tree in which each node at level i has degree i + 2, i.e., the root has two children, each of which
has three children, etc. Then, $D(T,k) = \sum_{i=2}^{k+1} i = k(k+3)/2$, where the subtrees achieving this
maximum are paths starting at the root. A simple adaptation of Theorem 3.4 gives:
Theorem 3.6 Let T be a heap-ordered tree with root r. Algorithm Soft-Select-Heapify(r) selects
the k-th smallest item in T , and the set of k smallest items in T , in O(D(T, 3k)) time.
Proof: We use the on-the-fly binarization and a soft heap with ε = 1/6. The number of corrupt
items is less than 2k. The number of extracted items is less than k. Thus, the algorithm needs to
heapify the children of less than 3k items that form a subtree T 0 of the original tree T . The sum
of the degrees of these items is at most D(T, 3k), thus the total time spent on the heapifications,
which dominates the running time of the algorithm, is O(D(T, 3k)). We note that D(T, 3k) can be
replaced by D(T, (2 + δ)k), for any δ > 0, by choosing ε small enough.
2
Theorem 3.7 Let T be a heap-ordered tree and let k ≥ 1. Any comparison-based algorithm for
selecting the k-th smallest item in T must perform at least D(T, k − 1) − (k − 1) comparisons on
some inputs.
Proof: Let T′ be the subtree of T of size k − 1 that achieves the value D(T, k − 1), i.e., the sum of
the degrees of the nodes of T′ is D(T, k − 1). Suppose the k − 1 items of T′ are the k − 1 smallest
items in T. The nodes of T′ have at least D(T, k − 1) − (k − 2) children that are not in T′. The
k-th smallest item is the minimum item among these items, and no information on the order of
these items is implied by the heap order of the tree. Thus, finding the k-th smallest item in this
case requires at least D(T, k − 1) − (k − 1) comparisons.
2
4 Selection from row-sorted matrices
In this section we present algorithms for selecting the k smallest items from a row-sorted matrix, or
equivalently from a collection of sorted lists. Our results simplify and extend results of Frederickson
and Johnson [12]. The algorithms presented in this section use our Soft-Select algorithm for
selection from a binary min-heap presented in Section 3.1. (Frederickson’s [11] algorithm could also
be used, but the resulting algorithms would become much more complicated, in particular more
complicated than the algorithms of Frederickson and Johnson [12].)
Figure 5: Partitioning the items in each row to blocks of size b. Block representatives are shown
as small filled circles. The shaded regions contains the k smallest items. The darkly shaded region
depicts blocks all of whose items are among the k smallest.
In Section 4.1 we give an O(m + k) algorithm, where m is the number of rows, which is optimal
if m = O(k). In Section 4.2 we give an $O(m\log\frac{k+m}{m})$ algorithm which is optimal for k = Ω(m).
These results match results given by Frederickson and Johnson [12]. In Sections 4.3 and 4.4 we
give two new algorithms that improve in some cases over the previous algorithms.
4.1 An O(m + k) algorithm
A sorted list may be viewed as a heap-sorted path, i.e., a 1-ary heap. We can convert a collection
of m sorted lists into a (degenerate) binary heap by building a binary tree whose leaves are the first
items in the lists. The values of the m − 1 internal nodes in this tree are set to −∞. Each item in
a list will have one real child, its successor in the list, and a dummy child with value +∞. To find
the k smallest items in the lists, we simply find the m+k −1 smallest items in the binary heap. This
can be done in O(m + k) time using algorithm Soft-Select of Section 3.1. More directly, we can
use the following straightforward modification of algorithm Soft-Select. Insert the m first items
in the lists into a soft heap. Perform k − 1 iterations in which an item with minimum (corrupt)
key is extracted. Insert into the soft heap the child of the item extracted as well as the children of
all the items that became corrupt following the extract-min operation.
Alternatively, we can convert the m sorted lists into a heap-ordered tree Tm,1 by adding a root
with value −∞ that will have the m first items as its children. All other nodes in the tree will have
degree 1. It is easy to see that D(Tm,1 , k) = m + k − 1. By Theorem 3.6 we again get an O(m + k)
algorithm. We have thus presented three different proofs of the following theorem.
Theorem 4.1 Let A be a row-sorted matrix containing m rows. Then, the k-th smallest item in A,
and the set of k smallest items in A, can be found in O(m + k) time.
We refer to the algorithm of Theorem 4.1 as Mat-Select1 (A, k). The O(m + k) running time of
Mat-Select1 (A, k) is asymptotically optimal if k = O(m), as Ω(m) is clearly a lower bound; each
selection algorithm must examine at least one item in each row of the input matrix.
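As an illustration, the following self-contained Python sketch selects the k smallest items from m sorted lists in the spirit described above: insert the first item of every list, then repeatedly extract the minimum and insert the successor of the extracted item (with a real soft heap, also the successors of all items corrupted by the extraction). A plain binary heap stands in for the soft heap, giving O(m + k log m) rather than O(m + k) time; all names are ours.

import heapq

def lists_select(lists, k):
    """Return the k smallest items of the given sorted lists, in sorted order."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while len(out) < k and heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(lists[i]):                       # insert the child (successor) of val
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out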
4.2 An $O(m\log\frac{k}{m})$ algorithm, for k ≥ 2m
We begin with a verbal description of the algorithm. Let A be
the input matrix and let k ≥ 2m. Partition each row of the matrix A into blocks of size $b = \lfloor k/(2m)\rfloor$. The last item in each block is the
Mat-Select2 (A, k):
m ← num-rows(A)
if k ≤ 2m then
return Mat-Select1 (A, k)
else
b ← bk/(2m)c
A0 ← jump(A, b)
K ← Mat-Select1 (A0 , m)
A00 ← shift(A, bK)
return bK + Mat-Select2 (A00 , k − bm)
Figure 6: Selecting the k smallest items from a row-sorted matrix A. (Implicit handling of submatrices passed to recursive calls.)
Mat-Select2 (hA, c, Di, k):
m ← num-rows(A)
if k ≤ 2m then
return Mat-Select1 (hA, c, Di, k)
else
b ← bk/(2m)c
K ← Mat-Select1 (hA, bc, Di, m)
return bK + Mat-Select2 (hA, c, D + bcKi, k − bm)
Figure 7: Selecting the k smallest items from a row-sorted matrix A. (Explicit handling of submatrices passed to recursive calls.)
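For concreteness, the following Python sketch follows the structure of Figure 7 (with 0-based indexing, so a per-row displacement array replaces the pair (c, D), since c stays 1 throughout the recursion). It assumes every row is long enough — say at least 2k items, an assumption Sections 4.3 and 4.4 show how to remove — and it uses a standard binary heap inside the helper that selects block representatives, so it illustrates the recursion rather than the optimal running time. Calling mat_select2(A, k) on a list of sorted Python lists returns the per-row counts of the k smallest items, whose sum is k.

import heapq
import math

def _view_select(A, stride, offset, k):
    """Select the k smallest items of the row-sorted view V[i][j] = A[i][offset[i] + stride*j],
    j = 0, 1, ...  Returns K with K[i] = number of selected items from row i.
    (The paper uses Mat-Select1 / a soft heap here to get O(m + k) time.)"""
    m = len(A)
    def get(i, j):
        idx = offset[i] + stride * j
        return A[i][idx] if idx < len(A[i]) else math.inf
    heap = [(get(i, 0), i, 0) for i in range(m)]
    heapq.heapify(heap)
    K = [0] * m
    for _ in range(k):
        _, i, j = heapq.heappop(heap)
        K[i] += 1
        heapq.heappush(heap, (get(i, j + 1), i, j + 1))
    return K

def mat_select2(A, k, offset=None):
    """Per-row counts of the k smallest items of a row-sorted matrix A (Figure 7 style)."""
    m = len(A)
    if offset is None:
        offset = [0] * m
    if k <= 2 * m:
        return _view_select(A, 1, offset, k)
    b = k // (2 * m)
    # The representative of each block of b consecutive remaining items in row i is its
    # last item, located at offset[i] + b - 1, offset[i] + 2b - 1, ...
    rep_offset = [offset[i] + b - 1 for i in range(m)]
    K = _view_select(A, b, rep_offset, m)          # m smallest block representatives
    # Every item of a selected block is among the k smallest; discard those b*K[i] items
    # by advancing the displacement of each row, and select the rest recursively.
    new_offset = [offset[i] + b * K[i] for i in range(m)]
    rest = mat_select2(A, k - b * m, new_offset)
    return [b * K[i] + rest[i] for i in range(m)]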
representative of the block. Consider the (yet unknown) distribution of the k smallest items among the m rows of the matrix. Let ki be the number of items in the i-th row that are among the k smallest items in the whole matrix. These ki items are clearly the first ki items of the i-th row. They are partitioned into a number of full blocks, followed possibly by one partially filled block. (For an example, see Figure 5.) The number of items in partially filled blocks is at most m⌊k/(2m)⌋. Thus, the number of filled blocks is at least

    (k − m⌊k/(2m)⌋) / ⌊k/(2m)⌋ ≥ m.
Apply algorithm Mat-Select1 to select the smallest m block representatives. This clearly takes only O(m) time. (Algorithm Mat-Select1 is applied on the implicitly represented matrix A′ of block representatives.) All items in the m blocks whose representatives were selected are among the k smallest items of the matrix. The number of such items is mb = m⌊k/(2m)⌋ ≥ k/4, as k ≥ 2m. These items can be removed from the matrix. All that remains is to select the k − mb smallest remaining items using a recursive call to the algorithm. In each recursive call (or iteration), the total work is O(m). The number of items still to be selected drops to at most 3/4 of its previous value. Thus, after at most log_{4/3}(k/(2m)) = O(log(k/m)) iterations, k drops below 2m and then Mat-Select1 is called to finish the job in O(m) time.
Pseudo-code of the algorithm described above, which we call Mat-Select2 (A, k) is given in Figure 6.
The algorithm returns an array K = (k1 , k2 , . . . , km ), where ki is the number of items in the i-th
row that are among the k smallest items of the matrix. The algorithm uses a function num-rows(A) that returns the number of rows of a given matrix, a function jump(A, b) that returns an (implicit) representation of a matrix A′ such that A′_{i,j} = A_{i,bj}, for i, j ≥ 1, and a function shift(A, K) that returns an (implicit) representation of a matrix A″ such that A″_{i,j} = A_{i,j+ki}, for i, j ≥ 1.
In Figure 7 we eliminate the use of jump and shift and make everything explicit. The input matrix is now represented by a triplet ⟨A, c, D⟩, where A is a matrix, c ≥ 1 is an integer, and D = (d1, d2, . . . , dm) is an array of non-negative integral displacements. Mat-Select2(⟨A, c, D⟩, k) selects the k smallest items in the matrix A′ such that A′_{i,j} = A_{i,cj+di}, for i, j ≥ 1. To select the k smallest items in A itself, we simply call Mat-Select2(⟨A, 1, 0⟩, k), where 0 represents an array of m zeros. The implementation of Mat-Select2(⟨A, c, D⟩, k) in Figure 7 is recursive. It is easy to convert it into an equivalent iterative implementation.
Theorem 4.2 Let A be a row-sorted matrix containing m rows and let k ≥ 2m. Algorithm Mat-Select2(A, k) selects the k smallest items in A in O(m log(k/m)) time.
k
Frederickson and Johnson [12] showed that the O(m log m
) running time of Mat-Select2 (A, k) is
optimal, when k ≥ 2m. A simple proof of this claim can also be found in Section 4.5.
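The following Python sketch transcribes Figure 7 (an added illustration, not the authors' code); mat_select1 is a simple heap-based stand-in for Mat-Select1, so the soft-heap guarantees are not preserved, and rows are assumed long enough for the positions accessed.

import heapq

def mat_select1(A, c, D, k):
    # Heap-based stand-in for Mat-Select1 on the virtual matrix
    # A'[i][j] = A[i][c*j + D[i]] (j = 1, 2, ... as in the paper; the Python
    # lists are 0-based, hence the "- 1").  Returns K, where K[i] is the number
    # of selected items coming from row i.  Using heapq instead of a soft heap
    # costs O((m + k) log m) time instead of O(m + k).
    m = len(A)
    def entry(i, j):
        pos = c * j + D[i] - 1
        return A[i][pos] if pos < len(A[i]) else None   # short rows: treated as exhausted
    heap = [(entry(i, 1), i) for i in range(m) if entry(i, 1) is not None]
    heapq.heapify(heap)
    K = [0] * m
    for _ in range(k):
        if not heap:
            break
        _, i = heapq.heappop(heap)
        K[i] += 1
        nxt = entry(i, K[i] + 1)
        if nxt is not None:
            heapq.heappush(heap, (nxt, i))
    return K

def mat_select2(A, c, D, k):
    # Transcription of Figure 7: select the k smallest items of the virtual
    # matrix <A, c, D> and report how many come from each row.
    m = len(A)
    if k <= 2 * m:
        return mat_select1(A, c, D, k)
    b = k // (2 * m)
    K = mat_select1(A, b * c, D, m)                 # m smallest block representatives
    D2 = [D[i] + b * c * K[i] for i in range(m)]    # drop the b*K[i] identified items
    K2 = mat_select2(A, c, D2, k - b * m)
    return [b * K[i] + K2[i] for i in range(m)]

# Example: mat_select2([[1,3,5,7,9,11,13,15], [2,4,6,8,10,12,14,16]], 1, [0,0], 6)
# returns [3, 3]: the six smallest items 1..6 split evenly between the two rows.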
4.3
An O(m + Σ_{i=1}^m log ni) algorithm
Assume now that the i-th row of A contains only ni items. We assume that ni ≥ 1, as otherwise we can simply remove the i-th row. We can run algorithms Mat-Select1 and Mat-Select2 of the previous sections by adding dummy +∞ items at the end of each row, but this may be wasteful. We now show that a simple modification of Mat-Select2, which we call Mat-Select3, can solve the selection problem in O(m + Σ_{i=1}^m log ni) time. We focus first on the number of comparisons performed by the new algorithm.
At the beginning of each iteration, Mat-Select2 sets the block size to b = ⌊k/(2m)⌋. If ni < b, then the last item in the first block of the i-th row is +∞. Assuming that k ≤ Σ_{i=1}^m ni, no representatives from the i-th row will be selected in the current iteration. There is therefore no point in considering the i-th row in the current iteration. Let m′ be the number of long rows, i.e., rows for which ni ≥ ⌊k/(2m)⌋. We want to reduce the running time of the iteration to O(m′) and still reduce k by some constant factor.
The total number of items in the short rows is less than m⌊k/(2m)⌋ ≤ k/2. The long rows thus contain at least k/2 of the k smallest items of the matrix. We can thus run an iteration of Mat-Select2 on the long rows with k′ = k/2. In other words, we adjust the block size to b′ = ⌊k′/(2m′)⌋ = ⌊k/(4m′)⌋ and use Mat-Select1 to select the m′ smallest representatives. This identifies b′m′ ≥ k′/4 ≥ k/8 items as belonging to the k smallest items in A. Thus, each iteration takes O(m′) time and reduces k to at most 7/8 of its previous value.
In how many iterations did each row of the matrix participate? Let kj be the number of items still to be selected at the beginning of iteration j. Let bj = ⌊kj/(2m)⌋ be the threshold for long rows used in iteration j. As kj drops exponentially, so does bj. Thus, row i participates in at most O(log ni) of the last iterations of the algorithm. The total number of comparisons performed is thus at most O(m + Σ_{i=1}^m log ni), as claimed.
To show that the algorithm can also be implemented to run in O(m + Σ_{i=1}^m log ni) time, we need to show that we can quickly identify the rows that are long enough to participate in each iteration. To do that, we sort the values ⌈log ni⌉ using bucket sort. This takes only O(m + maxi ⌈log ni⌉) time. When a row loses some of its items, it is easy to move it to the appropriate bucket in O(1) time. In each iteration we may need to examine rows in one bucket that turn out not to be long enough, but this does not affect the total O(m + Σ_{i=1}^m log ni) running time of the algorithm.
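A minimal sketch of this bucketing (an added illustration; it assumes all row lengths are known up front and omits the O(1) bucket updates performed when a row shrinks):

def build_buckets(lengths):
    # buckets[d] holds the indices of rows i whose length n_i satisfies
    # ceil(log2(n_i)) == d; (n - 1).bit_length() computes ceil(log2(n)) exactly.
    top = max((n - 1).bit_length() for n in lengths)
    buckets = [[] for _ in range(top + 1)]
    for i, n in enumerate(lengths):
        buckets[(n - 1).bit_length()].append(i)
    return buckets

def candidate_long_rows(buckets, b):
    # Rows with n_i >= b lie in buckets >= ceil(log2(b)); the boundary bucket
    # may also contain a few rows that are not long enough, which the caller
    # filters out -- exactly the extra check mentioned in the text above.
    for d in range((b - 1).bit_length(), len(buckets)):
        for i in buckets[d]:
            yield i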
Theorem 4.3 Let A be a row-sorted matrix containing m rows, and let N = (n1, n2, . . . , nm), where ni ≥ 1 is the number of items in the i-th row of the matrix, for 1 ≤ i ≤ m. Let k ≤ Σ_{i=1}^m ni. Algorithm Mat-Select3(A, N, k) selects the k smallest items in A in O(m + Σ_{i=1}^m log ni) time.
In Section 4.5 below we show that the running time of Mat-Select3(A, N, k) is optimal for some values of N = (n1, n2, . . . , nm) and k, e.g., if k = (1/2) Σ_{i=1}^m ni, i.e., for median selection. The O(m log(k/m)) running time of Mat-Select2(A, k) is sometimes better than the O(m + Σ_{i=1}^m log ni) running time of Mat-Select3(A, N, k). We next describe an algorithm, Mat-Select4(A, k), which is always at least as fast as the three algorithms already presented, and sometimes faster.
4.4
An O(m + Σ_{i=1}^m log(ki + 1)) algorithm
As before, let ki be the (yet unknown) number of items in the i-th row that belong to the smallest k items of the matrix. In this section we describe an algorithm for finding these ki's that runs in O(m + Σ_{i=1}^m log(ki + 1)) time.
We partition each row this time into blocks of size 1, 2, 4, . . .. The representative of a block is again the last item in the block. Note that the first ki items in row i reside in ⌊log(ki + 1)⌋ complete blocks, plus one incomplete block, if log(ki + 1) is not an integer. Thus L = Σ_{i=1}^m ⌊log(ki + 1)⌋ is exactly the number of block representatives that belong to the k smallest items of the matrix.
Suppose that ℓ ≥ L is an upper bound on the true value of L. We can run Mat-Select1 to select the ℓ smallest block representatives in O(m + ℓ) time. If ℓi representatives were selected from row i, we let ni = 2^{ℓi+1} − 1. We now run Mat-Select3, which runs in O(m + Σ_{i=1}^m log ni) = O(m + Σ_{i=1}^m (ℓi + 1)) = O(m + ℓ) time. Thus, if ℓ = O(L), the total running time is O(m + Σ_{i=1}^m log(ki + 1)), as promised.
How do we find a tight upper bound on L = Σ_{i=1}^m ⌊log(ki + 1)⌋? We simply try ℓ = m, 2m, 4m, . . ., until we obtain a value of ℓ that is high enough. If ℓ < L, i.e., ℓ is not large enough, we can discover it in one of two ways. Either Σ_{i=1}^m ni < k, in which case ℓ is clearly too small. Otherwise, the algorithm returns an array of ki values. We can check whether these values are the correct ones in O(m) time. First compute M = max_{i=1}^m A_{i,ki}. Next check that A_{i,ki+1} > M, for 1 ≤ i ≤ m. As ℓ is doubled in each iteration, the cost of the last iteration dominates the total running time, which is thus O(m + 2L) = O(m + Σ_{i=1}^m log(ki + 1)). We call the resulting algorithm Mat-Select4.
Theorem 4.4 Let A be a row-sorted matrix containing m rows and let k ≥ 2m. Algorithm Mat-Select4(A, k) selects the k smallest items in A in O(m + Σ_{i=1}^m log(ki + 1)) time, where ki is the number of items selected from row i.
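The O(m)-time correctness check used above translates directly into code (an added sketch with 0-based Python indexing, not the authors' implementation; rows that are fully selected are treated as ending in +∞):

import math

def counts_are_correct(A, K):
    # A: row-sorted matrix (list of sorted lists); K[i]: claimed number of items
    # of row i among the k smallest.  The claim is valid iff every selected item
    # is smaller than every unselected one: the maximum selected item
    # max_i A[i][K[i]-1] must be smaller than the first unselected item A[i][K[i]]
    # of every row.
    M = max((A[i][K[i] - 1] for i in range(len(A)) if K[i] > 0), default=-math.inf)
    return all(K[i] >= len(A[i]) or A[i][K[i]] > M for i in range(len(A)))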
4.5
Lower bounds for selection from row-sorted matrices
We begin with a simple proof that the O(m log(k/m)) algorithm is optimal for k ≥ 2m.

Theorem 4.5 Any algorithm for selecting the k smallest items from a matrix with m sorted rows must perform at least (m − 1) log((m + k)/m) comparisons on some inputs.
Proof: We use the information-theoretic lower bound. We need to lower bound sk(m), which is the number of m-tuples (k1, k2, . . . , km), where 0 ≤ ki, for 1 ≤ i ≤ m, and Σ_{i=1}^m ki = k. It is easily seen that sk(m) = \binom{m+k−1}{m−1}, as this is the number of ways to arrange k identical balls and m − 1 identical dividers in a row. We thus get a lower bound of

    log \binom{m+k−1}{m−1} ≥ log ((m+k−1)/(m−1))^{m−1} = (m − 1) log((m+k−1)/(m−1)) ≥ (m − 1) log((m+k)/m),

where we used the well-known relation \binom{n}{k} > (n/k)^k.  □
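As a quick sanity check of the stars-and-bars count (an added illustration, not part of the original proof), take m = 3 and k = 2: then

    s2(3) = \binom{3+2−1}{3−1} = \binom{4}{2} = 6,

and indeed the six tuples (k1, k2, k3) with k1 + k2 + k3 = 2 are (2,0,0), (0,2,0), (0,0,2), (1,1,0), (1,0,1), (0,1,1).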
We next show that our new O(m + Σ_{i=1}^m log ni) algorithm is optimal, at least in some cases, e.g., when k = (1/2) Σ_{i=1}^m ni, which corresponds to median selection.

Theorem 4.6 Any algorithm for selecting the k = (1/2) Σ_{i=1}^m ni smallest items from a row-sorted matrix with m rows of lengths n1, n2, . . . , nm ≥ 1 must perform at least Σ_{i=1}^m log(ni + 1) − log(1 + Σ_{i=1}^m ni) comparisons on some inputs.
Proof: The number of possible solutions to the selection problem, for all values of 0 ≤ k ≤ Σ_{i=1}^m ni, is Π_{i=1}^m (ni + 1). (Each solution corresponds to a choice 0 ≤ ki ≤ ni, for i = 1, 2, . . . , m.) We prove below that the number of solutions is maximized for k = ⌊(1/2) Σ_{i=1}^m ni⌋ (and for k = ⌈(1/2) Σ_{i=1}^m ni⌉). The number of possible solutions for this value of k is thus at least (Π_{i=1}^m (ni + 1))/(1 + Σ_{i=1}^m ni). Taking logarithms, we get the promised lower bound.

We next prove that the number of solutions is maximized when k = (1/2) Σ_{i=1}^m ni. Let Xi be a uniform random variable on {0, 1, . . . , ni}, and let Y = Σ_{i=1}^m Xi. The number of solutions for a given value k is proportional to the probability that Y attains the value k. Let Yj = Σ_{i=1}^j Xi. We prove by induction on j that the distribution of Yj is maximized at µj = (1/2) Σ_{i=1}^j ni, is symmetric around µj, and is increasing up to µj and decreasing after µj. The base case is obvious, as Y1 = X1 is a uniform distribution. The induction step follows from an easy calculation. Indeed, Yj = Yj−1 + Xj, where Xj is uniform and Yj−1 has the required properties. The distribution of Yj is the convolution of the distributions of Yj−1 and Xj, which corresponds to taking the average of nj + 1 values of the distribution of Yj−1. It follows easily that Yj also has the required properties.  □
We next compare the lower bound obtained, Σ_{i=1}^m log(ni + 1) − log(1 + Σ_{i=1}^m ni), with the upper bound O(m + Σ_{i=1}^m log ni). The subtracted term in the lower bound is dominated by the first term, i.e., log(1 + Σ_{i=1}^m ni) ≤ (log(m+1)/m) Σ_{i=1}^m log(ni + 1), where equality holds only if ni = 1, for every i. When the ni's are large, the subtracted term becomes negligible. Also, as ni ≥ 1, we have Σ_{i=1}^m log(ni + 1) ≥ m. Thus, the lower and upper bound are always within a constant multiplicative factor of each other.
The optimality of the O(m + Σ_{i=1}^m log ni) algorithm also implies the optimality of our new “output-sensitive” O(m + Σ_{i=1}^m log(ki + 1)) algorithm. As ki ≤ ni, an algorithm that performs less than c(m + Σ_{i=1}^m log(ki + 1)) comparisons on all inputs, for some small enough c, would contradict the lower bounds for the O(m + Σ_{i=1}^m log ni) algorithm.
5
Selection from X + Y
We are given two unsorted sets X and Y and we would like to find the k-th smallest item, and the
set of k smallest items, in the set X + Y . We assume that |X| = m, |Y | = n, where m ≥ n.
5.1
An O(m + n + k) algorithm
Heapify X and heapify Y , which takes O(m + n) time. Let x1 , . . . , xm be the heapified order of X,
i.e., xi ≤ x2i , x2i+1 , whenever the respective items exist. Similarly, let y1 , . . . , yn be the heapified order of Y . Construct a heap of maximum degree 4 representing X + Y as follows. The root is x1 + y1 . Item xi + y1 , for i ≥ 1, has four children x2i + y1 , x2i+1 + y1 , xi + y2 , xi + y3 . Item
xi + yj , for i ≥ 1, j > 1, has two children xi + y2j , xi + y2j+1 , again when the respective items
exist. (Basically, this is a heapified version of X + y1 , where each xi + y1 is the root of a heapified
version of xi + Y .) We can now apply algorithm Soft-Select on this heap. We call the resulting
algorithm X+Y-Select1 (X, Y ).
Theorem 5.1 Let X and Y be unordered sets of m and n items respectively. Then, algorithm
X+Y-Select1 (X, Y ) finds the k-th smallest item, and the set of k smallest items in X + Y , in
O(m + n + k) time.
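A Python sketch of this construction (an added illustration: heapq replaces the soft heap of Soft-Select, so the running time becomes O(m + n + k log k) rather than O(m + n + k)):

import heapq

def xy_select(X, Y, k):
    # Return the k smallest sums x + y, using the implicit degree-4 heap on
    # index pairs described above.  X and Y are heapified in place; 1-based
    # heap positions are simulated on top of Python's 0-based lists.
    heapq.heapify(X)
    heapq.heapify(Y)
    m, n = len(X), len(Y)

    def children(i, j):
        # Item (i, 1) has children (2i, 1), (2i+1, 1), (i, 2), (i, 3);
        # item (i, j) with j > 1 has children (i, 2j), (i, 2j+1).
        if j == 1:
            cand = [(2 * i, 1), (2 * i + 1, 1), (i, 2), (i, 3)]
        else:
            cand = [(i, 2 * j), (i, 2 * j + 1)]
        return [(a, b) for a, b in cand if a <= m and b <= n]

    out = []
    heap = [(X[0] + Y[0], 1, 1)]
    while heap and len(out) < k:
        s, i, j = heapq.heappop(heap)
        out.append(s)
        for a, b in children(i, j):
            heapq.heappush(heap, (X[a - 1] + Y[b - 1], a, b))
    return out

# xy_select([5, 1, 3], [10, 2], 3)  ->  [3, 5, 7]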
5.2
An O(m log(k/m)) algorithm, for k ≥ 6m, m ≥ n
If Y is sorted, then X + Y is a row-sorted matrix, and we can use algorithm Mat-Select2 of Theorem 4.2. We can sort Y in O(n log n) time and get an O(m log(k/m) + n log n) algorithm. The running time of this algorithm is O(m log(k/m)) when k ≥ mn^ε, for any fixed ε > 0. But, for certain values of m, n and k, e.g., m = n and k = mn^{o(1)}, the cost of sorting is dominant. We show below that the sorting can always be avoided.
We first regress and describe an alternative O(m log(k/m)) algorithm for selection from row-sorted
matrices. The algorithm is somewhat more complicated than algorithm Mat-Select2 given in
Section 4.2. The advantage of the new algorithm is that far fewer assumptions are made about
the order of the items in each row. A similar approach was used by Frederickson and Johnson [12]
but we believe that our approach is simpler. In particular we rely on a simple partitioning lemma
(Lemma 5.3 below) which is not used, explicitly or implicitly, in [12].
Instead of partitioning each row into blocks of equal size, as done by algorithm Mat-Select2 of
Section 4.2, we partition each row into exponentially increasing blocks, similar, but not identical,
to the partition made by algorithm Mat-Select3 of Section 4.3.
Let b = ⌊k/(3m)⌋. Partition each row into blocks of size b, b, 2b, 4b, . . . , 2^j b, . . .. The representative of a block is again the last item in the block. We use algorithm Mat-Select1 to select the m smallest representatives. This takes O(m) time. Let e1 < e2 < . . . < em denote the m selected representatives in (the unknown) sorted order, and let s1, s2, . . . , sm be the sizes of their blocks. We next use an O(m) weighted selection algorithm (see, e.g., Cormen et al. [7], Problem 9.2, p. 225) to find the smallest ℓ such that k/6 ≤ Σ_{j=1}^ℓ sj, and the items e1, e2, . . . , eℓ, in some order. Such an ℓ ≤ m must exist, as m⌊k/(3m)⌋ ≥ m(k/(3m) − 1) = (1/3)(k − 3m) ≥ k/6, as k ≥ 6m. Also note that k/6 ≤ Σ_{j=1}^ℓ sj < k/3, as the addition of each block at most doubles the total size, i.e., Σ_{j=1}^ℓ sj < 2 Σ_{j=1}^{ℓ−1} sj, for ℓ > 1.
Claim 5.2 All items of the blocks whose representatives are e1 , e2 , . . . , e` are among the k smallest
items in the matrix.
Proof: Let Sk be the set of k smallest items of the matrix. Consider again the partition of Sk among the m rows of the matrix. Less than mb = m⌊k/(3m)⌋ ≤ k/3 of the items of Sk belong to rows that do not contain a full block of Sk items. Thus, at least 2k/3 of the items of Sk are contained in rows that contain at least one full block of Sk items. The exponential increase in the size of the blocks ensures that in each such row, at least half of the items of Sk are contained in full blocks.
Figure 8: Partitions of Y for n = 32.
Thus, at least k/3 of the items of Sk are contained in full blocks. In particular, if e1, e2, . . . , eℓ are the smallest block representatives, and Σ_{j=1}^ℓ sj ≤ k/3, then all the items in the blocks of e1, e2, . . . , eℓ belong to Sk.  □
We can thus remove all the items in the blocks of e1, e2, . . . , eℓ from the matrix and proceed to find the k − Σ_{j=1}^ℓ sj ≤ 5k/6 smallest items of the remaining matrix. When k drops below 6m, we use algorithm Mat-Select1 of Section 4.1. The resulting algorithm performs O(log(k/m)) iterations, each taking O(m) time, so the total running time is O(m log(k/m)). (This matches the running time of Mat-Select2, using a somewhat more complicated algorithm.)
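To make one iteration of the above concrete, here is a plain-Python sketch (an added illustration: sorting stands in for both the soft-heap-based Mat-Select1 and the O(m) weighted selection, and rows are assumed long enough to contain at least their first block of size b):

def removable_prefix(rows, k):
    # rows: the remaining sorted rows; k >= 6 * len(rows).
    # Returns remove[i], the number of items that may be deleted from the
    # front of row i in this iteration.
    m = len(rows)
    b = k // (3 * m)
    reps = []                      # (representative value, row, block size, block end)
    for i, row in enumerate(rows):
        size, end = b, b           # block sizes b, b, 2b, 4b, ... (ends b, 2b, 4b, ...)
        while end <= len(row):
            reps.append((row[end - 1], i, size, end))
            size, end = end, 2 * end
    reps.sort()                    # stand-in for Mat-Select1
    remove, total = [0] * m, 0
    for value, i, size, end in reps[:m]:   # the m smallest representatives
        total += size
        remove[i] = max(remove[i], end)    # blocks of one row form a prefix chain
        if 6 * total >= k:                 # smallest l with k/6 <= s_1 + ... + s_l
            break
    return remove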
We make another small adaptation to the new O(m log(k/m)) algorithm before returning to the selection from X + Y problem. Instead of letting b = ⌊k/(3m)⌋ and using blocks of size b, b, 2b, 4b, . . ., we let b′ = 2^⌊log2 b⌋, i.e., b′ is the largest power of 2 which is at most b, and use blocks of size b′, b′, 2b′, 4b′, . . .. All block sizes are now powers of 2. As the sizes of the blocks may be halved, we select the 2m smallest block representatives. The number of items removed from each row in each iteration is now also a power of 2.
Back to the X + Y problem. The main advantage of the new algorithm is that we do not really
need the items in each row to be sorted. All we need are the items of ranks b, b, 2b, 4b, . . ., where
b = 2` for some ` > 0, in each row. In the X + Y problem the rows, or what remains of them after
a certain number of iterations, are related, so we can easily achieve this task.
At the beginning of the first iteration, we use repeated median selection to find the items of Y
whose ranks are 1, 2, 4, . . .. This also partitions Y into blocks of size 1, 2, 4, . . . such that items of
each block are smaller than the items of the succeeding block. We also place the items of ranks
1, 2, 4, . . . in their corresponding places in Y . This gives us enough information to run the first
iteration of the matrix selection algorithm.
In each iteration, we refine the partition of Y . We apply repeated median selection on each block
of size 2` in Y , breaking it into blocks of size 1, 1, 2, 4, . . . , 2`−1 . The total time needed is O(n) per
iteration, which we can easily afford. We assume for simplicity that n = |Y | is a power of 2 and
that all items in Y are distinct. We now have the following fun lemma:
Lemma 5.3 After i iterations of the above process, if 1 ≤ r ≤ n has at most i 1’s in its binary
representation, then Y [r] is the item of rank r in Y , i.e., Y [1 : r − 1] < Y [r] < Y [r + 1 : n].
Additionally, if r1 < r2 both have at most i 1’s in their binary representation, and r2 is the smallest
number larger than r1 with this property, then Y [1 : r1 ] < Y [r1 + 1 : r2 ] < Y [r2 + 1 : n], i.e., the
items in Y [r1 + 1 : r2 ] are all larger than the items in Y [1 : r1 ] and smaller than the items in
Y [r2 + 1 : n].
For example, if n = 32, then after the first iteration we have the partition
Y [1], Y [2], Y [3 : 4], Y [5 : 8], Y [9 : 16], Y [17 : 32] .
After the second iteration, we have the partition
Y [1], Y [2], Y [3], Y [4], Y [5], Y [6], Y [7 : 8], Y [9], Y [10], Y [11:12], Y [13 : 16],
Y [17], Y [18], Y [19 : 20], Y [21 : 24], Y [25 : 32] .
(Actually, blocks of size 2 are also sorted.) The partitions obtained for n = 32 in the first five
iterations are also shown in Figure 8.
Proof: The claim clearly holds after the first iteration, as numbers with a single 1 in their binary representation are exactly powers of 2. Let Y[r1 + 1 : r2] be a block of Y generated after i iterations. If r1 has less than i 1's, then r2 = r1 + 1, so the block is trivial. Suppose, therefore, that r1 has exactly i 1's in its representation and that r2 = r1 + 2^ℓ. (ℓ is actually the index of the rightmost 1 in the representation of r1, counting from 0.) In the (i + 1)-st iteration, this block is broken into the blocks Y[r1 + 1], Y[r1 + 2], Y[r1 + 3 : r1 + 4], . . . , Y[r1 + 2^{ℓ−1} + 1 : r1 + 2^ℓ]. As the numbers r1 + 2^j, for 0 ≤ j < ℓ, are exactly the numbers between r1 and r2 with at most i + 1 1's in their binary representation, this establishes the induction step.  □
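Lemma 5.3 is easy to check experimentally; the following sketch (an added illustration, not used by the algorithm) computes the partition directly from the binary-representation criterion:

def partition_after(n, i):
    # Boundaries of the partition of Y[1..n] after i refinement iterations:
    # exactly the positions r with at most i ones in binary (Lemma 5.3).
    # Returns the blocks as 1-based inclusive pairs (lo, hi), i.e. Y[lo : hi].
    marks = [r for r in range(1, n + 1) if bin(r).count("1") <= i]
    blocks, prev = [], 0
    for r in marks:
        blocks.append((prev + 1, r))
        prev = r
    return blocks

# partition_after(32, 1) -> [(1,1), (2,2), (3,4), (5,8), (9,16), (17,32)],
# matching the first of the partitions displayed above.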
After i iterations of the modified matrix selection algorithm applied to an X + Y instance, we have
removed a certain number of items di from each row. The number of items removed from each
row in each iteration is a power of 2. By induction, di has at most i 1’s in its representation. In
the (i + 1)-st iteration we set b = 2^ℓ, for some ℓ ≥ 1, and need the items of rank b, 2b, 4b, . . . from what remains of each row. The items needed from the i-th row are exactly X[i] + Y[di + 2^j b], for j = 0, 1, . . .. The required items from Y are available, as di + 2^j b has at most i + 1 1's in its binary representation! We call the resulting algorithm X+Y-Select2(X, Y).
Theorem 5.4 Let X and Y be unordered sets of m and n items respectively, where m ≥ n, and let k ≥ 6m. Algorithm X+Y-Select2(X, Y) finds the k-th smallest item, and the set of k smallest items in X + Y, in O(m log(k/m)) time.
6
Concluding remarks
We used soft heaps to obtain a very simple O(k) algorithm for selecting the k-th smallest item
from a binary min-heap, greatly simplifying the previous O(k) algorithm of Frederickson [11]. We
used this simple heap selection algorithm to obtain simpler algorithms for selection from row-sorted
matrices and from X + Y , simplifying results of Frederickson and Johnson [12]. The simplicity of
our algorithms allowed us to go one step further and obtain some improved algorithms for these problems, in particular an O(m + Σ_{i=1}^m log(ki + 1)) “output-sensitive” algorithm for selection from row-sorted matrices.
Our results also demonstrate the usefulness of soft heaps outside the realm of minimum spanning
tree algorithms. It would be nice to find further applications of soft heaps.
References
[1] Manuel Blum, Robert W. Floyd, Vaughan R. Pratt, Ronald L. Rivest, and Robert Endre
Tarjan. Time bounds for selection. J. Comput. Syst. Sci., 7(4):448–461, 1973.
[2] David Bremner, Timothy M. Chan, Erik D. Demaine, Jeff Erickson, Ferran Hurtado, John
Iacono, Stefan Langerman, Mihai Patrascu, and Perouz Taslakian. Necklaces, convolutions,
and X + Y . Algorithmica, 69(2):294–314, 2014.
[3] Gerth Stølting Brodal, Rolf Fagerberg, Mark Greve, and Alejandro López-Ortiz. Online sorted
range reporting. In Algorithms and Computation, 20th International Symposium, ISAAC 2009,
Honolulu, Hawaii, USA, December 16-18, 2009. Proceedings, pages 173–182, 2009.
[4] Jean Cardinal, Samuel Fiorini, Gwenaël Joret, Raphaël M. Jungers, and J. Ian Munro. Sorting
under partial information (without the ellipsoid algorithm). Combinatorica, 33(6):655–697,
2013.
[5] Bernard Chazelle. A minimum spanning tree algorithm with inverse-Ackermann type complexity. J. ACM, 47(6):1028–1047, 2000.
[6] Bernard Chazelle. The soft heap: an approximate priority queue with optimal error rate. J.
ACM, 47(6):1012–1027, 2000.
[7] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction
to Algorithms, 3rd Edition. MIT Press, 2009.
[8] Dorit Dor and Uri Zwick. Selecting the median. SIAM J. Comput., 28(5):1722–1758, 1999.
[9] Dorit Dor and Uri Zwick. Median selection requires (2 + ε)n comparisons. SIAM J. Discrete Math., 14(3):312–325, 2001.
[10] David Eppstein. Finding the k shortest paths. SIAM J. Comput., 28(2):652–673, 1998.
[11] Greg N. Frederickson. An optimal algorithm for selection in a min-heap. Inf. Comput.,
104(2):197–214, 1993.
[12] Greg N. Frederickson and Donald B. Johnson. The complexity of selection and ranking in
X + Y and matrices with sorted columns. J. Comput. Syst. Sci., 24(2):197–208, 1982.
[13] Greg N. Frederickson and Donald B. Johnson. Generalized selection and ranking: Sorted
matrices. SIAM J. Comput., 13(1):14–30, 1984.
[14] Greg N. Frederickson and Donald B. Johnson. Erratum: Generalized selection and ranking:
Sorted matrices. SIAM J. Comput., 19(1):205–206, 1990.
[15] Michael L. Fredman. How good is the information theory bound in sorting? Theor. Comput.
Sci., 1(4):355–361, 1976.
[16] Joseph L. Hodges Jr and Erich L. Lehmann. Estimates of location based on rank tests. The
Annals of Mathematical Statistics, pages 598–611, 1963.
[17] Donald B. Johnson and Samuel D. Kashdan. Lower bounds for selection in X + Y and other
multisets. J. ACM, 25(4):556–570, 1978.
[18] Donald B. Johnson and Tetsuo Mizoguchi. Selecting the k-th element in X + Y and X1 + X2 +
. . . + Xm . SIAM J. Comput., 7(2):147–153, 1978.
[19] Jeff Kahn and Jeong Han Kim. Entropy and sorting. J. Comput. Syst. Sci., 51(3):390–399,
1995.
[20] Jeff Kahn and Michael Saks. Balancing poset extensions. Order, 1(2):113–126, 1984.
[21] Daniel M. Kane, Shachar Lovett, and Shay Moran. Near-optimal linear decision trees for
k-SUM and related problems. CoRR, abs/1705.01720, 2017.
[22] Haim Kaplan, Robert Endre Tarjan, and Uri Zwick. Soft heaps simplified. SIAM J. Comput.,
42(4):1660–1673, 2013.
[23] Bernard O. Koopman. The optimum distribution of effort. Journal of the Operations Research
Society of America, 1(2):52–63, 1953.
[24] Jean-Luc Lambert. Sorting the sums (xi + yj ) in O(n2 ) comparisons. Theoretical Computer
Science, 103(1):137–141, 1992. 7th Annual Symposium on Theoretical Aspects of Computer
Science (STACS 90) (Rouen, 1990).
[25] Seth Pettie and Vijaya Ramachandran. An optimal minimum spanning tree algorithm. J.
ACM, 49(1):16–34, 2002.
[26] Arnold Schönhage, Mike Paterson, and Nicholas Pippenger. Finding the median. J. Comput.
Syst. Sci., 13(2):184–199, 1976.
[27] William L. Steiger and Ileana Streinu. A pseudo-algorithmic separation of lines from pseudolines. Inf. Process. Lett., 53(5):295–299, 1995.
[28] J.W.J. Williams. Algorithm 232: Heapsort. Communications of the ACM, 7:347–348, 1964.
arXiv:1707.08197v2 [] 26 Sep 2017
Fast Label Extraction in the CDAWG
Djamal Belazzougui1 and Fabio Cunial2
1
2
DTISI, CERIST Research Center, Algiers, Algeria.
Max Planck Institute of Molecular Cell Biology and Genetics,
Dresden, Germany.
September 27, 2017
Abstract
The compact directed acyclic word graph (CDAWG) of a string T of
length n takes space proportional just to the number e of right extensions of the maximal repeats of T , and it is thus an appealing index
for highly repetitive datasets, like collections of genomes from similar
species, in which e grows significantly more slowly than n. We reduce
from O(m log log n) to O(m) the time needed to count the number of occurrences of a pattern of length m, using an existing data structure that
takes an amount of space proportional to the size of the CDAWG. This
implies a reduction from O(m log log n + occ) to O(m + occ) in the time
needed to locate all the occ occurrences of the pattern. We also reduce
from O(k log log n) to O(k) the time needed to read the k characters of the
label of an edge of the suffix tree of T , and we reduce from O(m log log n)
to O(m) the time needed to compute the matching statistics between a
query of length m and T , using an existing representation of the suffix
tree based on the CDAWG. All such improvements derive from extracting
the label of a vertex or of an arc of the CDAWG using a straight-line
program induced by the reversed CDAWG.
1
Introduction
Large, highly repetitive datasets of strings are the hallmark of the post-genomic
era, and locating and counting all the exact occurrences of a pattern in such
collections has become a fundamental primitive. Given a string T of length
n, the compressed suffix tree [15, 18] and the compressed suffix array can be
used for such purpose, and they achieve an amount of space bounded by the
k-th order empirical entropy of T . However, such measure of redundancy is
known not to be meaningful when T is very repetitive [10]. The space taken
by such compressed data structures also includes an o(n) term which can be
a practical bottleneck when T is very repetitive. Conversely, the size of the
compact directed acyclic word graph (CDAWG) of T is proportional just to the
number of maximal repeats of T and of their right extensions (defined in Section
2.2): this is a natural measure of redundancy for very repetitive strings, which
grows sublinearly with n in practice [2].
In previous work we described a data structure that takes an amount of space
proportional to the size eT of the CDAWG of T , and that counts all the occ
occurrences in T of a pattern of length m in O(m log log n) time, and reports
all such occurrences in O(m log log n + occ) time [2]. We also described a representation of the suffix tree of T that takes space proportional to the CDAWG
of T , and that supports, among other operations, reading the k characters of
the label of an edge of the suffix tree in O(k log log n) time, and computing the
matching statistics between a pattern of length m and T in O(m log log n) time.
In this paper we remove the dependency of such key operations on the length
n of the uncompressed, highly repetitive string, without increasing the space
taken by the corresponding data structures asymptotically. We achieve this by
dropping the run-length-encoded representation of the Burrows-Wheeler transform of T , used in [2], and by exploiting the fact that the reversed CDAWG
induces a context-free grammar that produces T and only T , as described in
[1]. A related grammar, already implicit in [6], has been concurrently exploited
in [21] to achieve similar bounds to ours. Note that in some strings, for example in the family Ti for i ≥ 0, where T0 = 0 and Ti = Ti−1 iTi−1 , the length
of the string grows exponentially in the size of the CDAWG, thus shaving an
O(log log n) term is identical to shaving an O(log eT ) term.
This work can be seen as a continuation of the research program, started in
[1, 2], of building a fully functional, repetition-aware representation of the suffix
tree based on the CDAWG.
2
Preliminaries
We work in the RAM model with word length at least log n bits, where n is the
length of a string that is implicit from the context. We index strings and arrays
starting from one. We call working space the maximum amount of memory that
an algorithm uses in addition to its input and its output.
2.1
Graphs
We assume the reader to be familiar with the notions of tree and of directed
acyclic graph (DAG). In this paper we only deal with ordered trees and DAGs,
in which there is a total order among the out-neighbors of every node. The
i-th leaf of a tree is its i-th leaf in depth-first order, and to every node v of a
tree we assign the compact integer interval [sp(v)..ep(v)], in depth-first order,
of all leaves that belong to the subtree rooted at v. In this paper we use the
expression DAG also for directed acyclic multigraphs, allowing distinct arcs to
have the same source and destination nodes. In what follows we consider just
DAGs with exactly one source and one sink. We denote by T (G) the tree
generated by DAG G with the following recursive procedure: the tree generated
by the sink of G consists of a single node; the tree generated by a node v of
G that is not the sink, consists of a node whose children are the roots of the
subtrees generated by the out-neighbors of v in G, taken in order. Note that:
(1) every node of T (G) is generated by exactly one node of G; (2) a node of G
different from the sink generates one or more internal nodes of T (G), and the
subtrees of T (G) rooted at all such nodes are isomorphic; (3) the sink of G can
generate one or more leaves of T (G); (4) there is a bijection, between the set of
root-to-leaf paths in T (G) and the set of source-to-sink paths in G, such that
every path v1 , . . . , vk in T (G) is mapped to a path v1′ , . . . , vk′ in G.
2.2
Strings
Let Σ = [1..σ] be an integer alphabet, let # = 0 ∉ Σ be a separator, and let
T = [1..σ]n−1 # be a string. Given a string W ∈ [1..σ]k , we call the reverse
of W the string W obtained by reading W from right to left. For a string
W ∈ [1..σ]k # we abuse notation, denoting by W the string W [1..k]#. Given
a substring W of T , let PT (W ) be the set of all starting positions of W in T .
A repeat W is a string that satisfies |PT (W )| > 1. We conventionally assume
that the empty string occurs n + 1 times in T , before the first character of T
and after every character of T , thus it is a repeat. We denote by ΣℓT (W ) the set
of left extensions of W , i.e. the set of characters {a ∈ [0..σ] : |PT (aW )| > 0}.
Symmetrically, we denote by ΣrT (W ) the set of right extensions of W , i.e. the
set of characters {b ∈ [0..σ] : |PT (W b)| > 0}. A repeat W is right-maximal
(respectively, left-maximal ) iff |ΣrT (W )| > 1 (respectively, iff |ΣℓT (W )| > 1). It
is well known that T can have at most n − 1 right-maximal repeats and at most
n − 1 left-maximal repeats. A maximal repeat of T is a repeat that is both
left- and right-maximal. Note that the empty string is a maximal repeat. A
near-supermaximal repeat is a maximal repeat with at least one occurrence that
is not contained in an occurrence of another maximal repeat (see e.g. [13]). A
minimal absent word of T is a string W that does not occur in T , but such that
any substring of W occurs in T . It is well known that a minimal absent word
W can be written as aV b, where a and b are characters and V is a maximal
repeat of T [8]. It is also well known that a maximal repeat W ∈ [1..σ]m of T
is the equivalence class of all the right-maximal strings {W [1..m], . . . , W [k..m]}
such that W [k + 1..m] is left-maximal, and W [i..m] is not left-maximal for all
i ∈ [2..k] (see e.g. [2]). By matching statistics of a string S with respect to T ,
we denote the array MSS,T [1..|S|] such that MSS,T [i] is the length of the longest
prefix of S[i..|S|] that occurs in T .
For reasons of space we assume the reader to be familiar with the notion of
suffix trie of T , as well as with the related notion of suffix tree STT = (V, E) of
T , which we do not define here. We denote by ℓ(γ), or equivalently by ℓ(u, v),
the label of edge γ = (u, v) ∈ E, and we denote by ℓ(v) the string label of node
v ∈ V . It is well known that a substring W of T is right-maximal iff W = ℓ(v)
for some internal node v of the suffix tree. Note that the label of an edge of STT
is itself a right-maximal substring of T , thus it is also the label of a node of STT .
We assume the reader to be familiar with the notion of suffix link connecting a
node v with ℓ(v) = aW for some a ∈ [0..σ] to a node w with ℓ(w) = W . Here
we just recall that inverting the direction of all suffix links yields the so-called
explicit Weiner links. Given an internal node v and a symbol a ∈ [0..σ], it might
happen that string aℓ(v) does occur in T , but that it is not right-maximal, i.e.
it is not the label of any internal node: all such left extensions of internal nodes
that end in the middle of an edge or at a leaf are called implicit Weiner links.
The suffix-link tree is the graph whose edges are the union of all explicit and
implicit Weiner links, and whose nodes are all the internal nodes of STT , as well
as additional nodes corresponding to the destinations of implicit Weiner links.
We call compact suffix-link tree the subgraph of the suffix-link tree induced by
maximal repeats.
We assume the reader to be familiar with the notion and uses of the Burrows-Wheeler transform of T . In this paper we use BWTT to denote the BWT
of T , and we use range(W ) = [sp(W )..ep(W )] to denote the lexicographic
interval of a string W in a BWT that is implicit from the context. For a
node v (respectively, for an edge e) of STT , we use the shortcut range(v) =
[sp(v)..ep(v)] (respectively, range(e) = [sp(e)..ep(e)]) to denote range(ℓ(v))
(respectively, range(ℓ(e))). We denote by rT the number of runs in BWTT , and
we call run-length encoded BWT (denoted by RLBWTT ) any representation of
BWTT that takes O(rT ) words of space, and that supports rank and select
operations (see e.g. [16, 17, 20]).
Finally, in this paper we consider only context-free grammars in which the
right-hand side of every production rule consists either of a single terminal, or
of at least two nonterminals. We denote by π(F ) the sequence of characters
produced by a nonterminal F of a context-free grammar. Every node in the
parse tree of F corresponds to an interval in π(F ). Given a nonterminal F and
an integer interval [i..j] ⊆ [1..|π(F )|], let a node of the parse tree from F be
marked iff its interval is contained in [i..j]. By blanket of [i..j] in F we denote
the set of all marked nodes in the parse tree of F . Clearly the blanket of [i..j]
in F contains O(j − i) nodes and edges.
2.3
CDAWG
The compact directed acyclic word graph of a string T (denoted by CDAWGT in
what follows) is the minimal compact automaton that recognizes all suffixes of
T [5, 9]. We denote by eT the number of arcs in CDAWGT , and by hT the length
of a longest path in CDAWGT . We remove subscripts when string T is implicit
from the context. The CDAWG of T can be seen as the minimization of STT ,
in which all leaves are merged to the same node (the sink) that represents T
itself, and in which all nodes except the sink are in one-to-one correspondence
with the maximal repeats of T [19]. Every arc of CDAWGT is labeled by a
substring of T , and the out-neighbors w1 , . . . , wk of every node v of CDAWGT
are sorted according to the lexicographic order of the distinct labels of arcs
(v, w1 ), . . . , (v, wk ). We denote again with ℓ(v) (respectively, with ℓ(γ)) the
label of a node v (respectively, of an arc γ) of CDAWGT .
Since there is a bijection between the nodes of CDAWGT and the maximal
repeats of T , and since every maximal repeat of T is the equivalence class of a
set of roots of isomorphic subtrees of STT , it follows that the node v of CDAWGT
with ℓ(v) = W is the equivalence class of the nodes {v1 , . . . , vk } of STT such
that ℓ(vi ) = W [i..m] for all i ∈ [1..k], and such that vk , vk−1 , . . . , v1 is a maximal
unary path in the suffix-link tree. The subtrees of STT rooted at all such nodes
are isomorphic, and T (CDAWGT ) = STT . It follows that a right-maximal string
can be identified by the maximal repeat W it belongs to, and by the length of
the corresponding suffix of W . Similarly, a suffix of T can be identified by a
length relative to the sink of CDAWGT .
The equivalence class of a maximal repeat is related to the equivalence classes
of its in-neighbors in the CDAWG in a specific way:
Property 1 ([2]). Let w be a node in the CDAWG with ℓ(w) = W ∈ [1..σ]m ,
and let Sw = {W [1..m], . . . , W [k..m]} be the right-maximal strings that belong
to the equivalence class of node w. Let {v 1 , . . . , v t } be the in-neighbors of w in
CDAWGT , and let {V 1 , . . . , V t } be their labels. Then, Sw is partitioned into t
disjoint sets Sw^1 , . . . , Sw^t such that Sw^i = {W [xi + 1..m], W [xi + 2..m], . . . , W [xi +
|Svi |..m]}, and the right-maximal string V i [p..|V i |] labels the parent of the locus
of the right-maximal string W [xi + p..m] in the suffix tree, for all p ∈ [1..|Svi |].
Property 1 partitions every maximal repeat of T into left-maximal factors,
and applied to the sink w of CDAWGT , it partitions T into t left-maximal factors,
where t is the number of in-neighbors of w, or equivalently the number of near-supermaximal repeats of T . Moreover, by Property 1, it is natural to say that in-neighbor v i of node w is smaller than in-neighbor v j of node w iff xi < xj , or equivalently if the strings in Sw^i are longer than the strings in Sw^j . We call
CDAWGT the ordered DAG obtained by applying this order to the reversed
CDAWGT , i.e. to the DAG obtained by inverting the direction of all arcs of
CDAWGT , and by labeling every arc (v, w), where w is the source of CDAWGT ,
with the first character of the string label of arc (w, v) in CDAWGT . Note
that some nodes of CDAWGT can have just one out-neighbor: for brevity we
denote by CDAWGT the graph obtained by collapsing every such node v, i.e. by
redirecting to the out-neighbor of v all the arcs directed to v, propagating to
such arcs the label of the out-neighbor of v, if any.
The source of CDAWGT is the sink of CDAWGT , which is the equivalence
class of all suffixes of T in string order. There is a bijection between the distinct
paths of CDAWGT and the suffixes of T ; thus, the i-th leaf of T (CDAWGT ) in
depth-first order corresponds to the i-th suffix of T in string order. Moreover,
the last arc in the source-to-sink path of CDAWGT that corresponds to suffix
T [i..|T |] is labeled by character T [i]. It follows that:
Property 2 ([1]). CDAWGT is a context-free grammar that generates T and
only T , and T (CDAWGT ) is its parse tree. Let v be a node of CDAWGT with t
in-neighbors, and let ℓ(v) = V W , where W is the longest proper suffix of ℓ(v)
that is a maximal repeat (if any). Then, v corresponds to a nonterminal F of the
grammar such that π(F ) = V = π(F1 ) · · · π(Ft ), and Fi are the nonterminals
that correspond to the in-neighbors of v, for all i ∈ [1..t].
Note that the nonterminals of this grammar correspond to unary paths in
the suffix-link tree of T , i.e. to edges in the suffix tree of T . This parallels the
grammar implicit in [6] and explicit in [21], whose nonterminals correspond to
unary paths in the suffix trie of T , i.e. to edges in the suffix tree of T .
2.4
Counting and Locating with the CDAWG
CDAWGT can be combined with RLBWTT to build a data structure that takes
O(eT ) words of space, and that counts all the occ occurrences of a pattern
P of length m in O(m log log n) time, and reports all such occurrences in
O(m log log n + occ) time [2].
Specifically, for every node v of the CDAWG, we store |ℓ(v)| in a variable
v.length. Recall that an arc (v, w) in the CDAWG means that maximal repeat
ℓ(w) can be obtained by extending maximal repeat ℓ(v) to the right and to the
left. Thus, for every arc γ = (v, w) of the CDAWG, we store the first character of
ℓ(γ) in a variable γ.char, and we store the length of the right extension implied
by γ in a variable γ.right. The length γ.left of the left extension implied by
γ can be computed by w.length − v.length − γ.right. For every arc of the
CDAWG that connects a maximal repeat W to the sink, we store just γ.char
and the starting position γ.pos of string W · γ.char in T . The total space used
by the CDAWG is O(eT ) words, and the number of runs in BWTT can be shown
to be O(eT ) as well [2].
We use the RLBWT to count the number of occurrences of P in T , in
O(m log log n) time: if this number is not zero, we use the CDAWG to report
all the occ occurrences of P in O(occ) time, using a technique already sketched
in [7]. Specifically, since we know that P occurs in T , we perform a blind search
for P in the CDAWG, as follows. We keep a variable i, initialized to zero, that
stores the length of the prefix of P that we have matched so far, and we keep
a variable j, initialized to one, that stores the starting position of P inside the
last maximal repeat encountered during the search. For every node v in the
CDAWG, we choose the arc γ such that γ.char = P [i + 1] in constant time
using hashing, we increment i by γ.right, and we increment j by γ.left. If
the search leads to the sink by an arc γ, we report γ.pos + j − 1 and we stop.
If the search ends at a node v that is associated with a maximal repeat W , we
determine all the occurrences of W in T by performing a depth-first traversal
of all nodes reachable from v in the CDAWG, updating variables i and j as
described above, and reporting γ.pos + j − 1 for every arc γ that leads to the
sink. Clearly the total number of nodes and arcs reachable from v is O(occ).
Note that performing the blind search for a pattern in the CDAWG is analogous to a descending walk on the suffix tree, thus we can compute the BWT
interval of every node of STT that we meet during the search, by storing in
every arc of the CDAWG a suitable offset between BWT intervals, as described
in the following property:
Property 3 ([2]). Let {W [1..m], . . . , W [k..m]} be the right-maximal strings that
belong to the equivalence class of maximal repeat W ∈ [1..σ]m of string T , and
Algorithm 1: Reading the first k characters of the string produced by a nonterminal F of a straight-line program represented as a DAG G. F corresponds to node u′ of G. Notation follows Lemma 1.

 1  S ← empty stack;
 2  S.push((u′, 0, 0));
 3  extracted ← 0;
 4  repeat
 5      t ← S.top;
 6      if t.lastChild < |t.node.outNeighbors| then
 7          t.lastChild ← t.lastChild + 1;
 8          v′ ← t.node.outNeighbors[t.lastChild];
 9          if v′ = G.sink then
10              print(label(t.node, v′));
11              extracted ← extracted + 1;
12          else if t.lastChild = 1 then
13              t.depth ← 1;
14              S.push((levelAncestor(t.node, t.depth), 0, t.depth));
15          else S.push((v′, 0, 0));
16      else
17          S.pop;
18          if S = ∅ then return extracted;
19          t ← S.top;
20          if t.depth < t.node.depth then t.depth ← t.depth + 1;
21          if t.depth < t.node.depth then
22              S.push((levelAncestor(t.node, t.depth), 1, t.depth));
23      end
24  until extracted = k;
25  return k;
let range(W [i..m]) = [pi ..qi ] for i ∈ [1..k]. Then |qi − pi + 1| = |qj − pj + 1|
for all i and j in [1..k]. Let c ∈ [0..σ], and let range(W [i..m]c) = [xi ..yi ] for
i ∈ [1..k]. Then, xi = pi + x1 − p1 and yi = pi + y1 − p1 .
Properties 1 and 3, among others, can be used to implement a number of
suffix tree operations in O(1) or O(log log n) time, using data structures that
take just O(eT ) or O(eT + eT ) words of space [1, 2]. Among other information,
such data structures store a pointer, from each node v of the CDAWG, to the
longest proper suffix of ℓ(v) (if any) that is a maximal repeat. Note that such
suffix pointers can be charged to suffix links in STT , thus they take overall
O(eT ) words of space.
3
Faster Count and Locate Queries in the CDAWG
In this paper we focus on deciding whether a pattern P occurs in T , a key step
in the blind search of Section 2.4. Rather than using the RLBWT for such
decision, we exploit Property 2 and use the grammar induced by CDAWGT .
Our methods will require a data structure, of size linear in the grammar,
that extracts in O(k) time the first k characters of the string produced by a
nonterminal. Previous research described an algorithm that extracts the whole
string produced by a nonterminal in linear time, using just constant working
space, by manipulating pointers in the grammar [12]. This solution does not
guarantee linear time when just a prefix of the string is extracted. A linear-size data structure with the stronger guarantee of constant-time extraction per
character has also been described [11], and this solution can be used as a black
box in our methods. However, since we just need amortized linear time, we
describe a significantly simpler alternative that needs just a level ancestor data
structure (an idea already implicit in [14]) and that will be useful in what follows:
Lemma 1. Let G = (V, E) be the DAG representation of a straight-line program. There is a data structure that: (1) given an integer k and a nonterminal F , allows one to read the first k characters of π(F ) in O(k) time and
O(min{k, h}) words of working space, where h is the height of the parse tree of
F ; (2) given a string S and a nonterminal F , allows one to compute the length
k of the longest prefix of S that matches a prefix of π(F ), in O(k) time and
O(min{k, h}) words of working space. Such data structure takes O(|V |) words
of space.
Proof. We mark the arc of G that connects each node v ′ to its first out-neighbor.
The set of all marked arcs induces a spanning tree τ of G, rooted at the sink
and arbitrarily ordered [11]. In what follows we identify the nodes of τ with
the corresponding nodes of G. We build a data structure that supports level
ancestor queries on τ : given a node v and an integer d, such data structure
returns the ancestor u of v in τ such that the path from the root of τ to u
contains exactly d edges. The level ancestor data structure described in [3, 4]
takes O(|V |) words of space and it answers queries in constant time. To read
the first k characters of string π(F ) = W , we explore the blanket of W [1..k] in
F recursively, as described in Algorithm 1. The tuples in the stack used by the
algorithm have the following fields: (node, lastChild, depth), where node is a
node of G, u′ .outNeighbors is the sorted list of out-neighbors of node u′ in G,
u′ .depth is the depth of u′ in τ , and function label(u′ , v ′ ) returns the character
that labels arc (u′ , v ′ ) in G. Algorithm 1 returns the number of characters read,
which might be smaller than k. A similar procedure can be used for computing
the length of the longest prefix of π(F ) that matches a prefix of a query string.
Every type of operation in Algorithm 1 takes constant time, it can be charged
to a distinct character in the output, and it pushes at most one element on the
stack. Thus, the stack contains O(k) tuples at every step of the algorithm. It
is also easy to see that the stack never contains more elements than the length
of the longest path from the node of G that corresponds to F to the sink.
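For intuition, here is a stripped-down Python sketch of the extraction (an added illustration: a plain depth-first traversal with an explicit stack, which omits the level-ancestor shortcut and therefore does not give the working-space and running-time guarantees of Lemma 1):

def first_k_chars(children, sink, start, k):
    # children[v]: ordered list of (w, c) pairs, where w is an out-neighbor of v
    # and c is the character labeling the arc if w is the sink (None otherwise).
    # start: node corresponding to the nonterminal F.  Returns the first k
    # characters of pi(F), or fewer if pi(F) is shorter than k.
    result = []
    stack = [(start, 0)]                  # (node, index of next out-arc to follow)
    while stack and len(result) < k:
        v, nxt = stack.pop()
        if nxt < len(children[v]):
            stack.append((v, nxt + 1))    # come back for the remaining out-arcs
            w, c = children[v][nxt]
            if w == sink:
                result.append(c)
            else:
                stack.append((w, 0))
    return "".join(result)

# Example: pi(A) = "ab", pi(F) = pi(A) pi(A) = "abab".
# children = {"F": [("A", None), ("A", None)],
#             "A": [("S", "a"), ("S", "b")], "S": []}
# first_k_chars(children, "S", "F", 3)  ->  "aba"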
If necessary, Algorithm 1 can be modified to take constant time per character:
Corollary 1. Let G = (V, E) be the DAG representation of a straight-line
program. There is a data structure that takes O(|V |) words of space and that,
given a nonterminal F , allows one to read the characters of π(F ), from left
to right, in constant time per character and in O(min{k, h}) words of working
space, where h is the height of the parse tree of F .
Proof. After having printed character i of π(F ), the time Algorithm 1 has
to wait before printing character i + 1 is always bounded by a constant, except when the procedure repeatedly pops tuples from the stack. This can be
avoided by preventively popping a tuple t for which t.lastChild has reached
|t.node.outNeighbors| after Line 7 is executed, before pushing new tuples on
the stack.
Moreover, Lemma 1 can be generalized to weighted DAGs, by storing in
each node of τ the sum of weights of all edges from the node to the root of
τ , by saving sums of weights in the tuples on the stack, and by summing and
subtracting the weights of the arcs of the DAG:
Corollary 2. Let G = (V, E) be an ordered DAG with a single sink and with
weights on the arcs, and let the weight of a path be the sum of weights of all its
arcs. There is a data structure that, given an integer k and a node v, reports
the weights of the first k paths from v to the sink in preorder, in constant time
per path and in O(min{k, h}) words of working space, where h is the length of
a longest path from v to the sink. Such data structure takes O(|V |) words of
space.
Lemma 1 is all we need to verify in linear time whether a pattern occurs in
the indexed text:
Theorem 1. Let T ∈ [1..σ]n be a string. There is a data structure that takes
O(eT ) words of space, and that counts (respectively, reports) all the occ occurrences of a pattern P ∈ [1..σ]m in O(m) time (respectively, in O(m + occ) time)
and in O(min{m, hT }) words of working space.
Proof. We assume that every node v ′ of CDAWGT stores in a variable v ′ .freq
the number of occurrences of ℓ(v ′ ) in T . Recall that, for a node v ′ of CDAWGT ,
ℓ(v ′ ) = π(F1 )π(F2 ) · · · π(Fk ) · W , where Fp for p ∈ [1..k] are nonterminals of the
grammar, and W is the maximal repeat that labels the node w′ of CDAWGT
that is reachable from v ′ by a suffix pointer. For each arc (u′ , v ′ ) of CDAWGT ,
we store a pointer to the nonterminal Fp of v ′ that corresponds to u′ . We
perform a blind search for P in CDAWGT as described in Section 2.4: either the
search is unsuccessful, or it returns a node v ′ of CDAWGT and an integer interval
[i..j] such that, if P occurs in T , then P = V [i..j] where V = ℓ(v ′ ), and the
number of occurrences of P in T is v ′ .freq. To decide whether P occurs in T ,
we reconstruct the characters in V [i..j] as follows (Figure 1a). Clearly i belongs
to a π(Fp ) for some p, and such Fp can be accessed in constant time using the
pointers described at the beginning of the proof. If i is the first position of π(Fp ),
we extract all characters of π(Fp ) by performing a linear-time traversal of the
parse tree of Fp . Otherwise, we extract the suffix of π(Fp ) in linear time using
Lemma 1. Note that j must belong to π(Fq ) for some q > p, since the search
reaches v ′ after right-extending a suffix of an in-neighbor u′ of v ′ that belongs to
the equivalence class of u′ (recall Property 1). We thus proceed symmetrically,
traversing the entire parse tree of Fp+1 . . . Fq−1 and finally extracting either
the entire π(Fq ) or a prefix. Finally, j could belong to W , in which case we
traverse the entire parse tree of Fp+1 . . . Fk and we recur on w′ , resetting j to
j − Σ_{x=1}^k |π(Fx)|. If the verification is successful, we proceed to locate all the
occurrences of P in T as described in Section 2.4.
Note that the data structure in Theorem 1 takes actually O(min{eT , eT })
words of space, since one could index either T or T for counting and locating.
Lemma 1 can also be used to report the top k occurrences of a pattern P in T ,
according to the popularity of the right-extensions of P in the corpus:
Corollary 3. Let P be a pattern, let P = {p1 , p2 , . . . , pm } be the set of all its
starting positions in a text T . Let sequence Q = q1 , q2 , . . . , qm be such that qi ∈ P
for all i ∈ [1..m], qi ≠ qj for all i ≠ j, and i < j iff T [qi ..|T |] is lexicographically
smaller than T [qj ..|T |]. Let sequence S = s1 , s2 , . . . , sm be such that si ∈ P for
all i ∈ [1..m], si ≠ sj for all i ≠ j, and i < j iff the frequency of T [si ..si + x]
in T is not smaller than the frequency of T [sj ..sj + x] in T (with ties broken
lexicographically), where x is the length of the longest common prefix between
T [si ..|T |] and T [sj ..|T |]. There is a data structure that allows one to return
the first k elements of sequence Q or S in constant time per element and in
O(min{k, hT }) words of working space. Such data structure takes O(eT ) words
of space.
Proof. Recall that Theorem 1 builds the spanning tree τ of Lemma 1 on the
reversed CDAWG that represents a straight-line program of T . To print Q,
we build τ and the corresponding level-ancestor data structure on CDAWGT ,
connecting each vertex of the CDAWG to its lexicographically smallest outneighbor, and storing in each node of τ the sum of lengths of all edges from the
node to the root of τ . Given the locus v ′ of P in CDAWGT , we can print the first
k elements of Q in O(k) time and in O(min{k, h}) words of space, where h is the
length of a longest path from v ′ to the sink of CDAWGT , by using Corollary 2.
To print S we add to each node of CDAWGT an additional list of children, sorted
by nondecreasing frequency with ties broken lexicographically, and we build the
spanning tree τ by connecting each vertex of CDAWGT to its first out-neighbor
in such new list.
Finally, Theorem 1 allows one to reconstruct the label of any arc of the
CDAWG, in linear time in the length k of such label. This improves the
O(k log log n) bound described in [2], where n is the length of the uncompressed
text, and it removes the eT term from the space complexity, since RLBWTT is
not needed.
Figure 1: (a) The verification step of pattern search, implemented with the
CDAWG. Notation follows Theorem 1. (b) Reconstructing the label of an arc
of the CDAWG. Notation follows Theorem 2.
Theorem 2. There is a data structure that allows one to read the k characters
of the label of an arc (v ′ , w′ ) of CDAWGT , in O(k) time and in O(min{k, hT })
words of working space. Such data structure takes O(eT ) words of space.
Proof. Recall that every arc (v ′ , w′ ) that does not point to the sink of CDAWGT
is a right-maximal substring of T . If it is also a maximal repeat, then we can
already reconstruct it as described in Theorem 1, storing a pointer to such maximal repeat, starting extraction from the first nonterminal of the maximal repeat,
and recurring to the maximal repeat reachable from its suffix pointer. Otherwise, let W = ℓ(w′ ) = V U , where U is the maximal repeat that corresponds to
the node u′ reachable from the suffix pointer of w′ , and let V = π(F1 ) · · · π(Fk )
where Fp for p ∈ [1..k] are nonterminals in the grammar. The label of (v ′ , w′ )
coincides with suffix W [i..|W |], and its length is stored in the index.
If i ≤ |V |, let V [i..|V |] = X · π(Fp+1 ) · · · π(Fk ) for some p. To reconstruct
U , we traverse the whole parse tree of Fk , Fk−1 , . . . , Fp+1 , and we reconstruct
the suffix of length |X| of π(Fp ) using Lemma 1. Otherwise, if i > |V |, we
could recur to U , resetting i to i − |V | (Figure 1b). Let U = V ′ U ′ , where U ′
is the maximal repeat that corresponds to the node reachable from the suffix
pointer of u′ . Note that it could still happen that i > |V ′ |, thus we might need
to follow a sequence of suffix pointers. During the construction of the index, we
store with arc (v ′ , w′ ) a pointer to the first maximal repeat t′ , in the sequence of
suffix pointers from w′ , such that |ℓ(t′ )| ≥ |ℓ(v ′ , w′ )|, and such that the length
of the longest proper suffix of ℓ(t′ ) that is a maximal repeat is either zero or
smaller than |ℓ(v ′ , w′ )|. To reconstruct ℓ(v ′ , w′ ), we just follow such pointer and
proceed as described above.
Reading the label of an arc that is directed to the sink of CDAWGT can be
implemented in a similar way: we leave the details to the reader.
We can also read the label of an arc (v ′ , w′ ) from right to left, with the
stronger guarantee of taking constant time per character:
Corollary 4. There is a data structure that allows one to read the k characters
of the label of an arc (v ′ , w′ ) of CDAWGT , from right to left, in constant time per
character and in O(min{k, hT }) words of working space. Such data structure
takes O(eT ) words of space.
Proof. We proceed as in Theorem 2, but we also keep the tree τ of explicit
Weiner links from every node of CDAWGT , imposing an arbitrary order on the
children of every node t of τ , and we build a data structure that supports level
ancestor queries on τ . As in Theorem 2, we move to a maximal repeat u′ such
that |ℓ(u′ )| ≥ |ℓ(v ′ , w′ )|, and such that the length of the longest proper suffix of
ℓ(u′ ) that is a maximal repeat is either zero or smaller than |ℓ(v ′ , w′ )|. Then, we
move to node x′ = levelAncestor(u′ , 1), we reconstruct ℓ(x′ ) from right to left
using Corollary 1, and we use levelAncestor(u′ , 2) to follow an explicit Weiner
link from x′ . After a sequence of such explicit Weiner links we are back to u′ ,
and we reconstruct from right to left the prefix of ℓ(v ′ , w′ ) that does not belong
to the longest suffix of ℓ(u′ ) that is a maximal repeat, using again Corollary
1.
Since the label of arc (v ′ , w′ ) is a suffix of ℓ(w′ ), and since the label of
every node w′ of the CDAWG can be represented as π(F ) · ℓ(u′ ), where F is
a nonterminal of the grammar and u′ is the longest suffix of ℓ(w′ ) that is a
maximal repeat, we could implement Corollary 4 by adding to the grammar
the nonterminals W ′ and U ′ and a new production W ′ → F U ′ for nodes w′
and u′ , and by using Corollary 1 for extraction. This does not increase the size
of the grammar asymptotically. Note that the subgraph induced by the new
nonterminals in the modified grammar is the reverse of the compact suffix-link
tree of T .
4 Faster Matching Statistics in the CDAWG
A number of applications, including matching statistics, require reading the
label of an arc from left to right: this is not straightforward using the techniques
we described, since the label of an arc (v ′ , w′ ) can start e.g. in the middle of one
of the nonterminals of w′ rather than at the beginning of one such nonterminal
(see Figure 1b). We circumvent the need for reading the characters of the label
of an arc from left to right in matching statistics, by applying the algorithm in
Theorem 1 to prefixes of the pattern of exponentially increasing length:
Lemma 2. There is a data structure that, given a string S and an arc (v ′ , w′ )
of CDAWGT , allows one to compute the length k of the longest prefix of S that
matches a prefix of the label of (v ′ , w′ ), in O(k) time and in O(min{k, hT })
words of working space. Such data structure takes O(eT ) words of space.
Proof. Let γ = (v ′ , w′ ). If ℓ(γ) is a maximal repeat of T , we can already read
its characters from left to right by applying Theorem 1. Otherwise, we perform
a doubling search over the prefixes of S, testing iteratively whether S[1..2i ]
matches a prefix of ℓ(γ) for increasing integers i, and stopping when S[1..2i ]
does not match a prefix of ℓ(γ). We perform a linear amount of work in the
length of each prefix, thus a linear amount of total work in the length of the
longest prefix of S that matches a prefix of ℓ(γ).
We determine whether S[1..2i ] is a prefix of ℓ(γ) as follows. Recall that
an arc of CDAWGT (or equivalently of STT ) is a right-maximal substring of
T , therefore it is also a node of STT . We store for each arc γ of CDAWGT
the interval range(γ) of the corresponding string in BWTT . Given S[1..2i ], we
perform a blind search on the CDAWG, simulating a blind search on STT and
using Property 3 to keep the BWT intervals of the corresponding nodes of STT
that we meet. We stop at the node v of the suffix tree at which the blind search
fails, or at the first node whose interval does not contain range(γ) (in which
case we reset v to its parent), or at the last node reached by a successful blind
search in which the BWT intervals of all traversed nodes contain range(γ). In
the first two cases, we know that the longest prefix of S that matches ℓ(γ) has
length smaller than 2i . Then, we read (but don’t explicitly store) the label of
v in linear time as described in Theorem 1, finding the position of the leftmost
mismatch with S[1..2i ], if any.
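The doubling strategy above can be phrased independently of the CDAWG machinery. The following Python sketch only illustrates the search pattern, under the assumption of a hypothetical oracle match_length_upto (standing in for the blind search plus label extraction of the proof, each of which costs time linear in the tested prefix length); it is not the data structure itself.

```python
def longest_match_by_doubling(S, match_length_upto):
    # Sketch of the doubling search from the proof of Lemma 2 (assumptions above).
    # match_length_upto(p) is a hypothetical oracle returning how many of the first
    # p characters of S match a prefix of the arc label; prefix matching is monotone,
    # so once a tested prefix fails we already know the exact answer.
    i = 0
    while True:
        p = min(2 ** i, len(S))
        matched = match_length_upto(p)
        if matched < p or p == len(S):
            # Either a mismatch occurred inside S[1..p], or all of S matched.
            return matched
        i += 1
```

Since the tested prefix lengths grow geometrically and each test costs time linear in the tested length, the total work is linear in the returned value, mirroring the O(k) bound of Lemma 2.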
Lemma 2 is all we need to implement matching statistics with the CDAWG:
Theorem 3. There is a data structure that takes O(eT ) words of space, and
that allows one to compute MSS,T in O(|S|) time and in O(min{µ, hT }) words
of working space, where µ is the largest number in MSS,T .
Proof. We fill array MSS,T from left to right, by implementing with CDAWGT the
classical matching statistics algorithm based on suffix link and child operations
on the suffix tree. Assume that we have computed MSS,T [1..i] for some i. Let
c = S[i + MSS,T [i]] and let U = S[i..i + MSS,T [i] − 1] = V X, where V is the
longest prefix of U that is right-maximal in T , and v is the node of STT with
label V . Assume that we know v and the node v ′ of CDAWGT that corresponds
to the equivalence class of v. Let w′ be the node of CDAWGT that corresponds
to the longest suffix of ℓ(v ′ ) that is a maximal repeat of T . If |ℓ(v)| > |ℓ(w′ )|+ 1,
then MSS,T [i + 1] = MSS,T [i] − 1, since no suffix of U longer than |ℓ(w′ )| + |X|
can be followed by character c. Otherwise, we move to w′ in constant time by
following the suffix pointer of v ′ , and we perform a blind search for X from
w′ . Let ℓ(w′ )X = ZX ′ , where Z = ℓ(z) is the longest prefix of ℓ(w′ )X that is
right-maximal in T , and let z ′ be the node of the CDAWG that corresponds to
the equivalence class of z. If |X ′ | > 0, or if no arc from z ′ is labeled by c, then
again MSS,T [i + 1] = MSS,T [i] − 1. Otherwise, we use Lemma 2 to compute the
length of the longest prefix of S[i + MSS,T [i]..|S|] that matches a prefix of the
arc from z ′ labeled by c. The claimed time complexity comes from Lemma 2
and from standard amortization arguments used in matching statistics.
Note that the data structure in Theorem 3 actually takes O(min{eT , eT̄ }) words of space, where T̄ denotes the reverse of T , since one could index either T or T̄ for computing the matching statistics vector (in the latter case, S is read from right to left).
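For concreteness, the following Python sketch pins down the quantity MSS,T that Theorem 3 computes: MS[i] is the length of the longest prefix of S[i..] that occurs as a substring of T. It is only a slow brute-force reference for the definition, not the CDAWG-based O(|S|)-time algorithm of the theorem.

```python
def matching_statistics(S, T):
    # MS[i] = length of the longest prefix of S[i:] that occurs as a substring of T.
    # Brute-force reference: binary search over the prefix length works because
    # "S[i:i+m] occurs in T" is monotone in m.
    MS = []
    for i in range(len(S)):
        lo, hi = 0, len(S) - i
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if S[i:i + mid] in T:
                lo = mid
            else:
                hi = mid - 1
        MS.append(lo)
    return MS

# Example: matching_statistics("banana", "bandana") == [3, 3, 2, 3, 2, 1]
```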
Another consequence of Property 2 is that we can compute the minimal
absent words of T using an index of size proportional just to the number of
maximal repeats of T and of their extensions:
Lemma 3. There is a data structure that takes O(eT + eT̄ ) words of space, and that allows one to compute the minimal absent words of T in O(eT + eT̄ + out)
time and in O(λT + min{µT , hT }) words of working space, where out is the size
of the output, λT is the maximum number of left extensions of a maximal repeat
of T , and µT is the length of a longest maximal repeat of T .
Proof. For every arc γ = (v ′ , w′ ) of CDAWGT , we store in a variable γ.order
the order of v ′ among the in-neighbors of w′ induced by Property 1 and used
in CDAWGT (see Section 2.3), and we store in a variable γ.previousChar the
character a, if any, such that aℓ(v ′ )b is a substring of ℓ(w′ ) and b = γ.char is
the first character of ℓ(γ).
Then, we traverse every node v ′ of CDAWGT , and we scan every arc γ = (v ′ , w′ ). If γ.order > 1, then ℓ(v ′ )b, where b = γ.char, is always preceded
by γ.previousChar in T , thus we print aℓ(v ′ )b to the output for all a that
label explicit and implicit Weiner links from v ′ and that are different from
γ.previousChar. If γ.order = 1 then ℓ(v ′ )b is a left-maximal substring of T , so
we subtract the set of all Weiner links of w′ from the set of all Weiner links of
v ′ by a linear scan of their sorted lists, and we print aℓ(v ′ )b to the output for all
characters a in the resulting list. Note that the same Weiner link of v ′ could be
read multiple times, for multiple out-neighbors w′ of v ′ . However, every such
access can be charged either to the output or to a corresponding Weiner link
from w′ , and each w′ takes part in at most one such subtraction. It follows that
the time taken by all list subtractions is O(eT + out).
We reconstruct each ℓ(v ′ ) in linear time as described in Theorem 1.
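As a sanity check of what Lemma 3 outputs, here is a hedged brute-force reference in Python based on the characterization used in the proof: a word aUb (with a, b single characters) is a minimal absent word of T exactly when aU and Ub occur in T but aUb does not. It enumerates substrings explicitly and is nowhere near the stated time bound; it only illustrates the definition.

```python
def minimal_absent_words(T, alphabet):
    # Brute-force reference for the minimal absent words of T over `alphabet`
    # (the paper achieves this efficiently via the CDAWG and Weiner links).
    # A word W is minimal absent if W does not occur in T but every proper
    # substring of W does; for |W| >= 2 this is equivalent to: W = a U b with
    # aU and Ub occurring in T while aUb does not.
    substrings = {T[i:j] for i in range(len(T)) for j in range(i, len(T) + 1)}
    result = {a for a in alphabet if a not in substrings}  # length-1 absent words
    for U in substrings:
        for a in alphabet:
            for b in alphabet:
                if a + U in substrings and U + b in substrings and a + U + b not in substrings:
                    result.add(a + U + b)
    return result

# Example: minimal_absent_words("abaab", "ab") == {"bb", "aaa", "bab", "aaba"}
```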
Acknowledgements
We thank the anonymous reviewers for simplifying some parts of the paper, for
improving its overall clarity, and for suggesting references [11, 12, 14] and the
current version of Lemma 3.
References
[1] Djamal Belazzougui and Fabio Cunial. Representing the suffix tree with the
CDAWG. In CPM 2017, volume 78 of Leibniz International Proceedings in
Informatics (LIPIcs), pages 7:1–7:13. Schloss Dagstuhl–Leibniz-Zentrum
fuer Informatik, 2017.
[2] Djamal Belazzougui, Fabio Cunial, Travis Gagie, Nicola Prezza, and Mathieu Raffinot. Composite repetition-aware data structures. In CPM 2015,
Lecture Notes in Computer Science, pages 26–39. Springer, 2015.
[3] Michael A. Bender and Martín Farach-Colton. The level ancestor problem
simplified. Theoretical Computer Science, 321(1):5–12, 2004.
[4] Omer Berkman and Uzi Vishkin. Finding level-ancestors in trees. Journal
of Computer and System Sciences, 48(2):214–230, 1994.
[5] Anselm Blumer, Janet Blumer, David Haussler, Ross McConnell, and Andrzej Ehrenfeucht. Complete inverted files for efficient text retrieval and
analysis. Journal of the ACM, 34(3):578–595, 1987.
[6] Maxime Crochemore, Chiara Epifanio, Roberto Grossi, and Filippo
Mignosi. Linear-size suffix tries. Theoretical Computer Science, 638:171–
178, 2016.
[7] Maxime Crochemore and Christophe Hancart. Automata for matching
patterns. In Handbook of Formal Languages, pages 399–462. Springer, 1997.
[8] Maxime Crochemore, Filippo Mignosi, and Antonio Restivo. Automata
and forbidden words. Information Processing Letters, 67(3):111–117, 1998.
[9] Maxime Crochemore and Renaud Vérin. Direct construction of compact
directed acyclic word graphs. In CPM 1997, volume 1264 of Lecture Notes
in Computer Science, pages 116–129. Springer, 1997.
[10] Travis Gagie. Large alphabets and incompressibility. Information Processing Letters, 99(6):246–251, 2006.
[11] Leszek Gasieniec, Roman M. Kolpakov, Igor Potapov, and Paul Sant. Real-time traversal in grammar-based compressed files. In DCC 2005, page 458,
2005.
[12] Leszek Gasieniec and Igor Potapov. Time/space efficient compressed pattern matching. Fundamenta Informaticae, 56(1-2):137–154, 2003.
[13] Dan Gusfield. Algorithms on strings, trees and sequences: computer science
and computational biology. Cambridge University Press, 1997.
[14] Markus Lohrey, Sebastian Maneth, and Carl Philipp Reh. Traversing
grammar-compressed trees with constant delay. In DCC 2016, pages 546–
555, 2016.
[15] Luís S. Russo, Gonzalo Navarro, and Arlindo L. Oliveira. Fully-compressed
suffix trees. ACM Transactions on Algorithms, 7(4):53, 2011.
[16] Veli Mäkinen and Gonzalo Navarro. Succinct suffix arrays based on run-length encoding. In CPM 2005, Lecture Notes in Computer Science, pages
45–56. Springer, 2005.
[17] Veli Mäkinen, Gonzalo Navarro, Jouni Sirén, and Niko Välimäki. Storage
and retrieval of highly repetitive sequence collections. Journal of Computational Biology, 17(3):281–308, 2010.
[18] Gonzalo Navarro and Luis MS Russo. Fast fully-compressed suffix trees.
In DCC 2014, pages 283–291. IEEE, 2014.
[19] Mathieu Raffinot. On maximal repeats in strings. Information Processing
Letters, 80(3):165–169, 2001.
[20] Jouni Sirén, Niko Välimäki, Veli Mäkinen, and Gonzalo Navarro. Run-length compressed indexes are superior for highly repetitive sequence collections. In SPIRE 2008, Lecture Notes in Computer Science, pages 164–175,
2008.
[21] Takuya Takagi, Keisuke Goto, Yuta Fujishige, Shunsuke Inenaga, and Hiroki Arimura. Linear-size CDAWG: new repetition-aware indexing and
grammar compression. In String Processing and Information Retrieval 24th International Symposium, SPIRE 2017, Palermo, Italy, September
26-29, 2017, Proceedings, pages 304–316, 2017. arXiv:1705.09779.
Synchronization Strings: Explicit Constructions, Local Decoding,
and Applications∗.
Bernhard Haeupler
Carnegie Mellon University
[email protected]
Amirbehshad Shahrasbi
Carnegie Mellon University
[email protected]
Abstract
This paper gives new results for synchronization strings, a powerful combinatorial object
that allows one to efficiently deal with insertions and deletions in various communication problems:
• We give a deterministic, linear time synchronization string construction, improving over an O(n5 ) time randomized construction. Independently of this work, a deterministic O(n log2 log n) time construction was just put on arXiv by Cheng, Li, and Wu.
• We give a deterministic construction of an infinite synchronization string which
outputs the first n symbols in O(n) time. Previously it was not known whether such a
string was computable.
• Both synchronization string constructions are highly explicit, i.e., the ith symbol can be
deterministically computed in O(log i) time.
• This paper also introduces a generalized notion we call long-distance synchronization
strings. Such strings allow for local and very fast decoding. In particular only
O(log3 n) time and access to logarithmically many symbols is required to decode any
index.
The paper also provides several applications for these improved synchronization strings:
• For any δ < 1 and ε > 0 we provide an insdel error correcting block code with rate
1 − δ − ε which can correct any O(δ) fraction of insertion and deletion errors in O(n log3 n)
time. This near linear computational efficiency is surprising given that we do not
even know how to compute the (edit) distance between the decoding input and output in
sub-quadratic time.
• We show that local decodability implies that error correcting codes constructed with long-distance synchronization strings can not only efficiently recover from a δ fraction of insdel
errors but, similar to [Schulman, Zuckerman; TransInf’99], also from any O(δ/ log n) fraction of block transpositions and block replications. These block corruptions allow
arbitrarily long substrings to be swapped or replicated anywhere.
• We show that high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements.
These simulations can then be used to give the first near linear time interactive coding scheme for insdel errors, similar to the result of [Brakerski, Naor; SODA’13] for
Hamming errors.
∗ Supported in part by the National Science Foundation through grants CCF-1527110 and CCF-1618280.
1 Introduction
This paper gives new results for ε-synchronization strings, a powerful combinatorial object that
can be used to effectively deal with insertions and deletions in various communication problems.
Synchronization strings are pseudo-random non-self-similar sequences of symbols over some
finite alphabet that can be used to index a finite or infinite sequence of elements similar to the trivial
indexing sequence 1, 2, 3, 4, . . . , n. In particular, if one first indexes a sequence of n elements with
the trivial indexing sequence and then applies some k insertions or deletions of indexed elements
one can still easily recover the original sequence of elements up to k half-errors, i.e., erasures or
substitutions (where substitutions count twice). An ε-synchronization string allows essentially the
same up to an arbitrarily small error of εn half-errors but instead of having indexing symbols from
a large alphabet of size n, which grows with the length of the sequence, a finite alphabet size of
ε−O(1) suffices for ε-synchronization strings. Often this allows to efficiently transform insertion
and deletion errors into ordinary Hamming errors which are much better understood and easier to
handle.
One powerful application of synchronization strings is the design of efficient insdel error correcting codes (ECC), i.e., codes that can efficiently correct insertions and deletions. While codes
for Hamming errors have been well understood, making progress on insdel codes has been difficult [12, 15, 17, 23, 24, 30]. Synchronization strings solve this problem by transforming any regular
error correcting block code C with a sufficiently large finite alphabet into an essentially equally
efficient insdel code by simply indexing the symbols of C. This leads to the first insdel codes that
approach the Singleton bound, i.e., for any δ < 1 and ε > 0 one can get an insdel code with rate
1 − δ − ε which, in quadratic time, recovers from any δ fraction of insertions or deletions. Further
applications are given in [20]. Most importantly, [20] introduces the notion of a channel simulation
which allows one to use any insertion deletion channel like a black-box regular symbol corruption
channel with an slightly increased error rate. This can be used to give the first computationally
efficient interactive coding schemes for insdel errors and the first interactive coding scheme for
insdel errors whose communication rate goes to one as the amount of noise goes to zero.
This paper provides drastically improved constructions of finite and infinite synchronization
strings and a stronger synchronization string property which allows for decoding algorithms that
are local and significantly faster. We furthermore give several applications for these results, including near linear time insertion-deletion codes, a near linear time coding scheme for interactive
communication over insertion-deletion channels, exponentially better channel simulations in terms
of time and memory, infinite channel simulations, and codes that can correct block transposition
and block replication corruptions.
2 Our Results, Structure of this Paper, and Related Work
Next we give an overview over the main results and the overall structure of this paper. We also
put our result in relation to related prior works.
2.1 Deterministic, Linear Time, Highly Explicit Construction of Infinite Synchronization Strings
In [19] the authors introduced synchronization strings and gave an O(n5 ) time randomized synchronization string construction. This construction could not be easily derandomized. In order to provide deterministic explicit constructions of insertion deletion block codes, [19] introduced a strictly
weaker notion called self-matching strings, showed that these strings could be used for code constructions as well, and gave a deterministic nO(1) time self-matching string construction. Obtaining
a deterministic synchronization string construction was left open. [19] also showed the existence
of infinite synchronization strings. This existence proof however is highly non-constructive. In fact
even the existence of a computable infinite synchronization string was left open; i.e., up to this
paper there was no algorithm that would compute the ith symbol of some infinite synchronization
string in finite time.
In this paper we give deterministic constructions of finite and infinite synchronization strings.
Instead of going to a weaker notion, as done in [19], Section 4.1 introduces a stronger notion
called long-distance synchronization strings. Interestingly, while the existence of these generalized synchronization strings can be shown with a similar Lovasz local lemma based proof as for
plain synchronization strings, this proof allows for an easier derandomization, which leads to a
deterministic polynomial time construction of (long-distance) synchronization strings.
Beyond this derandomization the notion of long-distance synchronization strings turns out to be
very useful and interesting in its own right, as will be shown later.
Next, two different boosting procedures, which make synchronization string constructions faster
and more explicit, are given. The first boosting procedure, given in Section 4.3, leads to a deterministic linear time synchronization string construction. We remark that concurrently and
independently Cheng, Li, and Wu obtained a deterministic O(n log2 log n) time synchronization
string construction [8].
Our second boosting step, which is introduced in Section 4.4, makes our synchronization string construction highly explicit, i.e., it allows one to compute any position of an n-long synchronization string in time O(log n). This high explicitness is a property of crucial importance in most of
our new applications.
Lastly, in Section 4.5 we give a simple transformation which allows one to use any construction for finite length synchronization strings to obtain a construction of an infinite synchronization string. This transformation preserves high explicitness. Infinite synchronization strings are important for applications in which one has no a priori bound on the running time of a system, such as streaming codes, channel simulations, and some interactive coding schemes.
Overall we get the following simple to state theorem:
Theorem 2.1. For any ε > 0 there exists an infinite ε-synchronization string S over an alphabet of
size ε−O(1) and a deterministic algorithm which for any i takes O(log i) time to compute S[i, i+log i],
i.e., the ith symbol of S (as well as the next log i symbols).
Since any substring of an ε-synchronization string is also an ε-synchronization string itself this
infinite synchronization string construction also implies a deterministic linear time construction of
finite synchronization strings which is fully parallelizable. In particular, for any n there is a linear
work parallel NC1 algorithm with depth O(log n) and O(n/ log n) processors which computes the
ε-synchronization string S[1, n].
2.2 Long Distance Synchronization Strings and Fast Local Decoding
Section 5 shows that the long-distance property we introduced in Section 4.1, together with our
highly explicit constructions from Section 4.3, allows the design of a much faster and highly local
decoding procedure. In particular, to decode the index of an element in a stream that was indexed
with a synchronization string, it suffices to look at only the O(log n) previously received symbols.
The decoding of the index itself furthermore takes only O(log3 n) time and can be done in a
2
streaming fashion. This is significantly faster than the O(n3 ) streaming decoder or the O(n2 )
global decoder given in [19].
The paper furthermore gives several applications which demonstrate the power of these improved
synchronization string constructions and the local decoding procedure.
2.3 Application: Codes Against Insdels, Block Transpositions and Replications
2.3.1 Near Linear Time Decodable Error Correcting Codes
Fast encoding and decoding procedures for error correcting codes have been important and influential in both theory and practice. For regular error correcting block codes, the celebrated expander
code framework given by Sipser and Spielman [29] and in Spielman’s thesis [31] as well as later refinements by Alon, Edmonds, and Luby [1] as well as Guruswami and Indyk [13,14] gave good ECCs
with linear time encoding and decoding procedures. Very recently a beautiful work by Hemenway,
Ron-Zewi, and Wooters [21] achieved linear time decoding also for capacity achieving list decodable
and locally list recoverable codes.
The synchronization string based insdel codes in [19] have linear encoding times but quadratic
decoding times. As pointed out in [19], the latter seemed almost inherent to the harsher setting
of insdel errors because “in contrast to Hamming codes, even computing the distance between the
received and the sent/decoded string is an edit distance computation. Edit distance computations in
general do usually not run in sub-quadratic time, which is not surprising given the recent SETH-conditional lower bounds [2]”. Very surprisingly to us, our fast decoding procedure allows us to
construct insdel codes with near linear decoding complexity:
Theorem 2.2. For any δ < 1 and ε > 0 there exists an insdel error correcting block code
with rate 1 − δ − ε that can correct any O(δ) fraction of insertions and deletions in O(n log3 n) time. The encoding time is linear and the alphabet bit size is near linear in 1/(δ + ε).
.
Note that for any input string the decoder finds the codeword that is closest to it in edit
distance, if a codeword with edit distance of at most O(δn) exists. However, computing the distance
between the input string and the codeword output by the decoder is an edit distance computation.
Shockingly, even now, we do not know of any sub-quadratic algorithm that can compute or even
crudely approximate this distance between input and output of our decoder, even though intuitively
this seems to be a much easier, almost prerequisite, step for the distance-minimizing decoding problem
itself. After all, decoding asks to find the closest (or a close) codeword to the input from an
exponentially large set of codewords, which seems hard to do if one cannot even approximate the
distance between the input and any particular codeword.
2.3.2 Application: High-Rate InsDel Codes that Efficiently Correct Block Transpositions and Replications
Section 6.2 gives another interesting application of our local decoding procedure. In particular, we
show that local decodability directly implies that insdel ECCs constructed with our highly-explicit
long-distance synchronization strings can not just efficiently recover from δ fraction of insdel errors
but also from any O(δ/ log n) fraction of block transpositions and block replications. Block
transpositions allow for arbitrarily long substrings to be swapped while a block replication allows
for an arbitrarily long substring to be duplicated and inserted anywhere else. A similar result, albeit
for block transpositions only, was shown by Schulman, Zuckerman [27] for the efficient constant
distance constant rate insdel codes given by them. They also show that the O(δ/ log n) resilience
against block errors is optimal up to constants.
2.4 Application: Exponentially More Efficient Infinite Channel Simulations
[20] introduced the powerful notion of a channel simulation. In particular, [20] showed that for
any adversarial one-way or two-way insdel channel one can put a simple black-box at both ends
such that to any two parties interacting with these black-boxes the behavior is indistinguishable
from a much nicer Hamming channel which only introduces (a slightly larger fraction of) erasures
and symbol corruptions. To achieve this, these black-boxes were required to know a priori for how many steps T the channel would be used, and they required an amount of memory that is linear in
T . Furthermore, for each transmission at a time step t the receiving black-box would perform a
O(t3 ) time computation. We show that using our locally decodable highly explicit long-distance
synchronization strings can reduce both the memory requirement and the computation complexity
exponentially. In particular each box is only required to have O(log t) bits of memory (which is
optimal because at the very least it needs to store the current time) and any computation can
be done in O(log3 t) rounds. Furthermore due to our infinite synchronization string constructions
the channel simulation black-boxes are no longer required to know in advance for how much time overall
the channel will be used. These drastic improvements make channel simulations significantly more
useful and indeed potentially quite practical.
2.5 Application: Near-Linear Time Interactive Coding Schemes for InsDel Errors
Interactive coding schemes, as introduced by Schulman [25, 26], allow one to add redundancy to any
interactive protocol between two parties in such a way that the resulting protocol becomes robust
to noise in the communication. Interactive coding schemes that are robust to symbol corruptions
have been intensely studied over the last few years [3, 4, 6, 9–11, 18, 22]. Similar to error correcting
codes, the main parameters of an interactive coding scheme are the fraction of errors it can tolerate [6, 9, 25, 26], its communication rate [18, 22], and its computational efficiency [3, 4, 10, 11]. In particular,
Brakerski and Kalai [3] gave the first computationally efficient polynomial time interactive coding
scheme. Brakerski and Naor [4] improved the complexity to near linear. Lastly, Ghaffari and
Haeupler [11] gave a near-linear time interactive coding scheme that also achieved the optimal
maximal robustness. More recently interactive coding schemes that are robust to insertions and
deletions have been introduced by Braverman, Gelles, Mao, and Ostrovsky [5] subsequently Sherstov
and Wu [28] gave a scheme with optimal error tolerance and Haeupler, Shahrasbi, and Vitercik [20]
used channel simulations to give the first computationally efficient polynomial time interactive
coding scheme for insdel errors. Our improved channel simulation can be used together with
the coding scheme from [11] to directly get the first interactive coding scheme for insertions and
deletions with a near linear time complexity - i.e., the equivalent of the result of Brakerski and
Naor [4] but for insertions and deletions.
3 Definitions and Preliminaries
In this section, we provide the notation and definitions we will use throughout the rest of the paper.
We also briefly review key definitions and techniques from [19, 20].
3.1 String Notation
String Notation. Let S ∈ Σ^n and S′ ∈ Σ^{n′} be two strings over alphabet Σ. We define S · S′ ∈ Σ^{n+n′} to be their concatenation. For any positive integer k we define S^k to equal k copies of S concatenated together. For i, j ∈ {1, . . . , n}, we denote the substring of S from the ith index through and including the jth index as S[i, j]. Such a consecutive substring is also called a factor of S. For i < 1 we define S[i, j] = ⊥^{−i+1} · S[1, j], where ⊥ is a special symbol not contained in Σ. We refer to the substring from the ith index through, but not including, the jth index as S[i, j). The substrings S(i, j] and S(i, j) are similarly defined. S[i] denotes the ith symbol of S and |S| = n is the length of S. Occasionally, the alphabets we use are the cross-product of several alphabets, i.e., Σ = Σ1 × · · · × Σn . If T is a string over Σ, then we write T [i] = [a1 , . . . , an ], where ai ∈ Σi . Finally, the symbol-by-symbol concatenation of two strings S and T of the same length is [(S1 , T1 ), (S2 , T2 ), · · · ].
Edit Distance. Throughout this work, we rely on the well-known edit distance metric defined as
follows.
Definition 3.1 (Edit distance). The edit distance ED(c, c0 ) between two strings c, c0 ∈ Σ∗ is the
minimum number of insertions and deletions required to transform c into c0 .
It is easy to see that edit distance is a metric on any set of strings and in particular is symmetric
and satisfies the triangle inequality property. Furthermore, ED (c, c′ ) = |c| + |c′ | − 2 · LCS (c, c′ ), where LCS (c, c′ ) is the length of the longest common subsequence of c and c′ .
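Since Definition 3.1 counts only insertions and deletions, ED can be computed from the longest common subsequence. The following Python sketch is a direct, quadratic-time reference implementation of the identity ED(c, c′) = |c| + |c′| − 2 · LCS(c, c′); it is only for intuition and is not part of the paper's machinery.

```python
def lcs_length(c1, c2):
    # Classical dynamic program for the length of the longest common subsequence.
    dp = [[0] * (len(c2) + 1) for _ in range(len(c1) + 1)]
    for i in range(1, len(c1) + 1):
        for j in range(1, len(c2) + 1):
            if c1[i - 1] == c2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(c1)][len(c2)]

def edit_distance(c1, c2):
    # Insertions and deletions only, as in Definition 3.1.
    return len(c1) + len(c2) - 2 * lcs_length(c1, c2)
```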
Definition 3.2 (Relative Suffix Distance). For any two strings S, S 0 ∈ Σ∗ we define their relative
suffix distance RSD as follows:
$$\mathrm{RSD}(S, S') = \max_{k>0} \frac{\mathrm{ED}\big(S(|S|-k,\,|S|],\; S'(|S'|-k,\,|S'|]\big)}{2k}$$
Lemma 3.3. For any strings S1 , S2 , S3 we have
• Symmetry: RSD(S1 , S2 ) = RSD(S2 , S1 ),
• Non-Negativity and Normalization: 0 ≤ RSD(S1 , S2 ) ≤ 1,
• Identity of Indiscernibles: RSD(S1 , S2 ) = 0 ⇔ S1 = S2 , and
• Triangle Inequality: RSD(S1 , S3 ) ≤ RSD(S1 , S2 ) + RSD(S2 , S3 ).
In particular, RSD defines a metric on any set of strings.
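A direct (and inefficient) way to evaluate Definition 3.2 is sketched below in Python, reusing the edit_distance function from the sketch after Definition 3.1; it can also be used to sanity-check the properties listed in Lemma 3.3 on small examples. The treatment of positions before the start of a string follows the ⊥-padding convention of the string notation above.

```python
def rsd(S1, S2):
    # Relative suffix distance of Definition 3.2.  Positions before the start of a
    # string are treated as padding symbols ⊥ that match nothing; the maximum is
    # attained at some k <= max(|S1|, |S2|), so the finite loop below suffices.
    best = 0.0
    for k in range(1, max(len(S1), len(S2)) + 1):
        s1, s2 = S1[max(0, len(S1) - k):], S2[max(0, len(S2) - k):]
        pad = abs(max(0, k - len(S1)) - max(0, k - len(S2)))  # unmatched ⊥ symbols
        best = max(best, (edit_distance(s1, s2) + pad) / (2 * k))
    return best
```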
3.2 Synchronization Strings
We now recall synchronization string based techniques and relevant lemmas from [19, 20] which
will be of use here. In short, synchronization strings allow communicating parties to protect against
synchronization errors by indexing their messages without blowing up the communication rate. The
general idea of the coding schemes introduced and utilized in [19, 20] is to index each communicated symbol on the sender side and then guess the actual position of received symbols on the other end
using the attached indices.
A straightforward candidate for such a technique is to attach 1, · · · , n to the communicated symbols, where n indicates the number of rounds of communication. However, this trivial indexing scheme would not lead to an efficient solution as it requires a log n-sized alphabet for the indexing symbols. This shortcoming highlights a natural trade-off between the size of the alphabet from which the indexing
symbols are chosen and the accuracy of the guessing procedure on the receiver side.
Haeupler and Shahrasbi [19] introduce and discuss ε-synchronization strings as well-fitting candidates for this purpose. This family of strings, parametrized by ε, is defined over alphabets whose size is constant in terms of the communication length n and depends merely on the parameter ε. ε-synchronization strings can convert any k adversarial synchronization errors into Hamming-type errors. The extent of disparity between the number of resulting Hamming-type errors and k can be controlled by
parameter ε.
Imagine Alice and Bob as two parties communicating over a channel suffering from up to a δ-fraction of adversarial insertions and deletions. Suppose Alice sends a string S of length n to Bob.
On the other end of the communication, Bob will receive a distorted version of S as adversary might
have inserted or deleted a number of symbols. A symbol which is sent by Alice and is received by
Bob without being deleted by the adversary is called a successfully transmitted symbol.
Assume that Alice and Bob both know string S a priori. Bob runs an algorithm to determine
the actual index of each of the symbols he receives, in other words, to guess which element of S
they correspond to. Such an algorithm has to return a number in [1, n] or “I don’t know” for any
symbol of Sτ . We call such an algorithm an (n, δ)-indexing algorithm.
Ideally, an indexing algorithm is supposed to correctly figure out the indices of as many successfully transmitted symbols as possible. The measure of misdecodings has been introduced in [19] to evaluate the quality of an (n, δ)-indexing algorithm as the number of successfully transmitted symbols that the algorithm might not decode correctly. An indexing algorithm is called
streaming if its output for a particular received symbol depends only on the symbols that have
been received before it.
Haeupler and Shahrasbi [19] introduce and discuss ε-synchronization strings along with several
decoding techniques for them.
Definition 3.4 (ε-Synchronization String). String S ∈ Σn is an ε-synchronization string if for
every 1 ≤ i < j < k ≤ n + 1 we have that ED (S[i, j), S[j, k)) > (1 − ε)(k − i). We call the set of
prefixes of such a string an ε-synchronization code.
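On small examples, Definition 3.4 can be checked directly by brute force. The following Python sketch does exactly that, reusing edit_distance from the sketch after Definition 3.1; it is a verifier for intuition-building only, not a construction.

```python
def is_eps_synchronization_string(S, eps):
    # Check ED(S[i, j), S[j, k)) > (1 - eps) * (k - i) for all 1 <= i < j < k <= n + 1,
    # translating the 1-based half-open intervals into 0-based Python slices.
    n = len(S)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(j + 1, n + 2):
                if edit_distance(S[i - 1:j - 1], S[j - 1:k - 1]) <= (1 - eps) * (k - i):
                    return False
    return True
```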
We will make use of the global decoding algorithm from [19] described as follows.
Theorem 3.5 (Theorems and 6.14 from [19]). There is a decoding algorithm for an ε-synchronization string of length n which guarantees decoding with up to O(n√ε) misdecodings and runs in O(n^2 /√ε) time.
Theorem 3.6 (Theorem 4.1 from [19]). Given a synchronization string S over alphabet ΣS with
an (efficient) decoding algorithm DS guaranteeing at most k misdecodings and decoding complexity
TDS (n) and an (efficient) ECC C over alphabet ΣC with rate RC , encoding complexity TEC , and
decoding complexity TDC that corrects up to nδ + 2k half-errors, one obtains an insdel code that can
be (efficiently) decoded from up to nδ insertions and deletions. The rate of this code is at least
$$\frac{R_C}{1 + \frac{\log \Sigma_S}{\log \Sigma_C}}.$$
The encoding complexity remains TEC , the decoding complexity is TDC +TDS (n) and the preprocessing
complexity of constructing the code is the complexity of constructing C and S.
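The code transformation behind Theorem 3.6 is symbol-wise indexing: each symbol of a codeword of C is paired with the corresponding symbol of the synchronization string S. The Python sketch below shows only this attachment step; the decoding side (running the indexing algorithm DS and then the half-error decoder of C) is omitted, and the function name is illustrative rather than the paper's.

```python
def index_with_synchronization_string(codeword, sync_string):
    # Symbol-wise attachment of the synchronization string (the "indexing" step):
    # the resulting insdel codeword is over the product alphabet Sigma_C x Sigma_S.
    assert len(codeword) == len(sync_string)
    return list(zip(codeword, sync_string))

# The rate loss is exactly the alphabet blow-up stated in Theorem 3.6:
# new_rate = R_C / (1 + log(Sigma_S) / log(Sigma_C)).
```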
4 Highly Explicit Constructions of Long-Distance and Infinite ε-Synchronization Strings
We start this section by introducing a generalized notion of synchronization strings in Section 4.1
and then provide a deterministic efficient construction for them in Section 4.2. In Section 4.3,
we provide a boosting step which speeds up the construction to linear time in Theorem 4.7. In
Section 4.4, we use the linear time construction to obtain a linear-time high-distance insdel code
(Theorem 4.14) and then use another boosting step to obtain a highly-explicit linear-time construction for long-distance synchronization strings in Theorem 4.15. We provide similar construction for
infinite synchronization strings in Section 4.5. A pictorial representation of the flow of theorems
and lemmas in this section can be found in Figure 1.
Figure 1: Schematic flow of Theorems and Lemmas of Section 4. [Diagram omitted in this text version. Its nodes are: Theorem 4.5 (polynomial time long-distance synchronization string); Lemma 4.6 (Boosting Step I); Theorem 4.7 (linear time synchronization string); the linear time high distance large alphabet error correcting code of Guruswami and Indyk ['05] (outer code); Lemma 4.12 (inner code); Lemma 4.13 (concatenation, giving a linear time high distance small alphabet error correcting code); Theorem 4.14 (linear time high distance insertion deletion code, obtained via the indexing of Haeupler and Shahrasbi ['17]); Lemma 4.10 (Boosting Step II); Theorem 4.15 (highly-explicit linear-time long-distance synchronization string); Theorem 4.16 (highly-explicit linear-time infinite synchronization string).]
4.1 Long-Distance Synchronization Strings
The existence of synchronization strings is proven in [19] using an argument based on Lovász local
lemma. This led to an efficient randomized construction for synchronization strings which cannot
be easily derandomized. Instead, the authors introduced the weaker notion of self-matching strings
and gave a deterministic construction for them. Interestingly, in this paper we introduce a revised
notion, denoted by f (l)-distance ε-synchronization strings, which generalizes ε-synchronization
strings and allows for a deterministic construction.
Note that the synchronization string property poses a requirement on the edit distance of
neighboring substrings. The f (l)-distance ε-synchronization string property extends this requirement
to any pair of intervals that are nearby. More formally, any two intervals of aggregated length l
that are of distance f (l) or less have to satisfy the edit distance property in this generalized notion.
Definition 4.1 (f (l)-distance ε-synchronization string). String S ∈ Σn is an f (l)-distance ε-synchronization string if for every 1 ≤ i < j ≤ i′ < j′ ≤ n + 1 we have that ED (S[i, j), S[i′ , j′ )) > (1 − ε)l whenever i′ − j ≤ f (l), where l = j + j′ − i − i′ .
It is noteworthy to mention that the constant function f (l) = 0 gives the original ε-synchronization
strings. Haeupler and Shahrasbi [19] have studied the existence and construction of synchronization
strings for this case. In particular, they have shown that arbitrarily long ε-synchronization strings
exist over an alphabet that is polynomially large in terms of ε−1 . Besides f (l) = 0, there are several other functions that might be of interest in this context.
One can show, as we do in Appendix A, that for any polynomial function f (l), arbitrarily long f (l)-distance ε-synchronization strings exist over alphabet sizes that are polynomially
large in terms of ε−1 . Also, for exponential functions, these strings exist over exponentially large
alphabets in terms of ε−1 but not over sub-exponential alphabet sizes. Finally, if function f is
super-exponential, f (l)-distance ε-synchronization strings do not exist over any alphabet whose
size is independent of n.
While studying existence, construction, and alphabet sizes of f (l)-distance ε-synchronization
strings might be of interest in its own right, we will show that having the synchronization string edit distance
guarantee for pairs of intervals that are exponentially far in terms of their aggregated length is of
significant interest as it leads to improvements over applications of ordinary synchronization strings
described in [19, 20] in several respects. Even though the distance function f (l) = cl provides such a property, throughout the rest of this paper we will focus on a variant of it, i.e., f (l) = n · 1_{l>c log n} , which allows for a polynomial-sized alphabet. Here 1_{l>c log n} is the indicator function for l > c log n, i.e., one if l > c log n and zero otherwise.
To compare distance functions f (l) = cl and f (l) = n · 1l>c log n , note that the first one allows
intervals to be exponentially far away in their total length. In particular, intervals of length
l > c log n or larger can be arbitrarily far away. The second function only asks for the guarantee
over large intervals and does not strengthen the ε-synchronization property for smaller intervals.
We refer to the latter as the c-long-distance ε-synchronization string property.
Definition 4.2 (c-long-distance ε-synchronization strings). We call n·1l>c log n -distance ε-synchronization
strings c-long-distance ε-synchronization strings.
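For small examples, the c-long-distance requirement of Definition 4.2 can again be verified by brute force. The sketch below mirrors the checker given after Definition 3.4 (and reuses edit_distance from the sketch after Definition 3.1); it is purely illustrative, and the base-2 logarithm is an implementation convention, not something the definition fixes.

```python
import math

def is_c_long_distance_sync(S, eps, c):
    # Check ED(S[i, j), S[i2, j2)) > (1 - eps) * l for every pair of non-overlapping
    # intervals that is either adjacent (j == i2) or of total length l > c * log2(n),
    # matching f(l) = n * 1_{l > c log n} in Definitions 4.1 and 4.2.
    n = len(S)
    threshold = c * math.log2(n) if n > 1 else 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for i2 in range(j, n + 1):
                for j2 in range(i2 + 1, n + 2):
                    l = (j - i) + (j2 - i2)
                    if i2 == j or l > threshold:
                        if edit_distance(S[i - 1:j - 1], S[i2 - 1:j2 - 1]) <= (1 - eps) * l:
                            return False
    return True
```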
4.2 Polynomial Time Construction of Long-Distance Synchronization Strings
An LLL-based proof for the existence of ordinary synchronization strings has been provided in [19].
Here we provide a similar technique along with the deterministic algorithm for Lovász local lemma
from Chandrasekaran et al. [7] to prove the existence and give a deterministic polynomial-time
construction of strings that satisfy this quality over an alphabet of size ε−O(1) .
Before giving this proof, we first show a property of these strings which allows
us to simplify the proof and, more importantly, get a deterministic algorithm using deterministic
algorithms for Lovász local lemma from Chandrasekaran et al. [7].
Lemma 4.3. If S is a string and there are two intervals i1 < j1 ≤ i2 < j2 of total length l = j1 − i1 + j2 − i2 and ED(S[i1 , j1 ), S[i2 , j2 )) ≤ (1 − ε)l, then there also exist intervals i1 ≤ i′1 < j′1 ≤ i′2 < j′2 ≤ j2 of total length l′ ∈ {⌈l/2⌉ − 1, ⌈l/2⌉, ⌈l/2⌉ + 1} with ED(S[i′1 , j′1 ), S[i′2 , j′2 )) ≤ (1 − ε)l′ .
Proof. As ED(S[i1 , j1 ), S[i2 , j2 )) ≤ (1 − ε)l, there has to be a monotone matching M = {(a1 , b1 ), · · · , (am , bm )} from S[i1 , j1 ) to S[i2 , j2 ) of size m ≥ εl/2. Let 1 ≤ i ≤ m be the largest number such that |S[i1 , ai ]| + |S[i2 , bi ]| ≤ ⌈l/2⌉. It is easy to verify that there are integers ai < k1 ≤ ai+1 and bi < k2 ≤ bi+1 such that |S[i1 , k1 )| + |S[i2 , k2 )| ∈ {⌈l/2⌉ − 1, ⌈l/2⌉}.
Therefore, we can split the pair of intervals (S[i1 , j1 ), S[i2 , j2 )) into two pairs of intervals (S[i1 , k1 ), S[i2 , k2 )) and (S[k1 , j1 ), S[k2 , j2 )) such that each pair of the matching M falls into exactly one of these pairs. Hence, in at least one of those pairs, the size of the matching is at least ε/2 times the total length. This gives that the edit distance of that pair is at most (1 − ε) times its total length, which finishes the proof.
Lemma 4.3 shows that if there is a pair of intervals of total length l that have small relative
edit distance, we can find a pair of intervals of size {⌈l/2⌉ − 1, ⌈l/2⌉, ⌈l/2⌉ + 1} which have small
relative edit distance as well. Now, let us consider a string S with a pair of intervals that violate
the c-long-distance ε-synchronization property. If the total length of the intervals exceeds 2c log n,
using Lemma 4.3 we can find another pair of intervals of almost half the total length which still
violate the c-long distance ε-synchronization property. Note that as their total length is longer
than c log n, we do not worry about the distance of those intervals. Repeating this procedure, we
can eventually find a pair of intervals of a total length between c log n and 2c log n that violate the
c-long distance ε-synchronization property. More formally, we can derive the following statement
by Lemma 4.3.
Corollary 4.4. If S is a string which satisfies the c-long-distance ε-synchronization property for
any two non-adjacent intervals of total length 2c log n or less, then it satisfies the property for all
pairs of non-adjacent intervals.
Proof. Suppose, for the sake of contradiction, that there exist two intervals of total length 2c log n or more that violate the c-long-distance ε-synchronization property. Let [i1 , j1 ) and [i2 , j2 ), where i1 < j1 ≤ i2 < j2 , be two intervals of the smallest total length l = j1 − i1 + j2 − i2 larger than 2c log n (breaking ties arbitrarily) for which ED(S[i1 , j1 ), S[i2 , j2 )) ≤ (1 − ε)l. By Lemma 4.3 there exist two intervals [i′1 , j′1 ) and [i′2 , j′2 ), where i′1 < j′1 ≤ i′2 < j′2 , of total length l′ ∈ [l/2, l) with ED(S[i′1 , j′1 ), S[i′2 , j′2 )) ≤ (1 − ε)l′ . If l′ ≤ 2c log n, the assumption of the c-long-distance ε-synchronization property holding for intervals of total length 2c log n or less is contradicted. Otherwise, l′ > 2c log n, which contradicts the minimality of our choice of l.
Theorem 4.5. For any 0 < ε < 1 and every n there is a deterministic nO(1) time algorithm for
computing a c = O(1/ε)-long-distance ε-synchronization string over an alphabet of size O(ε−4 ).
Proof. To prove this, we will make use of the Lovász local lemma and deterministic algorithms
proposed for it in [7]. We generate a random string R over an alphabet of size |Σ| = O(ε−2 ) and
define bad event Bi1 ,l1 ,i2 ,l2 as the event of intervals [i1 , i1 + l1 ) and [i2 , i2 + l2 ) violating the O(1/ε)long-distance synchronization string property over intervals of total length 2/ε2 or more. In other
words, Bi1 ,l1 ,i2 ,l2 occurs if and only if ED(R[i1 , i1 + l1 ), R[i2 , i2 + l2 )) ≤ (1 − ε)(l1 + l2 ). Note that
by the definition of c-long-distance ε-synchronization strings, Bi1 ,l1 ,i2 ,l2 is defined for (i1 , l1 , i2 , l2 )s
where either l1 + l2 ≥ c log n and i1 + l1 ≤ i2 or 1/ε2 < l1 + l2 < c log n and i2 = i1 + l1 . We aim to
show that for large enough n, with non-zero probability, none of these bad events happen. This will
prove the existence of a string that satisfies c = O(1/ε)-long-distance ε-synchronization strings for
all pairs of intervals that are of total length 2/ε2 or more. To turn this string into a c = O(1/ε)-longdistance ε-synchronization strings, we simply concatenate it with a string consisting of repetitions
of 1, · · · , 2ε−2 , i.e., 1, 2, · · · , 2ε−2 , 1, 2, · · · , 2ε−2 , · · · . This string will take care of the edit distance
requirement for neighboring intervals with total length smaller than 2ε−2 .
Note that using Lemma 4.3, by a similar argument as in Corollary 4.4, we only need to consider
bad events where l1 + l2 ≤ 2c log n. As the first step, note that Bi1 ,l1 ,i2 ,l2 happens only if there is a
common subsequence of length ε(l1 + l2 )/2 or more between R[i1 , i1 + l1 ) and R[i2 , i2 + l2 ). Hence,
the union bound gives that
ε(l1 +l2 )
l1
l1
|Σ|− 2
ε(l1 + l2 )/2 ε(l1 + l2 )/2
ε(l1 +l2 )/2
ε(l1 +l2 )/2
ε(l1 +l2 )
l2 e
l1 e
|Σ|− 2
≤
ε(l1 + l2 )/2
ε(l1 + l2 )/2
!ε(l1 +l2 )
√
2e l1 l2
p
=
ε(l1 + l2 ) |Σ|
!εl
!εl
e
el
p
p
=
≤
εl |Σ|
ε |Σ|
Pr {Bi1 ,l1 ,i2 ,l2 } ≤
where l = l1 + l2 . In order to apply LLL, we need to find real numbers xi1 ,l1 ,i2 ,l2 ∈ [0, 1] such that
for any Bi1 ,l1 ,i2 ,l2
Y
Pr{Bi1 ,l1 ,i2 ,l2 } ≤ xi1 ,l1 ,i2 ,l2
(1 − xi01 ,l10 ,i02 ,l20 )
(1)
[S[i1 ,i1 +l1 )∪S[i2 ,i2 +l2 )]∩[S[i01 ,i01 +l10 )∪S[i02 ,i02 +l20 )]6=∅
We eventually want to show that our LLL argument satisfies the conditions required for the polynomial-time deterministic algorithmic LLL specified in [7]. Namely, it suffices to certify two other properties
in addition to (1). The first additional requirement is to have each bad event in LLL depend on up
to logarithmically many variables and the second is to have (1) hold with a constant exponential
slack. The former is clearly true as our bad events consist of pairs of intervals each of which is
of a length between c log n and 2c log n. To have the second requirement, instead of (1) we find
xi1 ,l1 ,i2 ,l2 ∈ [0, 1] that satisfy the following stronger property.
Y
Pr{Bi1 ,l1 ,i2 ,l2 } ≤ xi1 ,l1 ,i2 ,l2
1.01
(1 − xi01 ,l10 ,i02 ,l20 )
(2)
[S[i1 ,i1 +l1 )∪S[i2 ,i2 +l2 )]∩[S[i01 ,i01 +l10 )∪S[i02 ,i02 +l20 )]6=∅
Any small constant can be used as slack. We pick 1.01 for the sake of simplicity. We propose
xi1 ,l1 ,i2 ,l2 = D−ε(l1 +l2 ) for some D > 1 to be determined later. D has to be chosen such that for
any i1 , l1 , i2 , l2 and l = l1 + l2 :
1.01
!εl
Y
e
0
0
p
≤ D−εl
1 − D−ε(l1 +l2 )
(3)
ε |Σ|
0
0
0
0
0
0
[S[i ,i +l )∪S[i ,i +l )]∩[S[i ,i +l )∪S[i ,i +l )]6=∅
1 1
1
2 2
2
1 1
1
2 2
2
Note that:
Y
D−εl
0
0
1 − D−ε(l1 +l2 )
(4)
[S[i1 ,i1 +l1 )∪S[i2 ,i2 +l2 )]∩[S[i01 ,i01 +l10 )∪S[i02 ,i02 +l20 )]6=∅
≥ D
−εl
l0
Y
2cY
log n
1−D
−εl0
[(l1 +l10 )+(l1 +l20 )+(l2 +l10 )+(l2 +l20 )]n
l0 =c log n l10 =1
×
cY
log n
1 − D−εl
00
l+l00
(5)
l00 =1/ε2
= D
−εl
l0
Y
2cY
log n
1−D
−εl0
4(l+l0 )n
×
l0 =c log n l10 =1
= D−εl
2cY
log n
cY
log n
−εl0
4l0 (l+l0 )n
cY
log n
×
l0 =c log n
l00 =1/ε2
≥ D−εl 1 −
1 − D−εl
00
l+l00
(6)
l00 =1/ε2
1−D
2cX
log n
l
1−D
−εl00
×
cY
log n
1 − D−εl
00
l00
(7)
l00 =1/ε2
0
4l0 (l + l0 )n D−εl
l0 =c log n
× 1 −
cX
log n
l
00
D−εl × 1 −
l00 =1/ε2
≥ D−εl 1 −
cX
log n
00
l00 D−εl
(8)
l00 =1/ε2
2cX
log n
(4 · 2c log n(2c log n + 2c log n)n) D
−εl0
(9)
l0 =c log n
× 1 −
∞
X
l
00
D−εl × 1 −
l00 =1/ε2
= D
−εl
1 −
∞
X
00
l00 D−εl
2cX
log n
2
2
32c n log n D
"
2
D−ε·1/ε
× 1−
1 − D−ε
!
#l
−εl0
l0 =c log n
2
× 1−
(10)
l00 =1/ε2
D−ε·1/ε (D−ε + 1/ε2 − D−ε /ε2 )
(1 − D−ε )2
10
(11)
"
D−1/ε
≥ D−εl 1 − 32c3 n log3 nD−εc log n 1 −
1 − D−ε
!
D−1/ε (D−ε + 1/ε2 − D−ε /ε2 )
× 1−
(1 − D−ε )2
#l
(12)
To justify equation (5), note that there are two kinds of bad events that might intersect Bi1 ,l1 ,i2 ,l2 .
The first product term is considering all pairs of long intervals of length l10 and l20 where l1 + l2 ≥
c log n that overlap a fixed pair of intervals of length l1 and l2 . The number of such intervals is at
most [(l1 + l10 ) + (l1 + l20 ) + (l2 + l10 ) + (l2 + l20 )] n. The second one is considering short neighboring
pairs of intervals (ε−2 ≤ l00 = l100 + l200 ≤ c log n).
Equation (8) is a result of the following inequality for 0 < x, y < 1:
(1 − x)(1 − y) > 1 − x − y.
We choose D = 2 and c = 2/ε. Note that limε→0
enough ε,
2−1/ε
1−2−ε
2−1/ε (2−ε +1/ε2 −2−ε /ε2 )
(1−2−ε )2
= 0. So, for small
< 12 . Also, for D = 2 and c = 2/ε,
32c3 n log3 nD−εc log n =
28 log3 n
·
= o(1).
ε3
n
−1/ε
2
−ε . Therefore, for sufficiently small ε
Finally, one can verify that for small enough ε, 1 − 1−2
−ε > 2
and sufficiently large n, (12) is satisfied if the following is satisfied.
Y
0
0
1 − D−ε(l1 +l2 )
(13)
D−εl
[S[i1 ,i1 +l1 )∪S[i2 ,i2 +l2 )]∩[S[i01 ,i01 +l10 )∪S[i02 ,i02 +l20 )]6=∅
l
4−εl
1
1
≥
≥ 2−εl 1 −
2−ε
1−
2
2
4
(14)
So, for LLL to work, the following have to be satisfied.
e
p
ε |Σ|
!
εl
1.01
4−εl
≤
⇔4≤
4
Therefore, for |Σ| =
proof.
4.3
44.04 e2
ε2
p ! εl
ε |Σ| 1.01
⇐4≤
e41.01
2
p ! ε·1/ε
ε |Σ| 1.01
42.02(1+ε) e2
⇔
≤ |Σ|
e41.01
ε2
= O(ε−2 ), the deterministic LLL conditions hold. This finishes the
Boosting I: A Linear Time Construction of Synchronization Strings
Next, we provide a simple boosting step which allows us to polynomially speed up any ε-synchronization
string construction. Essentially, we propose a way to construct an O(ε)-synchronization string of
length Oε (n2 ) having an ε-synchronization string of length n.
Lemma 4.6. Fix an even n ∈ N and γ > 0 such that γn ∈ N. Suppose S ∈ Σn is an ε2
synchronization string. The string S 0 ∈ Σ0γn with Σ0 = Σ3 and
i
0
S [i] = S[i mod n], S[(i + n/2) mod n], S
γn
is an (ε + 6γ)-synchronization string of length γn2 .
11
Proof. Intervals of length at most n/2 lay completely within a copy of S and thus have the εsynchronization property. For intervals of size l larger than n/2 we look at the synchronization
string which is blown up by repeating each symbol γn times. Ensuring that both sub-intervals
contain complete blocks changes the edit distance by at most 3γn and thus by at most 6γl. Once
only complete blocks are contained we use the observation that the longest common subsequence
of any two strings becomes exactly a factor k larger if each symbols is repeated k times in each
string. This means that the relative edit distance does not change and is thus at least ε. Overall
this results in the (ε + 6γ)-synchronization string property to hold for large intervals in S 0 .
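Under the assumption that the third coordinate of Lemma 4.6 simply repeats each symbol of S for γn consecutive positions (which is how the proof above treats it; the exact rounding convention and the shift to 0-based indexing below are implementation details), the boosting step can be sketched as follows.

```python
def boost(S, gamma):
    # Sketch of the Boosting Step I construction: from an epsilon-synchronization
    # string S of even length n (with gamma * n an integer, as the lemma requires),
    # build S' of length gamma * n^2 over Sigma^3 whose i-th symbol combines two
    # rotated copies of S with a (gamma * n)-fold blow-up of S.
    n = len(S)
    block = int(gamma * n)
    length = int(gamma * n * n)
    return [(S[i % n], S[(i + n // 2) % n], S[i // block]) for i in range(length)]
```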
We use this step to speed up the polynomial time deterministic ε-synchronization string construction in Theorem 4.5 to linear time.
Theorem 4.7. There exists an algorithm that, for any 0 < ε < 1, constructs an ε-synchronization
string of length n over an alphabet of size ε−O(1) in O(n) time.
Proof. Note that if one takes an ε0 -synchronization strings of length n0 and applies the boosting
step in Theorem 4.6 k times with parameter γ, he would obtain a (ε0 + 6kγ)-synchronization string
k
k
of length γ 2 −1 n2 .
For any 0 < ε < 1, Theorem 4.5 gives a deterministic algorithm for constructing an εsynchronization string over an alphabet O(ε−4 ) that takes O(nT ) time for some constant T independent of ε and n. We use the algorithm in Theorem 4.5 to construct an ε0 = 2ε synchronization
1/T
ε
−4
0T
string of length n0 = n γ for γ = 12 log
T over an alphabet of size O(ε ) in O(n ) = O(n) time.
ε
0
Then, we apply boosting step I k = log T times with γ = 12 log
T to get an (ε + 6γ log T = ε)synchronization string of length γ T −1 n0T ≥ n. As boosting step have been employed constant
times, the eventual alphabet size will be ε−O(1) and the run time is O(n).
4.4
Boosting II: Explicit Constructions for Long-Distance Synchronization Strings
We start this section by a discussion of explicitness quality of synchronization string constructions.
In addition to the time complexity of synchronization strings’ constructions, an important quality
of a construction that we take into consideration for applications that we will discuss later is
explicitness or, in other words, how fast one can calculate a particular symbol of a synchronization
string.
Definition 4.8 (T (n)-explicit construction). If a synchronization string construction algorithm can
compute ith index of the string it is supposed to find, i.e., S[i], in T (n) we call it an T (n)-explicit
algorithm.
We are particularly interested in cases where T (n) is polylogarithmically large in terms of n.
For such T (n), a T (n)-explicit construction implies a near-linear construction of the entire string
as one can simply compute the string by finding out symbols one by one in n · T (n) overall time.
We use the term highly-explicit to refer to O(log n)-explicit constructions.
We now introduce a boosting step in Lemma 4.10 that will lead to explicit constructions of (longdistance) synchronization strings. Lemma 4.10 shows that, using a high-distance insertion-deletion
code, one can construct strings that satisfy the requirement of long-distance synchronization strings
for every pair of substrings that are of total length Ωε (log n) or more. Having such a string, one
can construct a Oε (1)-long-distance ε-synchronization string by simply concatenating the outcome
of Lemma 4.10 with repetitions of an Oε (log n)-long ε-synchronization string.
This boosting step is deeply connected to our new definition of long-distance ε-synchronization
strings. In particular, we observe the following interesting connection between insertion-deletion
codes and long-distance ε-synchronization strings.
12
Lemma 4.9. If S is a c-long-distance ε-synchronization string where c = θ(1) then C = {S(i ·
n
c log n, (i + 1) · c log n]|0 ≤ i < c log
n − 1} is an insdel error correcting code with minimum distance
at least 1 − ε and constant rate. Further, if S has a highly explicit construction, C has a linear
encoding time.
Proof. The distance follows from the definition of long-distance ε-synchronization strings. The rate
log
n
log |C|
c log n
follows because the rate R is equal to R = c log
n log q = O(log n) = Ω(1). Finally, as S is highly
explicit and |S(i · c log n, (i + 1) · c log n]| = c log n, one can compute S(i · c log n, (i + 1) · c log n] in
linear time of its length which proves the linear construction.
Our boosting step is mainly built on the converse of this observation.
Lemma 4.10. Suppose C is a block insdel code over alphabet of size q, block length N , distance
1 − ε and rate R and let S be a string obtained by attaching all codewords back to back in any
order. Then, for ε0 = 4ε, S is a string of length n = q R·N · N which satisfies the long-distance
4
ε0 -synchronization property for any pair of intervals of aggregated length 4ε N ≤ ε log
q (log n − log R)
or more. Further, if C is linearly encodable, S has a highly explicit construction.
Proof. The length of S follows from the definition of rate. Moreover, the high explicitness follows from the fact that every substring of S of length log n may include parts of 1/(ε log q) + 1 codewords, each of which can be computed in time linear in its length. Therefore, any substring S[i, i + log n] can be constructed in O(max{log n/(ε log q), log n}) = O_{ε,q}(log n). To prove the long-distance property, we have to show that for every four indices i1 < j1 ≤ i2 < j2 where j1 + j2 − i1 − i2 ≥ 4N/ε, we have

ED(S[i1, j1), S[i2, j2)) ≥ (1 − 4ε)(j1 + j2 − i1 − i2).    (15)
Assume that S[i1, j1) contains a total of p complete blocks of C and S[i2, j2) contains q complete blocks of C. Let S[i′1, j′1) and S[i′2, j′2) be the strings obtained by throwing the partial blocks away from S[i1, j1) and S[i2, j2). Note that the overall length of the partial blocks in S[i1, j1) and S[i2, j2) is less than 4N, which is at most an ε-fraction of S[i1, j1) ∪ S[i2, j2), since 4N/(4N/ε) ≤ ε.
Assume by contradiction that ED(S[i1, j1), S[i2, j2)) < (1 − 4ε)(j1 + j2 − i1 − i2). Since edit distance satisfies the triangle inequality, we have that

ED(S[i′1, j′1), S[i′2, j′2)) ≤ ED(S[i1, j1), S[i2, j2)) + |S[i1, i′1)| + |S[j′1, j1)| + |S[i2, i′2)| + |S[j′2, j2)|
 ≤ (1 − 4ε)(j1 + j2 − i1 − i2) + ε(j1 + j2 − i1 − i2)
 ≤ (1 − 4ε + ε)(j1 + j2 − i1 − i2)
 < ((1 − 3ε)/(1 − ε)) · ((j′1 − i′1) + (j′2 − i′2)).
This means that the longest common subsequence of S[i′1, j′1) and S[i′2, j′2) has length at least

(1/2) · (|S[i′1, j′1)| + |S[i′2, j′2)|) · (1 − (1 − 3ε)/(1 − ε)),
which means that there exists a monotonically increasing matching between S[i′1, j′1) and S[i′2, j′2) of the same size. Since the matching is monotone, there can be at most p + q pairs of error-correcting code blocks having edges to each other. The Pigeonhole Principle implies that there are two error-correcting code blocks B1 and B2 such that the number of edges between them is at least
(1/2) · (|S[i1, j1)| + |S[i2, j2)|) · (1 − (1 − 3ε)/(1 − ε)) / (p + q) ≥ ((p + q) N (1 − (1 − 3ε)/(1 − ε))) / (2(p + q)) = (1/2) · (1 − (1 − 3ε)/(1 − ε)) · N.
Notice that this is also a lower bound on the longest common subsequence of B1 and B2. This means that

ED(B1, B2) ≤ 2N − (1 − (1 − 3ε)/(1 − ε)) N = (1 + (1 − 3ε)/(1 − ε)) N = ((2 − 4ε)/(1 − ε)) N < 2(1 − ε) N.
This contradicts the error-correcting code's distance property, which we assumed to be larger than 2(1 − ε)N, and therefore we may conclude that (15) holds for all indices i1 < j1 ≤ i2 < j2 where j1 + j2 − i1 − i2 ≥ 4N/ε.
We point out that even a brute-force enumeration of a good insdel code could be used to obtain an ε-synchronization string for long distances. All that is needed in addition is a string handling small intervals, which could be brute-forced as well. Overall, this gives an alternative polynomial-time construction (still using the inspiration of long-distance codes, though). More importantly, if we use a linear-time construction for the short distances and a linear-time encodable insdel code, we get a simple Oε(log n)-explicit long-distance ε-synchronization string construction for which any interval [i, i + Oε(log n)] is computable in Oε(log n) time.
In the rest of this section, as depicted in Figure 1, we first introduce a high-distance, small-alphabet error correcting code that is encodable in linear time (Lemma 4.13), using a high-distance linear-time code introduced in [14]. We then turn this code into a high-distance insertion-deletion code using the indexing technique from [19]. Finally, we employ this insertion-deletion code in the setup of Lemma 4.10 to obtain highly-explicit linear-time long-distance synchronization strings.
Our codes are based on the following code from Guruswami and Indyk [14].
Theorem 4.11 (Theorem 3 from [14]). For every r, 0 < r < 1, and all sufficiently small ε > 0, there exists a family of codes of rate r and relative distance at least (1 − r − ε) over an alphabet of size 2^{O(ε^{−4} r^{−1} log(1/ε))} such that codes from the family can be encoded in linear time and can also be (uniquely) decoded in linear time from a (1 − r − ε) fraction of half-errors, i.e., a fraction e of errors and a fraction s of erasures provided 2e + s ≤ (1 − r − ε).
One major downside of constructing ε-synchronization strings based on the code from Theorem 4.11 is the alphabet size, which is exponentially large in terms of 1/ε. We concatenate this code with an appropriate small-alphabet code to obtain a high-distance code over a smaller alphabet.
Lemma 4.12. For sufficiently small ε and A, R > 1, and any set Σi of size |Σi| = 2^{O(ε^{−5} log(1/ε))}, there exists a code C : Σi → Σo^N with distance 1 − ε and rate ε^R, where |Σo| = O(ε^{−A}).
Proof. To prove the existence of such a code, we show that a random code with distance δ = 1 − ε, rate r = ε^R, alphabet size |Σo| = ε^{−A}, and block length

N = (log|Σi| / log|Σo|) · (1/r) = O(ε^{−5} log(1/ε) / (A log(1/ε))) · (1/ε^R) = (1/A) · O(ε^{−5−R})
exists with non-zero probability. The probability of two randomly selected codewords of length N over Σo being closer than δ = 1 − ε can be bounded above by the following term:

(N choose Nε) · |Σo|^{−Nε}.

Hence, the probability that a random code with |Σo|^{Nr} = |Σi| codewords has minimum distance smaller than δ = 1 − ε is at most the following:

(N choose Nε) · |Σo|^{−Nε} · |Σi|^2 ≤ (Ne/(Nε))^{Nε} · |Σi|^2 / |Σo|^{Nε}
 = (e/ε)^{Nε} · 2^{O(ε^{−5} log(1/ε))} / (ε^{−A})^{Nε}
 = 2^{O((1−A) log(1/ε) Nε + ε^{−5} log(1/ε))}
 = 2^{(1−A)·O(ε^{−4−R} log(1/ε)) + O(ε^{−5} log(1/ε))}.
For A > 1, 1 − A is negative, and for R > 1, ε^{−4−R} log(1/ε) is asymptotically larger than ε^{−5} log(1/ε). Therefore, for sufficiently small ε, the exponent is negative and the desired code exists.
Concatenating the code from Theorem 4.11 (as the outer code) and the code from Lemma 4.12 (as the inner code) gives the following code.

Lemma 4.13. For sufficiently small ε and any constant γ > 0, there exists an error correcting code of rate O(ε^{2.01}) and distance 1 − ε over an alphabet of size O(ε^{−(1+γ)}) which is encodable in linear time and also uniquely decodable in linear time from an e fraction of erasures and an s fraction of symbol substitutions whenever s + 2e < 1 − ε.
Proof. To construct such a code, we simply concatenate the codes from Theorem 4.11 and Lemma 4.12 as the outer and inner code respectively. Let C1 be an instantiation of the code from Theorem 4.11 with both its rate parameter r and its distance-slack parameter set to ε/4. Code C1 is a code of rate r1 = ε/4 and distance δ1 = 1 − ε/4 − ε/4 = 1 − ε/2 over an alphabet Σ1 of size 2^{O(ε^{−4} r^{−1} log(1/ε))} = 2^{O(ε^{−5} log(1/ε))} which is encodable and decodable in linear time.
Further, according to Lemma 4.12, one can find a code C2 : Σ1 → Σ2^N with |Σ2| = ε^{−(1+γ)}, distance δ2 = 1 − ε/2, and rate r2 = O(ε^{1.01}) by performing a brute-force search. Note that the block length and alphabet size of C2 are constant in terms of n. Therefore, such a code can be found in Oε(1) time and, by forming a look-up table, can be encoded and decoded from δ2 half-errors in O(1) time. Hence, concatenating codes C1 and C2 gives a code of distance δ = δ1 · δ2 = (1 − ε/2)^2 ≥ 1 − ε and rate r = r1 · r2 = O(ε^{2.01}) over an alphabet of size |Σ2| = O(ε^{−(1+γ)}) which can be encoded in linear time in terms of the block length and decoded, in linear time as well, from an e fraction of erasures and an s fraction of symbol substitutions whenever s + 2e < 1 − ε.
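As an illustration of the constant-size brute-force search used for the inner code above, the following Python sketch (ours, with toy parameter values rather than those of the lemma) randomly samples codes over a small alphabet until one with relative Hamming distance at least 1 − ε is found.

import itertools
import random

def min_relative_hamming_distance(code):
    # Minimum pairwise Hamming distance of a code, as a fraction of the block length.
    n = len(code[0])
    best = n
    for a, b in itertools.combinations(code, 2):
        best = min(best, sum(x != y for x, y in zip(a, b)))
    return best / n

def random_code_search(num_codewords, block_len, alphabet_size, eps, tries=10000, seed=0):
    # Sample random codes until one with relative distance >= 1 - eps appears.
    # This mirrors the existence argument of Lemma 4.12; since the block length
    # and alphabet of the inner code are constants (in n), the search cost is O_eps(1).
    rng = random.Random(seed)
    for _ in range(tries):
        code = [tuple(rng.randrange(alphabet_size) for _ in range(block_len))
                for _ in range(num_codewords)]
        if len(set(code)) == num_codewords and min_relative_hamming_distance(code) >= 1 - eps:
            return code
    return None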
Indexing the codewords of the code from Lemma 4.13 with the linear-time constructible synchronization strings of Theorem 4.7, using the technique from [19] summarized in Theorem 3.6, gives Theorem 4.14.
Theorem 4.14. For sufficiently small ε, there exists a family of insertion-deletion codes with rate ε^{O(1)} that correct a 1 − ε fraction of insertions and deletions over an alphabet of size ε^{−O(1)}, and that are encodable in linear time and decodable in quadratic time in terms of the block length.
Proof. Theorem 3.6 provides a technique to convert an error correcting code into an insertion-deletion code by indexing the codewords with a synchronization string. We use the error correcting code C from Lemma 4.13 with parameters ε′ = ε/2 and γ = 0.01, along with a linear-time constructible synchronization string S from Theorem 4.7 with parameter ε″ = (ε/2)^2, in the context of Theorem 3.6. We also use the global decoding algorithm from Haeupler and Shahrasbi [19] for the synchronization string. This gives an insertion-deletion code over an alphabet of size ε^{−O(1)} that corrects a (1 − ε′) − √ε″ = 1 − ε fraction of insdels with a rate of

r_C / (1 + |Σ_S|/|Σ_C|) = O(ε^{2.01}) / (1 + O(ε″^{−O(1)}/ε^{−1.01})) = ε^{O(1)}.

As C is encodable and S is constructible in linear time, the encoding time for the insdel code is linear. Further, as C is decodable in linear time and S is decodable in quadratic time (using the global decoding from [19]), the code is decodable in quadratic time.
Using the insertion-deletion code from Theorem 4.14 and the boosting step from Lemma 4.10, we can now proceed to the main theorem of this section, which provides a highly explicit construction of c = Oε(1)-long-distance synchronization strings.
Theorem 4.15. There is a deterministic algorithm that, for any constant 0 < ε < 1 and n ∈ N, computes a c = ε^{−O(1)}-long-distance ε-synchronization string S ∈ Σ^n where |Σ| = ε^{−O(1)}. Moreover, this construction is O(log n)-explicit and can even compute S[i, i + log n] in Oε(log n) time.
Proof. We simply use an insertion-deletion code from Theorem 4.14 with parameter ε′ = ε/4 and block length N = log n/(R log q), where q = ε^{−O(1)} is the size of the alphabet from Theorem 4.14. Using this code in Lemma 4.10 gives a string S of length q^{RN} · N ≥ n that satisfies the 4ε′ = ε-synchronization property over any pair of intervals of total length 4N/ε = O(log n/(εR log q)) = O(ε^{−O(1)} log n) or more. Since the insertion-deletion code from Theorem 4.14 is linearly encodable, the construction is highly-explicit.
To turn S into a c-long-distance ε-synchronization string for c = 4N/(ε log n) = O(ε^{−O(1)}), we simply concatenate it with a string T that satisfies the ε-synchronization property for neighboring intervals of total size smaller than c log n. In other words, we propose the following structure for constructing the c-long-distance ε-synchronization string R:

R[i] = (S[i], T[i]) = (C(⌈i/N⌉)[i (mod N)], T[i]).    (16)
Let S′ be an ε-synchronization string of length 2c log n. Using the linear-time construction from Theorem 4.7, one can find S′ in time linear in its length, i.e., O(log n). We define strings T1 and T2 consisting of repetitions of S′ as follows:

T1 = (S′, S′, · · · , S′),    T2 = (0^{c log n}, S′, S′, · · · , S′).

The string T1 · T2 satisfies the ε-synchronization property for neighboring intervals of total length c log n or less, as any such substring falls into a single copy of S′. Note that, having S′, one can find any symbol of T in linear time. Hence, T has a highly-explicit linear-time construction. Therefore, concatenating S and T gives a linear-time construction of c-long-distance ε-synchronization strings over an alphabet of size ε^{−O(1)} that is highly-explicit and, further, allows computing any substring [i, i + log n] in O(log n) time. A schematic representation of this construction can be found in Figure 2.
Figure 2: Pictorial representation of the construction of a long-distance ε-synchronization string of length n.
Figure 3: Construction of the infinite synchronization string T.
4.5 Infinite Synchronization Strings: Highly Explicit Construction
Throughout this section we focus on the construction of infinite synchronization strings. To measure the efficiency of an infinite string's construction, we consider the time complexity required to compute the first n elements of the string. Moreover, besides the time complexity, we employ a generalized notion of explicitness to measure the quality of infinite string constructions.
In a similar fashion to finite strings, an infinite synchronization string is said to have a T(n)-explicit construction if there is an algorithm that computes any position S[i] in O(T(i)) time. Moreover, it is said to have a highly-explicit construction if T(i) = O(log i).
We show how to deterministically construct an infinitely long ε-synchronization string over an alphabet Σ that is polynomially large in ε^{−1}. Our construction can compute the first n elements of the infinite string in O(n) time, is highly-explicit, and, further, can compute any substring S[i, i + log i] in O(log i) time.
Theorem 4.16. For all 0 < ε < 1, there exists an infinite ε-synchronization string construction over a poly(ε^{−1})-sized alphabet that is highly-explicit and is also able to compute S[i, i + log i] in O(log i) time. Consequently, using this construction, the first n symbols of the string can be computed in O(n) time.
Proof. Let k = 6/ε and let S_i denote an (ε/2)-synchronization string of length i. We define U and V as follows:

U = (S_k, S_{k^3}, S_{k^5}, . . . ),    V = (S_{k^2}, S_{k^4}, S_{k^6}, . . . ).
In other words, U is the concatenation of (ε/2)-synchronization strings of length k, k^3, k^5, . . . and V is the concatenation of (ε/2)-synchronization strings of length k^2, k^4, k^6, . . . . We build an infinite string T such that T[i] = (U[i], V[i]) (see Figure 3).
First, if the finite synchronization strings S_{k^l} used above are constructed using the highly-explicit construction algorithm introduced in Theorem 4.15, any index i can be computed by simply finding one index in two of the S_{k^l}'s, in O(log i) time. Further, any substring of length n of this construction can be computed by constructing finite synchronization strings of total length O(n). According to Theorem 4.15, this can be done in Oε(n).
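To see why locating a position inside U or V is cheap, consider this small Python sketch (ours): it walks the geometrically growing block lengths, so the loop runs only O(log i) times.

def block_containing(i, k, in_V=False):
    # Return (block_length, offset) for the finite synchronization string of U
    # (lengths k, k^3, k^5, ...) or V (lengths k^2, k^4, k^6, ...) covering
    # position i (0-based).  Within each of U and V, successive block lengths
    # grow by a factor of k^2.
    length = k * k if in_V else k
    start = 0
    while i >= start + length:
        start += length
        length *= k * k
    return length, i - start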
Now, all that remains is to show that T is an ε-synchronization string. We use the following lemma to prove this.
Lemma 4.17. Let x < y < z be positive integers and let t be such that k^t ≤ |T[x, z)| < k^{t+1}. Then there exists a block S_{k^i} in U or V such that all but a 3/k fraction of T[x, z) is covered by S_{k^i}.

Note that this lemma shows that ED(T[x, y), T[y, z)) > (1 − ε/2)(|T[x, y)| + |T[y, z)|)(1 − 3/k) = (1 − ε/2)^2 (|T[x, y)| + |T[y, z)|) ≥ (1 − ε)(|T[x, y)| + |T[y, z)|), which implies that T is an ε-synchronization string.
Proof of Lemma 4.17. We first define the ith turning point q_i to be the index of T at which S_{k^{i+1}} starts, i.e., q_i = k^i + k^{i−2} + k^{i−4} + · · · . Note that

q_i = { k^2 + k^4 + · · · + k^i   (even i);   k + k^3 + · · · + k^i   (odd i) }    (17)
    = { k^2 (k^i − 1)/(k^2 − 1)   (even i);   k (k^{i+1} − 1)/(k^2 − 1)   (odd i) }.    (18)

Note that q_{t−1} < 2k^{t−1} and |T[x, z)| ≥ k^t. Therefore, one can throw away all the elements of T[x, z) whose indices are less than q_{t−1} without losing more than a 2/k fraction of the elements of T[x, z). We will refer to the remaining part of T[x, z) as T̃.
Now, the distance between any two turning points q_i and q_j with t ≤ i < j is at least q_{t+1} − q_t, and

q_{t+1} − q_t = { k (k^{t+2} − 1)/(k^2 − 1) − k^2 (k^t − 1)/(k^2 − 1)   (even t);   k^2 (k^{t+1} − 1)/(k^2 − 1) − k (k^{t+1} − 1)/(k^2 − 1)   (odd t) }    (19)
             = { (k − 1)(k^{t+2} + k)/(k^2 − 1) = (k^{t+2} + k)/(k + 1)   (even t);   (k − 1)(k^{t+2} − k)/(k^2 − 1) = (k^{t+2} − k)/(k + 1)   (odd t) }.    (20)

Hence, q_{t+1} − q_t > k^{t+1}(1 − 1/k). Since |T̃| ≤ |T[x, z)| < k^{t+1}, this implies that there exists an S_{k^i} which covers a 1 − 1/k fraction of T̃. This completes the proof of the lemma.
A similar discussion for infinite long-distance synchronization strings can be found in Appendix B.
5 Local Decoding
In Section 4, we discussed the close relationship between long-distance synchronization strings and
insdel codes and provided highly-explicit constructions of long-distance synchronization strings
based on insdel codes.
In this section, we make a slight modification to the highly explicit structure (16) introduced in Theorem 4.15, where we showed that one can use a constant-rate insertion-deletion code C with distance 1 − ε/4 and block length N = O(log n), together with a string T satisfying the ε-synchronization property for pairs of neighboring intervals of total length c log n or less, to make a c-long-distance synchronization string of length n. In addition to the symbols of the string consisting of codewords of C and the symbols of string T, we append Θ(log(1/ε)) extra bits to each symbol to enable local decodability. This extra symbol, as described in (21), essentially works as a circular index counter for the insertion-deletion code blocks:
R[i] = (C(⌈i/N⌉)[i (mod N)], T[i], ⌈i/N⌉ mod 8/ε^3).    (21)
With this extra information appended to the construction, we claim that if the relative suffix error density is smaller than ε upon arrival of some symbol, then one can decode the corresponding index correctly by only looking at the last O(log n) symbols. At any point of a communication over an insertion-deletion channel, the relative suffix error density is defined as the maximum fraction of errors that occurred over all suffixes of the message sent so far (see Definition 5.12 from [19]).
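For concreteness, the relative suffix error density can be computed with one backward pass over the per-step error counts; the short Python sketch below is ours and uses a simplified representation of the error pattern.

def relative_suffix_error_density(errors):
    # errors[t] is the number of synchronization errors charged to step t of the
    # communication so far.  The relative suffix error density is the maximum,
    # over all suffixes, of the fraction of errors in that suffix.
    best = 0.0
    total = 0
    for length, e in enumerate(reversed(errors), start=1):
        total += e
        best = max(best, total / length)
    return best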
Theorem 5.1. Let R be a highly-explicit long-distance ε-synchronization string constructed according to (21). Let R[1, i] be sent by Alice and be received as R′[1, j] by Bob. If the relative suffix error density is smaller than 1 − ε/2, then Bob can find i in (4/ε)·T_Dec(N) + (4N/ε)·(T_Enc(N) + Ex_T(c log n) + c^2 log^2 n) time, only by looking at the last max(4N/ε^2, c log n) received symbols, where T_Enc and T_Dec are the encoding and decoding complexities of C and Ex_T(l) is the amount of time it takes to construct a substring of T of length l.
For the linear-time encodable, quadratic-time decodable code C and the highly-explicit string T constructed by repetitions of short synchronization strings used in Theorem 4.15, construction (21) provides the following.
Theorem 5.2. Let R be a highly-explicit long-distance ε-synchronization string constructed according to (21) with code C and string T as described in Theorem 4.15. Let R[1, i] be sent by Alice and be received as R′[1, j] by Bob. If the relative suffix error density is smaller than 1 − ε/2, then Bob can find i in O(log^3 n) time, only by looking at the last O(log n) received symbols.
This decoding procedure, which we will refer to as local decoding, consists of two principal phases upon arrival of each symbol. During the first phase, the receiver finds a list of O(1/ε) numbers that is guaranteed to contain the index of the current insertion-deletion code block. This gives O(N/ε) candidates for the index of the received symbol. The second phase uses the relative suffix error density guarantee to choose the correct candidate from the list. The following lemma formally presents the first phase. The idea of using list decoding as an intermediate step towards unique decoding has been used in several previous works [11, 15–17].
Lemma 5.3. Let S be an ε-synchronization string constructed as described in (21). Let S[1, i] be sent by Alice and be received as S_τ[1, j] by Bob. If the relative suffix error density is smaller than 1 − ε/2, then Bob can compute a list of 4N/ε numbers that is guaranteed to contain i.
Proof. Note that, as the relative suffix error density is smaller than 1 − ε/2 < 1, the last received symbol has to have been successfully transmitted. Therefore, Bob can correctly read off the insertion-deletion code block index counter value, which we denote by count. Note that if there are no errors, all symbols in blocks with index counter values count, count − 1, · · · , count − 4/ε + 1 (mod 8/ε^3) that were sent right before the current symbol have to arrive within the past (4/ε) · N
symbols. However, as the adversary can insert symbols, those symbols can appear anywhere within the last (2/ε)·(4N/ε) = 8N/ε^2 symbols.
Hence, if Bob looks at the symbols that arrived with index i ∈ {count, count − 1, · · · , count − 4/ε + 1} (mod 8/ε^3) within the last 8N/ε^2 received symbols, he can observe all symbols coming from blocks with index count, count − 1, · · · , count − 4/ε + 1 (mod 8/ε^3) that were sent right before S[i]. Further, as our counter counts modulo 8/ε^3, no symbols from older blocks with indices count, count − 1, · · · , count − 4/ε + 1 (mod 8/ε^3) will appear within the past 8N/ε^2 symbols. Therefore, Bob can find the symbols from the last 4/ε blocks up to some insdel errors. By decoding those blocks, he can make up a list of 4/ε candidates for the actual block number. As each block contains N elements, there are a total of 4N/ε candidates for i.
Note that, as the relative suffix error density is at most 1 − ε/2 and the last block may not have been completely sent yet, the total fraction of insdels in the reconstruction of the last 4/ε blocks on Bob's side is smaller than 1 − ε/2 + N/(4N/ε) ≤ 1 − ε/4. Therefore, the error density in at least one of those blocks is not larger than 1 − ε/4. This guarantees that at least one block will be correctly decoded, and hence the list contains the correct index.
We now define a limited version of the relative suffix distance (defined in [19]) which enables us to find the correct index among the candidates found in Lemma 5.3.

Definition 5.4 (Limited Relative Suffix Distance). For any two strings S, S′ ∈ Σ* we define their l-limited relative suffix distance, l-LRSD, as follows:

l-LRSD(S, S′) = max_{0<k<l} ED(S(|S| − k, |S|], S′(|S′| − k, |S′|]) / (2k).

Note that the l = O(log n)-limited suffix distance of two strings can be computed in O(l^2) = O(log^2 n) time by computing the edit distances of all pairs of prefixes of their l-long suffixes.
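A direct way to compute this quantity is sketched below in Python (ours). A single dynamic-programming table over the reversed l-long suffixes yields the edit distances for every k at once, matching the O(l^2) bound mentioned above; edit distance is again taken with insertions and deletions only.

def limited_relative_suffix_distance(S, T, l):
    # l-LRSD of Definition 5.4: max over 0 < k < l of
    # ED(last k symbols of S, last k symbols of T) / (2k).
    a, b = S[::-1][:l], T[::-1][:l]
    m, n = len(a), len(b)
    D = [[i + j for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                D[i][j] = D[i - 1][j - 1]
            else:
                D[i][j] = 1 + min(D[i - 1][j], D[i][j - 1])
    # D[k][k] is the insertion/deletion distance of the two length-k suffixes.
    return max(D[k][k] / (2 * k) for k in range(1, min(m, n, l - 1) + 1))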
Lemma 5.5. If a string S is a c-long-distance ε-synchronization string, then for any two distinct prefixes S[1, i] and S[1, j], (c log n)-LRSD(S[1, i], S[1, j]) > 1 − ε.

Proof. If j − i < c log n, the synchronization string property gives that ED(S(2i − j, i], S(i, j]) > 2(j − i)(1 − ε), which gives the claim for k = j − i. If j − i ≥ c log n, the long-distance property gives that ED(S(i − c log n, i], S(j − c log n, j]) > 2(1 − ε) c log n which, again, proves the claim.
Lemmas 5.3 and 5.5 enable us to prove Theorem 5.1.

Proof of Theorem 5.1. Using Lemma 5.3, by decoding 4/ε codewords, Bob forms a list of 4N/ε candidates for the index of the received symbol. This takes 4/ε · T_Dec(N) time. Then, using Lemma 5.5, for each of the 4N/ε candidates, he has to construct a c log n substring of R and compute the (c log n)-LRSD of that substring with the string he received. This requires looking at the last max(4N/ε, c log n) received symbols and takes 4N/ε · (T_Enc(N) + Ex_T(c log n) + c^2 log^2 n) time.
6 Application: Near-Linear Time Codes Against Insdels, Block Transpositions, and Block Replications
In Sections 4 and 5, we provided highly explicit constructions of and local decoding algorithms for synchronization strings. Utilizing these two important properties of synchronization strings together yields important improvements over the insertion-deletion codes introduced by Haeupler and Shahrasbi [19]. We start by stating the following important lemma, which summarizes the results of Sections 4 and 5.
Lemma 6.1. For any 0 < ε < 1, there exists a streaming (n, δ)-indexing solution with an ε-synchronization string S and a streaming decoding algorithm D that figures out the index of each symbol by merely considering the last Oε(log n) received symbols and in Oε(log^3 n) time. Further, S ∈ Σ^n is highly-explicit and constructible in linear time, and |Σ| = O(ε^{−O(1)}). This solution may contain up to nδ/(1 − ε) misdecodings.
Proof. Let S be a long-distance 2ε-synchronization string constructed according to Theorem 4.15 and enhanced as suggested in (21) to ensure local decodability. As discussed in Sections 4 and 5, these strings trivially satisfy all properties claimed in the statement other than the misdecoding guarantee.
According to Theorem 5.2, correct decoding is ensured whenever the relative suffix error density is less than 1 − 2ε/2 = 1 − ε. Therefore, as the relative suffix error density can exceed 1 − ε upon arrival of at most nδ/(1 − ε) many symbols (see Lemma 5.14 from [19]), there can be at most nδ/(1 − ε) many successfully received symbols which are not decoded correctly. This proves the misdecoding guarantee.
6.1 Near-Linear Time Insertion-Deletion Code
Using the indexing technique proposed by Haeupler and Shahrasbi [19], summarized in Theorem 3.6, with the synchronization strings and decoding algorithm from Theorem 3.5, one can obtain the following insdel codes.
Theorem 6.2. For any 0 < δ < 1/3 and sufficiently small ε > 0, there exists an encoding map E : Σ^k → Σ^n and a decoding map D : Σ* → Σ^k such that, if EditDistance(E(m), x) ≤ δn, then D(x) = m. Further, k/n > 1 − 3δ − ε, |Σ| = f(ε), and E and D can be computed in O(n) and O(n log^3 n) time respectively.
Proof. We closely follow the proof of Theorem 1.1 from [19] and use Theorem 3.6 to convert a near-MDS error correcting code into an insertion-deletion code satisfying the claimed properties.
Given δ and ε, we choose ε′ = ε/12 and use a locally decodable O_{ε′}(1)-long-distance ε′-synchronization string S of length n over an alphabet Σ_S of size ε′^{−O(1)} = ε^{−O(1)} from Theorem 5.2. We plug this synchronization string, with the local decoding from Theorem 5.2, into Theorem 3.6 together with a near-MDS expander code [14] C (see Theorem 4.11) which can efficiently correct up to δ_C = 3δ + ε/3 half-errors and has a rate of R_C > 1 − δ_C − ε/3 over an alphabet Σ_C = exp(ε^{−O(1)}) such that log|Σ_C| ≥ 3 log|Σ_S|/ε. This ensures that the final rate is indeed at least R_C/(1 + log|Σ_S|/log|Σ_C|) ≥ R_C − log|Σ_S|/log|Σ_C| ≥ 1 − 3δ − 3·(ε/3) = 1 − 3δ − ε, and the fraction of insdel errors that can be efficiently corrected is δ_C − 2δ/(1 − ε′) ≥ 3δ + ε/3 − 2δ(1 + 2ε′) ≥ δ. The encoding and decoding complexities are furthermore straightforward given the guarantees stated in Lemma 6.1 and the linear-time construction of S.
6.2 Insdels, Block Transpositions, and Block Replications
In this section, we introduce block transposition and block replication errors and show that the code from Theorem 6.2 can overcome these types of errors as well.
One can think of several ways to model transpositions and replications of blocks of data. One possible model would be to split the string of data into blocks of length l and then define transpositions and replications over those fixed blocks. In other words, for a message m1, m2, · · · , mn ∈ Σ^n, a single transposition or replication would be defined as picking a block of length l and then moving or copying that block of data somewhere else in the message.
Another (more general) model is to let the adversary choose any block, i.e., any substring of the message he wishes, and then move or copy that block somewhere in the string. Note that in this model, a constant fraction of block replications may make the message length exponentially large in terms of the initial message length. We will focus on this more general model and provide codes protecting against such errors that run in near-linear time in terms of the received message length. Such results automatically extend to the weaker model, which does not lead to exponentially large corrupted messages.
Definition 6.3 ((i, j, l)-Block Transposition). For a given string M = m1 · · · mn , the (i, j, l)-block
transposition operation for 1 ≤ i ≤ i + l ≤ n and j ∈ {1, · · · , i − 1, i + l + 1, · · · , n} is defined as an
operation which turns M into
M 0 = m1 , · · · , mi−1 , mi+l+1 · · · , mj , mi · · · mi+l , mj+1 , · · · , mn if j > i + l
or
M 0 = m1 , · · · , mj , mi , · · · , mi+l , mj+1 , · · · , mi−1 , mi+l+1 · · · , mn if j < i
by removing M [i, i + l] and inserting it right after M [j].
Also, the (i, j, l)-block replication is defined as follows.

Definition 6.4 ((i, j, l)-Block Replication). For a given string M = m1 · · · mn, the (i, j, l)-block replication operation for 1 ≤ i ≤ i + l ≤ n and j ∈ {1, · · · , n} is defined as the operation which turns M into M′ = m1, · · · , m_j, m_i, · · · , m_{i+l}, m_{j+1}, · · · , m_n, i.e., the string obtained by copying M[i, i + l] right after M[j].
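The two operations are easy to state operationally; the following Python sketch (ours, using 0-based indices and Python lists) applies them to a message, assuming j lies outside the moved block as the definitions require.

def block_transposition(M, i, j, l):
    # (i, j, l)-block transposition: remove the block M[i : i + l + 1] and
    # re-insert it right after position j.
    block = M[i:i + l + 1]
    rest = M[:i] + M[i + l + 1:]
    # Positions after the excised block shift left by its length.
    insert_at = j + 1 if j < i else j + 1 - len(block)
    return rest[:insert_at] + block + rest[insert_at:]

def block_replication(M, i, j, l):
    # (i, j, l)-block replication: insert a copy of M[i : i + l + 1] right after position j.
    block = M[i:i + l + 1]
    return M[:j + 1] + block + M[j + 1:]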
We now proceed to the following theorem, which implies that the code from Theorem 6.2 recovers from block transpositions and replications as well.
Theorem 6.5. Let S ∈ Σ_S^n be a locally-decodable highly-explicit c-long-distance ε-synchronization string from Theorem 5.2 and let C be a half-error correcting code of block length n, alphabet Σ_C, rate r, and distance d with encoding function E_C and decoding function D_C that run in T_{E_C} and T_{D_C} time respectively. Then, one can obtain an encoding function E_n : Σ_C^{nr} → [Σ_C × Σ_S]^n that runs in T_{E_C} + O(n) time and a decoding function D_n : [Σ_C × Σ_S]^* → Σ_C^{nr} that runs in T_{D_C} + O(n log^3 n) time and recovers from a δ_insdel fraction of synchronization errors and a δ_block fraction of block transpositions or replications as long as (2 + 2/(1 − ε/2)) δ_insdel + (12 c log n) δ_block < d.
Proof. To obtain such codes, we simply index the symbols of the given error correcting code with the symbols of the given synchronization string. More formally, the encoding function E(x) for x ∈ Σ_C^{nr} first computes E_C(x) and then indexes it, symbol by symbol, with the elements of the given synchronization string.
On the decoding end, D(x) first uses the indices on each symbol to guess the actual positions of the symbols using the local decoding of the c-long-distance ε-synchronization string. Rearranging the received symbols according to the guessed indices, the receiving end obtains a version of E_C(x), denoted by x̄, that may suffer from a number of symbol corruption errors due to index misdecodings. As long as the number of such misdecodings, k, satisfies nδ_insdel + 2k ≤ nd, computing D_C(x̄) gives x. The decoding procedure naturally consists of decoding the attached synchronization string, rearranging the symbols according to the decoded indices, and running D_C on the rearranged version. Note that if multiple symbols were detected to be located at the same position by the synchronization string decoding procedure, or no symbol was detected to be at some position, the decoder can simply put a special symbol '?' there and treat it as a half-error. The decoding and encoding complexities are trivial.
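The rearrangement step described above can be sketched in a few lines of Python (ours); decoded is a list of (guessed position, code symbol) pairs produced by the synchronization-string decoder.

def rearrange_by_guessed_index(n, decoded, erasure='?'):
    # Build the corrupted codeword x_bar of length n: positions claimed by no
    # symbol or by several symbols get the special symbol '?', which the
    # half-error decoder D_C treats as a half-error (an erasure).
    buckets = {}
    for pos, sym in decoded:
        buckets.setdefault(pos, []).append(sym)
    return [buckets[p][0] if len(buckets.get(p, [])) == 1 else erasure for p in range(n)]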
In order to find the actual index of a received symbol correctly, we need the local decoding procedure to compute that index correctly. For this purpose, it suffices that no block operation cuts or pastes symbols within the interval of length 2c log n before that index throughout the entire sequence of block transpositions/replications performed by the adversary, and that the relative suffix error density caused by synchronization errors at that symbol does not exceed 1 − ε/2. As any block operation may create three new cut/copy/paste edges, and as the relative suffix error density is larger than 1 − ε/2 for up to nδ_insdel/(1 − ε/2) many symbols (according to Lemma 5.14 from [19]), the positions of at most

k ≤ 3nδ_block · 2c log n + nδ_insdel (1 + 1/(1 − ε/2))

symbols will be decoded incorrectly via the synchronization string decoding procedure. Hence, as long as nδ_insdel + 2k ≤ 6nδ_block · 2c log n + nδ_insdel (3 + 2/(1 − ε/2)) < nd, the decoding procedure succeeds. Finally, the encoding and decoding complexities follow from the fact that indexing codewords of length n takes linear time and the local decoding of synchronization strings takes O(n log^3 n) time.
Employing the locally-decodable Oε(1)-long-distance synchronization strings of Theorem 5.2 and the error correcting code of Theorem 4.11 in Theorem 6.5 gives the following code.
Theorem 6.6. For any 0 < r < 1 and sufficiently small ε, there exists a code with rate r that corrects nδ_insdel synchronization errors and nδ_block block transpositions or replications as long as 6δ_insdel + (c log n) δ_block < 1 − r − ε for some c = O(1). The code is over an alphabet of size Oε(1) and has O(n) encoding and O(N log^3 n) decoding complexity, where N is the length of the received message.
7 Applications: Near-Linear Time Infinite Channel Simulations with Optimal Memory Consumption
We now show that the indexing algorithm introduced in Lemma 6.1 can improve the efficiency of the channel simulations from [20] as well as of insdel codes. Consider a scenario where two parties are maintaining a communication that suffers from synchronization errors, i.e., insertions and deletions. Haeupler et al. [20] provided a simple technique to overcome this desynchronization. Their solution consists of a simple symbol-by-symbol attachment of a synchronization string to the transmitted symbols. The attached indices enable the receiver to correctly detect the indices of most of the symbols he receives. However, the decoding procedure introduced in Haeupler et al. [20] takes time polynomial in the communication length. The explicit construction introduced in Section 4 and the local decoding provided in Section 5 reduce the construction and decoding time and space complexities to polylogarithmic. Further, the decoding procedure only requires looking up the Oε(log n) most recently received symbols upon arrival of any symbol.
Interestingly, we will show that, beyond the time and space complexity improvements over the simulations in [20], long-distance synchronization strings make infinite channel simulations possible. In other words, two parties communicating over an insertion-deletion channel are able to simulate a corruption channel on top of the given channel, even if they are not aware of the length of the communication before it ends, with guarantees similar to those of [20]. To this end, we introduce infinite strings that can be used to index communications in order to convert synchronization errors into symbol corruptions. The following theorem, analogous to the indexing algorithm of Lemma 6.1, provides all we need to perform such simulations.
Theorem 7.1. For any 0 < ε < 1, there exists an infinite string S that satisfies the following
properties:
1. String S is over an alphabet of size ε^{−O(1)}.

2. String S has a highly-explicit construction and, for any i, S[i, i + log i] can be computed in O(log i) time.

3. Assume that S[1, i] is sent over an insertion-deletion channel. There exists a decoding algorithm for the receiving side that, if the relative suffix error density is smaller than 1 − ε, can correctly find i by looking at the last O(log i) received symbols and knowing the number of received symbols, in O(log^3 i) time.
Proof. To construct such a string S, we use our finite-length highly-explicit locally-decodable long-distance synchronization string construction from Theorem 5.2 and use it to construct the finite substrings of S as proposed in the infinite string construction of Theorem 4.16, which is depicted in Figure 3. We choose the length progression parameter k = 10/ε^2. Similar to the proof of Lemma 4.17, we define the turning point q_i as the index at which S_{k^{i+1}} starts. We append one extra bit to each symbol S[i] which is zero if q_j ≤ i < q_{j+1} for some even j, and one otherwise.
This construction clearly satisfies the first two properties claimed in the theorem statement. To prove the third property, suppose that S[1, i] is sent and received as S′[1, i′] and that the suffix error density is less than 1 − ε. As the suffix error density is smaller than 1 − ε, we have iε ≤ i′ ≤ i/ε, which implies that i′ε ≤ i ≤ i′/ε. This gives an uncertainty interval whose endpoints are within a factor of 1/ε^2 of each other. By the choice of k, this interval contains at most one turning point. Therefore, using the extra appended bit, the receiver can figure out the index j for which q_j ≤ i < q_{j+1}. Knowing this, it can simply use the local decoding algorithm for the finite string S_{j−1} to find i.
Theorem 7.2. (a) Suppose that n rounds of a one-way/interactive insertion-deletion channel over an alphabet Σ with a δ fraction of insertions and deletions are given. Using an ε-synchronization string over an alphabet Σ_syn, it is possible to simulate n(1 − Oε(δ)) rounds of a one-way/interactive corruption channel over Σ_sim with at most Oε(nδ) symbols corrupted, so long as |Σ_sim| × |Σ_syn| ≤ |Σ|.

(b) Suppose that n rounds of a binary one-way/interactive insertion-deletion channel with a δ fraction of insertions and deletions are given. It is possible to simulate n(1 − Θ(√(δ log(1/δ)))) rounds of a binary one-way/interactive corruption channel with a Θ(√(δ log(1/δ))) fraction of corruption errors between two parties over the given channel.
With the explicitly-constructible, locally-decodable infinite string from Theorem 7.1 utilized in the simulation, all of the simulations mentioned above take O(log n) time for the sending/starting party of the one-way/interactive communication. Further, on the other side, the simulation spends O(log^3 n) time upon arrival of each symbol and only looks up the O(log n) most recently received symbols. Overall, these simulations take O(n log^3 n) time and O(log n) space to run. These simulations can be performed even if the parties are not aware of the communication length.
Proof. We simply replace the ordinary ε-synchronization strings used in all such simulations in [20] with the highly-explicit locally-decodable infinite string from Theorem 7.1, using its corresponding local-decoding procedure instead of the minimum-RSD decoding procedure used in [20]. This preserves all properties that the simulations proposed by Haeupler et al. [20] guarantee. Further, by the properties stated in Theorem 7.1, the simulation is performed in near-linear time, i.e., O(n log^3 n). Also, constructing and decoding each symbol of the string from Theorem 7.1 only takes O(log n) space, which leads to an O(log n) memory requirement on each side of the simulation.
8 Applications: Near-Linear Time Coding Scheme for Interactive Communication
Using the near-linear time interactive channel simulation of Theorem 7.2 with the near-linear time interactive coding scheme of Haeupler and Ghaffari [11] (stated in Theorem 8.1) gives the near-linear time coding scheme for interactive communication over insertion-deletion channels stated in Theorem 8.2.
Theorem 8.1 (Theorem 1.1 from [11]). For any constant ε > 0 and n-round protocol Π there is a
randomized non-adaptive coding scheme that robustly simulates Π against an adversarial error rate
of ρ ≤ 1/4 − ε using N = O(n) rounds, a near-linear n logO(1) n computational complexity, and
failure probability 2−Θ(n) .
Theorem 8.2. For a sufficiently small δ and any n-round alternating protocol Π, there is a randomized coding scheme simulating Π in the presence of a δ fraction of edit-corruptions with constant rate (i.e., in O(n) rounds) and in near-linear time. This coding scheme works with probability 1 − 2^{−Θ(n)}.
References
[1] Noga Alon, Jeff Edmonds, and Michael Luby. Linear time erasure codes with nearly optimal recovery. In Proceedings of the Annual Symposium on Foundations of Computer Science
(FOCS), pages 512–519. IEEE, 1995.
[2] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 51–58. ACM, 2015.
[3] Zvika Brakerski and Yael Tauman Kalai. Efficient interactive coding against adversarial noise.
In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), pages
160–166. IEEE, 2012.
[4] Zvika Brakerski and Moni Naor. Fast algorithms for interactive coding. In Proceedings of the
twenty-fourth annual ACM-SIAM symposium on Discrete algorithms, pages 443–456. Society
for Industrial and Applied Mathematics, 2013.
[5] Mark Braverman, Ran Gelles, Jieming Mao, and Rafail Ostrovsky. Coding for interactive
communication correcting insertions and deletions. IEEE Transactions on Information Theory,
63(10):6256–6270, 2017.
[6] Mark Braverman and Anup Rao. Toward coding for maximum errors in interactive communication. IEEE Transactions on Information Theory, 60(11):7248–7255, 2014.
[7] Karthekeyan Chandrasekaran, Navin Goyal, and Bernhard Haeupler. Deterministic algorithms for the Lovász Local Lemma. SIAM Journal on Computing, 42(6):2132–2155, 2013.
[8] Kuan Cheng, Xin Li, and Ke Wu. Synchronization strings: Efficient and fast deterministic
constructions over small alphabets. arXiv preprint arXiv:1710.07356, 2017.
[9] Matthew Franklin, Ran Gelles, Rafail Ostrovsky, and Leonard J Schulman. Optimal coding for
streaming authentication and interactive communication. IEEE Transactions on Information
Theory, 61(1):133–145, 2015.
[10] Ran Gelles, Ankur Moitra, and Amit Sahai. Efficient coding for interactive communication.
IEEE Transactions on Information Theory, 60(3):1899–1913, 2014.
[11] Mohsen Ghaffari and Bernhard Haeupler. Optimal error rates for interactive coding II: Efficiency and list decoding. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), pages 394–403. IEEE, 2014.
[12] SW Golomb, J Davey, I Reed, H Van Trees, and J Stiffler. Synchronization. IEEE Transactions
on Communications Systems, 11(4):481–491, 1963.
[13] Venkatesan Guruswami and Piotr Indyk. Expander-based constructions of efficiently decodable
codes. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS),
pages 658–667. IEEE, 2001.
[14] Venkatesan Guruswami and Piotr Indyk. Linear-time encodable/decodable codes with near-optimal rate. IEEE Transactions on Information Theory, 51(10):3393–3400, 2005.
[15] Venkatesan Guruswami and Ray Li. Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes. In Proceedings of the 2016 IEEE International Symposium on
Information Theory, 2016.
[16] Venkatesan Guruswami and Atri Rudra. Explicit capacity-achieving list-decodable codes. In
Proceedings of the Annual Symposium on Theory of Computing (STOC), pages 1–10. ACM,
2006.
[17] Venkatesan Guruswami and Carol Wang. Deletion codes in the high-noise and high-rate
regimes. In Proceedings of the 19th International Workshop on Randomization and Computation (RANDOM), pages 867–880, 2015.
[18] Bernhard Haeupler. Interactive channel capacity revisited. In Proceedings of the Annual
Symposium on Foundations of Computer Science (FOCS), pages 226–235, 2014.
[19] Bernhard Haeupler and Amirbehshad Shahrasbi. Synchronization strings: Codes for insertions and deletions approaching the Singleton bound. In Proceedings of the Annual Symposium on Theory of Computing (STOC), 2017.
[20] Bernhard Haeupler, Amirbehshad Shahrasbi, and Ellen Vitercik. Synchronization strings:
Channel simulations and interactive coding for insertions and deletions. arXiv preprint
arXiv:1707.04233, 2017.
[21] Brett Hemenway, Noga Ron-Zewi, and Mary Wootters. Local list recovery of high-rate tensor
codes and applications. arXiv preprint arXiv:1706.03383, 2017.
[22] Gillat Kol and Ran Raz. Interactive channel capacity. In Proceedings of the forty-fifth annual
ACM symposium on Theory of computing, pages 715–724. ACM, 2013.
[23] Vladimir Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals.
Doklady Akademii Nauk SSSR 163, 4:845–848, 1965.
[24] Hugues Mercier, Vijay K Bhargava, and Vahid Tarokh. A survey of error-correcting codes
for channels with symbol synchronization errors. IEEE Communications Surveys & Tutorials,
1(12):87–96, 2010.
[25] Leonard J. Schulman. Communication on noisy channels: A coding theorem for computation.
In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), pages
724–733, 1992.
[26] Leonard J. Schulman. Coding for interactive communication. IEEE transactions on information theory, 42(6):1745–1756, 1996.
[27] Leonard J. Schulman and David Zuckerman. Asymptotically good codes correcting insertions,
deletions, and transpositions. IEEE transactions on information theory, 45(7):2552–2557, 1999.
[28] Alexander A Sherstov and Pei Wu. Optimal interactive coding for insertions, deletions, and
substitutions. In Electronic Colloquium on Computational Complexity (ECCC), volume 24,
page 79, 2017.
[29] Michael Sipser and Daniel A Spielman. Expander codes. IEEE Transactions on Information
Theory, 42(6):1710–1722, 1996.
[30] Neil JA Sloane. On single-deletion-correcting codes. Codes and Designs, de Gruyter, Berlin,
pages 273–291, 2002.
[31] Daniel Alan Spielman. Computationally efficient error-correcting codes and holographic proofs.
PhD thesis, Massachusetts Institute of Technology, 1995.
Appendices

A Alphabet Size vs Distance Function
In this section, we study the dependence of the alphabet size on the distance function f for f(l)-distance synchronization strings. We will discuss this dependence for polynomial, exponential, and super-exponential functions f. As briefly mentioned in Section 4.1, we will show that for any polynomial function f, one can find arbitrarily long f(l)-distance ε-synchronization strings over an alphabet that is polynomially large in terms of ε^{−1} (Theorem A.1). Also, in Theorem A.2, we will show that one cannot hope for such a guarantee over alphabets of sub-polynomial size in terms of ε^{−1}. Further, for exponential distance functions f, we will show that arbitrarily long f(l)-distance ε-synchronization strings exist over alphabets that are exponentially large in terms of ε^{−1} (Theorem A.1) and, furthermore, that one cannot hope for such strings over alphabets of sub-exponential size in terms of ε^{−1} (Theorem A.3). Finally, in Theorem A.4, we will show that for super-exponential f, f(l)-distance ε-synchronization strings do not exist over alphabets of constant size in terms of the string length.
Theorem A.1. For any polynomial function f , there exists an alphabet of size O(ε−4 ) over which
arbitrarily long f (l)-distance ε-synchronization strings exist. Further, for any exponential function
f , such strings exist over an alphabet of size exp(ε−1 ).
Proof. To prove this, we follow the same LLL argument as in Theorem 4.5 and [19] to prove the existence of a string that satisfies the f(l)-distance ε-synchronization string property for intervals of length t or more, and then concatenate it with 1, 2, · · · , t, 1, 2, · · · , t, · · · to take care of short intervals. We define bad events B_{i1,l1,i2,l2} in the same manner as in Theorem 4.5 and follow similar steps up until (3) by proposing x_{i1,l1,i2,l2} = D^{−ε(l1+l2)} for some D > 1 to be determined later. D has to be chosen such that for any i1, l1, i2, l2 and l = l1 + l2:

(e/(ε√|Σ|))^{εl} ≤ D^{−εl} ∏_{[S[i1,i1+l1)∪S[i2,i2+l2)] ∩ [S[i′1,i′1+l′1)∪S[i′2,i′2+l′2)] ≠ ∅} (1 − D^{−ε(l′1+l′2)}).    (22)
Note that:

D^{−εl} ∏_{[S[i1,i1+l1)∪S[i2,i2+l2)] ∩ [S[i′1,i′1+l′1)∪S[i′2,i′2+l′2)] ≠ ∅} (1 − D^{−ε(l′1+l′2)})    (23)
 ≥ D^{−εl} ∏_{l′=t}^{n} ∏_{l′1=1}^{l′} (1 − D^{−εl′})^{[(l1+l′1)+(l1+l′2)+(l2+l′1)+(l2+l′2)] f(l′)}    (24)
 = D^{−εl} ∏_{l′=t}^{n} (1 − D^{−εl′})^{4l′(l+l′) f(l′)}    (25)
 = D^{−εl} [ ∏_{l′=t}^{n} (1 − D^{−εl′})^{4l′ f(l′)} ]^l × ∏_{l′=t}^{n} (1 − D^{−εl′})^{4l′^2 f(l′)}    (26)
 ≥ D^{−εl} [ 1 − Σ_{l′=t}^{n} 4l′ f(l′) D^{−εl′} ]^l × ( 1 − Σ_{l′=t}^{n} 4l′^2 f(l′) D^{−εl′} ).    (27)
To bound this term from below, we use an upper bound on series of the form Σ_{i=t}^{∞} g(i)x^i. Note that the ratio of two consecutive terms in such a summation is at most g(t+1)x^{t+1}/(g(t)x^t). Therefore,

Σ_{i=t}^{∞} g(i)x^i ≤ g(t)x^t / (1 − g(t+1)x^{t+1}/(g(t)x^t)).
Therefore, for LLL to work, it suffices to have the following.
(e/(ε√|Σ|))^{εl} ≤ D^{−εl} ( 1 − 4t f(t) D^{−εt} / (1 − 4t f(t+1) D^{−ε(t+1)}/(4t f(t) D^{−εt})) )^l × ( 1 − 4t^2 f(t) D^{−εt} / (1 − 4t^2 f(t+1) D^{−ε(t+1)}/(4t^2 f(t) D^{−εt})) )    (28)
 = D^{−εl} ( 1 − 4t f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) )^l × ( 1 − 4t^2 f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) ).    (29)
Polynomial Distance Function: For a polynomial function f(l) = Σ_{i=0}^{d} a_i l^i of degree d, we choose t = 1/ε^2 and D = e. This choice gives that

L1 = 4t f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) = 4ε^{−2} f(ε^{−2}) e^{−1/ε} / (1 − (1 + ε^2)^d e^{−ε})

and

L2 = 4t^2 f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) = 4ε^{−4} f(ε^{−2}) e^{−1/ε} / (1 − (1 + ε^2)^d e^{−ε}).
We study these terms in the ε → 0 regime. Note that 4ε^{−2} and 4ε^{−4} are polynomial in ε^{−1} but e^{−1/ε} is exponentially small in ε^{−1}. Therefore, for sufficiently small ε,

4ε^{−2} f(ε^{−2}) e^{−1/ε}, 4ε^{−4} f(ε^{−2}) e^{−1/ε} ≤ e^{−0.9/ε}.

Also, 1 − (1 + ε^2)^d e^{−ε} = 1 − (1 + ε^2)^d (1 − ε + o(ε)) = ε − o(ε), so, for small enough ε, 1 − (1 + ε^2)^d e^{−ε} ≥ (3/4)ε. This gives that, for small enough ε,

L1, L2 ≤ e^{−0.9/ε} / ((3/4)ε) ≤ e^{−0.8/ε}.    (30)
Note that 1 − e^{−0.8/ε} ≥ e^{−ε} for 0 < ε < 1. Plugging this fact into (29) gives that, for small enough ε, the LLL condition is satisfied if

(e/(ε√|Σ|))^{εl} ≤ e^{−εl} · e^{−εl} · e^{−ε}  ⇔  (e^3/(ε√|Σ|))^{εl} ≤ e^{−ε}  ⇔  |Σ| ≥ e^{6+2/l}/ε^2  ⇐  |Σ| ≥ e^8/ε^2 = O(ε^{−2}).

Therefore, for any polynomial f, f(l)-distance ε-synchronization strings exist over alphabets of size t × |Σ| = O(ε^{−4}).
Exponential Distance Function: For an exponential function f(l) = c^l, we choose t = 1 and D = (8c)^{1/ε}. Plugging this choice of t and D into (29) turns it into the following:

(e/(ε√|Σ|))^{εl} ≤ D^{−εl} ( 1 − 4t f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) )^l × ( 1 − 4t^2 f(t) D^{−εt} / (1 − f(t+1) D^{−ε}/f(t)) )    (31)
 = (2c)^{−l} ( 1 − 4c(8c)^{−1} / (1 − c · (1/(8c))) )^l × ( 1 − 4c(8c)^{−1} / (1 − c · (1/(8c))) )    (32)
 = (1/(2c)^l) · ( 1 − (1/2)/(7/8) )^{l+1} = 2 · (3/14)^{l+1} / c^l.    (33)
Therefore, if |Σ| satisfies the following, the LLL condition will be satisfied:

(e/(ε√|Σ|))^{εl} ≤ 2 · (3/14)^{l+1} / c^l  ⇐  |Σ| ≥ (e^2/ε^2) · (14^2 c^{2/ε} / 3^2).

Therefore, for any exponential f, f(l)-distance ε-synchronization strings exist over alphabets of size c_0^{1/ε}, where c_0 is a constant depending on the base of the exponential function f.
Theorem A.2. Any alphabet Σ over which arbitrarily long f(l)-distance ε-synchronization strings exist has to be of size Ω(ε^{−1}). This holds for any function f.

Proof. We prove this theorem for f(l) = 0, i.e., for ordinary synchronization strings, which trivially extends to general f. Note that the ε-synchronization guarantee for any pair of intervals [i, j) and [j, k) where k − i < ε^{−1} dictates that no symbol can appear more than once in [i, k). Therefore, the alphabet size has to be at least ε^{−1} − 1.
Theorem A.3. Let f be an exponential function. If arbitrarily long f(l)-distance ε-synchronization strings exist over an alphabet Σ, then the size of Σ has to be at least exponentially large in terms of ε^{−1}.

Proof. Let f(l) = c^l. In a given f(l)-distance ε-synchronization string, take two intervals of lengths l1 and l2 where l1 + l2 ≤ ε^{−1}/2 < ε^{−1}. The edit distance requirement of the ε-synchronization definition requires those two intervals not to contain any common symbols. Note that this holds for any two intervals of total length l = ε^{−1}/2 in a prefix of length c^l = c^{ε^{−1}/2}. Therefore, no symbol can appear more than once throughout the first c^{ε^{−1}/2} symbols of the given string. This shows that the alphabet size has to be at least exponentially large in terms of ε^{−1}.
Theorem A.4. For any super-exponential function f and any finite alphabet Σ, there exists a positive integer n such that there are no f(l)-distance ε-synchronization strings of length n or more over Σ.

Proof. Consider a substring of length l in a given string over alphabet Σ. There are |Σ|^l many possible assignments for such a substring. Since f is a super-exponential function, for sufficiently large l ≥ ε^{−1}, we have f(l)/l ≥ |Σ|^l. For such l, consider a string of length n ≥ f(l). Split the first f(l) elements into f(l)/l blocks of length l. As f(l)/l > |Σ|^l, two of those blocks have to be identical. As l was assumed to be larger than ε^{−1}, this violates the f(l)-distance ε-synchronization property for those two blocks and therefore finishes the proof.
B Infinite Long-Distance Synchronization Strings: Efficient Constructions
In this section, we introduce and discuss the construction of infinite long-distance synchronization strings. The definition of the c-long-distance ε-synchronization property strongly depends on the length of the string: it requires any two neighboring intervals, as well as any two intervals of aggregated length c log n or more, to satisfy the ε-synchronization property. A natural generalization of this property to infinite strings is to require a similar guarantee to hold over all prefixes.

Definition B.1 (Infinite Long-Distance Synchronization Strings). An infinite string S is called a c-long-distance ε-synchronization string if every prefix S[1, n] of S is a c-long-distance ε-synchronization string of length n.
We prove that infinite long-distance synchronization strings exist and provide efficient constructions for them. We do so by providing a structure similar to the one proposed in Theorem 4.16, which constructed an infinite ε-synchronization string out of finite ε-synchronization strings.
Lemma B.2. Let A(n) be an algorithm that computes a c-long-distance ε-synchronization string S ∈ Σ^n in T(n) time. Further, let A_p(n, i) be an algorithm that computes the ith position of a c-long-distance ε-synchronization string of length n in T_p(n) time. Then, for any integer m ≥ 2, one can compose algorithms A′(n) and A′_p(i) that compute S′[1, n] and S′[i] respectively, where S′ is an infinite c-long-distance (ε + 4/(c log m))-synchronization string over Σ × Σ. Further, A′(n) and A′_p(i) run in min{T(m^n), n · T_p(m^n)} and T_p(m^i) time respectively.
Proof. We closely follow the steps taken in Theorem 4.16, except that, instead of using geometrically increasing synchronization strings in the construction of U and V, we use c-long-distance ε-synchronization strings whose lengths grow as a tower function. We define the tower function tower(p, i) for p ∈ R, i ∈ Z+ recursively as follows: tower(p, 1) = p and, for i > 1, tower(p, i) = p^{tower(p,i−1)}. Then, we define two infinite strings U and V as follows:

U = (S_{tower(m,1)}, S_{tower(m,3)}, . . . ),    V = (S_{tower(m,2)}, S_{tower(m,4)}, . . . ),

where S_l is a c-long-distance ε-synchronization string of length l. We define the infinite string T as the point-by-point concatenation of U and V.
4
We now show that this string satisfies the c-long-distance ε + c log
m -synchronization property.
∞
We define turning points {qi }i=1 in the same manner as we did in Theorem 4.16, i.e., the indices
of T where a Stower(m,i) starts. Let qi be the index where Stower(m,i+1) starts.
Consider two intervals [i1 , j1 ) and [i2 , j2 ) where j1 ≤ i2 and (j1 − i1 ) + (j2 − i2 ) ≥ c log j2 . Let k
be an integer for which qk < j2 ≤ qk+1 . Then, (j1 − i1 ) + (j2 − i2 ) ≥ c log j2 ≥ c log (tower(m, k)) =
c log m · tower(m, k − 1). Note that all but tower(m, k − 1) + tower(m, k − 3) + · · · ≤ 2 · tower(m, k −
1) many elements of T [i1 , j1 ) ∪ T [i2 , j2 ) lay in T [qk−1 , qk+1 ) which is covered by Stower(m,k−1) .
Therefore, for l = (j1 − i1 ) + (j2 − i2 )
ED(T [i1 , j1 ), T [i2 , j2 )) ≥ ED(T [max{i1 , qk−1 }, j1 ), T [i2 , j2 )) − 2 · tower(m, k − 1)
≥ (1 − ε) · [(j2 − i2 ) + (j1 − max{i1 , qk−1 })] − 2 · tower(m, k − 1)
≥ (1 − ε) · [l − 2 · tower(m, k − 1)] − 2 · tower(m, k − 1)
≥ (1 − ε) · l − 4 · tower(m, k − 1)
4 · tower(m, k − 1)
≥
1−ε−
·l
l
4
≥
1−ε−
·l
c log m
Further, any two neighboring intervals [i1, i2) and [i2, i3) with i3 − i1 < c log i3 and q_k ≤ i3 < q_{k+1} are such that [i1, i3) lies entirely within a single one of the finite synchronization strings, and therefore the ε-synchronization property for short neighboring intervals holds as well. Thus, this string satisfies the infinite c-long-distance (ε + 4/(c log m))-synchronization property.
Finally, to compute index i of the infinite string T constructed as described above, one needs to compute a single index in two finite c-long-distance ε-synchronization strings of length m^i or less. Therefore, computing T[i] takes T_p(m^i) time. This also implies that T[1, n] can be computed in n · T_p(m^n) time. Clearly, one can also compute T[1, n] by computing all finite strings that appear within the first n elements. Hence, T[1, n] is computable in min{T(m^n), n · T_p(m^n)} time.
Utilizing the construction proposed in Lemma B.2 with m = 2, along with the highly-explicit finite Oε(1)-long-distance (ε/2)-synchronization string construction introduced in Theorem 4.15, results in the following infinite string construction:
Theorem B.3. For any constant 0 < ε < 1, there is a deterministic algorithm which computes the ith position of an infinite c-long-distance ε-synchronization string S over an alphabet of size |Σ| = ε^{−O(1)}, where c = Oε(1), in Oε(i) time. This implies a quadratic-time construction of any prefix of such a string.
Variance reduction for antithetic integral control of stochastic
reaction networks
Corentin Briat∗, Ankit Gupta∗, Mustafa Khammash†
Department of Biosystems Science and Engineering,
ETH-Zürich, Switzerland
Abstract
The antithetic integral feedback motif recently introduced in [6] is known to ensure robust perfect adaptation
for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction
network. However, it was observed that it also leads to a higher variance in the controlled network than that
obtained when using a constitutive (i.e. open-loop) control strategy. This was interpreted as the cost of the
adaptation property and may be viewed as a performance deterioration for the overall controlled network. To
decrease this variance and improve the performance, we propose to combine the antithetic integral feedback
motif with a negative feedback strategy. Both theoretical and numerical results are obtained. The theoretical
ones are based on a tailored moment closure method allowing one to obtain approximate expressions for the
stationary variance for the controlled network and predict that the variance can indeed be decreased by
increasing the strength of the negative feedback. Numerical results verify the accuracy of this approximation
and show that the controlled species variance can indeed be decreased, sometimes below its constitutive level.
Three molecular networks are considered in order to verify the wide applicability of two types of negative
feedback strategies. The main conclusion is that there is a trade-off between the speed of the settling-time of
the mean trajectories and the stationary variance of the controlled species; i.e. smaller variance is associated
with larger settling-time.
Author summary
Homeostasis, the ability of living organisms to regulate their internal state, is of fundamental importance
for their adaptation to environmental changes and their survival. This is the reason why complex regulatory
genetic networks evolved and allowed for the emergence of more and more complex organisms. Recently, the
theoretical study of those regulatory networks using ideas and concepts from control theory and the design of
novel ones have gained a lot of attention. Synthetic regulatory circuits are seen as elementary building blocks
for the design of complex networks that need to incorporate some regulating elements to be fully functional.
This is for instance the case in metabolic engineering where the production of biomolecules, such as drugs
or biofuels, needs to be optimized and tightly regulated. A particular circuit, the so-called antithetic integral
controller, is now known to ensure homeostasis even when regulatory circuits are subject to randomness.
However, it is also known that this circuit increases variability in the network. The effects of a correcting
negative feedback loop on the variance are discussed here and it is shown that variability can be reduced
this way. Notably, we show that there is a tradeoff between speed of the network and variability.
∗ These two coauthors contributed equally.
† Corresponding author: [email protected]
Introduction
The design and implementation of artificial in-vivo biomolecular controllers have become very popular [4, 6,
9,11,20,25] because of their potential applications for the tight and robust control of gene expression [6], the
optimization of metabolic networks for the efficient production of biomolecules [10, 30], or the development
of new treatments for certain genetic diseases [31]. Indeed, many of the instances of those problems can
be interpreted from a homeostatic point of view in the sense that they may all be solved by achieving or
restoring homeostasis in the corresponding genetic network using synthetic regulatory circuits [6,10,27,30,31].
In this regard, those problems essentially reduce to the design and the implementation of robust and reliable
regulatory circuits that can optimize an inefficient network or correct a malfunctioning one – an observation
which strongly suggests that ideas from control theory and control engineering [2] could be adapted to
biochemical control problems [6, 12, 16]. A cornerstone in control theory and engineering is the so-called
integral controller that can ensure precise constant set-point regulation for a regulated variable in a given
system. Such a mechanism, where the action onto the controlled system depends on the integral of
the deviation of the regulated variable from the desired set-point, is to be contrasted with the so-called
proportional controller where the system is simply actuated proportionally to the deviation of the regulated
variable from the desired set-point. Unlike integral control, the latter one is unable to achieve robust constant
set-point regulation for the controlled variable and to reject constant disturbances. In other words, integral
control has the capacity of ensuring perfect adaptation for the regulated variable. The downside, however,
is that it has a destabilizing effect on the dynamics (emergence of oscillations or even diverging trajectories)
of the overall controlled system which can be then compensated by adjoining a proportional action, thus
giving rise to the so-called Proportional-Integral (PI) controller [1].
Based on the strength of these facts, an integral controller referred to as the antithetic integral controller
was proposed in [6] for the control of the mean level of some molecular species of interest in a given biochemical reaction network. This controller is implementable in terms of elementary biochemical reactions with
mass-action kinetics, making it practically implementable in-vivo using, for instance, sigma- and anti-sigma-factors [20]. This controller theoretically works in both the deterministic and the stochastic settings. In the
latter setting, it was notably shown that, under some reasonable conditions, the ergodicity properties of the
controlled network are independent of the parameters of the antithetic integral controller – a surprising key
property that has no counterpart in the deterministic setting and that dramatically simplifies its implementation. A drawback, however, is the increase of the stationary variance of the regulated species compared
to the constitutive variance that would be obtained by using a static open-loop strategy, even though the
latter one would be unable to ensure regulation and perfect adaptation for the mean level of the regulated
species. This phenomenon is seemingly analogous to the destabilizing behavior of the deterministic integral
controller mentioned in the previous paragraph. This variance increase can hence be interpreted as the price
to pay for perfect adaptation at the mean species level.
The goal of this paper is to investigate the effect of adding a negative feedback to the antithetic integral
motif in a way akin, yet different, to deterministic PI controllers. As discussed above, adding a proportional
action in the deterministic setting compensates for the destabilizing effect of the integrator. Comparatively,
it may seem reasonable to think that, in the stochastic setting, a proportional action could have an analogous
effect and would result in a decreased variance for the controlled variable (this is, for instance, what happens
when considering certain linear systems driven by white noise). In fact, it has been shown that negative
feedback at a transcriptional level in a gene expression network leads to a variance reduction in the protein
levels; see e.g. [5, 18, 24, 29] and the references therein.
Two types of negative feedback are considered in the present paper: the first one consists of an ON/OFF
proportional action whereas the second one is governed by a repressing Hill function. First we theoretically
prove using a tailored moment closure method that, in a gene expression network controlled with an antithetic
integral controller, the stationary variance in the protein copy number can be decreased by the use of a
negative feedback. In this specific case, the steady-state variance is decreasing monotonically as a function
of the strength of the negative feedback. An immediate consequence is that it is theoretically possible to
set the steady-state variance to a level that lies below the constitutive steady-state variance, which is the
value of the steady-state variance that would have been obtained using a constitutive (i.e. open-loop) control
strategy. The theoretical prediction is also confirmed by exact numerical simulations using Gillespie's
algorithm (Stochastic Simulation Algorithm, SSA [13]). A caveat, however, is that setting the gain of the
negative feedback very high will likely result in a very low steady-state variance but may also result in a
regulation error for the mean of the controlled species and in a loss of ergodicity for the overall controlled
network. In this regard, reducing the steady-state variance below its constitutive level may not always be
physically possible. Finally, it is also emphasized that a low stationary variance for the controlled species
is often associated with higher settling-time for the controlled species. Hence, there is a tradeoff between
variability and fast dynamics/small settling-time. The two negative feedback actions also exhibit quite
different behaviors. Indeed, while the ON/OFF proportional feedback seems to be efficient at reducing the
stationary variance through an increase of its gain, the dynamics of the mean gets first improved by reducing
the settling-time but then gets dramatically deteriorated by the appearance of a fast initial transient phase
followed by a very slow final one resulting then in a high settling-time. On the other hand, the Hill controller
leads to very homogeneous mean dynamics for different feedback strength but the steady-state variance is
also much less sensitive and does not vary dramatically. It is then argued that those differences may find
an explanation by reasoning in a deterministic point of view. The ON/OFF controller (an error-feedback)
introduces a stable zero in the dynamics of the closed-loop network which is small in magnitude when the
gain of the negative feedback is high. When this is the case, the zero is close to the origin and the closed-loop
dynamics will almost contain a derivative action, whence the fast initial transient phase. On the other hand,
the Hill negative feedback (an output-feedback) does not introduce such a zero in the closed-loop dynamics,
which may explain the homogeneity of the mean trajectories. Another possible reason is that the effective
proportional gain (which will be denoted by β) is much less sensitive to changes in the feedback strength
than the ON/OFF controller.
Approximate equations for the stationary variance are then obtained in the general unimolecular network
case. The obtained expressions shed some light on an interesting connection between the covariances of the
molecular species involved in the stochastic reaction network and the stability of a deterministic linear system
controlled with a standard PI controller, thereby unveiling an unexpected, yet coherent, bridge between
the stochastic and deterministic settings. Applying this more general framework to a gene expression
network with protein maturation reveals that the steady-state variance may not necessarily be
a monotonically decreasing function of the negative feedback strength. In spite of this, the same conclusions
as in the gene expression network hold: the variance can sometimes be decreased below its constitutive
level but this may also be accompanied with a loss of ergodicity. The same qualitative conclusions for the
transient of the mean dynamics and the properties of the controller also hold in this case.
Even though the proposed theory only applies to unimolecular networks, stochastic simulations are
performed for a gene expression network with protein dimerization; a bimolecular network. Once again,
the same conclusions as for the previous networks hold, with the difference that the constitutive variance level
is unknown in this case due to the openness of the moment equations. These results tend to suggest that negative
feedback seems to operate in the same way in bimolecular networks as in unimolecular networks.
Reaction networks
Let us consider a stochastic reaction network (X, R) involving d molecular species X1 , . . . , Xd interacting
through K reaction channels R1 , . . . , RK defined as
$$\mathcal{R}_k:\quad \sum_{i=1}^{d}\zeta_{k,i}^{l}\,X_i \;\xrightarrow{\;\rho_k\;}\; \sum_{i=1}^{d}\zeta_{k,i}^{r}\,X_i,\qquad k=1,\ldots,K \tag{1}$$
where ρ_k ∈ R_{>0} is the reaction rate parameter and ζ_k^r = col(ζ_{k,1}^r, …, ζ_{k,d}^r), ζ_k^l = col(ζ_{k,1}^l, …, ζ_{k,d}^l) are the right/left stoichiometric vectors of the reaction R_k. The corresponding stoichiometric vector is hence given by ζ_k := ζ_k^r − ζ_k^l ∈ Z^d, indicating that when this reaction fires, the state jumps from x to x + ζ_k. The stoichiometric matrix S ∈ Z^{d×K} of this reaction network is defined as S := [ζ_1 ⋯ ζ_K]. When the kinetics is mass-action, the propensity function λ_k of the reaction R_k is given by λ_k(x) = ρ_k ∏_{i=1}^{d} x_i!/(x_i − ζ_{k,i}^l)!.
Under the well-mixed assumption, the above network can be described by a continuous-time Markov process
(X1 (t), . . . , Xd (t))t≥0 with the d-dimensional nonnegative lattice Zd≥0 as state-space; see e.g. [3].
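To make this setup concrete, here is a minimal Gillespie SSA sketch in Python (our own illustration, not code from the paper) for a network specified by its stoichiometric matrix and a list of propensity functions; it is reused in the examples below.

```python
import numpy as np

def ssa(stoich, propensities, x0, t_end, rng=None):
    """Minimal Gillespie SSA for a network with stoichiometric matrix `stoich`
    (d species x K reactions) and a list of K propensity functions of the state."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = np.array(x0, dtype=float), 0.0
    times, states = [t], [x.copy()]
    while t < t_end:
        a = np.array([lam(x) for lam in propensities])
        a0 = a.sum()
        if a0 <= 0:
            break                                # no reaction can fire
        t += rng.exponential(1.0 / a0)           # time to the next reaction
        k = rng.choice(len(a), p=a / a0)         # which reaction fires
        x = x + stoich[:, k]                     # jump x -> x + zeta_k
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)
```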
The regulation/perfect adaptation problems and antithetic integral control
Let us consider here a stochastic reaction network (X, R). The regulation problem consists of finding another
reaction network (i.e. a set of additional species and additional reactions) interacting with (X, R) in a way
that makes the interconnection well-behaved (i.e. ergodic) and such that the mean of some molecular species
X` for some given ` ∈ {1, . . . , d} converges to a desired set-point (given here by µ/θ for some µ, θ > 0) in a
robust way; i.e. irrespective of the values of the parameters of the network (X, R).
It was shown in [6] that, under some assumptions on the network (X, R), the antithetic integral controller
defined as
$$\underbrace{\emptyset \xrightarrow{\;\mu\;} Z_1}_{\text{reference}},\qquad \underbrace{\emptyset \xrightarrow{\;\theta X_\ell\;} Z_2}_{\text{measurement}},\qquad \underbrace{Z_1 + Z_2 \xrightarrow{\;\eta\;} \emptyset}_{\text{comparison}},\qquad \underbrace{\emptyset \xrightarrow{\;k Z_1\;} X_1}_{\text{actuation}} \tag{2}$$
solves the above regulation problem. This regulatory network consists of two additional species Z1 and Z2 ,
and four additional reactions. The species Z1 is referred to as the actuating species as it is the species that
governs the rate of the actuation reaction which produces the actuated species X1 at a rate proportional to
Z1 . The species Z2 is the sensing species as it is produced at a rate proportional to the controlled species
X` through the measurement reaction. The first reaction is the reference reaction as it encodes part of the
set-point µ/θ whereas the third reaction is the comparison reaction that compares the population of the
controller species and annihilates them accordingly, thereby closing the loop negatively and, at the
same time, correlating the populations of the controller species. The comparison (or titration) reaction is the
crucial element of the above controller network and, to realize such a reaction, one needs to rely on intrinsic
strongly binding properties of certain molecules such as sigma- and anti-sigma-factors [6] or small RNAs and
RNAs [19, 25, 32].
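As a concrete illustration (our own sketch, reusing the `ssa` helper above), the closed-loop network obtained by attaching the antithetic controller (2) to the gene expression network (3) introduced in the next section can be encoded as follows; the parameter values are the ones used later in the paper's gene expression example.

```python
# Species ordering: [X1 (mRNA), X2 (protein), Z1, Z2]; controlled species is X2.
k, mu, theta, eta = 3.0, 10.0, 2.0, 100.0      # controller parameters used later in the paper
kp, gr, gp = 2.0, 2.0, 7.0                     # gene expression parameters used later in the paper

stoich = np.array([
    #  X1  X2  Z1  Z2
    [  1,  0,  0,  0],   # actuation:   0 -> X1        (rate k*Z1)
    [  0,  1,  0,  0],   # translation: X1 -> X1 + X2  (rate kp*X1)
    [ -1,  0,  0,  0],   # mRNA degradation            (rate gr*X1)
    [  0, -1,  0,  0],   # protein degradation         (rate gp*X2)
    [  0,  0,  1,  0],   # reference:   0 -> Z1        (rate mu)
    [  0,  0,  0,  1],   # measurement: 0 -> Z2        (rate theta*X2)
    [  0,  0, -1, -1],   # comparison:  Z1 + Z2 -> 0   (rate eta*Z1*Z2)
]).T  # shape (4 species, 7 reactions)

props = [
    lambda x: k * x[2],
    lambda x: kp * x[0],
    lambda x: gr * x[0],
    lambda x: gp * x[1],
    lambda x: mu,
    lambda x: theta * x[1],
    lambda x: eta * x[2] * x[3],
]

times, states = ssa(stoich, props, x0=[0, 0, 0, 0], t_end=50.0)
```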
Variance amplification in antithetic integral control
We discussed above about the convergence properties of the mean level of the controlled species X` when
network (X, R) is controlled with the antithetic integral controller (2). However, it was remarked in [6]
that while the mean of X` converges to the desired steady-state, the stationary variance of the controlled
species could be much larger than its constitutive value that would be obtained by simply considering a naive
constitutive production of the species X1 that would lead to the same mean steady-state value µ/θ. This
was interpreted as the price to pay for having the perfect adaptation property for the controlled species. To
illustrate this phenomenon, let us consider the following gene expression network:
$$\emptyset \xrightarrow{\;k_r\;} X_1,\qquad X_1 \xrightarrow{\;k_p\;} X_1 + X_2,\qquad X_1 \xrightarrow{\;\gamma_r\;} \emptyset,\qquad X_2 \xrightarrow{\;\gamma_p\;} \emptyset \tag{3}$$
where X1 denotes mRNA and X2 denotes protein. The objective here is to control the mean level of the
protein by acting at a transcriptional level using the antithetic controller (2); hence, we set kr = kZ1 . Using
a tailored moment closure method, it is proved in the SI that the stationary variance VarIπ (X2 ) for the
protein copy number is approximately given by the following expression
$$\mathrm{Var}^{I}_{\pi}(X_2) \approx \frac{\mu}{\theta}\,\frac{1+\dfrac{k_p}{\gamma_r+\gamma_p}+\dfrac{k k_p}{\gamma_r\gamma_p}}{1-\dfrac{k\theta k_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}},\qquad k>0,\ k/\eta\ll 1. \tag{4}$$
The rationale for the assumption k/η ≪ 1 is that it allows for closing the moments equation (which is open
because of the presence of the comparison reaction) and obtain a closed-form solution for the stationary
variance. On the other hand, the constitutive (i.e. open-loop) stationary variance $\mathrm{Var}^{OL}_{\pi}(X_2)$ for the protein copy number obtained with the constitutive strategy
$$k_r = \frac{\mu}{\theta}\,\frac{\gamma_r\gamma_p}{k_p} \tag{5}$$
is given by
$$\mathrm{Var}^{OL}_{\pi}(X_2) = \frac{\mu}{\theta}\left(1+\frac{k_p}{\gamma_r+\gamma_p}\right). \tag{6}$$
It is immediate to see that the ratio
$$\frac{\mathrm{Var}^{I}_{\pi}(X_2)}{\mathrm{Var}^{OL}_{\pi}(X_2)} \approx \frac{1+\dfrac{k k_p(\gamma_r+\gamma_p)}{\gamma_r\gamma_p(k_p+\gamma_r+\gamma_p)}}{1-\dfrac{k\theta k_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}},\qquad k,\theta>0,\ k/\eta\ll 1 \tag{7}$$
is greater than 1 for all k, θ > 0 such that the denominator is positive. Note that the above formula is not
valid when k = 0 or θ = 0 since this would result in an open-loop network for which set-point regulation
could not be achieved. This expression is also a monotonically increasing function of the gain k, a fact that
was numerically observed in [6]. This means that choosing k very small will only result in a small increase
of the stationary variance of the controlled species when using an antithetic integral feedback. However, this
will very likely result in very slow dynamics for the mean of the controlled species.
Finally, it is important to stress that while this formula is obviously not valid when the denominator is
nonpositive, we know from [6] that in the case of the gene expression network, the closed-loop network will
be ergodic with converging first and second-order moments for all k > 0 and all θ > 0 (assuming that the
ratio µ/θ is kept constant). This inconsistency stems from the fact that the proposed theoretical approach
relies on a tailored moment closure approximation that will turn out to be connected to the Hurwitz stability
of a certain matrix that may become unstable when the gain k of the integrator is too large. This will be
elaborated more in the following sections.
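A quick numerical check of these expressions (our own sketch; the parameter values are the ones used later in the gene expression example) illustrates both the variance inflation and its monotone growth with k:

```python
kp, gr, gp, mu, theta = 2.0, 2.0, 7.0, 10.0, 2.0   # values used later in the paper

def var_closed(k):
    """Approximate closed-loop protein variance, eq. (4) (valid for k/eta << 1)."""
    num = 1 + kp/(gr + gp) + k*kp/(gr*gp)
    den = 1 - k*theta*kp/(gr*gp*(gr + gp))
    return (mu/theta) * num/den

var_ol = (mu/theta) * (1 + kp/(gr + gp))            # constitutive variance, eq. (6)
for k in (0.1, 1.0, 3.0):
    print(f"k={k}: Var_I/Var_OL = {var_closed(k)/var_ol:.3f}")   # ratio (7): > 1 and increasing in k
```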
Negative feedback action
We will consider in this paper two types of negative feedback action. The first one, referred to as the
ON/OFF proportional feedback, is essentially theoretical and cannot be exactly implemented, but it may be
seen as a local approximation of some more complex (e.g. nonlinear) repressing function. It is given by the
reaction
$$\emptyset \xrightarrow{\;F(X_\ell)\;} X_1 \tag{8}$$
together with the propensity function F (X` ) = Kp max{0, µ − θX` } where Kp is the so-called feedback
gain/strength. It is similar to the standard proportional feedback action used in control theory with the
difference that a regularizing function, in the form of a max function, is involved in order to restrict the
propensity function to nonnegative values. Note that this controller can still be employed for the in-silico
control of single-cells using a stochastic controller as, in this case, we would not be restricted anymore to
mass-action, Hill or Michaelis-Menten kinetics. This was notably considered in the case of in-silico population
control in [7, 8, 14].
The second type of negative feedback action, referred to as the Hill feedback, consists of the reaction (8)
but involves the non-cooperative repressing Hill function F (X` ) = Kp /(1 + X` ) as propensity function. This
type of negative feedback is more realistic as such functions have empirically been shown to arise in many
biochemical, physiological and epidemiological models; see e.g. [22].
In both cases, the total rate of production of the molecular species X1 can be expressed as the sum
kZ1 + F (X` ) which means that, at stationarity, we need to have that Eπ [kZ1 + F (X` )] = u∗ where u∗ is
equal to the value of the constitutive (i.e. deterministic) production rate for X1 for which we would have
that Eπ [X` ] = µ/θ. Noting now that for both negative feedback functions, we will necessarily have that
Eπ [F (X` )] > 0, then this means that if the gain Kp is too large, it may be possible that the mean of the
controlled species does not converge to the desired set-point implying, in turn, that the overall controlled
network will fail to be ergodic. This will be notably the case when Eπ [F (X` )] > u∗ . In particular, when
F (X` ) = Kp max{0, µ − θX` }, a very conservative sufficient condition for the closed-loop network to be
ergodic is that Kp < u∗ /µ whereas when F (X` ) = Kp /(1 + X` ), this condition becomes Kp < u∗ . These
conditions can be determined by considering the worst-case mean value of the negative feedback strategies;
i.e. Kp·µ and Kp, respectively.
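The two feedback laws and these conservative ergodicity checks translate directly into code (our own sketch; the value of u* below is the constitutive rate from eq. (5) evaluated for the gene expression example, so it is an assumption tied to that example):

```python
Kp, mu, theta = 5.0, 10.0, 2.0

def F_onoff(x_l):
    """ON/OFF proportional feedback propensity, eq. (8)."""
    return Kp * max(0.0, mu - theta * x_l)

def F_hill(x_l):
    """Non-cooperative repressing Hill feedback propensity."""
    return Kp / (1.0 + x_l)

u_star = (mu / theta) * (2.0 * 7.0) / 2.0   # = 35, eq. (5) with kp=2, gr=2, gp=7 (illustrative)
print("ON/OFF conservative ergodicity bound Kp < u*/mu:", Kp < u_star / mu)
print("Hill   conservative ergodicity bound Kp < u*:   ", Kp < u_star)
```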
Results
Invariants for the antithetic integral controller
We describe some important invariant properties of the antithetic integral controller (2) which are independent of the parameters of the controlled network under the assumption that these invariants exist; i.e. they
are finite. Those invariants are given by
$$\mathrm{Cov}_\pi(X_\ell, Z_1 - Z_2) = \frac{\mu}{\theta}, \tag{9}$$
$$\mathbb{E}_\pi(Z_1 Z_2) = \frac{\mu}{\eta}, \tag{10}$$
$$\mathbb{E}_\pi(Z_1^2 Z_2) = \frac{\mu}{\eta}\left(1+\mathbb{E}_\pi(Z_1)\right), \tag{11}$$
$$\mathbb{E}_\pi(Z_1 Z_2^2) = \frac{\mu + \theta\,\mathbb{E}_\pi(X_\ell Z_2)}{\eta} \tag{12}$$
and
play an instrumental role in proving all the theoretical results of the paper. Interestingly, we can notice that
Covπ (X` , Z1 − Z2 ) = Eπ [X` ], which seems rather coincidental. From the second invariant we can observe
that, if η ≫ µ, then Eπ(Z1 Z2) ≈ 0, which indicates that the values taken by the random variable Z2(t)
will be most of the time equal to 0. Note that it cannot be Z1 (t) to be mostly taking zero values since Z1
is the actuating species whose mean must be nonzero (assuming here that the natural production rates of
the molecular species in the controlled network are small). Similarly, setting η large enough in the third
expression will lead to a similar conclusion. Note that Eπ (Z1 ) is independent of η here and only depends
on the set-point µ/θ, the integrator gain k and the parameters of the network which is controlled. The
last expression again leads to similar conclusions. Indeed, if η is sufficiently large, then Eπ (X` Z2 ) ≈ 0 and,
hence, Eπ (Z1 Z22 ) ≈ 0 which implies that Z2 (t) needs to be most of the time equal to 0. These properties
will be at the core of the moment closure method used to obtain an approximate closed-form formula for
the covariance matrix for the closed-loop network.
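These invariants are easy to check empirically; the sketch below (our own, reusing the trajectory `times, states` and the parameters from the closed-loop example above, so it only gives a rough finite-time estimate) compares time-averaged estimates with the predicted values µ/θ and µ/η.

```python
def time_average(t, v):
    """Time-weighted average of a piecewise-constant trajectory v over times t."""
    dt = np.diff(t)
    return np.sum(v[:-1] * dt) / np.sum(dt)

mask = times > 0.5 * times[-1]                  # crude burn-in removal
t_s, x_s = times[mask], states[mask]
x2, z1, z2 = x_s[:, 1], x_s[:, 2], x_s[:, 3]

cov_est = time_average(t_s, x2 * (z1 - z2)) - time_average(t_s, x2) * time_average(t_s, z1 - z2)
print("Cov(X2, Z1 - Z2) ≈", cov_est, "   predicted mu/theta =", mu / theta)
print("E[Z1*Z2]         ≈", time_average(t_s, z1 * z2), "   predicted mu/eta =", mu / eta)
```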
An approximate formula for the stationary variance of the controlled species
Let us assume here that the open-loop network (X, R) is mass-action and involves, at most, unimolecular
reactions. Hence, the vector of propensity functions can be written as
λ(x) = W x + w0
(13)
for some nonnegative matrix W ∈ RK×d and nonnegative vector w0 ∈ RK . It is proved in the SI that, under
the assumption k/η ≪ 1, we can overcome the moment closure problem arising from the presence of the
annihilation reaction in the antithetic controller and show that the exact stationary covariance matrix of the
network given by
$$\begin{bmatrix}\mathrm{Cov}^{PI}_\pi(X, X) & \mathrm{Cov}^{PI}_\pi(X, Z)\\ \mathrm{Cov}^{PI}_\pi(Z, X) & \mathrm{Var}^{PI}_\pi(Z)\end{bmatrix},\qquad Z := Z_1 - Z_2,$$
is approximatively given by the matrix Σ solving the Lyapunov equation
$$R\Sigma + \Sigma R^{T} + Q = 0 \tag{14}$$
where
$$R = \begin{bmatrix}SW - \beta e_1 e_\ell^{T} & k e_1\\ -\theta e_\ell^{T} & 0\end{bmatrix},\qquad Q = \begin{bmatrix}SDS^{T} + c\,e_1 e_1^{T} & 0\\ 0 & 2\mu\end{bmatrix},$$
$$D = \mathrm{diag}(W\,\mathbb{E}_\pi[X] + w_0),\qquad c = -\frac{1}{e_\ell^{T}(SW)^{-1}e_1}\left(\frac{\mu}{\theta} + e_\ell^{T}(SW)^{-1}S w_0\right),\qquad \beta = -\frac{\mathrm{Cov}^{PI}_\pi(F(X_\ell), X_\ell)}{\mathrm{Cov}^{PI}_\pi(X_\ell, X_\ell)}.$$
Note that since the function F is decreasing, the effective proportional gain β is always a positive
constant; it seems to depend mostly on Kp and does not seem to change much when k varies (see
e.g. Figure 19 and Figure 20 in the appendix). It can also be seen that for the Lyapunov equation to have a
positive definite solution, we need that the matrix R be Hurwitz stable; i.e. all its eigenvalues have negative
real part. In parallel of that, it is known from the results in [6] that the closed-loop network will remain
ergodic when β = 0 even when the matrix R is not Hurwitz stable. In this regard, the formula (14) can only
be valid when the parameters β and k are such that the matrix R is Hurwitz stable. When this is not the
case, the formula is outside its domain of validity and is meaningless. The stability of the matrix R is discussed
in more detail in the SI.
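In practice, once R and Q have been assembled, (14) is a standard continuous Lyapunov equation; the sketch below (our own, with a placeholder Q rather than the paper's exact construction) shows how it can be solved and guarded by the Hurwitz check:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def approx_covariance(R, Q):
    """Solve R*Sigma + Sigma*R^T + Q = 0 (eq. (14)); only meaningful when R is Hurwitz."""
    if np.linalg.eigvals(R).real.max() >= 0:
        raise ValueError("R is not Hurwitz: formula (14) is outside its domain of validity")
    return solve_continuous_lyapunov(R, -Q)   # SciPy solves A X + X A^H = Q, hence the sign flip

# Example usage with the gene-expression R of eq. (18) below and an illustrative Q:
kp, gr, gp, k, theta, beta = 2.0, 2.0, 7.0, 3.0, 2.0, 25.0
R = np.array([[-gr, -beta,    k],
              [ kp,   -gp,  0.0],
              [0.0, -theta, 0.0]])
Q = np.eye(3)            # placeholder only; the paper's Q is built from S, D, c and 2*mu
Sigma = approx_covariance(R, Q)
```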
Connection to deterministic proportional-integral control
Interestingly, the matrix R coincides with the closed-loop system matrix of a deterministic linear system
controlled with a particular proportional-integral controller. To demonstrate this fact, let us consider the
following linear system
ẋ(t) = SW x(t) + e1 u(t)
(15)
y(t) = eT` x(t)
where x is the state of the system, u is the control input and y is the measured/controlled output. We
propose to use the following PI controller in order to robustly steer the output to the desired set-point µ/θ
$$u(t) = \frac{\beta}{\theta}\,(\mu - \theta y(t)) + k\int_{0}^{t}(\mu - \theta y(s))\,ds \tag{16}$$
where θ is the sensor gain, β/θ is the proportional gain and k is the integral gain. The closed-loop system
is given in this case by
$$\begin{bmatrix}\dot{x}(t)\\ \dot{I}(t)\end{bmatrix} = \begin{bmatrix}SW - \beta e_1 e_\ell^{T} & k e_1\\ -\theta e_\ell^{T} & 0\end{bmatrix}\begin{bmatrix}x(t)\\ I(t)\end{bmatrix} + \begin{bmatrix}\frac{\beta}{\theta}\\ 1\end{bmatrix}\mu \tag{17}$$
where we can immediately recognize the R matrix involved in the Lyapunov equation (14).
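This correspondence can be visualized by integrating the deterministic closed loop (17) for the gene expression system; the sketch below (our own, with parameter values taken from the examples) checks that the output settles at µ/θ:

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, gr, gp = 2.0, 2.0, 7.0
k, theta, beta, mu = 3.0, 2.0, 25.0, 10.0
SW = np.array([[-gr, 0.0], [kp, -gp]])
e1, el = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def closed_loop(t, z):
    x, I = z[:2], z[2]
    u = (beta / theta) * (mu - theta * (el @ x)) + k * I    # PI law (16)
    return np.concatenate([SW @ x + e1 * u, [mu - theta * (el @ x)]])

sol = solve_ivp(closed_loop, (0.0, 60.0), np.zeros(3), rtol=1e-8)
print("final protein level:", sol.y[1, -1], "  set-point mu/theta =", mu / theta)
```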
Example - Gene expression network
We present here the results obtained for the gene expression network (3) using the two negative feedback
actions. In particular, we will numerically verify the validity of the formula (4) and study the influence of
the controller parameters on various properties of the closed-loop network. The matrix R is given in this
case by
$$R = \begin{bmatrix}-\gamma_r & -\beta & k\\ k_p & -\gamma_p & 0\\ 0 & -\theta & 0\end{bmatrix}. \tag{18}$$
It can be shown that the above matrix is Hurwitz stable (i.e. all its eigenvalues are located in the open left
half-plane) if and only if the parameters k, β > 0 satisfy the inequality
$$1 - \frac{k\theta k_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)} + \frac{\beta k_p}{\gamma_r\gamma_p} > 0. \tag{19}$$
Hence, given k > 0, the matrix R will be Hurwitz stable for any sufficiently large β > 0 illustrating the
stabilizing effect of the proportional action. When the above condition is met, then the closed-loop stationary
variance $\mathrm{Var}^{PI}_{\pi}(X_2)$ of the protein copy number is approximately given by the expression
$$\mathrm{Var}^{PI}_{\pi}(X_2) \approx \Sigma_{22} = \frac{\mu}{\theta}\,\frac{1+\dfrac{k_p}{\gamma_r+\gamma_p}+\dfrac{k k_p}{\gamma_r\gamma_p}+\dfrac{\beta k_p}{\gamma_r(\gamma_r+\gamma_p)}}{1-\dfrac{k\theta k_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}+\dfrac{\beta k_p}{\gamma_r\gamma_p}}. \tag{20}$$
For any fixed k > 0 such that (19) is satisfied, the closed-loop steady-state variance is a monotonically
decreasing function of β. As a consequence, there will exist a βc > 0 such that
$$\Sigma_{22} < \frac{\mu}{\theta}\left(1+\frac{k_p}{\gamma_r+\gamma_p}\right) \tag{21}$$
for all β > β_c. In particular, when β → ∞, then we have that
$$\Sigma_{22} \to \frac{\mu}{\theta}\,\frac{\gamma_p}{\gamma_r+\gamma_p} < \frac{\mu}{\theta}. \tag{22}$$
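The threshold β_c is easy to locate numerically from (20); the sketch below (our own, using the parameter values of this example with k = 3) also reports the limit (22):

```python
import numpy as np

kp, gr, gp, mu, theta, k = 2.0, 2.0, 7.0, 10.0, 2.0, 3.0

def sigma22(beta):
    """Closed-loop protein variance, eq. (20)."""
    num = 1 + kp/(gr+gp) + k*kp/(gr*gp) + beta*kp/(gr*(gr+gp))
    den = 1 - k*theta*kp/(gr*gp*(gr+gp)) + beta*kp/(gr*gp)
    return (mu/theta) * num/den

var_ol = (mu/theta) * (1 + kp/(gr+gp))                 # constitutive level, eq. (6)
betas = np.linspace(0.0, 50.0, 5001)
below = betas[np.array([sigma22(b) for b in betas]) < var_ol]
print("variance below constitutive level for beta >", below[0] if below.size else None)
print("limit of Sigma22 as beta -> infinity:", (mu/theta) * gp/(gr+gp))   # eq. (22)
```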
We now analyze the results obtained with the antithetic integral controller combined with an ON/OFF
proportional feedback. The first step is the numerical comparison of the approximate formula (20) with
the stationary variance computed using 106 SSA simulations with the parameters kp = 2, γr = 2, γp = 7,
µ = 10, θ = 2 and η = 100. The absolute value of the relative error between the exact and the approximate
stationary variance of the protein copy number for several values for the gains k and Kp is depicted in Figure
1. We can observe there that the relative error is less than 15% except when k is very small where the relative
error is much larger. However, in this latter case, the mean trajectories do not have time to converge to
their steady state value and, therefore, what is depicted in the figure for this value is not very meaningful.
In spite of that, we can observe that the approximation is reasonably accurate.
We now look at the performance of the antithetic integral controller combined with an ON/OFF proportional feedback. Figure 2 depicts the trajectories of the mean protein copy number while Figure 3 depicts
the trajectories of the variance of the protein copy number, both in the case where k = 3. Regarding the
mean copy number, we can observe that while at the beginning increasing Kp seems to improve the transient
phase, then the dynamics gets more and more abrupt at the start of the transient phase as the gain Kp
continues to increase and gets slower and slower at the end of the transient phase, making the means very
slow to converge to their set-point. On the other hand, we can see that the stationary variance seems to
be a decreasing function of the gain Kp . More interestingly, when the gain Kp exceeds 20, the stationary
variance becomes smaller than its constitutive value. Figure 4 helps at establishing the influence of the gains
k and Kp onto the stationary variance of the protein copy number. We can see that, for any k, increasing
Kp reduces the stationary variance while for any Kp , reducing k reduces the variance, as predicted by the
approximate formula (20). Hence, a suitable choice would be to pick k small and Kp large. We now compare
this choice for the parameters with the one that would lead to a small settling-time for the mean dynamics;
see Figure 5. We immediately see that a small k is not an option if one wants to have fast mean dynamics.
A sweet spot in this case would be around the right-bottom corner where the settling-time is the smallest.
Interestingly, the variance is still at a quite low level even if sometimes higher than the constitutive value.
We now perform the same analysis for the antithetic integral controller combined with the Hill feedback
and first verify the accuracy of the approximate formula (20). We can observe in Figure 6 that the formula
is very accurate in this case. To explain this, it is important to note that the gains Kp in both controllers
are not directly comparable, only the values for the parameter β are. For identical Kp ’s, the value of β for
the ON/OFF proportional feedback is much larger than for the Hill feedback (see Figure 19 and Figure 20
in the appendix). The Figure 1 and Figure 6 all together simply say that the formula is very accurate when
β is small.
We now look at the performance of the antithetic integral controller combined with a Hill feedback.
As previously, Figure 7 depicts the trajectories of the mean protein copy number while Figure 8
depicts the trajectories of the variance of the protein copy number, both in the case where k = 3. Regarding
the mean copy number, we can observe that the dynamics are much more homogeneous than in the previous
case and that increasing Kp reduces the overshoot and, hence, the settling-time. This can again be explained
by the fact that β is much smaller in this case. Similarly, the spread of the variances is much tighter than when
using the other negative feedback again because of the fact that β is small in this case. This homogeneity
is well illustrated in Figure 9 and Figure 10 where we conclude on the existence of a clear tradeoff between
settling-time and stationary variance.
As can be seen in Figure 2 and Figure 7, the mean dynamics are quite different and it would be
interesting to explain this difference in terms of control theoretic ideas. A first explanation lies in the
sensitivity of the parameter β in terms of the feedback strength Kp . In the case of the ON/OFF proportional
feedback, this sensitivity is quite high whereas it is very low in the case of the Hill feedback (see Figure 19
and Figure 20 in the appendix). This gives an explanation on why the mean trajectories are very different in
the case of the ON/OFF proportional feedback for different values of Kp while the mean trajectories are very
close to each other in the case of the Hill feedback. A second explanation lies in the type of feedback in use.
Indeed, the ON/OFF proportional feedback is an error-feedback and, when combined with the antithetic
integral controller, may introduce a stable zero in the mean dynamics. On the other hand, the Hill feedback
is an output-feedback that does not seem to introduce such a zero. When increasing the negative feedback
gain Kp , this zero moves towards the origin. Once very close to the origin, this zero will have an action in the
closed-loop mean dynamics that is very close to a derivative action, leading then to abrupt initial transient
dynamics. A theoretical basis for this discussion is developed in more details in the SI.
Figure 1: Absolute value of the relative error between the exact stationary variance of the protein copy
number and the approximate formula (20) when the gene expression network is controlled with the antithetic
integral controller (2) and an ON/OFF proportional controller.
Figure 2: Mean trajectories for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) with k = 3 and an ON/OFF proportional controller. The set-point
value is indicated as a black dotted line.
Figure 3: Variance trajectories for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) with k = 3 and an ON/OFF proportional controller. The stationary
constitutive variance is depicted in black dotted line.
Figure 4: Stationary variance for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) and an ON/OFF proportional controller.
Figure 5: Settling-time for the mean trajectories for the protein copy number when the gene expression
network is controlled with the antithetic integral controller (2) and an ON/OFF proportional controller.
Figure 6: Absolute value of the relative error between the exact stationary variance of the protein copy
number and the approximate formula (20) when the gene expression network is controlled with the antithetic
integral controller (2) and a Hill controller.
Figure 7: Mean trajectories for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) with k = 3 and a Hill controller. The set-point value is indicated
as a black dotted line.
Figure 8: Variance trajectories for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) with k = 3 and a Hill controller. The stationary constitutive
variance is depicted in black dotted line.
Figure 9: Stationary variance for the protein copy number when the gene expression network is controlled
with the antithetic integral controller (2) and a Hill controller.
Figure 10: Settling-time for the mean trajectories for the protein copy number when the gene expression
network is controlled with the antithetic integral controller (2) and a Hill controller.
Example - Gene expression network with protein maturation
The results obtained in the previous section clearly only hold for the gene expression network and it would be
quite hasty to directly generalize those results to more complex unimolecular networks. This hence motivates
the consideration of a slightly more complicated example, namely, the gene expression network involving a
protein maturation reaction given by
$$\emptyset \xrightarrow{\;k_r\;} X_1,\qquad X_1 \xrightarrow{\;k_p\;} X_1 + X_2,\qquad X_1 \xrightarrow{\;\gamma_r\;} \emptyset,\qquad X_2 \xrightarrow{\;\gamma_p\;} \emptyset,\qquad X_2 \xrightarrow{\;k_p'\;} X_3,\qquad X_3 \xrightarrow{\;\gamma_p'\;} \emptyset \tag{23}$$
where, as before, X1 denotes mRNA, X2 denotes protein and, now, X3 denotes the mature protein. In this
case, the goal is to control the average mature protein copy number by, again, acting at a transcriptional
level. As this network is still unimolecular, the proposed framework remains valid. In particular, the matrix
R is given by
$$R = \begin{bmatrix}-\gamma_r & 0 & -\beta & k\\ k_p & -(\gamma_p + k_p') & 0 & 0\\ 0 & k_p' & -\gamma_p' & 0\\ 0 & 0 & -\theta & 0\end{bmatrix} \tag{24}$$
and is Hurwitz stable provided that the two following conditions are satisfied
$$\beta < \frac{1}{k_p k_p'}\Big[(\gamma_r + \gamma_p + \gamma_p' + k_p')(\gamma_r\gamma_p + \gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_r k_p' + \gamma_p' k_p') - \gamma_r\gamma_p'(\gamma_p + k_p')\Big] \tag{25}$$
and
$$k_p^2 k_p'^2\,\beta^2 + \sigma_1\beta + \sigma_0 < 0 \tag{26}$$
where
$$\begin{aligned}
\sigma_1 &= -k_p k_p'(\gamma_r + \gamma_p + \gamma_p' + k_p')(\gamma_r\gamma_p + \gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_r k_p' + \gamma_p' k_p') + 2\gamma_r\gamma_p' k_p k_p'(\gamma_p + k_p'),\\
\sigma_0 &= -\gamma_r\gamma_p'(\gamma_p + k_p')(\gamma_r + \gamma_p + \gamma_p' + k_p')(\gamma_r\gamma_p + \gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_r k_p' + \gamma_p' k_p') + \gamma_r^2\gamma_p'^2(\gamma_p + k_p')^2 + k\,k_p k_p'\theta(\gamma_r + \gamma_p + \gamma_p' + k_p')^2.
\end{aligned} \tag{27}$$
Considering, for instance, the following parameters kp = 1, γr = 2, γp = 1, kp0 = 3, γp0 = 1, µ = 10, θ = 2
and η = 100, the above conditions reduce to
$$\beta < 30 \tag{28}$$
and
$$9\beta^2 - 246\beta + 294k - 720 < 0. \tag{29}$$
The intersection of these conditions yields the stability conditions
$$k \in (0, 49/6)\quad\text{and}\quad \beta \in \left(\frac{41 - 7\sqrt{49-6k}}{3},\; \frac{41 + 7\sqrt{49-6k}}{3}\right) \cap (0, \infty). \tag{30}$$
It can be verified that for values on the boundary of at least one of those intervals, the matrix R has
eigenvalues on the imaginary axis. Standard calculations on the moments equation show that the open-loop
variance is given by
$$\mathrm{Var}^{OL}_{\pi}(X_3) = \frac{\mu}{\theta}\left(1 + k_p k_p'\,\frac{k_p' + \gamma_r + \gamma_p + \gamma_p'}{(\gamma_r + \gamma_p')(\gamma_r + \gamma_p + k_p')(\gamma_p + \gamma_p' + k_p')}\right). \tag{31}$$
With the numerical values for the parameters previously given, the open-loop variance is approximately
equal to 37/6 ≈ 6.1667. The closed-loop variance, however, is approximately given by
$$\mathrm{Var}^{PI}_{\pi}(X_3) \approx \Sigma_{33} = \frac{\mu}{\theta}\cdot\frac{\dfrac{\theta}{\mu}\mathrm{Var}^{OL}_{\pi}(X_3) + \dfrac{\zeta_k}{\zeta_d}\,k + \dfrac{\zeta_\beta}{\zeta_d}\,\beta + \dfrac{\zeta_{k\beta}}{\zeta_d}\,k\beta}{1 + \dfrac{\xi_k}{\xi_d}\,k + \dfrac{\xi_\beta}{\xi_d}\,\beta + \dfrac{\xi_{\beta^2}}{\xi_d}\,\beta^2} \tag{32}$$
where
$$\begin{aligned}
\xi_d &= \gamma_r\gamma_p'(\gamma_r + \gamma_p')(\gamma_p + k_p')(\gamma_r + \gamma_p + k_p')(\gamma_p + \gamma_p' + k_p'),\\
\xi_k &= -k_p k_p'\theta(\gamma_r + \gamma_p + \gamma_p' + k_p')^2,\\
\xi_\beta &= k_p k_p'(\gamma_r^2\gamma_p + \gamma_r^2\gamma_p' + \gamma_r^2 k_p' + \gamma_r\gamma_p^2 + \gamma_r\gamma_p\gamma_p' + 2\gamma_r\gamma_p k_p' + \gamma_r\gamma_p'^2 + \gamma_r\gamma_p' k_p' + \gamma_r k_p'^2 + \gamma_p^2\gamma_p' + \gamma_p\gamma_p'^2 + 2\gamma_p\gamma_p' k_p' + \gamma_p'^2 k_p' + \gamma_p' k_p'^2),\\
\xi_{\beta^2} &= -k_p^2 k_p'^2
\end{aligned} \tag{33}$$
and
$$\begin{aligned}
\zeta_d &= \xi_d,\\
\zeta_k &= k_p k_p'(\gamma_r^2\gamma_p + \gamma_r^2\gamma_p' + \gamma_r^2 k_p' + \gamma_r\gamma_p^2 + 2\gamma_r\gamma_p\gamma_p' + 2\gamma_r\gamma_p k_p' + \gamma_r\gamma_p'^2 + 2\gamma_r\gamma_p' k_p' - \theta\gamma_r\gamma_p' + \gamma_r k_p'^2 + \gamma_p^2\gamma_p' + \gamma_p\gamma_p'^2 + 2\gamma_p\gamma_p' k_p' - \theta\gamma_p\gamma_p' + \gamma_p'^2 k_p' - \theta\gamma_p'^2 + \gamma_p' k_p'^2 - \theta\gamma_p' k_p'),\\
\zeta_\beta &= \gamma_p' k_p k_p'(\gamma_r^2 + \gamma_r\gamma_p + \gamma_r k_p' + \gamma_p'\gamma_r + \gamma_p^2 + 2\gamma_p k_p' + \gamma_p'\gamma_p + k_p'^2 + \gamma_p' k_p'),\\
\zeta_{k\beta} &= -k_p^2 k_p'^2.
\end{aligned} \tag{34}$$
This expression is more complex than, yet very similar to, the formula (20) obtained for the simple gene
expression network. For the considered set of parameter values, the approximated variance is a nonmonotonic
function of the parameter β as it can be theoretically observed in Figure 21 in the appendix. It turns out
that this behavior can also be observed in the numerical simulations depicted in Figure 23 in the appendix
where we can see that the variance exhibits this nonmonotonic behavior. However, it should also be pointed
out that this increase is accompanied by the emergence of a tracking error for the mean dynamics (see
Figure 23 in the appendix) and a loss of ergodicity for the overall controlled network as emphasized by
diverging mean dynamics for the sensing species (see Figure 25 in the appendix). This contrasts with the
gene expression case where the variance was a monotonically decreasing function of β. Regarding the mean
dynamics, we can see that increasing Kp and, hence, β, to reasonable levels improves the settling-time as
depicted in Figure 11 for the special case of k = 3. However, this is far from being the general case since
the settling-time can exhibit a quite complex behavior for this network (see Figure 14). The stationary variance
depicted in Figure 13 exhibits here a rather standard and predictable behavior where a small k and a large
Kp both lead to its reduction. Similar conclusions can be drawn when the network is controlled with a Hill
negative feedback controller; see Figure 26, Figure 27, Figure 28 and Figure 29 in the appendix.
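The stability window (30) and the open-loop value 37/6 quoted above can be reproduced numerically (our own sketch, building R from eq. (24) with the parameter values of this example; kpm and gpm stand for k_p' and γ_p'):

```python
import numpy as np

kp, gr, gp, kpm, gpm = 1.0, 2.0, 1.0, 3.0, 1.0
mu, theta, k = 10.0, 2.0, 3.0

def R_matrix(beta):
    """Closed-loop matrix R of eq. (24)."""
    return np.array([[-gr, 0.0, -beta, k],
                     [kp, -(gp + kpm), 0.0, 0.0],
                     [0.0, kpm, -gpm, 0.0],
                     [0.0, 0.0, -theta, 0.0]])

# Stability window in beta predicted by eq. (30) for k = 3:
lo, hi = (41 - 7*np.sqrt(49 - 6*k))/3, (41 + 7*np.sqrt(49 - 6*k))/3
for beta in (0.5*lo, 0.5*(lo + hi), 1.1*hi):
    stable = np.linalg.eigvals(R_matrix(beta)).real.max() < 0
    print(f"beta = {beta:6.2f}: Hurwitz = {stable}")

# Open-loop variance of the mature protein, eq. (31):
var_ol = (mu/theta) * (1 + kp*kpm*(kpm + gr + gp + gpm) /
                       ((gr + gpm)*(gr + gp + kpm)*(gp + gpm + kpm)))
print("open-loop variance:", var_ol)   # ≈ 37/6 ≈ 6.167
```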
Figure 11: Mean trajectories for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller. The set-point value is indicated as a black dotted line.
Figure 12: Variance trajectories for the mature protein copy number when the gene expression network
with protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller. The stationary constitutive variance is depicted in black dotted line.
Figure 13: Stationary variance for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) and an ON/OFF proportional
controller.
Figure 14: Settling-time for the mean trajectories for the mature protein copy number when the gene
expression network with protein maturation is controlled with the antithetic integral controller (2) and an
ON/OFF proportional controller.
Example - Gene expression network with protein dimerization
The proposed theory is only valid for unimolecular networks but, in spite of that, it is still interesting to see
whether similar conclusions could be obtained for a network that is not unimolecular. This motivates the
consideration of the following gene expression network with protein dimerization:
$$\emptyset \xrightarrow{\;k_r\;} X_1,\qquad X_1 \xrightarrow{\;k_p\;} X_1 + X_2,\qquad X_1 \xrightarrow{\;\gamma_r\;} \emptyset,\qquad X_2 \xrightarrow{\;\gamma_p\;} \emptyset,\qquad X_2 + X_2 \xrightarrow{\;k_d\;} X_3,\qquad X_3 \xrightarrow{\;\gamma_d\;} X_2 + X_2,\qquad X_3 \xrightarrow{\;\gamma_d'\;} \emptyset \tag{35}$$
where, as before, X1 denotes mRNA, X2 denotes protein but, now, X3 denotes a protein homodimer. In this
case, the Lyapunov equation (14) is not valid anymore because of the presence of the dimerization reaction
but we can still perform stochastic simulations. The considered parameter values are given by kp = 1, γr = 2,
γp = 1, kd = 3, γd = γd' = 1, µ = 10, θ = 2 and η = 100. The results are reported in Figure 15, Figure 16, Figure 17
and Figure 18.
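Since only simulation is available here, the network (35) together with the antithetic controller measuring X3 can be fed directly to the `ssa` sketch introduced earlier (our own illustration, reusing that helper and its numpy import; the mass-action propensity kd·x2·(x2−1) for the dimerization reaction follows the convention stated after eq. (1), and k = 3 as in the figures):

```python
# Species ordering: [X1, X2, X3 (dimer), Z1, Z2]; controlled species is X3.
kp, gr, gp, kd, gd, gdp = 1.0, 2.0, 1.0, 3.0, 1.0, 1.0
k, mu, theta, eta = 3.0, 10.0, 2.0, 100.0

stoich = np.array([
    [ 1,  0,  0,  0,  0],   # actuation: 0 -> X1          (rate k*Z1)
    [ 0,  1,  0,  0,  0],   # X1 -> X1 + X2               (rate kp*X1)
    [-1,  0,  0,  0,  0],   # X1 -> 0                     (rate gr*X1)
    [ 0, -1,  0,  0,  0],   # X2 -> 0                     (rate gp*X2)
    [ 0, -2,  1,  0,  0],   # X2 + X2 -> X3               (rate kd*X2*(X2-1))
    [ 0,  2, -1,  0,  0],   # X3 -> X2 + X2               (rate gd*X3)
    [ 0,  0, -1,  0,  0],   # X3 -> 0                     (rate gdp*X3)
    [ 0,  0,  0,  1,  0],   # reference: 0 -> Z1          (rate mu)
    [ 0,  0,  0,  0,  1],   # measurement: 0 -> Z2        (rate theta*X3)
    [ 0,  0,  0, -1, -1],   # comparison: Z1 + Z2 -> 0    (rate eta*Z1*Z2)
]).T

props = [
    lambda x: k * x[3],
    lambda x: kp * x[0],
    lambda x: gr * x[0],
    lambda x: gp * x[1],
    lambda x: kd * x[1] * (x[1] - 1),
    lambda x: gd * x[2],
    lambda x: gdp * x[2],
    lambda x: mu,
    lambda x: theta * x[2],
    lambda x: eta * x[3] * x[4],
]

t_dim, x_dim = ssa(stoich, props, x0=[0, 0, 0, 0, 0], t_end=20.0)
```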
Figure 15: Mean trajectories for the homodimer copy number when the gene expression network with protein
dimerization is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF proportional
controller. The set-point value is indicated as a black dotted line.
Figure 16: Variance trajectories for the homodimer copy number when the gene expression network with
protein dimerization is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller. The stationary constitutive variance is depicted in black dotted line.
Figure 17: Stationary variance for the homodimer copy number when the gene expression network with
protein dimerization is controlled with the antithetic integral controller (2) and an ON/OFF proportional
controller.
Figure 18: Settling-time for the mean trajectories for the homodimer copy number when the gene expression
network with protein dimerization is controlled with the antithetic integral controller (2) and an ON/OFF
proportional controller.
Discussion
Adjoining a negative feedback strategy to the antithetic integral controller was shown to reduce the stationary
variance for the controlled species, an effect that was expected from previous studies and predicted by the
obtained theoretical results. The structure of the negative feedback strategy was notably emphasized to have
important consequences on the magnitude of the variance reduction. Indeed, the ON/OFF controller can
be used to dramatically reduce the variance while still preserving the ergodicity of the closed-loop network.
This can be explained mainly because the proportional effective gain β is very sensitive to changes in the
feedback strength Kp and can reach reasonably large values (still smaller than Kp ); see Fig. 19 in the
appendix. The preservation of the ergodicity property for the closed-loop network comes from the fact that
Eπ [Kp max{0, µ−θX` }] remains smaller than the value of the nominal stationary control input (the constant
input for which the stationary mean of the controlled species equals the desired set-point) for a wide range
of values for Kp . Regarding the mean dynamics, this feedback leads to a decrease of the settling-time but
also leads to abrupt transient dynamics for large values of Kp because of the presence of a stable zero
in the mean closed-loop dynamics that is inversely proportional to β (which is very sensitive to changes
in Kp in this case and which can reach high values). Unfortunately, this controller cannot be implemented
in-vivo because it does not admit any reaction network implementation. However, it can still be implemented
in-silico for stochastic single-cell control or for the control of cell populations using, for instance, targeted
optogenetics; see e.g. [26]. On the other hand, the Hill feedback, while being practically implementable, has
a much less dramatic impact on the stationary variance and on the mean dynamics. The first reason is that
the effective proportional gain β is less sensitive with respect to changes in Kp and remains very small even
when Kp is large; see Fig. 20. The absence of zero does not lead to any abrupt transient dynamics even for
large values for Kp but this may also be due to the fact that β always remains small as opposed to as in
the ON/OFF proportional feedback case. A serious issue with this feedback is that ergodicity can be easily
lost since Eπ [Kp /(1 + X` )] becomes very quickly larger than the value of the nominal control input as we
increase Kp . The properties of both feedback strategies are summarized in Table 1.
To prove the main theoretical results, a tailored closure method had to be developed to deal with the
bimolecular comparison reaction. A similar one has also been suggested in [23] for exactly the same purpose.
These methods rely on the assumption that the molecular count of the controller species Z2 is, most of the
time, equal to 0, a property that is ensured by assuming that k/η ≪ 1. This allowed for the simplification
and the closure of the moment equations. The theory was only developed for unimolecular networks because
of the problem solvability. However, the extension of those theoretical results to more general reaction
networks, such as bimolecular networks, is a difficult task mainly because of the moment closure problem
that is now also present at the level of the species of the controlled network. In this regard, this extension is,
at the moment, only possible using existing moment closure methods (see e.g. [17, 21, 28]) which are known
to be potentially very inaccurate and would then compromise the validity of the obtained approximation.
We believe that obtaining accurate and general theoretical approximations for the stationary variance for
bimolecular networks is currently out of reach. It is also unclear whether the obtained qualitative and
quantitative results still hold when the assumption k/η ≪ 1 on the controller parameters is not met.
Interestingly, the results obtained in the current paper provide some interesting insights on an unexpected
connection between deterministic PI control and its stochastic analogues. In particular, it is possible to
observe that the destabilizing effect of deterministic integral control is analogous to the variance increase
due to the use of the stochastic antithetic integral controller. In a similar way, the stabilizing property of
deterministic proportional controllers is the deterministic analogue of the property of variance decrease of
the stochastic proportional controller; see Table 2.
The controller considered in this paper is clearly analogous to PI controllers. A usual complementary
element is the so-called derivative action (or a filtered version of it) in order to add an anticipatory effect
to the controller and prevent high overshoot; see [1]. So far, filtered versions of the derivative action have
been proposed in a deterministic setting. Notably, the incoherent feedforward loop locally behaves like a
filtered derivative action. More recently, a reaction network approximating a filtered derivative action was
proposed in [15] in the deterministic setting. It is unclear at the moment whether a stochastic version for
the derivative action can be found but it is highly possible that such a stochastic derivative action can be
implemented in terms of elementary reactions.
The negative feedback strategy considered here is an ideal/simplified one. Indeed, it was assumed in this
paper that the controlled species was directly involved in the negative feedback. However, it is very likely
that the controlled species may not be directly usable in the feedback, that intermediary species may be
involved (e.g. a gene expression network is involved in the feedback) or that the feedback is in terms of a
species upstream the controlled species (for instance feedback uses a protein while the controlled species is
the corresponding homodimer). The theory may be adapted to deal with such cases as long as the controlled
network is unimolecular. It is however expected that the same qualitative behavior will be observed. The
reason for that is that in unimolecular networks, species cooperate in the sense that they act positively on
each other. Hence, decreasing the variance of one species will also decrease the variance of all the species
that are created from it. For instance, in a gene expression network, if the mRNA variance is decreased, the
protein variance will decrease as well, and vice-versa.
Finally, the implementation of such negative feedback loops is an important, yet elusive, task. It is
unclear at the moment how in-vivo experiments could be conducted. Preliminary experimental results to
validate the theoretical/computational ones could be obtained using optogenetics and single-cell control for
population control. In-vivo experiments will certainly require a lot more effort.
Table 1: Effects of the different feedback strategies on the mean dynamics and the stationary variance.

                       ON/OFF Proportional Feedback        Hill Feedback
Ergodicity             robust (+)                          fragile (-)
β                      very sensitive (+),                 poorly sensitive (-),
                       wide range (+)                      small range
Mean Dynamics          reduce settling-time (+),           reduce settling-time (+),
                       zero dynamics (-)                   no zero dynamics (+)
Stationary variance    dramatic reduction (++)             slight reduction (+)
Table 2: The effects of the proportional and integral actions on the dynamics of a system in both the deterministic and stochastic setting.

                         Integral action            Proportional action
Deterministic setting    regulation (+),            no regulation (-),
                         destabilizing (-)          stabilizing (+)
Stochastic setting       regulation (+),            no regulation (-),
                         increases variance (-)     decreases variance (+)
Supplementary figures for the gene expression network
Figure 19: Evolution of β as a function of the gains k and Kp in a gene network controlled with an antithetic
integral controller combined with an ON/OFF proportional feedback.
Figure 20: Evolution of β as a function of the gains k and Kp in a gene network controlled with an antithetic
integral controller combined with a Hill feedback.
Supplementary figures for the gene expression network with protein
maturation
[Figure 21 plot area: vertical axis log10(Σ33) ranging from 0 to 4; horizontal axis β up to 30.]
Figure 21: Evolution of the logarithm of the predicted stationary variance for the mature protein copy
number as a function of the gain β. We can observe a nonmonotonic behavior due to the fact that the
stability region for R in terms of the parameters k and β is bounded. The black dashed line is the logarithm
of the open-loop variance.
Figure 22: Mean trajectories for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller. The set-point value is indicated as a black dotted line.
Figure 23: Variance trajectories for the mature protein copy number when the gene expression network
with protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller. The stationary constitutive variance is depicted in black dotted line.
Figure 24: Mean trajectories for the actuating species copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller.
Figure 25: Mean trajectories for the sensing species copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and an ON/OFF
proportional controller.
Figure 26: Mean trajectories for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and a Hill negative
feedback controller. The set-point value is indicated as a black dotted line.
Figure 27: Variance trajectories for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) with k = 3 and a Hill negative
feedback controller. The stationary constitutive variance is depicted in black dotted line.
Figure 28: Stationary variance for the mature protein copy number when the gene expression network with
protein maturation is controlled with the antithetic integral controller (2) and a Hill feedback controller.
Figure 29: Settling-time for the mean trajectories for the mature protein copy number when the gene
expression network with protein maturation is controlled with the antithetic integral controller (2) and a
Hill negative feedback controller.
Supplementary figures for the gene expression network with protein
dimerization
Figure 30: Mean trajectories for the homodimer copy number when the gene expression network with protein
dimerization is controlled with the antithetic integral controller (2) with k = 3 and a Hill negative feedback.
The set-point value is indicated as a black dotted line.
Figure 31: Variance trajectories for the homodimer copy number when the gene expression network with
protein dimerization is controlled with the antithetic integral controller (2) with k = 3 and a Hill negative
feedback. The stationary constitutive variance is depicted in black dotted line.
Figure 32: Stationary variance for the homodimer copy number when the gene expression network with
protein dimerization is controlled with the antithetic integral controller (2) and a Hill negative feedback.
Figure 33: Settling-time for the mean trajectories for the homodimer copy number when the gene expression
network with protein dimerization is controlled with the antithetic integral controller (2) and a Hill negative
feedback.
References
[1] K. J. Åström and T. Hägglund. PID Controllers: Theory, Design, and Tuning. Instrument Society of
America, Research Triangle Park, North Carolina, USA, 1995.
[2] P. Albertos and I. Mareels. Feedback and Control for Everyone. Springer, Berlin Heidelberg, Germany,
2010.
[3] D. Anderson and T. G. Kurtz. Continuous time Markov chain models for chemical reaction networks. In
H. Koeppl, D. Densmore, G. Setti, and M. di Bernardo, editors, Design and analysis of biomolecular circuits - Engineering Approaches to Systems and Synthetic Biology, pages 3–42. Springer Science+Business
Media, 2011.
[4] F. Annunziata, A. Matyjaszkiewicz, G. Fiore, C. S. Grierson, L. Marucci, M. di Bernardo, and N. J.
Savery. An orthogonal multi-input integration system to control gene expression in Escherichia coli (in
press). ACS Synthetic Biology, 2017.
[5] A. Becskei and L. Serrano. Engineering stability in gene networks by autoregulation. Nature, 405:590–
593, 2000.
[6] C. Briat, A. Gupta, and M. Khammash. Antithetic integral feedback ensures robust perfect adaptation
in noisy biomolecular networks. Cell Systems, 2:17–28, 2016.
[7] C. Briat and M. Khammash. Computer control of gene expression: Robust setpoint tracking of protein
mean and variance using integral feedback. In 51st IEEE Conference on Decision and Control, pages
3582–3588, Maui, Hawaii, USA, 2012.
[8] C. Briat and M. Khammash. Integral population control of a quadratic dimerization process. In 52nd
IEEE Conference on Decision and Control, pages 3367–3372, Florence, Italy, 2013.
[9] C. Briat, C. Zechner, and M. Khammash. Design of a synthetic integral feedback circuit: dynamic
analysis and DNA implementation. ACS Synthetic Biology, 5(10):1108–1116, 2016.
[10] B. F. Cress, E. A. Trantas, F. Ververidis, R. J. Linhardt, and M. A. G. Koffas. Sensitive cells: enabling
tools for static and dynamic control of microbial metabolic pathways. Current Opinion in Biotechnology,
36:205–214, 2015.
[11] C. Cuba Samaniego and E. Franco. An ultrasensitive biomolecular network for robust feedback control.
In 20th IFAC World Congress, pages 11437–11443, 2017.
[12] D. Del Vecchio and R. M. Murray, editors. Biomolecular Feedback Systems. Princeton University Press,
2015.
[13] D. T. Gillespie. A general method for numerically simulating the stochastic time evolution of coupled
chemical reactions. Journal of Computational Physics, 22(4):403–434, 1976.
[14] C. Guiver, H. Logemann, R. Rebarber, A. Bill, B. Tenhumberg, D. Hodgson, and S. Townley. Integral
control for population management. Journal of Mathematical Biology, 70:1015–1063, 2015.
[15] W. Halter, Z. A. Tuza, and F. Allgöwer. Signal differentiation with genetic networks. In 20th IFAC
World Congress, pages 10938–10943, 2017.
[16] A. W. K. Harris, J. A. Dolan, C. L. Kelly, J. Anderson, and A. Papachristodoulou. Designing genetic
feedback controllers. IEEE Transactions on Biomedical Circuits and Systems, 9(4):475–484, 2015.
[17] J. P. Hespanha. Moment closure for biochemical networks. In 3rd International Symposium on Communications, Control and Signal Processing, pages 142–147, St. Julian’s, Malta, 2008.
[18] M. Kaern, T. C. Elston, W. J. Blake, and J. J. Collins. Stochasticity in gene expression: from theories
to phenotypes. Nature Reviews Genetics, 6(6):451–464, 2005.
[19] E. Levine, Z. Zhang, T. Kuhlman, and T. Hwa. Quantitative characteristics of gene regulation by small
RNA. PLoS Biol, 5(9):e229, 2013.
[20] G. Lillacci, S. K. Aoki, D. Schweingruber, and M. Khammash. A synthetic integral feedback controller
for robust tunable regulation in bacteria. bioRxiv, 2017.
[21] P. Milner, C. S. Gillepsie, and D. J. Wilkinson. Moment closure approximations for stochastic kinetic
models with rational rate laws. Mathematical Biosciences, 231:99–104, 2011.
[22] J. D. Murray. Mathematical Biology Part I. An Introduction. 3rd Edition. Springer-Verlag Berlin
Heidelberg, 2002.
[23] N. Olsman, A.-A. Ania-Ariadna, F. Xiao, Y. P. Leong, J. Doyle, and R. Murray. Hard limits and
performance tradeoffs in a class of sequestration feedback systems. bioRxiv, 2017.
[24] J. Paulsson. Summing up the noise in gene networks. Nature, 427:415–418, 2004.
[25] Y. Qian and D. Del Vecchio. Realizing “integral control” in living cells: How to overcome leaky
integration due to dilution? bioRxiv, 2017.
[26] M. Rullan, D. Benzinger, G. W. Schmidt, A. Gupta, A. Milias-Argeitis, and M. Khammash. Optogenetic
single-cell control of transcription achieves mrna tunability and reduced variability. BioRxiv, 2017.
[27] L. Schukur and M. Fussenegger. Engineering of synthetic gene circuits for (re-)balancing physiological
processes inchronic diseases. WIREs Systems Biology and Medicine, 8:402–422, 2016.
[28] P. Smadbeck and Y. N. Kaznessis. A closure scheme for checmical master equations. Proc. Natl. Acad.
Sci. USA., 110(35):14261–14265, 2013.
40
[29] M. Thattai and A. van Oudenaarden. Intrinsic noise in gene regulatory networks. Proceedings of the
National Academy of Sciences, 98(15):8614–8619, 2001.
[30] N. Venayak, N. Anesiadis, W. R. Cluett, and R. Mahadevan. Engineering metabolism through dynamic
control. Current Opinion in Biotechnology, 34:142–152, 2015.
[31] H. Ye and M. Fussenegger. Synthetic therapeutic gene circuits in mammalian cells. FEBS Letters,
588(15):2537–2544, 2014.
[32] S. M. Yoo, D. Na, and S. Y. Lee. Design and use of synthetic regulatory small rnas to control gene
expression in escherichia coli. Nat. Protocols, 8(9):1694–1707, 2013.
41
| 3 |
Data-Aided Secure Massive MIMO Transmission
with Active Eavesdropping
arXiv:1801.07076v1 [] 22 Jan 2018
Yongpeng Wu, Chao-Kai Wen, Wen Chen, Shi Jin, Robert Schober, and Giuseppe Caire
Abstract—In this paper, we study the design of secure communication for time division duplexing multi-cell multi-user massive multiple-input multiple-output (MIMO) systems with active
eavesdropping. We assume that the eavesdropper actively attacks
the uplink pilot transmission and the uplink data transmission
before eavesdropping the downlink data transmission phase of
the desired users. We exploit both the received pilots and data
signals for uplink channel estimation. We show analytically that
when the number of transmit antennas and the length of the data
vector both tend to infinity, the signals of the desired user and the
eavesdropper lie in different eigenspaces of the received signal
matrix at the base station if their signal powers are different. This
finding reveals that decreasing (instead of increasing) the desired
user's signal power might be an effective approach to combat a
strong active attack from an eavesdropper. Inspired by this result,
we propose a data-aided secure downlink transmission scheme
and derive an asymptotic achievable secrecy sum-rate expression
for the proposed design. Numerical results indicate that under
strong active attacks, the proposed design achieves significant
secrecy rate gains compared to the conventional design employing
matched filter precoding and artificial noise generation.
I. INTRODUCTION
Wireless networks are widely used in civilian and military
applications and have become an indispensable part of our
daily lives. Therefore, security is a critical issue for future
wireless networks. Conventional security approaches based on
cryptographic techniques have many well-known weaknesses.
Therefore, new approaches to security based on information
theoretical concepts, such as the secrecy capacity of the
propagation channel, have been developed and are collectively
referred to as physical layer security [1]–[4].
Massive MIMO is a promising approach for efficient transmission of massive amounts of information and is regarded
as one of the “big three” 5G technologies [5]. Most studies
on physical layer security in massive MIMO systems assume
that the eavesdropper is passive and does not attack the
The work of Y. Wu was supported in part by NSFC No. 61701301.
The work of C.-K. Wen was supported by the Ministry of Science and
Technology of Taiwan under Grant MOST 106-2221-E-110-019. The work of
W. Chen is supported by Shanghai STCSM 16JC1402900 and 17510740700,
by National Science and Technology Major Project 2017ZX03001002-005
and 2018ZX03001009-002, by NSF China 61671294, and by Guangxi NSF
2015GXNSFDA139037. The work of S. Jin was supported in part by the
NSFC under Grant 61531011.
Y. Wu and W. Chen are with the Department of Electronic Engineering,
Shanghai Jiao Tong University, Minhang 200240, China (e-mail: [email protected]; [email protected]).
C. K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 804, Taiwan (Email:
[email protected]).
S. Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, P. R. China. (Emails: [email protected]).
R. Schober is with Institute for Digital Communications, Universität
Erlangen-Nürnberg, Cauerstrasse 7, D-91058 Erlangen, Germany (Email:
[email protected]).
G. Caire is with Institute for Telecommunication Systems, Technical
University Berlin, Einsteinufer 25, 10587 Berlin, Germany (Email: [email protected]).
communication process of the systems [6]–[9]. However, a
smart eavesdropper can perform the pilot contamination attack
to jeopardize the channel estimation process at the base station
[10]. Due to the channel hardening effect caused by large
antenna arrays, the pilot contamination attack results in a
serious secrecy threat to time division duplexing (TDD)-based
massive MIMO systems [10].
The authors of [11] propose a secret key agreement protocol
for single-cell multi-user massive MIMO systems under the
pilot contamination attack. An estimator for the base station
(BS) is designed to evaluate the information leakage. Then,
the BS and the desired users perform secure communication
by adjusting the length of the secrecy key based on the
estimated information leakage. Other works have studied how
to combat the pilot contamination attack. The authors of
[12] investigate the pilot contamination attack problem for
single-cell multi-user massive MIMO systems over independent and identically distributed (i.i.d.) fading channels. The
eavesdropper is assumed to only know the pilot signal set
whose size scales polynomially with the number of transmit
antennas. For each transmission, the desired users randomly
select certain pilot signals from this set, which are unknown
to the eavesdropper. In this case, it is proved that the impact of
the pilot contamination attack can be eliminated as the number
of transmit antennas goes to infinity. For the more pessimistic
assumption that the eavesdropper knows the exact pilot signals
of the desired users for each transmission, the secrecy threat
caused by the pilot contamination attack in multi-cell multiuser massive MIMO systems over correlated fading channels
is analyzed in [10]. Based on this, three transmission strategies
for combating the pilot contamination attack are proposed.
However, the designs in [10] are not able to guarantee a high
(or even a non-zero) secrecy rate for weakly correlated or i.i.d.
fading channels under a strong pilot contamination attack.
In this paper, we investigate secure transmission for i.i.d.
fading1 TDD multi-cell multi-user massive MIMO systems under a strong active attack. We assume the system performs first
uplink training followed by an uplink data transmission phase
and a downlink data transmission phase. The eavesdropper
jams the uplink training phase and the uplink data transmission
phase and then eavesdrops the downlink data transmission2 .
We utilize the uplink transmission data to aid the channel
estimation at the BS. Then, based on the estimated channels,
the BS designs precoders for the downlink transmission.
This paper makes the following key contributions:
1 For simplicity of presentation, we assume i.i.d. fading to present the basic
idea of data-aided secure massive MIMO transmission. The results can be
extended to the general case of correlated fading channels by combining the
techniques in [10] with those in this paper. This will be considered in extended
journal version of this paper.
2
1) We prove that when the number of transmit antennas
and the amount of transmitted data both approach infinity,
the desired users’ and the eavesdropper’s signals lie in
different eigenspaces of the uplink received signal matrix
due to their power differences. Our results reveal that
increasing the power gap between the desired users’ and
the eavesdropper’s signals is beneficial for separating the
desired users and the eavesdropper. This implies that
when facing a strong active attack, decreasing (instead
of increasing) the desired users’ signal power could be
an effective approach to enable secret communication.
2) Inspired by this observation, we propose a joint uplink and downlink data-aided transmission scheme to
combat strong active attacks from an eavesdropper.
Then, we derive an asymptotic achievable secrecy sumrate expression for this scheme. The derived expression
indicates that the impact of an active attack on the
uplink transmission can be completely eliminated by the
proposed design.
3) Our numerical results reveal that the proposed design
achieves a good secrecy performance under strong active attacks, while the conventional design employing
matched filter precoding and artificial noise generation
(MF-AN) [10] is not able to guarantee secure communication in this case.
Notation: Vectors are denoted by lower-case bold-face letters; matrices are denoted by upper-case bold-face letters.
Superscripts (·)T , (·)∗ , and (·)H stand for the matrix transpose,
conjugate, and conjugate-transpose operations, respectively.
We use tr(A) and A−1 to denote the trace and the inverse
of matrix A, respectively. diag {b} denotes a diagonal matrix
with the elements of vector b on its main diagonal. Diag {B}
denotes a diagonal matrix containing the diagonal elements of
matrix B on the main diagonal. The M × M identity matrix
is denoted by IM , and the M × N all-zero matrix and the
N × 1 all-zero vector are denoted by 0. The fields of complex
and real numbers are denoted by C and R, respectively. E [·]
denotes statistical expectation. [A]mn denotes the element in
the mth row and nth column of matrix A. [a]m denotes the
mth entry of vector a. ⊗ denotes the Kronecker product.
x ∼ CN (0, RN ) denotes a circularly symmetric complex
vector x ∈ CN ×1 with zero mean and covariance matrix RN .
var(a) denotes the variance of random variable a. [x]^+ stands
for max{0, x}. a ≫ b means that a is much larger than b.
II. UPLINK TRANSMISSION
Throughout the paper, we adopt the following transmission
protocol. We assume an uplink transmission phase, comprising
the uplink training and the uplink data transmission, which is
followed by a downlink data transmission phase.
We assume the main objective of the eavesdropper is to
eavesdrop the downlink data. The eavesdropper chooses to
attack the uplink transmission phase to impair the channel
estimation phase at the BS. The resulting mismatched channel
estimation will increase the information leakage in the subsequent downlink transmission. In the downlink transmission
phase, the eavesdropper does not attack but focuses on eavesdropping the data.
We study a multi-cell multi-user system with L + 1 cells.
We assume an Nt -antenna BS and K single-antenna users are
present in each cell. The cells are indexed by l = 0, . . . , L,
where cell l = 0 is the cell of interest. We assume an Ne antenna active eavesdropper3 is located in the cell of interest
and attempts to eavesdrop the data intended for all users in the
cell. The eavesdropper sends pilot signals and artificial noise
to interfere with channel estimation and uplink data transmission4 ,
respectively. Let T and τ denote the coherence time of the
channel and the length of the pilot signal, respectively. Then,
for uplink transmission, the received pilot signal matrix Ypm ∈
CNt ×τ and the received data signal matrix Ydm ∈ CNt ×(T −τ )
at the BS in cell m are given by5
$$\mathbf{Y}_p^m=\sqrt{P_0}\sum_{k=1}^{K}\mathbf{h}_{0k}^m\boldsymbol{\omega}_k^T+\sum_{l=1}^{L}\sum_{k=1}^{K}\sqrt{P_l}\,\mathbf{h}_{lk}^m\boldsymbol{\omega}_k^T+\sqrt{\frac{P_e}{KN_e}}\,\mathbf{H}_e^m\sum_{k=1}^{K}\mathbf{W}_k+\mathbf{N}_p^m \qquad (1)$$
$$\mathbf{Y}_d^m=\sqrt{P_0}\sum_{k=1}^{K}\mathbf{h}_{0k}^m\mathbf{d}_{0k}^T+\sum_{l=1}^{L}\sum_{k=1}^{K}\sqrt{P_l}\,\mathbf{h}_{lk}^m\mathbf{d}_{lk}^T+\sqrt{\frac{P_e}{N_e}}\,\mathbf{H}_e^m\mathbf{A}+\mathbf{N}_d^m \qquad (2)$$
where $P_0$, $\boldsymbol{\omega}_k\in\mathbb{C}^{\tau\times 1}$, and $\mathbf{d}_{0k}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{T-\tau})$ denote the average transmit power, the pilot sequence, and the uplink transmission data of the $k$th user in the cell of interest, respectively. It is assumed that the same $K$ orthogonal pilot sequences are used in each cell, where $\boldsymbol{\omega}_k^H\boldsymbol{\omega}_k=\tau$ and $\boldsymbol{\omega}_k^H\boldsymbol{\omega}_l=0$. $P_l$ and $\mathbf{d}_{lk}$ denote the average transmit power and the uplink transmission data of the $k$th user in the $l$th cell, respectively. $\mathbf{h}_{lk}^p\sim\mathcal{CN}(\mathbf{0},\beta_{lk}^p\mathbf{I}_{N_t})$ denotes the channel between the $k$th user in the $l$th cell and the BS in the $p$th cell, where $\beta_{lk}^p$ is the corresponding large-scale path loss. $\mathbf{H}_e^l$ and $P_e$ denote the channel between the eavesdropper and the base station in the $l$th cell and the average transmit power of the eavesdropper, respectively. We assume the columns of $\mathbf{H}_e^l$ are i.i.d. with Gaussian distribution $\mathcal{CN}(\mathbf{0},\beta_e^l\mathbf{I}_{N_t})$, where $\beta_e^l$ is the large-scale path loss for the eavesdropper. For the training phase, the eavesdropper attacks all the users in the cell of interest. Therefore, it uses the attacking pilot sequences $\sum_{k=1}^{K}\mathbf{W}_k$ [12], where $\mathbf{W}_k=[\boldsymbol{\omega}_k\;\cdots\;\boldsymbol{\omega}_k]^T\in\mathbb{C}^{N_e\times\tau}$. For the uplink data transmission phase, the eavesdropper generates artificial noise $\mathbf{A}\in\mathbb{C}^{N_e\times(T-\tau)}$, whose elements follow an i.i.d. standard Gaussian distribution. $\mathbf{N}_p^m\in\mathbb{C}^{N_t\times\tau}$ and $\mathbf{N}_d^m\in\mathbb{C}^{N_t\times(T-\tau)}$ are noise matrices whose columns are i.i.d. Gaussian distributed with $\mathcal{CN}(\mathbf{0},N_0\mathbf{I}_{N_t})$.
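As an illustration of the signal model in (1) and (2), the following sketch simulates one uplink coherence block at the BS of the cell of interest. All sizes, powers, and variable names below (Nt, K, L, Ne, T, tau, P0, Pl, Pe, N0) are assumptions chosen for readability and are not the simulation parameters used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions and powers for the sketch (not the paper's values).
Nt, K, L, Ne = 64, 4, 3, 2        # BS antennas, users per cell, other cells, eavesdropper antennas
T, tau = 256, 16                  # coherence block length and pilot length
P0, Pl, Pe, N0 = 1.0, 0.2, 30.0, 0.1

# Orthogonal pilots: columns of a DFT matrix satisfy w_k^H w_k = tau, w_k^H w_l = 0.
F = np.fft.fft(np.eye(tau))
Omega = F[:, :K]                  # tau x K, one pilot per user

def rayleigh(rows, cols, beta=1.0):
    """i.i.d. CN(0, beta) entries."""
    return np.sqrt(beta / 2) * (rng.standard_normal((rows, cols))
                                + 1j * rng.standard_normal((rows, cols)))

H0 = rayleigh(Nt, K)              # desired users' channels (cell 0, path loss 1)
HI = rayleigh(Nt, L * K, 0.2)     # interfering users' channels (assumed path loss 0.2)
He = rayleigh(Nt, Ne)             # eavesdropper's channel to BS 0

# Pilot phase, cf. (1): desired pilots + same pilots reused in other cells
# + the eavesdropper's pilot attack + noise.
W_sum = np.ones((Ne, 1)) @ np.sum(Omega, axis=1, keepdims=True).T   # sum_k W_k, Ne x tau
Yp = (np.sqrt(P0) * H0 @ Omega.T
      + np.sqrt(Pl) * HI @ np.tile(Omega.T, (L, 1))
      + np.sqrt(Pe / (K * Ne)) * He @ W_sum
      + np.sqrt(N0) * rayleigh(Nt, tau))

# Data phase, cf. (2): Gaussian payload symbols plus the eavesdropper's artificial noise.
D0 = rayleigh(K, T - tau)
DI = rayleigh(L * K, T - tau)
A = rayleigh(Ne, T - tau)
Yd = (np.sqrt(P0) * H0 @ D0 + np.sqrt(Pl) * HI @ DI
      + np.sqrt(Pe / Ne) * He @ A + np.sqrt(N0) * rayleigh(Nt, T - tau))

Y0 = np.concatenate([Yp, Yd], axis=1)   # full observation used for data-aided estimation
print(Y0.shape)                         # (Nt, T)
```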
We define $\mathbf{Y}^0=[\mathbf{Y}_p^0\;\;\mathbf{Y}_d^0]$ and the eigenvalue decomposition $\frac{1}{TN_t}\mathbf{Y}^0\mathbf{Y}^{0H}=[\mathbf{v}_1,\cdots,\mathbf{v}_{N_t}]\boldsymbol{\Sigma}[\mathbf{v}_1,\cdots,\mathbf{v}_{N_t}]^H$, where the eigenvalues on the main diagonal of matrix $\boldsymbol{\Sigma}$ are arranged in ascending order. For the following, we make the important assumption that, due to the strong active attack and the large-scale path loss difference between the cell of interest and the other cells, $P_e\beta_e^0$, $P_0\beta_{0k}^0$, and $P_l\beta_{lk}^0$ satisfy the relationship $P_e\beta_e^0\gg P_0\beta_{0k}^0\gg P_l\beta_{lk}^0$. Let $M=(L+1)K+N_e$ and let the vector $(\theta_1,\cdots,\theta_M)$ have the same elements as the vector $\big(P_1\beta_{11}^0,\cdots,P_L\beta_{LK}^0,\,P_0\beta_{01}^0,\cdots,P_0\beta_{0K}^0,\,P_e\beta_e^0,\cdots,P_e\beta_e^0\big)$ but with the elements arranged in ascending order, whose indices $1\le i_1\le i_2\le\cdots\le i_K\le M$ satisfy $\theta_{i_k}=P_0\beta_{0k}^0$, $k=1,2,\cdots,K$. Define $\mathbf{V}_{eq}^0=[\mathbf{v}_{N_t-M+i_1},\mathbf{v}_{N_t-M+i_2},\cdots,\mathbf{v}_{N_t-M+i_K}]$. Define $\mathbf{H}_0=[\mathbf{h}_{01}^0,\cdots,\mathbf{h}_{0K}^0]$ and $\mathbf{H}_I=[\mathbf{h}_{11}^0,\cdots,\mathbf{h}_{1K}^0,\cdots,\mathbf{h}_{L1}^0,\cdots,\mathbf{h}_{LK}^0]$.
Then, we have the following theorem.
Theorem 1. Let $\mathbf{Z}_p^0=\frac{1}{\sqrt{TN_t}}(\mathbf{V}_{eq}^0)^H\mathbf{Y}_p^0=[\mathbf{z}_{p,1}^0,\cdots,\mathbf{z}_{p,K}^0]$ and $\mathbf{H}_{eq}^0=\frac{1}{\sqrt{TN_t}}(\mathbf{V}_{eq}^0)^H\mathbf{H}_0=[\mathbf{h}_{eq,01},\cdots,\mathbf{h}_{eq,0K}]$. Then, when $T\to\infty$ and $N_t\to\infty$, the minimum mean square error (MMSE) estimate $\hat{\mathbf{h}}_{eq,0k}$ of $\mathbf{h}_{eq,0k}$ based on $\mathbf{Z}_p^0$ is given by
$$\hat{\mathbf{h}}_{eq,0k}=\frac{\sqrt{P_0}}{P_0\tau+N_0}\big(P_0\tau\,\mathbf{h}_{eq,0k}+\mathbf{n}_{eq}\big) \qquad (3)$$
where $\mathbf{n}_{eq}=(\mathbf{V}_{eq}^0)^H\tilde{\mathbf{n}}_{eq}$ and $\tilde{\mathbf{n}}_{eq}\sim\mathcal{CN}(\mathbf{0},\tau N_0\mathbf{I}_{N_t})$.
3 An $N_e$-antenna eavesdropper is equivalent to $N_e$ cooperative single-antenna eavesdroppers.
4 We note that if the eavesdropper only attacks the channel estimation phase and remains silent during the uplink data transmission, then the impact of this attack can be easily eliminated with the joint channel estimation and data detection scheme in [13]. Therefore, a smart eavesdropper will attack the entire uplink transmission.
5 For notational simplicity, we assume the users in each cell use the same transmit power [6]. Following techniques similar to those in this paper, the results can be easily extended to the case of different transmit powers for the users in each cell.
Proof. Please refer to Appendix A.
Remark 1: The basic intuition behind Theorem 1 is that
when T → ∞ and Nt → ∞, each channel tends to be an
eigenvector of the received signal matrix. As a result, we
project the received signal matrix along the eigenspace which
corresponds to the desired users’ channel. In this case, the
impact of the strong active attack can be effectively eliminated.
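The following sketch illustrates the estimator suggested by Theorem 1 on the simulated block above: eigen-decompose the sample covariance of the full uplink observation, keep the eigenvectors associated with the desired users' power level, and apply the MMSE scaling in (3). For simplicity the sketch assumes the number of eavesdropper antennas Ne is known, so that the desired users' eigenspace is taken as the K eigenvectors just below the Ne strongest ones; this is a convenience assumption, not the paper's requirement (which only needs the ordering of power levels).

```python
import numpy as np

def data_aided_estimate(Y0, Omega, P0, tau, N0, K, Ne):
    """Sketch of the Theorem-1 estimator.

    Assumes Pe*beta_e >> P0*beta_0k >> Pl*beta_lk, so the top Ne eigenvalues of
    (1/(T*Nt)) Y0 Y0^H belong to the eavesdropper and the next K to the desired users.
    """
    Nt, T = Y0.shape
    R = Y0 @ Y0.conj().T / (T * Nt)                 # sample covariance of the full block
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    V_eq = eigvec[:, Nt - Ne - K: Nt - Ne]          # desired users' eigenspace, Nt x K

    # Project the pilot part of the observation onto the selected eigenspace.
    Zp = V_eq.conj().T @ Y0[:, :tau] / np.sqrt(T * Nt)   # K x tau

    # Matched-filter over each user's pilot and apply the MMSE scaling of (3);
    # column k of H_hat plays the role of h_hat_{eq,0k}.
    H_hat = (np.sqrt(P0) / (P0 * tau + N0)) * (Zp @ Omega.conj())
    return V_eq, H_hat

# Example usage with the variables from the previous sketch (assumed names):
# V_eq, H_hat = data_aided_estimate(Y0, Omega, P0, tau, N0, K, Ne)
```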
Remark 2: In Theorem 1, we assume that the coherence time
of the channel is significantly larger than the symbol duration
[14]. This assumption can be justified based on the expression
for the coherence time in [14, Eq. (1)]. For typical speeds of
mobile users and typical symbol durations, the coherence time
can be hundreds of symbol durations or more.
Remark 3: The simulation results in Section IV indicate
that a sufficient power gap between P0 and Pe can guarantee
a good secrecy performance when the number of transmit
antennas and the coherence time of the channel are large
but not infinite. We note that allocating more power to the
desired users to combat a strong active attack is not needed.
In contrast, a larger gap between $P_0\beta_{0k}^0$ and $P_e\beta_e^0$ is beneficial for approaching the channel estimation result in Theorem 1. This implies that decreasing the power of the desired users can be an effective secure transmission strategy under a strong active attack.
Remark 4: We can use large dimension random matrix
theory [15] to obtain a more accurate approximation for the
eigenvalue distribution of $\frac{1}{TN_t}\mathbf{Y}\mathbf{Y}^H$ for the case when $N_t$ and
T are large but not infinite. Then, power design policies for
P0 , Pl , and Pe can be obtained. This will be discussed in the
extended journal version of this work.
Based on Theorem 1, we can design the precoders for
downlink transmission.
III. DOWNLINK TRANSMISSION
In this section, we consider the downlink transmission. We assume the BSs in all $L+1$ cells perform channel estimation according to Theorem 1, by replacing $\hat{\mathbf{h}}_{eq,0k}$, $\mathbf{h}_{eq,0k}$, $P_0$, and $\mathbf{V}_{eq}^0$ with $\hat{\mathbf{h}}_{eq,lk}$, $\mathbf{h}_{eq,lk}$, $P_l$, and $\mathbf{V}_{eq}^l$, respectively. Then, the $l$th BS designs the transmit signal as follows
$$\mathbf{x}_l=\sqrt{P}\sum_{k=1}^{K}\mathbf{t}_{lk}s_{lk},\quad l=0,\cdots,L, \qquad (4)$$
where $P$ is the downlink transmission power, $\mathbf{t}_{lk}=\mathbf{V}_{eq}^l\frac{\hat{\mathbf{h}}_{eq,lk}}{\|\hat{\mathbf{h}}_{eq,lk}\|}$, and $s_{lk}$ is the downlink transmitted signal for the $k$th user in the $l$th cell.
For the proposed precoder design, the base station only
needs to know the statistical channel state information of the
eavesdropper Pe βe0 in order to construct V0 . This assumption
is justified in [10].
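A minimal sketch of the precoder construction in (4) is given below: each BS beamforms along its projected channel estimate lifted back to the antenna domain by the corresponding eigenbasis. V_eq and H_hat are the (assumed) outputs of the estimation sketch above.

```python
import numpy as np

def downlink_precoders(V_eq, H_hat, P):
    """Return the Nt x K matrix [t_1, ..., t_K] of scaled precoders for one cell."""
    norms = np.linalg.norm(H_hat, axis=0)        # ||h_hat_{eq,k}|| per user
    T_mat = V_eq @ (H_hat / norms)               # t_k = V_eq h_hat_{eq,k} / ||h_hat_{eq,k}||
    return np.sqrt(P) * T_mat                    # transmit directions scaled by sqrt(P)

# Example: x = downlink_precoders(V_eq, H_hat, P=1.0) @ s, where s is the K x 1
# vector of data symbols s_{lk} for this cell.
```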
Because each user in the cell of interest has the risk of being
eavesdropped, an achievable ergodic secrecy sum-rate can be
expressed as [16]
$$R_{\mathrm{sec}}=\sum_{k=1}^{K}\big[R_k-C_k^{\mathrm{eve}}\big]^{+} \qquad (5)$$
where Rk and Ckeve denote an achievable ergodic rate between
the BS and the kth user and the ergodic capacity between the
BS and the eavesdropper seeking to decode the information
of the kth user, respectively.
The received signal y0k at the kth user in the cell of interest
is given by
$$y_{0k}=\sum_{l=0}^{L}(\mathbf{h}_{0k}^l)^H\mathbf{x}_l+n_d=\sqrt{P}\,(\mathbf{h}_{0k}^0)^H\mathbf{V}_{eq}^0\frac{\hat{\mathbf{h}}_{eq,0k}}{\|\hat{\mathbf{h}}_{eq,0k}\|}s_{0k}+\sqrt{P}\,(\mathbf{h}_{0k}^0)^H\mathbf{V}_{eq}^0\sum_{t=1,t\neq k}^{K}\frac{\hat{\mathbf{h}}_{eq,0t}}{\|\hat{\mathbf{h}}_{eq,0t}\|}s_{0t}+\sqrt{P}\sum_{l=1}^{L}(\mathbf{h}_{0k}^l)^H\mathbf{V}_{eq}^l\sum_{t=1}^{K}\frac{\hat{\mathbf{h}}_{eq,lt}}{\|\hat{\mathbf{h}}_{eq,lt}\|}s_{lt}+n_d \qquad (6)$$
where nd ∼ CN (0, N0d ) is the noise in the downlink
transmission.
We use a lower bound for the achievable ergodic rate Rk
as follows [17]
$$\bar{R}_k=\log\big(1+\gamma_k\big) \qquad (7)$$
where
$$\gamma_k=\frac{\mathrm{E}\big[g_{0k,k}^0\big]^2}{N_0^d+\mathrm{var}\big(g_{0k,k}^0\big)+\sum_{t=1,t\neq k}^{K}\mathrm{E}\big[|g_{0t,k}^0|^2\big]+\sum_{l=1}^{L}\sum_{t=1}^{K}\mathrm{E}\big[|g_{lt,k}^0|^2\big]} \qquad (8)$$
and
$$g_{lt,k}^0=\sqrt{P}\,(\mathbf{h}_{0k}^l)^H\mathbf{V}_{eq}^l\frac{\hat{\mathbf{h}}_{eq,lt}}{\|\hat{\mathbf{h}}_{eq,lt}\|}.$$
For Ckeve , we adopt the same pessimistic assumption as in
[10], i.e., we assume that the eavesdropper can eliminate all
interference from intra and inter-cell users to obtain an upper
bound of Ckeve as follows
$$C_{k,\mathrm{upper}}^{\mathrm{eve}}=\mathrm{E}\bigg[\log_2\bigg(1+\frac{P}{N_0}\,\frac{g_{\mathrm{eve}}}{\|\hat{\mathbf{h}}_{eq,0k}\|^2}\bigg)\bigg] \qquad (9)$$
where
$$g_{\mathrm{eve}}=\hat{\mathbf{h}}_{eq,0k}^H(\mathbf{V}_{eq}^0)^H\mathbf{H}_e^0(\mathbf{H}_e^0)^H\mathbf{V}_{eq}^0\hat{\mathbf{h}}_{eq,0k}. \qquad (10)$$
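For intuition, the sketch below evaluates a simplified per-realization version of the secrecy sum-rate (5): the user rate uses an effective SINR in the spirit of (7)-(8), and the eavesdropper term uses the pessimistic bound (9)-(10). Averaging over many channel draws approximates the ergodic quantities; this is an illustrative sketch, not the paper's exact Monte Carlo procedure, and all argument names are assumptions.

```python
import numpy as np

def secrecy_sum_rate(h_desired, precoders_all, He, V_eq, H_hat, P, N0d, N0):
    """h_desired: Nt x K channels of the cell-0 users (to BS 0).
    precoders_all: list over cells l = 0..L of Nt x K precoder matrices (already scaled by sqrt(P)),
                   with cell 0 first. He: Nt x Ne eavesdropper channel to BS 0."""
    Nt, K = h_desired.shape
    rate = 0.0
    for k in range(K):
        gains = np.concatenate(
            [np.abs(h_desired[:, k].conj() @ T_l) ** 2 for T_l in precoders_all])
        sig = gains[k]                         # user k's own beam from BS 0
        interf = gains.sum() - sig             # intra- and inter-cell interference
        R_k = np.log2(1.0 + sig / (N0d + interf))

        # Eavesdropper bound, cf. (9)-(10): interference-free reception of user k's beam.
        g_eve_norm = (np.linalg.norm(He.conj().T @ V_eq @ H_hat[:, k]) ** 2
                      / np.linalg.norm(H_hat[:, k]) ** 2)
        C_eve = np.log2(1.0 + (P / N0) * g_eve_norm)

        rate += max(0.0, R_k - C_eve)          # [R_k - C_k^eve]^+
    return rate
```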
Based on (5), (7), and (9), we have the following theorem.
Theorem 2. For the considered multi-cell multi-user massive
MIMO system, an asymptotic achievable secrecy sum-rate for
the transmit signal design in (4) is given by
$$R_{\mathrm{sec,ach}} \xrightarrow{N_t\to\infty} \sum_{k=1}^{K}\log\big(1+\bar{\gamma}_k\big) \qquad (11)$$
where
$$\bar{\gamma}_k=\frac{a_1}{N_0^d+P(a_2-a_1)+P(K-1)\beta_{0k}^0+PK\sum_{l=1}^{L}\beta_{lk}^0} \qquad (12)$$
$$a_1=\frac{P_0\tau\big(P_0\tau\beta_{0k}^0(N_t+K-1)+KN_0\big)}{(P_0\tau+N_0)^2} \qquad (13)$$
$$a_2=\frac{P_0\tau\big(\beta_{0k}^0N_t+\beta_{0k}^0(K-1)\big)^2+N_0N_t\beta_{0k}^0+3(K-1)\beta_{0k}^0N_0}{P_0\tau\beta_{0k}^0(N_t+K-1)+N_0} \qquad (14)$$
Fig. 1: Secrecy rate vs. SNR for T = 1024, P0/N0 = 5 dB, ρ = 30, and different numbers of users (exact and asymptotic secrecy rates for K = 3 and K = 5; secrecy rate in b/s/Hz, SNR in dB).
Proof. Please refer to Appendix B.
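The helper below evaluates the asymptotic rate of Theorem 2 numerically. The expressions for a1, a2 and the effective SINR follow (12)-(14) as reconstructed above, so treat them as a reading of the source rather than verified formulas; the example sweep mirrors the setup later reported for Fig. 1 (L = 3, Nt = 128, path losses 1 and 0.2, P0/N0 = 5 dB), with the pilot length tau chosen here as an assumption since it is not stated in this excerpt.

```python
import numpy as np

def asymptotic_secrecy_rate(Nt, K, L, tau, P, P0, N0, N0d, beta0, beta_l):
    """Asymptotic secrecy sum-rate per Theorem 2; beta0 / beta_l are the
    path losses of a cell-0 user and an interfering user, respectively."""
    a1 = P0 * tau * (P0 * tau * beta0 * (Nt + K - 1) + K * N0) / (P0 * tau + N0) ** 2
    a2 = ((P0 * tau * (beta0 * Nt + beta0 * (K - 1)) ** 2
           + N0 * Nt * beta0 + 3 * (K - 1) * beta0 * N0)
          / (P0 * tau * beta0 * (Nt + K - 1) + N0))
    denom = N0d + P * (a2 - a1) + P * (K - 1) * beta0 + P * K * L * beta_l
    return K * np.log2(1.0 + a1 / denom)       # rate in b/s/Hz (log base 2 assumed)

# Example SNR sweep (assumed tau = 16, P = 1):
P0, N0, tau = 1.0, 10 ** (-5 / 10), 16         # P0/N0 = 5 dB
for snr_db in range(-20, 11, 5):
    P = 1.0
    N0d = P / 10 ** (snr_db / 10)
    r = asymptotic_secrecy_rate(Nt=128, K=5, L=3, tau=tau, P=P, P0=P0,
                                N0=N0, N0d=N0d, beta0=1.0, beta_l=0.2)
    print(f"SNR = {snr_db:3d} dB -> asymptotic secrecy sum-rate ~ {r:5.2f} b/s/Hz")
```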
Theorem 2 is a general expression which is valid for arbitrary K and L. Also, Theorem 2 indicates that when $N_t$ tends to infinity, the impact of the active attack from the eavesdropper disappears if the proposed joint uplink and downlink transmission design is adopted.
IV. NUMERICAL RESULTS
In this section, we present numerical results to examine the proposed design and the obtained analytical results. We set $L=3$, $N_t=128$, $\beta_{0k}^0=1$, $k=1,\cdots,K$, $\beta_{lk}^0=0.2$, $k=1,\cdots,K$, $l=1,\cdots,L$, and $P_0=P_1=\cdots=P_L$. We define the signal-to-noise ratio (SNR) as $\mathrm{SNR}=P/N_0^d$. Also, we define $\rho=P_E/(P_0K)$.
Figure 1 plots the asymptotic and exact secrecy rate performance vs. the SNR for T = 1024, P0/N0 = 5 dB, ρ = 30, and different numbers of users. The exact secrecy rate is obtained based on Monte Carlo simulation of (8) and (9). We note from Figure 1 that the asymptotic secrecy rate in Theorem 2 provides a good estimate of the exact secrecy rate.
Figure 2 compares the secrecy performance of the proposed design and the MF-AN design in [10] for large but finite $N_t$ and T as a function of ρ for K = 5, P0/N0 = 5 dB, SNR = 5 dB, and different values of T. We keep P0 constant and increase Pe to increase ρ. We observe from Figure 2 that when the power of the active attack is strong, the MF-AN design cannot provide a non-zero secrecy rate. However, our proposed design performs well in the entire considered range of ρ. As ρ increases, the gap between $P_e\beta_e^0$ and $P_0\beta_{0k}^0$ increases as well. Therefore, the secrecy rate increases with ρ for the proposed design. Moreover, Figure 2 reveals that increasing T is beneficial for the secrecy performance of the proposed design.
Fig. 2: Secrecy rate vs. ρ for K = 5, P0/N0 = 5 dB, SNR = 5 dB, and different values of T (proposed design for T = 4096 and T = 1024, and the MF-AN design; secrecy rate in b/s/Hz).
V. CONCLUSIONS
In this paper, we have proposed a data-aided secure transmission scheme for multi-cell multi-user massive MIMO systems which are under a strong active attack. We exploit the received uplink data signal for joint uplink channel estimation and secure downlink transmission. We show analytically that when the number of transmit antennas and the length of the data vector both approach infinity, the proposed design can effectively eliminate the impact of an active attack by an eavesdropper. Numerical results validate our theoretical analysis and demonstrate the effectiveness of the proposed design under strong active attacks.
APPENDIX A
PROOF OF THEOREM 1
We define $\boldsymbol{\Omega}_0=[\boldsymbol{\omega}_1,\cdots,\boldsymbol{\omega}_K]^T$, $\mathbf{D}_0=\sqrt{P_0}[\mathbf{d}_{01},\cdots,\mathbf{d}_{0K}]^T$, $\boldsymbol{\Omega}_L=[\sqrt{P_1}\boldsymbol{\Omega}_0^T,\cdots,\sqrt{P_L}\boldsymbol{\Omega}_0^T]^T$, $\mathbf{D}_L=[\sqrt{P_1}\mathbf{d}_{11},\cdots,\sqrt{P_1}\mathbf{d}_{1K},\cdots,\sqrt{P_L}\mathbf{d}_{L1},\cdots,\sqrt{P_L}\mathbf{d}_{LK}]^T$, $\mathbf{X}_0=[\sqrt{P_0}\boldsymbol{\Omega}_0\;\;\mathbf{D}_0]$, $\mathbf{X}_I=[\boldsymbol{\Omega}_L\;\;\mathbf{D}_L]$, and $\mathbf{X}_e=\big[\sqrt{\tfrac{P_E}{KN_e}}\sum_{k=1}^{K}\mathbf{W}_k\;\;\sqrt{\tfrac{P_E}{N_e}}\mathbf{A}\big]$.
Based on (1) and (2), the received signal $\mathbf{Y}^0$ can be re-expressed as
$$\mathbf{Y}^0=\mathbf{H}_0\mathbf{X}_0+\mathbf{H}_I\mathbf{X}_I+\mathbf{H}_e^0\mathbf{X}_e+\mathbf{N} \qquad (15)$$
where $\mathbf{N}=[\mathbf{N}_p\;\;\mathbf{N}_d]$.
When $T\to\infty$, based on [18, Corollary 1], we obtain (16), given below, where
$$\mathbf{U}_Y=\big[\mathbf{U}_W\;\;\mathbf{H}_I\mathbf{B}_I^{-1/2}\;\;\mathbf{H}_e(\beta_e^0)^{-1/2}\;\;\mathbf{H}_0\mathbf{B}_0^{-1/2}\big] \qquad (17)$$
$$\mathbf{B}_0=\mathrm{diag}\big(\beta_{01}^0,\cdots,\beta_{0K}^0\big) \qquad (18)$$
$$\mathbf{B}_I=\mathrm{diag}\big(\beta_{11}^0,\cdots,\beta_{1K}^0,\cdots,\beta_{L1}^0,\cdots,\beta_{LK}^0\big) \qquad (19)$$
$$\mathbf{P}_I=\mathrm{diag}\big(P_1,\cdots,P_1,\cdots,P_L,\cdots,P_L\big) \qquad (20)$$
and $\mathbf{U}_W\in\mathbb{C}^{N_t\times(N_t-M)}$ has orthogonal columns. When $N_t\to\infty$, we have
$$\frac{1}{N_t}\mathbf{U}_Y^H\mathbf{U}_Y \xrightarrow{N_t\to\infty} \mathbf{I}_{N_t}. \qquad (21)$$
$$\frac{1}{N_tT}\mathbf{Y}^0\mathbf{Y}^{0H} \xrightarrow{T\to\infty} \frac{1}{N_tT}\mathbf{H}_0\mathbf{X}_0\mathbf{X}_0^H\mathbf{H}_0^H+\frac{1}{N_tT}\mathbf{H}_I\mathbf{X}_I\mathbf{X}_I^H\mathbf{H}_I^H+\frac{1}{N_tT}\mathbf{H}_e^0\mathbf{X}_e\mathbf{X}_e^H\mathbf{H}_e^{0H}+\frac{N_0}{N_t}\mathbf{I}_{N_t}$$
$$=\frac{1}{N_t}\big[\mathbf{U}_W\;\;\mathbf{H}_I\mathbf{B}_I^{-1/2}\;\;\mathbf{H}_e(\beta_e^0)^{-1/2}\;\;\mathbf{H}_0\mathbf{B}_0^{-1/2}\big]\,\mathrm{blkdiag}\Big(N_0\mathbf{I}_{N_t-M},\;\tfrac{1}{T}\mathbf{B}_I^{1/2}\mathbf{X}_I\mathbf{X}_I^H\mathbf{B}_I^{1/2}+N_0\mathbf{I}_{(L-1)K},\;\tfrac{1}{T}\beta_e^0\mathbf{X}_e\mathbf{X}_e^H+N_0\mathbf{I}_{N_e},\;\tfrac{1}{T}\mathbf{B}_0^{1/2}\mathbf{X}_0\mathbf{X}_0^H\mathbf{B}_0^{1/2}+N_0\mathbf{I}_K\Big)\big[\mathbf{U}_W\;\;\mathbf{H}_I\mathbf{B}_I^{-1/2}\;\;\mathbf{H}_e(\beta_e^0)^{-1/2}\;\;\mathbf{H}_0\mathbf{B}_0^{-1/2}\big]^H$$
$$\xrightarrow{T\to\infty} \frac{1}{N_t}\mathbf{U}_Y\,\mathrm{blkdiag}\Big(N_0\mathbf{I}_{N_t-M},\;\mathbf{P}_I\mathbf{B}_I+N_0\mathbf{I}_{(L-1)K},\;\big(\beta_e^0P_e+N_0\big)\mathbf{I}_{N_e},\;P_0\mathbf{B}_0+N_0\mathbf{I}_K\Big)\mathbf{U}_Y^H \qquad (16)$$
From (16)–(21), we know that for $T\to\infty$, $N_t\to\infty$, $\mathbf{U}_Y$ is the right singular matrix of $\mathbf{Y}^0$. Therefore, we obtain
$$\mathbf{Z}=\frac{1}{\sqrt{TN_t}}(\mathbf{V}_{eq}^0)^H\mathbf{Y}_p^0 \xrightarrow{N_t\to\infty} \frac{\sqrt{P_0}}{\sqrt{TN_t}}(\mathbf{V}_{eq}^0)^H\mathbf{H}_0\boldsymbol{\Omega}_0+\frac{1}{\sqrt{TN_t}}(\mathbf{V}_{eq}^0)^H\mathbf{N}_p^0. \qquad (22)$$
Define $\mathbf{z}=\mathrm{vec}(\mathbf{Z}_p^0)$, where $\mathbf{Z}_p^0$ is defined in Theorem 1. From (22), we can re-express the equivalent received signal during the pilot transmission phase as follows
$$\mathbf{z}=\sqrt{P_0}\sum_{t=1}^{K}(\boldsymbol{\omega}_t\otimes\mathbf{I}_K)\mathbf{h}_{eq,0t}+\mathbf{n} \qquad (23)$$
where
$$\mathbf{n}=\Big[\big((\mathbf{V}_{eq}^0)^H\mathbf{n}_{p1}^0\big)^T,\cdots,\big((\mathbf{V}_{eq}^0)^H\mathbf{n}_{p\tau}^0\big)^T\Big]^T \qquad (24)$$
and $\mathbf{n}_{pt}^0$ in (24) is the $t$th column of $\mathbf{N}_p^0$.
Based on (23), the MMSE estimate of $\mathbf{h}_{eq,0k}$ is given by
$$\hat{\mathbf{h}}_{eq,0k}=\sqrt{P_0}\big(P_0\tau\mathbf{I}_K+N_0\mathbf{I}_K\big)^{-1}(\boldsymbol{\omega}_k\otimes\mathbf{I}_K)^H\mathbf{z}=\frac{\sqrt{P_0}}{P_0\tau+N_0}\Big(P_0\tau\,\mathbf{h}_{eq,0k}+(\boldsymbol{\omega}_k\otimes\mathbf{I}_K)^H\mathbf{n}\Big). \qquad (25)$$
For the noise term in (25), we have
$$(\boldsymbol{\omega}_k\otimes\mathbf{I}_K)^H\mathbf{n}=(\mathbf{V}_{eq}^0)^H\sum_{t=1}^{\tau}\omega_{kt}^{*}\mathbf{n}_{pt}^0=(\mathbf{V}_{eq}^0)^H\tilde{\mathbf{n}}_{eq} \qquad (26)$$
where $\omega_{kt}$ is the $t$th element of $\boldsymbol{\omega}_k$. Combining (25) and (26) completes the proof.
APPENDIX B
PROOF OF THEOREM 2
First, based on the property of MMSE estimates, we know that $\mathrm{E}\big[g_{0k,k}^0\big]=\sqrt{P}\,\mathrm{E}\big[\|\hat{\mathbf{h}}_{eq,0k}\|\big]$.
Based on (3) and (16), we have
$$\|\hat{\mathbf{h}}_{eq,0k}\|^2=\frac{P_0}{(P_0\tau+N_0)^2}\Big[P_0\tau^2\frac{1}{N_t}\sum_{t=1}^{K}\mathbf{h}_{0k}^{0H}\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\mathbf{h}_{0k}^0+\frac{1}{N_t}\sum_{t=1}^{K}\tilde{\mathbf{n}}_{eq}^H\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\tilde{\mathbf{n}}_{eq}\Big]. \qquad (27)$$
When $N_t\to\infty$, based on [18, Corollary 1], we have
$$\frac{1}{N_t}\mathbf{h}_{0k}^{0H}\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \frac{\beta_{0k}^0(\beta_{0t}^0)^{-1}}{N_t}\mathrm{tr}\big(\mathbf{h}_{0t}^0\mathbf{h}_{0t}^{0H}\big) \xrightarrow{N_t\to\infty} \beta_{0k}^0,\quad t\neq k \qquad (28)$$
$$\frac{1}{N_t}\mathbf{h}_{0k}^{0H}(\beta_{0k}^0)^{-1}\mathbf{h}_{0k}^0\mathbf{h}_{0k}^{0H}\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \beta_{0k}^0N_t \qquad (29)$$
$$\frac{1}{N_t}\tilde{\mathbf{n}}_{eq}^H\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\tilde{\mathbf{n}}_{eq} \xrightarrow{N_t\to\infty} \tau N_0. \qquad (30)$$
Substituting (28)–(30) into (27), we have
$$\|\hat{\mathbf{h}}_{eq,0k}\|^2 \xrightarrow{N_t\to\infty} \frac{P_0}{(P_0\tau+N_0)^2}\Big[P_0\tau^2\beta_{0k}^0N_t+P_0\tau^2\beta_{0k}^0(K-1)+K\tau N_0\Big]. \qquad (31)$$
Next, we evaluate $\mathrm{var}\big(g_{0k,k}^0\big)$. First, we obtain
$$\mathbf{h}_{0k}^{0H}\mathbf{V}_{eq}^0\hat{\mathbf{h}}_{eq,0k}\hat{\mathbf{h}}_{eq,0k}^H\mathbf{V}_{eq}^{0H}\mathbf{h}_{0k}^0=\frac{P_0}{(P_0\tau+N_0)^2}\,\mathbf{h}_{0k}^{0H}\mathbf{V}_{eq}^0\big(P_0\tau\mathbf{h}_{eq,0k}+\mathbf{V}_{eq}^{0H}\tilde{\mathbf{n}}_{eq}\big)\big(P_0\tau\mathbf{h}_{eq,0k}+\mathbf{V}_{eq}^{0H}\tilde{\mathbf{n}}_{eq}\big)^H\mathbf{V}_{eq}^{0H}\mathbf{h}_{0k}^0 \qquad (32)$$
$$\xrightarrow{N_t\to\infty} \frac{P_0}{(P_0\tau+N_0)^2}\Big[\frac{P_0\tau^2}{N_t^2}\big(\mathbf{h}_{0k}^{0H}\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\mathbf{h}_{0k}^0\big)^2+\frac{1}{N_t^2}\,\mathbf{h}_{0k}^{0H}\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\tilde{\mathbf{n}}_{eq}\tilde{\mathbf{n}}_{eq}^H\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\mathbf{h}_{0k}^0\Big]. \qquad (33)$$
From (28) and (29), we have
$$\frac{1}{N_t}\mathbf{h}_{0k}^{0H}\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \beta_{0k}^0N_t+\beta_{0k}^0(K-1). \qquad (34)$$
Also, we have
$$\frac{1}{N_t^2}\,\mathbf{h}_{0k}^{0H}\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\tilde{\mathbf{n}}_{eq}\tilde{\mathbf{n}}_{eq}^H\mathbf{H}_0\mathbf{B}_0^{-1}\mathbf{H}_0^H\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \frac{\tau N_0}{N_t^2}\sum_{t=1}^{K}\sum_{p=1}^{K}\mathbf{h}_{0k}^{0H}\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\mathbf{h}_{0p}^0(\beta_{0p}^0)^{-1}\mathbf{h}_{0p}^{0H}\mathbf{h}_{0k}^0. \qquad (35)$$
When $k\neq t\neq p$, we have
$$\frac{1}{N_t^2}\mathbf{h}_{0k}^{0H}\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\mathbf{h}_{0p}^0(\beta_{0p}^0)^{-1}\mathbf{h}_{0p}^{0H}\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \frac{\beta_{0k}^0}{N_t}. \qquad (36)$$
When $k=t=p$, we have
$$\frac{1}{N_t^2}\mathbf{h}_{0k}^{0H}(\beta_{0k}^0)^{-1}\mathbf{h}_{0k}^0\mathbf{h}_{0k}^{0H}(\beta_{0k}^0)^{-1}\mathbf{h}_{0k}^0\mathbf{h}_{0k}^{0H}\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} N_t\beta_{0k}^0. \qquad (37)$$
When $k=t\neq p$, $k=p\neq t$, $k\neq t=p$, or $k\neq p=t$, we have
$$\frac{1}{N_t^2}\mathbf{h}_{0k}^{0H}\mathbf{h}_{0t}^0(\beta_{0t}^0)^{-1}\mathbf{h}_{0t}^{0H}\mathbf{h}_{0p}^0(\beta_{0p}^0)^{-1}\mathbf{h}_{0p}^{0H}\mathbf{h}_{0k}^0 \xrightarrow{N_t\to\infty} \beta_{0k}^0. \qquad (38)$$
Combining (31), (33)–(38), and the definition of $g_{0k,k}^0$, we have
$$\mathrm{var}\big(g_{0k,k}^0\big)=a_2-a_1 \qquad (39)$$
where $a_1$ and $a_2$ are defined in (13) and (14).
For $\mathrm{E}\big[|g_{0t,k}^0|^2\big]$ and $\mathrm{E}\big[|g_{lt,k}^0|^2\big]$, we have
$$\mathrm{E}\big[|g_{0t,k}^0|^2\big]=P\beta_{0k}^0 \qquad (40)$$
$$\mathrm{E}\big[|g_{lt,k}^0|^2\big]=P\beta_{lk}^0. \qquad (41)$$
For $C_{k,\mathrm{upper}}^{\mathrm{eve}}$ in (9), we know from (16) that when $N_t\to\infty$, $(\mathbf{V}_{eq}^0)^H\mathbf{H}_e^0\to\mathbf{0}$. Therefore, we have
$$C_{k,\mathrm{upper}}^{\mathrm{eve}} \xrightarrow{N_t\to\infty} 0. \qquad (42)$$
Substituting (31), (39), (40), (41), and (42) into (5) completes the proof.
REFERENCES
[1] A. D. Wyner, “The wiretap channel,” Bell Syst. Tech. J., vol. 54, pp.
1355–1387, Oct. 1975.
[2] F. Oggier and B. Hassibi, “The secrecy capacity of the MIMO wiretap
channel,” IEEE Trans. Inf. Theory, vol. 57, pp. 4961–4972, Aug. 2011.
[3] Y. Wu, C. Xiao, Z. Ding, X. Gao, and S. Jin, "Linear precoding for finite-alphabet signaling over MIMOME wiretap channels," IEEE Trans. Veh.
Technol., vol. 61, pp. 2599–2612, Jul. 2012.
[4] Y. Wu, J.-B. Wang, J. Wang, R. Schober, and C. Xiao, “Secure
transmission with large numbers of antennas and finite alphabet inputs,”
IEEE Trans. Commun., vol. 65, pp. 3614–3628, Aug. 2017.
[5] J. G. Andrews, S. Buzzi, W. Choi, S. Hanly, A. Lozano, A. C. K. Soong,
and J. C. Zhang, “What will 5G be?” IEEE J. Sel. Areas Commun.,
vol. 32, pp. 1065–1082, Jun. 2014.
[6] J. Zhu, R. Schober, and V. K. Bhargava, “Secure transmission in
multicell massive MIMO systems,” IEEE Trans. Wireless Commun.,
vol. 13, pp. 4766–4781, Sep. 2014.
[7] X. Chen, L. Lei, H. Zhang, and C. Yuen, “Large-scale MIMO relaying
techniques for physical layer security: AF or DF?” IEEE Trans. Wireless
Commun., vol. 14, pp. 5135–5146, Sep. 2015.
[8] J. Chen, X. Chen, W. H. Gerstacker, and D. W. K. Ng, “Resource
allocation for a massive MIMO relay aided secure communication,”
IEEE Trans. Inf. Forensics Security, vol. 11, pp. 1700–1711, Aug. 2016.
[9] J. Zhu, D. W. K. Ng, N. Wang, R. Schober, and V. K. Bhargava,
“Analysis and design of secure massive MIMO systems in the presence
of hardware impairments,” IEEE Trans. Wireless Commun., vol. 16, pp.
2001–2016, Mar. 2017.
[10] Y. Wu, R. Schober, D. W. K. Ng, C. Xiao, and G. Caire, “Secure
massive MIMO transmission with an active eavesdropper,” IEEE Trans.
Inf. Theory, vol. 62, pp. 3880–3900, Jul. 2016.
[11] S. Im, H. Jeon, J. Choi, and J. Ha, “Secret key agreement with
large antenna arrays under the pilot contamination attack,” IEEE Trans.
Wireless Commun., vol. 14, pp. 6579–6594, Dec. 2015.
[12] Y. O. Basciftci, C. E. Koksal, and A. Ashikhmin, “Securing massive MIMO at the physical layer,” [Online]. Available:
http://arxiv.org/abs/1505.00396.
[13] C.-K. Wen, Y. Wu, K.-K. Wong, R. Schober, and P. Ting, “Performance
limits of massive MIMO systems based on Bayes-optimal inference,”
in Proc. Int. Conf. Commun. (ICC’2015), London, UK, Jun. 2015, pp.
1783–1788.
[14] R. R. Müller, L. Cottatellucci, and M. Vehkaperä, “Blind pilot decontamination,” IEEE J. Sel. Topic Signal Process., vol. 8, pp. 773–786,
Oct. 2014.
[15] R. Couillet and M. Debbah, Random Matrix Methods for Wireless
Communications. Cambridge University Press, 2011.
[16] G. Geraci, M. Egan, J. Yuan, A. Razi, and I. Collings, "Secrecy sum-rates for multi-user MIMO regularized channel inversion precoding,"
IEEE Trans. Commun., vol. 60, pp. 3472–3482, Nov. 2012.
[17] J. Jose, A. Ashikhmin, T. L. Marzetta, and S. Vishwanath, “Pilot
contamination and precoding in multi-cell TDD systems,” IEEE Trans.
Wireless Commun., vol. 10, pp. 2640–2651, Aug. 2011.
[18] J. Evans and D. N. C. Tse, “Large system performance of linear
multiuser receivers in multipath fading channels,” IEEE Trans. Inf.
Theory, vol. 46, pp. 2059–2078, Sep. 2000.
| 7 |
Multi-scale Deep Learning Architectures for Person Re-identification
arXiv:1709.05165v1 [] 15 Sep 2017
Xuelin Qian1 Yanwei Fu2,5, * Yu-Gang Jiang1,3 Tao Xiang4 Xiangyang Xue1,2
1
Shanghai Key Lab of Intelligent Info. Processing, School of Computer Science, Fudan University;
2
School of Data Science, Fudan University; 3 Tencent AI Lab;
4
Queen Mary University of London; 5 University of Technology Sydney;
{15110240002,yanweifu,ygj,xyxue}@fudan.edu.cn; [email protected]
Abstract
Person Re-identification (re-id) aims to match people
across non-overlapping camera views in a public space. It
is a challenging problem because many people captured in
surveillance videos wear similar clothes. Consequently, the
differences in their appearance are often subtle and only detectable at the right location and scales. Existing re-id models, particularly the recently proposed deep learning based
ones match people at a single scale. In contrast, in this paper, a novel multi-scale deep learning model is proposed.
Our model is able to learn deep discriminative feature representations at different scales and automatically determine
the most suitable scales for matching. The importance of
different spatial locations for extracting discriminative features is also learned explicitly. Experiments are carried
out to demonstrate that the proposed model outperforms the
state-of-the art on a number of benchmarks.
1. Introduction
Person re-identification (re-id) is defined as the task of
matching two pedestrian images crossing non-overlapping
camera views [11]. It plays an important role in a number of applications in video surveillance, including multicamera tracking [2, 41], crowd counting [3, 10], and multicamera activity analysis [54, 53]. Person re-id is extremely
challenging and remains unsolved for a number of reasons.
First, in different camera views, one person’s appearance
often changes dramatically caused by the variances in body
pose, camera viewpoints, occlusion and illumination conditions. Second, in a public space, many people often wear
very similar clothes (e.g., dark coats in winter). The differences that can be used to tell them apart are often subtle,
which could be the global, e.g., one person is bulkier than
the other, or local, e.g., the two people wear different shoes.
Early re-id methods use hand-crafted features for per* Corresponding
Author
(a) The cognitive process a human may take to match
people
(b) Our model aims to imitate the human cognitive process
Figure 1. Multi-scale learning is adopted by our MuDeep to learn
discriminative features at different spatial scales and locations.
son appearance representation and employ distance metric
learning models as matching functions. They focus on either designing cross-view robust features [7, 13, 26, 62, 32],
or learning robust distance metrics [31, 33, 63, 25, 49,
60, 38, 32], or both [24, 32, 57, 59]. Recently, inspired
by the success of convolutional neural networks (CNN) in
many computer vision problems, deep CNN architectures
[1, 48, 27, 51, 44, 50, 4] have been widely used for person re-id. Using a deep model, the tasks of feature representation learning and distance metric learning are tackled
jointly in a single end-to-end model. The state-of-the-art reid models are mostly based on deep learning; deep re-id is
thus the focus of this paper.
Learning discriminative feature representation is the key
objective of a deep re-id model. These features need to be
computed at multiple scales. More specifically, some people can be easily distinguished by some global features such
as gender and body build, whilst for some others, detecting
local images patches corresponding to, say a handbag of a
particular color or the type of shoes, would be critical for
distinguishing two otherwise very similarly-looking people. The optimal matching results are thus only obtainable
when features at different scales are computed and combined. Such a multi-scale matching process is likely also
adopted by most humans when it comes to re-id. In particular, humans typically compare two images from coarse to
fine. Taking the two images in Fig. 1(a) as an example. At
the coarse level, the color and textual information of clothes
are very similar; humans would thus go down to finer scales
to notice the subtle local differences (e.g. the hairstyle, shoe,
and white stripes on the jacket of the person on the left) to
reach the conclusion that these are two different people.
However, most existing re-id models compute features
at a single scale and ignore the factor that people are often only distinguishable at the right spatial locations and
scales. Existing models typically adopt multi-branch deep
convolutional neural networks (CNNs). Each domain has a
corresponding branch which consists of multiple convolutional/pooling layers followed by fully connected (FC) layers. The final FC layer is used as input to pairwise verification or triplet ranking losses to learn a joint embedding space where people’s appearance from different camera views can be compared. However, recent efforts [9, 39]
on visualizing what each layer of a CNN actually learns reveal that higher-layers of the network capture more abstract
semantic concepts at global scales with less spatial information. When it reaches the FC layers, the information at finer
and local scales has been lost and cannot be recovered. This
means that the existing deep re-id architectures are unsuitable for the multi-scale person matching.
In this work, we propose a novel multi-scale deep learning model (MuDeep) for re-id which aims to learn discriminative feature representations at multiple scales with automatically determined scale weighting for combining them
(see Fig. 1(b)). More specifically, our MuDeep network
architecture is based on a Siamese network but critically
has the ability to learn features at different scales and evaluating their importance for cross-camera matching. This
is achieved by introducing two novel layers: multi-scale
stream layers that extract images features by analyzing the
person images in multi-scale; and saliency-based learning fusion layer, which selectively learns to fuse the data
streams of multi-scale and generate the more discriminative features of each branch in MuDeep. The multi-scale
data can implicitly serve as a way of augmenting the training data. In addition to the verification loss used by many
previous deep re-id models, we introduce a pair of classification losses at the middle layers of our network, in order
to strongly supervise multi-scale features learning.
2. Related Work
Deep re-id models Various deep learning architectures
have been proposed to either address visual variances of
pose and viewpoint [27], learn better relative distances of
triplet training samples [6], or learn better similarity metrics of any pairs [1]. To have enough training samples, [48]
built upon inception module a single deep network and is
trained on multiple datasets; to address the specific person
re-id task, the neural network will be adapted to a single
dataset by a domain guided dropout algorithm. More recently, an extension of the siamese network has been studied
for person re-id [50]. Pairwise and triplet comparison objectives have been utilized to combine several sub-networks
to form a network for person re-id in [51]. Similarly, [4]
employed triplet loss to integrate multi-channel parts-based
CNN models. To resolve the problem of large variations,
[44] proposed a moderate positive sample mining method
to train CNN. However, none of the models developed is
capable of multi-scale feature computation as our model.
More specifically, the proposed deep re-id model differs from related existing models in several aspects. (1)
our MuDeep generalizes the convolutional layers with
multi-scale strategy and proposed multi-scale stream layers and saliency-based learning fusion layer, which is different from the ideas of combing multiple sub-networks
[51] or channels [4] with pairwise or triplet loss. (2)
Comparing with [48], our MuDeep are simplified, refined and flexible enough to be trained from scratch
on either large-scale dataset (e.g. CUHK03) or mediumsized dataset (e.g. CUHK01). Our experiments show
that without using any extra data, the performance of our
MuDeep is 12.41%/4.27% higher than that of [48] on
CUHK01/CUHK03 dataset. (3) We improve the architecture of [1] by introducing two novel layers to implement
multi-scale and saliency-based learning mechanisms. Our
experiment results validate that the novel layers lead to
much better performance than [1].
Multi-scale re-id
The idea of multi-scale learning for
re-id was first exploited in [29]. However, the definition of
scale is different: It was defined as different levels of resolution rather than the global-to-local supporting region as
in ours. Therefore, despite similarity between terminology,
very different problems are tackled in these two works. The
only multi-scale deep re-id work that we are aware of is
[36]. Compared to our model, the model in [36] is rather
primitive and naive: Different down-sampled versions of
the input image are fed into shallower sub-networks to extract features at different resolution and scale. These subnetworks are combined with a deeper main network for feature fusion. With an explicit network for each scale, this
network becomes computationally very expensive. In addition, no scale weighting can be learned automatically and
no spatial importance of features can be modeled as in ours.
Deep saliency modelling Visual saliency has been studied extensively [19, 20]. It is typically defined in a bottomup process. In contrast, attention mechanism [5] works
Figure 2. Overview of MuDeep architecture.
Layers        | Stream id | number@size                                                                     | output
Multi-scale-A | 1         | 1@3 × 3 × 96 AF – 24@1 × 1 × 96 CF                                              | 78 × 28 × 96
              | 2         | 24@1 × 1 × 96 CF                                                                |
              | 3         | 16@1 × 1 × 96 CF – 24@3 × 3 × 96 CF                                             |
              | 4         | 16@1 × 1 × 96 CF – 24@3 × 3 × 96 CF – 24@3 × 3 × 24 CF                          |
Reduction     | 1         | 1@3 × 3 × 96 MF*                                                                | 39 × 14 × 256
              | 2         | 96@3 × 3 × 96 CF*                                                               |
              | 3         | 48@1 × 1 × 96 CF – 56@3 × 3 × 48 CF – 64@3 × 3 × 56 CF*                         |
Multi-scale-B | 1         | 256@1 × 1 × 256 CF                                                              | 39 × 14 × 256
              | 2         | 64@1 × 1 × 256 CF – 128@1 × 3 × 64 CF – 256@3 × 1 × 128 CF                      |
              | 3         | 64@1 × 1 × 256 CF – 64@1 × 3 × 64 CF – 128@3 × 1 × 64 CF – 128@1 × 3 × 128 CF – 256@3 × 1 × 128 CF |
              | 4         | 1@3 × 3 × 256 AF* – 256@1 × 1 × 256 CF                                          |
Table 1. The parameters of tied multi-scale stream layers of MuDeep. Note that (1) number@size indicates the number and the size of
filters. (2) * means the stride of corresponding filters is 2; the stride of other filters is 1. We add 1 padding to the side of input data stream
if the corresponding side of C-filters is 3. (3) CF, AF, MF indicate the C-filters, A-filters and M-filters respectively. A-filter is the average
pooling filter.
in a top-down way and allows for salient features to dynamically come to the front as needed. Recently, deep
soft attention modeling has received increasing interest as
a means to attend to/focus on local salient regions for computing deep features [43, 42, 58, 35]. In this work, we use
saliency-based learning strategy in a saliency-based learning fusion layer to exploit both visual saliency and attention mechanism. Specifically, with the multi-scale stream
layers, the saliency features of multiple scales are computed in multi-channel (e.g. in a bottom-up way); and a per
channel weighting layer is introduced to automatically discover the most discriminative feature channels with their
associated scale and locations. Comparing with [35] which
adopts a spatial attention model, our model is much compact and can be learned from scratch on a small re-id
dataset. When comparing the two models, our model, despite being much smaller, yields overall slightly better performance: on CUHK-01 dataset ours is 8% lower than that
of [35] but on the more challenging CUHK-03(detected) we
got around 10% improvement over that of [35]. Such a simple saliency learning architecture is shown to be very effective in our experiments.
Our contributions are as follows: (1) A novel multi-scale
representation learning architecture is introduced into the
deep learning architectures for person re-id tasks. (2) We
propose a saliency-based learning fusion layer which can
learn to weight important scales in the data streams in a
saliency-based learning strategy. We evaluate our model on
a number of benchmark datasets, and the experiments show
that our models can outperform state-of-the-art deep re-id
models, often by a significant margin.
3. Multi-scale Deep Architecture (MuDeep)
Problem Definition. Typically, person re-id is formulated
only as a verification task [24, 32, 57, 59]. In contrast, this
paper formulates person re-id into two tasks: classification
[55, 56] and verification [1, 48]. Specifically, given a pair
of person images, our framework will categorize them (1)
either as the “same person” or “different persons” class, and
(2) predict the person’s identity.
Architecture Overview. As shown in Fig. 2, MuDeep has
two branches to process each of image pairs. It consists
of five components: tied convolutional layers, multi-scale
stream layers (Sec. 3.1), saliency-based learning fusion
layer (Sec. 3.2), verification subnet and classification subnet (Sec. 3.3). Note that after each convolutional layer or
fully connected layer, batch normalization [18] is used before the ReLU activation.
Preprocessing by convolutional layers. The input pairs
are firstly pre-processed by two convolutional layers with
the filters (C-filters) of 48@3 × 3 × 3 and 96@3 × 3 × 48;
furthermore, the generated feature maps are fed into a maxpooling layer with filter size (M-filter) as 1@3 × 3 × 96 to
reduce both length and width by half. The weights of these
layers are tied across two branches, in order to enforce the
filters to learn the visual patterns shared by both branches.
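A minimal sketch of this shared preprocessing stem is given below, assuming 3-channel inputs of spatial size 160 x 60 (height x width). Layer names and the exact padding/stride choices are our assumptions; the paper reports a 78 x 28 map entering Multi-scale-A, so the precise padding used there may differ slightly.

```python
import torch
import torch.nn as nn

class Stem(nn.Module):
    """Shared preprocessing: two 3x3 conv blocks followed by 3x3 stride-2 max pooling."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 48, kernel_size=3, padding=1),        # 48@3x3x3 C-filters
            nn.BatchNorm2d(48), nn.ReLU(inplace=True),
            nn.Conv2d(48, 96, kernel_size=3, padding=1),       # 96@3x3x48 C-filters
            nn.BatchNorm2d(96), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),  # roughly halves H and W
        )

    def forward(self, x):           # x: (B, 3, 160, 60)
        return self.net(x)          # (B, 96, 80, 30) with these (assumed) padding choices
```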
3.1. Multi-scale stream layers
We propose multi-scale stream layers to analyze data
streams in multi-scale. The multi-scale data can implicitly serve as a way of augmenting the training data. Different from the standard Inception structure [47], all of these
layers share weights between the corresponding stream of
two branches; however, within each two data streams of the
same branch, the parameters are not tied. The parameters
of these layers are shown in Tab. 1; and please refer to Supplementary Material for the visualization of these layers.
Multi-scale-A layer analyses the data stream with the size
1 × 1, 3 × 3 and 5 × 5 of the receptive field. Furthermore,
in order to increase both depth and width of this layer, we
split the filter size of 5 × 5 into two 3 × 3 streams cascaded
(i.e. stream-4 and stream-3 in Table 1). The weights of each
stream are also tied with the corresponding stream in another branch. Such a design is in general inspired by, and
yet different from Inception architectures [46, 47, 45]. The
key difference lies in the factors that the weights are not tied
between any two streams from the same branch, but are tied
between two corresponding streams of different branches.
Reduction layer further passes the data streams in multiscale, and halves the width and height of feature maps,
which should be, in principle, reduced from 78 × 28 to
39 × 14. We thus employ Reduction layer to gradually decrease the size of feature representations as illustrated in Table 1, in order to avoid representation bottlenecks. Here we
follow the design principle of “avoid representational bottlenecks” [47]. In contrast to directly use max-pooling layer
for decreasing feature map size, our ablation study shows
that the Reduction layer, if replaced by max-pooling layer,
would lead to results more than 10 absolute points lower than the
reported results of Rank-1 accuracy on the CUHK01 dataset
[28]. Again, the weights of each filter here are tied for
paired streams.
Multi-scale-B layer serves as the last stage of high-level
features extraction for the multiple scales of 1 × 1, 3 × 3
and 5 × 5. Besides splitting the 5 × 5 stream into two 3 × 3
streams cascaded (i.e. stream-4 and stream-3 in Table 1).
We can further decompose the 3 × 3 C-filters into one 1 × 3
C-filter followed by 3 × 1 C-filter [45]. This leads to several
benefits, including reducing the computation cost on 3 ×
3 C-filters, further increasing the depth of this component,
and being capable of extracting asymmetric features from
the receptive field. We still tie the weights of each filter.
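The sketch below illustrates one multi-scale stream layer in the style of Table 1 (Multi-scale-A): four parallel streams with 1x1, 3x3 and factorized 5x5 (two cascaded 3x3) receptive fields plus an average-pooling stream, concatenated along the channel axis. The exact mapping between stream ids and branches is our reading of the flattened table, so treat it as an assumption.

```python
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch, k, stride=1, padding=0):
    """C-filter block: convolution + batch normalization + ReLU, as used throughout the paper."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, stride=stride, padding=padding),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class MultiScaleA(nn.Module):
    """Sketch of the Multi-scale-A layer; outputs 4 x 24 = 96 channels at the input resolution."""
    def __init__(self, in_ch=96):
        super().__init__()
        self.s1 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                conv_bn(in_ch, 24, 1))                   # average pool -> 1x1
        self.s2 = conv_bn(in_ch, 24, 1)                                  # 1x1 receptive field
        self.s3 = nn.Sequential(conv_bn(in_ch, 16, 1),
                                conv_bn(16, 24, 3, padding=1))           # 1x1 -> 3x3
        self.s4 = nn.Sequential(conv_bn(in_ch, 16, 1),
                                conv_bn(16, 24, 3, padding=1),
                                conv_bn(24, 24, 3, padding=1))           # factorized 5x5

    def forward(self, x):
        return torch.cat([self.s1(x), self.s2(x), self.s3(x), self.s4(x)], dim=1)
```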
3.2. Saliency-based learning fusion layer
This layer is proposed to fuse the outputs of multi-scale
stream layers. Intuitively, with the output processed by previous layers, the resulting data channels have redundant information: Some channels may capture relative important
information of persons, whilst others may only model the
background context. The saliency-based learning strategy
is thus utilized here to automatically discover and emphasize the channels that had extracted highly discriminative
patterns, such as the information of head, body, arms, clothing, bags and so on, as illustrated in Fig. 3. Thus, we assume Fi? represents the input feature maps of i−th stream
(1 ≤ i ≤ 4) in each branch and Fij represents the j−th
channel of Fi? , i.e. (1 ≤ j ≤ 256) and Fij ∈ R39×14 .
The output feature maps denoted as G will fuse the four
streams; Gj represents the j−th channel map of G, which
is computed by:
$$G_j=\sum_{i=1}^{4}F_{ij}\cdot\alpha_{ij},\quad (1\le j\le 256) \qquad (1)$$
where αij is the scalar for j−th channel of Fi? ; and the
saliency-weighted vector αi? is learned to account for the
importance of each channel of stream Fi? ; αi? is also tied.
A fully connected layer is appended after saliency-based
learning fusion layer, which extracts features of 4096dimensions of each image. The idea of this design is 1) to
concentrate the saliency-based learned features and reduce
dimensions, and 2) to increase the efficiency of testing.
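A minimal sketch of the saliency-based learning fusion of (1) is shown below: each of the four streams carries a learnable per-channel weight alpha_ij, and the fused map is the weighted sum over streams. Shapes follow Table 1 (256 channels of size 39 x 14); the uniform initialization is our assumption.

```python
import torch
import torch.nn as nn

class SaliencyFusion(nn.Module):
    """Channel-wise weighted fusion of multiple data streams, cf. Eq. (1)."""
    def __init__(self, num_streams=4, channels=256):
        super().__init__()
        # alpha[i, j] weights channel j of stream i; initialized uniformly (assumption).
        self.alpha = nn.Parameter(torch.full((num_streams, channels), 1.0 / num_streams))

    def forward(self, streams):
        # streams: list of num_streams tensors, each of shape (B, C, H, W)
        fused = 0
        for i, F_i in enumerate(streams):
            fused = fused + F_i * self.alpha[i].view(1, -1, 1, 1)   # G_j = sum_i alpha_ij * F_ij
        return fused
```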
3.3. Subnets for person Re-id
Verification subnet accepts feature pairs extracted by
previous layers as input, and calculate distance with feature difference layer, which followed by a fully connected
layer of 512 neurons with 2 softmax outputs. The output indicates the probability of ”same person” or ”different persons”. Feature difference layer is employed here
to fuse the features of two branches and compute the distance between two images. We denote the output features
of two branches as G1 and G2 respectively. The feature
the difference D as D =
1 difference
layer computes
G − G2 . ∗ G1 − G2 . Note that (1) ‘.∗’ indicates the
element-wise multiplication; the idea behind using elementwise subtraction is that if an input image pair is labelled
”same person”, the features generated by multi-scale stream
layers and saliency-based learning fusion layers should be
similar; in other words, the output values of feature difference layer should be close to zero; otherwise, the values
have different responses. (2) We empirically compare the
performance
layer
operations
including
1
of two difference
G − G2 . ∗ G1 − G2 and G1 − G2 . Our experiment shows that the former achieves 2.2% higher performance than the latter on Rank-1 accuracy on CUHK01.
Classification subnet In order to learn strong discriminative features for appearance representation, we add classification subnet following saliency-based learning fusion
layers of each branch. The classification subnet is learned
to classify images with different pedestrian identities. After extracting 4096-D features in saliency-based learning fusion layers, a softmax with N output neurons are connected,
where N denotes the number of pedestrian identities.
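The two subnets can be sketched as follows: the verification head consumes the squared element-wise feature difference D = (G1 − G2) .* (G1 − G2), and the classification head predicts the identity from each branch's 4096-D feature. The feature dimension and identity count are placeholders for whatever the dataset provides.

```python
import torch
import torch.nn as nn

class ReIdHeads(nn.Module):
    """Verification and classification subnets on top of the two branch features."""
    def __init__(self, feat_dim=4096, num_ids=1000):   # num_ids: number of training identities
        super().__init__()
        self.verif = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
                                   nn.Linear(512, 2))   # "same person" / "different persons"
        self.classif = nn.Linear(feat_dim, num_ids)      # identity softmax, shared by both branches

    def forward(self, g1, g2):
        d = (g1 - g2) * (g1 - g2)                        # feature difference layer
        return self.verif(d), self.classif(g1), self.classif(g2)
```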
4. Experiments
4.1. Datasets and settings
Datasets. The proposed method is evaluated on three
widely used datasets, i.e. CUHK03 [27], CUHK01 [28] and
VIPeR [12]. The CUHK03 dataset includes 14, 096 images of 1, 467 pedestrians, captured by six camera views.
Each person has 4.8 images on average. Two types of person images are provided [27]: manually labelled pedestrian
bounding boxes (labelled) and bounding boxes automatically detected by the deformable-part-model detector [8]
(detected). The manually labelled images generally are of
higher quality than those detected images. We use the settings of both manually labelled and automatically detected
person images on the standard splits in [27] and report the
results in Sec. 4.2 and Sec. 4.3 respectively. CUHK01
dataset has 971 identities with 2 images per person of each
camera view. As in [28], we use as probe the images from
camera A and take those from camera B as gallery. Out of
all data, we select randomly 100 identities as the test set.
The remaining identities for training and validation. The
experiments are repeated over 10 trials. For all the experiments, we train our models from the scratch. VIPeR has 632
pedestrian pairs in two views with only one image per person of each view. We split the dataset and half of pedestrian
pairs for training and the left for testing as in [1] over 10 trials. In addition, we also evaluate proposed method on two
video-based re-id datasets, i.e., iLIDS-VID dataset [52] and
PRID-2011 dataset [15]. The iLIDS-VID dataset contains
300 persons, which are captured by two non-overlapping
cameras. The sequences range in length from 23 to 192
frames, with an average number of 73. The PRID-2011
dataset contains 385 persons for camera view A; 749 persons for camera view B, with sequences lengths of 5 to 675
frames. These two camera views are non-overlapping.
Since the primary focus of this paper is on image-based person re-id, we employ the simplest feature fusion scheme for
video re-id: Given a video sequence, we compute features
of each frame which are aggregated by max-pooling to form
video level representation. In contrast, most of the state-ofthe-art video-based re-id methods [40, 37, 52, 23, 30, 22]
utilized the RNN models such as LSTM to perform temporal/sequence video feature fusion from each frame.
Experimental settings. On the CUHK03 dataset, in term
of training set used, we introduce two specific experimental settings; and we report the results for both settings: (a)
Jointly: as in [48], under this setting the model is firstly
trained with the image set of both labelled and detected
CUHK03 images, and for each task, the corresponding image set is used to fine-tune the pre-trained networks. (b) Exclusively: for each of the “labelled” and “detected” tasks,
we only use the training data from each task without using
the training data of the other task.
Implementation details. We implement our model based
on the Caffe framework [21] and we make our own implementation for the proposed layers. We follow the training strategy used in [1] to first train the network without
classification subnets; secondly, we add the classification
subnets and freeze other weights to learn better initialization of the identity classifier; finally we train classification
loss and verification loss simultaneously, with a higher loss
weight of the former. The training data include positive and
negative pedestrian pairs. We augment the data to increase
the training set size by 5 times with the method of random
2D translation as in [27]. The negative pairs are randomly
sampled as twice the number of positive pairs. We use the
stochastic gradient descent algorithm with the mini-batch
size of 32. The learning rate is set as 0.001, and gradually
decreased by 1/10 every 50000 iterations. The size of input
image pairs is1 60×160×3. Unless specified otherwise, the
dropout ratio is set to 0.3. The proposed MuDeep converges in 9–12 hours on a re-id dataset on an NVIDIA TITAN X GPU. Our MuDeep needs around 7 GB of GPU memory. Code and models will be made available on the first
author’s webpage.
Competitors. We compare with the deep learning based
methods including DeepReID [27], Imp-Deep [1], En-Deep
[55], and G-Dropout [48], Gated Sia [50], EMD [44], SI-CI
[51], and MSTC2 [36], as well as other non-deep competitors, such as Mid-Filter [62], and XQDA [32], LADF [31],
1 To make a fair comparision with [1], the input images are resized to
60 × 160 × 3.
2 We re-implement [36] for evaluation purpose.
eSDC [61], LMNN [16], and LDM [14].
Evaluation metrics. In term of standard evaluation metrics,
we report the Rank-1, Rank-5 and Rank-10 accuracy with
single-shot setting in our paper. For more detailed results
using Cumulative Matching Characteristics (CMC) curves,
please refer to the Supplementary Material.
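For reference, single-shot Rank-k accuracy can be computed as in the sketch below: for each probe, rank the gallery by feature distance and check whether the true identity appears within the top k. Feature extraction itself is outside this snippet, and Euclidean distance is an assumption.

```python
import numpy as np

def cmc_rank(probe_feats, probe_ids, gallery_feats, gallery_ids, ks=(1, 5, 10)):
    """Return Rank-k accuracies for the given probe/gallery features and identity labels."""
    dists = np.linalg.norm(probe_feats[:, None, :] - gallery_feats[None, :, :], axis=2)
    hits = {k: 0 for k in ks}
    for i, pid in enumerate(probe_ids):
        order = np.argsort(dists[i])                      # gallery sorted by distance
        ranked_ids = np.asarray(gallery_ids)[order]
        first_hit = np.where(ranked_ids == pid)[0][0]     # position of the true match
        for k in ks:
            hits[k] += int(first_hit < k)
    return {k: hits[k] / len(probe_ids) for k in ks}
```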
4.2. Results on CUHK03-Detected
On the CUHK03-Detected dataset, our results are compared with the state-of-the-art methods in Table 2.
Firstly and most importantly, our best results – MuDeep
(jointly) outperforms all the other baselines at all ranks. Particularly, we notice that our results are significantly better
than both the methods of using hand-crafted features and
the recent deep learned models. This validates the efficacy
of our architectures and suggests that the proposed multiscale and saliency-based learning fusion layer can help extract discriminative features for person re-id.
Secondly, comparing with the Gated Sia [50] which is
an extension of Siamese network with the gating function
to selectively emphasize fine common local patterns from
data, our result is 7.54% higher at Rank-1 accuracy. This
suggests that our framework can better analyze the multiscale patterns from data than Gated Sia [50], again thanks
to the novel multi-scale stream layers and saliency-based
learning fusion layers.
Finally, we further compare our results on both “Jointly”
and “Exclusively” settings. The key difference is that in
the “jointly” settings, the models are also trained with the
data of CUHK03-Labelled, i.e. the images with manually
labelled pedestrian bounding boxes. As explained in [27],
the quality of labelled images is generally better than those
of detected images. Thus, with more data of higher quality used for training, our model under the “Jointly” setting does beat our model under the “Exclusively” setting. However, the margins between these two settings at all ranks are very small compared with the margin between our results and those of the other methods. This suggests that our multi-scale stream layers efficiently explore and augment the training data, and with such multi-scale information we can train our model better. Thanks to the multi-scale stream layers, MuDeep can still achieve good results with less training data, i.e., under the “Exclusively” setting.
4.3. Results on CUHK03-Labelled
The results on the CUHK03-Labelled dataset are shown in Table 3, and we can make the following observations.
Firstly, in this setting, our MuDeep still outperforms the
other competitors by clear margins on all the rank accuracies. Our result is 4.27% higher than the second best method, G-Dropout [48]. Note that G-Dropout adopts domain-guided dropout strategies and uses much more
Method                Rank-1  Rank-5  Rank-10
SDALF [7]              4.87   21.17   35.06
eSDC [61]              7.68   21.86   34.96
LMNN [16]              6.25   18.68   29.07
XQDA [32]             46.25   78.90   88.55
LDM [14]              10.92   32.25   48.78
DeepReid [27]         19.89   50.00   64.00
MSTC [36]             55.01     –       –
Imp-Deep [1]          44.96   76.01   83.47
SI-CI [51]            52.17   84.30   92.30
Gated Sia [50]        68.10   88.10   94.60
EMD [44]              52.09   82.87   91.78
MuDeep (Jointly)      75.64   94.36   97.46
MuDeep (Exclusively)  75.34   94.31   97.40
Table 2. Results of the CUHK03-Detected dataset (Rank-1/5/10 accuracy, %).
Method                Rank-1  Rank-5  Rank-10
SDALF [7]              5.60   23.45   36.09
eSDC [61]              8.76   24.07   38.28
LMNN [16]              7.29   21.00   32.06
XQDA [32]             52.20   82.23   92.14
LDM [14]              13.51   40.73   52.13
DeepReid [27]         20.65   51.50   66.50
Imp-Deep [1]          54.74   86.50   93.88
G-Dropout [48]        72.60   92.30*  94.30*
EMD [44]              61.32   88.90   96.44
MuDeep (Jointly)      76.87   96.12   98.41
MuDeep (Exclusively)  76.34   95.96   98.40
Table 3. Results of the CUHK03-Labelled dataset (Rank-1/5/10 accuracy, %). Note: * denotes results reproduced from [48] with the model trained only on the CUHK03 dataset.
training data in this task. This further validates that our multi-scale stream layers can augment the training data and exploit more information from a medium-scale dataset, rather than relying on scaling up the size of the training set, and that the saliency-based learning fusion layers can better fuse the multi-scale information used for person re-id.
Secondly, we can draw a similar conclusion as from the CUHK03-Detected results: our MuDeep under the “Jointly” setting is only marginally better than under the “Exclusively” setting. This confirms that more related training data helps improve the performance of a deep learning model, and it also validates that our multi-scale stream layers and saliency-based fusion layers help extract and fuse multi-scale information and thus cope well with less training data.
4.4. Results on CUHK01 and VIPeR
CUHK01 dataset. Our MuDeep is trained only on the CUHK01 dataset without using any extra data. As listed in Table 4, our approach obtains 79.01% Rank-1 accuracy, which beats all the state-of-the-art methods and is 7.21% higher than the second best method [51]. This further shows the
advantages of our framework.
VIPeR dataset. This dataset is extremely challenging due to its small size and low resolution. In particular, it has a relatively small number of distinct identities, and thus far fewer positive pairs per identity than the other two datasets. For this dataset, our network is therefore initialized with the model pre-trained on the CUHK03-Labelled dataset under the “Jointly” setting. The training set of the VIPeR dataset is used to fine-tune the pre-trained network. We still use the same network structure, except that the number of neurons in the last layer of the classification subnet is changed. The results on the VIPeR dataset are listed in Table 5. Our MuDeep remains competitive and outperforms all compared methods.
Qualitative visualization. We give some qualitative results by visualizing the saliency-based learning maps on CUHK01, computed from the saliency-based learning fusion layer, in Fig. 3. Given one input pair of images (left of Fig. 3), for each branch the saliency-based learning fusion layer combines four data streams into a single data stream, with weights learned from the saliency of each data stream. The heatmaps of three channels, visualized for each stream and each branch, are shown on the right side of Fig. 3. Each row corresponds to one data stream, and each column to one channel of the feature heatmaps. The weight α (in Eq. (1)) for each feature channel is learned and updated as the whole network is trained. The three channels illustrated in Fig. 3 respond strongly (i.e., with high values) on discriminative local patterns of the input image pair. For example, the first and third columns highlight differences in clothes and body, owing to the different visual patterns learned by our multi-scale stream layers; these two channels therefore have relatively higher α weights. The second column models background patterns, which are less discriminative for person re-id, and so receives a lower α value. Our saliency-based learning fusion layer can automatically learn the optimal α weights from the training data.
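A minimal sketch of the channel-wise weighted fusion described above, written in PyTorch for readability (the paper's own implementation is in Caffe). It assumes that the fusion layer simply sums the S data streams with a learned per-stream, per-channel weight α; the exact form of Eq. (1) may differ.

```python
import torch
import torch.nn as nn

class SaliencyFusion(nn.Module):
    """Fuse S feature streams into one with learned per-stream, per-channel weights."""
    def __init__(self, num_streams: int, num_channels: int):
        super().__init__()
        # alpha[s, c]: learned weight for channel c of stream s (cf. Eq. (1)).
        self.alpha = nn.Parameter(torch.ones(num_streams, num_channels))

    def forward(self, streams):
        # streams: list of S tensors, each of shape (N, C, H, W)
        x = torch.stack(streams, dim=0)           # (S, N, C, H, W)
        w = self.alpha[:, None, :, None, None]    # broadcast over batch and space
        return (w * x).sum(dim=0)                 # fused feature map (N, C, H, W)
```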
Method           Rank-1  Rank-5  Rank-10
KISSME [34, 25]  29.40   60.18   74.44
SDALF [7]         9.90   41.21   56.00
eSDC [61]        22.84   43.89   57.67
LMNN [16]        21.17   49.67   62.47
LDM [14]         26.45   57.68   72.04
DeepReid [27]    27.87   58.20   73.46
G-Dropout [48]   66.60     –       –
MSTC [36]        64.12     –       –
Imp-Deep [1]     65.00   88.70   93.12
SI-CI [51]       71.80   91.35*  95.23*
EMD [44]         69.38   91.03   96.84
MuDeep           79.01   97.00   98.96
Table 4. Results of the CUHK01 dataset (Rank-1/5/10 accuracy, %). *: reported from the CMC curves in [51].
Method           Rank-1  Rank-5  Rank-10
kCCA [34]        30.16   62.69   76.04
Mid-Filter [62]  29.11   52.34   65.95
RPLM [17]        27.00   55.30   69.00
MtMCML [38]      28.83   59.34   75.82
LADF [31]        30.22   64.70   78.92
XQDA [32]        40.00   68.13   80.51
Imp-Deep [1]     34.81   63.61   75.63
G-Dropout [48]   37.70     –       –
MSTC [36]        31.24     –       –
SI-CI [51]       35.76   67.40   83.50
Gated Sia [50]   37.90   66.90   76.30
EMD [44]         40.91   67.41   79.11
MuDeep           43.03   74.36   85.76
Table 5. Results on the VIPeR dataset (Rank-1/5/10 accuracy, %).
4.5. Ablation study
Multi-scale stream layers. We compare our multi-scale
stream layers with three variants of Inception-v4 [45] on
the CUHK01 dataset. Specifically, Inception-v4 has Inception A and Inception B modules, both of which we compare against here. Furthermore, we also compare the Inception A+B structure, which is constructed by connecting the Inception A, Reduction, and Inception B modules. The Inception A+B structure is the one most similar to our multi-scale stream layers, except that (1) we modify some parameters, and (2) the weights of our layers are tied between the corresponding streams of the two branches.
Figure 3. Saliency map of G in Eq. (1).
Such a weight-tying strategy enforces each paired stream of our two branches to extract common patterns. With each image pair as input, we use Inception A, Inception B, and Inception A+B as the base network structure to predict whether the pair is the “same person” or “different persons”. The results are compared in Table 6. We can see that our MuDeep architecture has the best performance among these baselines. This shows that MuDeep is more powerful at learning discriminative patterns than the Inception variants, since the multi-scale stream layers more effectively extract multi-scale information and the saliency-based learning fusion layer facilitates the automatic selection of important feature channels.
Saliency-based learning fusion layer and classification subnet. To further investigate the contributions of the fusion layer and the classification subnet, we compare three variants of our model, with one or both of the two components removed, on the CUHK01 dataset. In Table 7, “– Fusion” denotes MuDeep without the fusion layer, “– ClassNet” denotes MuDeep without the classification subnet, and “– Fusion – ClassNet” denotes MuDeep with neither the fusion layer nor the classification subnet. The results in Table 7 show that our full model performs best among the variants. We thus conclude that both components help, and that combining the two further boosts performance.
Method          Rank-1  Rank-5  Rank-10
Inception A     60.11   85.30   92.44
Inception B     67.31   92.71   97.43
Inception A+B   72.11   91.90   96.45
MuDeep          79.01   97.00   98.96
Table 6. Results of comparing with different Inception models on the CUHK01 dataset (Rank-1/5/10 accuracy, %).
Method               Rank-1  Rank-5  Rank-10
– Fusion             77.88   96.81   98.21
– ClassNet           76.21   94.47   98.41
– Fusion – ClassNet  74.21   92.10   97.63
MuDeep               79.01   97.00   98.96
Table 7. Results of comparing with the variants of MuDeep on the CUHK01 dataset (Rank-1/5/10 accuracy, %). Note that “– Fusion” means that the saliency-based learning fusion layer is not used; “– ClassNet” indicates that the classification subnet is not used in the corresponding structure.
              PRID-2011              iLIDS-VID
Method        Rank-1  Rank-5  Rank-10   Rank-1  Rank-5  Rank-10
RCNvrid [40]    70      90      95        58      84      91
STA [37]        64      87      90        44      72      84
VR [52]         42      65      78        35      57      68
SRID [23]       35      59      70        25      45      56
AFDA [30]       43      73      85        38      63      73
DTDL [22]       41      70      78        26      48      57
DDC [15]        28      48      55         –       –       –
MuDeep          65      87      93        41      70      83
Table 8. Results on the PRID-2011 and iLIDS-VID datasets (Rank-1/5/10 accuracy, %).
4.6. Further evaluations
Multi-scale vs. multi-resolution. Due to the often different camera-to-object distances and the resulting differences in object image resolution, multi-resolution re-id is also interesting in its own right and could potentially complement multi-scale re-id. Here a simple multi-resolution multi-scale re-id model is formulated by training the proposed multi-scale model at different resolutions and then fusing the outputs of the different models. We consider the original resolution (60 × 160) and a lower one (45 × 145). The feature fusion is done by concatenation. We found that the model trained at the lower resolution achieves lower results, and when the two models are fused, the final performance is around 1–2% lower than that of the model learned at the original resolution alone. Possible reasons include: (1) all three datasets have images of similar resolutions; and (2) a more sophisticated multi-resolution learning model is required, one that can automatically determine the optimal resolution for measuring similarity given a pair of probe/gallery images.
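A minimal sketch of the concatenation-based multi-resolution fusion described above. The two feature extractors stand in for the MuDeep models trained at 60 × 160 and 45 × 145; passing them as callables is an illustrative assumption, not the released code.

```python
import numpy as np

def fused_distance(probe_img, gallery_img, extract_hi, extract_lo):
    """extract_hi / extract_lo: callables mapping an image to a 1-D feature
    vector (e.g. the models trained at the two input resolutions)."""
    f_p = np.concatenate([extract_hi(probe_img), extract_lo(probe_img)])
    f_g = np.concatenate([extract_hi(gallery_img), extract_lo(gallery_img)])
    return float(np.linalg.norm(f_p - f_g))   # probe-gallery distance on fused features
```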
Results on video-based re-id. Our method can also be evaluated on the iLIDS-VID and PRID-2011 datasets. In particular, both datasets are randomly split into 50% of the persons for training and 50% for testing. We follow the evaluation protocol of [15] on the PRID-2011 dataset and only consider the first 200 persons who appear in both cameras. We compare our model with the results reported in [15, 52, 40]. The results are listed in Table 8. Our results are higher than those in [15, 52], but lower than those in [40] (only slightly so on PRID-2011; see footnote 3). These results are quite encouraging, and we expect that better performance can be achieved if the model is extended to a CNN-RNN model.
5. Conclusion
We have identified a limitation of existing deep re-id models: the lack of multi-scale discriminative feature learning. To overcome this limitation, we have presented a novel deep architecture – MuDeep – that exploits multi-scale and saliency-based learning strategies for re-id. Our
model has achieved state-of-the-art performance on several
benchmark datasets.
Acknowledgments. This work was supported in part by
two NSFC projects (#U 1611461 and #U 1509206) and
one project from STCSM (#16JC1420401).
3 We also note that our results are better than those of most video-based
re-id specialist models listed in Table 1 of [40].
References
[1] E. Ahmed, M. Jones, and T. K. Marks. An improved deep
learning architecture for person re-identification. In CVPR,
2015. 1, 2, 3, 4.1, 1, 4.2, 4.3, 4.4, 4.4
[2] J. Berclaz, F. Fleuret, and P. Fua. Multi-camera tracking and
atypical motion detection with behavioral maps. In ECCV,
2008. 1
[3] A. Chan and N. Vasconcelos. Bayesian poisson regression
for crowd counting. In ICCV, pages 545 –551, 2009. 1
[4] D. Cheng, Y. Gong, S. Zhou, Jinjun Wang, and N. Zheng.
Person re-identification by multi-channel parts-based CNN
with improved triplet loss function. In CVPR, 2016. 1, 2
[5] M. Corbetta and G. L. Shulman. Control of goal-directed
and stimulus-driven attention in the brain. In Nature Reviews
Neuroscience, 2002. 2
[6] S. Ding, L. Lin, G. Wang, and H. Chao. Deep feature
learning with relative distance comparison for person reidentification. In Pattern Recognition, 2015. 2
[7] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and
M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In CVPR, 2010. 1, 4.2, 4.3, 4.4
[8] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE TPAMI, 32:1627–1645, 2010. 4.1
[9] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis
and the controlled generation of natural stimuli using convolutional neural networks. CoRR, 2015. 1
[10] W. Ge and R. T. Collins. Marked point processes for crowd
counting. In CVPR, 2009. 1
[11] S. Gong, M. Cristani, S. Yan, and C. C. Loy. Person reidentification, volume 1. Springer, 2014. 1
[12] D. Gray, S. Brennan, and H. Tao. Evaluating appearance
models for recognition, reacquisition, and tracking. In IEEE
PETS Workshop, 2007. 4.1
[13] D. Gray and H. Tao. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In ECCV, 2008.
1
[14] M. Guillaumin, J. Verbeek, and C. Schmid. Is that you?
metric learning approaches for face identification. In ICCV,
2009. 4.1, 4.2, 4.3, 4.4
[15] M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof. Person
re-identification by descriptive and discriminative classification. In Scandinavian conference on Image analysis, pages
91–102. Springer, 2011. 4.1, 4.6, 4.6
[16] M. Hirzer, P. M. Roth, and H. Bischof.
Person reidentification by efficient impostor-based metric learning. In
IEEE AVSS, 2012. 4.1, 4.2, 4.3, 4.4
[17] M. Hirzer, P. M. Roth, M. Kostinger, and H. Bischof. Relaxed pairwise learned metric for person re-identification. In
ECCV, 2012. 4.4
[18] S. Ioffe and C. Szegedy. Batch normalization: Accelerating
deep network training by reducing internal covariate shift. In
ICML, 2015. 3
[19] L. Itti and C. Koch. Computational modelling of visual attention. Nat Rev Neurosci, 2(3):194–203, Mar 2001. 2
[20] L. Itti and C. Koch. Feature combination strategies for
saliency-based visual attention systems. Journal of Electronic Imaging, 10(1):161–169, Jan 2001. 2
[21] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional
architecture for fast feature embedding. arXiv, 2014. 4.1
[22] S. Karanam, Y. Li, and R. J. Radke. Person re-identification
with discriminatively trained viewpoint invariant dictionaries. In Proceedings of the IEEE International Conference on
Computer Vision, pages 4516–4524, 2015. 4.1, 4.6
[23] S. Karanam, Y. Li, and R. J. Radke. Sparse re-id: Block
sparsity for person re-identification. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 33–40, 2015. 4.1, 4.6
[24] S. Khamis, C. Kuo, V. Singh, V. Shet, and L. Davis. Joint
learning for attribute-consistent person re-identification. In
ECCV workshop, 2014. 1, 3
[25] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and
H. Bischof. Large scale metric learning from equivalence
constraints. In CVPR, 2012. 1, 4.4
[26] I. Kviatkovsky, A. Adam, and E. Rivlin. Color invariants for
person reidentification. IEEE TPAMI, 2013. 1
[27] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter
pairing neural network for person re-identification. In CVPR,
2014. 1, 2, 4.1, 4.2, 4.3, 4.4
[28] W. Li, R. Zhao, and X. Wang. Human re-identification with
transferred metric learning. In ACCV, 2012. 3.1, 4.1
[29] X. Li, W.-S. Zheng, X. Wang, T. Xiang, and S. Gong. Multiscale learning for low-resolution person re-identification. In
ICCV, December 2015. 2
[30] Y. Li, Z. Wu, S. Karanam, and R. J. Radke. Multi-shot human
re-identification using adaptive fisher discriminant analysis.
In BMVC, volume 1, page 2, 2015. 4.1, 4.6
[31] Z. Li, S. Chang, F. Liang, T. S. Huang, L. Cao, and J. R.
Smith. Learning locally-adaptive decision functions for person verification. In ECCV, 2014. 1, 4.1, 4.4
[32] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification
by local maximal occurrence representation and metric
learning. In CVPR, 2015. 1, 3, 4.1, 4.2, 4.3, 4.4
[33] G. Lisanti, I. Masi, A. Bagdanov, and A. D. Bimbo. Person re-identification by iterative re-weighted sparse ranking.
IEEE TPAMI, 2014. 1
[34] G. Lisanti, I. Masi, and A. D. Bimbo. Matching people
across camera views using kernel canonical correlation analysis. In ICDSC, 2014. 4.4, 4.4
[35] H. Liu, J. Feng, M. Qi, J. Jiang, and S. Yan. End-to-end
comparative attention networks for person re-identification.
In IEEE TIP, 2016. 2
[36] J. Liu, Z.-J. Zha, Q. Tian, D. Liu, T. Yao, Q. Ling, and T. Mei.
Multi-scale triplet cnn for person re-identification. In ACM
Multimedia, 2016. 2, 4.1, 2, 4.2, 4.4, 4.4
[37] K. Liu, B. Ma, W. Zhang, and R. Huang. A spatiotemporal appearance representation for video-based pedestrian re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pages 3810–3818,
2015. 4.1, 4.6
[38] L. Ma, X. Yang, and D. Tao. Person re-identification over
camera networks using multi-task distance metric learning.
In IEEE TIP, 2014. 1, 4.4
[39] A. Mahendran and A. Vedaldi. Understanding deep image
representations by inverting them. In CVPR, 2015. 1
[40] N. McLaughlin, J. Martinez del Rincon, and P. Miller. Recurrent convolutional network for video-based person reidentification. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 1325–
1334, 2016. 4.1, 4.6, 4.6, 3
[41] T. Mensink, W. Zajdel, and B. Krose. Distributed em learning for appearance based multi-camera tracking. In ICDSC,
2007. 1
[42] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of
visual attention. In NIPS, 2014. 2
[43] P. Sermanet, A. Frome, and E. Real. Attention for finegrained categorization. arXiv, 2014. 2
[44] H. Shi, Y. Yang, X. Zhu, S. Liao, Z. Lei, W. Zheng, and
S. Z. Li. Embedding deep metric for person re-identification:
A study against large variations. In ECCV, 2016. 1, 2, 4.1,
4.2, 4.3, 4.4, 4.4
[45] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4,
inception-resnet and the impact of residual connections on
learning. In arxiv, 2016. 3.1, 4.5
[46] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed,
D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In CVPR, 2015. 3.1
[47] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer
vision. In arxiv, 2015. 3.1
[48] T. Xiao, W. Ouyang, H. Li, and X. Wang. Learning deep feature
representations with domain guided dropout for person reidentification. In CVPR, 2016. 1, 2, 3, 4.1, 4.3, 3, 4.4, 4.4
[49] D. Tao, L. Jin, Y. Wang, Y. Yuan, and X. Li. Person reidentification by regularized smoothing kiss metric learning. IEEE
TCSVT, 2013. 1
[50] R. R. Varior, M. Haloi, and G. Wang. Gated siamese
convolutional neural network architecture for human reidentification. In ECCV, 2016. 1, 2, 4.1, 4.2, 4.4
[51] F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang. Joint
learning of single-image and cross-image representations for
person re-identification. In CVPR, 2016. 1, 2, 4.1, 4.2, 4.4,
4, 4.4
[52] T. Wang, S. Gong, X. Zhu, and S. Wang. Person reidentification by video ranking. In European Conference on
Computer Vision, pages 688–703. Springer, 2014. 4.1, 4.6,
4.6
[53] X. Wang, K. T. Ma, G.-W. Ng, and W. E. L. Grimson. Trajectory analysis and semantic region modeling using a nonparametric bayesian model. In CVPR, 2008. 1
[54] X. Wang, K. Tieu, and W. Grimson. Correspondencefree multi-camera activity analysis and scene modeling. In
CVPR, 2008. 1
[55] S. Wu, Y.-C. Chen, X. Li, A.-C. Wu, J.-J. You, and W.-S.
Zheng. An enhanced deep feature representation for person
re-identification. In WACV, 2016. 3, 4.1
[56] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. Endto-end deep learning for person search. arXiv preprint
arXiv:1604.01850, 2016. 3
[57] F. Xiong, M. Gou, O. Camps, and M. Sznaier. Person reidentification using kernel-based metric learning methods. In
ECCV, 2014. 1, 3
[58] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked
attention networks for image question answering. In CVPR,
2016. 2
[59] Z. Zhang, Y. Chen, and V. Saligrama. A novel visual word
co-occurrence model for person re-identification. In ECCV
workshop, 2014. 1, 3
[60] R. Zhao, W. Ouyang, and X. Wang. Person re-identification
by salience matching. In ICCV, 2013. 1
[61] R. Zhao, W. Ouyang, and X. Wang. Unsupervised salience
learning for person re-identification. In CVPR, 2013. 4.1,
4.2, 4.3, 4.4
[62] R. Zhao, W. Ouyang, and X. Wang. Learning mid-level filters for person re-identification. In CVPR, 2014. 1, 4.1, 4.4
[63] W.-S. Zheng, S. Gong, and T. Xiang. Re-identification by
relative distance comparison. IEEE TPAMI, 2013. 1
| 1 |
Low-dimensional Data Embedding via Robust Ranking
Ehsan Amid (University of California, Santa Cruz, CA 95064; [email protected])
Nikos Vlassis (Adobe Research, San Jose, CA 95113; [email protected])
Manfred K. Warmuth (University of California, Santa Cruz, CA 95064; [email protected])
arXiv:1611.09957v2 [] 16 May 2017
ABSTRACT
We describe a new method called t-ETE for finding a low-dimensional
embedding of a set of objects in Euclidean space. We formulate the
embedding problem as a joint ranking problem over a set of triplets,
where each triplet captures the relative similarities between three
objects in the set. By exploiting recent advances in robust ranking, t-ETE produces high-quality embeddings even in the presence
of a significant amount of noise and better preserves local scale
than known methods, such as t-STE and t-SNE. In particular, our
method produces significantly better results than t-SNE on signature datasets while also being faster to compute.
KEYWORDS
Ranking, Triplet Embedding, Robust Losses, t-Exponential Distribution, Dimensionality Reduction, t-SNE.
1 INTRODUCTION
Learning a metric embedding for a set of objects based on relative
similarities is a central problem in human computation and crowdsourcing. The application domain includes a variety of different
fields such as recommender systems and psychological questionnaires. The relative similarities are usually provided in the form
of triplets, where a triplet (i, j, k) expresses that “object i is more
similar to object j than to object k”, for which the similarity function may be unknown or not even quantified. The first object i
is referred to as the query object and objects j and k are the test
objects. The triplets are typically gathered by human evaluators via
a data-collecting mechanism such as Amazon Mechanical Turk1 .
These types of constraints have also been used as side information
in semi-supervised metric learning [4, 8] and clustering [2].
Given a set of relative similarity comparisons on a set of objects,
the goal of triplet embedding is to find a representation for the
objects in some metric space such that the constraints induced
by the triplets are satisfied as much as possible. In other words,
the embedding should reflect the underlying similarity function
from which the constraints were generated. Earlier methods for
triplet embedding include Generalized Non-metric Multidimensional Scaling (GNMDS) [1], Crowd Kernel Learning (CKL) [14],
and Stochastic Triplet Embedding (STE) and its extension, t-distributed
STE (t-STE) [15].
One major drawback of the previous methods for triplet embedding is that their performance can drop significantly when a small
amount of noise is introduced in the data. The noise may arise due
to different reasons. For instance, each human evaluator may use
a different similarity function when comparing objects [3]. As a
result, there might exist conflicting triplets with reversed test objects. Another type of noise could be due to insufficient degrees of freedom when mapping an intrinsically (and possibly hidden) high-dimensional representation to a lower-dimensional embedding. A simple example is mapping uniformly distributed points on a two-dimensional circle to a one-dimensional line; regardless of the embedding, the end points of the line will always violate some similarity constraints.
1 https://www.mturk.com
In this paper, we cast the triplet embedding problem as a joint
ranking problem. In any embedding, for each object i, the remaining
objects are naturally ranked by their “distance” to i. The triplet
(i, j, k) expresses that the object j should be ranked higher than
object k for the ranking of i. Therefore, triplet embedding can be
viewed as mapping the objects into a Euclidean space so that the
joint rankings belonging to all query objects are as consistent (with
respect to the triplets) as possible. In order to find the embedding,
we define a loss for each triplet and minimize the sum of losses
over all triplets. Initially our triplet loss is unbounded. However in
order to make our method robust to noise, we apply a novel robust
transformation (using the generalized log function), which caps the
triplet loss by a constant. Our new method, t-Exponential Triplet
Embedding (t-ETE)2 , inherits the heavy-tail properties of t-STE
in producing high-quality embeddings, while being significantly
more robust to noise than any other method. Figure 1 illustrates
examples of embeddings of a subset of 6000 data points from the
MNIST dataset using t-STE and our proposed method. The triplets
are synthetically generated by sampling a random point from one
of the 20-nearest neighbors for each point and another point from
those that are located far away (100 triplets for each point). The two
embeddings are very similar when there is no noise in the triplets
(Figures 1(a) and 1(b)). However, after ‘reversing’ 20% of the triplets,
t-STE fails to produce a meaningful embedding (Figure 1(c)) while
t-ETE is almost unaffected by the noise (Figure 1(d)).
We also apply our t-ETE method to dimensionality reduction
and develop a new technique, which samples a subset of triplets in
the high-dimensional space and finds the low-dimensional representation that satisfies the corresponding ranking. We quantify the
importance of each triplet by a non-negative weight. We show that
even a small carefully chosen subset of triplets capture sufficient information about the local as well as the global structure of the data
to produce high-quality embeddings. Our proposed method outperforms the commonly used t-SNE [9] for dimensionality reduction
in many cases while having a much lower complexity.
2 TRIPLET EMBEDDING VIA RANKING
In this section we formally define the triplet embedding problem.
Let I = {1, 2, . . . , N } denote a set of objects. Suppose that the feature (metric) representation of these objects is unknown. However,
2 The acronym t-STE is based on the Student-t distribution; there, “t” is part of the name of the distribution. Our method, t-ETE, is based on the t-exponential family; here, t is a parameter of the model.
Figure 1: Experiments on the MNIST dataset: noise-free triplets using (a) t-STE, and (b) t-ETE, and triplets with 20% noise using
(c) t-STE, and (d) the proposed t-ETE.
some information about the relative similarities of these objects is
available in the form of triplets. A triplet (i, j, k) is an ordered tuple
which represents a constraint on the relative similarities of the objects i, j, and k, of the type “object i is more similar to object j than
to object k.” Let T = {(i, j, k)} denote the set of triplets available
for the set of objects I.
Given the set of triplets T , the triplet embedding problem amounts
to finding a metric representation of the objects, Y = {y1 , y2 , . . . , yN },
such that the similarity constraints imposed by the triplets are
satisfied as much as possible by a given distance function in the
embedding. For instance, in the case of Euclidean distance, we want
\[ (i, j, k) \;\Longrightarrow\; \|y_i - y_j\| < \|y_i - y_k\|, \quad \text{w.h.p.} \tag{1} \]
The reason that we may not require all the constraints to be satisfied
in the embedding is that there may exist inconsistent and/or conflicting constraints among the set of triplets. This is a very common
phenomenon when the triplets are collected via human evaluators
via crowdsourcing [3, 16].
We can consider the triplet embedding problem as a ranking
problem imposed by the set of constraints T . More specifically,
each triplet (i, j, k) can be seen as a partial ranking result where for
a query over i, we are given two results, namely j and k, and the
triplet constraint specifies that “the result j should have relatively
higher rank than k”. In this setting, only the order of closeness
of test objects to the query object determines the ranking of the
objects.
Let us define ℓ_ijk(Y) ∈ [0, ∞) to be the non-negative loss associated with the triplet constraint (i, j, k). To reflect the ranking constraint, the loss ℓ_ijk(Y) should be a monotonically increasing (decreasing) function of the pairwise distance ‖y_i − y_j‖ (‖y_i − y_k‖). These properties ensure that ℓ_ijk(Y) → 0 whenever ‖y_i − y_j‖ → 0 and ‖y_i − y_k‖ → ∞. We can now define the triplet embedding problem as minimizing the sum of the ranking losses of the triplets in T, that is,
\[ \min_{Y} L_{\mathcal{T}}, \qquad L_{\mathcal{T}} = \sum_{(i,j,k)\in\mathcal{T}} \ell_{ijk}(Y). \tag{2} \]
In the above formulation, the individual loss of each triplet is unbounded. This means that, in cases where a subset of the constraints is corrupted by noise, the loss of even a single inconsistent triplet may dominate the total objective (2) and result in poor performance. To avoid such an effect, we introduce a new robust transformation that caps the individual loss of each triplet from above by a constant. As we will see, the capping helps to discount the noisy triplets and produce high-quality embeddings, even in the presence of a significant amount of noise.
3 ROBUST LOSS TRANSFORMATIONS
We first introduce the generalized log_t and exp_t functions as generalizations of the standard log and exp functions, respectively. The generalized log_t function with temperature parameter 0 < t < 2 is defined as [10, 12]
\[ \log_t(x) = \begin{cases} \log(x) & \text{if } t = 1,\\ (x^{1-t} - 1)/(1 - t) & \text{otherwise}. \end{cases} \tag{3} \]
Note that log_t is concave and non-decreasing and generalizes the log function, which is recovered in the limit t → 1. The exp_t function is defined as the inverse of the log_t function:
\[ \exp_t(x) = \begin{cases} \exp(x) & \text{if } t = 1,\\ \big[\,1 + (1 - t)\,x\,\big]_{+}^{1/(1-t)} & \text{otherwise}, \end{cases} \tag{4} \]
where [·]_+ = max(0, ·). Similarly, the standard exp is recovered in the limit t → 1. Figures 2(a) and 2(b) illustrate the exp_t and log_t functions for several values of t.
One major difference from the standard exp and log functions is that the familiar distributive properties do not hold in general: exp_t(a + b) ≠ exp_t(a) exp_t(b) and log_t(a b) ≠ log_t(a) + log_t(b). An important property of exp_t is that it decays to zero more slowly than exp for values of 1 < t < 2. This motivates defining heavy-tailed distributions using the exp_t function. More specifically, the t-exponential family of distributions is defined as a generalization of the exponential family by using the exp_t function in place of the standard exp function [11, 13].
Our main focus is the capping property of the logt function: for
values x > 1, the logt function with t > 1 grows slower than the
log function and reaches the constant value 1/(t − 1) in the limit
Algorithm 1 Weighted t-ETE Dimensionality Reduction
Input: high-dimensional data X = {x_1, x_2, ..., x_n}, temperatures t and t', embedding dimension d
Output: Y = {y_1, y_2, ..., y_n}, where y_i ∈ R^d
  T ← {}, W ← {}
  for i = 1 to n do
    for j ∈ {m-nearest neighbors of i} do
      sample k uniformly from {k : ‖x_i − x_k‖ > ‖x_i − x_j‖}
      compute weight ω_ijk using (13)
      T ← T ∪ {(i, j, k)}
      W ← W ∪ {ω_ijk}
    end for
  end for
  for all ω ∈ W: ω ← ω / max(W) + γ
  initialize Y to n points in R^d sampled from N(0, 10^{-3} I_{d×d})
  for r = 1 to iter# do
    calculate the gradient ∇C_{W,T} of (12)
    update Y ← Y − η ∇C_{W,T}
  end for

Figure 2: Generalized exp and log functions: (a) the exp_t function, and (b) the log_t function for several values of t (t → 0, t = 0.5, t = 1, t = 1.5). Note that for t = 1, the two functions reduce to the standard exp and log functions, respectively.
x → ∞. This idea can be used to define the following robust loss transformation of the non-negative unbounded loss ℓ:
\[ \rho_t(\ell) = \log_t(1 + \ell), \qquad 1 < t < 2. \tag{5} \]
Note that ρ_t(0) = 0, as desired. Moreover, the derivative of the transformed loss satisfies ρ'_t(ℓ) → 0 as ℓ → ∞, and in addition the transformed loss converges to a constant as ℓ → ∞, i.e., ρ_t(ℓ) → 1/(t − 1) ≥ 0. We will use this transformation to develop a robust ranking approach for the triplet embedding problem in the presence of noise in the set of constraints.
Finally, note that setting t = 1 yields the transformation
\[ \rho_1(\ell) = \log(1 + \ell), \tag{6} \]
which has been used for robust binary ranking in [17]. Note that ρ_1(ℓ) grows more slowly than ℓ, but still ρ_1(ℓ) → ∞ as ℓ → ∞. In other words, the transformed loss is not capped from above. We will show that this transformation is not sufficient for robustness to noise.
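For concreteness, the generalized functions in Eqs. (3)–(5) can be transcribed directly into NumPy; this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def log_t(x, t):
    """Generalized log_t, Eq. (3); for t > 1 it saturates at 1/(t-1) as x grows."""
    x = np.asarray(x, dtype=float)
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Generalized exp_t, Eq. (4), the inverse of log_t."""
    x = np.asarray(x, dtype=float)
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def rho_t(loss, t):
    """Robust transformation of a non-negative loss, Eq. (5); capped by 1/(t-1) for t > 1."""
    return log_t(1.0 + loss, t)
```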
4 T-EXPONENTIAL TRIPLET EMBEDDING
Building on our discussion of the heavy-tailed properties of the generalized exp function (4), we can define the ratio
\[ \ell^{(t')}_{ijk}(Y) = \frac{\exp_{t'}\!\big(-\|y_i - y_k\|^2\big)}{\exp_{t'}\!\big(-\|y_i - y_j\|^2\big)} \tag{7} \]
with 1 < t' < 2 as the loss of the ranking associated with the triplet (i, j, k). The loss is non-negative and satisfies the properties of a valid ranking loss, as discussed earlier. Note that, due to the heavy tail of the exp_t' function with 1 < t' < 2, the loss function (7) encourages relatively higher satisfaction of the ranking compared to, e.g., the standard exp function.
Defining the loss of each triplet (i, j, k) ∈ T as the ranking loss in (7), we formulate the objective of the triplet embedding problem as minimizing the sum of robust transformations of the individual losses, that is,
\[ \min_Y C_{\mathcal{T}}, \qquad C_{\mathcal{T}} = \sum_{(i,j,k)\in\mathcal{T}} \log_t\!\Big(1 + \ell^{(t')}_{ijk}(Y)\Big), \tag{8} \]
in which 1 < t < 2. We call our method t-Exponential Triplet Embedding (t-ETE, for short). Note that the loss of each triplet in the summation is now capped from above by 1/(t − 1). Additionally, the gradient of the objective function (8) with respect to the positions of the objects Y,
\[ \nabla C_{\mathcal{T}} = \sum_{(i,j,k)\in\mathcal{T}} \frac{1}{\big(1 + \ell^{(t')}_{ijk}(Y)\big)^{t}}\, \nabla \ell^{(t')}_{ijk}(Y), \tag{9} \]
includes additional forgetting factors 1/(1 + ℓ^{(t')}_{ijk}(Y))^t that damp the effect of those triplets that are highly unsatisfied.
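A small NumPy sketch of the resulting objective, reusing the exp_t and log_t helpers sketched in Section 3. It illustrates Eqs. (7)–(9) (the ratio loss, the capped sum, and the damping factors); the default t = t' = 1.7 is taken from the paper's noise experiments, and this is not the authors' optimizer.

```python
import numpy as np

def t_ete_objective(Y, triplets, t=1.7, t_prime=1.7):
    """Return the capped objective (8) and the per-triplet damping factors in (9)."""
    Y = np.asarray(Y, dtype=float)
    total, damping = 0.0, []
    for i, j, k in triplets:
        d_ij = np.sum((Y[i] - Y[j]) ** 2)
        d_ik = np.sum((Y[i] - Y[k]) ** 2)
        l_ijk = exp_t(-d_ik, t_prime) / exp_t(-d_ij, t_prime)   # ratio loss, Eq. (7)
        total += log_t(1.0 + l_ijk, t)                          # capped loss, Eq. (8)
        damping.append((1.0 + l_ijk) ** (-t))                   # forgetting factor, Eq. (9)
    return total, np.asarray(damping)
```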
5 CONNECTION TO PREVIOUS METHODS
Note that by setting t = 1, we can use the property of the log function log(a) = −log(1/a) to write the objective (8) as the following equivalent maximization problem (see footnote 3):
\[ \max_Y \sum_{(i,j,k)\in\mathcal{T}} \log p^{(t')}_{ijk}, \tag{10} \]
where
\[ p^{(t')}_{ijk} = \frac{\exp_{t'}\!\big(-\|y_i - y_j\|^2\big)}{\exp_{t'}\!\big(-\|y_i - y_j\|^2\big) + \exp_{t'}\!\big(-\|y_i - y_k\|^2\big)} \tag{11} \]
is defined as the probability that the triplet (i, j, k) is satisfied. Setting t' = 1 and t' = 2 recovers the STE and t-STE (with α = 1) formulations, respectively (see footnote 4). STE (and t-STE) aim to maximize the joint probability that the triplets T are satisfied in the embedding Y. The poor performance of STE and t-STE in the presence of noise can be explained by the fact that there is no capping of the log-satisfaction probabilities of each triplet (see (6) and footnote 5). Therefore, the
3 Note that log_t(a) ≠ − log_t(1/a) in general.
4 The Student-t distribution with α degrees of freedom can be written in the form of a t-exponential distribution with −(α + 1)/2 = 1/(1 − t) (see [5]).
5 Note that in this case, the probabilities should be capped from below.
[Figure 3 panels: generalization error and nearest-neighbor error vs. the number of dimensions, and generalization accuracy and nearest-neighbor accuracy vs. the noise level, for the MNIST Digits (top row) and MIT Scenes (bottom row) datasets.]
Figure 3: Generalization and nearest-neighbor performance: MNIST (top row) and MIT Scenes (bottom row). (a) Generalization
error, (b) nearest-neighbor error, (c) generalization accuracy in presence of noise, and (d) nearest-neighbor accuracy in presence
of noise. For all the experiments, we use t = t 0 . For the generalization and nearest-neighbor error experiments, we start with
t = 2 and use a smaller t as the number of dimensions increases (more degree of freedom). For the noise experiments, we set
t = 1.7. Figures best viewed in color.
low satisfaction probabilities of the noisy triplets dominate the objective function (10) and thus result in poor performance.
6 APPLICATIONS TO DIMENSIONALITY REDUCTION
Now, consider the case where a high-dimensional representation X = {x_i}_{i=1}^{n} is provided for a set of n objects. Having the t-ETE
method in hand, one may ask the following question: “given the
high-dimensional representation X for the objects, is it possible
to find a lower-dimensional representation Y for these objects
by satisfying a set of ranking constraints (i.e., triplets), formed
based on their relative similarities in the representation X?”. Note
that the total number of triplets that can be formed on a set of n objects is O(n³), and trying to satisfy all possible triplets is computationally expensive. However, we argue that most of these triplets are redundant and contain the same amount of information about the relative similarity of the objects. For instance, consider two triplets (i, j, k) and (i, j, k′) in which i and k are located far away and k and k′ are neighbors of each other. Given (i, j, k), having (i, j, k′) provides no extra information on the placements of i and j, as long as k and k′ are located close together in the embedding. In other words, k and k′ are viewed by i as almost the same object.
Note that for each object i, the nearby objects having relatively
short distance to i specify the local structure of the object, whereas
those that are located far away determine the global placement of
i in the space. For that matter, for each query object i, we would
like to consider those triplets (with high probability) that preserve
both local and global structure of the data. Following the discussion
above, we emphasize preserving the local information by explicitly choosing the first test object among the nearest neighbors of the query object i. The global information of object i is then preserved by considering a small number of objects sampled uniformly from those that are located farther away. This leads to the following procedure for sampling a set of informative triplets. For each object i, we choose the first object from the set of m-nearest neighbors of i and then sample the outlier object uniformly from those that are located farther away from i than the first object. This is equivalent to sampling a triplet uniformly at random, conditioned on the first test object being chosen among the m-nearest neighbors of i. We use an equal number of nearest neighbors and outliers for each point, which results in nm² triplets in total.
Figure 4: Embedding of the Food dataset using (a) t-STE and (b) t-ETE (t = 2). There appears to be no clear separation between the clusters in (a), while in (b) three different clusters of food are evident: “Vegetables and Meals” (top), “Ice creams and Desserts” (bottom left), and “Breads and Cookies” (bottom right).
The original t-ETE formulation aims to satisfy each triplet equally
likely. This would be reasonable in cases where no side information
about the extent of each constraint is provided. However, given the
high-dimensional representation of the objects X, this assumption
may not be accurate. In other words, the ratio of the pairwise similarities of the objects specified in each triplet may vary significantly
among the triplets. To account for this variation, we can introduce a
notion of weight for each triplet to reflect the extent that the triplet
needs to be satisfied. More formally, let ωi jk ≥ 0 denote the weight
associated with the triplet (i, j, k) and let W = {ωi jk } denote the
set of all triplet weights. The Weighted t-ETE can be formulated as
minimizing the sum of weighted capped losses of triplets, that is,
\[ \min_Y C_{\mathcal{W},\mathcal{T}}, \qquad C_{\mathcal{W},\mathcal{T}} = \sum_{(i,j,k)\in\mathcal{T}} \omega_{ijk}\, \log_t\!\Big(1 + \ell^{(t')}_{ijk}(Y)\Big). \tag{12} \]
The t-ETE method can be seen as a special case of the weighted
triplet embedding formulation where all the triplets have unit
weights.
Finally, to assign weights to the sampled triplets, we note that
the loss ratio in (7) is inversely proportional to how well the triplet
is satisfied in the embedding. This suggests using the inverse loss
ratios of the triplets in the high-dimensional space as the weights
associated with the triplets. More formally, we set
\[ \omega_{ijk} = \frac{\exp\!\big(-\|x_i - x_j\|^2/\sigma_{ij}^2\big)}{\exp\!\big(-\|x_i - x_k\|^2/\sigma_{ik}^2\big)}, \tag{13} \]
where σ_{ij}^2 = σ_i σ_j is a constant scaling factor for the pair (i, j).
We set σi to the distance of i to its 10-th nearest neighbor. This
choice of scaling adaptively handles the dense as well as the sparse
regions of data distribution. Finally, the choice of exp function
rather than using expt 0 with 1 < t 0 < 2 is to have more emphasis
on the distances of the objects in the high-dimensional space. The
pseudo-code for the algorithm is shown in Algorithm 1. In practice,
dividing each weight at the end by the maximum weight in W and
adding a constant positive bias γ > 0 to all weights improves the results.
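The sampling and weighting step of Algorithm 1 can be sketched as follows. For clarity it computes the full pairwise-distance matrix, whereas in practice an approximate m-nearest-neighbor search (e.g., [7]) would be used; it also draws one outlier per neighbor, and drawing m outliers per neighbor instead yields the nm² triplets mentioned earlier. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def sample_weighted_triplets(X, m=20, gamma=0.01, rng=None):
    """Return triplets (i, j, k) with weights from Eq. (13), normalized as in Algorithm 1."""
    rng = rng or np.random.default_rng()
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # full O(n^2) distances
    sigma = np.sort(D, axis=1)[:, 10]                            # distance to 10-th nearest neighbor
    triplets, weights = [], []
    for i in range(n):
        neighbours = np.argsort(D[i])[1:m + 1]                   # m nearest neighbors of i
        for j in neighbours:
            farther = np.flatnonzero(D[i] > D[i, j])              # candidate outliers
            if farther.size == 0:
                continue
            k = int(rng.choice(farther))
            w = (np.exp(-D[i, j] ** 2 / (sigma[i] * sigma[j]))
                 / np.exp(-D[i, k] ** 2 / (sigma[i] * sigma[k])))  # Eq. (13)
            triplets.append((i, int(j), k))
            weights.append(w)
    weights = np.asarray(weights)
    return triplets, weights / weights.max() + gamma              # normalize, add bias
```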
Note that both sampling and weighting the triplets using (13) and calculating the gradient of the loss require computing pairwise distances for only O(nm²) pairs of objects in the high-dimensional space (for instance, by using efficient methods to find the m-nearest neighbors, such as [7]) or in the low-dimensional embedding. In many cases m² ≪ n, which results in a huge computational advantage over the O(n²) complexity of t-SNE.
7 EXPERIMENTS
In this section, we conduct experiments to evaluate the performance of t-ETE for triplet embedding as well as the application of Weighted
t-ETE for non-linear dimensionality reduction. In the first set of
experiments, we compare t-ETE to the following triplet embedding
methods: 1) GNMDS, 2) CKL, 3) STE, and 4) t-STE. We evaluate the
generalization performance of the different methods by means of
satisfying unseen triplets and the nearest-neighbor error, as well as
their robustness to constraint noise. We also provide visualization
[Figure 5 shows the 2-D embedding of the Music dataset, with artist names plotted and colored by genre: rock, metal, pop, dance, hiphop, jazz, country, reggae, other.]
Figure 5: Results of the t-ETE algorithm (t = 2) on the Music dataset: compare the result with the one in [15]. The neighborhood
structure is more meaningful in some regions than the one with t-STE.
results on two real-world datasets. Next, we apply the Weighted tETE method for non-linear dimensionality reduction and compare
the result to the t-SNE method. The code for the (Weighted) t-ETE
method as well as all the experiments will be publicly available
upon acceptance.
7.1 Generalization and Nearest-Neighbor Error
We first evaluate the performance of the different methods by means of generalization to unseen triplets as well as preservation of nearest-neighbor similarity. For this part of the experiments, we consider the MNIST Digits⁶ (1000 subsamples) and MIT Scenes⁷ (800 subsamples) datasets. The synthetic triplets are generated as mentioned earlier (100 triplets per point). To evaluate the generalization performance, we perform 10-fold cross-validation and report the fraction of held-out triplets that are unsatisfied as a function of the number of dimensions. This quantity indicates how well the method learns the underlying structure of the data. Additionally, we calculate the nearest-neighbor error as a function of the number of dimensions. The nearest-neighbor error measures how well the embedding captures the pairwise similarity of the objects based on relative comparisons. The results are shown in Figures 3(a)-3(b). As can be seen, t-ETE performs as well as, or better than, the best performing method on both the generalization and the nearest-neighbor error. This indicates that t-ETE successfully captures the underlying structure of the data and scales properly with the number of dimensions.
6 http://yann.lecun.com/exdb/mnist/
7 http://people.csail.mit.edu/torralba/code/spatialenvelope/
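For reference, the two quantities reported in this subsection can be computed as in the following sketch; the exact nearest-neighbor protocol used in the paper may differ slightly from this simple leave-one-out, label-based variant.

```python
import numpy as np

def triplet_error(Y, heldout_triplets):
    """Fraction of held-out triplets (i, j, k) violated by the embedding Y."""
    Y = np.asarray(Y, dtype=float)
    violated = sum(
        np.sum((Y[i] - Y[j]) ** 2) >= np.sum((Y[i] - Y[k]) ** 2)
        for i, j, k in heldout_triplets)
    return violated / len(heldout_triplets)

def nn_error(Y, labels):
    """Leave-one-out 1-nearest-neighbor classification error in the embedding."""
    Y, labels = np.asarray(Y, dtype=float), np.asarray(labels)
    D = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)            # exclude the point itself
    nn = D.argmin(axis=1)
    return float(np.mean(labels[nn] != labels))
```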
7.2 Robustness to Noise
Next, we evaluate the robustness of the different methods to triplet
noise. To evaluate the performance, we generate a different test set
for both datasets with the same number of triplets as the training set.
For each noise level, we randomly subsample a subset of training
triplets and reverse the order of the objects. After generating the
embedding, we evaluate the performance on the test set and report
the fraction of the test triplets that are satisfied as well as the nearestneighbor accuracy. The results are shown in Figure 3(c)-3(d). As
can be seen, the performance of all the other methods starts to drop
immediately when only a small amount of noise is added to the data.
On the other hand, t-ETE is very robust to triplet noise such that
the performance is almost unaffected for up to 15% of noise. This
verifies that t-ETE can be effectively applied to real-world datasets
where a large portion of the triplets may have been corrupted by
noise.
Low-dimensional Data Embedding via Robust Ranking
Figure 6: Dimensionality reduction results using t-SNE (top figure) and Weighted t-ETE (bottom figure) on: a) Wine, b) Sphere,
c) Swiss Roll, d) Faces, e) COIL-20, f) MNIST, g) USPS, and h) Letters datasets. We use t = t 0 = 2 in all experiments. Figures best
viewed in color.
7.3 Visualization Results
We provide visualization results on the Food [16] and Music [6]
datasets. Figures 4(a) and 4(b) illustrate the results on the Food
dataset using t-STE and t-ETE (t = 2), respectively. The same initialization of the data points is used for both methods. As can be seen, no clear clusters are evident using the t-STE method. On the other hand, t-ETE reveals three main clusters in the data: “Vegetables and Meals” (top), “Ice creams and Desserts” (bottom left), and “Breads and Cookies” (bottom right).
The visualization of the Music dataset using the t-ETE method
(t = 2) is shown in Figure 5. The result can be compared with the
one using the t-STE method (available at homepage.tudelft.nl/19j49/ste). The distribution of the artists and the neighborhood structure are similar for both methods, but more
meaningful in some regions using the t-ETE method. This can be
due to the noise in the triplets that have been collected via human
evaluators. Additionally, t-ETE results in 0.52 nearest-neighbor
error on the data points compared to 0.63 error using t-STE.
7.4 Dimensionality Reduction Results
We apply the weighted triplet embedding method to find a 2-dimensional visualization of the following datasets: 1) Wine⁹, 2) Sphere (1000 uniform samples from the surface of a three-dimensional sphere¹⁰), 3) Swiss Roll (3000 sub-samples¹¹), 4) Faces (400 synthetic faces with different pose and lighting¹¹), 5) COIL-20¹², 6) MNIST (10,000 sub-samples), and 7) USPS (11,000 images of handwritten digits¹³). We compare our results with those obtained using
the t-SNE method. In all experiments, we use m = 20 for our method
(for COIL-20, we use m = 10) and bias γ = 0.01. The results are
shown in Figure 6.
As can be seen, our method successfully preserves the underlying structure of the data and produces high-quality embeddings on all datasets, both those with an underlying low-dimensional manifold (e.g., Swiss Roll) and those consisting of clusters of points (e.g., USPS). On the other hand, in most cases t-SNE over-emphasizes the separation of the points and therefore tears up the manifold. The same effect happens for the
clusters, e.g., in the USPS dataset. The embedding forms multiple
separated sub-clusters (for instance, the clusters of points ‘3’s, ‘7’s,
and ‘8’s are divided into several smaller sub-clusters). Our objective
function also enjoys better convergence properties and converges
to a good solution using simple gradient descent. This eliminates
the need for more complex optimization tricks such as momentum
and early over-emphasis, used in t-SNE.
8 CONCLUSION
We introduced a ranking approach for embedding a set of objects
in a low-dimensional space, given a set of relative similarity constraints in the form of triplets. We showed that our method, t-ETE,
is robust to high levels of noise in the triplets. We generalized our
method to a weighted version to incorporate the importance of
each triplet. We applied our weighted triplet embedding method
to develop a new dimensionality reduction technique, which outperforms the commonly used t-SNE method in many cases while
having a lower complexity and better convergence behavior.
REFERENCES
[1] Sameer Agarwal, Josh Wills, Lawrence Cayton, Gert Lanckriet, David Kriegman,
and Serge Belongie. 2007. Generalized Non-metric Multidimensional Scaling. In
Proceedings of the Eleventh International Conference on Artificial Intelligence and
Statistics. San Juan, Puerto Rico.
[2] Ehsan Amid, Aristides Gionis, and Antti Ukkonen. 2015. A kernel-learning
approach to semi-supervised clustering with relative distance comparisons. In
Joint European Conference on Machine Learning and Knowledge Discovery in
Databases. Springer, 219–234.
[3] Ehsan Amid and Antti Ukkonen. 2015. Multiview Triplet Embedding: Learning
Attributes in Multiple Maps. In Proceedings of the 32nd International Conference
on Machine Learning (ICML-15). 1472–1480. http://jmlr.org/proceedings/papers/
v37/amid15.pdf
9 UCI repository.
10 research.cs.aalto.fi/pml/software/dredviz/
11 web.mit.edu/cocosci/isomap/datasets.html
12 www1.cs.columbia.edu/CAVE/software/softlib/coil-20.php
13 www.cs.nyu.edu/~roweis/data.html
[4] Jason V Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S Dhillon. 2007.
Information-theoretic metric learning. In Proceedings of the 24th international
conference on Machine learning. ACM, 209–216.
[5] Nan Ding and S. V. N. Vishwanathan. 2010. t -Logistic Regression. In Proceedings
of the 23th International Conference on Neural Information Processing Systems
(NIPS’10). Cambridge, MA, USA, 514–522.
[6] Daniel P. W. Ellis, Brian Whitman, Adam Berenzweig, and Steve Lawrence. 2002.
The Quest for Ground Truth in Musical Artist Similarity. In Proceedings of the
3rd International Conference on Music Information Retrieval (ISMIR ’02). Paris,
France, 170–177.
[7] Ville Hyvönen, Teemu Pitkänen, Sotiris Tasoulis, Elias Jääsaari, Risto Tuomainen,
Liang Wang, Jukka Corander, and Teemu Roos. 2015. Fast k-nn search. arXiv
preprint arXiv:1509.06957 (2015).
[8] Eric Yi Liu, Zhishan Guo, Xiang Zhang, Vladimir Jojic, and Wei Wang. 2012.
Metric learning from relative comparisons by minimizing squared residual. In
2012 IEEE 12th International Conference on Data Mining. IEEE, 978–983.
[9] Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE.
Journal of Machine Learning Research 9, Nov (2008), 2579–2605.
[10] Jan Naudts. 2002. Deformed exponentials and logarithms in generalized thermostatistics. Physica A 316 (2002), 323–334. http://arxiv.org/pdf/cond-mat/0203489
[11] Jan Naudts. 2004. Estimators, escort probabilities, and phi-exponential families
in statistical physics. Journal of Inequalities in Pure and Applied Mathematics 5, 4
(2004), 102.
[12] Jan Naudts. 2004. Generalized thermostatistics based on deformed exponential
and logarithmic functions. Physica A 340 (2004), 32–40.
[13] Timothy Sears. 2010. Generalized Maximum Entropy, Convexity and Machine
Learning. Ph.D. Dissertation. The Australian National University.
[14] Omer Tamuz, Ce Liu, Serge Belongie, Ohad Shamir, and Adam T. Kalai. 2011.
Adaptively Learning the Crowd Kernel. In Proceedings of the 28th International
Conference on Machine Learning (ICML-11).
[15] L. van der Maaten and K. Weinberger. 2012. Stochastic triplet embedding. In
2012 IEEE International Workshop on Machine Learning for Signal Processing. 1–6.
DOI:http://dx.doi.org/10.1109/MLSP.2012.6349720
[16] Michael Wilber, Sam Kwak, and Serge Belongie. 2014. Cost-Effective HITs for
Relative Similarity Comparisons. In Human Computation and Crowdsourcing
(HCOMP). Pittsburgh.
[17] Hyokun Yun, Parameswaran Raman, and S. V. N. Vishwanathan. 2014. Ranking
via Robust Binary Classification. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS’14). Cambridge, MA, USA,
2582–2590. http://dl.acm.org/citation.cfm?id=2969033.2969115
| 2 |
A LAX MONOIDAL TOPOLOGICAL QUANTUM FIELD THEORY
FOR REPRESENTATION VARIETIES
ÁNGEL GONZÁLEZ-PRIETO, MARINA LOGARES AND VICENTE MUÑOZ
arXiv:1709.05724v1 [math.AG] 17 Sep 2017
Abstract. We construct a lax monoidal Topological Quantum Field Theory that computes
Deligne-Hodge polynomials of representation varieties of the fundamental group of any closed
manifold into any complex algebraic group G. As a byproduct, we obtain formulas for these
polynomials in terms of homomorphisms between the space of mixed Hodge modules on G.
The construction is developed in a category-theoretic framework that allows its application to
other situations.
2010 Mathematics Subject Classification. Primary: 57R56. Secondary: 14C30, 14D07, 14D21.
Key words and phrases: TQFT, moduli spaces, E-polynomial, representation varieties.
1. Introduction
Let W be a compact manifold, possibly with boundary, and let G be a complex algebraic
group. The set of representations ρ : π1 (W ) → G can be endowed with a complex algebraic
variety structure, the so-called representation variety of W into G, denoted XG (W ). The
group G itself acts on XG (W ) by conjugation so, taking the Geometric Invariant Theory
(GIT) quotient of XG (W ) by this action (see [37]) we obtain MG (W ) = XG (W ) // G, the
moduli space of representations of π1 (W ) into G, as treated in [36]. It is customary to call
these spaces character varieties or, in the context of non-abelian Hodge theory, Betti moduli
spaces. Even in the simplest cases G = GL(n, C), SL(n, C) and W = Σ, a closed orientable
surface, the topology of these varieties is extremely rich and has been an object of study
during the past twenty years.
One of the main reasons for this study is the prominent role of these varieties in non-abelian Hodge theory. For G = GL(n, C) (resp. G = SL(n, C)), an element of MG (Σ) defines
a G-local system and, thus, a rank n algebraic bundle E → Σ of degree 0 (resp. and fixed
determinant) with a flat connection ∇ on it. Hence, the Riemann-Hilbert correspondence ([46]
[47]) gives a real analytic correspondence between MG (Σ) and the moduli space of flat bundles
of rank n and degree 0 (and fixed determinant if G = SL(n, C)), usually called the de Rham
moduli space.
Furthermore, via the Hitchin-Kobayashi correspondence ([45] [11]), we also have that, for Σ
a compact Riemann surface and G = GL(n, C) (resp. G = SL(n, C)), the Betti moduli space
MG (Σ) is real analytically equivalent to the Dolbeault moduli space, that is, the moduli space
of rank n and degree 0 (resp. and fixed determinant) G-Higgs bundles i.e. bundles E → Σ
together with a field Φ : E → E ⊗ KΣ called the Higgs field.
Motivated by these correspondences, we can also consider representation varieties of a manifold W with a parabolic structure Q. This Q consists of a finite set of pairwise disjoint
subvarieties S1 , . . . , Sr of W of codimension 2 and conjugacy classes λ1 , . . . , λr ⊆ G that allow us to define XG (W, Q) as the set of representations ρ : π1 (W − S1 − . . . − Sr ) → G such
that the image of the loops around Si must live in λi (see section 3.4 for a precise definition).
Analogously, we set MG (W, Q) = XG (W, Q) // G.
When W = Σ is a surface, the Si are a set of (marked) points, called the parabolic points,
and we can obtain stronger results. For example, for G = GL(n, C) and Q a single marked
point p ∈ Σ and λ = e^{2πi d/n} Id, we have that MG (Σ, Q) is diffeomorphic to the moduli
space of rank n and degree d Higgs bundles and to the moduli space of rank n logarithmic flat
bundles of degree n with a pole at p with residue −(d/n) Id. In this case, MG (Σ, Q) is referred to
as the twisted character variety.
For an arbitrary number of marked points p1 , . . . , pr ∈ Σ and different semi-simple conjugacy
classes of G = GL(n, C), we obtain diffeomorphisms with moduli spaces of parabolic Higgs
bundles with parabolic structures (with general weights) on p1 , . . . , pr and with the moduli
space of logarithmic flat connections with poles on p1 , . . . , pr ([44]). Incidentally, other
correspondences can also appear, as in the case of G = SL(2, C), Σ an elliptic curve and Q
two marked points with different semi-simple conjugacy classes not containing a multiple of
the identity, in which MG (Σ, Q) is diffeomorphic to the moduli space of doubly periodic instantons
through the Nahm transform [7] [25].
Using these correspondences, it is possible to compute the Poincaré polynomial of character
varieties by means of Morse theory. Following these ideas, Hitchin, in the seminal paper
[24], gave the Poincaré polynomial for G = SL(2, C) in the non-parabolic case, Gothen also
computed it for G = SL(3, C) in [19] and Garcı́a-Prada, Heinloth and Schmitt for G = GL(4, C)
in [18]. In the parabolic case, Boden and Yokogawa calculated it in [8] for G = SL(2, C) and
generic semi-simple conjugacy classes and Garcı́a-Prada, Gothen and the third author for
G = GL(3, C) and G = SL(3, C) in [17].
However, these correspondences from non-abelian Hodge theory are far from being algebraic.
Hence, the study of the (mixed) Hodge structure on their cohomology becomes important. A useful
combinatorial tool for this purpose is the so-called Deligne-Hodge polynomial, also referred to
as E-polynomial, that, to any complex algebraic variety X, assigns a polynomial e (X) ∈ Z[u, v]. As
described in section 2.1, this polynomial is constructed as an alternating sum of the Hodge
numbers of X, in the spirit of a combination of Poincaré polynomial and Euler characteristic.
A great effort has been made to compute these E-polynomials for character varieties. The
first strategy was accomplished in [22] by means of a theorem of Katz of arithmetic flavour
based on the Weil conjectures and the Lefschetz principle. Following this method, when Σ is
an orientable surface, an expression of the E-polynomial for the twisted character varieties is
given in terms of generating functions in [22] for G = GL(n, C) and in [35] for G = SL(n, C).
Recently, using this technique, explicit expressions of the E-polynomials have been computed,
in [4], for the untwisted case and orientable surfaces with G = GL(3, C), SL(3, C) and for
non-orientable surfaces with G = GL(2, C), SL(2, C). They have also checked the formulas
given in [34] for orientable surfaces and G = SL(2, C) in the untwisted case.
The other approach to this problem was initiated by the second and third authors together
with Newstead in [30]. In this case, the strategy is to focus on the computation of e (XG (Σ))
and, once done, to pass to the quotient. In this method, the representation variety is chopped
into simpler strata for which the E-polynomial can be computed. After that, one uses the
additivity of E-polynomials to combine them and obtain that of the whole space.
Following this idea, in the case G = SL(2, C), they computed the E-polynomials for a
single marked point and genus g = 1, 2 in [30]. Later, the second and third authors computed
them for two marked points and g = 1 in [29] and the third author with Martı́nez for a
marked point and g = 3 in [33]. In the case of arbitrary genus and, at most, a marked
point, the case G = SL(2, C) was accomplished in [34] and the case G = P GL(2, C) in [32].
In these later papers, this method is used to obtain recursive formulas of E-polynomials of
representation varieties in terms of the ones for smaller genus. This suggests that some sort of
recursion formalism, in the spirit of Topological Quantum Field Theory (TQFT for short),
must hold. That is the starting point of the present paper.
In the parabolic case, much remains to be known. The most important advance was given
in [21], following the arithmetic method, for G = GL(n, C) and generic semi-simple marked
points. Using the geometric method, as we mentioned above, only at most two marked points
have been studied.
In this paper, we propose a general framework for studying E-polynomials of representation
varieties based on the stratification strategy, valid for any complex algebraic group G, any
manifold (not necessarily a surface) and any parabolic configuration. For this purpose, section
2 is devoted to review the fundamentals of Hodge theory and Saito’s mixed Hodge modules as
a way of tracing variations of Hodge structures with nice functorial properties (see [41]).
With these tools at hand, we can develop a category-theoretic machinery that shows
how recursive computations of Deligne-Hodge polynomials can be accomplished. Based on
TQFTs (i.e. monoidal functors Z : Bdn → k-Vect between the category of n-bordisms and the
category of k-vector spaces, as introduced in [1]), we define, in section 3.2, a weaker version of
them in the context of 2-categories and pairs of spaces. We propose to consider lax monoidal lax
functors Z : Bdpn → R-Bim between the 2-category of pairs of bordisms and the 2-category
of R-algebras and bimodules (where R is a ring), which we call soft Topological Quantum Field
Theories of pairs, s TQFTp for short. These s TQFTp are, in some sense, parallel to the so-called Extended Topological Quantum Field Theories, as studied in [3], [16] or [27] amongst
others.
In this setting, we will show how a s TQFTp can be constructed in full generality from two
basic pieces: a functor G : Bdpn → Span(VarC ) (where Span(VarC ) is the 2-category of spans
of the category of complex algebraic varieties), called the geometrisation, and a contravariant
functor A : VarC → Ring, called the algebraisation. In section 4, we will apply these ideas to
the computation of Deligne-Hodge polynomials of representation varieties. For that, we will
define the geometrisation by means of the fundamental groupoid of the underlying manifold
and we will use the previously developed theory of mixed Hodge modules for an algebraisation.
Even though this s TQFTp encodes the recursive nature of the Deligne-Hodge polynomial,
we can make it even more explicit. For this purpose, in section 3.3, we define a lax monoidal
Topological Quantum Field Theory of pairs, ` TQFTp, as a lax monoidal strict functor Z :
Bdpn → R-Modt , where R-Modt is the usual category of R-modules with an additional
2-category structure (see definition 3.5). In this context, we show how a partner covariant
functor B : VarC → R-Mod to the algebraisation A allows us to define a natural ` TQFTp.
Again, we will use this idea to define a ` TQFTp computing E-polynomials of representation
varieties. In this formulation, an explicit formula for these polynomials can be deduced.
Theorem 1.1. There exists a ` TQFTp, Z : Bdpn → R-Modt , where R = KMHSQ is
the K-theory ring of the category of mixed Hodge structures, such that, for any n-dimensional
connected closed orientable manifold W and any non-empty finite subset A ⊆ W we have
e (Z (W, A)(Q0 )) = e (G)^{|A|−1} e (XG (W )) .
With a view towards applications, in section 4.3, we will show how, for computational
purposes, it is enough to consider tubes instead of general bordisms, defining what we call
an almost-TQFT, Z : Tbpn → R-Mod. The idea is that, given a closed surface Σ, we can
choose a suitable handlebody decomposition of Σ as composition of tubes. Therefore, in order
to compute Z (Σ) we do not need the knowledge of the image of any general cobordism but
just of a few tubes.
For n = 2, we will give explicitly the image of the generators of Tbp2 for the corresponding
almost-TQFT computing E-polynomials of representation varieties. From this description, we
obtain an explicit formula of e (XG (Σ, Q)) in terms of simpler pieces, see theorem 4.13. From
this formula, a general algorithm for computing these polynomials arises. Hence, future work
would be to compute them explicitly for some cases, for example G = SL(2, C), which, at the
present moment, are unknown in the parabolic case. Actually, the computations of [30] and
[34] are particular calculations of that program.
Finally, another framework in which character varieties are central is the Geometric Langlands program (see [5]). In this setting, the Hitchin fibration satisfies the Strominger-Yau-Zaslow conditions of mirror symmetry for Calabi-Yau manifolds (see [48]), from which several questions arise
about relations between E-polynomials of character varieties for Langlands
dual groups, as conjectured in [20] and [22]. The validity of these conjectures has been discussed
in some cases, as in [30] and [32]. Despite that, the general case remains unsolved. We hope
that the ideas introduced in this paper could be useful to shed light on these questions.
Acknowledgements. The authors want to thank Bruce Bartlett, Christopher Douglas and
Constantin Teleman for useful conversations. We want to express our highest gratitude to
Thomas Wasserman for his invaluable help throughout the development of this paper.
We thank the hospitality of the Mathematical Institute at University of Oxford offered
during a research visit which was supported by the Marie Sklodowska Curie grant GREAT DLV-654490. The work of the first and third authors has been partially supported by MINECO
(Spain) Project MTM2015-63612-P. The first author was also supported by a ”la Caixa” scholarship for PhD studies in Spanish Universities from ”la Caixa” Foundation. The second author
was supported by the Marie Sklodowska Curie grant GREAT - DLV-654490.
2. Hodge theory
2.1. Mixed Hodge structures. Let X be a complex algebraic variety. The rational cohomology of X, H^• (X; Q), carries an additional linear structure, called a mixed Hodge structure,
which generalizes the so-called pure Hodge structures. For further information, see [12] and
[13], also [38].
Definition 2.1. Let V be a finite dimensional rational vector space and let k ∈ Z. A pure
Hodge structure of weight k on V consists of a finite decreasing filtration F • of VC = V ⊗Q C
V_C ⊇ . . . ⊇ F^{p−1} V ⊇ F^p V ⊇ F^{p+1} V ⊇ . . . ⊇ {0}
such that F^p V ⊕ \overline{F^{k−p+1} V} = V_C , where conjugation is taken with respect to the induced real
structure.
Remark 2.2. An equivalent description of a pure Hodge structure is as a finite decomposition
V_C = ⊕_{p+q=k} V^{p,q}
for some complex vector spaces V^{p,q} such that V^{p,q} ≅ \overline{V^{q,p}} with respect to the natural
real structure of V_C . From this description, the filtration F^• can be recovered by taking
F^p V = ⊕_{r≥p} V^{r,k−r} . In these terms, classical Hodge theory shows that the cohomology of a
compact Kähler manifold M carries a pure Hodge structure induced by Dolbeault cohomology,
H^k (M ; C) = ⊕_{p+q=k} H^{p,q} (M ). See [38] for further information.
Example 2.3. Given m ∈ Z, we define the Tate structures, Q(m), as the pure Hodge structure
whose underlying rational vector space is (2πi)^m Q ⊆ C with a single-piece decomposition
Q(m) = Q(m)^{−m,−m} . Thus, Q(m) is a pure Hodge structure of weight −2m. Moreover, if V is
another pure Hodge structure of weight k then V (m) := V ⊗ Q(m) is a pure Hodge structure
of weight k − 2m, called the Tate twist of V . For short, we will denote Q0 = Q(0), the Tate
structure of weight 0. Recall that there is a well defined tensor product of pure (and mixed)
Hodge structures, see Examples 3.2 of [38] for details.
Definition 2.4. Let V be a finite dimensional rational vector space. A (rational) mixed Hodge
structure on V consists of a pair of filtrations:
• An increasing finite filtration W_• of V , called the weight filtration.
• A decreasing finite filtration F^• of V_C , called the Hodge filtration.
These have to satisfy that, for any k ∈ Z, the induced filtration of F^• on the graded complex
(Gr_k^W V )_C = (W_k V / W_{k−1} V )_C gives a pure Hodge structure of weight k. Given two mixed Hodge structures
(V, F, W ) and (V', F', W'), a morphism of mixed Hodge structures is a linear map f : V → V'
preserving both filtrations.
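A standard example, recalled here only for orientation (it is not used later in an essential way): for the punctured line C^* = C \ {0}, the first cohomology is one-dimensional, generated by the class of dz/z, and its mixed Hodge structure is pure of type (1, 1), that is,
H^1 (C^* ; Q) ≅ Q(−1),
so H^1 of a smooth non-compact variety can have weight 2 rather than 1.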
Deligne proved in [12] and [13] (for a concise exposition see also [14]) that, if X is a complex
algebraic variety, then H k (X; Q) carries a mixed Hodge structure in a functorial way. More
precisely, let VarC be the category of complex varieties with morphisms given by the regular
maps, Q-Vect the category of Q-vector spaces and MHSQ be the category of mixed Hodge
structures. First, we have that MHSQ is an abelian category (see Théorème 2.3.5 of [12]) and,
moreover, the cohomology functor H k (−; Q) : VarC → Q-Vect factorizes through MHSQ ,
that is
VarC −→ MHSQ −→ Q-Vect,
whose composition is H^k (−; Q).
Remark 2.5. A pure Hodge structure of weight k is, in particular, a mixed Hodge structure
by taking the weight filtration with a single step. When X is a smooth complex projective
variety, the induced pure Hodge structure given by Remark 2.2 corresponds to the mixed Hodge
structure given above.
An analogous statement holds for compactly supported cohomology, that is Hck (X; Q) has a
mixed Hodge structure in a functorial way (see section 5.5 of [38] for a complete construction).
From this algebraic structure, some new invariants can be defined (see Definition 3.1 of [38]).
Given a complex algebraic variety X, we define the (p, q)-pieces of its k-th compactly supported
cohomology groups by
H_c^{k;p,q}(X) := Gr_F^p Gr_W^{p+q} H_c^k (X; Q)_C .
From them, we define the Hodge numbers as h_c^{k;p,q}(X) = dim H_c^{k;p,q}(X) and the Deligne-Hodge
polynomial, or E-polynomial, as the alternating sum
e (X) = Σ_{k,p,q} (−1)^k h_c^{k;p,q}(X) u^p v^q ∈ Z[u^{±1} , v^{±1} ].
Remark 2.6. Sometimes in the literature, the E-polynomial is defined as e (X) (−u, −v). It
does not introduce any important difference but it would produce an annoying change of sign.
Remark 2.7. An important fact is that the Künneth isomorphism
H_c^• (X; Q) ⊗ H_c^• (Y ; Q) ≅ H_c^• (X × Y ; Q)
is an isomorphism of mixed Hodge structures (see Theorem 5.44 in [38]). In particular, this
implies that e (X × Y ) = e (X) e (Y ). When, instead of product varieties, we consider general
fibrations, the monodromy plays an important role (see, for example [30], [33] or [34]). The
best way to deal with this issue is through the theory of mixed Hodge modules (see next).
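As a quick illustration, here are some standard values (recalled for orientation; they are not results of this paper):
e (C) = uv,    e (P^1) = uv + 1,    e (C^*) = uv − 1,
and, by the multiplicativity above, e ((C^*)^n) = (uv − 1)^n for the n-dimensional algebraic torus.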
2.2. Mixed Hodge modules. In [41] (see also [39] and [40]) Saito proved that we can assign,
to every complex algebraic variety X, an abelian category MX called the category of mixed
Hodge modules on X. As described in [42], if X is smooth, the basic elements of MX are tuples
M = (S, F • , W• , K, α) where S is a regular holonomic DX -module with F • a good filtration,
K is a perverse sheaf (sometimes also called a perverse complex) of rational vector spaces and
W_• is a pair of increasing filtrations of S and K. These filtrations have to correspond under
the isomorphism
α : DR_X (S) ≅ K ⊗_{Q_X} C_X
where CX , QX are the respective constant sheaves on X and DRX is the Riemann-Hilbert correspondence functor between the category of filtered DX -modules and the category of rational
perverse sheaves on X, Perv(X, Q) (for all these concepts see [38]). Starting with these basic
elements, the category MX is constructed as a sort of “controlled” extension closure of these
tuples, in the same spirit as mixed Hodge structures are a closure of pure Hodge structures
under extension. In the case that X is singular, the construction is similar but more involved
using local embeddings of X into manifolds (see [41], [38] or [42]).
In [39] and [41] Saito proves that MX is an abelian category endowed with a functor
ratX : MX → Perv(X, Q)
that extends to the (bounded) derived category as a functor
rat_X : D^b M_X → D^b_{cs}(X, Q)
where D^b_{cs}(X, Q) is the derived category of cohomological constructible complexes of sheaves
that contains Perv(X, Q) as a full abelian subcategory ([38], Lemma 13.22). Moreover, given
a regular morphism f : X → Y , there are functors
f ∗ , f ! : D b MY → D b MX
f∗ , f! : Db MX → Db MY
which lift to the analogous functors on the level of constructible sheaves. Finally, the tensor
and external product of constructible complexes lift to bifunctors
⊗ : D^b M_X × D^b M_X → D^b M_X ,    ⊠ : D^b M_X × D^b M_Y → D^b M_{X×Y} .
Remark 2.8.
• Recall that f_∗ , f_! on D^b_{cs}(X, Q) are just the usual direct image and proper
direct image on sheaves, f^∗ is the inverse image sheaf and f^! is the adjoint functor of
f_! , the so-called extraordinary pullback. See [38], Chapter 13, for a complete definition
of these functors.
• As in the case of constructible complexes, the external product can be defined in terms
of the usual tensor product by
M^• ⊠ N^• = π_1^∗ M^• ⊗ π_2^∗ N^•
for M^• ∈ D^b M_X , N^• ∈ D^b M_Y and π_1 : X × Y → X, π_2 : X × Y → Y the
corresponding projections.
A very important feature of these induced functors is that they behave in a functorial way,
as the following result shows. The proof of this claim is a compendium of Proposition 4.3.2
and Section 4.4 (in particular 4.4.3) of [41].
Theorem 2.9 (Saito). The induced functors commute with composition. More explicitly, let
f : X → Y and g : Y → Z be regular morphisms of complex algebraic varieties. Then
(g ◦ f )_∗ = g_∗ ◦ f_∗ ,    (g ◦ f )^∗ = f^∗ ◦ g^∗ ,    (g ◦ f )_! = g_! ◦ f_! ,    (g ◦ f )^! = f^! ◦ g^! .
Furthermore, suppose that we have a cartesian square of complex algebraic varieties (i.e. a
pullback diagram in VarC ) with W = X ×_Z Y , projections f' : W → X and g' : W → Y , and
maps g : X → Z and f : Y → Z (so that g ◦ f' = f ◦ g').
Then we have a natural isomorphism of functors g^∗ ◦ f_! ≅ f'_! ◦ (g')^∗ .
Given a complex algebraic variety X, associated to the abelian category MX , we can consider the Grothendieck group, also known as K-theory group, denoted by KMX . Recall
that it is the free abelian group generated by the objects of MX quotiented by the relation
M ∼ M 0 + M 00 if 0 → M 0 → M → M 00 → 0 is a short exact sequence in MX . By definition,
we have an arrow on objects MX → KMX . Moreover, given M • ∈ Db MX , we can associate
to it the element of KMX
[M^•] = Σ_k (−1)^k H^k (M^•) ∈ KM_X
where H k (M • ) ∈ MX is the k-th cohomology of the complex. This gives an arrow on objects
Db MX → KMX . Under this arrow, tensor product ⊗ : Db MX × Db MX → Db MX descends
to a bilinear map ⊗ : KMX × KMX → KMX that endows KMX with a natural ring
structure.
With respect to induced morphisms, given f : X → Y , the functors f∗ , f! , f ∗ , f ! of mixed
Hodge modules also descend to give group homomorphisms f∗ , f! : KMX → KMY and
f ∗ , f ! : KMY → KMX (see Section 4.2 of [42]). For example, we define f! : KMX → KMY
by
f_! [M ] := [f_! M ] = Σ_k (−1)^k H^k (f_! M )
where [M ] denotes the class of M ∈ MX on KMX and we are identifying M with the
complex of Db MX concentrated in degree 0. Analogous definitions are valid for f∗ , f ∗ and
f ! . Furthermore, these constructions imply that Theorem 2.9 also holds in K-theory (see [42])
and, moreover, the natural isomorphism for cartesian squares becomes an equality.
Another important feature of mixed Hodge modules is that they actually generalize mixed
Hodge structures. As mentioned in [40], Theorem 1.4, the category of mixed Hodge modules
over a single point, M? , is naturally isomorphic to the category of (rational) mixed Hodge
structures MHSQ . In particular, this identification endows the K-theory of the category of
mixed Hodge modules with a natural KMHSQ = KM? module structure via the external
product
⊠ : KM_? × KM_X → KM_{?×X} = KM_X
The induced functors f∗ , f! , f ∗ , f ! commute with exterior products at the level of constructible
complexes (see [42]), so they also commute in the category of mixed Hodge modules which
means that they are KMHSQ -module homomorphisms. Furthermore, f ∗ commutes with tensor products so it is also a ring homomorphism.
Example 2.10. Using the identification M? = MHSQ , we can consider the Tate structure
of weight 0, Q0 = Q(0), as an element of M? . By construction, this element is the unit of the
ring KMHSQ . Moreover, for any complex algebraic variety X, if cX : X → ? is the projection
of X onto a singleton, then the mixed Hodge module QX := c∗X Q0 is the unit of KMX . The
link between this mixed Hodge module and X is that, as proven in Lemma 14.8 of [38], we can
recover the compactly supported cohomology of X via
(cX )! QX = [Hc• (X; Q)],
as elements of KMHSQ . The analogous formula for usual cohomology and (cX )∗ holds too,
i.e. (cX )∗ QX = [H • (X; Q)].
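As a small worked case (ours, using only the standard compactly supported cohomology of C^*): H_c^1 (C^*; Q) ≅ Q(0) and H_c^2 (C^*; Q) ≅ Q(−1), so
(c_{C^*})_! Q_{C^*} = [H_c^• (C^*; Q)] = [Q(−1)] − [Q(0)] ∈ KMHS_Q ,
whose image under the map e of Remark 2.12 below is uv − 1 = e (C^*).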
Remark 2.11. Let π : X → B be a regular fibration, locally trivial in the analytic topology. In
that case, the mixed Hodge module π! QX ∈ KMB plays the role of the Hodge monodromy
representation of [30] and [34] controlling the monodromy of π.
Remark 2.12. Let us define the semi-group homomorphism e : MHSQ → Z[u^{±1} , v^{±1} ] that, for
a mixed Hodge structure V , gives
e (V ) := Σ_{p,q} dim [ Gr_F^p Gr_W^{p+q} V_C ] u^p v^q .
This map descends to a ring homomorphism e : KMHSQ → Z[u±1 , v ±1 ]. Now, let X be a
complex algebraic variety and let [H_c^• (X; Q)] be its compactly supported cohomology, as an
element of KMHSQ . Then, we have that
e (X) = e ([Hc• (X; Q)]) ,
where the left hand side is the Deligne-Hodge polynomial of X.
3. Lax Quantum Field Theories of Pairs
3.1. The category of bordisms of pairs. Let n ≥ 1. We define the category of n-bordisms
of pairs, Bdpn as the 2-category (i.e. enriched category over the category of small categories,
see [6]) given by the following data:
• Objects: The objects of Bdpn are pairs (X, A) where X is a (n − 1)-dimensional
closed oriented manifold together with a finite subset of points A ⊆ X such that its
intersection with each connected component of X is non-empty.
• 1-morphisms: Given objects (X1 , A1 ), (X2 , A2 ) of Bdpn , a morphism (X1 , A1 ) →
(X2 , A2 ) is a class of pairs (W, A) where W : X1 → X2 is an oriented bordism between
X1 and X2 , and A ⊆ W is a finite set with X1 ∩ A = A1 and X2 ∩ A = A2 . Two pairs
(W, A), (W 0 , A0 ) are in the same class if there exists a diffeomorphism of bordisms (i.e.
fixing the boundaries) F : W → W 0 such that F (A) = A0 .
With respect to the composition, given (W, A) : (X1 , A1 ) → (X2 , A2 ) and (W 0 , A0 ) :
(X2 , A2 ) → (X3 , A3 ), we define (W 0 , A0 ) ◦ (W, A) as the morphism (W ∪X2 W 0 , A ∪ A0 ) :
(X1 , A1 ) → (X3 , A3 ) where W ∪X2 W 0 is the usual gluing of bordisms along X2 .
• 2-morphisms: Given two 1-morphisms (W, A), (W 0 , A0 ) : (X1 , A1 ) → (X2 , A2 ), we declare that there exists a 2-cell (W, A) ⇒ (W 0 , A0 ) if there is a diffeomorphism of bordisms F : W → W 0 such that F (A) ⊆ A0 . Composition of 2-cells is just composition
of diffeomorphisms.
In this form, Bdpn is not exactly a category since there is no unit morphism in the category
Hom Bdpn ((X, A), (X, A)). This can be solved by weakening slightly the notion of bordism,
allowing that X itself could be seen as a bordism X : X → X. With this modification,
(X, A) : (X, A) → (X, A) is the desired unit and it is a straightforward check to see that Bdpn
is a (strict) 2-category, where strict means that the associativity axioms are satisfied “on the
nose” and not just “up to isomorphism”, in which case it is called a (weak) 2-category.
Remark 3.1. As a stronger version of bordisms, there is a forgetful functor F : Bdpn → Bdn ,
where Bdn is the usual category of oriented n-bordisms.
3.2. Bimodules and sTQFTp. Recall from [6] and [43] that, given a ground commutative
ring R with identity, we can define the 2-category R-Bim of R-algebras and bimodules whose
objects are commutative R-algebras with unit and, given algebras A and B, a 1-morphism
A → B is an (A, B)-bimodule. By convention, an (A, B)-bimodule is a set M with compatible left A-module and right B-module structures, usually denoted _A M_B . Composition of
M : A → B and N : B → C is given by _A (M ⊗_B N )_C .
With this definition, the set Hom R-Bim (A, B) is naturally endowed with a category structure, namely, the category of (A, B)-bimodules. Hence, a 2-morphism M ⇒ N between
10
(A, B)-bimodules is a bimodule homomorphism f : M → N . Therefore, R-Bim is a monoidal
2-category with tensor product over R.
Definition 3.2. Let R be a commutative ring with unit. A soft Topological Quantum Field
Theory of pairs, shortened s TQFTp, is a lax monoidal lax functor Z : Bdpn → R-Bim.
Recall that a lax functor between 2-categories F : C → D is an assignment that:
• For each object x ∈ C , it gives an object F (x) ∈ D .
• For each pair of objects x, y of C , we have a functor
Fx,y : Hom C (x, y) → Hom D (F (x), F (y)).
Recall that, as 2-category, both Hom C (x, y) and Hom D (F (x), F (y)) are categories.
• For each object x ∈ C , we have a 2-morphism Fidx : idF (x) ⇒ Fx,x (idx ).
• For each triple x, y, z ∈ C and every f : x → y and g : y → z, we have a 2-morphism
Fx,y,z (g, f ) : Fy,z (g) ◦ Fx,y (f ) ⇒ Fx,z (g ◦ f ), natural in f and g.
Also, some technical conditions, namely the coherence conditions, have to be satisfied (see [6]
or [31] for a complete definition). If the 2-morphisms Fidx and Fx,y,z are isomorphisms, it is
said that F is a pseudo-functor or a weak functor (or even simply a functor) and, if they are
the identity 2-morphism, F is called a strict functor.
Analogously, a (lax) functor F : C → D between monoidal categories is called lax monoidal
if there exists:
• A morphism 1D → F (1C ), where 1C and 1D are the units of the monoidal structure
of C and D respectively.
• A natural transformation ∆ : F (−) ⊗D F (−) ⇒ F (− ⊗C −).
satisfying a set of coherence conditions (for a precise definition, see Definition 1.2.10 of [28]).
Again, if this morphism and ∆ are isomorphisms, F is said to be pseudo-monoidal and, if they are identity
morphisms, F is called strict monoidal, or simply monoidal.
A general recipe for building a s TQFTp from simpler data can be given as follows. Let
S = Span(VarC ) be the (weak) 2-category of spans of VarC . As described in [6], objects of
S are the same as the ones of VarC . A morphism X → Y in S between complex algebraic
varieties is a pair (f, g) of regular morphisms
X ←(f)− Z −(g)→ Y,
where Z is a complex algebraic variety. Given two morphisms (f1 , g1 ) : X → Y and (f2 , g2 ) :
Y → Z, say
X ←(f1)− Z1 −(g1)→ Y    and    Y ←(f2)− Z2 −(g2)→ Z,
we define the composition (f2 , g2 ) ◦ (f1 , g1 ) = (f1 ◦ f2' , g2 ◦ g1' ), where f2' , g1' are the morphisms
in the pullback diagram
Z1 ←(f2')− W −(g1')→ Z2 ,
where W = Z1 ×_Y Z2 is the product of Z1 and Z2 as Y -varieties. Finally, a 2-morphism
(f, g) ⇒ (f', g') between X ←(f)− Z −(g)→ Y and X ←(f')− Z' −(g')→ Y is a regular morphism α : Z' → Z
such that f ◦ α = f' and g ◦ α = g' (i.e. the obvious diagram commutes).
Moreover, with this definition, S is a monoidal category with cartesian product of varieties
and morphisms.
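To make the composition rule concrete, the following toy computation (ours) uses finite sets and functions in place of varieties and regular morphisms; it only illustrates the fibre-product recipe above, and the names compose, span1, span2 are of course not part of the paper.

    # A span X <- Z -> Y is stored as (Z, f, g), with f, g dictionaries defined on Z.
    def compose(span2, span1):
        """Compose X <-f1- Z1 -g1-> Y with Y <-f2- Z2 -g2-> Z via the fibre product Z1 x_Y Z2."""
        Z1, f1, g1 = span1
        Z2, f2, g2 = span2
        W = [(z1, z2) for z1 in Z1 for z2 in Z2 if g1[z1] == f2[z2]]  # fibre product over Y
        f = {w: f1[w[0]] for w in W}   # f1 composed with the projection to Z1
        g = {w: g2[w[1]] for w in W}   # g2 composed with the projection to Z2
        return (W, f, g)

    # Example with X = {'x'}, Y = {0, 1}, Z = {'z'}:
    span1 = (['a', 'b'], {'a': 'x', 'b': 'x'}, {'a': 0, 'b': 1})   # X <- Z1 -> Y
    span2 = ([0, 1], {0: 0, 1: 1}, {0: 'z', 1: 'z'})               # Y <- Z2 -> Z
    W, f, g = compose(span2, span1)
    print(W)   # [('a', 0), ('b', 1)]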
Remark 3.3. Actually, the same construction works verbatim for any category C with pullbacks
defining a weak 2-category Span(C ). Furthermore, if C has pushouts, we can define Spanop (C )
as in the case of Span(C ) but with all the arrows reversed. Again, if C is a monoidal category
then Span(C ) (resp. Spanop (C )) also is.
Now, let A : VarC → Ring be a contravariant functor between the category of complex
algebraic varieties and the category of (commutative unitary) rings. Set R = A(?), where ? is
the singleton variety, and define the lax functor SA : S → R-Bim as follows:
• For any complex algebraic variety X, SA(X) = A(X). The R-algebra structure on
A(X) is given by the morphism R = A(?) → A(X), the image under A of the projection X → ?.
• Let us fix X, Y algebraic varieties. We define
SAX,Y : Hom S (X, Y ) → Hom R-Bim (A(X), A(Y ))
as the covariant functor that:
– For any 1-morphism X ←(f)− Z −(g)→ Y it assigns SA_{X,Y} (f, g) = A(Z) with the
(A(X), A(Y ))-bimodule structure given by az = A(f )(a) · z and zb = z · A(g)(b)
for a ∈ A(X), b ∈ A(Y ) and z ∈ A(Z).
– For a 2-morphism α : (f, g) ⇒ (f 0 , g 0 ) given by a regular morphism α : Z 0 → Z,
we define SAX,Y (α) = A(α) : A(Z) → A(Z 0 ). Since A is a functor to rings, A(α)
is a bimodule homomorphism.
• The 2-morphism SAidX : idSA(X) ⇒ SAX,X (idX ) is the identity.
• Let X ←(f1)− Z1 −(g1)→ Y and Y ←(f2)− Z2 −(g2)→ Z. By definition of S ,
SA_{Y,Z} (f2 , g2 ) ◦ SA_{X,Y} (f1 , g1 ) = _{A(Y)}A(Z2)_{A(Z)} ◦ _{A(X)}A(Z1)_{A(Y)} = _{A(X)}[A(Z1) ⊗_{A(Y)} A(Z2)]_{A(Z)} ,
SA_{X,Z} ((f2 , g2 ) ◦ (f1 , g1 )) = _{A(X)}[A(Z1 ×_Y Z2)]_{A(Z)} .
Hence, the 2-morphism SAX,Y,Z ((f2 , g2 ), (f1 , g1 )) is the (A(X), A(Z))-bimodule homomorphism
A(Z1 ) ⊗A(Y ) A(Z2 ) → A(Z1 ×Y Z2 )
given by z1 ⊗ z2 7→ A(p1 )(z1 ) · A(p2 )(z2 ) where p1 and p2 are the projections p1 :
Z1 ×Y Z2 → Z1 , p2 : Z1 ×Y Z2 → Z2 .
Furthermore, with the construction above, the lax functor SA : S → R-Bim is also lax
monoidal. Since SA preserves the units of the monoidal structures it is enough to define the
morphisms ∆X,Y : A(X) ⊗R A(Y ) → A(X × Y ). For that, just take as bimodule the ring
A(X × Y ) itself, where the left (A(X) ⊗R A(Y ))-module structure comes from the external
product A(X) ⊗R A(Y ) → A(X × Y ), z1 ⊗ z2 ↦ A(p1 )(z1 ) · A(p2 )(z2 ) (where p1 , p2 are the
respective projections) which is a ring homomorphism.
Remark 3.4. If the functor A is monoidal, then SA is strict monoidal. Moreover, if it satisfies
that A(Z1 )⊗A(Y ) A(Z2 ) = A(Z1 ×Y Z2 ) (resp. isomorphic) then it is also a strict (resp. pseudo)
functor.
Therefore, with this construction at hand, given a lax monoidal lax functor
G : Bdpn → S ,
called the geometrisation, and a functor
A : VarC → Ring,
called the algebraisation, we can build a s TQFTp, Z = ZG ,A , by
ZG ,A = SA ◦ G : Bdpn → R-Bim.
We will use this approach in section 4 to define a s TQFTp generalizing Deligne-Hodge polynomials of representation varieties.
3.3. 2-categories of modules with twists. Let R be a fixed ring (commutative and with
unit). Given two homomorphisms of R-modules f, g : M → N , we say that g is an immediate
twist of f if there exists an R-module D, homomorphisms f1 : M → D, f2 : D → N and
ψ : D → D such that f = f2 ◦ f1 and g = f2 ◦ ψ ◦ f1 .
(Schematically: M −(f1)→ D −(f2)→ N , and g is obtained by inserting ψ : D → D in the middle.)
In general, given f, g : M → N two R-module homomorphisms, we say that g is a twist of f
if there exists a finite sequence f = f0 , f1 , . . . , fr = g : M → N of homomorphisms such that
fi+1 is an immediate twist of fi .
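A minimal example (ours): take M = N = D = R, f1 = f2 = id_R and ψ the multiplication by some r ∈ R. Then
f = f2 ◦ f1 = id_R    and    g = f2 ◦ ψ ◦ f1 = r · id_R ,
so every homothety of R is an immediate twist of the identity.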
Definition 3.5. Let R be a ring. The 2-category of R-modules with twists, R-Modt is the
category whose objects are R-modules, its 1-morphisms are R-module homomorphisms and,
given homomorphisms f, g : A → B, we have a 2-morphism f ⇒ g if and only if g is a twist of
f . Moreover, R-Modt is a monoidal category with the usual tensor product.
Definition 3.6. Let R be a commutative ring with unit. A lax monoidal Topological Quantum
Field Theory of pairs, shortened ` TQFTp, is a lax monoidal strict 2-functor Z : Bdpn →
R-Modt .
Remark 3.7. In the literature, it is customary to forget about the 2-category structure and to
say that a lax monoidal TQFT is just a lax monoidal functor between Bdn and R-Mod. In
this paper, the 2-category structures are chosen to suit the geometric situation.
Remark 3.8. Since we require Z to be only lax monoidal, some of the properties of Topological
Quantum Field Theories can be lost. For example, duality arguments no longer hold and, in
particular, Z (X) may fail to be finitely generated. This is the case for the construction of section
4.
A ` TQFTp is, in some sense, stronger than a s TQFTp. As described in section 3.2 suppose
that we have a contravariant functor A : VarC → Ring and set R = A(?). Furthermore,
suppose that, together with this functor, we have a covariant functor B : VarC → R-Mod
(being R-Mod the usual category of R-modules) such that B (X) = A(X) for any complex
algebraic variety X and satisfying the Beck-Chevalley condition, sometimes called the base
change formula, which requires that, for any pullback diagram in VarC with Y = Y1 ×_X Y2 ,
projections f' : Y → Y1 and g' : Y → Y2 , and maps g : Y1 → X and f : Y2 → X (so that
g ◦ f' = f ◦ g'),
we have A(g)B (f ) = B (f')A(g').
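The base change formula can be checked numerically in a toy model (ours, not the functors used in section 4): take finite sets, let A(g) act on functions by precomposition and let B(f) sum a function over the fibres of f.

    def pullback_map(g, phi):                 # A(g): functions on the codomain -> functions on the domain
        return {y: phi[g[y]] for y in g}

    def pushforward_map(f, codomain, phi):    # B(f): sum phi over the fibres of f
        out = {x: 0 for x in codomain}
        for y, x in f.items():
            out[x] += phi[y]
        return out

    X = ['x0', 'x1']
    Y1 = ['a0', 'a1', 'b1'];  g = {'a0': 'x0', 'a1': 'x1', 'b1': 'x1'}   # g : Y1 -> X
    Y2 = ['c0', 'c1'];        f = {'c0': 'x0', 'c1': 'x1'}               # f : Y2 -> X
    Y = [(y1, y2) for y1 in Y1 for y2 in Y2 if g[y1] == f[y2]]           # Y = Y1 x_X Y2
    f_prime = {y: y[0] for y in Y}            # f' : Y -> Y1
    g_prime = {y: y[1] for y in Y}            # g' : Y -> Y2

    phi = {'c0': 2, 'c1': 5}                  # a test function on Y2
    lhs = pullback_map(g, pushforward_map(f, X, phi))                    # A(g) B(f)
    rhs = pushforward_map(f_prime, Y1, pullback_map(g_prime, phi))       # B(f') A(g')
    print(lhs == rhs)                         # True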
In this framework, we can define the strict 2-functor SA,B : S = Span(VarC ) → R-Modt as
follows:
• For any algebraic variety X we define SA,B (X) = A(X).
• Given complex algebraic varieties X, Y , we define the functor
(SA,B )_{X,Y} : Hom_S (X, Y ) → Hom_{R-Modt} (A(X), A(Y ))
by:
– For a 1-morphism X ←(f)− Z −(g)→ Y we define SA,B (f, g) = B (g) ◦ A(f ) : A(X) →
A(Z) → A(Y ).
– For a 2-morphism α : (f, g) ⇒ (f', g') given by α : Z' → Z we define the immediate
twist ψ = B (α) ◦ A(α) : A(Z) → A(Z). Since α is a 2-cell in S we have that
B (g) ψ A(f ) = B (g')A(f'). Observe that, if α is an isomorphism, then ψ = id.
• For (SA,B )idX we take the identity 2-cell.
• Given 1-morphisms (f1 , g1 ) : X → Y and (f2 , g2 ) : Y → Z we have (SA,B )_{Y,Z} (f2 , g2 ) ◦
(SA,B )_{X,Y} (f1 , g1 ) = B (g2 )A(f2 )B (g1 )A(f1 ). On the other hand, (SA,B )_{X,Z} ((f2 , g2 ) ◦
(f1 , g1 )) = B (g2 )B (g1')A(f2')A(f1 ), where g1' and f2' are the maps in the pullback
diagram of Z1 and Z2 over Y , i.e. f2' : Z1 ×_Y Z2 → Z1 and g1' : Z1 ×_Y Z2 → Z2 .
By the Beck-Chevalley condition we have B (g1')A(f2') = A(f2 )B (g1 ) and the two morphisms agree. Thus, we can take the 2-cell (SA,B )_{X,Y,Z} as the identity.
Remark 3.9. The Beck-Chevalley condition appears naturally in the context of Grothendieck’s
yoga of six functors f∗ , f ∗ , f! , f ! , ⊗ and D in which (f∗ , f ∗ ) and (f! , f ! ) are adjoints, and f ∗
and f! satisfy the Beck-Chevalley condition. In this context, we can take A to be the functor
f ↦ f^∗ and B the functor f ↦ f_! , as we will do in section 4. For further information, see, for
example [15] or [2].
Remark 3.10. If the Beck-Chevalley condition were satisfied only up to natural isomorphism, then
this would correspond to equality after a pair of automorphisms of A(Z1 ) and A(Z2 ), which corresponds to an invertible 2-cell in R-Modt . In this case, SA,B would be a pseudo-functor. We
will not need this trick in this paper.
As in the previous case, SA,B : S → R-Modt is automatically lax monoidal taking ∆X,Y :
SA,B (X) ⊗ SA,B (Y ) → SA,B (X × Y ) to be the external product with respect to A. Therefore,
given a geometrisation functor, i.e. a lax monoidal strict functor G : Bdpn → S , and these
two functors A : VarC → Ring and B : VarC → R-Mod as algebraisations, we can build a
` TQFTp, Z = ZG ,A,B by
ZG ,A,B = SA,B ◦ G : Bdpn → R-Modt .
Observe that the previous s TQFTp, Z = SA ◦ G , can also be constructed in this setting, so
in this sense the ` TQFTp is stronger than the s TQFTp.
3.4. The parabolic case. For some applications, it is useful to consider the so-called parabolic
case. In this setting, the starting point is a fixed set Λ that we will call the parabolic data.
Given a compact n-dimensional manifold W , possibly with boundary, we will denote by Par(W )
the set of closed connected subvarieties S ⊆ W of dimension n − 2 such that S ∩ ∂W = ∅. A
parabolic structure, Q, on W is a finite set (possibly empty) of pairs Q = {(S1 , λ1 ), . . . , (Ss , λs )}
where Si ∈ Par(W ) and λi ∈ Λ.
With this notion, we can improve our previous category Bdpn to the 2-category of pairs
of bordisms with parabolic data Λ, Bdpn (Λ). The objects of this category are the same
as for Bdpn . For morphisms, given objects (X1 , A1 ) and (X2 , A2 ) of Bdpn , a morphism
(X1 , A1 ) → (X2 , A2 ) is a class of triples (W, A, Q) where (W, A) : (X1 , A1 ) → (X2 , A2 ) is a
bordism of pairs and Q is a parabolic structure on W such that S ∩ A = ∅ for any (S, λ) ∈ Q.
As in the case of Bdpn , two triples (W, A, Q) and (W 0 , A0 , Q0 ) are in the same class if there
exists a diffeomorphism F : W → W 0 preserving the boundaries such that F (A) = A0 and
(S, λ) ∈ Q if and only if (F (S), λ) ∈ Q0 . As expected, composition of morphisms in Bdpn (Λ)
is defined by (W 0 , A0 , Q0 ) ◦ (W, A, Q) = (W ∪ W 0 , A ∪ A0 , Q ∪ Q0 ). In the same spirit, we have
a 2-morphism (W, A, Q) ⇒ (W 0 , A0 , Q0 ) if there exists a boundary preserving diffeomorphism
F : W → W 0 such that F (A) ⊆ A0 and (S, λ) ∈ Q if and only if (F (S), λ) ∈ Q0 .
Then, in analogy with the non-parabolic case, a lax monoidal lax functor Z : Bdpn (Λ) →
R-Bim is called a parabolic s TQFTp and a lax monoidal strict functor Z : Bdpn (Λ) →
R-Modt is called a parabolic ` TQFTp.
4. A s TQFTp for Deligne-Hodge polynomials of representation varieties
In this section, we will use the previous ideas to construct a s TQFTp and a ` TQFTp
allowing computation of Deligne-Hodge polynomials of representation varieties. Both theories
share a common geometrisation functor, constructed via the fundamental groupoid functor as
described in section 4.1. On the other hand, for the algebraisation we will use, in section 4.2,
the properties of mixed Hodge modules so that A will be the pullback of mixed Hodge modules
and B the pushforward.
4.1. The geometrisation functor. Recall from [10], Chapter 6, that given a topological
space X and a subset A ⊆ X, we can define the fundamental groupoid of X with respect to A,
Π(X, A), as the category whose objects are the points in A and, given a, b ∈ A, Hom (a, b) is the
set of homotopy classes (with fixed endpoints) of paths between a and b. It is a straightforward
check that this category is actually a groupoid and only depends on the homotopy type of
the pair (X, A). In particular, if A = {x0 }, Π(X, A) has a single object whose automorphism
group is π1 (X, x0 ), the fundamental group of X based on x0 . For convenience, if A is any set,
not necessarily a subset of X, we will denote Π(X, A) := Π(X, X ∩ A) and we declare that
Π(∅, ∅) is the singleton category.
With this notion at hand, we define the strict 2-functor Π : Bdpn → Spanop (Grpd), where
Grpd is the category of groupoids, as follows:
• For any object (X, A) of Bdpn we set Π(X, A) as the fundamental groupoid of (X, A).
• Given objects (X1 , A1 ), (X2 , A2 ), the functor
Π_{(X1,A1),(X2,A2)} : Hom_{Bdpn} ((X1 , A1 ), (X2 , A2 )) → Hom_{Spanop(Grpd)} (Π(X1 , A1 ), Π(X2 , A2 ))
is given by:
– For any 1-morphism (W, A) : (X1 , A1 ) → (X2 , A2 ) we define Π_{(X1,A1),(X2,A2)} (W, A)
to be the span
Π(X2 , A2 ) −(i2)→ Π(W, A) ←(i1)− Π(X1 , A1 ),
where i1 , i2 are the morphisms induced, on the level of groupoids, by the inclusions
of pairs (X1 , A1 ), (X2 , A2 ) ↪ (W, A).
– For a 2-morphism (W, A) ⇒ (W 0 , A0 ) given by a diffeomorphism F : W → W 0 ,
we obtain a groupoid homomorphism ΠF : Π(W, A) → Π(W 0 , A0 ) giving rise to a
commutative diagram, i.e. ΠF ◦ i1 = i1' and ΠF ◦ i2 = i2', where i1 , i2 (resp. i1' , i2') denote the
maps induced by the inclusions of Π(X1 , A1 ) and Π(X2 , A2 ) into Π(W, A) (resp. Π(W', A'));
this is a 2-cell in Spanop (Grpd).
• For Πid(X,A) we take the identity.
• With respect to composition, let us take (W, A) : (X1 , A1 ) → (X2 , A2 ) and (W 0 , A0 ) :
(X2 , A2 ) → (X3 , A3 ) two 1-morphisms. Set W 00 = W ∪X2 W 0 and A00 = A ∪ A0 so that
(W 0 , A0 ) ◦ (W, A) = (W 00 , A00 ). In order to identify Π(W 00 , A00 ), let V ⊆ W 00 be an open
bicollar of X2 such that V ∩ A00 = A2 . Set U1 = W ∪ V and U2 = W 0 ∪ V .
By construction, {U1 , U2 } is an open covering of W 00 such that (U1 , A00 ∩ U1 ) is homotopically equivalent to (W, A), (U2 , A00 ∩ U2 ) is homotopically equivalent to (W 0 , A0 )
and (U1 ∩ U2 , A00 ∩ U1 ∩ U2 ) is homotopically equivalent to (V, A00 ∩ V ) which is homotopically equivalent to (X2 , A2 ). Therefore, by Seifert-van Kampen theorem for
fundamental groupoids (see [10], [9] and [23]) we have a pushout diagram induced by
inclusions
in which Π(U1 ∩ U2 , A'') = Π(X2 , A2 ) maps to Π(U1 , A'') = Π(W, A) and to Π(U2 , A'') = Π(W', A'),
and both map to Π(W'', A'').
But observe that, by definition, Π(W 0 , A0 ) ◦ Π(W, A) is precisely this pushout, so we
can take the functors ΠX1 ,X2 ,X3 as the identities.
We can slightly improve the previous construction. Given a groupoid G , we will say that G
is finitely generated if Obj(G ) is finite and, for any object a of G , Hom G (a, a) (usually denoted
Ga , the vertex group of a) is a finitely generated group. We will denote by Grpd0 the category
of finitely generated groupoids.
Remark 4.1. Given a groupoid G , two objects a, b ∈ G are said to be connected if Hom G (a, b)
is not empty. In particular, this means that Ga and Gb are isomorphic groups so, in order to
check whether G is finitely generated, it is enough to check it on a point of every connected
component.
Remark 4.2. Let X be a compact connected manifold (possibly with boundary) and let A ⊆ X
be finite. As we mentioned above, for any a ∈ A, Π(X, A)a = π1 (X, a). But a compact
connected manifold has the homotopy type of a finite CW-complex, so in particular has finitely
generated fundamental group. Hence, Π(X, A) is finitely generated. Therefore, the previous
functor actually can be promoted to a functor Π : Bdpn → Spanop (Grpd0 ).
Now, let G be a complex algebraic group. Seeing G as a groupoid, we can consider the
functor Hom Grpd (−, G) : Grpd → Set. Moreover, if G is finitely generated, then Hom (G , G)
has a natural structure of complex algebraic variety. To see that, pick a set S = {a1 , . . . , as } of
objects of G such that every connected component contains exactly one element of S. Moreover,
for any object a of G , pick a morphism fa : a → ai where ai is the object of S in the connected
component of a. Hence, if ρ : G → G is a groupoid homomorphism, it is uniquely determined
by the group representations ρi : Gai → G for ai ∈ S together with elements ga corresponding
to the morphisms fa for any object a. Since the elements ga can be chosen without any
restriction, if G has n objects, we have
Hom (G , G) ≅ Hom (G_{a_1} , G) × . . . × Hom (G_{a_s} , G) × G^{n−s}
and each of these factors has a natural structure of complex algebraic variety as representation
variety. This endows Hom (G , G) with an algebraic structure which can be shown not to depend
on the choices.
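For instance (a worked case of the identification above, ours): take G = Π(S^1 , {x1 , x2 }), the fundamental groupoid of the circle with two marked points. It is connected with two objects and vertex group Z, so choosing S = {x1 } and a path f_{x2} : x2 → x1 gives
Hom (Π(S^1 , {x1 , x2 }), G) ≅ Hom (Z, G) × G^{2−1} = G × G = G^2 ,
which agrees with the identity XG (W, A) = XG (W ) × G^{|A|−1} used later in section 4.2.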
Definition 4.3. Let (W, A) be a pair of topological spaces such that Π(W, A) is finitely generated (for example, if W is a compact manifold and A is finite). We denote the variety
XG (W, A) = Hom (Π(W, A), G) and we call it the representation variety of (W, A) into G. In
particular, if A is a singleton, we recover the usual representation varieties of group homomorphisms ρ : π1 (W ) → G, just denoted by XG (W ).
Remark 4.4. If we drop the requirement that G be finitely generated, we can still endow
Hom (G , G) with a scheme structure following the same lines (see, for example [36]). However,
in general, this scheme is no longer of finite type. For this reason, in the definition of Bdpn ,
we demand the subset A ⊆ W to be finite.
Therefore, we can promote this functor to a contravariant functor Hom (−, G) : Grpd0 →
VarC . Recall that Hom (−, G) sends colimits of Grpd into limits of VarC so, in particular,
sends pushouts into pullback and, thus, defines a functor Hom (−, G) : Spanop (Grpd0 ) →
Span(VarC ). With this functor at hand, we can finally define the geometrisation functor as
G = Hom (−, G) ◦ Π : Bdpn → S = Span(VarC ).
Observe that, since Π and Hom (−, G) are both (strict) monoidal functors then G is strict
monoidal.
4.2. The algebraisation functor. As algebraisation functor for our s TQFTp, let us consider
the contravariant functor A : VarC → Ring that, for a complex algebraic variety X gives
A(X) = KMX and, for a regular morphism f : X → Y it assigns A(f ) = f ∗ : KMY → KMX .
As we mentioned in section 2.2, f^∗ is a ring homomorphism and A(?) = KM_? = KMHSQ .
With these choices, the corresponding s TQFTp is Z = SA ◦ G : Bdpn → R-Bim with
R = KMHSQ . As described in section 3.2, it satisfies:
• For any pair (X, A), where X is a compact (n − 1)-dimensional manifold and A ⊆ X
is finite, we have Z (X, A) = KMXG (X,A) .
• For a 1-morphism (W, A) : (X1 , A1 ) → (X2 , A2 ) we assign
Z (W, A) = KMXG (W,A)
with the structure of a (KMXG (X1 ,A1 ) , KMXG (X2 ,A2 ) )-bimodule. In particular, taking
the unit Q ∈ KMXG (W,A) and the projection onto a singleton c : XG (W, A) → ?, by
Example 2.10 we have
c! Q = [Hc• (XG (W, A); Q)]
as mixed Hodge structures.
• For a 2-morphism α : (W, A) ⇒ (W 0 , A0 ) we obtain a bimodule homomorphism
Z (α) : KMXG (W 0 ,A0 ) → KMXG (W,A)
Remark 4.5. As we mentioned in section 3.2, SA : Span(VarC ) → R-Bim is automatically
a lax monoidal lax functor. However, it is not strict monoidal since, in general, KMX×Y 6=
KMX ⊗R KMY .
Furthermore, we can also consider the covariant functor B : VarC → R-Mod, where R =
A(?) = KMHSQ given by B (X) = KMX for an algebraic variety X and, for a regular
morphism f : X → Y , it assigns B (f ) = f! : KMX → KMY . In this case, as described in
section 3.3, the corresponding ` TQFTp is Z = SA,B ◦ G : Bdpn → R-Modt . It assigns:
• For a pair (X, A), Z (X, A) = KMXG (X,A) .
• For a 1-morphism (W, A) : (X1 , A1 ) → (X2 , A2 ), it assigns the R-module homomorphism
Z (W, A) : KM_{XG(X1,A1)} −(i1^∗)→ KM_{XG(W,A)} −((i2)_!)→ KM_{XG(X2,A2)}
• The existence of a 2-morphism α : (W, A) ⇒ (W 0 , A0 ) implies that Z (W 0 , A0 ) = (i02 )! ◦
(i01 )∗ : KMXG (X1 ,A1 ) → KMXG (X2 ,A2 ) can be obtained from Z (W, A) = (i2 )! ◦ (i1 )∗ :
KMXG (X1 ,A1 ) → KMXG (X2 ,A2 ) by twists.
In particular, let W be a connected closed oriented n-dimensional manifold and let A ⊆ W
be finite. We can see W as a 1-morphism (W, A) : (∅, ∅) → (∅, ∅). In that case, since Z (∅, ∅) =
KMXG (∅,∅) = KM? = KMHSQ we obtain that Z (W, A) is the morphism
Z (W, A) : KMHSQ −(c^∗)→ KM_{XG(W,A)} −(c_!)→ KMHSQ ,
where c : XG (W, A) → ? is the projection onto a point. In particular, for the unit Q0 ∈ MHSQ
we have
Z (W, A)(Q0 ) = c_! c^∗ Q0 = c_! Q_{XG(W,A)} = [H_c^• (XG (W, A); Q)] .
Hence, taking into account that XG (W, A) = XG (W ) × G^{|A|−1} , we have that
Z (W, A)(Q0 ) = [H_c^• (XG (W ); Q)] ⊗ [H_c^• (G; Q)]^{|A|−1} .
Therefore, we have proved the main result of this paper in the non-parabolic case.
Theorem 4.6. There exists a ` TQFTp, Z : Bdpn → KMHSQ -Modt such that, for any n-dimensional connected closed orientable manifold W and any non-empty finite subset A ⊆ W
we have
Z (W, A)(Q0 ) = [H_c^• (XG (W ); Q)] ⊗ [H_c^• (G; Q)]^{|A|−1}
where Q0 ∈ KMHSQ is the unit Hodge structure. In particular, this means that
e (Z (W, A)(Q0 )) = e (G)^{|A|−1} e (XG (W )) .
Remark 4.7. An analogous formula holds in the non-connected case. Suppose that W =
W1 ⊔ . . . ⊔ Ws with Wi connected and denote Ai = A ∩ Wi . Then XG (W, A) = XG (W1 , A1 ) ×
. . . × XG (Ws , As ), so Z (W, A) = ⊗_i Z (Wi , Ai ) and, thus,
e (Z (W, A)(Q0 )) = e (G)^{|A|−s} Π_{i=1}^{s} e (XG (Wi )) .
4.3. Almost-TQFT and computational methods. For computational purposes, we can
restrict our attention to a wide subcategory of bordisms. Along this section, we will forget
about the 2-category structure of Bdpn and we will see it as a 1-category.
First of all, let us consider the subcategory Tbp0n ⊆ Bdpn of strict tubes of pairs. An object
(X, A) of Bdpn is an object of Tbp0n if X is connected or empty. Given objects (X1 , A1 ) and
(X2 , A2 ) of Tbp0n , a morphism (W, A) : (X1 , A1 ) → (X2 , A2 ) of Bdpn is in Tbp0n if W is
connected. We will call such morphisms strict tubes.
From this category, we consider the category of tubes, Tbpn as the subcategory of Bdpn
whose objects and morphisms are disjoint unions of the ones of Tbp0n . Observe that, in
particular, Tbpn is a wide subcategory of Bdpn i.e. they have the same objects. As in the
case of Bdpn , Tbpn is a monoidal category with the disjoint union.
Definition 4.8. Let R be a ring. An almost-TQFT of pairs is a monoidal functor Z : Tbpn →
R-Mod.
Remark 4.9. Since Tbpn does not contain all the bordisms, dualizing arguments no longer
hold for almost-TQFT. For example, for n = 2, the pair of pants is not a tube, so, in contrast
with T QF T , we cannot assure that Z(S 1 ) is a Frobenius algebra. In particular, Z(S 1 ) is not
forced to be finite dimensional.
Remark 4.10. An almost-TQFT gives an effective way of computing invariants as follows. Fix
n ≥ 1 and suppose that we have a set of generators ∆ for the morphisms of Tbp0n , i.e., after
boundary orientation preserving diffeomorphisms, every morphism of Tbp0n is a composition
of elements of ∆. These generators can be obtained, for example, by means of Morse theory
(see Section 1.4 of [26] for n = 2).
Suppose that we want to compute an invariant that, for a closed connected orientable n-dimensional manifold W and a finite set A ⊆ W , is given by Z(W, A)(1). In that case, seeing
(W, A) as a morphism (W, A) : ∅ → ∅, we can decompose (W, A) = Ws ◦ . . . ◦ W1 with Wi ∈ ∆.
Thus, for Z(W, A) : R → R we have
Z(W, A)(1) = Z(Ws ) ◦ . . . ◦ Z(W1 )(1)
Hence, the knowledge of Z(Wi ) for Wi ∈ ∆ is enough to compute that invariant for closed
manifolds.
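As a toy illustration of this procedure (ours; the matrices below are arbitrary stand-ins, and the genuine modules of section 4 are not finite dimensional), one can encode the values of Z on a few generating tubes as linear maps and compose them:

    import numpy as np

    # Hypothetical values of Z on three generating tubes W1, W2, W3, with
    # Z(empty set) = R = reals and Z(X) two-dimensional; a closed manifold
    # decomposing as W3 o W2 o W2 o W1 then has invariant:
    Z_W1 = np.array([[1.0], [0.0]])          # Z(W1): R -> Z(X)
    Z_W2 = np.array([[2.0, 1.0],
                     [1.0, 1.0]])            # Z(W2): Z(X) -> Z(X)
    Z_W3 = np.array([[1.0, 1.0]])            # Z(W3): Z(X) -> R

    invariant = (Z_W3 @ Z_W2 @ Z_W2 @ Z_W1).item()   # Z(W)(1) = Z(W3) o Z(W2) o Z(W2) o Z(W1) (1)
    print(invariant)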
Given a ` TQFTp, Z : Bdpn → R-Modt , we can build a natural almost-TQFT from it,
Z = ZZ : Tbpn → R-Mod. For that, forgetting about the 2-category structure, the restriction
of Z to Tbp0n gives us a functor Z : Tbp0n → R-Mod. Now, we define Z by:
• Given objects (X^i , A^i ) of Tbp0n , we take
Z( ⊔_i (X^i , A^i ) ) = ⊗_i Z (X^i , A^i ),
where the tensor product is taken over R.
• Given strict tubes (W^i , A^i ) : (X1^i , A1^i ) → (X2^i , A2^i ), we define
Z( ⊔_i (W^i , A^i ) ) = ⊗_i Z (W^i , A^i ) : ⊗_i Z (X1^i , A1^i ) → ⊗_i Z (X2^i , A2^i ).
Remark 4.11. The apparently artificial definition of Z can be better understood in terms of the
corresponding map of the ` TQFTp. To see it, let (W, A) : (X1 , A1 ) → (X2 , A2 ) and (W', A') : (X1' , A1' ) →
(X2' , A2' ) be two tubes. Recall that, since the ` TQFTp Z is lax monoidal, we have a natural transformation
∆ : Z (−) ⊗R Z (−) ⇒ Z (− ⊔ −) (see section 3.2). Then Z((W, A) ⊔ (W', A')) = Z(W, A) ⊗ Z(W', A') is just a lift
of the ` TQFTp along ∆, i.e. the square

Z (X1 , A1 ) ⊗R Z (X1' , A1' ) −−→ Z (X2 , A2 ) ⊗R Z (X2' , A2' )      (top arrow: the almost-TQFT Z((W, A) ⊔ (W', A')))
Z (X1 ⊔ X1' , A1 ∪ A1' )      −−→ Z (X2 ⊔ X2' , A2 ∪ A2' )            (bottom arrow: the ` TQFTp on (W, A) ⊔ (W', A'))

with vertical maps ∆_{(X1,A1),(X1',A1')} and ∆_{(X2,A2),(X2',A2')} , commutes.
Since composition of tubes is performed by componentwise gluing, these lifts behave well with
composition, which implies that Z is well defined. This property no longer holds over all of Bdpn , since
there exist bordisms mixing several components, such as the pair of pants for n = 2.
In particular, from the ` TQFTp described in Theorem 4.6, we obtain an almost-TQFT,
Z : Tbpn → R-Mod, with R = KMHSQ , that allows the computation of Deligne-Hodge
polynomials of representation varieties. To be precise, from the previous construction we
obtain the following result.
Corollary 4.12. There exists an almost-TQFT, Z : Tbpn → KMHSQ -Mod, such that, for
any n-dimensional connected closed orientable manifold W and any finite set A ⊆ W we have
Z(W, A)(Q0 ) = [H_c^• (XG (W ); Q)] × [H_c^• (G; Q)]^{|A|−1}
where Q0 ∈ KMHSQ is the unit. In particular,
e (Z(W, A)(Q0 )) = e (G)^{|A|−1} e (XG (W )) .
In the case n = 2, we can go a step further and describe explicitly this functor. With respect
to objects of Tbp2 , recall that every non-empty 1-dimensional closed manifold is diffeomorphic
to S 1 . Thus, the objects of Tbp2 are pairs (X, A) with X diffeomorphic to either ∅ or S 1 .
Since π1 (S 1 ) = Z we have XG (S 1 ) = Hom (Z, G) = G and, thus, for any x0 ∈ S 1
Z(∅) = KM1 = KMHSQ = R,
Z(S 1 , {x0 }) = KMG .
Regarding morphisms, observe that, adapting the proof of Proposition 1.4.13 of [26] for the
generators of Bd2 , we have that a set of generators of Tbp2 is ∆ = {D, D̄, T, P} where
D : ∅ → S^1 is the disc with a marked point in the boundary, D̄ : S^1 → ∅ is the opposite
disc, T : S^1 → S^1 is the torus with two holes and a puncture on each boundary component,
and P : S^1 → S^1 is the bordism S^1 × [0, 1] with a puncture on each boundary component and
another in the interior. They are depicted in Figure 1.
Figure 1. A set of generators of Tbp2 .
For D and D̄ the situation is simple since they are simply connected. Therefore, their images
under the geometrisation functor G are
G (D) = [ 1 ←− 1 −(i)→ G ],     G (D̄) = [ G ←(i)− 1 −→ 1 ],
where i : 1 → G is the inclusion of the identity element, so their images under Z are
Z(D) = i_! : R = KM_1 → KM_G ,     Z(D̄) = i^∗ : KM_G → KM_1 = R.
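A quick sanity check (ours): D̄ ◦ D : ∅ → ∅ is the sphere S^2 with one marked point, and Z(D̄) ◦ Z(D) = i^∗ i_! : R → R. Base change over the cartesian square {e} ×_G {e} = {e} gives
i^∗ i_! Q0 = Q0 ,
which matches XG (S^2 ) = Hom (π1 (S^2 ), G) = {point} and e (XG (S^2 )) = 1.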
For the holed torus T : S^1 → S^1 the situation is a bit more complicated. Let T = (T0, A), where A = {x1, x2} is the set of marked points of T, with x1 in the ingoing boundary and x2 in the outgoing boundary. Recall that T0 is homotopy equivalent to a bouquet of three circles
so its fundamental group is the free group with three generators. Thus, we can take γ, γ1 , γ2
as the set of generators of Π(T )x1 = π1 (T0 , x1 ) depicted in Figure 2 and α the shown path
between x1 and x2 .
Figure 2. Chosen paths for T .
With this description, γ is a generator of π1 (S 1 , x1 ) and αγ[γ1 , γ2 ]α−1 is a generator of
π1 (S 1 , x2 ), where [γ1 , γ2 ] = γ1 γ2 γ1−1 γ2−1 is the group commutator. Hence, since XG (T ) =
Hom (Π(T0 , A), G) = G4 we have that the geometrisation G (T ) is the span
G ←p− G^4 −s→ G,    p : (g, g1, g2, h) ↦ g,    s : (g, g1, g2, h) ↦ h g [g1, g2] h^{−1},
where g, g1, g2 and h are the images of γ, γ1, γ2 and α, respectively. Hence, we obtain that
Z(T) : KMG −−p^*−→ KM_{G^4} −−s_!−→ KMG.
For the morphism P, let P = (S^1 × [0, 1], A), where A = {x1, x2, x}, with x1, x2 the ingoing and outgoing boundary points respectively and x the interior point. Since π1(S^1 × [0, 1]) = Z, the fundamental groupoid Π(P) has three vertices, each with vertex group isomorphic to Z, and, thus, Hom(Π(P), G) = G^3.
Let γ be a generator of π1 (S 1 , x1 ), α1 a path between x1 and x2 and α2 a path between x1
and x, as depicted in Figure 3.
Figure 3. Chosen paths for P .
Since α1 γ α1^{−1} is a generator of π1(S^1, x2), we obtain that G(P) is the span
G ←q− G^3 −t→ G,    q : (g, g1, g2) ↦ g,    t : (g, g1, g2) ↦ g1 g g1^{−1}.
Hence, we have that
Z(P) : KMG −−q^*−→ KM_{G^3} −−t_!−→ KMG.
Now, let Σg be the closed oriented surface of genus g. If we choose any g + 1 points, we have a decomposition of the bordism Σg : ∅ → ∅ as Σg = D† ◦ T^g ◦ D. Thus, by Corollary 4.12 we have that
e(XG(Σg)) = (1 / e(G)^g) · e( Z(D†) ◦ Z(T)^g ◦ Z(D)(Q0) ).
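Spelling out the step (this is just Corollary 4.12 applied to Σg, with the g + 1 chosen points taken as the marked set A, combined with the decomposition above):

e( Z(D†) ◦ Z(T)^g ◦ Z(D)(Q0) ) = e( Z(Σg, A)(Q0) ) = e(G)^{|A|−1} · e(XG(Σg)) = e(G)^g · e(XG(Σg)),

and dividing by e(G)^g gives the displayed formula.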
Computations of this kind were carried out in the paper [34] for G = SL(2, C) and in [32] for
G = P GL(2, C). In future work, we shall explain the computations of those papers in these
terms and extend them to the parabolic case using the techniques explained in the following
section.
4.4. The parabolic case. The previous constructions can be easily extended to the parabolic
case. Following the construction above, it is just necessary to adapt the geometrisation functor
to the parabolic context.
As above, let us fix a complex algebraic group G and, as parabolic data, we choose for Λ a
collection of subsets of G closed under conjugacy. Then, as geometrisation functor, we define
the 2-functor G : Bdpn (Λ) → S = Span(VarC ) as follows:
• For any object (X, A) of Bdpn (Λ), we set G (X, A) = XG (X, A) = Hom (Π(X, A), G).
• Let (W, A, Q) : (X1 , A1 ) → (X2 , A2 ) with Q = {(S1 , λ1 ), . . . , (Ss , λs )} be a 1-morphism.
For short, let us denote S = S1 ∪ . . . ∪ Ss . To this morphism we assign the span
XG (X2 , A2 ) ←− XG (W, A, Q) −→ XG (X1 , A1 )
where XG (W, A, Q) ⊆ XG (W − S, A) is the subvariety of groupoid homomorphisms
ρ : Π(W − S, A) → G satisfying the following condition: if γ is a loop of π1(W − S, a), where a is any point of W in the connected component of γ, whose image under the map induced by the inclusion π1(W − S, a) → π1((W − S) ∪ Sk, a) vanishes, then ρ(γ) ∈ λk.
Observe that, since the λ ∈ Λ are closed under conjugation, that condition does not
depend on the chosen basepoint a.
• For a 2-morphism (W, A, Q) ⇒ (W 0 , A0 , Q0 ) given by a diffeomorphism F : W → W 0
we use the induced map F ∗ : XG (W 0 , A0 , Q0 ) → XG (W, A, Q) to create a commutative
diagram in which F^* is a morphism between the spans

XG(X2, A2) ←− XG(W′, A′, Q′) −→ XG(X1, A1)
XG(X2, A2) ←− XG(W, A, Q) −→ XG(X1, A1),

commuting with the two projections, which is a 2-cell in Span(VarC).
As in section 4.1, these assignments can be put together to define a 2-functor G : Bdpn(Λ) → S. This functor, together with the algebraisations A : VarC → Ring and B : VarC → R-Mod described in section 4.2, gives us a parabolic s TQFTp, Z : Bdpn(Λ) → R-Bim and
a parabolic ` TQFTp, Z : Bdpn(Λ) → R-Modt. As in the non-parabolic case, the functor Z satisfies that, for any closed connected orientable n-dimensional manifold X, any finite set A ⊆ X and any parabolic structure Q on X disjoint from A, we have
Z(X, A, Q)(Q0) = [Hc•(XG(X, Q); Q)] ⊗ [Hc•(G; Q)]^{|A|−1}.
As always, this implies that
e(Z(X, A, Q)(Q0)) = e(G)^{|A|−1} · e(XG(X, Q)).
Furthermore, following the construction in section 4.3, we can also modify this result to
obtain an almost-TQFT, Z : Tbpn (Λ) → KMHSQ -Mod, where Tbpn (Λ) is the category
of tubes of pairs with parabolic data Λ. In this parabolic case, for any closed orientable
n-dimensional manifold W with parabolic structure Q and any finite set A ⊆ W we obtain
Z(W, A, Q)(Q0) = [Hc•(XG(W, Q); Q)] × [Hc•(G; Q)]^{|A|−1},
e(Z(W, A, Q)(Q0)) = e(G)^{|A|−1} · e(XG(W, Q)).
In the particular case n = 2, observe that Par(W) is just the set of interior points of the surface W. Therefore, in order to obtain a set of generators of Tbpn(Λ) it is enough to consider the elements of the set of generators ∆ previously defined with no parabolic structure and to add a collection of tubes Tλ = (S^1 × [0, 1], {x1, x2}, {(x, λ)}) for λ ∈ Λ, where x is any interior point of S^1 × [0, 1] with parabolic structure λ and x1, x2 are points in the ingoing and outgoing boundaries respectively, as depicted in Figure 4.
Figure 4. Tube with marked point.
In this case, observe that π1((S^1 × [0, 1]) − {x}) is the free group with two generators and that the fundamental groupoid of Tλ has two vertices. These generators can be taken to be around the incoming boundary and around the marked point, so XG(Tλ) = G^2 × λ. Thus, the geometrisation G(Tλ) is the span
G ←r− G^2 × λ −u→ G,    r : (g, g1, h) ↦ g,    u : (g, g1, h) ↦ g1 g h g1^{−1}.
Hence, the image Z(Tλ) is the morphism
Z(Tλ) : KMG −−r^*−→ KM_{G^2×λ} −−u_!−→ KMG.
With this description we have proven the following result.
Theorem 4.13. Let Σg be a closed oriented surface of genus g and Q a parabolic structure on Σg with s marked points with data λ1, . . . , λs ∈ Λ. Then
e(XG(Σg, Q)) = (1 / e(G)^{g+s}) · e( Z(D†) ◦ Z(Tλs) ◦ . . . ◦ Z(Tλ1) ◦ Z(T)^g ◦ Z(D)(Q0) ).
This formula gives a general recipe for computing Deligne-Hodge polynomials of representation varieties, for any group G. That is, once the homomorphism Z(T) : KMG → KMG has been computed explicitly, all the Deligne-Hodge polynomials are known in the non-parabolic case and, if we also compute Z(Tλ) : KMG → KMG for any conjugacy class λ ⊆ G, the parabolic case follows as well.
Some reductions of this program are possible. Instead of KMG, we can consider the vector space VG = KMG ⊗_{Z[u,v]} Q(u, v), where Z[u, v] lies inside KMG via the Tate structures. Hence, in order to carry out the previous program (at least in the non-parabolic case), it is enough to compute the Q(u, v)-linear map Z(T) : VG → VG. However, generically, VG is an infinite dimensional vector space, so that computation could be as difficult as the previous one. Nevertheless, observe that, for our purpose, it is not necessary to compute Z(T) on the whole space VG, but only on the subspace WG = ⟨Z(T)^g(QG) : g ≥ 0⟩, which can be significantly smaller. Actually, the computations of [34], in the case G = SL(2, C), and of [32], for G = P GL(2, C), show that WG is, in both cases, a finite dimensional vector space (of dimensions 8 and 6, respectively). That is important because, in that case, the knowledge of finitely many elements Z(T)^g(QG) ∈ KMG is enough to characterize the map on WG completely. Therefore, there are good reasons to expect the following result, at least for some general class of algebraic groups.
Conjecture 1. The vector space WG ⊆ VG = KMG ⊗_{Z[u,v]} Q(u, v) is finite dimensional.
Future work will address this problem, at least for linear groups, and will look for bounds on the dimension of WG. Another line of work consists of computing these polynomials explicitly for some groups G, for example G = SL(2, C) or P GL(2, C), where the parabolic case remains unknown.
Finally, the developed framework could be useful for understanding the mirror-symmetry
conjectures related to character varieties, as stated in [20] and [22]. For example, computations
of [34] and [32] for the Langlands dual groups SL(2, C) and P GL(2, C) show that WSL(2,C)
and WP GL(2,C) are strongly related, in the sense that their generators agree in the space of
traces. Based on these observations, it would be interesting to know whether the linear maps Z(T) are somehow related for Langlands dual groups.
References
[1] M. Atiyah. Topological quantum field theories. Inst. Hautes Études Sci. Publ. Math., (68):175–186 (1989),
1988.
[2] J. Ayoub. Les six opérations de Grothendieck et le formalisme des cycles évanescents dans le monde motivique. I. Astérisque, (314):x+466 pp. (2008), 2007.
[3] J. C. Baez and J. Dolan. Higher-dimensional algebra and topological quantum field theory. J. Math. Phys.,
36(11):6073–6105, 1995.
[4] D. Baraglia and P. Hekmati. Arithmetic of singular character varieties and their E-polynomials. Proc. Lond.
Math. Soc. (3), 114(2):293–332, 2017.
[5] A. A. Beilinson and V. G. Drinfeld. Quantization of Hitchin’s fibration and Langlands’ program. In Algebraic
and geometric methods in mathematical physics (Kaciveli, 1993), volume 19 of Math. Phys. Stud., pages
3–7. Kluwer Acad. Publ., Dordrecht, 1996.
[6] J. Bénabou. Introduction to bicategories. In Reports of the Midwest Category Seminar, pages 1–77. Springer,
Berlin, 1967.
[7] O. Biquard and M. Jardim. Asymptotic behaviour and the moduli space of doubly-periodic instantons. J.
Eur. Math. Soc. (JEMS), 3(4):335–375, 2001.
[8] H. U. Boden and K. Yokogawa. Moduli spaces of parabolic Higgs bundles and parabolic K(D) pairs over
smooth curves. I. Internat. J. Math., 7(5):573–598, 1996.
[9] R. Brown. Groupoids and van Kampen’s theorem. Proc. London Math. Soc. (3), 17:385–401, 1967.
[10] R. Brown. Topology and groupoids. BookSurge, LLC, Charleston, SC, 2006. Third edition of Elements of modern topology [McGraw-Hill, New York, 1968; MR0227979], with 1 CD-ROM (Windows, Macintosh and UNIX).
[11] K. Corlette. Flat G-bundles with canonical metrics. J. Differential Geom., 28(3):361–382, 1988.
[12] P. Deligne. Théorie de Hodge. II. Inst. Hautes Études Sci. Publ. Math., (40):5–57, 1971.
[13] P. Deligne. Théorie de Hodge. III. Inst. Hautes Études Sci. Publ. Math., (44):5–77, 1974.
[14] F. El Zein and Lê Dũng Tráng. Mixed Hodge structures. In Hodge theory, volume 49 of Math. Notes, pages
123–216. Princeton Univ. Press, Princeton, NJ, 2014.
[15] H. Fausk, P. Hu, and J. P. May. Isomorphisms between left and right adjoints. Theory Appl. Categ., 11:No.
4, 107–131, 2003.
[16] D. S. Freed. Higher algebraic structures and quantization. Comm. Math. Phys., 159(2):343–398, 1994.
[17] O. Garcı́a-Prada, P. B. Gothen, and V. Muñoz. Betti numbers of the moduli space of rank 3 parabolic
Higgs bundles. Mem. Amer. Math. Soc., 187(879):viii+80, 2007.
[18] O. Garcı́a-Prada, J. Heinloth, and A. Schmitt. On the motives of moduli of chains and Higgs bundles. J.
Eur. Math. Soc. (JEMS), 16(12):2617–2668, 2014.
[19] P. B. Gothen. The Betti numbers of the moduli space of stable rank 3 Higgs bundles on a Riemann surface.
Internat. J. Math., 5(6):861–875, 1994.
[20] T. Hausel. Mirror symmetry and Langlands duality in the non-abelian Hodge theory of a curve. In Geometric
methods in algebra and number theory, volume 235 of Progr. Math., pages 193–217. Birkhäuser Boston,
Boston, MA, 2005.
[21] T. Hausel, E. Letellier, and F. Rodriguez-Villegas. Arithmetic harmonic analysis on character and quiver
varieties. Duke Math. J., 160(2):323–400, 2011.
[22] T. Hausel and F. Rodriguez-Villegas. Mixed Hodge polynomials of character varieties. Invent. Math.,
174(3):555–624, 2008. With an appendix by Nicholas M. Katz.
[23] P. J. Higgins. Categories and groupoids. Repr. Theory Appl. Categ., (7):1–178, 2005. Reprint of the 1971 original [Notes on categories and groupoids, Van Nostrand Reinhold, London; MR0327946] with a new preface by the author.
[24] N. J. Hitchin. The self-duality equations on a Riemann surface. Proc. London Math. Soc. (3), 55(1):59–126,
1987.
[25] M. Jardim. Nahm transform and spectral curves for doubly-periodic instantons. Comm. Math. Phys.,
225(3):639–668, 2002.
[26] J. Kock. Frobenius algebras and 2D topological quantum field theories, volume 59 of London Mathematical
Society Student Texts. Cambridge University Press, Cambridge, 2004.
[27] A. D. Lauda and H. Pfeiffer. Open-closed strings: two-dimensional extended TQFTs and Frobenius algebras. Topology Appl., 155(7):623–666, 2008.
[28] T. Leinster. Higher operads, higher categories, volume 298 of London Mathematical Society Lecture Note
Series. Cambridge University Press, Cambridge, 2004.
[29] M. Logares and V. Muñoz. Hodge polynomials of the SL(2, C)-character variety of an elliptic curve with
two marked points. Internat. J. Math., 25(14):1450125, 22, 2014.
[30] M. Logares, V. Muñoz, and P. E. Newstead. Hodge polynomials of SL(2, C)-character varieties for curves
of small genus. Rev. Mat. Complut., 26(2):635–703, 2013.
[31] S. Mac Lane. Categories for the working mathematician, volume 5 of Graduate Texts in Mathematics.
Springer-Verlag, New York, second edition, 1998.
[32] J. Martı́nez. E-polynomials of P GL(2, C)-character varieties of surface groups. Preprint. arXiv:1705.04649,
2017.
[33] J. Martı́nez and V. Muñoz. E-polynomials of SL(2, C)-character varieties of complex curves of genus 3.
Osaka J. Math., 53(3):645–681, 2016.
[34] J. Martı́nez and V. Muñoz. E-polynomials of the SL(2, C)-character varieties of surface groups. Int. Math.
Res. Not. IMRN, (3):926–961, 2016.
[35] M. Mereb. On the E-polynomials of a family of SLn -character varieties. Math. Ann., 363(3-4):857–892,
2015.
[36] K. Nakamoto. Representation varieties and character varieties. Publ. Res. Inst. Math. Sci., 36(2):159–189,
2000.
[37] P. E. Newstead. Introduction to moduli problems and orbit spaces, volume 51 of Tata Institute of Fundamental Research Lectures on Mathematics and Physics. Tata Institute of Fundamental Research, Bombay;
by the Narosa Publishing House, New Delhi, 1978.
[38] C. A. M. Peters and J. H. M. Steenbrink. Mixed Hodge structures, volume 52 of Ergebnisse der Mathematik
und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and
Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 2008.
[39] M. Saito. Mixed Hodge modules. Proc. Japan Acad. Ser. A Math. Sci., 62(9):360–363, 1986.
[40] M. Saito. Introduction to mixed Hodge modules. Astérisque, (179-180):10, 145–162, 1989. Actes du Colloque
de Théorie de Hodge (Luminy, 1987).
[41] M. Saito. Mixed Hodge modules. Publ. Res. Inst. Math. Sci., 26(2):221–333, 1990.
[42] J. Schürmann. Characteristic classes of mixed Hodge modules. In Topology of stratified spaces, volume 58
of Math. Sci. Res. Inst. Publ., pages 419–470. Cambridge Univ. Press, Cambridge, 2011.
[43] M. Shulman. Framed bicategories and monoidal fibrations. Theory Appl. Categ., 20:No. 18, 650–738, 2008.
[44] C. T. Simpson. Harmonic bundles on noncompact curves. J. Amer. Math. Soc., 3(3):713–770, 1990.
[45] C. T. Simpson. Higgs bundles and local systems. Inst. Hautes Études Sci. Publ. Math., (75):5–95, 1992.
[46] C. T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. I. Inst.
Hautes Études Sci. Publ. Math., (79):47–129, 1994.
[47] C. T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. II. Inst.
Hautes Études Sci. Publ. Math., (80):5–79 (1995), 1994.
[48] A. Strominger, S.-T. Yau, and E. Zaslow. Mirror symmetry is T -duality. Nuclear Phys. B, 479(1-2):243–259,
1996.
Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza Ciencias 3,
28040 Madrid Spain.
E-mail address: angel [email protected]
School of Computing, Electronics and Mathematics (Faculty of Science and Engineering),
Plymouth University, 2-5, Kirkby Place, Plymouth, PL1 8AA United Kingdom.
E-mail address: [email protected]
Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza Ciencias 3,
28040 Madrid Spain.
E-mail address: [email protected]
Randomised enumeration of small witnesses using a
decision oracle
Kitty Meeks∗
School of Computing Science, University of Glasgow
arXiv:1509.05572v4 [] 4 Jan 2018
[email protected]
January 2018
Abstract
Many combinatorial problems involve determining whether a universe of n elements contains a witness consisting of k elements which have some specified property. In this paper we investigate the relationship between the decision and enumeration versions of such problems: efficient methods are known for transforming
a decision algorithm into a search procedure that finds a single witness, but even
finding a second witness is not so straightforward in general. We show that, if the
decision version of the problem can be solved in time f(k) · poly(n), there is a randomised algorithm which enumerates all witnesses in time e^{k+o(k)} · f(k) · poly(n) · N,
where N is the total number of witnesses. If the decision version of the problem
is solved by a randomised algorithm which may return false negatives, then the
same method allows us to output a list of witnesses in which any given witness will
be included with high probability. The enumeration algorithm also gives rise to
an efficient algorithm to count the total number of witnesses when this number is
small.
1 Introduction
Many well-known combinatorial decision problems involve determining whether a universe U of n elements contains a witness W consisting of exactly k elements which have
some specified property; we refer to such problems as k-witness problems. Although the
decision problems themselves are of interest, it is often not sufficient for applications to
output simply “yes” or “no”: we need to find one or more witnesses. The issue of finding
a single witness using an oracle for the decision problem has previously been investigated
by Björklund, Kaski, and Kowalik [6], motivated by the fact that the fastest known parameterised algorithms for a number of widely studied problems (such as graph motif [5]
and k-path [4]) are non-constructive in nature. Moreover, for some problems (such as
k-Clique or Independent Set [3] and k-Even Subgraph [17]) the only known FPT decision algorithm relies on a Ramsey theoretic argument which says the answer must be “yes” provided that the input graph avoids certain easily recognisable structures.
∗ The author is supported by a Royal Society of Edinburgh Personal Research Fellowship, funded by the Scottish Government.
Following the first approach used in [6], we begin by assuming the existence of a deterministic inclusion oracle (a black-box decision procedure), as follows.
INC-ORA(X, U, k)
Input: X ⊆ U and k ∈ N
Output: 1 if some witness of size k in U is entirely contained in X; 0 otherwise.
Such an inclusion oracle can easily be obtained from an algorithm for the basic decision
problem in the case of self-contained k-witness problems, where we only have to examine
the elements of a k-element subset (and the relationships between them) to determine
whether they form a witness: we simply call the decision procedure on the substructure induced by X. Examples of k-witness problems that are self-contained in this sense
include those of determining whether a graph contains a k-vertex subgraph with some
property, such as the well-studied problems k-Path, k-Cycle and k-Clique; algorithms
to count the number of witnesses in problems of this form have been designed for applications ranging from the analysis of biological networks [23] to the design of network
security tools [16, 25, 26].
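To make this concrete, the sketch below (ours, not taken from [6]) builds INC-ORA for the self-contained problem k-Clique by running a deliberately naive decision procedure on the subgraph induced by X. Only the interface INC-ORA(X, U, k) comes from the text; the helper names and the adjacency-set representation are illustrative assumptions.

from itertools import combinations

def has_k_clique(vertices, adj, k):
    # Naive decision procedure: does the graph induced by `vertices`
    # contain a clique on k vertices?  (Exponential; for illustration only.)
    vertices = list(vertices)
    if len(vertices) < k:
        return False
    return any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(vertices, k))

def make_inc_ora(adj):
    # Build INC-ORA(X, U, k) for k-Clique: the problem is self-contained,
    # so it suffices to run the decision procedure on the subgraph induced by X.
    def inc_ora(X, U, k):
        return 1 if has_k_clique(X, adj, k) else 0
    return inc_ora

# Example: a triangle 0-1-2 with a pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
inc_ora = make_inc_ora(adj)
print(inc_ora({0, 1, 2, 3}, {0, 1, 2, 3}, 3))  # 1: the triangle {0, 1, 2}
print(inc_ora({0, 1, 3}, {0, 1, 2, 3}, 3))     # 0: no triangle inside {0, 1, 3}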
Given access to an oracle of this kind, a naı̈ve approach easily finds a single witness
using Θ(n) calls to INC-ORA: we successively delete elements of the universe, following
each deletion with an oracle call, and if the oracle answers “no” we reinsert the last
deleted element and continue. Assuming we start with a yes-instance, this process will
terminate when only k elements remain, and these k elements must form a witness. In [6],
ideas from combinatorial group testing are used to make a substantial improvement on
this strategy for the extraction of a single witness: rather than deleting a single element
at a time, large subsets are discarded (if possible) at each stage. This gives an algorithm that extracts a witness with only 2k log2(n/k) + 2 oracle queries.
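For illustration, the naive Θ(n)-query extraction procedure described above can be written against any oracle with the INC-ORA interface (a minimal sketch; it assumes the instance is a yes-instance, i.e. INC-ORA(U, U, k) = 1).

def extract_witness_naive(U, inc_ora, k):
    # Assumes INC-ORA(U, U, k) = 1.  Try to delete each element in turn; if the
    # oracle reports that no witness of size k survives, reinsert the element.
    X = set(U)
    for x in list(U):
        X.discard(x)
        if inc_ora(X, set(U), k) == 0:
            X.add(x)  # x is needed by every remaining witness, so keep it
    return X          # exactly k elements remain, and they form a witness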
However, neither of these approaches for finding a single witness can immediately be
extended to find all witnesses, a problem which is of interest even if an efficient decision
algorithm does output a single witness. Both approaches for finding a first witness rely
on the fact that we can safely delete some subset of elements from our universe provided
we know that what is left still contains at least one witness; if we need to look for a
second witness, the knowledge that at least one witness will remain is no longer sufficient
to guarantee we can delete a given subset. Of course, for any k-witness problem we can
check all possible subsets of size k, and hence enumerate all witnesses, in time O(n^k);
indeed, if every set of k vertices is in fact a witness then we will require this amount of
time simply to list them all. However, we can seek to do much better than this when the
number of witnesses is small by making use of a decision oracle.
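For comparison, the trivial O(n^k) enumeration is immediate, since a k-element set X is a witness precisely when INC-ORA(X, U, k) = 1 (a sketch against the same hypothetical oracle interface as above).

from itertools import combinations

def enumerate_witnesses_brute_force(U, inc_ora, k):
    # Check every k-element subset of the universe: a k-set X is a witness
    # iff INC-ORA(X, U, k) = 1, because the only k-set contained in X is X itself.
    U = list(U)
    return [set(S) for S in combinations(U, k)
            if inc_ora(set(S), set(U), k) == 1]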
The enumeration problem becomes straightforward if we have an extension oracle,1
defined as follows.
EXT-ORA(X, Y, U, k)
Input: X ⊆ U, Y ⊆ X, and k ∈ N
Output: 1 if there exists a witness W with Y ⊆ W ⊆ X; 0 otherwise.
1 Such an oracle is sometimes called an interval oracle, as in the enumeration procedure described by Björklund, Kaski, Kowalik and Lauri [7], which builds on earlier work by Lawler [21].
The existence of an efficient procedure EXT-ORA(X,Y ,U,k) for a given self-contained
k-witness problem allows us to use standard backtracking techniques to devise an efficient
enumeration algorithm. We explore a binary search tree of depth O(n), branching at level
i of the tree on whether the ith element of U belongs to the solution. Each node in the
search tree then corresponds to a specific pair (X, Y ) with Y ⊆ X ⊆ U; we can call EXT-ORA(X, Y, U, k) to determine whether any descendant of a given node corresponds to a
witness. Pruning the search tree in this way ensures that no more than O(n · N) oracle
calls are required, where N is the total number of witnesses.
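A minimal sketch of this backtracking procedure, assuming an extension oracle ext_ora(X, Y, U, k) with the EXT-ORA interface (the recursion and variable names are ours):

def enumerate_with_ext_ora(U, ext_ora, k):
    # Backtracking enumeration: branch on whether each element of U is in the
    # witness, pruning with the extension oracle.  At each node, Y is the set
    # of elements forced into the witness and X is the set still allowed.
    U = list(U)
    witnesses = []

    def explore(i, X, Y):
        if ext_ora(X, Y, set(U), k) == 0:    # no witness W with Y ⊆ W ⊆ X
            return
        if len(Y) == k:                       # Y itself must be that witness
            witnesses.append(set(Y))
            return
        if i == len(U):
            return
        x = U[i]
        if x in X:
            explore(i + 1, X, Y | {x})        # branch: x belongs to the witness
            explore(i + 1, X - {x}, Y)        # branch: x does not belong
        else:
            explore(i + 1, X, Y)

    explore(0, set(U), set())
    return witnesses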
Note that, with only the inclusion oracle, we can determine whether there is a witness
that does not contain some element x (we simply call INC-ORA(U \ {x}, U, k)), but
we cannot determine whether there is a witness which does contain x. Moreover, as we
will show in Section 3, there are natural (self-contained) k-witness problems for which the
inclusion problem can be solved efficiently but there is no fpt-algorithm for the extension
decision problem unless FPT=W[1]. This motivates the development of enumeration
algorithms that do not rely on such an oracle.
The main result of this paper is just such an algorithm; specifically, we prove the
following theorem.
Theorem 1.1. There is a randomised algorithm to enumerate all witnesses of size k in a k-witness problem exactly once, whose expected number of calls to a deterministic inclusion oracle is at most e^{k+o(k)} · log² n · N, where N is the total number of witnesses. If an oracle call can be executed in time g(k) · n^{O(1)} for some computable function g, then the expected total running time of the algorithm is
e^{k+o(k)} · g(k) · n^{O(1)} · N.
Moreover, the total space required by the algorithm is at most e^{k+o(k)} · n^{O(1)}.
The key tool we use to obtain this algorithm is a colour coding method, using a family
of k-perfect hash functions. This technique was introduced by Alon, Yuster and Zwick
in [1] and has been widely used in the design of parameterised algorithms for decision
and approximate counting (see for example [15, Chapters 13 and 14] and [12, Chapter
8]), but to the best of the author’s knowledge has not yet been applied to enumeration
problems.
The main limitation of Theorem 1.1 is that it requires access to a deterministic inclusion oracle INC-ORA which always returns the correct answer. However, in a number of
cases (including k-Path [4] and Graph Motif [5]) the fastest known decision algorithm
for a self-contained k-witness problem (and hence for the corresponding inclusion problem) is in fact randomised and has a small probability of returning an incorrect answer.
We will also show that the same algorithm can be used in this case, at the expense of a
small increase in the expected running time (if the oracle can return false positives) and
the loss of the guarantee that we will output every witness exactly once: for each witness
in the instance, there is a small probability that we will omit it from the list due to the
oracle returning false negatives. Specifically, we prove the following theorem.
Theorem 1.2. Given a randomised inclusion oracle for the k-witness problem Π, whose probability of returning an incorrect answer is at most c < 1/2, there is a randomised algorithm which takes as input an instance of Π and a constant ǫ > 0, and outputs a list of witnesses of size k in the instance such that no witness appears more than once and, for any witness W, the probability that W is included in the list is at least 1 − ǫ. In expectation, the algorithm makes at most e^{k+o(k)} · log(ǫ^{−1}) · log³ n · (log log n) · N oracle calls, where N is the total number of witnesses in the instance, and if an oracle call can be executed in time g(k) · n^{O(1)} for some computable function g, then the expected running time of the algorithm is
e^{k+o(k)} · log(ǫ^{−1}) · g(k) · n^{O(1)} · N.
Moreover, the total space required by the algorithm is e^{k+o(k)} · n^{O(1)}.
This result initiates the study of approximate algorithms for enumeration problems:
in contrast with the well-established field of approximate counting, this relaxation of the
requirements for enumeration does not seem to have been addressed in the literature to
date.
In the study of counting complexity it is standard practice, when faced with a #P-hard problem, to investigate whether there is an efficient method to solve the counting
problem approximately. The answer to this question is considered to be “yes” if and only
if the problem admits a fully polynomial randomised approximation scheme (FPRAS),
defined as follows.
Definition. An FPRAS for a counting problem Π is a randomised approximation scheme
that takes an instance I of Π (with |I| = n), and real numbers ǫ > 0 and 0 < δ < 1, and
in time poly(n, 1/ǫ, log(1/δ)) outputs a rational number z such that
P[(1 − ǫ)Π(I) ≤ z ≤ (1 + ǫ)Π(I)] ≥ 1 − δ.
In the parameterised setting, the analogue of this is a fixed parameter tractable randomised
approximation scheme (FPTRAS), in which the running time is additionally allowed to
depend arbitrarily on the parameter.
Perhaps the most obvious way to translate this notion in to the setting of enumeration
would be to look for an algorithm which, with probability at least (1 − δ), would output
at least (1 − ǫ)-proportion of all witnesses. In the setting of counting, all witnesses are
essentially interchangeable, so it makes sense to consider only the total number of objects
counted in relation to the true answer. However, this definition perhaps allows too much
freedom in the setting of enumeration: we could design an algorithm which satisfies these
requirements and yet will never output some collection of hard-to-find witnesses, so long
as this collection is not too large compared with the total number of witnesses.
Instead, we propose here a more demanding notion of approximate enumeration: given
ǫ > 0, we want a (randomised) algorithm such that, for any witness W , the probability
we output W is at least 1 − ǫ. This implies that we will, with high probability (depending
on ǫ) output a large proportion of all possible witnesses, but also ensures that we cannot
choose to ignore certain potential witnesses altogether. It may also be desirable to permit
a witness to be repeated in the output with small probability: we can allow this flexibility
by requiring only that, for each witness W , the probability that W is included in the
output exactly once is at least 1 − ǫ. We give a formal definition of efficient approximate
enumeration in Section 2.
Theorem 1.1 is proved in Section 4, and Theorem 1.2 in Section 5. We then discuss
some implications of our enumeration algorithms for the complexity of related counting
problems in Section 6. We begin in Section 2 with some background on relevant complexity theoretic notions, before discussing the hardness of the extension version of some
self-contained k-witness problems in Section 3.
2 Parameterised enumeration
There are two natural measures of the size of a self-contained k-witness problem, namely
the number of elements n in the universe and the number of elements k in each witness, so
the running time of algorithms is most naturally discussed in the setting of parameterised
complexity. There are two main complexity issues to consider in the present setting: first
of all, as usual, the running time, and secondly the number of oracle calls required.
For general background on the theory of parameterised complexity, we refer the reader
to [12, 15]. The theory of parameterised enumeration has been developed relatively
recently [13, 9, 8], and we refer the reader to [9] for the formal definitions of the different
classes of parameterised enumeration algorithms. To the best of the author’s knowledge,
this is the first occurrence of a randomised parameterised enumeration algorithm in the
literature, and so we introduce randomised analogues of the four types of parameterised
enumeration algorithms introduced in [9] (for a problem with total input size n and
parameter k, and with f : N → N assumed to be a computable function throughout):
• an expected-total-fpt algorithm enumerates all solutions and terminates in expected time f(k) · n^{O(1)};
• an expected-delay-fpt algorithm enumerates all solutions with expected delay at most f(k) · n^{O(1)} between the times at which one solution and the next are output (and the same bound applies to the time before outputting the first solution, and between outputting the final solution and terminating);
• an expected-incremental-fpt algorithm enumerates all solutions with expected delay at most f(k) · (n + i)^{O(1)} between outputting the ith and (i + 1)th solution;
• an expected-output-fpt algorithm enumerates all solutions and terminates in expected time f(k) · (n + N)^{O(1)}, where N is the total number of solutions enumerated.
Under these definitions, Theorem 1.1 says that, if there is an FPT decision algorithm
for the inclusion version of a k-witness problem, then there is an expected-output-fpt
algorithm for the corresponding enumeration problem.
In the setting of approximate enumeration, we define a fully output polynomial randomised enumeration scheme (FOPRES) to be an algorithm which, given an instance I
of an enumeration problem (with total input size n) and a rational ǫ ∈ (0, 1), outputs, in
time bounded by a polynomial function of n, N and ǫ^{−1} (where N is the total number of solutions to I), a list of solutions to I with the property that, for any solution W, the probability that W appears exactly once in the list is at least 1 − ǫ. In the parameterised setting, we analogously define a fully output fpt randomised enumeration scheme (FOFPTRES) to be an algorithm which, given an instance I of a parameterised enumeration problem (with total input size n and parameter k) and a rational ǫ ∈ (0, 1), outputs, in time bounded by f(k) · p(n, N, ǫ^{−1}), where p is a polynomial, f is any computable function, and N is the total number of solutions to I, a list of solutions to I with the property that, for any solution W, the probability that W appears exactly once in the list is at least 1 − ǫ. An expected-FOPRES (respectively expected-FOFPTRES) is a randomised
algorithm which satisfies the definition of a FOPRES (resp. FOFPTRES) if we replace
the condition on the running time by the same condition on the expected running time.
We can make analogous definitions for total-polynomial, total-fpt, delay-polynomial etc.
Under these definitions, Theorem 1.2 says that, if there is a randomised FPT decision
algorithm for the inclusion version of a k-witness problem with error probability less than
a half, then the corresponding enumeration problem admits a FOFPTRES.
3 Hardness of the extension problem
Many combinatorial problems have a very useful property, often referred to as self-reducibility, which allows a search or enumeration problem to be reduced to (smaller
instances of) the corresponding decision problem in a very natural way (see [9, 20, 27]).
A problem is self-reducible in this sense if the existence of an efficient decision procedure (answering the question: “Does the universe contain at least one witness of size
k?”) implies that there is an efficient algorithm to solve the extension decision problem
(equivalent to EXT-ORA). While many self-contained k-witness problems do have this
property, we will demonstrate that there exist self-contained k-witness problems that
do not (unless FPT=W[1]), and so an enumeration procedure that makes use only of
INC-ORA and not EXT-ORA is desirable.
In order to demonstrate this, we show that there exist self-contained k-witness problems whose decision versions belong to FPT, but for which the corresponding extension
decision problem is W[1]-hard. We will consider the following problem, which is clearly
a self-contained k-witness problem.
k-Clique or Independent Set
Input: A graph G = (V, E) and k ∈ N.
Parameter: k.
Question: Is there a k-vertex subset of V that induces either a clique or an independent set?
This problem is known to belong to FPT [3]: any graph with at least 2^{2k} vertices must
be a yes-instance by Ramsey’s Theorem. We now turn our attention to the extension
version of the problem, defined as follows.
k-Extension Clique or Independent Set
Input: A graph G = (V, E), a subset U ⊆ V and k ∈ N.
Parameter: k.
Question: Is there a k-vertex subset S of V , with U ⊆ S, that induces either a clique
or an independent set?
It is straightforward to adapt the hardness proof for k-Multicolour Clique or Independent Set [22, Proposition 3.7] to show that k-Extension Clique or Independent Set is W[1]-hard.
Proposition 3.1. k-Extension Clique or Independent Set is W[1]-hard.
Proof. We prove this result by means of a reduction from the W[1]-complete problem
k-Clique. Let (G, k) be the input to an instance of k-Clique. We now define a new
graph G′ , obtained from G by adding one new vertex v, and an edge from v to every
vertex u ∈ V (G). It is then straightforward to verify that (G′ , {v}, k + 1) is a yes-instance
for k-Extension Clique or Independent Set if and only if G contains a clique of
size k.
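For completeness, the construction in the proof can be phrased as code (a sketch; the adjacency-set representation and the integer vertex names are illustrative assumptions).

def reduce_clique_to_extension(adj, k):
    # Given an instance (G, k) of k-Clique (adj: vertex -> set of neighbours),
    # build the instance (G', {v}, k + 1) of k-Extension Clique or Independent
    # Set by adding a new vertex v adjacent to every vertex of G.
    v = max(adj) + 1 if adj else 0          # a fresh vertex name (assumes int names)
    new_adj = {u: set(nbrs) | {v} for u, nbrs in adj.items()}
    new_adj[v] = set(adj)                   # v is adjacent to all original vertices
    return new_adj, {v}, k + 1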
This demonstrates that k-Extension Clique or Independent Set is a problem
for which there exists an efficient decision procedure but no efficient algorithm for the
extension version of the decision problem (unless FPT=W[1]). Both of these arguments
(inclusion of the decision problem in FPT, and hardness of the extension version) can
easily be adapted to demonstrate that the following problem exhibits the same behaviour.
k-Induced Regular Subgraph
Input: A graph G = (V, E) and k ∈ N.
Parameter: k.
Question: Is there a k-vertex subset of V that induces a subgraph in which every
vertex has the same degree?
Indeed, the same method can be applied to any problem in which putting a restriction
on the degree of one of the vertices in the witness guarantees that the witness induces a
clique (or some other induced subgraph for which it is W[1]-hard to decide inclusion in
an arbitrary input graph).
4 The randomised enumeration algorithm
In this section we describe our randomised witness enumeration algorithm and analyse
its performance when used with a deterministic oracle, thus proving Theorem 1.1.
As mentioned above, our algorithm relies on a colour coding technique. A family F
of hash functions from [n] to [k] is said to be k-perfect if, for every subset A ⊂ [n] of size
k, there exists f ∈ F such that the restriction of f to A is injective. We will use the
following bound on the size of such a family of hash functions.
Theorem 4.1. [24] For all n, k ∈ N there is a k-perfect family Fn,k of hash functions from [n] to [k] of cardinality e^{k+o(k)} · log n. Furthermore, given n and k, a representation of the family Fn,k can be computed in time e^{k+o(k)} · n log n.
Our strategy is to solve a collection of e^{k+o(k)} · log n colourful enumeration problems,
one corresponding to each element of a family F of k-perfect hash functions. In each of
these problems, our goal is to enumerate all witnesses that are colourful with respect to
the relevant element f of F (those in which each element is assigned a distinct colour by
f ). Of course, we may discover the same witness more than once if it is colourful with
respect to two distinct elements in F , but it is straightforward to check for repeats of
this kind and omit duplicate witnesses from the output. It is essential in the algorithm
that we use a deterministic construction of a k-perfect family of hash functions rather
than the randomised construction also described in [1], as the latter method would allow
the possibility of witnesses being omitted (with some small probability).
The advantage of solving a number of colourful enumeration problems is that we can
split the problem into a number of sub-problems with the only requirement being that we
preserve witnesses in which every element has a different colour (rather than all witnesses).
This makes it possible to construct a number of instances, each (roughly) half the size
of the original instance, such that every colourful witness survives in at least one of the
smaller instances. More specifically, for each k-perfect hash function we explore a search
tree: at each node, we split every colour-class randomly into (almost) equal-sized parts,
and then branch to consider each of the 2^k combinations that includes one (nonempty)
subset of each colour, provided that the union of these subsets still contains at least one
witness (as determined by the decision oracle). This simple pruning of the search tree
will not prevent us exploring “dead-ends” (where we pursue a particular branch due to
the presence of a non-colourful witness), but turns out to be sufficient to make it unlikely
that we explore very many branches that do not lead to colourful witnesses.
We describe the algorithm in pseudocode (Algorithm 1), making use of two subroutines. In addition to our oracle INC-ORA(X,U,k), we also define a procedure RANDPART(X) which we use, while exploring the search tree, to obtain a random partition
of a subset of the universe.
RANDPART(X)
Input: X ⊆ U
Output: A partition (X1 , X2 ) of X with ||X1 | − |X2 || ≤ 1, chosen uniformly at random from all such partitions of X.
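A direct implementation of RANDPART is sketched below: shuffling the elements and cutting the shuffled order in half yields two parts whose sizes differ by at most one, and every such partition is equally likely.

import random

def randpart(X):
    # RANDPART(X): split X into two parts whose sizes differ by at most one,
    # uniformly at random among all such partitions.
    elems = list(X)
    random.shuffle(elems)
    half = (len(elems) + 1) // 2
    return set(elems[:half]), set(elems[half:])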
We prove the correctness of the algorithm and discuss the space used in Section 4.1, and
bound the expected running time in Section 4.2.
4.1 Correctness of the algorithm
In order to prove that our algorithm does indeed output every witness exactly once, we
begin by showing that we will identify a given k-element subset X during the iteration
corresponding to the hash-function f ∈ F if and only if X is a colourful witness with
respect to f .
Lemma 4.2. Let X be a set of k vertices in the universe U. In the iteration of Algorithm 1
corresponding to f ∈ F, we will execute lines 9 to 11 with A = X if and only if:
1. X is a witness, and
2. X is colourful with respect to f .
Algorithm 1: Randomised algorithm to enumerate all k-element witnesses in the universe U, using a decision oracle.
1   if INC-ORA(U, U, k) = 1 then
2       Construct a family F = {f_1, f_2, . . . , f_|F|} of k-perfect hash functions from U to [k];
3       for 1 ≤ r ≤ |F| do
4           Initialise an empty FIFO queue Q;
5           Insert U into Q;
6           while Q is not empty do
7               Remove the first element A from Q;
8               if |A| = k then
9                   if A is not colourful with respect to f_s for any s ∈ {1, . . . , r − 1} then
10                      Output A;
11                  end if
12              else
13                  for 1 ≤ i ≤ k do
14                      Set A_i to be the set of elements in A coloured i by f_r;
15                      Set (A_i^(1), A_i^(2)) = RANDPART(A_i);
16                  end for
17                  for each j = (j_1, . . . , j_k) ∈ {1, 2}^k do
18                      if |A_ℓ^(j_ℓ)| > 0 for each 1 ≤ ℓ ≤ k then
19                          Set A_j = A_1^(j_1) ∪ · · · ∪ A_k^(j_k);
20                          if INC-ORA(A_j, U, k) = 1 then
21                              Add A_j to Q;
22                          end if
23                      end if
24                  end for
25              end if
26          end while
27      end for
28  end if
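For readers who prefer running code, the following Python sketch mirrors Algorithm 1. It reuses the hypothetical inc_ora and randpart helpers from the earlier sketches and, purely to keep the sketch self-contained, replaces the deterministic k-perfect family of Theorem 4.1 by independent uniformly random colourings; with that substitution a witness may be missed with small probability, so this illustrates the search strategy rather than the exact guarantee of Theorem 1.1.

import math
import random
from collections import deque
from itertools import product

def is_colourful(A, colouring, k):
    # A k-element set is colourful if its elements receive k distinct colours.
    return len({colouring[x] for x in A}) == k

def enumerate_witnesses(U, inc_ora, k, num_colourings=None):
    U = list(U)
    universe = set(U)
    if inc_ora(universe, universe, k) != 1:          # line 1 of Algorithm 1
        return []
    if num_colourings is None:
        # Heuristic stand-in for |F| = e^{k+o(k)} log n from Theorem 4.1.
        num_colourings = int(math.e ** k * math.log(max(len(U), 2))) + 1
    F = [{x: random.randrange(k) for x in U} for _ in range(num_colourings)]
    witnesses = []
    for r, f in enumerate(F):                        # loop over colourings
        Q = deque([universe])
        while Q:
            A = Q.popleft()
            if len(A) == k:
                # A is a witness: it entered Q only after the oracle accepted it.
                # Output it if it is colourful under f (automatic for sets built
                # from nonempty colour parts) but under no earlier colouring.
                if is_colourful(A, f, k) and \
                   not any(is_colourful(A, F[s], k) for s in range(r)):
                    witnesses.append(set(A))
                continue
            classes = [[x for x in A if f[x] == i] for i in range(k)]
            halves = [randpart(c) for c in classes]  # random balanced split per colour
            for j in product((0, 1), repeat=k):
                if any(len(halves[i][j[i]]) == 0 for i in range(k)):
                    continue                         # line 18: skip empty parts
                A_j = set().union(*(halves[i][j[i]] for i in range(k)))
                if inc_ora(A_j, universe, k) == 1:   # line 20: prune with the oracle
                    Q.append(A_j)
    return witnesses

On the small k-Clique example from the INC-ORA sketch, enumerate_witnesses({0, 1, 2, 3}, inc_ora, 3) should return the single triangle {0, 1, 2} with high probability.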
Proof. We first argue that we only execute lines 9 to 11 with A = X if X is a witness and
is colourful with respect to f . We claim that, throughout the execution of the iteration
corresponding to f , every subset B in the queue Q has the following properties:
1. there is some witness W such that W ⊆ B, and
2. B contains at least one vertex receiving each colour under f .
Notice that we check the first condition before adding any subset A to Q (lines 1 and
20), and we check the second condition for any A 6= U in line 18 (U necessarily satisfies
condition 2 by construction of F ), so these two conditions are always satisfied. Thus, if
we execute lines 9 to 11 with A = X, these conditions hold for X; note also that we only
execute these lines with A = X if |X| = k. Hence, as there is a witness W ⊆ X where
|W | = |X| = k, we must have X = W and hence X is a witness. Moreover, as X must
contain at least one vertex of each colour, and contains exactly k elements, it must be
colourful.
Conversely, suppose that W = {w1 , . . . , wk } is a witness such that f (wi ) = i for
each 1 ≤ i ≤ k; we need to show that we will at some stage execute lines 9 to 11 with
A = W . We argue that at the start of each execution of the while loop, if W has not
yet been output, there must be some subset B in the queue such that W ⊆ B. This
invariant clearly holds before the first execution of the loop (U will have been inserted
into Q, as U contains at least one witness W ). Now suppose that the invariant holds
before starting some execution of the while loop. Either we execute lines 9 to 11 with
A = W on this iteration (in which case we are done), or else we proceed to line 13. Now,
for 1 ≤ i ≤ k, set j_i to be either 1 or 2 in such a way that w_i ∈ A_i^(j_i). The subset A_j,
where j = (j1 , . . . , jk ) will then pass both tests for insertion into Q, and W ⊆ Aj by
construction, so the invariant holds when we exit the while loop. Since the algorithm
only terminates when Q is empty, it follows that we must eventually execute lines 9 to
11 with A = W .
The key property of k-perfect families of hash functions then implies that the algorithm will identify every witness; it remains only to ensure that we avoid outputting any
witness more than once. This is the purpose of lines 9 to 11 in the pseudocode. We know
from Lemma 4.2 that we find a given witness W while considering the hash-function f
if and only if W is colourful with respect to f : thus, in order to determine whether we
have found the witness in question before, it suffices to verify whether it is colourful with
respect to any of the colourings previously considered. Hence we see that every witness
is output exactly once, as required.
Note that the most obvious strategy for avoiding repeats would be to maintain a list of all the witnesses we have output so far, and check for membership of this list; however, in general there might be as many as (n choose k) witnesses, so both storing this list and searching it would be costly. The approach used here means that we only have to store the family F of k-perfect hash functions (requiring space e^{k+o(k)} n log n). Since each execution of the outer for-loop clearly requires only polynomial space, the total space complexity of the algorithm is at most e^{k+o(k)} n^{O(1)}, as required.
4.2 Expected running time
We know from Theorem 4.1 that a family F of k-perfect hash functions from U to [k], with |F| = e^{k+o(k)} log n, can be computed in time e^{k+o(k)} n log n; thus line 2 can be executed in time e^{k+o(k)} n log n and the total number of iterations of the outer for-loop (lines 3 to 27) is at most e^{k+o(k)} log n.
Moreover, it is clear that each iteration of the while loop (lines 6 to 26) makes at most 2^k oracle calls. If an oracle call can be executed in time g(k) · n^{O(1)} for some computable function g, then the total time required to perform each iteration of the while loop is at most max{|F|, kn + 2^k · g(k) · n^{O(1)}} = e^{k+o(k)} · g(k) · n^{O(1)}.
Thus it remains to bound the expected number of iterations of the while loop in any
iteration of the outer for-loop; we do this in the next lemma.
Lemma 4.3. The expected number of iterations of the while-loop in any given iteration
of the outer for-loop is at most N (1 + ⌈log n⌉), where N is the total number of witnesses
in the instance.
Proof. We fix an arbitrary f ∈ F , and for the remainder of the proof restrict our attention
to the iteration of the outer for-loop corresponding to f .
We can regard this iteration of the outer for-loop as the exploration of a search tree,
with each node of the search tree indexed by some subset of U. The root is indexed by U
itself, and every node has up to 2^k children, each child corresponding to a different way of selecting one of the two randomly constructed subsets for each colour. A node may have strictly fewer than 2^k children, as we use the oracle to prune the search tree (line
20), omitting the exploration of branches indexed by a subset of U that does not contain
any witness (colourful or otherwise). Note that the search tree defined in this way has
depth at most ⌈log n⌉: at each level, the size of each colour-class in the indexing subset
is halved (up to integer rounding).
In this search tree model of the algorithm, each node of the search tree corresponds
to an iteration of the while-loop, and vice versa. Thus, in order to bound the expected
number of iterations of the while-loop, it suffices to bound the expected number of nodes
in the search tree.
Our oracle-based pruning method means that we can associate with every node v
of the search tree some representative witness Wv (not necessarily colourful), such that
Wv is entirely contained in the subset of U which indexes v. (Note that the choice of
representative witness for a given node need not be unique.) We know that in total there
are N witnesses; our strategy is to bound the expected number of nodes, at each level of
the search tree, for which any given witness can be the representative.
For a given witness W , we define a random variable XW,d to be the number of nodes
at depth d (where the root has depth 0, and children of the root have depth 1, etc.) for
which W could be the representative witness. Since every node has some representative
witness, it follows that the total number of nodes in the search tree is at most
∑_{W a witness} ∑_{d=0}^{⌈log n⌉} X_{W,d}.
Hence, by linearity of expectation, the expected number of nodes in the search tree is at most
∑_{W a witness} ∑_{d=0}^{⌈log n⌉} E[X_{W,d}] ≤ N · ∑_{d=0}^{⌈log n⌉} max_{W a witness} E[X_{W,d}].
In the remainder of the proof, we argue that E[XW,d ] ≤ 1 for all W and d, which will
give the required result.
Observe first that, if W is in fact a colourful witness with respect to f , then XW,d = 1
for every d: given a node whose indexing set contains W , exactly one of its children will
be indexed by a set that contains W . So we will assume from now on that W intersects
precisely ℓ colour classes, where ℓ < k.
If a given node is indexed by a set that contains W, we claim that the probability that W is contained in the set indexing at least one of its children is at most (1/2)^{k−ℓ}. For this to happen, it must be that for each colour i, all elements of W having colour i are assigned to the same set in the random partition. If c_i elements in W have colour i, the probability of this happening for colour i is at most (1/2)^{c_i−1} (the first vertex of colour i can be assigned to either set, and each subsequent vertex has probability at most 1/2 of being assigned to this same set). Since the random partitions for each colour class are independent, the probability that the witness W survives is at most
∏_{i : W ∩ f^{−1}(i) ≠ ∅} (1/2)^{c_i − 1} = (1/2)^{k − |{i : W ∩ f^{−1}(i) ≠ ∅}|} = (1/2)^{k − ℓ}.
Moreover, if W is contained in the set indexing at least one of the child nodes, it will be contained in the indexing sets for exactly 2^{k−ℓ} child nodes: we must select the correct subset for each colour-class that intersects W, and can choose arbitrarily for the remaining k − ℓ colour classes. Hence, for each node indexed by a set that contains W, the expected number of children which are also indexed by sets containing W is at most (1/2)^{k−ℓ} · 2^{k−ℓ} = 1.
We now prove by induction on d that E [XW,d ] ≤ 1 (in the case that W is not colourful).
The base case for d = 0 is trivial (as there can only be one node at depth 0), so suppose
that d > 0 and that the result holds for smaller values. Then, if E[Y |Z = s] is the
conditional expectation of Y given that Z = s,
E[X_{W,d}] = ∑_{t≥0} E[X_{W,d} | X_{W,d−1} = t] · P[X_{W,d−1} = t]
          ≤ ∑_{t≥0} t · P[X_{W,d−1} = t]
          = E[X_{W,d−1}]
          ≤ 1,
by the inductive hypothesis, as required. Hence E[XW,d ] ≤ 1 for any witness W , which
completes the proof.
By linearity of expectation, it then follows that the expected total number of executions of the while loop will be at most |F| · N(1 + ⌈log n⌉), and hence that the expected number of oracle calls made during the execution of the algorithm is at most e^{k+o(k)} · log² n · N. Moreover, if an oracle call can be executed in time g(k) · n^{O(1)} for some computable function g, then the expected total running time of the algorithm is
e^{k+o(k)} · g(k) · n^{O(1)} · N,
as required.
5 Using a randomised oracle
In this section we show that the method described in Section 4 will in fact work almost
as well if we only have access to a randomised decision oracle, thus proving Theorem
1.2. The randomised decision procedures in [4, 5] only have one-sided errors, but for the
sake of generality we consider the effect of both false positives and false negatives on our
algorithm.
False positives and false negatives will affect the behaviour of the algorithm in different
ways. If the decision procedure gives false positives then, provided we add a check
immediately before outputting a supposed witness that it really is a witness, the algorithm
is still sure to output every witness exactly once; however, we will potentially waste time
exploring unfruitful branches of the search tree due to false positives, so the expected
running time of the algorithm will increase. If, on the other hand, our oracle returns
false negatives, then this will not increase the expected running time; however, in this
case, we can no longer be sure that we will find every witness as false negatives might
cause us to prune the search tree prematurely. We will show, however, that we can still
enumerate approximately in this case.
Before turning our attention to the specific effects of false positives and false negatives on the algorithm, we observe that, provided our randomised oracle returns the
correct answer with probability greater than a half, we can obtain a decision procedure
with a much smaller failure probability by making repeated oracle calls. We make the
standard assumption that the events corresponding to each oracle call returning an error
are independent.
Lemma 5.1. Let c > 1/2 be a fixed constant, and let ǫ > 0. Suppose that we have access to a randomised oracle for the decision version of a self-contained k-witness problem which, on each call, returns the correct answer independently with probability at least c. Then there is a decision procedure for the problem, making O(k + log log n + log ǫ^{−1}) calls to this oracle, such that:
1. the probability of obtaining a false positive is at most 2^{−k}, and
2. the probability of obtaining a false negative is at most ǫ/(⌈log n⌉ + 1).
Proof. Our procedure is as follows: we make t oracle calls (where t is a value to be determined later) and output whatever is the majority answer from these calls. We need to choose t large enough to ensure that the probability that the majority answer is incorrect is at most δ := min{2^{−k}, ǫ/(⌈log n⌉ + 1)}.
The probability that we obtain the correct answer from a given oracle call is at least c, so the number of correct answers we obtain out of t trials is bounded below by the random variable X, where X has distribution Bin(t, c). Thus E[X] = tc. We will return the correct answer so long as X > t/2.
Using a Chernoff bound, we can see that
P[X ≤ t/2] = P[X ≤ tc · 1/(2c)]
          = P[X ≤ tc(1 − (2c − 1)/(2c))]
          ≤ exp(−(1/2) · ((2c − 1)/(2c))² · tc)
          = exp(−(2c − 1)² t / (8c)).
It is enough to ensure that this is at most δ, which we achieve if
−(2c − 1)² t / (8c) < ln δ   ⟺   t > −8c ln δ / (2c − 1)²,
so we can take t = O(log δ^{−1}). Thus the number of oracle calls required is
O( max{ log((2^{−k})^{−1}), log((ǫ/(⌈log n⌉ + 1))^{−1}) } ) = O(k + log log n + log ǫ^{−1}),
as required.
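The amplification step in the proof is the standard majority-vote trick; a sketch (base_oracle stands for the given randomised inclusion oracle, and t is chosen according to the bound just derived):

import math

def amplified_inc_ora(X, U, k, base_oracle, c, delta):
    # Call the randomised oracle t = O(log(1/delta)) times, where each call is
    # correct with probability at least c > 1/2, and return the majority answer.
    t = max(1, math.ceil(-8 * c * math.log(delta) / (2 * c - 1) ** 2))
    ones = sum(base_oracle(X, U, k) for _ in range(t))
    return 1 if 2 * ones > t else 0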
We now show that, if the probability that our oracle gives a false positive is sufficiently
small, then such errors do not increase the expected running time of Algorithm 1 too
much. Just as when bounding the expected running time in Section 4.2, it suffices to
bound the expected number of iterations of the while loop corresponding to a specific
colouring f in our family F of hash functions.
Lemma 5.2. Suppose that the probability that the oracle returns a false positive is at most min{2^{−k}, 1/(⌈log n⌉ + 1)}. Then the expected number of iterations of the while-loop in any given iteration of the outer for-loop is at most O(N · log² n), where N is the total number of witnesses in the instance.
Proof. We fix an arbitrary f ∈ F, and for the remainder of the proof we restrict our attention to the iteration of the outer for-loop corresponding to f. As in the proof of Lemma 4.3, we can regard this iteration of the outer for-loop as the exploration of a search tree, and it suffices to bound the expected number of nodes in the search tree.
We can associate with each node of the search tree some subset of the universe, and we prune the search tree in such a way that we only have a node corresponding to a subset A of the universe if a call to the oracle with input A has returned yes. This means that for the node corresponding to the set A, either there is some representative witness W ⊆ A, or the oracle gave a false positive. We call a node good if it has some
representative witness, and bad if it is the result of a false positive. We already bounded
the expected number of good nodes in the proof of Lemma 4.3, so it remains to show
that the expected number of bad nodes is not too large.
We will assume initially that there is at least one witness, and so the root of the search
tree is a good node. Now consider a bad node v in the search tree; v must have some
ancestor u in the search tree such that u is good (note that the root will always be such
an ancestor in this case). Since the subset of the universe associated with any node is
a subset of that associated with its parent, no bad vertex can have a good descendant.
Thus, any path from the root to the bad node v must consist of a segment of good nodes
followed by a segment of bad nodes; we can therefore associate with every bad node v a
unique good node good(v) such that good(v) is the last good node on the path from the
root to v. In order to bound the expected number of bad nodes in the tree, our strategy
is to bound, for each good node u, the number of bad nodes v such that good(v) = u.
As in the proof of Lemma 4.3, we will write XW,d for the number of nodes at depth d
for which W is the representative witness. For c > d, we further define YW,d,c to be the
number of bad nodes v such that v is at depth c, good(v) is at depth d, and W is the
representative witness for good(v).
Since every node can have at most 2^k children, and the probability that the oracle gives a false positive is at most 2^{−k}, the expected number of bad children of any node is at most one. Thus we see that
E[Y_{W,d,d+1}] = ∑_{t≥0} E[Y_{W,d,d+1} | X_{W,d} = t] · P[X_{W,d} = t]
             ≤ ∑_{t≥0} t · P[X_{W,d} = t]
             = E[X_{W,d}].
Observe also that if u and w are bad nodes such that u is the child of w, then good(u) = good(w) (and so good(u) and good(w) are at the same depth and have the same representative witness). For c > d + 1 we can then argue inductively:
E[Y_{W,d,c}] = ∑_{t≥0} E[Y_{W,d,c} | Y_{W,d,c−1} = t] · P[Y_{W,d,c−1} = t]
            ≤ ∑_{t≥0} t · P[Y_{W,d,c−1} = t]
            = E[Y_{W,d,c−1}]
            ≤ E[X_{W,d}].
15
We can therefore bound the expected number of nodes in the search tree by

E[ Σ_{W a witness} Σ_{d=0}^{⌈log n⌉} ( X_{W,d} + Σ_{c=d+1}^{⌈log n⌉} Y_{W,d,c} ) ]
  = Σ_{W a witness} Σ_{d=0}^{⌈log n⌉} ( E[X_{W,d}] + Σ_{c=d+1}^{⌈log n⌉} E[Y_{W,d,c}] )
  ≤ Σ_{W a witness} Σ_{d=0}^{⌈log n⌉} ( E[X_{W,d}] + Σ_{c=d+1}^{⌈log n⌉} E[X_{W,d}] )
  = Σ_{W a witness} Σ_{d=0}^{⌈log n⌉} (⌈log n⌉ − d + 1) · E[X_{W,d}].
Since we know from the proof of Lemma 4.3 that E[X_{W,d}] ≤ 1, the expected number of
nodes is at most

Σ_{W a witness} Σ_{d=0}^{⌈log n⌉} (⌈log n⌉ − d + 1) = N · Σ_{i=1}^{⌈log n⌉+1} i = N · (⌈log n⌉ + 1)(⌈log n⌉ + 2) / 2 = O(N log^2 n),
as required. This completes the proof in the case that the instance contains at least one
witness.
If there is in fact no witness in the instance, we know that there are no good
nodes in the tree. Moreover, the expected number of bad nodes at depth 0 is at most
1/ (⌈log n⌉ + 1) (the probability that the oracle returns a false positive). Since we have already argued that the expected number of bad children of any node is at most 1, it follows
that the expected number of bad nodes at each level is at most 1/ (⌈log n⌉ + 1), and so
the total expected number of bad nodes is at most (⌈log n⌉ + 1) · 1/(⌈log n⌉ + 1) = 1.
To complete the proof of Theorem 1.2, it remains to show that, so long as the probability that the oracle returns a false negative is sufficiently small, our algorithm will
output any given witness with high probability.
Lemma 5.3. Fix ǫ ∈ (0, 1), and suppose that the probability that the oracle returns a false
negative is at most ǫ/(⌈log n⌉ + 1). Then, for any witness W, the probability that the algorithm
does not output W is at most ǫ.
Proof. By construction of F , we know that there is some f ∈ F such that W is colourful
with respect to f. We will now restrict our attention to the iteration of the outer for-loop
corresponding to f ; it suffices to demonstrate that we will output W during this iteration
with probability at least 1 − ǫ.
If we obtain the correct answer from each oracle call, we are sure to output W . The
only way we will fail to output W is if our oracle gives us an incorrect answer on at
least one occasion when it is called with input V ⊇ W . This can either happen in line 1
when we make the initial check that we have a yes-instance, or when we check whether a
subset is still a yes-instance in line 20. Note that we execute line 20 with A_j = W at most
⌈log n⌉ times, so the total number of times we call INC-ORA(V, U, k) with some V ⊇ W
during the iteration of the outer for-loop corresponding to f is at most ⌈log n⌉ + 1. By
the union bound, the probability that we obtain a false negative on at least one of these
calls is at most
ǫ
(⌈log n⌉ + 1) ·
= ǫ,
⌈log n⌉ + 1
as required.
6 Application to counting
There is a close relationship between the problems of counting and enumerating all witnesses in a k-witness problem, since any enumeration algorithm can very easily be adapted
into an algorithm that counts the witnesses instead of listing them. However, in the case
that the number N of witnesses is large, an enumeration algorithm necessarily takes time
at least Ω(N), whereas we might hope for much better if our goal is simply to determine
the total number of witnesses.
The family of self-contained k-witness problems studied here includes subgraph problems, whose parameterised complexity from the point of view of counting has been a
rich topic for research in recent years [14, 18, 19, 10, 11, 22, 17]. Many such counting problems, including those whose decision problem belongs to FPT, are known to
be #W[1]-complete (see [15] for background on the theory of parameterised counting
complexity). Positive results in this setting typically exploit structural properties of the
graphs involved (e.g. small treewidth) to design (approximate) counting algorithms for
inputs with these properties, avoiding any dependence on N [2, 3, 18].
In this section we demonstrate how our enumeration algorithms can be adapted to
give efficient (randomised) algorithms to solve the counting version of a self-contained
k-witness problem whenever the total number of witnesses is small. This complements
the fact that a simple random sampling algorithm can be used for approximate counting
when the number of witnesses is very large [22, Lemma 3.4], although there remain many
situations which are not covered by either result.
We begin with the case in which we assume access to a deterministic oracle for the
decision problem.
Theorem 6.1. Let Π be a self-contained k-witness problem, and suppose that 0 < δ ≤ 1/2
and M ∈ N. Then there exists a randomised algorithm which makes at most
e^{k+o(k)} · log^2 n · M · log(δ^{-1}) calls to a deterministic decision oracle for Π, and
1. if the number of witnesses in the instance of Π is at most M, outputs with probability
at least 1 − δ the exact number of witnesses in the instance;
2. if the number of witnesses in the instance of Π is strictly greater than M, always
outputs “More than M.”
Moreover, if there is an algorithm solving the decision version of Π in time g(k) · n^{O(1)} for
some computable function g, then the expected running time of the randomised algorithm
is bounded by e^{k+o(k)} · g(k) · n^{O(1)} · M · log(δ^{-1}).
Proof. Note that Algorithm 1 can very easily be adapted to give a randomised counting
algorithm which runs in the same time as the enumeration algorithm but, instead of
listing all witnesses, simply outputs the total number of witnesses when it terminates.
We may compute explicitly the expected running time of our randomised enumeration
algorithm (and hence its adaptation to a counting algorithm) for a given self-contained
k-witness problem Π in terms of n, k and the total number of witnesses, N. We will write
T (Π, n, k, N) for this expected running time.
Now consider an algorithm A, in which we run our randomised counting algorithm for
at most 2T (Π, n, k, M) steps; if the algorithm has terminated within this many steps, A
outputs the value returned, otherwise A outputs “FAIL”. Since our randomised counting
algorithm is always correct (but may take much longer than the expected time), we know
that if A outputs a numerical value then this is precisely the number of witnesses in our
problem instance. If the number of witnesses is in fact at most M, then the expected
running time of the randomised counting algorithm is bounded by T (Π, n, k, M), so by
Markov’s inequality the probability that it terminates within 2T (Π, n, k, M) steps is at
least 1/2. Thus, if we run A on an instance in which the number of witnesses is at most
M, it will output the exact number of witnesses with probability at least 1/2.
To obtain the desired probability of outputting the correct answer, we repeat A a
total of ⌈log(δ^{-1})⌉ times. If any of these executions of A terminates with a numerical
answer that is at most M, we output this answer (which must be the exact number of
witnesses by the argument above); otherwise we output “More than M.”
If the total number of witnesses is in fact less than or equal to M, we will output the
exact number of witnesses unless A outputs “FAIL” every time it is run. Since in this
case A outputs “FAIL” independently with probability at most 1/2 each time we run it,
the probability that we output "FAIL" on every one of the ⌈log(δ^{-1})⌉ repetitions is at
most (1/2)^{⌈log(δ^{-1})⌉} ≤ 2^{log δ} = δ. Finally, note that if the number of witnesses is strictly
greater than M, we will always output “More than M” since every execution of A must
in this case return either “FAIL” or a numerical answer greater than M.
The total running time is at most O(log(δ^{-1}) · T(Π, n, k, M)) and hence, using the
bound on the running time of our enumeration algorithm from Theorem 1.1, is bounded
by e^{k+o(k)} · g(k) · n^{O(1)} · M · log(δ^{-1}), as required.
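The wrapper A and its repetition can be written down directly. The following Python sketch illustrates the construction under stated assumptions: count_witnesses_randomised (the counting adaptation of Algorithm 1, here assumed to return None if it exceeds its step budget and the exact count otherwise) and expected_steps (standing in for T(Π, n, k, M)) are hypothetical helpers, not functions defined in this paper.

    import math

    def amplified_count(instance, k, M, delta,
                        count_witnesses_randomised, expected_steps):
        # Hedged sketch of the amplification wrapper from the proof of Theorem 6.1.
        # Both callables are hypothetical stand-ins supplied by the caller.
        budget = 2 * expected_steps(instance, k, M)      # Markov: finishes w.p. >= 1/2 when N <= M
        repetitions = math.ceil(math.log2(1.0 / delta))  # ceil(log(1/delta)) independent runs of A
        for _ in range(repetitions):
            result = count_witnesses_randomised(instance, k, step_budget=budget)
            if result is not None and result <= M:
                # The counting algorithm is always correct when it terminates,
                # so any numerical answer at most M is the exact witness count.
                return result
        # Every run either failed or reported a count exceeding M.
        return "More than M"

Returning the first numerical answer at most M is safe because, as argued above, the counting algorithm is always correct whenever it terminates.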
Finally, we prove an analogous result in the case that we only have access to a randomised oracle.
Theorem 6.2. Let Π be a self-contained k-witness problem, suppose that 0 < ǫ < 1,
0 < δ ≤ 1/2 and M ∈ N, and that we have access to a randomised oracle for the decision
problem whose error probability is at most some constant c < 1/2. Then there exists a
randomised algorithm which makes at most e^{k+o(k)} · log^3 n · M · log(δ^{-1}) calls to this oracle
and, with probability at least 1 − δ, if the total number of witnesses in the instance is
exactly N, does the following:
1. if N ≤ M, outputs a number N ′ such that (1 − ǫ)N ≤ N ′ ≤ N;
2. if N ≥ M, outputs either a number N ′ such that (1 − ǫ)N ≤ N ′ ≤ M or “More
than M.”
Moreover, if there is a randomised algorithm solving the decision version of Π (with error
probability at most c < 1/2) in time g(k) · n^{O(1)} for some computable function g, then the
expected running time of the randomised counting algorithm is bounded by e^{k+o(k)} · g(k) ·
n^{O(1)} · M · log(δ^{-1}).
Proof. We claim that it suffices to demonstrate that there is a procedure which makes at
most e^{k+o(k)} · log^3 n · M oracle calls and, with probability greater than 1/2, outputs
(a) a number N ′ such that (1 − ǫ)N ≤ N ′ ≤ N if N ≤ M, and
(b) either a number N ′ such that (1 − ǫ)N ≤ N ′ ≤ N or “FAIL” if N > M.
Given such a procedure, we run it log(δ^{-1}) times; if the largest numerical value returned
on any run (if any) is at most M then we return this maximum value, otherwise we return
"More than M." Conditions (a) and (b) ensure that the procedure never returns a value
strictly greater than N, so the largest numerical value returned (if any) is sure to be the
best estimate. Therefore we only return an answer that does not meet the conditions of
the theorem if all of the executions of the procedure fail to return an answer that meets
conditions (a) and (b), which happens with probability at most 2^{−log(δ^{-1})} = δ.
To obtain the required procedure, we modify the enumeration algorithm used to prove
Theorem 1.2 so that it counts the total number of witnesses found rather than listing
them; we will run this randomised enumeration procedure with error probability ǫ^2/4. We
can compute explicitly the expected running time of this adapted algorithm for a given
k-witness problem Π in terms of n, k, N and ǫ; we write T(Π, n, k, ǫ, N) for this expectation.
We will allow the adapted algorithm to run for time 4T(Π, n, k, ǫ, M), outputting "FAIL"
if we have not terminated within this time.
There are two ways in which the procedure could fail to meet conditions (a) and (b).
First of all, the adapted enumeration algorithm might not terminate within the required
time. Secondly, it might terminate but with an answer N′ where N′ < (1 − ǫ)N (recall
that the enumeration algorithm never repeats a witness, and that we can verify each witness
deterministically, ensuring that we only ever output a subset of the witnesses actually present
in the instance). In the remainder of the proof, we will argue that the probability of each
of these two outcomes is strictly less than 1/4, so the probability of avoiding both is greater
than 1/2, as required.
First, we bound the probability that the algorithm does not terminate within the
required time. By Markov’s inequality, the probability that a random variable takes a
value greater than four times its expectation is less than 1/4, so we see immediately that if
the total number of witnesses is at most M then the probability that the algorithm fails
to terminate within the permitted time is less than 1/4.
Next, we need to bound the probability that the procedure outputs a value N′ <
(1 − ǫ)N. Let the random variable Z denote the number of witnesses omitted by the
procedure. Then E[Z] ≤ ǫ^2 N/4, so by Markov's inequality we have

P[Z > ǫN] ≤ (ǫ^2 N/4) / (ǫN) = ǫ/4 < 1/4,

as required. This completes the argument that the procedure outputs an answer that
meets conditions (a) and (b) with probability greater than 1/2, and hence the proof.
7 Conclusions and open problems
Many well-known combinatorial problems satisfy the definition of the k-witness problems
considered in this paper. We have shown that, given access to a deterministic decision
oracle for the inclusion version of a k-witness problem (answering the question “does this
subset of the universe contain at least one witness?”), there is a randomised algorithm
which is guaranteed to enumerate all witnesses and whose expected number of oracle
calls is at most e^{k+o(k)} · log^2 n · N, where N is the total number of witnesses. Moreover,
if the decision problem belongs to FPT (as is the case for many self-contained k-witness
problems), our enumeration algorithm is an expected-output-fpt algorithm.
We have also shown that, in the presence of only a randomised decision oracle, we
can use the same strategy to produce a list of witnesses so that the probability of any
given witness appearing in the list is at least 1 − ǫ, with only a factor log n increase in
the expected running time. This result initiates the study of algorithms for approximate
enumeration.
Our results also have implications for counting the number of witnesses. In particular, if
the total number of witnesses is small (at most f(k) · n^{O(1)} for some computable function
f) then our enumeration algorithms can easily be adapted to give fpt-algorithms that
will, with high probability, calculate a good approximation to the number of witnesses
in an instance of a self-contained k-witness problem (in the setting where we have a
deterministic decision oracle, we in fact obtain the exact answer with high probability).
The resulting counting algorithms satisfy the conditions for an FPTRAS (Fixed Parameter
Tractable Randomised Approximation Scheme, as defined in [3]), and in the setting with
a deterministic oracle we do not even need the full flexibility that this definition allows:
with probability 1 − δ we will output the exact number of witnesses, rather than just an
answer that is within a factor of 1 ± ǫ of this quantity.
While the enumeration problem can be solved in a more straightforward fashion for
self-contained k-witness problems that have certain additional properties, we demonstrated that several self-contained k-witness problems do not have these properties, unless
FPT=W[1]. A natural line of enquiry arising from this work would be the characterisation
of those self-contained k-witness problems that do have the additional properties, namely
those for which an fpt-algorithm for the decision version gives rise to an fpt-algorithm for
the extension version of the decision problem.
Our approach assumed the existence of an oracle to determine whether a given subset
of the universe contains a witness of size exactly k. An interesting direction for future
work would be to explore the extent to which the same techniques can be used if we only
have access to a decision procedure that tells us whether some subset of the universe
contains a witness of size at most k.
Another key question that remains open after this work is whether the existence of
an fpt-algorithm for the inclusion version of a k-witness problem is sufficient to guarantee the existence of an (expected-)delay-fpt or (expected-)incremental-fpt algorithm for
the enumeration problem. Finally, it would be interesting to investigate whether the
randomised algorithm given here can be derandomised.
References
[1] Noga Alon, Raphael Yuster, and Uri Zwick, Color-coding, Journal of the ACM 42
(1995), no. 4, 844–856.
[2] N. Alon, P. Dao, I. Hajirasouliha, F. Hormozdiari, S. C. Sahinalp, Biomolecular
network motif counting and discovery by color coding, Bioinformatics 24(13), 241–
249, 2008.
[3] V. Arvind and Venkatesh Raman, Approximation algorithms for some parameterized
counting problems, ISAAC 2002, LNCS, vol. 2518, Springer-Verlag Berlin Heidelberg,
2002, pp. 453–464.
[4] Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto, Narrow
sieves for parameterized paths and packings, arXiv:1007.1161 [], 2010.
[5] Andreas Björklund, Petteri Kaski, and Lukasz Kowalik, Probably Optimal Graph
Motifs, 30th International Symposium on Theoretical Aspects of Computer Science
(STACS 2013), LIPIcs, vol. 20, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik,
Dagstuhl, Germany, 2013, pp. 20–31.
[6] Andreas Björklund, Petteri Kaski, and Lukasz Kowalik, Fast witness extraction using a decision oracle, Algorithms (ESA 2014), LNCS, vol. 8737, Springer Berlin
Heidelberg, 2014, pp. 149–160.
[7] Andreas Björklund, Petteri Kaski, Lukasz Kowalik, and Juho Lauri, Engineering
motif search for large graphs, 2015 Proc. of the Seventeenth Workshop on Algorithm
Engineering and Experiments (ALENEX 2015), SIAM, 2015, pp. 104–118.
[8] Nadia Creignou, Raı̈da Ktari, Arne Meier, Julian-Steffen Müller, Frédéric Olive, and
Heribert Vollmer, Parameterized enumeration for modification problems, Language
and Automata Theory and Applications (LATA 2015), LNCS, vol. 8977, Springer
International Publishing, 2015, pp. 524–536.
[9] Nadia Creignou, Arne Meier, Julian-Steffen Müller, Johannes Schmidt, and Heribert
Vollmer, Paradigms for parameterized enumeration, Mathematical Foundations of
Computer Science (MFCS 2013), LNCS, vol. 8087, Springer Berlin Heidelberg, 2013,
pp. 290–301.
[10] Radu Curticapean, Counting matchings of size k is #W[1]-hard, Automata, Languages, and Programming (ICALP 2013), LNCS, vol. 7965, Springer Berlin Heidelberg, 2013, pp. 352–363.
[11] Radu Curticapean and Dániel Marx, Complexity of counting subgraphs: Only the
boundedness of the vertex-cover number counts, 55th Annual IEEE Symposium on
Foundations of Computer Science (FOCS 2014), 2014.
[12] Rodney G. Downey and Michael R. Fellows, Fundamentals of parameterized complexity, Springer London, 2013.
[13] Henning Fernau, On parameterized enumeration, Computing and Combinatorics
(COCOON 2002), LNCS, vol. 2387, Springer Berlin Heidelberg, 2002, pp. 564–573.
[14] J. Flum and M. Grohe, The parameterized complexity of counting problems, SIAM
Journal on Computing 33 (2004), no. 4, 892–922.
[15] J. Flum and M. Grohe, Parameterized complexity theory, Springer, 2006.
[16] B. Gelbord, Graphical techniques in intrusion detection systems, Information Networking, 2001. Proc. 15th International Conference on, 2001, pp. 253–258.
[17] Mark Jerrum and Kitty Meeks, The parameterised complexity of counting even and
odd induced subgraphs, Combinatorica, 2016, doi:10.1007/s00493-016-3338-5.
[18] Mark Jerrum and Kitty Meeks, The parameterised complexity of counting connected
subgraphs and graph motifs, Journal of Computer and System Sciences 81 (2015),
no. 4, 702 – 716.
[19] Mark Jerrum and Kitty Meeks, Some hard families of parameterised counting problems, ACM Transactions on Computation Theory 7 (2015), no. 3.
[20] Samir Khuller and Vijay V. Vazirani, Planar graph coloring is not self-reducible,
assuming P ≠ NP, Theoretical Computer Science 88 (1991), no. 1, 183–189.
[21] Eugene L. Lawler, A procedure for computing the k best solutions to discrete optimization problems and its application to the shortest path problem, Management
Science 18 (1972), no. 7, 401–405.
[22] Kitty Meeks, The challenges of unbounded treewidth in parameterised subgraph counting problems, Discrete Applied Mathematics 198 (2016), 170 – 194.
[23] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, Network
motifs: Simple building blocks of complex networks, Science 298 (2002), no. 5594,
824–827.
[24] M. Naor, L. J. Schulman and A. Srinivasan, Splitters and near-optimal derandomization, Proceedings of IEEE 36th Annual Foundations of Computer Science (FOCS
1995), Milwaukee, WI, 1995, pp. 182-191. doi: 10.1109/SFCS.1995.492475
[25] V. Sekar, Y. Xie, D.A. Maltz, M.K. Reiter, and H. Zhang, Toward a framework for
internet forensic analysis, Third Workshop on Hot Topics in Networking (HotNetsIII), 2004.
[26] S. Staniford-Chen, S. Cheung, R. Crawford, M. Dilger, J. Frank, J. Hoagland,
K. Levitt, C. Wee, R. Yip, and D. Zerkle, GrIDS - A graph based intrusion detection system for large networks, In Proc. of the 19th National Information Systems
Security Conference, 1996, pp. 361–370.
[27] C.P. Schnorr, Optimal algorithms for self-reducible problems, Proc. of the 3rd ICALP,
Edinburgh University Press, 1976, pp. 322 – 337.
IEEE TRANSACTIONS ON MULTIMEDIA , VOL. ?, NO. ?, ? 2018
Predicting Visual Features from Text for
Image and Video Caption Retrieval
arXiv:1709.01362v2 [] 29 Jan 2018
Jianfeng Dong, Xirong Li*, and Cees G. M. Snoek
Abstract—This paper strives to find amidst a set of sentences
the one best describing the content of a given image or video.
Different from existing works, which rely on a joint subspace for
their image and video caption retrieval, we propose to do so in a
visual space exclusively. Apart from this conceptual novelty, we
contribute Word2VisualVec, a deep neural network architecture
that learns to predict a visual feature representation from textual
input. Example captions are encoded into a textual embedding
based on multi-scale sentence vectorization and further transferred into a deep visual feature of choice via a simple multi-layer
perceptron. We further generalize Word2VisualVec for video
caption retrieval, by predicting from text both 3-D convolutional
neural network features as well as a visual-audio representation. Experiments on Flickr8k, Flickr30k, the Microsoft Video
Description dataset and the very recent NIST TrecVid challenge
for video caption retrieval detail Word2VisualVec’s properties,
its benefit over textual embeddings, the potential for multimodal
query composition and its state-of-the-art results.
Index Terms—Image and video caption retrieval.
I. INTRODUCTION
THIS paper attacks the problem of image and video
caption retrieval, i.e., finding amidst a set of possible
sentences the one best describing the content of a given
image or video. Before the advent of deep learning based
approaches to feature extraction, an image or video is typically
represented by a bag of quantized local descriptors (known as
visual words) while a sentence is represented by a bag of
words. These hand-crafted features do not well represent the
visual and lingual modalities, and are not directly comparable.
Hence, feature transformations are performed on both sides
to learn a common latent subspace where the two modalities
are better represented and a cross-modal similarity can be
computed [1], [2]. This tradition continues, as the prevailing
image and video caption retrieval methods [3]–[8] prefer to
represent the visual and lingual modalities in a common
latent subspace. Like others before us [9]–[11], we consider
caption retrieval an important enough problem by itself, and
we question the dependence on latent subspace solutions. For
image retrieval by caption, recent evidence [12] shows that
a one-way mapping from the visual to the textual modality
outperforms the state-of-the-art subspace based solutions. Our
work shares a similar spirit but targets at the opposite direction,
*Corresponding author.
J. Dong is with the College of Computer Science and Technology, Zhejiang
University, Hangzhou 310027, China (e-mail: [email protected]).
X. Li is with the Key Lab of Data Engineering and Knowledge Engineering,
School of Information, Renmin University of China, Beijing 100872, China
(e-mail: [email protected]).
C. G. M. Snoek is with the Informatics Institute, University of Amsterdam,
Amsterdam 1098 XH, The Netherlands (e-mail: [email protected]).
Fig. 1. We propose to perform image and video caption retrieval in
a visual feature space exclusively. This is achieved by Word2VisualVec
(W2VV), which predicts visual features from text. As illustrated by the
(green) down arrow, a query image is projected into a visual feature space
by extracting features from the image content using a pre-trained ConvNet,
e.g., GoogleNet or ResNet. As demonstrated by the (black) up arrows, a
set of prespecified sentences are projected via W2VV into the same feature
space. We hypothesize that the sentence best describing the image content
will be the closest to the image in the deep feature space.
i.e., image and video caption retrieval. Our key novelty is
that we find the most likely caption for a given image or
video by looking for their similarity in the visual feature space
exclusively, as illustrated in Fig. 1.
From the visual side we are inspired by the recent progress
in predicting images from text [13], [14]. We also depart
from the text, but instead of predicting pixels, our model
predicts visual features. We consider features from deep
convolutional neural networks (ConvNet) [15]–[19]. These
neural networks learn a textual class prediction for an image
by successive layers of convolutions, non-linearities, pooling,
and full connections, with the aid of big amounts of labeled
images, e.g., ImageNet [20]. Apart from classification, visual
features derived from the layers of these networks are superior
representations for various challenges in vision [21]–[25] and
multimedia [26]–[30]. We also rely on a layered neural network architecture, but rather than predicting a class label for an
image, we strive to predict a deep visual feature from a natural
language description for the purpose of caption retrieval.
From the lingual side we are inspired by the encouraging
progress in sentence encoding by neural language modeling
for cross-modal matching [5]–[7], [31]–[33]. In particular,
word2vec [34] pre-trained on large-scale text corpora provides
distributed word embeddings, an important prerequisite for
vectorizing sentences towards a representation shared with
image [5], [31] or video [8], [35]. In [6], [7], a sentence
is fed as a word sequence into a recurrent neural network
(RNN). The RNN output at the last time step is taken as
the sentence feature, which is further projected into a latent
subspace. We employ word2vec and RNN as part of our
sentence encoding strategy as well. What is different is that we
continue to transform the encoding into a higher-dimensional
visual feature space via a multi-layer perceptron. As we predict
visual features rather than latent features from text, we call
our approach Word2VisualVec. While both visual and textual
modalities are used during training, Word2VisualVec performs
a mapping from the textual to the visual modality. Hence, at
run time, Word2VisualVec allows the caption retrieval to be
performed in the visual space.
We make the following three contributions in this paper:
• First, to the best of our knowledge we are the first to
solve the caption retrieval problem in the visual space. We
consider this counter-tradition approach promising thanks to
the effectiveness of deep learning based visual features which
are continuously improving. For cross-modal matching, we
consider it beneficial to rely on the visual space, instead of
a joint space, as it allows us to learn a one-way mapping from
natural language text to the visual feature space, rather than a
more complicated joint space.
• Second, we propose Word2VisualVec to effectively realize
the above proposal. Word2VisualVec is a deep neural network
based on multi-scale sentence vectorization and a multi-layer
perceptron. While its components are known, we consider their
combined usage in our overall system novel and effective to
transform a natural language sentence into a visual feature
vector. We consider prediction of several recent visual features [16], [18], [19] based on text, but the approach is general
and can, in principle, predict any deep visual feature it is
trained on.
• Third, we show how Word2VisualVec can be easily generalized to the video domain, by predicting from text both 3-D
convolutional neural network features [36] as well as a visual-audio representation including Mel Frequency Cepstral Coefficients [37]. Experiments on Flickr8k [38], Flickr30k [39], the
Microsoft Video Description dataset [40] and the very recent
NIST TrecVid challenge for video caption retrieval [41] detail
Word2VisualVec’s properties, its benefit over the word2vec
textual embedding, the potential for multimodal query composition and its state-of-the-art results.
Before detailing our approach, we first highlight in more
detail related work.
II. RELATED WORK
A. Caption Retrieval
Prior to deep visual features, methods for image caption
retrieval often resort to relatively complicated models to learn
a shared representation to compensate for the deficiency of
traditional low-level visual features. Hodosh et al. [38] leverage Kernel Canonical Correlation Analysis (CCA), finding
a joint embedding by maximizing the correlation between
the projected image and text kernel matrices. With deep
visual features, we observe an increased use of relatively light
embeddings on the image side. Using the fc6 layer of a pretrained AlexNet [15] as the image feature, Gong et al. show
that linear CCA compares favorably to its kernel counterpart
[3]. Linear CCA is also adopted by Klein et al. [5] for visual
embedding. More recent models utilize affine transformations
to reduce the image feature to a much shorter h-dimensional
vector, with the transformation optimized in an end-to-end
fashion within a deep learning framework [6], [7], [42].
Similar to the image domain, the state-of-the-art methods
for video caption retrieval also operate in a shared subspace
[8], [43], [44]. Xu et al. [8] propose to vectorize each subject-verb-object triplet extracted from a given sentence by a pretrained word2vec, and subsequently aggregate the vectors into
a sentence-level vector by a recursive neural network. A
joint embedding model projects both the sentence vector and
the video feature vector, obtained by temporal pooling over
frame-level features, into a latent subspace. Otani et al. [43]
improve upon [8] by exploiting web image search results
of an input sentence, which are deemed helpful for word
disambiguation, e.g., telling if the word “keyboard” refers to a
musical instrument or an input device for computers. To learn
a common multimodal representation for videos and text, Yu
et al. [44] use two distinct Long Short Term Memory (LSTM)
modules to encode the video and text modalities respectively.
They then employ a compact bilinear pooling layer to capture
implicit interactions between the two modalities.
Different from the existing works, we propose to perform
image and video caption retrieval directly in the visual space.
This change is important as it allows us to completely remove
the learning part from the visual side and focus our energy on
learning an effective mapping from natural language text to
the visual feature space.
B. Sentence Vectorization
To convert variably-sized sentences to fixed-sized feature
vectors for subsequent learning, bag-of-words (BoW) is arguably the most popular choice [3], [38], [45], [46]. A BoW
vocabulary has to be prespecified based on the availability of
words describing the training images. As collecting image-sentence pairs at a large scale is both labor intensive and
time consuming, the amount of words covered by BoW is
bounded. To overcome this limit, a distributional text embedding provided by word2vec [34] is gaining increased attention.
The word embedding matrix used in [8], [31], [43], [47] is
instantiated by a word2vec model pre-trained on large-scale
text corpora. In Frome et al. [31], for instance, the input text
is vectorized by averaging the word2vec vectors of its words.
Fig. 2. Word2VisualVec network architecture. The model first vectorizes an input sentence into a fixed-length vector by relying on bag-of-words, word2vec
and a GRU. The vector then goes through a multi-layer perceptron to produce the visual feature vector of choice, from a pre-trained ConvNet such as
GoogleNet or ResNet. The network parameters are learned from image-sentence pairs in an end-to-end fashion, with the goal of reconstructing from the input
sentence the visual feature vector of the image it is describing. We rely on the visual feature space for image and video caption retrieval.
Such a mean pooling strategy results in a dense representation
that could be less discriminative than the initial BoW feature.
As an alternative, Klein et al. [5] and their follow-up [42]
perform fisher vector pooling over word vectors.
Beside BoW and word2vec, we observe an increased use
of RNN-based sentence vectorization. Socher et al. design a
Dependency-Tree RNN that learns vector representations for
sentences based on their dependency trees [32]. Lev et al. [48]
propose RNN fisher vectors on the basis of [5], replacing the
Gaussian model by a RNN model that takes into account the
order of elements in the sequence. Kiros et al. [6] employ
an LSTM to encode a sentence, using the LSTM’s hidden
state at the last time step as the sentence feature. In a follow-up work, Vendrov et al. replace the LSTM by a Gated Recurrent Unit (GRU), which has fewer parameters to tune [7].
and its LSTM or GRU variants have demonstrated promising
results for generating visual descriptions [49]–[52], they tend
to be over-sensitive to word orders by design. Indeed Socher
et al. [32] suggest that for caption retrieval, models invariant
to surface changes, such as word order, perform better.
In order to jointly exploit the merits of the BoW, word2vec
and RNN based representations, we consider in this paper
multi-scale sentence vectorization. Ma et al. [4] have made
a first attempt in this direction. In their approach three multimodal ConvNets are trained on feature maps, formed by
merging the image embedding vector with word, phrase and
sentence embedding vectors. The relevance between an image
and a sentence is estimated by late fusion of the individual
matching scores. By contrast, we perform multi-scale sentence
vectorization in an early stage, by merging BoW, word2vec
and GRU sentence features and letting the model figure out
the optimal way for combining them. Moreover, at run time
the multi-modal network by [4] requires a query image to
be paired with each of the test sentences as the network
input. By contrast, our Word2VisualVec model predicts visual
features from text alone, meaning the vectorization can be
precomputed. An advantageous property for caption retrieval
on large-scale image and video datasets.
III. WORD2VISUALVEC
We propose to learn a mapping that projects a natural
language description into a visual feature space. Consequently,
the relevance between a given visual instance x and a specific
sentence q can be directly computed in this space. More formally, let φ(x) ∈ R^d be a d-dimensional visual feature vector.
A pretrained ConvNet, apart from its original mission of visual
class recognition, has now been recognized as an effective
visual feature extractor [21]. We follow this good practice,
instantiating φ(x) with a ConvNet feature vector. We aim for a
sentence representation r(q) ∈ R^d such that the similarity can
be expressed by the cosine similarity between φ(x) and r(q).
The proposed mapping model Word2VisualVec is designed to
produce r(q), as visualized in Fig. 2 and detailed next.
A. Architecture
Multi-scale sentence vectorization. To handle sentences of
varying length, we choose to first vectorize each sentence. We
propose multi-scale sentence vectorization that utilizes BoW,
word2vec and RNN based text encodings.
BoW is a classical text encoding method. Each dimension
in a BoW vector corresponds to the occurrence of a specific
word in the input sentence, i.e.,
s_bow(q) = (c(w_1, q), c(w_2, q), . . . , c(w_m, q)),   (1)
where c(w, q) returns the occurrence of word w in q, and
m is the size of a prespecified vocabulary. A drawback of
BoW is that its vocabulary is bounded by the words used in
the multi-modal training data, which is at a relatively small
scale compared to a text corpus containing millions of words.
Given faucet as a novel word, for example, “A little girl plays
with a faucet” will not have the main object encoded in its
BoW vector. Notice that setting a large vocabulary for BoW
is unhelpful, as words without training images will always
have zero value and thus will not be effectively modeled. To
compensate for such a loss, we further leverage word2vec.
By learning from a large-scale text corpus, the vocabulary of
word2vec is much larger than its BoW counterpart. We obtain
the embedding vector of the sentence by mean pooling over
its words, i.e.,
s_word2vec(q) := (1/|q|) Σ_{w∈q} v(w),   (2)
where v(w) denotes individual word embedding vectors, |q| is
the sentence length. Previous works employ word2vec trained
on web documents as their word embedding matrix [4], [31],
[49]. However, recent studies suggest that word2vec trained on
Flickr tags better captures visual relationships than its counterpart learned from web documents [53], [54]. We therefore
train a 500-dimensional word2vec model on English tags of
30 million Flickr images, using the skip-gram algorithm [34].
This results in a vocabulary of 1.7 million words.
Despite their effectiveness, the BoW and word2vec representations ignore word orders in the input sentence. As such,
they cannot discriminate between “a dog follows a person” and
“a person follows a dog”. To tackle this downside, we employ
an RNN, which is known to be effective for modeling long-term word dependency in natural language text. In particular,
we adopt a GRU [55], which has fewer parameters than LSTM and presumably requires less training data. At a specific time step t, let v_t be the embedding vector of the t-th word, obtained by performing a lookup on a word embedding matrix W_e. The GRU receives inputs from v_t and the previous hidden state h_{t−1}, and accordingly the new hidden state h_t is updated as follows,

z_t = σ(W_{zv} v_t + W_{zh} h_{t−1} + b_z),
r_t = σ(W_{rv} v_t + W_{rh} h_{t−1} + b_r),
h̃_t = tanh(W_{hv} v_t + W_{hh} (r_t ⊙ h_{t−1}) + b_h),
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t,   (3)

where z_t and r_t denote the update and reset gates at time t respectively, while W and b with specific subscripts are the weights and biases parameterizing the corresponding gates. The symbol ⊙ indicates element-wise multiplication, while σ(·) is the sigmoid activation function. We re-use the word2vec model previously trained on the Flickr tags to initialize W_e. The last hidden state h_{|q|} is taken as the RNN-based representation of the sentence.
Multi-scale sentence vectorization is obtained by concatenating the three representations, that is
s(q) = [s_bow(q), s_word2vec(q), h_{|q|}].   (4)
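As a rough illustration of Eqs. (1)-(4), the Python sketch below builds the three encodings and concatenates them. The toy vocabulary, the randomly initialised embedding table and GRU, and the dimensions (500 and 1,024) are assumptions made only for illustration; they are not the authors' trained components.

    import torch
    import torch.nn as nn

    vocab = ["a", "dog", "leaps", "over", "log", "man", "boat"]   # assumed BoW vocabulary
    word2idx = {w: i for i, w in enumerate(vocab)}
    embed_dim, gru_dim = 500, 1024                                # assumed sizes

    embedding = nn.Embedding(len(vocab), embed_dim)               # stand-in for the word embedding matrix W_e
    gru = nn.GRU(embed_dim, gru_dim, batch_first=True)

    def multi_scale_vector(sentence):
        words = [w for w in sentence.lower().split() if w in word2idx]
        idx = torch.tensor([[word2idx[w] for w in words]])        # shape (1, |q|)
        # Eq. (1): bag-of-words term counts over the prespecified vocabulary.
        bow = torch.zeros(len(vocab))
        for w in words:
            bow[word2idx[w]] += 1
        # Eq. (2): mean pooling of the word embedding vectors.
        vecs = embedding(idx)                                     # (1, |q|, embed_dim)
        w2v_mean = vecs.mean(dim=1).squeeze(0)
        # Eq. (3): GRU encoding; keep the hidden state at the last time step h_{|q|}.
        _, h_last = gru(vecs)
        h_last = h_last.squeeze(0).squeeze(0)
        # Eq. (4): concatenate the three representations.
        return torch.cat([bow, w2v_mean, h_last])

    s = multi_scale_vector("A dog leaps over a log")
    print(s.shape)   # len(vocab) + embed_dim + gru_dim

In the actual model the embedding table would be initialised from the Flickr-tag word2vec and the GRU trained end-to-end together with the rest of the network, as described in the text.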
Text transformation via a multi-layer perceptron. The sentence vector s(q) goes through subsequent hidden layers until it reaches the output layer r(q), which resides in the visual feature space. More concretely, by applying an affine transformation on s(q), followed by an element-wise ReLU activation σ(z) = max(0, z), we obtain the first hidden layer h_1(q) of an l-layer Word2VisualVec as:

h_1(q) = σ(W_1 s(q) + b_1).   (5)

The following hidden layers are expressed by:

h_i(q) = σ(W_i h_{i−1}(q) + b_i),  i = 2, . . . , l − 1,   (6)

where W_i parameterizes the affine transformation of the i-th hidden layer and b_i is a bias term. In a similar manner, we compute the output layer r(q) as:

r(q) = σ(W_l h_{l−1}(q) + b_l).   (7)

Putting it all together, the learnable parameters are represented by θ = [W_e, W_{z·}, W_{r·}, W_{h·}, b_z, b_r, b_h, W_1, b_1, . . . , W_l, b_l].
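A minimal sketch of the text transformation of Eqs. (5)-(7) could look as follows; the hidden size, output size and dropout rate are illustrative assumptions, and the module is a stand-in rather than the authors' implementation.

    import torch.nn as nn

    class TextToVisual(nn.Module):
        # Multi-layer perceptron with ReLU activations mapping s(q) into the
        # visual feature space, mirroring Eqs. (5)-(7) in spirit only.
        def __init__(self, in_dim, hidden_dim=2048, out_dim=2048, n_hidden=1):
            super().__init__()
            layers, prev = [], in_dim
            for _ in range(n_hidden):                             # hidden layers, Eqs. (5)-(6)
                layers += [nn.Linear(prev, hidden_dim), nn.ReLU(), nn.Dropout(0.2)]
                prev = hidden_dim
            layers += [nn.Linear(prev, out_dim), nn.ReLU()]       # output layer, Eq. (7)
            self.net = nn.Sequential(*layers)

        def forward(self, s_q):
            return self.net(s_q)                                  # r(q), in the visual feature space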
In principle, the learning capacity of our model grows
as more layers are used. This also means more solutions
exist which minimize the training loss, yet are suboptimal
for unseen test data. We analyze in the experiments how
deep Word2VisualVec can go without losing its generalization
ability.
B. Learning algorithm
Objective function. For a given image, different persons
might describe the same visual content with different words.
For example, “A dog leaps over a log” versus “A dog is leaping
over a fallen tree”. The verb leap in different tenses essentially
describe the same action, while a log and a fallen tree can have
similar visual appearance. Projecting the two sentences into the
same visual feature space has the effect of implicitly finding
such correlations. In order to reconstruct the visual feature
φ(x) directly from q, we use Mean Squared Error (MSE) as
our objective function. We have also experimented with the
marginal ranking loss, as commonly used in previous works
[31], [56]–[58], but found MSE yields better performance.
The MSE loss l_mse for a given training pair is defined as:

l_mse(x, q; θ) = (r(q) − φ(x))^2.   (8)

We train Word2VisualVec to minimize the overall MSE loss on a given training set D = {(x, q)}, containing a number of relevant image-sentence pairs:

argmin_θ Σ_{(x,q)∈D} l_mse(x, q; θ).   (9)
Optimization. We solve Eq. (9) using stochastic gradient
descent with RMSprop [59]. This optimization algorithm divides the learning rate by an exponentially decaying average of
squared gradients, to prevent the learning rate from effectively
shrinking over time. We empirically set the initial learning
rate η = 0.0001, decay weights γ = 0.9 and small constant
ǫ = 10^{-6} for RMSprop. We apply dropout to all hidden
layers in Word2VisualVec to mitigate model overfitting. Lastly,
we take an empirical learning schedule as follows. Once the
validation performance does not increase in three consecutive
epochs, we divide the learning rate by 2. Early stop occurs if
the validation performance does not improve in ten consecutive
epochs. The maximal number of epochs is 100.
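The optimization just described can be sketched as a short PyTorch training loop. Here `model` is assumed to be a text-to-visual network such as the one sketched above, `pairs` an iterable of (φ(x), s(q)) tensor pairs, and PyTorch's `alpha` is assumed to play the role of the decay weight γ; the validation-based learning-rate halving and early stopping are omitted for brevity.

    import torch

    def train_word2visualvec(model, pairs, epochs=100):
        # MSE objective of Eqs. (8)-(9), minimised with RMSprop as in the text.
        optimiser = torch.optim.RMSprop(model.parameters(), lr=1e-4, alpha=0.9, eps=1e-6)
        loss_fn = torch.nn.MSELoss()
        for _ in range(epochs):
            for phi_x, s_q in pairs:
                r_q = model(s_q)                 # predicted visual feature r(q)
                loss = loss_fn(r_q, phi_x)       # squared error against the target phi(x)
                optimiser.zero_grad()
                loss.backward()
                optimiser.step()
        return model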
C. Image Caption Retrieval
For a given image, we select from a given sentence pool
the sentence deemed most relevant with respect to the image.
Note that image-sentence pairs are required only for training
Word2VisualVec. For a test sentence, its r(q) is obtained by
forward computation through the Word2VisualVec network,
without the need of any test image. Hence, the sentence pool
can be vectorized in advance. Image caption retrieval in our
case boils down to finding the sentence nearest to the given
image in the visual feature space. We use the cosine similarity
between r(q) and the image feature φ(x), as this similarity
normalizes feature vectors and is found to be better than the
dot product or mean square error according to our preliminary
experiments.
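In code, the retrieval step reduces to a cosine-similarity ranking against precomputed sentence vectors; in the sketch below, `sentence_vectors` is assumed to hold r(q) for every candidate caption and `phi_x` the ConvNet feature of the query image.

    import torch
    import torch.nn.functional as F

    def rank_captions(phi_x, sentence_vectors):
        # sentence_vectors: tensor of shape (num_sentences, d), precomputed once.
        sims = F.cosine_similarity(sentence_vectors, phi_x.unsqueeze(0), dim=1)
        return torch.argsort(sims, descending=True)   # caption indices, best first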
D. Video Caption Retrieval
Word2VisualVec is also applicable for video as long as we
have an effective vectorized representation of video. Again,
different from previous methods for video caption retrieval
that execute in a joint subspace [8], [43], we project sentences
into the video feature space.
Following the good practice of using pre-trained ConvNets
for video content analysis [23], [60]–[62], we extract features
by applying image ConvNets on individual frames and 3D ConvNets [36] on consecutive-frame sequences. For short
video clips, as used in our experiments, mean pooling over
video frames is considered reasonable [60], [62]. Hence, the
visual feature vector of each video is obtained by averaging
the feature vectors of its frames. Note that longer videos open
up possibilities for further improvement of Word2VisualVec
by exploiting temporal order of video frames, e.g., [63]. The
audio channel of a video sometime provides complementary
information to the visual channel. For instance, to help decide
whether a person is talking or singing. To exploit this channel,
we extract a bag of quantized Mel-frequency Cepstral Coefficients (MFCC) [37] and concatenate it with the previous visual
feature. Word2VisualVec is trained to predict such a visualaudio feature, as a whole, from input text.
Word2VisualVec is used in a principled manner, transforming an input sentence to a video feature vector, be it visual
or visual-audio. For the sake of clarity we term the video
variant Word2VideoVec.
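A minimal sketch of assembling the video-level target feature described here (mean pooling of per-frame ConvNet features, optionally concatenated with a bag of quantised MFCCs) is given below; both inputs are assumed to be precomputed arrays.

    import numpy as np

    def video_feature(frame_features, mfcc_bag=None):
        # frame_features: array of shape (num_frames, d); mfcc_bag: shape (m,) or None.
        visual = frame_features.mean(axis=0)
        if mfcc_bag is None:
            return visual
        return np.concatenate([visual, mfcc_bag])     # visual-audio target for Word2VideoVec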
IV. EXPERIMENTS
A. Properties of Word2VisualVec
We first investigate the impact of major design choices,
e.g., how to vectorize an input sentence? Before detailing the
investigation, we first introduce data and evaluation protocol.
Data. For image caption retrieval, we use two popular
benchmark sets, Flickr8k [38] and Flickr30k [39]. Each image
is associated with five crowd-sourced English sentences, which
briefly describe the main objects and scenes present in the
image. For video caption retrieval we rely on the Microsoft
Video Description dataset (MSVD) [40]. Each video is labeled
with 40 English sentences on average. The videos are short,
usually less than 10 seconds long. For the ease of cross-paper
comparison, we follow the identical data partitions as used in
[5], [7], [58] for images and [60] for videos. That is, training
/ validation / test is 6k / 1k / 1k for Flickr8k, 29K / 1,014 /
1k for Flickr30k, and 1,200 / 100 / 670 for MSVD.
Visual features. A deep visual feature is determined by
a specific ConvNet and its layers. We experiment with four
pretrained 2-D ConvNets, i.e., CaffeNet [16], GoogLeNet
[18], GoogLeNet-shuffle [61] and ResNet-152 [19]. The first
three 2-D ConvNets were trained using images containing
1K different visual objects as defined in the Large Scale Visual Recognition Challenge [20]. GoogLeNet-shuffle follows
GoogLeNet’s architecture, but is re-trained using a bottomup reorganization of the complete 22K ImageNet hierarchy,
excluding over-specific classes and classes with few images
and thus making the final classes more balanced. For the
video dataset, we further experiment with a 3-D ConvNet
[36], trained on one million sports videos containing 487
sport-related concepts [64]. As the videos were muted, we
cannot evaluate Word2VideoVec with audio features. We tried
multiple layers of each ConvNet model and report the best
performing layer. Finally we use the fc7 layer for CaffeNet
(4,096-dim), the pool5 layer for GoogleNet (1,024-dim),
GoogleNet-shuffle (1,024-dim) and ResNet-152 (2,048-dim),
and the fc6 layer for C3D (4,096-dim).
Evaluation protocol. The training, validation and test set
are used for model training, model selection and performance
evaluation, respectively, and exclusively. For performance
evaluation, each test caption is first vectorized by a trained
Word2VisualVec. Given a test image/video query, we then
rank all the test captions in terms of their similarities with
the image/video query in the visual feature space. The performance is evaluated based on the caption ranking. Following
the common convention [4], [7], [38], we report rank-based
performance metrics R@K (K = 1, 5, 10). R@K computes
the percentage of test images for which at least one correct
result is found among the top-K retrieved sentences. Hence,
higher R@K means better performance.
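For reference, R@K can be computed with a few lines of Python; the small example below is illustrative only.

    def recall_at_k(ranked_lists, relevant_sets, k):
        # Percentage of queries with at least one relevant caption in the top-K.
        hits = sum(1 for ranked, rel in zip(ranked_lists, relevant_sets)
                   if any(c in rel for c in ranked[:k]))
        return 100.0 * hits / len(ranked_lists)

    # Example: two queries with their ranked caption ids and relevant ids.
    print(recall_at_k([[3, 7, 1], [9, 2, 4]], [{7}, {5}], k=2))   # 50.0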
How to vectorize an input sentence? As shown in Table
I, II and III, multi-scale sentence vectorization outperforms
its single-scale counterparts. Table IV shows examples for
which a particular vectorization method is particularly suited.
In the first two rows, word2vec performs better than BoW
and GRU, because the main words rottweiler and quad are
not in the vocabularies of BoW and GRU. However, the use
of word2vec sometimes has the side effect of overweighting
high-level semantic similarity between words. E.g., beagle in
the third row is found to be closer to dog than to hound, and
woman in the fourth row is found to be closer to man
than to lady in the word2vec space. In this case, the resultant
Word2VisualVec vector is less discriminative than its BoW
counterpart. Since GRU is good at modeling long-term word
dependency, it performs the best in the last two rows, where
the captions are more narrative.
Which visual feature? Table I and II show performance
of image caption retrieval on Flickr8k and Flickr30k, respectively. As the ConvNets go deeper, predicting the corresponding visual features by Word2VisualVec improves. This
result is encouraging as better performance can be expected
TABLE I
PERFORMANCE OF IMAGE CAPTION RETRIEVAL ON FLICKR8K. MULTI-SCALE SENTENCE VECTORIZATION COMBINED WITH THE RESNET-152 FEATURE IS THE BEST.

                    |       BoW        |     word2vec     |       GRU        |   Multi-scale
Visual Features     | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10
CaffeNet            | 20.7  43.3  55.2 | 18.9  42.3  54.2 | 21.2  44.7  56.1 | 23.1  47.1  57.7
GoogLeNet           | 27.1  53.5  64.9 | 24.7  51.6  64.1 | 25.1  51.9  64.2 | 28.8  54.5  68.2
GoogLeNet-shuffle   | 32.2  57.4  72.0 | 30.2  57.6  70.5 | 32.9  59.5  70.5 | 35.4  63.1  74.0
ResNet-152          | 34.7  62.9  74.7 | 32.1  62.9  75.5 | 33.4  63.1  75.3 | 36.3  66.4  78.2
TABLE II
PERFORMANCE OF IMAGE CAPTION RETRIEVAL ON FLICKR30K. MULTI-SCALE SENTENCE VECTORIZATION COMBINED WITH THE RESNET-152 FEATURE IS THE BEST.

                    |       BoW        |     word2vec     |       GRU        |   Multi-scale
Visual Features     | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10
CaffeNet            | 24.4  47.1  57.1 | 18.9  42.3  54.2 | 24.1  46.4  57.4 | 24.9  50.4  60.8
GoogLeNet           | 32.2  58.3  67.7 | 24.7  51.6  64.1 | 33.6  56.8  67.2 | 33.9  62.2  70.8
GoogLeNet-shuffle   | 38.6  66.4  75.2 | 30.2  57.6  70.5 | 38.6  64.8  76.7 | 41.3  69.1  78.6
ResNet-152          | 41.8  70.9  78.6 | 36.5  65.0  75.1 | 42.0  70.4  80.1 | 45.9  71.9  81.3
TABLE III
PERFORMANCE OF VIDEO CAPTION RETRIEVAL ON MSVD. MULTI-SCALE SENTENCE VECTORIZATION COMBINED WITH THE GOOGLENET-SHUFFLE FEATURE IS THE BEST.

                    |       BoW        |     word2vec     |       GRU        |   Multi-scale
Visual Features     | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10 | R@1   R@5   R@10
CaffeNet            |  9.4  19.9  26.7 |  8.7  22.2  31.3 |  9.6  19.4  26.9 |  9.6  21.9  30.6
GoogLeNet           | 14.2  27.5  36.0 | 14.5  30.3  39.7 | 16.0  33.1  43.0 | 17.2  33.7  42.8
GoogLeNet-shuffle   | 14.8  29.6  37.2 | 16.6  33.7  43.4 | 16.6  35.1  42.8 | 18.5  36.7  45.1
ResNet-152          | 15.8  32.1  39.9 | 16.4  34.8  46.6 | 15.8  31.3  41.8 | 16.1  34.5  43.1
C3D                 | 10.4  22.5  28.4 | 14.8  34.5  44.0 | 13.1  26.6  33.4 | 14.9  27.8  35.5
from the continuous progress in deep learning features. Table
III shows performance of video caption retrieval on MSVD,
where the more compact GoogLeNet-shuffle feature tops the
performance when combined with multi-scale sentence vectorization. Although MSVD has more visual / sentence pairs than
Flickr8k, it has a much smaller number of visual examples (1,200) for training. Using the 1,024-dim GoogLeNet-shuffle feature instead of the 2,048-dim ResNet-152 feature reduces the amount of trainable parameters by 18%, making
Word2VisualVec more effective to learn from the relatively
limited examples.
Given a fixed amount of training pairs, having more visual
examples might be better for Word2VisualVec. To verify this
conjecture, we take from the Flickr30k training set a random
subset of 3k images with one sentence per image. We then
incrementally increase the amount of image / sentence pairs
for training, using the following two strategies. One is to
increase the number of sentences per image from 1 to 2, 3,
4, and 5 with the number of images fixed, while the other is
to let the amount of images increase to 6k, 9k, 12k and 15k
with the number of sentences per image fixed to one. As the
performance curves in Fig. 3 show, given the same amount of
training pairs, adding more images results in better models.
The result is also instructive for more effective acquisition of
training data for image and video caption retrieval.
Fig. 3. Performance curves of two Word2VisualVec models on the Flickr30k
test set, as the amount of image-sentence pairs for training increases. For both
models, adding more training images gives better performance compared to
adding more training sentences.
How deep? In this experiment, we use word2vec as sentence vectorization for its efficient execution. We vary the number of MLP layers, and observe a performance peak when using three layers, i.e., 500-2048-2048, on Flickr8k and four layers, i.e., 500-2048-2048-2048, on Flickr30k. Recall that the model is chosen in terms of its performance on the validation set. While its learning capacity increases as the model goes deeper, the chance of overfitting also increases. To improve generalization we also tried l2 regularization on the network weights. This tactic brings a marginal improvement, yet introduces extra hyper-parameters. So we did not go further in that direction. Overall the three-layer Word2VisualVec strikes the best balance between model capacity and generalization ability, so we use this network configuration in what follows.

How fast? We implement Word2VisualVec using Keras with a Theano backend. The three-layer model with multi-scale sentence vectorization takes about 1.3 hours to learn from the 30k image-sentence pairs in Flickr8k on a GeForce GTX 1070 GPU. Predicting visual features for a given sentence is swift, at an averaged speed of 20 milliseconds. Retrieving captions from a pool of 5k sentences takes 8 milliseconds per test image. Based on the above evaluations we recommend Word2VisualVec that uses multi-scale sentence vectorization, and predicts the 2,048-dim ResNet-152 feature when adequate training data is available (over 2k training images with five sentences per image) or the 1,024-dim GoogLeNet-shuffle feature when training data is more scarce.

TABLE IV
CAPTION RANKS BY WORD2VISUALVEC WITH DISTINCT SENTENCE VECTORIZATION STRATEGIES. LOWER RANK MEANS BETTER PERFORMANCE. (QUERY IMAGES NOT REPRODUCED.)

Ground-truth caption                                                                              | BoW | word2vec | GRU
A rottweiler running.                                                                             | 857 |       84 | 841
A quad sends dirt flying into the air.                                                            |  41 |        5 |  28
A white-footed beagle plays with a tennis ball on a garden path.                                  |   7 |       22 |  65
A man in a brown sweater and a woman smile for their video camera.                                |   3 |       43 |  16
A young man wearing swimming goggles wearing a blue shirt with a pirate skull on it.              | 422 |      105 |   7
A dark-haired young woman, number 528, wearing red and white, is preparing to throw a shot put.   |  80 |       61 |   1

B. Word2VisualVec versus word2vec

Although our model is meant for caption retrieval, it essentially generates a new representation of text. How meaningful is this new representation as compared to word2vec?
To answer this question, we take all the 5K test sentences from Flickr30k, vectorizing them by word2vec and
Word2VisualVec, respectively. The word2vec model was
trained on Flickr tags as described in Section III-A. For a fair
comparison, we let Word2VisualVec use the same word2vec as
its first layer. Fig. 4 presents t-SNE visualizations of sentence
distributions in the word2vec and Word2VisualVec spaces,
showing that sentences describing the same image stay closer, while sentences from distinct images are more distant
in the latter space. Recall that sentences associated with the
same image are meant for describing the same visual content.
Moreover, since they were independently written by distinct
users, the wording may vary across the users, requiring a
text representation to capture shared semantics among distinct
words. Word2VisualVec better handles such variance in captions as illustrated in the first two examples in Fig. 4(e).
The last example in Fig. 4(e) shows failures of both models,
where the two sentences (#5 and #6) are supposed to be
close. Large difference between their subject (teenagers versus
people) and object (shirt versus paper) makes it difficult for
Word2VisualVec to predict similar visual features from the
two sentences. Actually, we find in the Word2VisualVec space
that the sentence nearest to #5 is “A woman is completing
a picture of a young woman” (which resembles subjects,
i.e., teenager versus young woman and action, i.e., holding
paper or easel) and the one to #6 is “Kids scale a wall as two
other people watch” (which depicts similar subjects, i.e., two
people and objects, i.e., concrete versus wall). This example
shows the existence of large divergence between manually
written descriptions of the same visual content, and thus the
challenging nature of the caption retrieval problem.
C. Word2VisualVec for multi-modal querying
Fig. 5 presents an example of Word2VisualVec's learned representation and its ability to support multi-modal query composition. Given the query image, its composed queries are obtained by subtracting and/or adding the visual features of the query words, as predicted by Word2VisualVec. A deep dream visualization [66] is performed on an average (gray) image guided by each composed query. Consider the query in the second row, for instance, where we instruct the search to replace bicycle with motorbike via a textual specification. The predicted visual feature of the word bicycle is subtracted (the effect is visible in the first row) and the predicted visual feature of the word motorbike is added. Imagery of motorbikes is indeed present in the dream. Hence, the nearest retrieved images emphasize motorbikes in street scenes.
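A possible implementation of this composition step is sketched below; predict_visual stands for a trained Word2VisualVec model, and all names, shapes and data are hypothetical.

# Sketch of multi-modal query composition: subtract/add predicted visual
# features of words from the visual feature of a query image, then retrieve
# nearest neighbours by cosine similarity.
import numpy as np

def cosine_sim(a, b):
    a = a / np.linalg.norm(a)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
    return b @ a

def compose_query(image_feat, predict_visual, minus_words=(), plus_words=()):
    q = image_feat.copy()
    for w in minus_words:
        q -= predict_visual(w)
    for w in plus_words:
        q += predict_visual(w)
    return q

# toy example with random 2,048-dim features
rng = np.random.default_rng(1)
predict_visual = lambda word: rng.normal(size=2048)   # placeholder model
gallery = rng.normal(size=(1000, 2048))               # test image features
query = compose_query(rng.normal(size=2048), predict_visual,
                      minus_words=["bicycle"], plus_words=["motorbike"])
top10 = np.argsort(-cosine_sim(query, gallery))[:10]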
[Fig. 4: panels (a)-(d) are scatter plots and distance histograms (see caption below); panel (e) lists the exemplar sentences: 1. "Six kids splash in water." 2. "Kids play in the water." 3. "A jockey with a red helmet is riding a white horse, and a jockey with an orange helmet is riding a brown horse." 4. "Two jockeys on horses are racing down a track." 5. "Overhead shot of 2 teenagers both looking at a shirt being held up by one of them." 6. "Two people looking a piece of paper standing on concrete."]
Fig. 4. Word2VisualVec versus word2vec. For the 5k test sentences from Flickr30k, we use t-SNE [65] to visualize their distribution in (a) the word2vec
space and (b) the Word2VisualVec space obtained by mapping the word2vec vectors to the ResNet-152 features. Histograms of intra-cluster (i.e., sentences
describing the same image) and inter-cluster (i.e., sentences from different images) distances in the two spaces are given in (c) and (d). Bigger colored dots
indicate 50 sentences associated with 10 randomly chosen images, with exemplars detailed in (e). Together, the plots reveal that different sentences describing
the same image stay closer, while sentences from different images are more distant in the Word2VisualVec space. Best viewed in color.
D. Comparison to the State-of-the-Art
Image caption retrieval. We compare a number of
recently developed models for image caption retrieval
[4]–[7], [42], [48], [67]. All the methods, including ours,
require image-sentence pairs to train. They all perform caption
retrieval on a provided set of test sentences. Note that the
compared methods have no reported performance on the
ResNet-152 feature. We have tried the VGGNet feature as
used in [4], [5], [42] and found Word2VisualVec less effective.
This is not surprising as the choice of the visual feature
is an essential ingredient of our model. While it would be
ideal to replicate all methods using the same ResNet feature,
only [6], [7] have released their source code. So we retrain these two models with the same ResNet features we
use. Table V presents the performance of the above models
on both Flickr8k and Flickr30k. Word2VisualVec compares
favorably against the state-of-the-art. Given the same visual
feature, our model outperforms [6], [7], especially for R@1.
Notice that Plummer et al. [67] employ extra bounding-box level annotations; still, our results are better, indicating that we can expect further gains by including locality in the Word2VisualVec representation. As all the competitor models
use joint subspaces, the results justify the viability of directly
using the deep visual feature space for image caption retrieval.
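As a reference point, caption retrieval in the visual feature space and its Recall@K evaluation can be sketched as follows. This simplified version of ours assumes a single relevant sentence per image, whereas the benchmarks above associate five sentences with each image.

# Sketch: rank candidate sentences per image by cosine similarity between the
# image feature and the predicted visual features of the sentences, then score R@K.
import numpy as np

def recall_at_k(image_feats, sentence_feats, gt_index, ks=(1, 5, 10)):
    """gt_index[i] is the index of a ground-truth sentence for image i."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    sen = sentence_feats / np.linalg.norm(sentence_feats, axis=1, keepdims=True)
    sims = img @ sen.T                      # (num_images, num_sentences)
    ranks = np.argsort(-sims, axis=1)       # sentence indices, best first
    hits = {k: 0 for k in ks}
    for i, gt in enumerate(gt_index):
        for k in ks:
            if gt in ranks[i, :k]:
                hits[k] += 1
    return {k: hits[k] / len(gt_index) for k in ks}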
Compared with the two top-performing methods [7], [42],
the run-time complexity of the multi-scale Word2VisualVec is
O(m × s + s × g + (m + s + g) × 2048 + 2048 × d), where s indicates the dimensionality of the word embedding and g denotes the size of the GRU.
[Fig. 5: columns (a) Composed query, (b) Deep dream, (c) Nearest images from Flickr30k test. Composed queries include the query image minus "bicycle", minus "bicycle" plus "motorbike", and minus "street" plus "woods"; retrieved sentences include "A crowded sidewalk in the inner city of an Asian country.", "People look on as participants in a marathon pass by.", "Police on motorcycle, in turning lane, waiting at stoplight.", "A group of men in red and black jackets waits on motorcycles.", "Six people ride mountain bikes through a jungle environment.", and "Two male hikers inspect a log by the side of a forest path."]
Fig. 5. Word2VisualVec allows for multi-modal query composition. (a) For each multi-modal query we visualize its predicted visual feature in (b) and
show in (c) the nearest images and their sentences from the Flickr30k test set. Note the change in emphasis in (b), better viewed digitally in close-up.
TABLE V
STATE-OF-THE-ART FOR IMAGE CAPTION RETRIEVAL. ALL NUMBERS ARE FROM THE CITED PAPERS EXCEPT FOR [6], [7], BOTH RE-TRAINED USING THEIR CODE WITH THE SAME RESNET FEATURES WE USE. WORD2VISUALVEC OUTPERFORMS RECENT ALTERNATIVES.

                      |       Flickr8k        |       Flickr30k
                      |  R@1    R@5    R@10   |  R@1    R@5    R@10
Ma et al. [4]         |  24.8   53.7   67.1   |  33.6   64.1   74.9
Kiros et al. [6]      |  23.7   53.1   67.3   |  32.9   65.6   77.1
Klein et al. [5]      |  31.0   59.3   73.7   |  35.0   62.0   73.8
Lev et al. [48]       |  31.6   61.2   74.3   |  35.6   62.5   74.2
Plummer et al. [67]   |   –      –      –     |  39.1   64.8   76.4
Wang et al. [42]      |   –      –      –     |  40.3   68.9   79.9
Vendrov et al. [7]    |  27.5   56.5   69.2   |  41.3   71.0   80.8
Word2VisualVec        |  36.3   66.4   78.2   |  45.9   71.9   81.3
This complexity is larger than that of [7], which is O(m × s + s × g + g × d), but lower than that of [42], which vectorizes a sentence by a time-consuming Fisher vector encoding.
Video caption retrieval. We also participated in the NIST TrecVid 2016 video caption retrieval task [41]. The test set consists of 1,915 videos collected from Twitter Vine, each about six seconds long. The videos were given to 8 annotators to generate a total of 3,830 sentences, with each video associated with two sentences written by two different annotators. The sentences have been split into two equal-sized subsets, set A and set B, with the rule that sentences describing the same video are not in the same subset. Per test video, participants are asked to rank all sentences in the two subsets. Notice that we have no access to the ground truth, as the test set is used for blind testing by the organizers only. NIST also provides a training set of 200 videos, which we consider insufficient for training Word2VideoVec. Instead, we learn the network parameters using video-text pairs from MSR-VTT [68], with hyper-parameters tuned on the provided TrecVid training set. At the time of the TrecVid submission, we used GoogLeNet-shuffle as the visual feature, a 1,024-dim bag of MFCCs as the audio feature, and word2vec for sentence vectorization. The performance metric is the Mean Inverted Rank (MIR) of the position at which the annotated item is found; a higher MIR means better performance.
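A minimal sketch of the MIR computation, assuming one annotated sentence per video and ignoring the exact tie-handling of the official NIST scorer:

# Mean Inverted Rank: the mean over test videos of 1 / rank of the annotated
# sentence in the returned ranking (1-based).
def mean_inverted_rank(ranked_lists, annotated):
    """ranked_lists[i]: sentence ids ordered by decreasing score for video i;
       annotated[i]: the id of the sentence written for video i."""
    inv_ranks = []
    for ranking, gt in zip(ranked_lists, annotated):
        rank = ranking.index(gt) + 1
        inv_ranks.append(1.0 / rank)
    return sum(inv_ranks) / len(inv_ranks)

print(mean_inverted_rank([["s2", "s1", "s3"]], ["s1"]))  # 0.5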
As shown in Fig. 6, with MIR ranging from 0.097 to 0.110,
Word2VideoVec leads the evaluation on both set A and set B
in the context of 21 submissions from seven teams worldwide.
Moreover, the results can be further improved by predicting
the visual-audio feature. Besides us, two other teams submitted technical reports, reporting best MIR scores of 0.076 [69] and 0.006 [70], respectively. Given a video-sentence pair, the
model from [69] iteratively combines the video and sentence
features into one vector, followed by a fully connected layer
to predict the similarity score. The model from [70] learns an
embedding space by minimizing a cross-media distance.
Some qualitative image and video caption retrieval results
are shown in Fig. 7. Consider the last image in the top row. Its
ground-truth caption is “A man playing an accordion in front
of buildings”, while the top-retrieved caption is “People walk
through an arch in an old-looking city”. Though the ResNet feature describes the overall scene well, it fails to capture the accordion, which is small but successfully drew the attention of the annotator who wrote the ground-truth caption. The last video in the bottom row of Fig. 7 shows “A man throws his phone into a river”.
[Fig. 6: plot of Mean Inverted Rank (0.00 to 0.12) on Set A and Set B for Word2VideoVec, Word2VideoVec with audio, and 19 runs from 6 other teams.]
Fig. 6. State-of-the-art for video caption retrieval in the TrecVid 2016 benchmark, showing the good performance of Word2VideoVec compared to 19 alternative approaches evaluated by the NIST TrecVid 2016 organizers [41]; the results are further improved by predicting the visual-audio feature.
This action is not well described by the mean-pooled video feature. Hence, the main source of errors is cases where the visual features do not represent the visual content well.
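The average pooling referred to above amounts to the following; the number of sampled frames and the frame-level feature extractor are assumptions.

# Sketch: a video-level visual feature obtained by mean pooling per-frame CNN features.
import numpy as np

def video_feature(frame_features):
    """frame_features: (num_frames, feat_dim) array of per-frame CNN features."""
    return np.asarray(frame_features).mean(axis=0)

frames = np.random.rand(48, 2048)   # e.g., 48 sampled frames, 2,048-dim each
feat = video_feature(frames)        # single 2,048-dim video descriptor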
E. Limits of caption retrieval and possible extensions
The caption retrieval task works under the assumption that, for a query image or video, there is at least one relevant sentence w.r.t. the query. In a general scenario where the query is unconstrained with arbitrary content, this assumption is unlikely to hold. A naive remedy would be to enlarge the sentence pool. A more advanced solution is to combine retrieval with methods that construct novel captions. In [71], [72], for instance, a caption is formed using a set of visually relevant phrases extracted from a large-scale image collection. From the top-n sentences retrieved by Word2VisualVec, one could likewise generate a new caption using the methods of [71], [72]. As this paper aims to retrieve rather than to construct captions, we leave this for future exploration.
V. CONCLUSIONS
This paper shows the viability of resolving image and
video caption retrieval in a visual feature space exclusively.
We contribute Word2VisualVec, which is capable of transforming a natural language sentence into a meaningful visual
feature representation. Compared to the word2vec space,
sentences describing the same image tend to stay closer,
while sentences from different images are more distant in
the Word2VisualVec space. As the sentences are meant for
describing visual content, the new textual encoding captures
both semantic and visual similarities. Word2VisualVec also
supports multi-modal query composition, by subtracting and/or
adding the predicted visual features of specific words to a
given query image. What is more, Word2VisualVec is easily generalized to predict a visual-audio representation from text for video caption retrieval. For state-of-the-art results, we suggest Word2VisualVec with multi-scale sentence vectorization, predicting the ResNet feature when adequate training data is available or the GoogLeNet-shuffle feature when training data is in short supply.
REFERENCES
[1] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. G.
Lanckriet, R. Levy, and N. Vasconcelos, “A new approach to crossmodal multimedia retrieval,” in MM, 2010.
[2] F. Feng, X. Wang, and R. Li, “Cross-modal retrieval with correspondence
autoencoder,” in MM, 2014.
[3] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik,
“Improving image-sentence embeddings using large weakly annotated
photo collections,” in ECCV, 2014.
[4] L. Ma, Z. Lu, L. Shang, and H. Li, “Multimodal convolutional neural
networks for matching image and sentence,” in ICCV, 2015.
[5] B. Klein, G. Lev, G. Sadeh, and L. Wolf, “Associating neural word
embeddings with deep image representations using fisher vectors,” in
CVPR, 2015.
[6] R. Kiros, R. Salakhutdinov, and R. S. Zemel, “Unifying visual-semantic
embeddings with multimodal neural language models,” TACL, 2015.
[7] I. Vendrov, R. Kiros, S. Fidler, and R. Urtasun, “Order-embeddings of
images and language,” in ICLR, 2016.
[8] R. Xu, C. Xiong, W. Chen, and J. J. Corso, “Jointly modeling deep
video and compositional text to bridge vision and language in a unified
framework,” in AAAI, 2015.
[9] V. Ordonez, G. Kulkarni, and T. L. Berg, “Im2text: Describing images
using 1 million captioned photographs,” in NIPS, 2011.
[10] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick,
“Exploring nearest neighbor approaches for image captioning,” arXiv
preprint arXiv:1505.04467, 2015.
[11] S. Yagcioglu, E. Erdem, A. Erdem, and R. Cakici, “A distributed
representation based query expansion approach for image captioning,”
in ACL, 2015.
[12] I. Chami, Y. Tamaazousti, and H. Le Borgne, “AMECON: Abstract
meta-concept features for text-illustration,” in ICMR, 2017.
[13] E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov, “Generating
images from captions with attention,” in ICLR, 2016.
[14] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee,
“Generative adversarial text to image synthesis,” in ICML, 2016.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification
using deep convolutional neural networks,” in NIPS, 2012.
[16] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick,
S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for
fast feature embedding,” in MM, 2014.
[17] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
large-scale image recognition,” in ICLR, 2015.
[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan,
V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,”
in CVPR, 2015.
[19] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
recognition,” in CVPR, 2016.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” IJCV, vol. 115,
no. 3, pp. 211–252, 2015.
[21] A. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features
off-the-shelf: An astounding baseline for recognition,” in CVPR Workshop, 2014.
[22] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using
convolutional networks,” in ICLR, 2014.
[23] G. Ye, Y. Li, H. Xu, D. Liu, and S.-F. Chang, “EventNet: a large scale
structured concept library for complex event detection in video,” in MM,
2015.
[24] Z. Wu, Y.-G. Jiang, X. Wang, H. Ye, and X. Xue, “Multi-stream multiclass fusion of deep networks for video classification,” in MM, 2016.
[25] K. Cho, A. Courville, and Y. Bengio, “Describing multimedia content
using attention-based encoder-decoder networks,” TMM, vol. 17, no. 11,
pp. 1875–1886, 2015.
[26] L. Jiang, S.-I. Yu, D. Meng, Y. Yang, T. Mitamura, and A. Hauptmann,
“Fast and accurate content-based semantic search in 100m Internet
videos,” in MM, 2015.
[Fig. 7: example retrieved captions, including "A man riding a cart pulled by a donkey.", "People are sitting around and playing in a fountain.", "A white dog with a blue collar runs while carrying a yellow toy.", "A woman sitting on her couch, by the front windows, reading a book.", "2 puppies playing in a pool", "Palm trees flutter in the wind by the seashore", "A basketball player walks with the ball between opposite players and score it", "In the daytime at sunset, sea waves move to the beach", and "Two basketball players speak into the microphone in the basketball stadium".]
Fig. 7. Some image and video caption retrieval results by this work. The last row shows the sentences retrieved by Word2VideoVec with audio, showing that adding audio sometimes helps describe acoustics, e.g., sea waves and speech.
[27] X. Jiang, F. Wu, X. Li, Z. Zhao, W. Lu, S. Tang, and Y. Zhuang, “Deep
compositional cross-modal learning to rank via local-global alignment,”
in MM, 2015.
[28] X. Shang, H. Zhang, and T.-S. Chua, “Deep learning generic features
for cross-media retrieval,” in MMM, 2016.
[29] J. Chen and C.-W. Ngo, “Deep-based ingredient recognition for cooking
recipe retrieval,” in MM, 2016.
[30] Y. Hua, S. Wang, S. Liu, A. Cai, and Q. Huang, “Cross-modal correlation
learning by adaptive hierarchical semantic aggregation,” TMM, vol. 18,
no. 6, pp. 1201–1216, 2016.
[31] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and
T. Mikolov, “DeViSE: A deep visual-semantic embedding model,” in
NIPS, 2013.
[32] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng,
“Grounded compositional semantics for finding and describing images
with sentences,” TACL, vol. 2, pp. 207–218, 2014.
[33] L. Zhang, B. Ma, G. Li, Q. Huang, and Q. Tian, “Cross-modal retrieval
using multiordered discriminative structured subspace learning,” TMM,
vol. 19, no. 6, pp. 1220–1233, 2017.
[34] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of
word representations in vector space,” in ICLR, 2013.
[35] M. Jain, J. C. van Gemert, T. Mensink, and C. G. M. Snoek, “Objects2action: Classifying and localizing actions without any video example,” in ICCV, 2015.
[36] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning
spatiotemporal features with 3d convolutional networks,” in ICCV, 2015.
[37] F. Eyben, F. Weninger, F. Gross, and B. Schuller, “Recent developments
in openSMILE, the Munich open-source multimedia feature extractor,”
in MM, 2013.
[38] M. Hodosh, P. Young, and J. Hockenmaier, “Framing image description
as a ranking task: Data, models and evaluation metrics,” JAIR, vol. 47,
no. 1, pp. 853–899, 2013.
[39] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image
descriptions to visual denotations: New similarity metrics for semantic
inference over event descriptions,” TACL, vol. 2, pp. 67–78, 2014.
[40] D. L. Chen and W. B. Dolan, “Collecting highly parallel data for
paraphrase evaluation,” in ACL, 2011.
[41] G. Awad, J. Fiscus, D. Joy, M. Michel, A. Smeaton, W. Kraaij,
G. Quenot, M. Eskevich, R. Aly, R. Ordelman, G. Jones, B. Huet,
and M. Larson, “Trecvid 2016: Evaluating video search, video event
detection, localization, and hyperlinking,” in TRECVID, 2016.
[42] L. Wang, Y. Li, and S. Lazebnik, “Learning deep structure-preserving
image-text embeddings,” in CVPR, 2016.
[43] M. Otani, Y. Nakashima, E. Rahtu, J. Heikkilä, and N. Yokoya,
“Learning joint representations of videos and sentences with web image
search,” in ECCV Workshop, 2016.
[44] Y. Yu, H. Ko, J. Choi, and G. Kim, “End-to-end concept word detection
for video captioning, retrieval, and question answering,” in CVPR, 2017.
[45] T. Yao, T. Mei, and C.-W. Ngo, “Learning query and image similarities
with ranking canonical correlation analysis,” in ICCV, 2015.
[46] J. L. Ba, K. Swersky, S. Fidler, and R. Salakhutdinov, “Predicting deep
zero-shot convolutional neural networks using textual descriptions,” in
ICCV, 2015.
[47] Q. You, L. Cao, H. Jin, and J. Luo, “Robust visual-textual sentiment
analysis: When attention meets tree-structured recursive neural networks,” in MM, 2016.
[48] G. Lev, G. Sadeh, B. Klein, and L. Wolf, “Rnn fisher vectors for action
recognition and image annotation,” in ECCV, 2016.
[49] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: a neural
image caption generator,” in CVPR, 2015.
[50] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille, “Deep
captioning with multimodal recurrent neural networks (m-rnn),” in ICLR,
2015.
[51] C. Wang, H. Yang, C. Bartz, and C. Meinel, “Image captioning with
deep bidirectional LSTMs,” in MM, 2016.
[52] J. Dong, X. Li, W. Lan, Y. Huo, and C. G. M. Snoek, “Early embedding
and late reranking for video captioning,” in MM, 2016.
[53] X. Li, S. Liao, W. Lan, X. Du, and G. Yang, “Zero-shot image tagging
by hierarchical semantic embedding,” in SIGIR, 2015.
[54] S. Cappallo, T. Mensink, and C. G. M. Snoek, “Image2emoji: Zero-shot
emoji prediction for visual media,” in MM, 2015.
[55] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares,
H. Schwenk, and Y. Bengio, “Learning phrase representations using rnn
encoder-decoder for statistical machine translation,” in EMNLP, 2014.
[56] D. Grangier and S. Bengio, “A discriminative kernel-based approach to
rank images from text queries,” TPAMI, vol. 30, no. 8, pp. 1371–1384,
2008.
[57] B. Bai, J. Weston, D. Grangier, R. Collobert, K. Sadamasa, Y. Qi,
C. Cortes, and M. Mohri, “Polynomial semantic indexing,” in NIPS,
2009.
[58] A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for
generating image descriptions,” in CVPR, 2015.
[59] T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient
by a running average of its recent magnitude.” COURSERA: Neural
Networks for Machine Learning, 2012.
[60] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and
K. Saenko, “Translating videos to natural language using deep recurrent
neural networks,” in NAACL-HLT, 2015.
[61] P. Mettes, D. C. Koelma, and C. G. M. Snoek, “The ImageNet shuffle:
Reorganized pre-training for video event detection,” in ICMR, 2016.
[62] Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui, “Jointly modeling embedding
and translation to bridge video and language,” in CVPR, 2016.
[63] H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu, “Video paragraph
captioning using hierarchical recurrent neural networks,” in CVPR, 2016.
[64] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and
L. Fei-Fei, “Large-scale video classification with convolutional neural
networks,” in CVPR, 2014.
[65] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” JMLR,
vol. 9, pp. 2579–2605, 2008.
[66] A. Mordvintsev, C. Olah, and M. Tyka, “Inceptionism: Going deeper
into neural networks,” Google Research Blog, 2015.
[67] B. Plummer, L. Wang, C. Cervantes, J. Caicedo, J. Hockenmaier, and
S. Lazebnik, “Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models,” in ICCV, 2015.
[68] J. Xu, T. Mei, T. Yao, and Y. Rui, “MSR-VTT: A large video description
dataset for bridging video and language,” in CVPR, 2016.
[69] H. Zhang, L. Pang, Y. Lu, and C. Ngo, “VIREO@ TRECVID 2016:
Multimedia event detection, ad-hoc video search, video to text description,” in TRECVID 2016 Workshop, 2016.
[70] D.-D. Le, S. Phan, V.-T. Nguyen, B. Renoust, T. A. Nguyen, V.-N.
Hoang, T. D. Ngo, M.-T. Tran, Y. Watanabe, M. Klinkigt et al., “NII-HITACHI-UIT at TRECVID 2016,” in TRECVID 2016 Workshop, 2016.
[71] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi,
“Collective generation of natural image descriptions,” in ACL, 2012.
[72] V. Ordonez, X. Han, P. Kuznetsova, G. Kulkarni, M. Mitchell, K. Yamaguchi, K. Stratos, A. Goyal, J. Dodge, A. Mensch et al., “Large scale
retrieval and generation of image descriptions,” IJCV, vol. 119, no. 1,
pp. 46–59, 2016.
Conditioning in Probabilistic Programming
Nils Jansen, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Federico Olmedo, Friedrich Gretz, and Annabelle McIver
RWTH Aachen University, Aachen, Germany
Macquarie University, Sydney, Australia
arXiv:1504.00198v1 [] 1 Apr 2015
Abstract—We investigate the semantic intricacies of conditioning, a main feature in probabilistic programming. We provide a
weakest (liberal) pre–condition (w(l)p) semantics for the elementary probabilistic programming language pGCL extended with
conditioning. We prove that quantitative weakest (liberal) pre–
conditions coincide with conditional (liberal) expected rewards in
Markov chains and show that semantically conditioning is a truly
conservative extension. We present two program transformations
which entirely eliminate conditioning from any program and
prove their correctness using the w(l)p–semantics. Finally, we
show how the w(l)p–semantics can be used to determine conditional probabilities in a parametric anonymity protocol and
show that an inductive w(l)p–semantics for conditioning in non–
deterministic probabilistic programs cannot exist.
I. INTRODUCTION
Probabilistic programming is en vogue [1], [2]. It is mainstream in machine learning for describing distribution functions; Bayesian inference is pivotal in their analysis. It is used
in security for describing both cryptographic constructions
such as randomized encryption and experiments defining security notions [3]. Probabilistic programs, being an extension
of familiar notions, render these various fields accessible to
programming communities. A rich palette of probabilistic
programming languages exists including Church [4] as well
as modern approaches like probabilistic C [5], Tabular [6]
and R2 [7].
Probabilistic programs are sequential programs having two
main features: (1) the ability to draw values at random from
probability distributions, and (2) the ability to condition values
of variables in a program through observations. The semantics of languages without conditioning is well–understood.
Kozen [8] considered denotational semantics, whereas McIver
and Morgan [9] provided a weakest (liberal) precondition
(w(l)p) semantics; a corresponding operational semantics is
given by Gretz et al. [10]. Other relevant works include
probabilistic power–domains [11], semantics of constraint
probabilistic programming languages [12], and semantics for
stochastic λ–calculi [13].
Conditioning of variables through observations is less well–
understood and raises various semantic difficulties as we will
discuss in this paper. Previous work on semantics for programs
with observe statements [7], [14] considers neither the possibility of non–termination nor the powerful feature of
non–determinism. In this paper, we thoroughly study a more
general setting which accounts for non–termination by means
of a very simple yet powerful probabilistic programming
language supporting non–determinism and observations. Let
us first study a few examples that illustrate the semantic
intricacies. The sample program snippet Pobs1
{x := 0} [1/2] {x := 1}; observe x = 1
assigns zero to the variable x with probability 1/2 while x is
assigned one with the same likelihood, after which we condition to the outcome x being one. The observe statement
blocks all runs violating its condition and prevents those runs
from happening. It differs, e.g., from program annotations
like (probabilistic) assertions [15]. The interpretation of the
program is the expected outcome conditioned on permitted
runs. For the sample program Pobs1 this yields the outcome
1 · 1—there is one feasible run that happens with probability
one with x being one. Whereas this is rather straightforward,
a slight variant like Pobs2
{x := 0; observe x = 1} [1/2] {x := 1; observe x = 1}
is somewhat more involved, as the entire left branch of the
probabilistic choice is infeasible. Is this program equivalent to
the sample program Pobs1 ?
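The blocking behaviour of observe can be mimicked operationally by rejection sampling, as in the following sketch of ours for Pobs1 (Pobs2 behaves identically under this scheme, since its observe statements simply reject the offending branch). This is an illustration, not the formal semantics developed below.

# Monte Carlo illustration: rejection sampling realizes the conditioning of
# observe for loop-free programs such as Pobs1.
import random

def run_pobs1():
    x = 0 if random.random() < 0.5 else 1
    if not (x == 1):          # observe x = 1
        return None           # run is blocked (rejected)
    return x

def conditional_expectation(program, samples=100_000):
    accepted = [v for v in (program() for _ in range(samples)) if v is not None]
    return sum(accepted) / len(accepted)

print(conditional_expectation(run_pobs1))   # approximately 1.0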
The situation becomes more intricate when considering
loopy programs that may diverge. Consider the programs Pdiv
(left) and Pandiv (right):
x := 1;
while (x = 1) {
x := 1
}
x := 1;
while (x = 1) {
{x := 1} [1/2] {x := 0};
observe x = 1
}
Program Pdiv diverges and therefore yields as expected outcome zero. Due to the conditioning on x=1, Pandiv admits
just a single—diverging—feasible run but this run almost
surely never happens. Its conditional expected outcome can
thus not be measured. It should be noted that programs with
(probabilistic) assertions must be loop–free to avoid similar
problems [15]. Other approaches insist on the absence of
diverging loops [16].
Intricacies also occur when conditioning is used in programs
that may abort. Consider the program
abort [1/2] {x := 0} [1/2] {x := 1};
{y := 0} [1/2] {y := 1}; observe x=0 ∨ y=0
where abort is the faulty aborting program which by definition does nothing else but diverge. The above program tosses a fair coin and depending on the outcome either diverges or tosses a fair coin twice. It finally conditions on at least once heads (x=0 or y=0). What is the probability that the outcome of the last coin toss was heads? The main issue here is how to treat the possibility of abortion.
Combining conditioning with non–determinism is complicated, too.1 Non–determinism is a powerful means to deal with unknown information, as well as to specify abstractions in situations where details are unimportant. Let program Pnondet be:
{{x := 5} {x := 2}} [1/4] {x := 2};
observe x > 3
where with probability 1/4, x is set to either 5 or 2 non–deterministically (denoted {x := 5} {x := 2}), while x is set to 2 with likelihood 3/4. Resolving the non–deterministic choice in favour of setting x to five yields an expectation of 5 for x, obtained as 5 · 1/4 rescaled over the single feasible run of Pnondet. Taking the right branch however induces an infeasible run due to the violation of the condition x > 3, yielding a non–measurable outcome.
The above issues—loops, divergence, and non–determinism—indicate that conditioning in probabilistic programs is far from trivial. This paper presents a thorough semantic treatment of conditioning in a probabilistic extension of Dijkstra’s guarded command language (known as pGCL [9]), an elementary though foundational language that includes (amongst others) parametric probabilistic choice. We take several semantic viewpoints. Reward Markov Decision Processes (RMDPs) [17] are used as the basis for an operational semantics. This semantics is rather simple and elegant while covering all aforementioned phenomena. In particular, it discriminates the programs Pdiv and Pandiv while it does not discriminate Pobs1 and Pobs2.
We also provide a weakest pre–condition (wp) semantics à la [9]. This is typically defined inductively over the structure of the program. We show that combining both non–determinism and conditioning cannot be treated in this manner. Given this impossibility result we present a wp–semantics for fully probabilistic programs, i.e., programs without non–determinism. To treat possibly non–terminating programs, due to e.g., diverging loops or abortion, this is complemented by a weakest liberal pre–condition (wlp) semantics. The wlp–semantics yields the weakest pre–expectation—the probabilistic pendant of weakest pre–condition—under which program P either does not terminate or establishes a post–expectation. It thus differs from the wp–semantics in not guaranteeing termination. The conditional weakest pre–expectation (cwp) of P with respect to post–expectation f is then given by normalizing wp[P](f) with respect to wlp[P](1). The latter yields the wp under which P either does not terminate or terminates while passing all observe statements. This is proven to correspond to conditional expected rewards in the RMDP–semantics, extending a similar result for pGCL [10]. Our semantic viewpoints are thus consistent for fully probabilistic programs. Besides, we show that conditioning is semantically a truly conservative extension. That is to say, our semantics is backward compatible with the (usual) pGCL semantics; this does not apply to alternative approaches such as R2 [7].
Finally, we show several practical applications of our results. We present two program transformations which entirely eliminate conditioning from any program and prove their correctness using the w(l)p–semantics. In addition, we show how the w(l)p–semantics can be used to determine conditional probabilities in a simplified version of the parametric anonymity protocol Crowds [18].
Summarized, we provide the first operational semantics for imperative probabilistic programming languages with conditioning and both probabilistic and non–deterministic choice. Furthermore we give a denotational semantics for the fully probabilistic case, which in contrast to [7], [14], where every program is assumed to terminate almost surely, takes the probability of non–termination into account. Finally, our semantics enables to prove the correctness of several program transformations that eliminate observe statements.
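To make the normalization of wp[P](f) by wlp[P](1) concrete, the following path-enumeration sketch (ours, not part of the paper) handles the abort/coin example above. The abort branch contributes only to non-termination, and the computed conditional probability of y=0 is 2/7, matching the value obtained later from the cwp-semantics.

# Path-enumeration sketch (a hand-coded model of the abort/coin example, not a
# general interpreter). Runs are tuples (probability, terminates, passes_observes, final_state).
from fractions import Fraction as F

runs = [(F(1, 2), False, True, None)]                 # abort branch: diverges
for x in (0, 1):
    for y in (0, 1):
        passes = (x == 0) or (y == 0)                 # observe x=0 or y=0
        runs.append((F(1, 2) * F(1, 4), True, passes, {"x": x, "y": y}))

def wp(f):    # expected value of f over terminating runs that pass all observes
    return sum(p * f(s) for p, term, ok, s in runs if term and ok)

def wlp1():   # probability of divergence or of passing all observes
    return sum(p for p, term, ok, _ in runs if (not term) or ok)

cwp = wp(lambda s: 1 if s["y"] == 0 else 0) / wlp1()
print(cwp)    # 2/7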
II. PRELIMINARIES
In this section we present the probabilistic programming
language used for our approaches and recall the notions of
expectation transformers and (conditional) expected reward
over Markov decision processes used to endow the language
with a formal semantics.
a) Probabilistic programs and expectation transformers: We adopt the probabilistic guarded command language
(pGCL) [9] for describing probabilistic programs. pGCL is
an extension of Dijkstra’s guarded command language (GCL)
[19] with a binary probabilistic choice operator and its syntax
is given by clause
P ::= skip | abort | x := E | P; P | ite (G) {P} {P}
| {P} [p] {P} | {P} {P} | while (G) {P} .
Here, x belongs to V, the set of program variables; E is an
arithmetical expression over V, G a Boolean expression over
V and p a real–valued parameter with domain [0, 1]. Most
of the pGCL instructions are self–explanatory; we elaborate
only on the following: {P } [p] {Q} represents a probabilistic
choice where programs P is executed with probability p and
program Q with probability 1−p. {P } {Q} represents a
non–deterministic choice between P and Q.
pGCL programs are given a formal semantics through the
notion of expectation transformers. Let S be the set of program
states, where a program state is a variable valuation. Now
assume that P is a fully probabilistic program, i.e. a program
without non–deterministic choices. We can see P as a mapping
from an initial state σ to a distribution over final states JP K(σ).
1 As stated in [2], “representing and inferring sets of distributions is more
complicated than dealing with a single distribution, and hence there are several
technical challenges in adding non–determinism to probabilistic programs”.
Given a random variable f : S → R≥0 , transformer wp[P ]
maps every initial state σ to the expected value EJP K(σ) (f )
of f with respect to the distribution of final states JP K(σ).
Symbolically,

wp[P](f)(σ) = E_{JP K(σ)}(f).

Definition II.1 (Parametric Discrete–time Reward Markov Decision Process). Let AP be a set of atomic propositions. A parametric discrete–time reward Markov decision process (RMDP) is a tuple R = (S, sI, Act, P, L, r) with a countable set of states S, a unique initial state sI ∈ S, a finite set of actions Act, a transition probability function P : S × Act → Distr(S) with Σ_{s′∈S} P(s, α)(s′) = 1 for all (s, α) ∈ S × Act, a labeling function L : S → 2^AP, and a reward function r : S → R≥0.
In particular, if f = χA is the characteristic function of
some event A, wp[P ](f ) retrieves the probability that the
event occurred after the execution of P . (Moreover, if P is a
deterministic program in GCL, EJP K(σ) (χA ) is {0, 1}–valued
and we recover the ordinary notion of weakest pre–condition
introduced by Dijkstra [19].)
In contrast to the fully probabilistic case, the execution of
a non–deterministic program P may lead to multiple—rather
than a single—distributions of final states. To account for
these kind of programs, the definition of wp[P ] is extended as
follows:
wp[P](f)(σ) = inf_{µ ∈ JP K(σ)} E_µ(f)
A path of R is a finite or infinite sequence π = s0 α0 s1 α1 . . .
such that si ∈ S, αi ∈ Act, s0 = sI , and P(si , αi )(si+1 ) > 0
for all i ≥ 0. A finite path is denoted by π̂ = s0 α0 . . . sn for
n ∈ N with last (π̂) = sn and |π| = n. The i-th state si of π is
denoted π(i). The set of all paths of R is denoted by PathsR
R
and sets of infinite or finite paths by PathsR
inf or Pathsfin ,
R
respectively. Paths (s) is the set of paths starting in s and
PathsR (s, s′ ) is the set of all finite paths starting in s and
ending in s′ . This is also lifted to sets of states. If clear from
the context we omit the superscript R.
An MDP operates by a non–deterministic choice of an
action α ∈ Act that is enabled at state s and a subsequent
probabilistic determination of a successor state according to
P(s, α). We denote the set of actions that are enabled at s by
Act(s) and assume that Act(s) 6= ∅ for each state s. A state
s with |Act(s)| = 1 is called fully probabilistic, and in this
case we use P(s, s′ ) as a shorthand for P(s, α)(s′ ) where
Act(s) = {α}. For resolving the non–deterministic choices,
so–called schedulers are used. In our setting, deterministic
schedulers suffice, which are partial functions S : PathsR
fin →
Act with S(π̂) ∈ Act(last (π̂)). A deterministic scheduler is
called memoryless if the choice depends only on the current
state, yielding a function S : S → Act. The class of all
(deterministic) schedulers for R is denoted by Sched R .
A parametric discrete–time reward Markov chain (RMC) is
an RMDP with only fully probabilistic states. For an RMC
we use the notation R = (S, sI , P, L, r) where P : S →
Distr(S) is called a transition probability matrix. For RMDP
R, the fully probabilistic system S R induced by a scheduler
S ∈ Sched R is an induced RMC. A probability measure
is defined on the induced RMCs. The measure for RMC R
R
is given by PrR : PathsR
fin → [0, 1] ⊆ R with Pr (π̂) =
Qn−1
i=0 P(si , si+1 ), for π̂ = s0 . . . sn . The probability measure
can be lifted to sets of (infinite) paths using a cylinder set
construction, see [21, Ch. 10]. The cumulated reward of a finite path π̂ = s0 . . . sn is given by r(π̂) = Σ_{i=0}^{n−1} r(si) as
the reward is “earned” when leaving the state.
We consider reachability properties of the form ♦ T for a
set of target states T = {s ∈ S | T ∈ L(s)} where T is
overloaded to be a set of states and a label in AP . The set
♦ T = {π ∈ Paths(sI , T ) | ∀0 ≤ i < |π|• π(i) 6∈ T } shall
be prefix–free and contain all paths of R that visit a target
state. Analogously, the set ¬♦ T = {π ∈ PathsR (sI ) | ∀i ≥
0• π(i) 6∈ T } contains all paths that never reach a state in T .
Let us first consider reward objectives for fully probabilistic
models, i.e., RMCs. The expected reward for a finite set of
In other words, wp[P ](f ) represents the tightest lower bound
that can be guaranteed for the expected value of f (we assume
that non-deterministic choices are resolved demonically2, attempting to minimize the expected value of f ).
In the following, we use the term expectation to refer to a
random variable mapping program states to real values. The
expectation transformer wp then transforms a post–expectation
f into a pre–expectation wp[P ](f ) and can be defined inductively, following the rules in Figure 2 (second column),
Page 7. The transformer wp also admits a liberal variant wlp,
which differs from wp on the way in which non–termination
is treated.
Formally, the transformer wp operates on unbounded expectations in E = S → R∞
≥0 and wlp operates on bounded
expectations in E≤1 = S → [0, 1]. Here R∞
≥0 denotes the
set of non–negative real values with the adjoined ∞ value.
In order to guarantee the well–definedness of wp and wlp we
need to provide E and E≤1 the structure of a directed–complete
partial order. Expectations are ordered pointwise, i.e. f ⊑ g
iff f (σ) ≤ g(σ) for every state σ ∈ S. The least upper bound
of directed subsets is also defined pointwise.
In what follows we use bold fonts for constant expectations,
e.g. 1 denotes the constant expectation 1. Given an arithmetical
expression E over program variables we simply write E for
the expectation that in state σ returns σ(E). Given a Boolean
expression G over program variables we use χG to denote
the {0, 1}–valued expectation that returns 1 if σ |= G and 0
otherwise.
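For a loop-free, fully probabilistic program the transformer can be computed by explicit enumeration of the final-state distribution. The following sketch (our own, with an arbitrary example program of two independent fair coin flips) illustrates that wp[P](f)(σ) is the expected value of f under JP K(σ).

# Sketch: represent [[P]](sigma) explicitly as a dictionary from final states
# to probabilities, and compute wp[P](f)(sigma) as the expected value of f.
def final_distribution(sigma):
    dist = {}
    for x, px in ((0, 0.5), (1, 0.5)):          # {x := 0} [1/2] {x := 1}
        for y, py in ((0, 0.5), (1, 0.5)):      # {y := 0} [1/2] {y := 1}
            tau = dict(sigma, x=x, y=y)
            key = tuple(sorted(tau.items()))
            dist[key] = dist.get(key, 0.0) + px * py
    return dist

def wp(f, sigma):
    return sum(p * f(dict(state)) for state, p in final_distribution(sigma).items())

print(wp(lambda s: 1.0 if s["x"] == 0 else 0.0, {}))   # 0.5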
b) MDPs and conditional expected rewards: Let V be
a finite set of parameters. A parametric distribution over a countable set S is a function µ : S → ZV with Σ_{s∈S} µ(s) =
1, where ZV denotes the set of all polynomials3 over V .
Distr (S) denotes the set of parametric distributions over S.
2 Demonic schedulers induce the most pessimistic expected outcome while
in [20] also angelic schedulers are considered which guarantee the most
optimistic outcome.
3 Although parametric distributions are defined as polynomials over the
parameters, we only use p and 1 − p for p ∈ V
paths ♦ T ∈ PathsR
fin is
X
ExpRewR (♦ T ) ,
the observe statements do not only block further execution
but condition resulting distributions on the program’s state to
only those executions satisfying the observations. Consider
two small example programs:
PrR (π̂) · r(π̂) .
π̂∈♦ T
For a reward bounded by one, the notion of the liberal
expected reward also takes the mere probability of not reaching
the target states into account:
{x := 0} [p] {x := 1};
{y := 0} [q] {y := −1}
{x := 0} [p] {x := 1};
{y := 0} [q] {y := −1};
observe x + y = 0
LExpRewR (♦ T ) , ExpRewR (♦ T ) + PrR (¬♦ T )
The left program establishes that the probability of x=0
is p, whereas for the right program this probability is
pq
pq+(1−p)(1−q) . The left program admits all (four) runs, two of
which satisfy x=0. Due to the observe statement requiring
x+y=0, the right program, however, admits only two runs
(x=0, y=0 and x=1, y=−1), satisfying x=0.
In Section V we will focus on the subclass of fully probabilistic programs in cpGCL, which we denote cpGCL⊠ .
A liberal expected reward will later represent the probability
of either establishing some condition or not terminating.
To explicitly exclude the probability of paths that reach
“undesired” states, we let U = {s ∈ S | ∈ L(s)} and define
the conditional expected reward for the condition ¬♦ U by4
CExpRewR (♦ T | ¬♦ U ) ,
ExpRewR (♦ T ∩ ¬♦ U )
.
PrR (¬♦ U )
For details about conditional probabilities and expected rewards, we refer to [22]. Conditional liberal expected rewards
are defined by
CLExpRewR (♦ T | ¬♦ U ) ,
IV. O PERATIONAL S EMANTICS
ExpRew (♦ T ) ,
LExpRew (♦ T ∩ ¬♦ U )
.
PrR (¬♦ U )
h i
hinit i
inf
S∈Sched R
ExpRew
R
S
S∈Sched R
CExpRew
S
=
inf
S∈Sched R
X
XXXX
hsink i
(♦ T ) .
diverge
Terminating runs eventually end up in the hsink i state; other
runs are diverging (never reach hsink i). A program terminates
either successfully, i.e. a run passes a X–labelled state, or
terminates due to a false observation, i.e. a run passes h i.
Squiggly arrows indicate reaching certain states via possibly
multiple paths and states; the clouds indicate that there might
be several states of the particular kind. The X–labelled states
are the only ones with positive reward. Note that the sets of
paths that eventually reach h i, eventually reach X, or diverge,
are pairwise disjoint.
CExpRewR (♦ T | ¬♦ U )
inf
X
S
The scheduler for conditional expected reward properties minimizes the value of the quotient:
,
cpGCL
This section presents an operational semantics for cpGCL
using RMDPs as underlying model inspired by [10]. Schematically, the operational RMDP of a cpGCL program shall have
the following structure:
R
Reward objectives for RMDPs are now defined using a demonic scheduler S ∈ Sched R minimizing probabilities and
expected rewards for the induced RMC SR. For the expected
reward this yields
R
FOR
ExpRew
R
R
SR
Pr
(♦ T | ¬♦ U )
(♦ T ∩ ¬♦ U )
(¬♦ U )
The liberal reward notions for RMDPS are analogous. Regarding the quotient minimization we assume “ 00 < 0” as we see
0
0 —being undefined—to be less favorable than 0.
III. C ONDITIONAL pGCL
Definition IV.1 (Operational cpGCL semantics). The operational semantics of P ∈ cpGCL for σ ∈ S and f ∈ E is
the RMDP Rfσ JP K = (S, hP, σi, Act, P, L, r), such that S
is the smallest set of states with h i ∈ S, hsink i ∈ S, and
hQ, τ i, h↓, τ i ∈ S for Q ∈ pGCL and τ ∈ S. hP, σi ∈ S is
the initial state. Act = {left , right } is the set of actions. P is
formed according to the rules given in Figure 1. The labelling
and the reward function are given by:
{X},
if s = h↓, τ i, for some τ ∈ S
{sink }, if s = hsink i
L(s) ,
{ },
if s = h i
∅,
otherwise,
As mentioned in Section II, pGCL programs can be considered as distribution transformers. Inspired by [2], we extend
pGCL with observe statements to obtain conditional pGCL
(cpGCL, for short). This is done by extending the syntax
of pGCL (p. 2) with observe G where G is a Boolean
expression over the program variables. When a program’s execution reaches observe G with a current variable valuation
σ 6|= G, further execution of the program is blocked as with
an assert statement [23]. In contrast to assert, however,
4 Note that strictly formal one would have to define the intersection of sets
of finite and possibly infinite paths by means of a cylinder set construction
considering all infinite extensions of finite paths.
(terminal)
(skip)
h↓, σi −→ hsink i
(abort)
(undesired)
h i −→ hsink i
σ |= G
σ 6|= G
(observe)
(assign)
hx := E, σi −→ h↓, σ[x ← JEKσ ]i
hobserve G, σi −→ h↓, σi
hobserve G, σi −→ h i
hP, σi −→ h i
hP, σi −→ µ
(concatenate)
, where ∀P ′ . ν(hP ′ ; Q, σ′ i) := µ(hP ′ , σ′ i)
h↓; Q, σi −→ hQ, σi
hP ; Q, σi −→ h i
hP ; Q, σi −→ ν
σ 6|= G
σ |= G
(if)
hite (G) {P } {Q}, σi −→ hP, σi
hite (G) {P } {Q}, σi −→ hQ, σi
σ 6|= G
σ |= G
(while)
hwhile (G) {P }, σi −→ hP ; while (G) {P }, σi
hwhile (G) {P }, σi −→ h↓, σi
(prob. choice)
h{P } [p] {Q}, σi −→ ν
(non–det. choice)
hskip, σi −→ h↓, σi
habort, σi −→ habort, σi
, where ν(hP, σi) := p, ν(hQ, σi) := 1 − p
left
right
h{P } {Q}, σi −−−→ hP, σi
h{P } {Q}, σi −−−−→ hQ, σi
Fig. 1. Rules for the construction of the operational RMDPs. If not stated otherwise, hsi−→hti is a shorthand for hsi−→µ ∈ Distr (S) with µ(hti) = 1. A
terminal state of the form h↓, σi indicates successful termination. Terminal states and h i go to the hsink i state. skip without context terminates successfully.
abort self–loops, i.e. diverges. x := E alters the variable valuation according to the assignment then terminates successfully. For the concatenation, h↓; Q, σi
indicates successful termination of the first program, so the execution continues with hQ, σi. If for P ; Q the execution of P leads to h i, P ; Q does so, too.
Otherwise, for hP, σi−→µ, µ is lifted such that Q is concatenated to the support of µ. If for the conditional choice σ |= G holds, P is executed, otherwise
Q. The case for while is similar. For the probabilistic choice, a distribution ν is created according to p. For {P } {Q}, we call P the left choice and Q
the right choice for actions left, right ∈ Act. For the observe statement, if σ |= G observe acts like skip. Otherwise, the execution leads directly to
h i indicating a violation of the observe statement.
r(s) ,
(
f (τ ),
0,
hP, σI i
if s = h↓, τ i, for some τ ∈ S
otherwise
q
where a state of the form h↓, τ i denotes a terminal state in
which no program is left to be executed.
hP4 ; P3 , σI i
To determine the conditional expected outcome of program
P given that all observations are true, we need to determine
the expected reward to reach hsink i from the initial state
conditioned on not reaching h i under a demonic scheduler.
f
For Rfσ JP K this is given by CExpRewRσ JP K (♦sink | ¬♦ ).
Recall for the condition ¬♦ that all paths not eventually
reaching h i either diverge (thus collect reward 0) or pass by
a X–labelled state and eventually reach hsink i. This gives us:
h↓; P3 , σI [x/5]i
CExpRew
Rfσ JP K
5
=
inf
(♦sink ∩ ¬♦ )
S
f
S∈Sched Rσ JP K
Rfσ JP K
Pr
(¬♦ )
S
=
inf
f
S∈Sched Rσ JP K
ExpRew
Rfσ JP K
(♦sink )
S
Pr
Rfσ JP K
right
hP2 ; P3 , σI i
hP3 , σI [x/5]i
h↓; P3 , σI [x/2]i
h↓, σI [x/5]i
h P3 , σI [x/2]i
h i
The only state with positive reward is s′ := h↓, σI [x/5]i and
its reward is indicated by number 5. Assume first a scheduler
choosing action left in state hP1 ; P3 , σI i. In the induced RMC
the only path accumulating positive reward is the path π going
from hP, σI i via s′ to hsink i with r(π) = 5 and Pr(π) = q.
This gives an expected reward of 5 · q. The overall probability
of not reaching h i is also q. The conditional expected reward
of eventually reaching hsink i given that h i is not reached
is hence 5·q
= 5. Assume now the minimizing scheduler
q
choosing right at state hP1 ; P3 , σI i. In this case there is
no path having positive accumulated reward in the induced
RMC, yielding an expected reward of 0. The probability of
not reaching h i is also 0. The conditional expected reward in
this case is undefined (0/0) thus the right branch is preferred
over the left branch.
In general, the operational RMDP is not finite, even if the
program terminates almost–surely (i.e. with probability 1).
S
Rfσ JP K
hP1 ; P3 , σI i
hsink i
(♦sink | ¬♦ )
ExpRew
left
1−q
(¬♦ )
f
This is analogous for CLExpRewRσ JP K (♦sink | ¬♦ ).
Example IV.1. Consider the program P ∈ cpGCL:
{{x := 5} {x := 2}} [q] {x := 2};
observe x > 3
where with parametrized probability q a non–deterministic
choice between x being assigned 2 or 5 is executed, and
with probability 1 − q, x is directly assigned 2. Let for
readability P1 = {x := 5} {x := 2}, P2 = x := 2,
P3 = observe x > 3, and P4 = x := 5. The operational
RMDP RxσI JP K for an arbitrary initial variable valuation σI
and post–expectation x is depicted below.
V. D ENOTATIONAL S EMANTICS
FOR
cpGCL⊠
This section presents an expectation transformer semantics
for the fully probabilistic fragment cpGCL⊠ of cpGCL. We
cwp[P1 ] and cwp[P2 ], weighted according to p. cwp[while
(G) {P ′ }] is defined using standard fixed point techniques.6
The cwlp transformer follows the same rules as cwp, except
for the abort and while statements. cwlp[abort] takes any
post–expectation to pre–expectation (1, 1) and cwlp[while
(G) {P }] is defined as a greatest fixed point rather than a
least fixed point.
formally relate this to the wp/wlp–semantics of pGCL as well
as to the operational semantics from the previous section.
A. Conditional Expectation Transformers
An expectation transformer semantics for the fully probabilistic fragment of cpGCL is defined using the operators:
cwp[ · ] : E × E≤1 → E × E≤1
cwlp[ · ] : E≤1 × E≤1 → E≤1 × E≤1
Example V.1. Consider the program P ′
{x := 0} [1/2] {x := 1};
ite (x = 1) {y := 0} [1/2] {y := 2}
{y := 0} [4/5] {y := 3} ;
observe y = 0
1
These functions can intuitively be viewed as the counterpart
of wp and wlp respectively, as shortly shown. The weakest
conditional pre–expectation cwp[P ](f ) of P ∈ cpGCL⊠ with
respect to post–expectation f is now given as
cwp[P ](f ) ,
2
3
Assume we want to compute the conditional expected value
of the expression 10+x given that the observation y=0 is
passed. This expected value is given by cwp[P ′ ](10+x) and
the computation of cwp[P ′ ](10+x, 1) goes as follows:
cwp1 [P ](f, 1)
,
cwp2 [P ](f, 1)
where cwp1 [P ](f, g) (resp. cwp2 [P ](f, g)) denotes the first
(resp. second) component of cwp[P ](f, g) and 1 is the constant expectation one. The weakest liberal conditional pre–
expectation cwlp[P ](f ) is defined analogously. In words,
cwp[P ](f )(σ) represents the expected value of f with respect
to the distribution of final states obtained from executing P in
state σ, given that all observe statements occurring along
the runs of P were satisfied. The quotient defining cwp[P ](f )
is interpreted is the same way as the quotient
cwp[P ′ ](10+x, 1)
′
= cwp[P1-2
](cwp[observe y = 0](10+x, 1))
′
= cwp[P1-2
](f, g) where (f, g) = χy=0 · (10+x, 1)
′
= cwp[P1-1 ](cwp[ite (x=1) {. . .} {. . .}](f, g))
′
= cwp[P1-1
](χx=1 · (h, i) + χx6=1 · (h′ , i′ )) where
(h, i) = cwp[{y := 0} [1/2] {y := 2}](f, g)
=
1
2
=
4
5
· (10 + x, 1) , and
(h , i ) = cwp[{y := 0} [4/5] {y := 3}](f, g)
Pr(A ∩ B)
Pr(B)
′
encoding the conditional probability Pr(A|B). However, here
we measure the expected value of random variable f 5 . The
denominator cwp2 [P ](f, 1)(σ) measures the probability that
P satisfies all the observations (occurring along valid runs)
from the initial state σ. If cwp2 [P ](f, 1)(σ) = 0, program P
is infeasible from state σ and in this case cwp[P ](f )(s) is not
well–defined (due to the division by zero). This corresponds
to the conditional probability Pr(A|B) being not well–defined
when Pr(B) = 0.
The operators cwp and cwlp are defined inductively on the
program structure, see Figure 2 (last column). Let us briefly
explain this. cwp[skip] behaves as the identity since skip
has no effect on the program state. cwp[abort] maps any pair
of post–expectations to the pair of constant pre–expectations
(0, 1). Assignments induce a substitution on expectations,
i.e. cwp[x := E] maps (f, g) to pre–expectation (f [x/E],
g[x/E]), where h[x/E](σ) = h(σ[x/E]) and σ[x/E] denotes the usual variable update on states. cwp[P1 ; P2 ] is
obtained as the functional composition (denoted ◦) of cwp[P1 ]
and cwp[P2 ]. cwp[observe G] restricts post–expectations to
those states that satisfy G; states that do not satisfy G
are mapped to 0. cwp[ite (G) {P1 } {P2 }] behaves either
as cwp[P1 ] or cwp[P2 ] according to the evaluation of G.
cwp[{P1 } [p] {P2 }] is obtained as a convex combination of
=
=
′
· (10 + x, 1)
1
2
· 54 · (10 + 0, 1) + 12 · 12 · (10
4, 52 + 11
, 41 = 27
, 13
4
4 20
+ 1, 1)
and the conditional expected
Then cwp[P ′ ](10+x) = 135
13
value of 10+x is approximately 10.38.
In the rest of this section we investigate some properties of
the expectation transformer semantics of cpGCL⊠ . As every
fully probabilistic pGCL program is contained in cpGCL⊠ ,
we first study the relation of the cw(l)p– to the w(l)p–
semantics of pGCL. To that end, we extend the weakest
(liberal) pre–expectation operator to cpGCL as follows:
wp[observe G](f ) = χG ·f
wlp[observe G](f ) = χG ·f .
To relate the cw(l)p– and w(l)p–semantics we heavily rely
on the following result which says that cwp (resp. cwlp) can
be decoupled as the product wp × wlp (resp. wlp × wlp).
Theorem V.1 (Decoupling of cw(l)p). For P ∈ cpGCL⊠ ,
f ∈ E, and f ′ , g ∈ E≤1 :
cwp[P ](f, g) = wp[P ](f ), wlp[P ](g)
cwlp[P ](f ′ , g) = wlp[P ](f ), wlp[P ](g)
6 We define cwp[while (G) {P }] by the least fixed point w.r.t. the order
(⊑, ⊒) in E×E≤1 . This way we encode the greatest fixed point in the second
component w.r.t. the order ⊑ over E≤1 as the least fixed point w.r.t. the dual
order ⊒.
5 In fact, cwp[P ](f )(σ) corresponds to the notion of conditional expected
value or in simpler terms, the expected value over a conditional distribution.
P
wp[P ](f )
cwp[P ](f, g)
skip
abort
x := E
f
0
(f, g)
(0, 1)
observe G
f [x/E]
χG · f
(f [x/E], g[x/E])
χG · (f, g)
P1 ; P2
ite (G) {P1 } {P2 }
(wp[P1 ] ◦ wp[P2 ])(f )
χG · wp[P1 ](f ) + χ¬G · wp[P2 ](f )
(cwp[P1 ] ◦ cwp[P2 ])(f, g)
χG · cwp[P1 ](f, g) + χ¬G · cwp[P2 ](f, g)
{P1 } [p] {P2 }
{P1 } {P2 }
P
p · wp[P1 ](f ) + (1 − p) · wp[P2 ](f )
λσ • min{wp[P
1 ](f )(σ), wp[P2 ](f
)(σ)}
′ ˆ
ˆ
µ f • χG · wp[P ](f ) + χ¬G · f
wlp[P ](f )
p · cwp[P1 ](f, g) + (1 − p) · cwp[P2 ](f, g)
— not defined —
µ⊑,⊒ (fˆ, ĝ)• χG · cwp[P ′ ](fˆ, ĝ) + χ¬G · (f, g)
abort
1
(1, 1)
while (G) {P ′ }
ν fˆ•
while (G) {P ′ }
χG · wp[P ′ ](fˆ) + χ¬G · f
cwlp[P ](f, g)
ν ⊑,⊑ (fˆ, ĝ)•
χG · cwp[P ′ ](fˆ, ĝ) + χ¬G · (f, g)
Fig. 2. Definitions for the wp/wlp and cwp/cwlp operators. The wlp (cwlp) operator differs from wp (cwp) only for abort and the while–loop. A
scalar multiplication a · (f, g) is meant componentwise yielding (a · f, a · g). Likewise an addition (f, g) + (f ′ , g ′ ) is also meant componentwise yielding
(f + f ′ , g + g ′ ).
cpGCL⊠ . By Theorem V.1, the transformers cwlp[P ] and
cwlp[P ] can be recast as:
Proof. By induction on the program structure. See Appendix B
for details.
Let pGCL⊠ denote the fully probabilistic fragment of pGCL.
We show that the cwp–semantics is a conservative extension
of the wp–semantics for pGCL⊠ . The same applies to the
weakest liberal pre–expectation semantics.
f 7→
wp[P ](f )
wlp[P ](1)
and f 7→
wlp[P ](f )
,
wlp[P ](1)
respectively. Recall that wlp[P ](1) yields the weakest pre–
expectation under which P either does not terminate or
does terminate while passing all observe–statements. An
alternative is to normalize using wp in the denominator instead
of wlp, yielding:
Theorem V.2 (Compatibility with the w(l)p–semantics). For
P ∈ pGCL⊠ , f ∈ E, and g ∈ E≤1 :
wp[P ](f ) = cwp[P ](f ) and wlp[P ](g) = cwlp[P ](g)
f 7→
Proof. By Theorem V.1 and the fact that cwlp[P ](1) = 1 (see
Lemma V.3).
wp[P ](f )
wp[P ](1)
and
f 7→
wlp[P ](f )
wp[P ](1)
Proof. Using Theorem V.1 one can show that the transformers cwp/cwlp inherit these properties from the transformers
wp/wlp. For details we refer to Appendix D.
The transformer on the right is not meaningful, as the denominator wp[P ](1)(σ) may be smaller than the numerator
wlp[P ](f )(σ) for some state σ ∈ S. This would lead to
probabilities exceeding one. The transformer on the left normalizes w.r.t. the terminating executions. This interpretation
corresponds to the semantics of the probabilistic programming
language R2 [7], [14] and is only meaningful if programs
terminate almost surely (i.e. with probability one).
A noteworthy consequence of adopting this semantics is that
observe G is equivalent to while (¬G) {skip} [14], see the
discussion in Section VI.
Let us briefly compare the four alternatives. To that end
consider the program P below
abort [1/2] {x := 0} [1/2] {x := 1};
We conclude this section by discussing alternative approaches
for providing an expectation transformer semantics for P ∈
P tosses a fair coin and according to the outcome either
diverges or tosses a fair coin twice and observes at least
once heads (y=0 ∨ x=0). We measure the probability that
We now investigate some elementary properties of cwp and
cwlp such as monotonicity and linearity.
Lemma V.3 (Elementary properties of cwp and cwlp). For
every P ∈ cpGCL⊠ with at least one feasible execution
(from every initial state), post–expectations f, g ∈ E and non–
negative real constants α, β:
i) f ⊑ g implies cwp[P ](f ) ⊑ cwp[P ](g) and likewise for
cwlp.
ii) cwp[P ](α · f + β · g) = α · cwp[P ](f ) + β · cwp[P ](g).
iii) cwp[P ](0) = 0 and cwlp[P ](1) = 1.
{y := 0} [1/2] {y := 1}; observe x = 0 ∨ y = 0
the outcome of the last coin toss was heads according to each
transformer:
2
wp[P ](χy=0 )
=
wlp[P ](1)
7
wlp[P ](χy=0 )
6
=
wlp[P ](1)
7
wp[P ](χy=0 )
2
=
wp[P ](1)
3
wlp[P ](χy=0 )
=2
wp[P ](1)
VI. A PPLICATIONS
In this section we study approaches that make use of our
semantics in order to analyze fully probabilistic programs with
observations. We first present a program transformation based
on hoisting observe statements in a way that probabilities
of conditions are extracted, allowing for a subsequent analysis
on an observation–free program. Furthermore, we discuss how
observations can be replaced by loops and vice versa. Finally,
we use a well–known case study to demonstrate the direct
applicability of our cwp–semantics.
A. Observation Hoisting

In what follows we give a semantics-preserving transformation for removing observations from cpGCL⊠ programs. Intuitively, the program transformation "hoists" the observe statements while updating the probabilities in case of probabilistic choices. Given P ∈ cpGCL⊠, the transformation delivers a semantically equivalent observe-free program P̂ ∈ pGCL⊠ and—as a side product—an expectation ĥ ∈ E≤1 that captures the probability of the original program to establish all observe statements. For intuition, reconsider the program from Example V.1. The transformation yields the program

    {x := 0} [8/13] {x := 1};
    ite (x = 1) {{y := 0} [1] {y := 2}} {{y := 0} [1] {y := 3}}

and expectation ĥ = 13/20. By eliminating dead code in both probabilistic choices and coalescing the branches in the conditional, we can simplify the program to:

    {x := 0} [8/13] {x := 1}; y := 0

As a sanity check note that the expected value of 10+x in this program is equal to 10 · 8/13 + 11 · 5/13 = 135/13, which agrees with the result obtained in Example V.1 by analyzing the original program. Formally, the program transformation is given by a function

    T : cpGCL⊠ × E≤1 → cpGCL⊠ × E≤1.

To apply the transformation to a program P we need to determine T(P, 1), which gives the semantically equivalent program P̂ and the expectation ĥ.

The transformation is defined in Figure 3 and works by inductively computing the weakest pre-expectation that guarantees the establishment of all observe statements and updating the probability parameter of probabilistic choices so that the pre-expectations of their branches are established in accordance with the original probability parameter. The computation of these pre-expectations is performed following the same rules as the wlp operator. The correctness of the transformation is established by the following theorem, which states that a program and its transformed version share the same terminating and non-terminating behavior.
Theorem VI.1 (Program Transformation Correctness). Let P ∈ cpGCL⊠ admit at least one feasible run for every initial state and T(P, 1) = (P̂, ĥ). Then for any f ∈ E and g ∈ E≤1,

    wp[P̂](f) = cwp[P](f)   and   wlp[P̂](g) = cwlp[P](g).

Proof. See Appendix I.

A similar program transformation has been given in [7]. Whereas they use random assignments to introduce randomization in their programming model, we use probabilistic choices. Consequently, they can hoist observe statements only until the occurrence of a random assignment, while we are able to hoist observe statements through probabilistic choices and completely remove them from programs. Another difference is that their semantics only accounts for terminating program behaviors and thus can guarantee the correctness of the program transformation for terminating behaviors only. Our semantics is more expressive and enables establishing the correctness of the program transformation for non-terminating program behavior, too.
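As an illustration of the rules in Figure 3 below, the following Python sketch hand-applies the cases for observe, assignment and probabilistic choice to the two-coin program {x := 0} [p] {x := 1}; {y := 0} [p] {y := 1}; observe x ≠ y (the program that reappears in Example VI.1). This is not the paper's implementation: expectations are represented ad hoc as dictionaries over the four (x, y) valuations, and p = 1/3 is an arbitrary choice. The output matches the intuition behind the transformation: the outer choice becomes fair, the inner choice becomes deterministic given x, and ĥ = 2p(1 − p).

from fractions import Fraction as Fr
from itertools import product

# States are the four (x, y) valuations; expectations are dicts of Fractions.
STATES = list(product([0, 1], repeat=2))

def const(c):        return {s: Fr(c) for s in STATES}
def pred(phi):       return {s: Fr(1) if phi(*s) else Fr(0) for s in STATES}
def subst_x(f, e):   return {(x, y): f[(e(x, y), y)] for (x, y) in STATES}
def subst_y(f, e):   return {(x, y): f[(x, e(x, y))] for (x, y) in STATES}

p = const(Fr(1, 3))   # bias of both coins (arbitrary choice for the demo)

# T(observe x != y, 1) = (skip, chi_{x != y})
h = pred(lambda x, y: x != y)

# T({y := 0} [p] {y := 1}, h): probabilistic-choice rule of Fig. 3
fP, fQ = subst_y(h, lambda x, y: 0), subst_y(h, lambda x, y: 1)
den = {s: p[s] * fP[s] + (1 - p[s]) * fQ[s] for s in STATES}
p_y = {s: p[s] * fP[s] / den[s] if den[s] != 0 else Fr(0) for s in STATES}
h = den

# T({x := 0} [p] {x := 1}, h): the same rule one level further out
gP, gQ = subst_x(h, lambda x, y: 0), subst_x(h, lambda x, y: 1)
den = {s: p[s] * gP[s] + (1 - p[s]) * gQ[s] for s in STATES}
p_x = {s: p[s] * gP[s] / den[s] for s in STATES}
h = den

print(p_x[(0, 0)])               # updated outer parameter: 1/2 in every state
print(p_y[(1, 0)], p_y[(0, 0)])  # updated inner parameter: 1 if x = 1, 0 if x = 0
print(h[(0, 0)])                 # hat-h = 2*p*(1-p) = 4/9 for p = 1/3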
B. Replacing Observations by Loops

For semantics that normalize with respect to the terminating behavior of programs, observe statements can readily be replaced by a loop [24], [14]. In our setting a more intricate transformation is required to eliminate observations from programs. Briefly stated, the idea is to restart a violating run from the initial state until it satisfies all encountered observations. To achieve this we consider a fresh variable rerun and transform a given program P ∈ cpGCL⊠ into a new program P′ as described below:

    observe G          →  ite (¬G) {rerun := true} {skip}
    abort              →  ite (¬rerun) {abort} {skip}
    while (G) {. . .}  →  while (G ∧ ¬rerun) {. . .}

For conditional and probabilistic choice, we apply the above rules recursively to the subprograms.

The aim of the transformation is twofold. First, the program P′ flags the violation of an observe statement through the variable rerun. If a violation occurs, rerun is set to true while in contrast to the original program we continue the program execution. As a side effect, we may introduce some subsequent diverging behavior which would not be present in the original program (since the execution would have already been blocked). The second aim of the transformation is to avoid this possible diverging behavior. This is achieved by blocking while-loops and abort statements once rerun is set to true.

Now we can get rid of the observations in P by repeatedly executing P′ from the same initial state till rerun is set to false (which would intuitively correspond to P passing all its observations). This is implemented by program P′′ below:

    s1, . . . , sn := x1, . . . , xn; rerun := true;
    while(rerun) { x1, . . . , xn := s1, . . . , sn; P′ }

Here, s1, . . . , sn are fresh variables and x1, . . . , xn are all program variables of P. The first assignment stores the initial state in the variables si and the first line of the loop body ensures that the loop always starts with the same (initial) values.

Theorem VI.2. Let programs P and P′′ be as above. Then

    cwp[P](f) = wp[P′′](f).

Proof. See Appendix J.

Example VI.1. Consider the following cpGCL program:

    {x := 0} [p] {x := 1}; {y := 0} [p] {y := 1};
    observe x ≠ y

We apply the program transformation to it and obtain:

    s1, s2 := x, y; rerun := true;
    while(rerun){
        x, y := s1, s2; rerun := false;
        {x := 0} [p] {x := 1};
        {y := 0} [p] {y := 1};
        if(x = y){ rerun := true }
    }

This program is simplified by a data flow analysis: The variables s1 and s2 are irrelevant because x and y are overwritten in every iteration. Furthermore, there is only one observation so that its predicate can be pushed directly into the loop's guard. Then the initial values of x and y may be arbitrary but they must be equal to make sure the loop is entered. This gives the final result

    x, y := 0, 0;
    while(x = y){
        {x := 0} [p] {x := 1}; {y := 0} [p] {y := 1}
    }

This program is a simple algorithm that repeatedly uses a biased coin to simulate an unbiased coin flip. A proof that x is indeed distributed uniformly over {0, 1} has been previously shown e.g. in [25].

Theorem VI.2 shows how to define and effectively calculate the conditional expectation using a straightforward program transformation and the well established notion of wp. However, in practice it will often be infeasible to calculate the fixed point of the outer loop or to find a suitable loop invariant, even though it exists.
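The following short simulation is an illustrative sketch, not part of the paper: it runs the final loop of Example VI.1 and confirms empirically that x leaves the loop uniformly distributed over {0, 1}, independent of the bias p. The values p = 0.2 and the sample size are arbitrary choices.

import random

def unbiased_from_biased(p, rng):
    # The loop obtained in Example VI.1: re-flip both biased coins until they
    # differ; x := 0 is taken with probability p in each flip.
    x = y = 0
    while x == y:
        x = 0 if rng.random() < p else 1
        y = 0 if rng.random() < p else 1
    return x

rng = random.Random(42)
p, n = 0.2, 100_000
ones = sum(unbiased_from_biased(p, rng) for _ in range(n))
print(ones / n)   # close to 0.5 for any bias 0 < p < 1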
    T(skip, f)              = (skip, f)
    T(abort, f)             = (abort, 1)
    T(x := E, f)            = (x := E, f[E/x])
    T(observe G, f)         = (skip, χG · f)
    T(ite (G) {P} {Q}, f)   = (ite (G) {P′} {Q′}, χG · fP + χ¬G · fQ)
                              where (P′, fP) = T(P, f), (Q′, fQ) = T(Q, f)
    T({P} [p] {Q}, f)       = ({P′} [p′] {Q′}, p · fP + (1−p) · fQ)
                              where (P′, fP) = T(P, f), (Q′, fQ) = T(Q, f),
                              and p′ = p · fP / (p · fP + (1−p) · fQ)
    T(while (G) {P}, f)     = (while (G) {P′}, f′)
                              where f′ = ν X • (χG · (π2 ◦ T)(P, X) + χ¬G · f),
                              and (P′, ·) = T(P, f′)
    T(P; Q, f)              = (P′; Q′, f′′)
                              where (Q′, f′) = T(Q, f), (P′, f′′) = T(P, f′)

Fig. 3. Program transformation for eliminating observe statements in cpGCL⊠.
C. Replacing Loops by Observations

In this section we provide an overview on how the aforementioned result can be "applied backwards" in order to replace a loop by an observe statement. This is useful as it is easier to analyze a loop-free program with observations than a program with loops for which fixed points need to be determined.

The transformation presented in Section VI-B yields programs of a certain form: In every loop iteration the variable values are initialized independently from their values after the previous iteration. Hence the loop iterations generate a sequence of program variable valuations that are independent and identically distributed (iid loop), cf. Example VI.1 where no "data flow" between iterations of the loop occurs.

In general, if loop = while(G){P} is an iid loop we can obtain a program Q = P; observe ¬G with

    wp[loop](f) = cwp[Q](f)

for any expectation f ∈ E. To see this, apply Theorem VI.2 to program Q. Let the resulting program be loop'. As in Example VI.1, note that there is only one observe statement at the end of loop' and furthermore there is no data flow between iterations of loop'. Hence by the same simplification steps we arrive at the desired program loop.

D. The Crowds Protocol

To demonstrate the applicability of the cwp-semantics to a practical example, consider the Crowds-protocol [18]. A set of nodes forms a fully connected network called the crowd. Crowd members would like to exchange messages with a server without revealing their identity to the server. To achieve this, a node initiates communication by sending its message to a randomly chosen crowd member, possibly itself. Upon receiving a message a node probabilistically decides to either forward the message once again to a randomly chosen node in the network or to relay it to the server directly. A commonly studied attack scenario is that some malicious nodes called collaborators join the crowd and participate in the protocol with the aim to reveal the identity of the sender. The following cpGCL-program P models this protocol where p is the forward probability and c is the fraction of collaborating nodes in the crowd. The initialization corresponds to the communication initiation.

    init:  {intercepted := 1} [c] {intercepted := 0};
           delivered := 0; counter := 1
    loop:  while(delivered = 0) {
               {counter := counter + 1;
                {intercepted := 1} [c] {skip}}
               [p]
               {delivered := 1}
           };
           observe(counter ≤ k)

Our goal is to determine the probability of a message not being intercepted by a collaborator. We condition this by the observation that a message is forwarded at most k times. Note that the operational semantics of P produce an infinite parametric RMC since the value of k is fixed but arbitrary. Using Theorem V.1 we express the probability that a message is not intercepted given that it was rerouted no more than k times by

    cwp[P]([¬intercepted]) = wp[P]([¬intercepted]) / wlp[P](1)        (1)

The computation of this quantity requires to find fixed points, cf. Appendix K for details. As a result we obtain a closed form solution parametrized in p, c, and k:

    ( (1−c)(1−p) / (1 − p(1−c)) ) · ( (1 − (p(1−c))^k) / (1 − p^k) )

The automation of such analyses remains a challenge and is part of ongoing and future work.
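The closed form can be cross-checked by a simple Monte-Carlo experiment that conditions by rejection, i.e. it discards the runs violating counter ≤ k. The sketch below is illustrative only; the parameter values p = 0.8, c = 0.1, k = 5 and the sample size are arbitrary choices, and the printed estimate should agree with the closed form up to sampling error.

import random

def closed_form(p, c, k):
    # Closed-form value of (1) derived in Appendix K.
    return ((1 - c) * (1 - p) / (1 - p * (1 - c))
            * (1 - (p * (1 - c)) ** k) / (1 - p ** k))

def estimate(p, c, k, accepted_runs, rng):
    accepted = not_intercepted = 0
    while accepted < accepted_runs:
        intercepted = rng.random() < c          # initiation step
        counter, delivered = 1, False
        while not delivered:
            if rng.random() < p:                # forward once more
                counter += 1
                if rng.random() < c:
                    intercepted = True
            else:                               # relay to the server
                delivered = True
        if counter <= k:                        # conditioning by rejection
            accepted += 1
            not_intercepted += (not intercepted)
    return not_intercepted / accepted

rng = random.Random(0)
p, c, k = 0.8, 0.1, 5
print(closed_form(p, c, k))              # about 0.771 for these parameters
print(estimate(p, c, k, 200_000, rng))   # agrees up to sampling error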
VII. DENOTATIONAL SEMANTICS FOR FULL cpGCL

In this section we argue why (under mild assumptions) it is not possible to come up with a denotational semantics in the style of conditional pre-expectation transformers (CPETs for short) for full cpGCL. To show this, it suffices to consider a simple fragment of cpGCL containing only assignments, observations, probabilistic and non-deterministic choices. Let x be the only program variable that can be written or read in this fragment. We denote this fragment by cpGCL−.
Assume D is some appropriate domain for representing conditional expectations of the program variable x with respect to some fixed initial state σ0 and let ⟦·⟧ : D → R ∪ {⊥} be an interpretation function such that for any d ∈ D we have that ⟦d⟧ is equal to the (possibly undefined) conditional expected value of x.

Definition VII.1 (Inductive CPETs). A CPET is a function cwp∗ : cpGCL− → D such that for any P ∈ cpGCL−, ⟦cwp∗[P]⟧ = CExpRew^{R_{σ0}^x ⟦P⟧}(♦⟨sink⟩ | ¬♦↯). cwp∗ is called inductive, if there exists some function K : D × [0, 1] × D → D such that for any P1, P2 ∈ cpGCL−,

    cwp∗[{P1} [p] {P2}] = K(cwp∗[P1], p, cwp∗[P2]),

and some function N : D × D → D with

    cwp∗[{P1} □ {P2}] = N(cwp∗[P1], cwp∗[P2]),

where ∀d1, d2 ∈ D : N(d1, d2) ∈ {d1, d2}.

This definition suggests that the conditional pre-expectation of {P1} [p] {P2} is determined only by the conditional pre-expectation of P1, the conditional pre-expectation of P2, and the probability p. Furthermore the above definition suggests that the conditional pre-expectation of {P1} □ {P2} is also determined by the conditional pre-expectation of P1 and the conditional pre-expectation of P2 only. Consequently, the non-deterministic choice can be resolved by replacing it either by P1 or P2. While this might seem like a strong limitation, the above definition is compatible with the interpretation of non-deterministic choice as demonic choice: The choice is deterministically driven towards the worst option. The requirement N(d1, d2) ∈ {d1, d2} is also necessary for interpreting non-deterministic choice as an abstraction where implementational details are not important.

As we assume a fixed initial state and a fixed post-expectation, the non-deterministic choice turns out to be deterministic once the pre-expectations of P1 and P2 are known. Under the above assumptions (which do apply to the wp and wlp transformers) we claim:

Theorem VII.1. There exists no inductive CPET.

Proof. The proof goes by contradiction. Consider the program P = {P1} [1/2] {P5} with

    P1   = x := 1
    P2   = x := 2
    P2+ε = x := 2 + ε
    P4   = {observe false} [1/2] {P2+ε}
    P5   = {P2} □ {P4}

where ε > 0. A schematic depiction of the RMDP R_{σ0}^x ⟦P⟧ is given in Figure 4. Assume there exists an inductive CPET cwp∗ over some appropriate domain D. Then,

    cwp∗[P1] = d1, with ⟦d1⟧ = 1
    cwp∗[P2] = d2, with ⟦d2⟧ = 2
    cwp∗[P2+ε] = d2+ε, with ⟦d2+ε⟧ = 2 + ε
    cwp∗[observe false] = of, with ⟦of⟧ = ⊥

for some appropriate d1, d2, d2+ε, of ∈ D. By Definition VII.1, cwp∗ being inductive requires the existence of a function K, such that

    cwp∗[P4] = K(cwp∗[observe false], 1/2, cwp∗[P2+ε]) = K(of, 1/2, d2+ε).

In addition, there must be an N with:

    cwp∗[P5] = N(cwp∗[P2], cwp∗[P4]) = N(d2, K(of, 1/2, d2+ε)).

Since P4 is a probabilistic choice between an infeasible branch and P2+ε, the expected value for x has to be rescaled to the feasible branch. Hence P4 yields ⟦cwp∗[P4]⟧ = 2 + ε, whereas ⟦cwp∗[P2]⟧ = 2. Thus:

    ⟦d2⟧ ≠ ⟦K(of, 1/2, d2+ε)⟧        (2)

As non-deterministic choice is demonic, we have:

    cwp∗[P5] = N(d2, K(of, 1/2, d2+ε)) = d2        (3)

As N(cwp∗[P2], cwp∗[P4]) ∈ {cwp∗[P2], cwp∗[P4]} we can resolve non-determinism in P by either rewriting P to {P1} [1/2] {P2}, which gives

    ⟦cwp∗[{P1} [1/2] {P2}]⟧ = 3/2,

or we rewrite P to {P1} [1/2] {P4}, which gives

    ⟦cwp∗[{P1} [1/2] {P4}]⟧ = (4 + ε)/3.

For a sufficiently small ε the second option should be preferred by a demonic scheduler. This, however, suggests:

    cwp∗[P5] = N(d2, K(of, 1/2, d2+ε)) = K(of, 1/2, d2+ε).

Together with Equality (3) we get d2 = K(of, 1/2, d2+ε), which implies ⟦d2⟧ = ⟦K(of, 1/2, d2+ε)⟧. This is a contradiction to Inequality (2).

[Figure 4 appears here: a schematic depiction of the RMDP R_{σ0}^x ⟦P⟧. From ⟨P⟩ a probabilistic branch with probability 1/2 each leads to ⟨P1⟩ (reward 1) and ⟨P5⟩; from ⟨P5⟩ a non-deterministic branch leads to ⟨P2⟩ (reward 2) and ⟨P4⟩; from ⟨P4⟩ a probabilistic branch with probability 1/2 each leads to the observation violation and to ⟨P2+ε⟩ (reward 2 + ε).]
Fig. 4. Schematic depiction of the RMDP R_{σ0}^x ⟦P⟧.

As an immediate corollary of Theorem VII.1 we obtain the following statement:

Corollary VII.2. We cannot extend the cwp rules in Figure 2 for non-deterministic programs such that Theorem V.6 extends to full cpGCL.

This result is related to the fact that for minimizing conditional (reachability) probabilities in RMDPs positional, i.e. history-independent, schedulers are insufficient [26]. Intuitively speaking, if a history-dependent scheduler is required, this necessitates the inductive definition of cwp∗ to take the context of a statement (if any) into account. This conflicts with the principle of an inductive definition. Investigating the precise relationship with the result of [26] requires further study.
VIII. CONCLUSION AND FUTURE WORK

This paper presented an extensive treatment of semantic issues in probabilistic programs with conditioning. Major contributions are the treatment of non-terminating programs (both operationally and for weakest liberal pre-expectations), our results on combining non-determinism with conditioning, as well as the presented program transformations. We firmly believe that a thorough understanding of these semantic issues provides a main cornerstone for enabling automated analysis techniques such as loop invariant synthesis [16], [27], program analysis [28] and model checking [22] for the class of probabilistic programs with conditioning. Future work consists of investigating conditional invariants and a further investigation of non-determinism in combination with conditioning.

ACKNOWLEDGMENT

This work was supported by the Excellence Initiative of the German federal and state government. Moreover, we would like to thank Pedro d'Argenio and Tahiry Rabehaja for the valuable discussions preceding this paper.

REFERENCES

[1] N. D. Goodman and A. Stuhlmüller, The Design and Implementation of Probabilistic Programming Languages. Electronic, 2014, http://dippl.org.
[2] A. D. Gordon, T. A. Henzinger, A. V. Nori, and S. K. Rajamani, "Probabilistic programming," in Proc. of FOSE. ACM Press, 2014, pp. 167–181.
[3] G. Barthe, B. Köpf, F. Olmedo, and S. Z. Béguelin, "Probabilistic relational reasoning for differential privacy," ACM Trans. Program. Lang. Syst., vol. 35, no. 3, p. 9, 2013.
[4] N. D. Goodman, V. K. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum, "Church: a language for generative models," in Proc. of UAI. AUAI Press, 2008, pp. 220–229.
[5] B. Paige and F. Wood, "A compilation target for probabilistic programming languages," in Proc. of ICML, vol. 32. JMLR.org, 2014, pp. 1935–1943.
[6] A. D. Gordon, T. Graepel, N. Rolland, C. V. Russo, J. Borgström, and J. Guiver, "Tabular: a schema-driven probabilistic programming language," in Proc. of POPL. ACM Press, 2014, pp. 321–334.
[7] A. V. Nori, C.-K. Hur, S. K. Rajamani, and S. Samuel, "R2: An efficient MCMC sampler for probabilistic programs," in Proc. of AAAI. AAAI Press, 2014.
[8] D. Kozen, "Semantics of probabilistic programs," J. Comput. Syst. Sci., vol. 22, no. 3, pp. 328–350, 1981.
[9] A. McIver and C. Morgan, Abstraction, Refinement and Proof for Probabilistic Systems. Springer, 2004.
[10] F. Gretz, J.-P. Katoen, and A. McIver, "Operational versus weakest pre-expectation semantics for the probabilistic guarded command language," Perform. Eval., vol. 73, pp. 110–132, 2014.
[11] C. Jones and G. D. Plotkin, "A probabilistic powerdomain of evaluations," in Logic in Computer Science. IEEE Computer Society, 1989, pp. 186–195.
[12] V. Gupta, R. Jagadeesan, and V. A. Saraswat, "Probabilistic concurrent constraint programming," in Concurrency Theory, ser. LNCS, vol. 1243. Springer, 1997, pp. 243–257.
[13] D. S. Scott, "Stochastic λ-calculi: An extended abstract," J. Applied Logic, vol. 12, no. 3, pp. 369–376, 2014.
[14] C.-K. Hur, A. V. Nori, S. K. Rajamani, and S. Samuel, "Slicing probabilistic programs," in Proc. of PLDI. ACM Press, 2014, pp. 133–144.
[15] A. Sampson, P. Panchekha, T. Mytkowicz, K. S. McKinley, D. Grossman, and L. Ceze, "Expressing and verifying probabilistic assertions," in Proc. of PLDI. ACM Press, 2014, p. 14.
[16] A. Chakarov and S. Sankaranarayanan, "Expectation invariants for probabilistic program loops as fixed points," in Proc. of SAS, ser. LNCS, vol. 8723. Springer, 2014, pp. 85–100.
[17] M. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, 1994.
[18] M. K. Reiter and A. D. Rubin, "Crowds: Anonymity for web transactions," ACM Trans. Inf. Syst. Secur., vol. 1, no. 1, pp. 66–92, 1998.
[19] E. W. Dijkstra, A Discipline of Programming. Prentice Hall, 1976.
[20] A. McIver and C. Morgan, "Partial correctness for probabilistic demonic programs," Theoretical Computer Science, vol. 266, no. 1–2, pp. 513–541, 2001.
[21] C. Baier and J.-P. Katoen, Principles of Model Checking. MIT Press, 2008.
[22] C. Baier, J. Klein, S. Klüppelholz, and S. Märcker, "Computing conditional probabilities in Markovian models efficiently," in Proc. of TACAS, ser. LNCS, vol. 8413. Springer, 2014, pp. 515–530.
[23] G. Nelson, "A generalization of Dijkstra's calculus," ACM Trans. Program. Lang. Syst., vol. 11, no. 4, pp. 517–561, 1989.
[24] G. Claret, S. K. Rajamani, A. V. Nori, A. D. Gordon, and J. Borgström, "Bayesian inference using data flow analysis," in Proc. of ESEC/FSE. ACM Press, 2013, pp. 92–102.
[25] F. Gretz, J.-P. Katoen, and A. McIver, "Prinsys — on a quest for probabilistic loop invariants," in Proc. of QEST, ser. LNCS, vol. 8054. Springer, 2013, pp. 193–208.
[26] M. E. Andrés and P. van Rossum, "Conditional probabilities over probabilistic and nondeterministic systems," in Proc. of TACAS, ser. LNCS, vol. 4963. Springer, 2008, pp. 157–172.
[27] J.-P. Katoen, A. McIver, L. Meinicke, and C. C. Morgan, "Linear-invariant generation for probabilistic programs: automated support for proof-based methods," in Proc. of SAS, ser. LNCS, vol. 6337. Springer, 2010, pp. 390–406.
[28] P. Cousot and M. Monerau, "Probabilistic abstract interpretation," in Proc. of ESOP, ser. LNCS, vol. 7211. Springer, 2012, pp. 169–193.
[29] H. Bekić, "Definable operations in general algebras, and the theory of automata and flowcharts," in Programming Languages and Their Definition. Springer, 1984, pp. 30–55.
d) The Observation observe G.: For cwp we have:
APPENDIX
A. Continuity of wp and wlp
cwp[observe G](f, g)
= (f · χG , g · χG )
Lemma A.1 (Continuity of wp/wlp). Consider the extension
of wp and wlp to cpGCL given by
wp[observe G](f ) = χG · f
The argument for cwlp is completely analogous.
e) The Induction Hypothesis:: Assume in the following
that for two arbitrary but fixed programs P, Q ∈ cpGCL⊠ it
holds that both
cwp[P ](f, g) = wp[P ](f ), wlp[P ](g) , and
cwlp[P ](f ′ , g) = wlp[P ](f ′ ), wlp[P ](g) .
wlp[observe G](g) = χG · g .
Then for every P ∈ cpGCL the expectation transformers
wp[P ] : E → E and wlp[P ] : E≤1 → E≤1 are continuous
mappings over (E, ⊑) and (E≤1 , ⊒), respectively.
Proof. For proving the continuity of wp we have to show that
for any directed subset D ⊆ E we have
!
sup wp[P ](f ) = wp[P ]
sup f
f ∈D
f ∈D
.
Then for the induction step we have:
f) The Concatenation P ; Q.: For cwp we have:
(4)
cwp[P ; Q](f, g)
= cwp[P ](cwp[Q](f, g)
This can be shown by structural induction on P . All cases except for the observe statement have been covered in [10]. It
remains to show that Equality (4) holds for P = observe G:
= cwp[P ] wp[Q](f ), wlp[Q](g)
=
sup wp[observe G](f ) = sup χG · f
f ∈D
wp[observe G](f ), wlp[observe G](g)
=
=
f ∈D
wp[P ](wp[Q](f )), wlp[P ](wlp[Q](g))
wp[P ; Q](f ), wlp[P ; Q](g)
(I.H. on Q)
(I.H. on P )
The argument for cwlp is completely analogous.
g) The Conditional Choice ite (G) {P } {Q}.: For cwp
we have:
= χG · sup f
f ∈D
= wp[observe G](sup f )
f ∈D
cwp[ite (G) {P } {Q}](f, g)
= χG · cwp[P ](f, g) + χ¬G · cwp[Q](f, g)
= χG · wp[P ](f ), wlp[P ](g)
+ χ¬G · wp[Q](f ), wlp[Q](g)
The proof for the liberal transformer wlp is analogous.
B. Proof of Theorem V.1
Theorem V.1 (Decoupling of cwp/cwlp). For P ∈ cpGCL⊠ ,
f ∈ E, and f ′ , g ∈ E≤1 :
cwp[P ](f, g) = wp[P ](f ), wlp[P ](g)
cwlp[P ](f ′ , g) = wlp[P ](f ′ ), wlp[P ](g)
Proof. The proof of Theorem V.1 goes by induction over all
cpGCL⊠ programs. For the induction base we have:
a) The Effectless Program skip.: For cwp we have:
wp[skip](f ), wlp[skip](g)
cwp[abort](f, g) = (0, 1)
wp[abort](f ), wlp[abort](g)
=
Analogously for cwlp we have:
=
′
cwlp[abort](f , g) = (1, 1)
=
=
wp[ite (G) {P } {Q}](f ),
χG · wlp[P ](g) + χ¬G · wlp[Q](g)
wlp[ite (G) {P } {Q}](g)
cwp[{P } [p] {Q}](f, g)
= p · cwp[P ](f, g) + (1 − p) · cwp[Q](f, g)
= p · wp[P ](f ), wlp[P ](g)
+ (1 − p) · wp[Q](f ), wlp[Q](g)
The argument for cwlp is completely analogous.
b) The Faulty Program abort.: For cwp we have:
=
χG · wp[P ](f ) + χ¬G · wp[Q](f ),
The argument for cwlp is completely analogous.
h) The Probabilistic Choice {P } [p] {Q}.: For cwp we
have:
cwp[skip](f, g) = (f, g)
=
=
(I.H.)
wlp[abort](f ′ ), wlp[abort](g)
(I.H.)
p · wp[P ](f ) + (1 − p) · wp[Q](f ),
p · wlp[P ](g) + (1 − p) · wlp[Q](g)
wp[{P } [p] {Q}](f ), wlp[{P } [p] {Q}](g)
The argument for cwlp is completely analogous.
i) The Loop while (G) {P }.: For cwp we have:
c) The Assignment x := E.: For cwp we have:
cwp[while (G) {P }](f, g)
= µ⊑,⊒ (X1 , X2 )• χG · cwp[P ](X1 , X2 ) + χ¬G · (f, g)
= µ⊑,⊒ (X1 , X2 )• χG · wp[P ](X1 ), wlp[P ](X2 )
cwp[x := E](f, g) = (f [x/E], g[x/E])
= wp[x := E](f ), wlp[x := E](g)
+ χ¬G · (f, g)
The argument for cwlp is completely analogous.
(I.H.)
= µ⊑,⊒ (X1 , X2 )• χG · wp[P ](X1 ) + χ¬G · f,
χG · wlp[P ](X2 ) + χ¬G · g
k) The Faulty Program abort.:
wp[abort](α · f + β · g)
=0
Now let H(X1 , X2 ) = χG · wp[P ](X1 ) + χ¬G · f, χG ·
wlp[P ](X2 ) + χ¬G · g and let H1 (X1 , X2 ) be the projection
of H(X1 , X2 ) to the first component and let H2 (X1 , X2 ) be
the projection of H(X1 , X2 ) to the second component.
Notice that the value of H1 (X1 , X2 ) does not depend on
X2 and that it is given by
H1 (X1 ,
= α · wp[abort](f ) + β · wp[abort](g)
l) The Assignment x := E.:
wp[x := E](α · f + β · g)
= (α · f + β · g)[x/E]
) = χG · wp[P ](X1 ) + χ¬G · f .
= α · f [x/E] + β · g[x/E]
= α · wp[x := E](f ) + β · wp[x := E](g)
By the continuity of wp (Lemma A.1) we can establish that
H1 is continuous. Analogously the value of H2 (X1 , X2 ) does
not depend on X1 and it is given by
m) The Observation observe G.:
wp[observe G](α · f + β · g)
H2 ( , X2 ) = χG · wlp[P ](X2 ) + χ¬G · g .
= χG · (α · f + β · g)
By the continuity of wlp (Lemma A.1) we can establish that
H2 is continuous.
As both H1 and H2 are continuous, we can apply Bekić’s
Theorem [29] which
tells us that the least fixed point of H is
c1 , X
c2 with
given as X
= α · χG · f + β · χG · g
= α · wp[observe G](f ) + β · wp[observe G](g)
n) The Concatenation P ; Q.:
wp[P ; Q](α · f + β · g)
c1 = µ⊑ X1 • H1 X1 , µ⊒ X2 • H2 (X1 , X2 )
X
= µ ⊑ X 1 • H1 X 1 ,
= wp[P ](wp[Q](α · f + β · g))
= µ⊑ X1 • χG · wp[P ](X1 ) + χ¬G · f
= wp[while (G) {P }](f )
and
c2 = µ⊒ X2 • H2 µ⊒ X1 • H1 (X1 , X2 ), X2
X
= µ ⊒ X 2 • H2 , X 2
cwp[while (G) {P }](f, g) =
=
c1 , X
c2
X
+ β · wp[P ](wp[Q](g))
= α · wp[P ; Q](f ) + β · wp[P ; Q](g)
(I.H. on P )
wp[ite (G) {P } {Q}](α · f + β · g)
= χG · wp[P ](α · f + β · g)
= ν ⊑ X2 • χG · wlp[P ](X2 ) + χ¬G · g
= wlp[while (G) {P }](g) ,
(I.H. on Q)
o) The Conditional Choice ite (G) {P } {Q}.:
= µ⊒ X2 • χG · wlp[P ](X2 ) + χ¬G · g
which gives us in total
= wp[P ](α · wp[Q](f ) + β · wp[Q](g))
= α · wp[P ](wp[Q](f ))
+ χ¬G · wp[Q](α · f + β · g)
= χG · (α · wp[P ](f ) + β · wp[P ](g))
+ χ¬G · (α · wp[Q](f ) + β · wp[Q](g))
= α · (χG · wp[P ](f ) + χ¬G · wp[Q](f ))
(I.H.)
+ β · (χG · wp[P ](g) + χ¬G · wp[Q](g))
= α · wp[ite (G) {P } {Q}](f )
wp[while (G) {P }](f ), wlp[while (G) {P }](f ) .
+ β · wp[ite (G) {P } {Q}](g)
The argument for cwlp is completely analogous.
p) The Probabilistic Choice {P } [p] {Q}.:
C. Linearity of wp
⊠
Lemma A.2 (Linearity of wp). For any P ∈ cpGCL ,
any post–expectations f, g ∈ E and any non–negative real
constants α, β,
wp[{P } [p] {Q}](α · f + β · g)
= p · wp[P ](α · f + β · g)
+ (1 − p) · wp[Q](α · f + β · g)
wp[P ](α · f + β · g) = α · wp[P ](f ) + β · wp[P ](g) .
= p · (α · wp[P ](f ) + β · wp[P ](g))
+ (1 − p) · (α · wp[Q](f ) + β · wp[Q](g))
Proof. The proof proceeds by induction on the structure of P .
j) The Effectless Program skip.:
= α · (p · wp[P ](f ) + (1 − p) · wp[Q](f ))
+ β · (p · wp[P ](g) + (1 − p) · wp[Q](g))
wp[skip](α · f + β · g)
= α·f +β·g
= α · wp[skip](f ) + β · wp[skip](g)
= α · wp[{P } [p] {Q}](f )
+ β · wp[{P } [p] {Q}](g)
(I.H.)
q) The Loop while (G) {P }.: The main idea of the
proof is to show that linearity holds for the n-th unrolling
of the loop and then use a continuity argument to show that
the property carries over to the loop.
The fact that linearity holds for the n–unrolling of the loop
is formalized by formula H n (0) = α·I n (0)+β ·J n (0), where
r) Proof of i): We do the proof for transformer cwp;
the proof for cwp is analogous. On view of Theorem V.1, the
monotonicity of cwp reduces to the monotonicity of wp which
follows immediately from its continuity (see Lemma A.1).
s) Proof of ii): Once again, on view of Theorem V.1, the
linearity of cwp follows from the linearity of wp, which we
prove in Lemma A.2.7
t) Proof of iii): Let us begin by proving that
cwp[P ](0) = 0. On account of Theorem V.1 this assertion
reduces to wp[P ](0) = 0, which has already been proved
for pGCL programs (see e.g. [9]). Therefore we only have
to deal with the case of observe statements and the claim
holds since wp[observe G](0) = χG ·0 = 0. Finally formula
cwlp[P ](1) = 1 follows immediately from Theorem V.1.
H(X) = χG · wp[P ](X) + χ¬G · (α · f + β · g)
I(X) = χG · wp[P ](X) + χ¬G · f
J(X) = χG · wp[P ](X) + χ¬G · g
We prove this formula by induction on n. The base case n = 0
is immediate. For the inductive case we reason as follows
H n+1 (0)
E. Proof of Lemma V.4 (i)
= H(H n (0))
= H(α · I n (0) + β · J n (0))
= χG · wp[P ](α · I (0) + β · J (0))
+ χ¬G · (α · f + β · g)
For proving Lemma V.4 (i) we rely on the fact that allowing
a bounded while–loop to be executed for an increasing number
of times approximates the behavior of an unbounded while–
loop. We first define bounded while–loops formally:
= χG · (α · wp[P ](I n (0)) + β · wp[P ](J n (0)))
+ χ¬G · (α · f + β · g)
(I.H. on P )
Definition A.1 (Bounded while–Loops). Let P ∈ pGCL.
Then we define:
(I.H. on n)
n
n
= α · (χG · wp[P ](I n (0)) + χ¬G · f )
+ β · (χG · wp[P ](J n (0)) + χ¬G · g)
while<0 (G) {P } , abort
while<k+1 (G) {P } , ite (G) {P k } {skip}
= α · I(I n (0)) + β · J(J n (0))
P k , P ; while<k (G) {P }
= α · I n+1 (0) + β · J n+1 (0)
We can now establish that by taking the supremum on the
bound k we obtain the full behavior of the unbounded while–
loop:
Now we turn to the proof of the main claim. We apply
the Kleene Fixed Point Theorem to deduce that the least fixed
points of H, I and J can be built by iteration from expectation
0 since the three transformers are continuous (due to the
continuity of wp established in Lemma A.1). Then we have
Lemma A.4. Let G be a predicate, P ∈ pGCL, and f ∈ E.
Then it holds that
sup wp[while<k (G) {P }](f ) = wp[while (G) {P }](f ) .
wp[while (G) {P }](α · f + β · g)
G
=
H n (0)
k∈N
Proof. For any predicate G, any program P ∈ pGCL, and any
expectation f ∈ E let
n
=
G
n
=α·
α · I n (0) + β · J n (0)
G
n
I n (0) + β ·
G
F (X) = χG · wp[P ](X) + χ¬G · f .
J n (0)
We first show by induction on k ∈ N that
n
wp[while<k (G) {P }](f ) = F k (0) .
= α · wp[while (G) {P }](f )
+ β · wp[while (G) {P }](g)
For the induction base we have k = 0. In that case we have
wp[while<0 (G) {P }](f )
= wp[abort](f )
D. Proof of Lemma V.3
Lemma V.3 (Elementary properties of cwp and cwlp). For
every P ∈ cpGCL⊠ with at least one feasible execution
(from every initial state), post–expectations f, g ∈ E and non–
negative real constants α, β:
= 0
= F 0 (0) .
As the induction hypothesis assume now that
i) f ⊑ g implies cwp[P ](f ) ⊑ cwp[P ](g) and likewise for
cwlp (monotonicity).
wp[while<k (G) {P }](f ) = F k (0)(f )
ii) cwp[P ](α · f + β · g) = α · cwp[P ](f ) + β · cwp[P ](g).
7 We cannot adopt the results from the original work [9] because their analysis is restricted to bounded expectations.
iii) cwp[P ](0) = 0 and cwlp[P ](1) = 1.
holds for some arbitrary but fixed k. Then for the induction
step we have
wp[while<k+1 (G) {P }](f )
habort, σi
hsink i
0
0
In this RMC we have Π := Paths(habort, σi, hsink i) = ∅.
Then we have for the expected reward:
= wp[P ; ite (G) {while<k (G) {P }} {skip}](f )
= (χG · wp[P ] ◦ wp[while<k (G) {P }]
f
ExpRewRσ JabortK (♦sink )
X
=
Pr(π̂) · r(π̂)
+ χ¬G · wp[skip])(f )
= χG · wp[P ](wp[while<k (G) {P }](f ))
π̂∈Π
+ χ¬G · wp[skip](f )
= χG · wp[P ](F k (0)) + χ¬G · f
=
(I.H.)
X
Pr(π̂) · r(π̂)
π̂∈∅
= F k+1 (0)(f ) .
= 0
= 0(σ)
We have by now established that
= wp[abort](f )(σ)
wp[while<k (G) {P }](f ) = F k (0)
The Assignment x := E. The RMC for this program is of
the following form:
holds for every k ∈ N. Ergo, we can also establish that
sup wp[while<k (G) {P }](f )
k∈N
= sup F k (0)
k∈N
= µ X. F (X)
hx := E, σi
h↓, σ[E/x]i
hsink i
0
f (σ[E/x])
0
In this RMC we have Π := Paths(hx := E, σi, hsink i) =
{π̂1 } with π̂1 = hx := E, σi → h↓, σ[E/x]i → hsink i. Then
we have for the expected reward:
= wp[while (G) {P }](f ) .
With Lemma A.4 in mind, we can now restate and prove
Lemma V.4 (i):
f
ExpRewRσ Jx:=EK (♦sink )
X
=
Pr(π̂) · r(π̂)
Lemma V.4 (i). For P ∈ cpGCL⊠ , f ∈ E, g ∈ E≤1 , and
σ ∈ S:
π̂∈Π
f
ExpRewRσ JP K (♦hsink i) = wp[P ](f )(σ)
= Pr(π̂1 ) · r(π̂1 )
Proof. The proof goes by induction over all cpGCL⊠ programs. For the induction base we have:
The Effectless Program skip. The RMC for this program
is of the following form:8
= 1 · f (σ[E/x])
= f (σ[E/x])
hskip, σi
h↓, σi
hsink i
0
f (σ)
0
= f [E/x](σ)
= wp[x := E](f )(σ)
The Observation observe G. For this program there are
two cases: In Case 1 we have σ |= G, so we have χG (σ) = 1.
The RMC in this case is of the following form:
In the above RMC we have Π := Paths(hskip, σi, hsink i) =
{π̂1 } with π̂1 = hskip, σi → h↓, σi → hsink i. Then we have
for the expected reward:
Rfσ JskipK
ExpRew
(♦sink )
X
=
Pr(π̂) · r(π̂)
hobserve G, σi
h↓, σi
hsink i
0
f (σ)
0
In this RMC we have Π := Paths(hobserve G, σi, hsink i)
= {π̂1 } with π̂1 = hobserve G, σi → h↓, σi → hsink i.
Then we have for the expected reward:
π̂∈Π
= Pr(π̂1 ) · r(π̂1 )
= 1 · f (σ)
f
ExpRewRσ Jobserve
X
=
Pr(π̂) · r(π̂)
= f (σ)
= wp[skip](f )(σ)
GK
(♦sink )
π̂∈Π
= Pr(π̂1 ) · r(π̂1 )
The Faulty Program abort. The RMC for this program
is of the following form:
= 1 · f (σ)
= χG (σ) · f (σ)
8 If transitions have probability 1, we omit this in our figures. Moreover, all states—with the exception of ⟨sink⟩—are left out if they are not reachable from the initial state.
= (χG · f )(σ)
= wp[observe G](f )(σ)
In Case 2 we have σ 6|= G, so we have χG (σ) = 0. The RMC
in this case is of the following form:
hobserve G, σi
h i
hsink i
0
0
0
= wp[P ; Q](f )
The Conditional Choice ite (G) {P } {Q}. For this program there are two cases: In Case 1 we have σ |= G, so we
have χG (σ) = 1 and χ¬G (σ) = 0. The RMC in this case is
of the following form:
In this RMC we have Π := Paths(hobserve G, σi, hsink i)
= {π̂1 } with π̂1 = hobserve G, σi → h i → hsink i. Then
for the expected reward we also have:
f
ExpRewRσ Jobserve
X
=
Pr(π̂) · r(π̂)
GK
hite (G) {P } {Q} G, σi
hP, σi
0
0
(♦sink )
..
.
π̂∈Π
In this RMC every path in Paths(hite (G) {P } {Q}, σi,
hsink i) starts with hite (G) {P } {Q}, σi → hP, σi → · · · .
As the state hite (G) {P } {Q}, σi collects zero reward, the
expected reward of the above RMC is equal to the expected
reward of the following RMC:
= Pr(π̂1 ) · r(π̂1 )
= 1·0
= 0
= 0 · f (σ)
= χG (σ) · f (σ)
= (χG · f )(σ)
hP, σi
0
= wp[observe G](f )(σ)
But the above RMC is exactly Rfσ JP K for which the expected
reward is known by the induction hypothesis. So we have
The Concatenation P ; Q. For this program the RMC is of
the following form:
′
hP ; Q, σi
h↓; Q, σ i
hQ, σ i
0
0
0
h↓; Q, σ ′′ i
hQ, σ ′′ i
0
0
..
.
f
ExpRewRσ Jite (G) {P } {Q}K (♦sink )
...
′
f
= ExpRewRσ JP K (♦sink )
= wp[P ](f )(σ)
= 1 · wp[P ](f )(σ) + 0 · wp[Q](f )(σ)
...
In Case 2 we have σ 6|= G, so we have χG (σ) = 0 and
χ¬G (σ) = 1. The RMC in this case is of the following form:
hite (G) {P } {Q} G, σi
0
0
ExpRew
..
.
(♦sink )
In this RMC every path in Paths(hite (G) {P } {Q}, σi,
hsink i) starts with hite (G) {P } {Q}, σi → hQ, σi → · · · .
As the state hite (G) {P } {Q}, σi collects zero reward, the
expected reward of the above RMC is equal to the expected
reward of the following RMC:
h↓, σ ′′ i
..
.
f
ExpRewRσ′′ JQK (♦sink )
f
λτ.ExpRewRτ JQK (♦sink )
hQ, σi
0
But the above RMC is exactly Rσ
JP K for
which the expected reward is also known by the induction
hypothesis. So we have
ExpRew
= ExpRew
hQ, σi
h↓, σ ′ i
Rfσ′ JQK
Rfσ JP ; QK
wp[Q](f )
JP K
= ExpRewRσ
(♦sink )
= wp[P ](wp[Q](f ))(σ)
...
But the above RMC is exactly Rfσ JQK for which the expected
reward is known by the induction hypothesis. So we also have
(♦sink )
f
Rτ JQK (♦sink )
Rλτ.ExpRew
JP K
σ
(I.H.)
= χG (σ) · wp[P ](f )(σ) + χ¬G (σ) · wp[Q](f )(σ)
= wp[ite (G) {P } {Q}](f )(σ) .
In this RMC every path in Paths(hP ; Q, σi, hsink i) starts
with hP ; Q, σi, eventually reaches h↓; Q, σ ′ i, and then immediately after that reaches hQ, σ ′ i which is the initial
state of Rfσ′ JQK for which the expected reward is given by
f
ExpRewRσ′ JQK (♦sink ). By this insight we can transform the
above RMC into the RMC with equal expected reward below:
hP, σi
0
...
f
ExpRewRσ Jite (G) {P } {Q}K (♦sink )
(♦sink )
f
= ExpRewRσ JQK (♦sink )
= wp[Q](f )(σ)
(I.H. on Q)
(I.H. on P )
(I.H.)
f
= ExpRewRσ Jwhile (G) {P }K (♦sink ) .
= 0 · wp[P ](f )(σ) + 1 · wp[Q](f )(σ)
= χG (σ) · wp[P ](f )(σ) + χ¬G (σ) · wp[Q](f )(σ)
While the above is intuitively evident, it is a tedious and technically involved task to prove it. Herefore we just provide an
intuition thereof: For showing (5) ≤ (6), we know that every
path in the RMDP Rfσ Jwhile<k (G) {P }K either terminates
properly or is prematurely aborted (yielding 0 reward) due
to the fact that the bound of less than k loop iterations was
reached. The RMDP Rfσ Jwhile (G) {P }K for the unbounded
while–loop does not prematurely abort executions, so left–
hand–side is upper bounded by the right–hand–side of the
equation. For showing (5) ≥ (6), we know that a path that
collects positive reward is necessarily finite. Therefore there
exists some k ∈ N such that Rfσ Jwhile<k (G) {P }K includes
this path. Taking the supremum over k we eventually include
every path in Rfσ Jwhile (G) {P }K that collects positive reward.
= wp[ite (G) {P } {Q}](f )(σ) .
The Probabilistic Choice {P } [p] {Q}. For this program
the RMC is of the following form:
p
h{P } [p] {Q}, σi
1−p
0
hP, σi
0
...
hQ, σi
0
...
In this RMC every path in Paths(h{P } [p] {Q}, σi, hsink i)
starts with h{P } [p] {Q}, σi and immediately after that
reaches hP, σi with probability p or hQ, σi with probability
1 − p. hP, σi is the initial state of Rfσ JP K and hQ, σi is the
initial state of Rfσ JQK. By this insight we can transform the
above RMC into the RMC with equal expected reward below:
F. Proof of Lemma V.4 (ii)
Lemma V.4 (ii). For P ∈ cpGCL⊠ , f ∈ E, g ∈ E≤1 , and
σ ∈ S:
g
p
LExpRewRσ JP K (♦hsink i) = wlp[P ](g)(σ)
h{P } [p] {Q}, σi
hP, σi
f
ExpRewRσ JP K (♦sink )
1−p
0
Proof. The proof goes by induction over all cpGCL⊠ programs. For the induction base we have: The Effectless Program skip. The RMC for this program is of the following
form:
hQ, σi
f
ExpRewRσ JQK (♦sink )
The expected reward of the above RMC is given by p ·
f
f
ExpRewRσ JP K (♦sink ) + (1 − p) · ExpRewRσ JQK (♦sink ), so
in total we have for the expected reward:
f
hsink i
0
f (σ)
0
g
f
+ (1 − p) · ExpRewRσ JQK (♦sink )
= p · wp[P ](f )(σ) + (1 − p) · wp[Q](f )(σ)
π̂∈Π
(I.H.)
= Pr(π̂) · r(π̂) + 0
= 1 · g(σ)
= wp[{P } [p] {Q}](f ) .
The Loop while (G) {Q}. By Lemma A.4 we have
= g(σ)
= wlp[skip](g)(σ)
wp[while (G) {P }](f ) = sup wp[while<k (G) {P }](f )
k∈N
The Faulty Program abort. The RMC for this program
is of the following form:
and as while<k (G) {P } is a purely syntactical construct
(made up from abort, skip, conditional choice, and P )
we can (using what we have already established on abort,
skip, conditional choice, and using the induction hypothesis
on P ) also establish that
wp[while (G) {P }](f )
<k
h↓, σi
LExpRewRσ JskipK (♦sink )
X
=
Pr(π̂) · r(π̂) + Pr(¬♦hsink i)
f
= p · ExpRewRσ JP K (♦sink )
= sup ExpRewRσ Jwhile
hskip, σi
In this RMC we have Π := Paths(hskip, σi, hsink i) = {π̂1 }
with π̂1 = hskip, σi → h↓, σi → hsink i. Then we have for
the liberal expected reward:
ExpRewRσ J{P } [p] {Q}K (♦sink )
f
(G) {P }K
habort, σi
hsink i
0
0
In this RMC we have Π := Paths(habort, σi, hsink i) = ∅.
Then we have for the liberal expected reward:
(♦sink ) .
k∈N
g
ExpRewRσ JabortK (♦sink )
X
=
Pr(π̂) · r(π̂) + Pr(¬♦hsink i)
It is now left to show that
f
(6)
<k
sup ExpRewRσ Jwhile
(G) {P }K
(♦sink )
(5)
π̂∈Π
k∈N
=
X
Pr(π̂) · r(π̂) + 1
In this RMC we have Π := Paths(hobserve G, σi, hsink i)
= {π̂1 } with π̂1 = hobserve G, σi → h i → hsink i. Then
we have for the liberal expected reward:
π̂∈∅
= 0+1
= 1
= 1(σ)
g
LExpRewRσ Jobserve GK (♦sink )
X
=
Pr(π̂) · r(π̂) + Pr(¬♦hsink i)
= wlp[abort](g)(σ)
π̂∈Π
= Pr(π̂1 ) · r(π̂1 ) + 0
= 1·0
The Assignment x := E. The RMC for this program is of
the following form:
hx := E, σi
h↓, σ[E/x]i
hsink i
0
f (σ[E/x])
0
= 0
= 0 · g(σ)
= χG (σ) · g(σ)
= (χG · g)(σ)
= wlp[observe G](g)(σ)
In this RMC we have Π := Paths(hx := E, σi, hsink i) =
{π̂1 } with π̂1 = hx := E, σi → h↓, σ[E/x]i → hsink i. Then
we have for the liberal expected reward:
The Concatenation P ; Q. For this program the RMC is of
the following form:
Rgσ Jx:=EK
LExpRew
(♦sink )
X
=
Pr(π̂) · r(π̂) + Pr(¬♦hsink i)
diverge. . .
π̂∈Π
diverge. . .
= Pr(π̂1 ) · r(π̂1 ) + 0
= 1 · g(σ[E/x])
= g(σ[E/x])
= g[E/x](σ)
= wlp[x := E](g)(σ)
hP ; Q, σi
0
The Observation observe G. For this program there are
two cases: In Case 1 we have σ |= G, so we have χG (σ) = 1.
The RMC in this case is of the following form:
hobserve G, σi
h↓, σi
hsink i
0
f (σ)
0
h↓; Q, σ ′ i
0
hQ, σ ′ i
0
...
h↓; Q, σ ′′ i
0
hQ, σ ′′ i
0
...
In this RMC every path in Paths(hP ; Q, σi, hsink i) starts
with hP ; Q, σi, eventually reaches h↓; Q, σi, and then immediately after that reaches hQ, σi which is the initial state
of Rgσ JQK. Every diverging path either diverges because the
program P diverges or because the program Q diverges. If
we attempt to make the RMC smaller (while preserving the
liberal expected reward) by cutting it off at states of the form
h↓; Q, τ i, we haveg to assign to them the liberal expected
reward LExpRewRτ JQK (♦sink ) in order to not loose the non–
termination probability caused by Q. By this insight we can
now transform the above RMC into the RMC with equal liberal
expected reward below:
In this RMC we have Π := Paths(hobserve G, σi, hsink i)
= {π̂1 } with π̂1 = hobserve G, σi → h↓, σi → hsink i.
Then we have for the liberal expected reward:
g
LExpRewRσ Jobserve GK (♦sink )
X
=
Pr(π̂) · r(π̂) + Pr(¬♦hsink i)
π̂∈Π
= Pr(π̂1 ) · r(π̂1 ) + 0
= 1 · g(σ)
diverge. . .
= χG (σ) · g(σ)
= (χG · g)(σ)
= wlp[observe G](g)(σ)
hP, σi
f
In Case 2 we have σ 6|= G, so we have χG (σ) = 0. The RMC
in this case is of the following form:
hobserve G, σi
h i
hsink i
0
0
0
h↓, σ ′ i
0
LExpRewRσ′ JQK (♦sink )
h↓, σ ′′ i
f
LExpRewRσ′′ JQK (♦sink )
g
LExpRewRσ JQK (♦sink )
expected reward of the above RMC is equal to the expected
reward of the following RMC:
JP K for
But the above RMC is exactly Rσ
which the liberal expected reward is known by the induction
hypothesis. So we have for the liberal expected reward:
LExpRew
= LExpRew
Rgσ JP ; QK
(♦sink )
g
LExpRewRσ JQK (♦sink )
Rσ
JP K
But the above RMC is exactly Rgσ JQK for which the expected
reward is known by the induction hypothesis. A similar
argument can be applied to the probability of not eventually
reaching hsink i. So we also have for the liberal expected
reward:
(♦sink )
wlp[Q](g)
JP K
= LExpRewRσ
(♦sink )
= wlp[P ](wlp[Q](g))(σ)
...
hQ, σi
0
(I.H. on Q)
(I.H. on P )
= wlp[P ; Q](g) .
g
LExpRewRσ Jite (G) {P } {Q}K (♦sink )
The Conditional Choice ite (G) {P } {Q}. For this program there are two cases: In Case 1 we have σ |= G, so we
have χG (σ) = 1 and χ¬G (σ) = 0. The RMC in this case is
of the following form:
hite (G) {P } {Q} G, σi
g
= ExpRewRσ Jite (G) {P } {Q}K (♦sink )
g
+ PrRσ Jite (G) {P } {Q}K (¬♦hsink i)
g
hP, σi
= wlp[Q](g)(σ)
= 0 · wlp[P ](g)(σ) + 1 · wlp[Q](g)(σ)
0
0
The Probabilistic Choice {P } [p] {Q}. For this program
the RMC is of the following form:
As the state hite (G) {P } {Q}, σi collects zero reward, the
expected reward of the above RMC is equal to the expected
reward of the following RMC:
p
h{P } [p] {Q}, σi
...
1−p
0
But the above RMC is exactly Rgσ JP K for which the expected
reward is known by Lemma . A similar argument can be
applied to the probability of not eventually reaching hsink i.
So we have for the liberal expected reward:
g
= ExpRewRσ Jite (G) {P } {Q}K (♦sink )
g
+ PrRσ Jite (G) {P } {Q}K (¬♦hsink i)
g
= ExpRewRσ JP K (♦sink ) + PrRσ JP K (¬♦hsink i)
= wlp[P ](g)(σ)
(I.H.)
= 1 · wlp[P ](g)(σ) + 0 · wlp[Q](g)(σ)
= χG (σ) · wlp[P ](g)(σ) + χ¬G (σ) · wlp[Q](g)(σ)
h{P } [p] {Q}, σi
In Case 2 we have σ 6|= G, so we have χG (σ) = 0 and
χ¬G (σ) = 1. The RMC in this case is of the following form:
0
...
hQ, σi
0
...
p
= wlp[ite (G) {P } {Q}](g)(σ) .
hite (G) {P } {Q} G, σi
hP, σi
0
In this RMC every path in Paths(h{P } [p] {Q}, σi, hsink i)
starts with h{P } [p] {Q}, σi and immediately after that
reaches hP, σi with probability p or hQ, σi with probability
1 − p. hP, σi is the initial state of Rfσ JP K and hQ, σi is the
initial state of Rfσ JQK. The same holds for all paths that do not
eventually reach hsink i. By this insight we can transform the
above RMC into the RMC with equal liberal expected reward
below:
g
LExpRewRσ Jite (G) {P } {Q}K (♦sink )
g
(I.H.)
= χG (σ) · wlp[P ](g)(σ) + χ¬G (σ) · wlp[Q](g)(σ)
= wlp[ite (G) {P } {Q}](g)(σ) .
..
.
hP, σi
0
g
= ExpRewRσ JQK (♦sink ) + PrRσ JQK (¬♦hsink i)
1−p
0
hQ, σi
hP, σi
f
ExpRewRσ JP K (♦sink )
hQ, σi
f
ExpRewRσ JQK (♦sink )
0
The liberal expected reward of the above RMC is given by
f
f
p·LExpRewRσ JP K (♦sink )+(1−p)·LExpRewRσ JQK (♦sink ),
so in total we have for the liberal expected reward:
..
.
In this RMC every path in Paths(hite (G) {P } {Q}, σi,
hsink i) starts with hite (G) {P } {Q}, σi → hQ, σi → · · · .
As the state hite (G) {P } {Q}, σi collects zero reward, the
f
LExpRewRσ J{P } [p] {Q}K (♦sink )
f
= p · LExpRewRσ JP K (♦sink )
f
+ (1 − p) · LExpRewRσ JQK (♦sink )
= p · wlp[P ](f )(σ) + (1 − p) · wlp[Q](f )(σ)
I. Proof of Theorem VI.1
(I.H.)
Theorem VI.1 (Program Transformation Correctness). Let
P ∈ cpGCL⊠ admit at least one feasible run for every initial
state and T (P, 1) = (P̂ , ĥ). Then for any f ∈ E and g ∈ E≤1 ,
= wlp[{P } [p] {Q}](f ) .
The Loop while (G) {Q}.
The argument is dual to the case for the (non–liberal)
expected reward.
wp[P̂ ](f ) = cwp[P ](f ) and wlp[P̂ ](g) = cwlp[P ](g).
In view of Theorem V.1, the proof reduces to showing equations ĥ·wp[P̂ ](f ) = wp[P ](f ), ĥ·wlp[P̂ ](f ) = wlp[P ](f ) and
ĥ = wlp[P ](1), which follow immediately from the auxiliary
Lemma A.5 below by taking h = 1.
G. Proof of Lemma V.5
Lemma V.5. For P ∈ cpGCL⊠ , g ∈ E≤1 , and σ ∈ S:
g
PrRσ JP K (¬♦ ) = wlp[P ](1)(σ) .
Lemma A.5. Let P ∈ cpGCL⊠ . Then for all expectations
f ∈ E and g, h ∈ E≤1 , it holds
Proof. First, observe that paths on reaching Xor immediately move to the state hsink i. Moreover, all paths that never
visit either (a) visit a terminal X–state (which are the only
states that can possibly collect positive reward) or (b) diverge
and never reach hsink i and therefore neither reach Xnor .
Furthermore the set of “(a)–paths” and the set of “(b)–paths”
are disjoint. Thus:
Pr
= Pr
Rfσ JP K
Rfσ JP K
(♦X) + Pr
and by assigning reward one to every X–state, and zero to
all other states, we can turn the probability measure into an
expected reward, yielding
g
= ExpRewRσ JP K (♦X) + PrRσ JP K (¬♦sink )
= LExpRew
g
(9)
h = wlp[skip](h).
(♦sink )
= wlp[P ](1)
ĥ = wlp[P ](h),
and
= ExpRewRσ JP K (♦sink ) + PrRσ JP K (¬♦sink )
R1
σ JP K
(8)
h · wp[skip](f ) = h · f = wp[skip](h · f )
As every path that reaches sink over a –state cumulates zero
reward, we finally get:
1
ĥ · wlp[P̂ ](g) = wlp[P ](h · g)
Proof. We prove only equations (7) and (9) since (8) follows
a reasoning similar to (7). The proof proceeds by induction
on the structure of P . In the remainder we will refer to the
inductive hypothesis about (7) as to IH1 and to the inductive
hypothesis about (9) as to IH2 .
The Effectless Program skip. We have T (skip, h) =
(skip, h) and the statement follows immediately since
(¬♦sink )
1
(7)
where (P̂ , ĥ) = T (P, h).
(¬♦ )
Rfσ JP K
ĥ · wp[P̂ ](f ) = wp[P ](h · f )
The Faulty Program abort. We have T (abort, h) =
(abort, 1) and the statement follows immediately since
(Lemma V.4)
1 · wp[abort](f ) = 1 · 0 = wp[abort](h · f )
H. Proof of Theorem V.6
and
Theorem V.6 (Correspondence theorem). For P ∈ cpGCL⊠ ,
f ∈ E, g ∈ E≤1 and σ ∈ S,
1 = wlp[abort](h).
f
CExpRewRσ JP K (♦sink | ¬♦ ) = cwp[P ](f )(σ)
CLExpRewRσ JP K (♦sink | ¬♦ ) = cwlp[P ](g)(σ) .
The Assignment x := E. We have T (x := E, h) = (x :=
E, h[x/E]) and the statement follows immediately since
Proof. We prove only the first equation. The proof of the
second equation goes along the same arguments.
h[x/E] · wp[x := E](f ) = h[x/E] · f [x/E]
= (h · f )[x/E] = wp[x := E](h · f )
g
f
CExpRewRσ JP K (♦sink | ¬♦ )
and
f
=
ExpRewRσ JP K (♦sink )
h[x/E] = wlp[x := E](h).
f
PrRσ JP K (¬♦ )
wp[P ](f )
=
wlp[P ](1)
cwp1 [P ](f, 1)
=
cwp2 [P ](f, 1)
= cwp[P ](f )
The Observation observe G. We have T (observe G, h)
= (skip, χG · h) and the statement follows immediately since
(Lemmas V.4, V.5)
(Theorem V.1)
χG · h · wp[skip](f ) = χG · h · f
= wp[observe G](h · f )
and
of the lemma we need to make a case distinction between
those states that are mapped by ĥ to a positive number and
those that are mapped to 0. In the first case, i.e. if ĥ(s) > 0,
we reason as follows:
ĥ(s) · wp[{P̂ } φ·ĥP/ĥ {Q̂}](f )(s)
= ĥ(s) · φ·ĥĥP (s) · wp[P̂ ](f )(s)
(1−φ)·ĥQ
+
(s)
·
wp[
Q̂](f
)(s)
ĥ
χG · h = wlp[observe G](h).
The Concatenation P ; Q. Let (Q̂, ĥQ ) = T (Q, h) and
(P̂ , ĥP ) = T (P, ĥQ ). In view of these definitions, we obtain
T (P ; Q, h) = (P̂ ; Q̂, ĥP ).
Now
ĥP · wp[P̂ ; Q̂](f )
= ĥP · wp[P̂ ] wp[Q̂](f )
= wp[P ](ĥQ · wp[Q̂](f ))
= wp[P ](wp[Q](h · f ))
= φ(s) · ĥP (s) · wp[P̂ ](f )(s)
+ (1 − φ)(s) · ĥQ (s) · wp[Q̂](f )(s)
= φ(s) · wp[P ](h · f )(s)
+ (1 − φ)(s) · wp[Q](h · f )(s)
(IH1 on P )
(IH1 on Q)
= wp[{P } [φ] {Q}](h · f )(s)
= wp[P ; Q](h · f )
while in the second case, i.e. if ĥ(s) = 0, the claim holds
because we will have wp[{P } [φ] {Q}](h · f )(s) = 0. To see
this note that if ĥ(s) = 0 then either φ(s) = 0 ∧ ĥQ (s) = 0 or
φ(s) = 1 ∧ ĥP (s) = 0 holds. Now assume we are in the first
case (an analogous argument works for the other case); using
the IH1 over Q we obtain
and
ĥP = wlp[P ](ĥQ )
= wlp[P ](wlp[Q](h))
(IH1 )
(IH2 on P )
(IH2 on Q)
= wlp[P ; Q](h).
The Conditional Choice ite (G) {P } {Q}. Let (P̂ , ĥP ) =
T (P, h) and (Q̂, ĥQ ) = T (Q, h). On view of these definitions,
we obtain
wp[{P } [0] {Q}](h · f )(s) = wp[Q](h · f )(s)
= ĥQ (s) · wp[Q](f )(s) = 0.
The proof of the second claim of the lemma is straightforward:
T (ite (G) {P } {Q}, h) =
(ite (G) {P̂ } {Q̂}, χG · ĥP + χ¬G · ĥQ ).
φ · ĥP + (1 − φ) · ĥQ
= φ · wlp[P ](h) + (1 − φ) · wlp[Q](h)
Now
(χG · ĥP + χ¬G · ĥQ )
= wlp[{P } [φ] {Q}](h).
· wp[ite (G) {P̂ } {Q̂}](f )
The Loop while (G) {Q}. Let ĥ = ν F where F (X) =
χG ·TP (X)+χ¬G ·h and TP (·) is a short–hand for π2 ◦T (P, ·).
Now if we let (P̂ , θ) = T (P, ĥ) by definition of T we obtain
= (χG · ĥP + χ¬G · ĥQ )
· (χG · wp[P̂ ](f ) + χ¬G · wp[Q̂](f ))
= χG · ĥP · wp[P̂ ](f ) + χ¬G · ĥQ · wp[Q̂](f )
= χG · wp[P ](h · f ) + χ¬G · wp[Q](h · f )
= wp[ite (G) {P } {Q}](h · f )
(IH2 )
T (while (G) {P }, h) = (while (G) {P̂ }, ĥ).
(IH1 )
The first claim of the lemma says that
ĥ · wp[while (G) {P̂ }](f ) = wp[while (G) {P }](h · f ).
and
Now if we let H(X) = χG · wp[P̂ ](X) + χ¬G · f and
I(X) = χG · wp[P ](X) + χ¬G · h · f , the claim can be
rewritten as ĥ · µ H = µ I and a straightforward argument
using the Kleene fixed point theorem (and the continuity of
wp established in Lemma A.1) shows that it is entailed by
formula ∀n• ĥ · H n (0) = I n (0). We prove the formula by
induction on n. The case n = 0 is trivial. For the inductive
case we reason as follows:
χG · ĥP + χ¬G · ĥQ
= χG · wlp[P ](h) + χ¬G · wlp[Q](h)
= wlp[ite (G) {P } {Q}](h)
(IH2 )
The Probabilistic Choice {P } [p] {Q}. Let (P̂ , ĥP ) =
T (P, h) and (Q̂, ĥQ ) = T (Q, h). On view of these definitions, we obtain
T ({P } [φ] {Q}, h) =
({P̂ } φ·ĥP/ĥ {Q̂}, φ · ĥP + (1 − φ) · ĥQ )
ĥ · H n+1 (0)
= F (ĥ) · H n+1 (0)
= (χG · TP (ĥ) + χ¬G · h) · H
with ĥ = φ · ĥP + (1 − φ) · ĥQ .
To prove the first claim
ĥ · wp[{P̂ } φ·ĥP/ĥ {Q̂}](f ) = wp[{P } [φ] {Q}](h · f )
(def. ĥ)
n+1
(0)
(def. F )
= (χG · TP (ĥ) + χ¬G · h)
· (χG · wp[P̂ ](H n (0)) + χ¬G · f )
(def. H)
= χG · TP (ĥ) · wp[P̂ ](H n (0))
+ χ¬G · h · f
=
(def. θ)
= χG · wp[P ](ĥ · H n (0)) + χ¬G · h · f
(IH1 on P)
= I(ĥ · H n (0))
(def. I)
=I
(0)
= F (ĥ)
(def. F )
= ĥ
(def. ĥ)
K. Detailed calculations for Section VI-D
We refer to the labels init and loop introduced in the
program P in Section VI-D. Further let body denote the
program in the loop’s body. For readability we abbreviate
the variable names delivered as del, counter as cntr and
intercepted as int. In the following we consider del and int
as boolean variables. In order to determine (1) we first start
with the numerator. This quantity is given by
(def. F )
= χG · wlp[P ](ν J) + χ¬G · h
= J(ν J)
(IH2 on P )
(def. J)
=νJ
(def. ν J)
J. Proof of Theorem VI.2
cwp[P ](f )(sI )
=
=
ExpRew
P
Pr
∞
X
1 − Pr
(12)
(13)
· f (π̂)
(14)
(♦ rerun)
′
f
PrRsI JP K (♦ rerun)i
i=0
=
RfsI JP ′ K
·
X
Φ(0) = [¬del] · wp[body](0) + [del ∧ cntr ≤ k ∧ ¬int]
X
f
π̂∈♦ sink ∩¬♦ rerun
∞
X
Rfs JP ′ K
π̂∈♦ sink ∩¬♦ rerun i=0
′
PrRsI JP K (π̂) · f (π̂)
Pr
(23)
where Φn denotes the n-fold application of Φ. Equation (21)
is given directly by the semantics of sequential composition
of cpGCL commands. In the next line we apply the definition
of loop semantics in terms of the least fixed point. Finally,
(23) is given by the Kleene fixed point theorem as a solution
to the fixed point equation in (22). We can explicitly find the
supremum by considering the expression for several n and
deducing a pattern. Let Φ(F ) = [¬del] · wp[body](F ) + [del ∧
cntr ≤ k ∧ ¬int]. Then we have
(¬♦ rerun)
Rfs JP ′ K
I
(π̂)
π̂∈♦ sink ∩¬♦ rerun Pr
(22)
n
(11)
(♦ sink ∩ ¬♦ rerun)
RfsI JP ′ K
= wp[init](µF • ([¬del] · wp[body](F )
+[del ∧ cntr ≤ k ∧ ¬int]))
+[del ∧ cntr ≤ k ∧ ¬int]) )
f
(♦ sink | ¬♦ rerun)
(20)
(21)
n
(10)
= CExpRewRsI JP K (♦ sink | ¬♦ )
wp[init; loop; observe(cntr ≤ k)]([¬int])
= wp[init](wp[loop]([cntr ≤ k ∧ ¬int]))
= wp[init](sup ([¬del] · wp[body](0)
Proof. Let us take the operational point of view. Let sI be
some initial state of P .
=
(19)
Rewriting (15) into (16) precisely captures the expected cumulative reward of all terminating paths in P ′′ which is the
expression in the following line. Finally we return from the
operational semantics to the denotational semantics and obtain
the desired result.
(def. J)
(IH2 on P )
(def. θ)
I
(18)
∞
= χG · TP (ĥ) + χ¬G · h
Rfs JP ′ K
I
(♦ sink )
X
r
ai r .
=
1−a
i=0
of the lemma. By letting J(X) = χG · wlp[P ](X) + χ¬G · h,
the claim reduces to ν F = ν J, which we prove showing
that ĥ = ν F is a fixed point of J and ν J is a fixed point
of F . (These assertions basically imply that ν F ≥ ν J and
ν J ≥ ν F , respectively.)
= CExpRew
K
The equality (12) holds because, by construction, the probability to violate an observation in P agrees with the probability
to reach a state in P ′ where rerun is true. In order to obtain
equation (15) we use the fact that for a fixed real value r and
probability a it holds
ĥ = wlp[while (G) {P }](h)
Rfs JP ′ K
′′
= wp(P , f )(sI ) .
We now turn to proving the second claim
F (ν J) = χG · TP (ν J) + χ¬G · h
(17)
′′
(IH on n)
J(ĥ) = χG · wlp[P ](ĥ) + χ¬G · h
= χG · θ + χ¬G · h
f
= ExpRewRsI JP
= χG · θ · wp[P̂ ](H (0)) + χ¬G · h · f
n+1
′′
f
PrRsI JP K (π̂) · f (π̂)
π̂∈♦ sink
(algebra)
n
X
I
= [del ∧ cntr ≤ k ∧ ¬int]
(15)
Φ2 (0) = Φ([del ∧ cntr ≤ k ∧ ¬int])
= [¬del] · wp[body]([del ∧ cntr ≤ k ∧ ¬int])
(♦ rerun)i
f
′
·PrRsI JP K (π̂) · f (π̂)
+ [del ∧ cntr ≤ k ∧ ¬int]
= [¬del] · (p(1 − c) · [del ∧ cntr + 1 ≤ k ∧ ¬int]
(16)
23
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int]
+(1 − p) · [cntr ≤ k ∧ ¬int])
+ [del ∧ cntr ≤ k ∧ ¬int]
1 − p(1 − c)k−cntr
·(1 − p)
1 − p(1 − c)
= [del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr ≤ k ∧ ¬int] · (1 − p)
= [¬del ∧ cntr ≤ k ∧ ¬int] · (1 − p)
+ [del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int]
Φ3 (0) = Φ([¬del ∧ cntr ≤ k ∧ ¬int] · (1 − p)
+ [del ∧ cntr ≤ k ∧ ¬int])
· (1 − p)(p(1 − c))
= ...
= [¬del ∧ cntr ≤ k ∧ ¬int] · (1 − p)
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int] · (1 − p)
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int]
As we continue to compute Φn (0) in each step we add a
summand of the form
· (1 − p)(p(1 − c))
[¬del ∧ cntr + i ≤ k ∧ ¬int] · (1 − p)(p(1 − c))i
+ [¬del ∧ cntr = k ∧ ¬int] · (1 − p)
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int]
1 − p(1 − c) + (p(1 − c)) 1 − p(1 − c)k−cntr
· (1 − p)
1 − p(1 − c)
= [del ∧ cntr ≤ k ∧ ¬int]
[del ∧ cntr ≤ k ∧ ¬int]
[¬del ∧ cntr + i ≤ k ∧ ¬int] · (1 − p)(p(1 − c))i
1 − p(1 − c)k−cntr+1
1 − p(1 − c)
Moreover this fixed point is the only fixed point and therefore
the least. The justification is given by [9] where they show
that loops which terminate almost surely have only one fixed
point. We can now continue our calculation from (23).
= [del ∧ cntr ≤ k ∧ ¬int]
· (1 − p)
k−cntr
X
(1 − p)(p(1 − c))i
i=0
= [del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr ≤ k ∧ ¬int]
· (1 − p)
1 − (p(1 − c))k−cntr+1
.
1 − p(1 − c)
= wp[init]([del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr ≤ k ∧ ¬int]
ark = a
k=0
(24)
1 − (p(1 − c))k−cntr+1
· (1 − p)
)
1 − p(1 − c)
1 − (p(1 − c))k
.
(25)
= (1 − c)(1 − p)
1 − p(1 − c)
This concludes the calculation of the numerator of (1). Analogously we find the denominator
where for the last equation we use a property of the finite
geometric series, namely that for r 6= 1
n−1
X
+ [¬del ∧ cntr ≤ k ∧ ¬int]
i=0
+ [¬del ∧ cntr ≤ k ∧ ¬int] ·
1 − p(1 − c)k−cntr
1 − p(1 − c)
= [del ∧ cntr ≤ k ∧ ¬int]
However we see that the predicate evaluates to false for all
i > k − cntr. Hence the non-zero part of the fixed point is
given by
k−cntr
X
1 − p(1 − c)k−cntr
1 − p(1 − c)
= [del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr = k ∧ ¬int] · (1 − p)
+ [¬del ∧ cntr + 1 ≤ k ∧ ¬int] · (1 − p)p(1 − c)
+ [del ∧ cntr ≤ k ∧ ¬int]
+
1 − rn
.
1−r
The result coincides with the intuition that in a state where
del = false, the probability to fail to reach the goal ¬int ∧
cntr ≤ k is distributed geometrically with probability p(1 −
c). It is easy to verify that our educated guess is correct by
checking that we indeed found a fixed point of Φ:
wlp[init; loop; observe(cntr ≤ k)](1)
= wlp[init](wlp[loop]([cntr ≤ k]))
= wlp[init](νF • ([¬del] · wlp[body](F )
+[del ∧ cntr ≤ k]))
= wlp[init](sup ([¬del] · wlp[body](1)
Φ([del ∧ cntr ≤ k ∧ ¬int]
+ [¬del ∧ cntr ≤ k ∧ ¬int]
n
1 − (p(1 − c))k−cntr+1
)
1 − p(1 − c)
= [del ∧ cntr ≤ k ∧ ¬int]
+ [¬del] · (1 − p) · [cntr ≤ k ∧ ¬int]
+ p(1 − c) [del ∧ cntr + 1 ≤ k ∧ ¬int]
n
+[del ∧ cntr ≤ k]) )
· (1 − p)
= wlp[init]([del ∧ cntr ≤ k]
+ [¬del ∧ cntr ≤ k] · (1 − pk−counter+1 ))
= 1 − pk .
(26)
The only difference is that here the supremum is taken with
respect to the reversed order ≥ in which 1 is the bottom
24
0.9
0.8
0.7
0.6
0.5
0
5
10
c = 0.1 p = 0.6
c = 0.2 p = 0.6
15
20
c = 0.1 p = 0.8
c = 0.2 p = 0.8
Fig. 5. The conditional probability that a message is intercepted as a function
of k for fixed c and p.
and 0 is the top element. However as mentioned earlier loop
terminates with probability one and the notions of wp and wlp
coincide. We divide (25) by (26) to finally arrive at
cwp[P]([¬intercepted])
= (1 − c)(1 − p)
1
1 − (p(1 − c))k
.
·
1 − p(1 − c)
1 − pk
One can visualise it as a function in k by fixing the parameters c and p. For example, Figure 5 shows the conditional
probability plotted for various parameter settings.
25
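As a quick numerical sanity check of the closed-form expression above (and of the curves in Fig. 5), the following short Python sketch evaluates the ratio of (25) to (26) for the parameter settings shown in the figure. It only re-evaluates the final formula; it does not re-derive the fixed point, and the script layout is my own.

```python
def cwp_not_intercepted(k: int, c: float, p: float) -> float:
    """Closed-form conditional probability derived above: (25) divided by (26)."""
    q = p * (1.0 - c)
    numerator = (1.0 - c) * (1.0 - p) * (1.0 - q ** k) / (1.0 - q)
    denominator = 1.0 - p ** k
    return numerator / denominator

if __name__ == "__main__":
    for c in (0.1, 0.2):
        for p in (0.6, 0.8):
            values = [round(cwp_not_intercepted(k, c, p), 4) for k in range(1, 21)]
            print(f"c={c}, p={p}: first values {values[:5]} ...")
```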
| 6 |
A Covert Queueing Channel in FCFS Schedulers
AmirEmad Ghassami and Negar Kiyavash
arXiv:1707.07234v1 [] 23 Jul 2017
Abstract
We study covert queueing channels (CQCs), which are a kind of covert timing channel that may be exploited in
shared queues across supposedly isolated users. In our system model, a user modulates messages to another user via
his pattern of access to the shared resource scheduled in a first-come-first-served (FCFS) manner. One example of
such a channel is the cross-virtual network covert channel in data center networks resulting from the queueing effects
of the shared resource. First, we study a system comprising a transmitter and a receiver that share a deterministic and
work-conserving FCFS scheduler, and we compute the capacity of this channel. We also consider the effect of the
presence of other users on the information transmission rate of this channel. The achievable information transmission
rates obtained in this study demonstrate the possibility of significant information leakage and great privacy threats
brought by CQCs in FCFS schedulers.
Index Terms
Covert Queueing Channel, First-Come-First-Served Scheduler, Capacity Limit.
I. INTRODUCTION
The existence of side and covert channels due to the fragility of isolation mechanisms is an important privacy
and security threat in computer networks. Such channels may be created across users which were supposed to be
isolated, resulting in information leakage. By definition, a covert channel is a hidden communication channel which
is not intended to exist in the system and is created furtively by users [2]. Covert channels may be exploited by a
trusted user, or possibly a malware inside a system with access to secret information to leak it to a distrusted user.
On the other hand, in a side channel a malicious user attempts to learn private information by observing information
not intended for him. In this scenario, there is no collaboration between the source of information and the recipient
[3].
AmirEmad Ghassami is with the Department of Electrical and Computer Engineering and, the Coordinated Science Lab, University of
Illinois at Urbana-Champaign, Urbana, IL, email: [email protected]. Negar Kiyavash is with the Department of Electrical and Computer
Engineering, Department of Industrial and Enterprise Systems Engineering and, the Coordinated Science Lab, University of Illinois at Urbana-Champaign, Urbana, IL, email: [email protected]. The results in this paper were presented in part at the 2015 IEEE International Symposium
on Information Theory, Hong Kong, June 14 - June 19, 2015 [1].
A special case of covert and side channels is a timing channel in which information is conveyed through timing
of occurrence of events (e.g., inter-arrival times of packets). For instance, queueing covert/side timing channels may
arise between users who share a packet scheduler in a network.
Packet schedulers serve packets from multiple streams which are queued in a single queue. This causes dependencies between delays observed by users. Particularly, the delay that one user experiences depends on the amount of
traffic generated by other streams, as well as his own traffic. Hence, a user can gain information about other users’
traffic by observing delays of his own stream. This dependency between the streams can breach private information
as well as create hidden communication channels between the users.
One example of a covert/side queueing channel is the cross-virtual network covert channel in data center networks
and cloud environments. In recent years, migrating to commercial clouds and data centers is becoming increasingly
popular among companies that deal with data. The multi-tenant nature of cloud and sharing infrastructure between
several users has made data protection and avoiding information leakage a serious challenge in such environments
[4]. In data center networks, software-defined-networks are frequently used for load balancing [5]. This generates
logically isolated virtual networks and prevents direct data exchange. However, since packet flows belonging to
different VNs inevitably share underlying network infrastructure (such as a router or a physical link), it is possible
to transfer data across VNs through timing channels resulting from the queueing effects of the shared resource(s).
In this paper, we study covert queueing channels (CQCs) in a shared deterministic and work-conserving first-come-first-served (FCFS) scheduler. We present an information-theoretic framework to describe and model the data
transmission in this channel and calculate its capacity. First, we consider the two users setting and then study the
effect of the presence of a third user on the information transmission rate. The approach for analyzing the effect
of the presence of the third user may be extended to calculate the capacity of the covert queueing channel serving
any number of users.
The rest of this paper is organized as follows: We review related works in Section II. In Section III, we describe
the system model. The capacities of the introduced channel for the two and three user cases are calculated in
Sections IV and V, respectively. Our concluding remarks are presented in Section VI.
II. RELATED WORKS
The existing literature on covert/side timing channels has mainly concentrated on timing channels in which the
receiver/adversary has direct access to the timing sequence produced by the transmitter/victim or a noisy version
of it. However, in a covert/side queueing channel, the receiver/adversary does the inference based on the timing of
his own packets which has been influenced by the original stream.
In a queuing side channel, where a malicious user, called an attacker, attempts to learn another user’s private
information, the main approach used by the attacker is traffic analysis. That is, the attacker tries to infer private
information from the victim’s traffic pattern. The attacker can have an estimation of the features of the other user’s
Fig. 1: Covert queueing channel in a system with 2 users (nodes UeT and UdT feed the FCFS scheduler; nodes UeR and UdR receive the output streams).
stream such as packet size and timing by emitting frequent packets in his own sequence. Previous work shows
that through traffic analysis, the attacker can obtain various private information including visited web sites [6], sent
keystrokes [7], and even inferring spoken phrases in a voice-over-IP connection [8].
In [9], Gong et al. proposed an attack where a remote attacker learns about a legitimate user’s browser activity by
sampling the queue sizes in the downstream buffer of the user’s DSL link. The information leakage of a queueing
side channel in an FCFS scheduler is analyzed in [10]. The analysis of more general work-conserving policies has
been done in [11] and [12]. The authors in [12] present an analytical framework for modeling information leakage
in queuing side channels and quantify the leakage for several common scheduling policies.
Most of the work in covert timing channels is devoted to the case in which two users communicate by modulating
the timings, and the receiver sees a noisy version of the transmitter’s inputs [13]–[21]. Also, there are many works
devoted to the detection of such channels [14], [22], [23]. The setup of CQC is new in the field of covert channels
and as far as the authors are aware, there are very few works on this setup [24], [25].
III. SYSTEM DESCRIPTION
Consider the architecture depicted in Figure 1. In this model, a scheduler serves packets from 2 users: Ue and Ud .
Each user, Ui , i ∈ {e, d}, is modeled by a transmitter and a receiver node, denoted by UiT and UiR , respectively.
UiR is the node which receives UiT ’s packet stream. Note that UiT and UiR could correspond to the uplink and
downlink of the same entity. Ue intends to send a message to Ud , but there is no direct channel between them.
However, since UeT and UdT ’s packets share the same queue, UeT can encode messages in the arrival times of its
packets, which are passed onto Ud via queueing delays. Therefore, a timing channel is created between users via
the delays experienced through the coupling of their traffic due to the shared scheduler.
To receive the messages from Ue , user Ud sends a packet stream from the node UdT . He then uses the delays
he experiences by receiving the packet stream at UdR to decode the message. Therefore, effectively, the nodes UeT
Fig. 2: An example of the input and output streams of the FCFS scheduler serving two users. Red packets belong to Ue and blue packets belong to Ud. We assume that one packet is buffered in the queue at time Ai. (The figure shows the arrived stream of the scheduler with arrivals Ai and Ai+1, and the output streams of users e and d with departures Di and Di+1, all on a common time axis.)
and UeR are on the encoder side and the nodes UdT and UdR are on the decoder side of the channel of our interest.
Throughout the paper, we call Ud ’s sent stream the probe stream.
We consider an FCFS scheduler, which is commonly used in DSL routers. We assume this scheduler is deterministic and work-conserving. Time is discretized into slots, and the scheduler is capable of processing at most one
packet per time slot. At each time slot, each user either issues one packet or remains idle. Furthermore, we assume
that all packets are the same size.
Figure 2 shows an example of the input and output streams of the system depicted in Figure 1 with an FCFS
scheduler. In this figure, the first stream is the arrival stream i.e., arrivals from both UeT and UdT , depicted by red
and blue, respectively. The second stream is the output stream of user Ue (received by UeR ), and the third one is the
output stream of user Ud (received by UdR ). In this example, we assume that one packet is buffered in the queue at
time Ai , where a packet arrives from both UdT and UeT . If user Ue had not sent the two packets (depicted in red),
the second packet of user Ud which arrives at time Ai+1 could have departed one time slot earlier. Therefore, Ud
knows that Ue has issued two packets.
Throughout the paper, we assume that the priorities of the users are known. Particularly, without loss of generality,
we assume that Ud has the highest priority among all users; i.e., in the case of simultaneous arrivals, Ud ’s packet
will be served first.
As mentioned earlier, at each time slot, each user is allowed to either send one packet or none; hence, the input
and output packet sequences of each user could be viewed as a binary bitstream, where ‘1’ and ‘0’ indicates whether
a packet was sent or not in the corresponding time slot.
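To make the timing mechanism concrete, here is a minimal Python simulation of a deterministic, work-conserving FCFS scheduler with unit service rate, fed by two binary arrival streams (one packet or none per slot). It shows that, provided a packet is already buffered (as assumed in the Figure 2 example), Yi = Di+1 − Di − 1 equals the number of packets Ue injected in [Ai, Ai+1). The particular bit streams below are illustrative choices of mine, not taken from the paper.

```python
from collections import deque

def fcfs_departures(probe, covert, initial_backlog=1):
    """Deterministic, work-conserving FCFS scheduler, one packet served per slot.
    `probe` and `covert` are 0/1 lists of arrivals from U_d and U_e; U_d has
    priority on simultaneous arrivals.  Returns the departure slots of U_d's
    packets.  `initial_backlog` models packets buffered by a preamble stage."""
    queue = deque(('x', -1) for _ in range(initial_backlog))   # pre-buffered packets
    departures = []
    horizon = len(probe) + initial_backlog + sum(probe) + sum(covert)
    for t in range(horizon):
        if t < len(probe) and probe[t]:
            queue.append(('d', t))
        if t < len(covert) and covert[t]:
            queue.append(('e', t))
        if queue:                              # serve one packet during slot t
            kind, _ = queue.popleft()
            if kind == 'd':
                departures.append(t + 1)       # departs at the end of the slot
    return departures

if __name__ == "__main__":
    probe  = [1, 0, 1, 0, 1, 0, 1]             # U_d sends every other slot (rate 1/2)
    covert = [0, 1, 1, 1, 0, 0, 0]             # U_e's hidden packets
    D = fcfs_departures(probe, covert)
    A = [t for t, b in enumerate(probe) if b]
    for i in range(len(A) - 1):
        X_i = sum(covert[A[i]:A[i + 1]])       # packets U_e sent in [A_i, A_{i+1})
        Y_i = D[i + 1] - D[i] - 1
        print(f"interval [{A[i]},{A[i+1]}): X_i={X_i}, Y_i={Y_i}")
```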
Assume message W drawn uniformly from the message set {1, 2, ..., M } is transmitted by UeT , and Ŵ is Ud ’s
estimate of the sent message. Our performance metric is the average error probability, defined as follows:
Pe ≜ P(W ≠ Ŵ) = (1/M) Σ_{m=1}^{M} P(Ŵ ≠ m | W = m).
Ue encodes each message into a binary sequence of length n, ∆n , to create the codebook, which is known at the
decoder, Ud .
In order to send a message, UeT emits a packet in the i-th time slot if ∆i = 1 and remains idle otherwise, i.e.,
∆i = 1 ⇒ UeT issues a packet in time slot i;
∆i = 0 ⇒ UeT remains idle in time slot i.
To decode this message, UdT sends a binary length n stream (the probe stream) to the scheduler during the same
length n time period. User Ud will use this stream and the response stream received at node UdR to decode the sent
message.
We define the code, rate of the code, and the channel capacity similar to the definitions in [13], [26] and [27], as
follows:
Definition 1. An (n, M, ε)-code consists of a codebook of M equiprobable binary codewords, where messages take on average n time slots to be received, and the error probability satisfies Pe ≤ ε.
Definition 2. The information transmission rate, R, of a code is the amount of conveyed information (logarithm
of the codebook size) normalized by the average number of used time slots for the message to be received, i.e.,
R = (log M) / n.
Rate may be interpreted as the average amount of information conveyed in a single time slot.
Definition 3. (Channel Capacity) The Shannon capacity, C, for a channel is the maximum achievable rate at which one can communicate through the channel when the average probability of error goes to zero. In other words, C is the supremum of rates R satisfying the following property [27]: for all δ > 0, there exists an (n, M, εn)-code such that (log M)/n > R − δ and εn → 0 as n → ∞.
The following notations will be used throughout the paper:
• ri: UiT's packet rate.
• Ai: Arrival time of the i-th packet in the probe stream.
• Di: Departure time of the i-th packet of the probe stream.
• We assume m packets are sent by Ud during n time slots and we have rd = lim_{n→∞} m/n.
• Xi: Number of Ue's packets sent in the interval [Ai, Ai+1). Note that Xi = Σ_{j=Ai}^{Ai+1−1} ∆j.
• Ti = Ai+1 − Ai: inter-arrival time between the i-th and (i+1)-th packet of the probe stream. We denote a realization of T by τ.
• Yi = Di+1 − Di − 1.
• X̂i: estimation of Xi by the decoder.
• Ŵ: decoded message.
In an FCFS scheduler, Ud can have an estimation of the number of the packets of other users between any of
his own consecutive packets. The estimation of the number of packets in the interval [Ai , Ai+1 ) is accurate if the
scheduler is deterministic and work-conserving and a sufficient number of packets is buffered in the queue at time
Ai 1 . In that case, the number of other users’ packets arriving in the interval [Ai , Ai+1 ) could be simply calculated
by Di+1 − Di − 1. Note that Ud cannot pinpoint the location of the sent packets; that is, if the inter-arrival time is
τ , Ud can distinguish between τ + 1 different sets of bit streams sent during this time. Therefore, we look at any
probe stream sent during n time slots as a combination of different inter-arrival times.
If the sum of the packet rates of the users used during sending a message of length n is on average larger than
1, then the message will arrive, on average, over more than n time slots. Also, this will destabilize the input
queue of the scheduler. For example, for a system with two users Ud and Ue , if UdT sends packets in every time
slot, then sending a packet by UeT in any time slot would cause a delay in the serving of the next packet of UdT
and hence could be detected. Therefore, in each time slot, UeT could simply idle to signal a bit ‘0’ or send a packet
to signal a bit '1', resulting in an information rate of 1/1.5 bits per time slot in the case that bits are equiprobable.
But, this scheme is not feasible in practice as it would destabilize the queue and result in severe packet drops.
In order to have queue stability, it suffices that the total packet arrival rate does not exceed the service rate, which
for a deterministic and work-conserving scheduler is equal to 1 (see Appendix A for the proof of stability which
is based on a Lyapunov stability argument for the general case that the serving rate is assumed to be 0 ≤ ρ ≤ 1
1 If the service rate of the scheduler is equal to 1, there should be at least Ai+1 − Ai − 1 packets buffered in the queue at time Ai. Therefore,
user Ud needs to know the queue length. This is feasible using the following formula:
q(Ai ) = Di − Ai − 1
where q(Ai ) denotes the queue length at the time that the i-th packet in the probe stream arrives at the queue. The extra 1 in the formula is
the time needed for the i-th packet of the probe stream to be served. Therefore, user Ud should always be aware of the queue length and keep
it sufficiently large by sending extra packets when needed.
and an arbitrary number of users is considered). Specifically, for the case of two users we need
re + rd < 1.    (1)
On the other hand, if the sum of the packet rates of the users used during sending a message of length n is
on average less than 1, the length of the input queue may go to zero, and consequently Ud may not be able to
count the number of packets of other users correctly. Note that increasing rd increases the resolution available for
user Ud and hence this user can have a better estimation of the number of other users’ sent packets. Therefore, in
the case of two users, in order to achieve the highest information rate, the operating point should tend to the line re + rd = 1. Therefore, we focus on coding schemes where the sum of the rates is held at 1, while considering a preamble stage in our communication to guarantee sufficient queue length.
IV. TWO-USER CASE
In this section, using achievability and converse arguments, the capacity of the introduced system is calculated
for a system with a deterministic and work-conserving FCFS scheduler serving packets from two users.
As depicted in Figure 1, user Ue is attempting to send a message to Ud through the covert queueing channel
between them. Note that since we have considered a service rate of 1 for the FCFS scheduler and the users can agree on the packet stream sent by UdT ahead of time, the feedback stream received at UeR is already available at the encoder. Therefore,
the following Markov chain holds
W → X^m → Y^m → Ŵ.    (2)
Note that as mentioned earlier, if there is a sufficient number of packets buffered in the shared queue, X̂i could be
accurately estimated as Yi .
The main result of this section is the following theorem, the proof of which is developed in the rest of the section.
Theorem 1. The capacity of the timing channel in a shared FCFS scheduler with service rate 1 depicted in Figure
1 is equal to 0.8114 bits per time slot, which can be obtained by solving the following optimization problem:
C = sup_{α,γ1,γ2} α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2)    (3)
s.t. α(γ1 + 1) + (1 − α)(γ2 + 1/2) = 1,
where 0 ≤ α ≤ 1 and 0 ≤ γ1, γ2 ≤ 1/2, and the function H̃ : [0, 1] × {1/k : k ∈ N} → [0, 1] is defined as:
H̃(γ, 1/k) = (1/k) · sup_{X ∈ {0,1,...,k}, E[X]=kγ} H(X),    k ∈ N, 0 ≤ γ ≤ 1.    (4)
We first investigate some of the properties of the function H̃.
Lemma 1. Let Uk ∼ unif({0, 1, ..., k}). The distribution which achieves the optimum value in (4) is the tilted version of unif({0, 1, ..., k}) with parameter λ:
PX(i) = e^{iλ} / Σ_{j=0}^{k} e^{jλ},    i ∈ {0, 1, ..., k},
where λ = (ψ′_{Uk})^{−1}(kγ) and the function ψ′_{Uk}(·) is the derivative of the log-moment generating function of Uk.
See Appendix B for the proof of Lemma 1.
Lemma 2. The function H̃ could be computed using the following expression:
H̃(γ, 1/k) = (1/k) [log2(k + 1) − ψ*_{Uk}(kγ) log2 e],    (5)
where Uk ∼ unif({0, 1, ..., k}), and the function ψ*_{Uk}(·) is the rate function given by the Legendre-Fenchel transform of the log-moment generating function ψ_{Uk}(·):
ψ*_{Uk}(γ) = sup_{λ∈R} {λγ − ψ_{Uk}(λ)}.    (6)
In order to prove this lemma, first we note that for any random variable X defined over the set {0, 1, ..., k},
H(X) = Σ_{i=0}^{k} PX(i) log(1/PX(i))
     = Σ_{i=0}^{k} PX(i) log(k + 1) − Σ_{i=0}^{k} PX(i) log( PX(i) / (1/(k+1)) )
     = log(k + 1) − D(PX || Uk),
where D(PX || Uk) denotes the KL-divergence between PX and Uk. Therefore, in order to maximize H(X), we need to minimize D(PX || Uk). Using the following well-known fact [28] concludes the lemma:
min_{E[X]=kγ} D(PX || Uk) = ψ*_{Uk}(kγ) log2 e.    (7)
Figure 3 shows the function H̃(γ, 1/k) for different values of γ and k ∈ {1, 2, 3}.
Lemma 3. The function H̃(·, ·) is concave in the pair (γ, 1/k) in the sense that for integers k1, k2, k3, values 0 ≤ γ1, γ2, γ3 ≤ 1, and α ∈ [0, 1] such that α(γ1, 1/k1) + (1 − α)(γ3, 1/k3) = (γ2, 1/k2), we have:
α H̃(γ1, 1/k1) + (1 − α) H̃(γ3, 1/k3) ≤ H̃(γ2, 1/k2).    (8)
See Appendix C for the proof of Lemma 3.
Substituting (5) in (3) and solving it, the capacity of the timing channel in the shared FCFS scheduler with
service rate 1 depicted in Figure 1 is equal to 0.8114 bits per time slot, achieved by α = 0.177, γ1 = 0.43 and
γ2 = 0.407.
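The optimization in (3) is easy to check numerically. The sketch below (assuming standard NumPy/SciPy; all names are mine) computes H̃(γ, 1/k) via the tilted-uniform form of Lemma 1 and evaluates the objective of (3) on a grid of (α, γ1), with γ2 determined by the constraint; it reproduces a value close to 0.8114 bits per time slot and recovers a point near α = 0.177, γ1 = 0.43, γ2 = 0.407.

```python
import numpy as np
from scipy.optimize import brentq

def tilted(k, lam):
    """Tilted uniform distribution on {0,...,k} with parameter lambda (Lemma 1)."""
    i = np.arange(k + 1)
    logits = lam * i
    logits -= logits.max()
    w = np.exp(logits)
    return w / w.sum()

def H_tilde(gamma, k):
    """H~(gamma, 1/k) of (4), in bits."""
    mean = k * gamma
    if mean <= 0.0 or mean >= k:
        return 0.0
    i = np.arange(k + 1)
    lam = brentq(lambda l: tilted(k, l) @ i - mean, -60.0, 60.0)
    p = tilted(k, lam)
    return float(-(p * np.log2(p)).sum()) / k

def objective(alpha, g1):
    """alpha*H~(g1,1) + (1-alpha)*H~(g2,1/2) with g2 fixed by the constraint in (3)."""
    if alpha >= 1.0:
        return -np.inf
    g2 = (1.0 - alpha * (g1 + 1.0)) / (1.0 - alpha) - 0.5
    if not (0.0 <= g2 <= 0.5):
        return -np.inf
    return alpha * H_tilde(g1, 1) + (1.0 - alpha) * H_tilde(g2, 2)

if __name__ == "__main__":
    best = max(((objective(a, g), a, g)
                for a in np.linspace(0.0, 0.6, 121)
                for g in np.linspace(0.0, 0.5, 101)), key=lambda t: t[0])
    print("grid optimum  : %.4f at alpha=%.3f, gamma1=%.3f" % best)
    print("paper's point : %.4f" % objective(0.177, 0.43))
```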
Fig. 3: H̃(γ, 1/k) for different values of γ and k ∈ {1, 2, 3}.
Lemma 4. For all γ ∈ [0, 1] and k ∈ N, we have H̃(γ, 1/k) = H̃(1 − γ, 1/k).
See Appendix D for the proof of Lemma 4.
In the following, the proof of Theorem 1 is given. The proof is based on converse and achievability arguments.
A. Converse
In the converse side, the ultimate goal is to prove that
C ≤ sup_{α,γ1,γ2} α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2)
s.t. α(γ1 + 1) + (1 − α)(γ2 + 1/2) = 1,
where 0 ≤ α ≤ 1 and 0 ≤ γ1, γ2 ≤ 1/2. We break the proof into two lemmas.
Lemma 5. For the timing channel in a shared FCFS scheduler with service rate 1 depicted in Figure 1, any code consisting of a codebook of M equiprobable binary codewords, where messages take on average n time slots to be received, satisfies
(1/n) log M ≤ Σ_{τ=1}^{n} [πτ H̃(µτ, 1/τ)] + εn,    (9)
where Σ_{τ=1}^{n} πτ (µτ + 1/τ) = 1 and for all τ, 0 ≤ µτ ≤ 1/2. In this expression, εn = (1/n)(H(Pe) + Pe log2(M − 1)), πτ is the portion of time that user Ud sends packets with inter-arrival time equal to τ in the probe stream, and µτ is UeT's average packet rate when the inter-arrival time is equal to τ.
Proof. We first note that
(1/n) log M = (1/n) H(W)    (a)
            = (1/n) H(W | τ^m)    (b)
            = (1/n) I(W; Ŵ | τ^m) + (1/n) H(W | Ŵ, τ^m)
            ≤ (1/n) I(W; Ŵ | τ^m) + εn    (c)
            ≤ (1/n) I(X^m; Y^m | τ^m) + εn,    (d)
where (a) holds because W is a uniform random variable over the set of messages {1, ..., M}, (b) follows from the fact that the chosen message is independent of the inter-arrival times of the decoder's packets, (c) follows from Fano's inequality with εn = (1/n)(H(Pe) + Pe log2(M − 1)), and (d) follows from the data processing inequality in the Markov chain in (2). Therefore
(1/n) log M ≤ (1/n) [H(X^m | τ^m) − H(X^m | Y^m, τ^m)] + εn
            ≤ (1/n) H(X^m | τ^m) + εn
            ≤ (1/n) Σ_{j=1}^{m} H(Xj | τ^m) + εn
            ≤ (1/n) Σ_{j=1}^{m} max_{P_{Xj|τ^m}} H(Xj | τ^m) + εn.
In the maximization above, the mean of the distribution P_{Xj|τ^m} is E[Xj | τ^m]. As mentioned in Section III, in order to find the maximum information rate while having stability, we are interested in the asymptotic regime in which the operating point converges to the line re + rd = 1. Therefore, the information rate is upper bounded by having the set of means {E[X1|τ^m], E[X2|τ^m], ..., E[Xm|τ^m]} satisfy the constraint (1/n) Σ_{j=1}^{m} E[Xj|τ^m] + rd = 1.
Let ξj = E[Xj|τ^m] / τj. Using (4), we have
max_{P_{Xj|τ^m}, E[Xj|τ^m]=τj ξj} H(Xj | τ^m) = τj H̃(ξj, 1/τj),    (10)
where, as mentioned in Lemma 1, the distribution for each Xj which achieves the maximum value in (10) is the tilted distribution of U_{τj} with parameter λ, such that λ = (ψ′_{U_{τj}})^{−1}(τj ξj). Therefore, we will have
(1/n) log M ≤ (1/n) Σ_{j=1}^{m} τj H̃(ξj, 1/τj) + εn,
such that the set {ξ1, ξ2, ..., ξm} satisfies the constraint (1/n) Σ_{j=1}^{m} τj ξj + rd = 1. The inter-arrival times take values in the set {1, 2, ..., n}. Therefore, in the summation above we can fix the value of the inter-arrival time to a value τ
and count the number of times that τj has that value. Defining mτ as the number of times that the inter-arrival time is equal to τ (note that n = Σ_{τ=1}^{n} τ · mτ), we can break the summation above as follows:
(1/n) log M ≤ (1/n) Σ_{τ=1}^{n} [ Σ_{k=1}^{mτ} τ H̃(µ_{τ,k}, 1/τ) ] + εn
            = (1/n) Σ_{τ=1}^{n} [ τ Σ_{k=1}^{mτ} H̃(µ_{τ,k}, 1/τ) ] + εn
            = (1/n) Σ_{τ=1}^{n} [ τ · mτ · (1/mτ) Σ_{k=1}^{mτ} H̃(µ_{τ,k}, 1/τ) ] + εn,
where µ_{τ,k} is equal to the k-th ξj which has τj = τ.
By Lemma 3, the function H̃(·, ·) is a concave function of its first argument. Therefore, by Jensen's inequality,
(1/mτ) Σ_{k=1}^{mτ} H̃(µ_{τ,k}, 1/τ) ≤ H̃(µτ, 1/τ),    (11)
where µτ = (1/mτ) Σ_{k=1}^{mτ} µ_{τ,k}. Using (11) and the equation n = Σ_{τ=1}^{n} τ · mτ, we have:
(1/n) log M ≤ Σ_{τ=1}^{n} [ (τ · mτ / Σ_{τ=1}^{n} τ · mτ) H̃(µτ, 1/τ) ] + εn
            = Σ_{τ=1}^{n} [ πτ H̃(µτ, 1/τ) ] + εn,    (12)
where πτ = τ · mτ / Σ_{τ=1}^{n} τ · mτ.
The packet rates of the users could be written as follows:
re = (1/n) Σ_{j=1}^{m} τj ξj = (1/n) Σ_{τ=1}^{n} Σ_{k=1}^{mτ} τ µ_{τ,k}
   = (1/n) Σ_{τ=1}^{n} τ mτ (1/mτ) Σ_{k=1}^{mτ} µ_{τ,k} = (1/n) Σ_{τ=1}^{n} τ mτ µτ
   = Σ_{τ=1}^{n} (τ · mτ / Σ_{τ=1}^{n} τ · mτ) µτ = Σ_{τ=1}^{n} πτ µτ,
and
rd = Σ_{τ=1}^{n} (1/τ) πτ.
Therefore, the constraint could be written as follows:
Σ_{τ=1}^{n} πτ (µτ + 1/τ) = 1.    (13)
Suppose the set of pairs {(µτ, 1/τ)}_{τ=1}^{n} satisfies (13) and maximizes the right-hand side of (12). By Lemma 4, there exists another set of pairs {(µ̂τ, 1/τ)}_{τ=1}^{n} with µ̂τ defined as
µ̂τ = µτ if 0 ≤ µτ ≤ 1/2,  and  µ̂τ = 1 − µτ if 1/2 ≤ µτ ≤ 1,
that gives the same value for the right-hand side of (12), but has Σ_{τ=1}^{n} πτ (µ̂τ + 1/τ) ≤ Σ_{τ=1}^{n} πτ (µτ + 1/τ). Therefore, Ud can increase his packet rate and increase the information rate using the values µ̂τ. Hence, in the maximizing set, for all τ, we have 0 ≤ µτ ≤ 1/2. Therefore, the optimal operating point will be on the line re + rd = 1, with 0 ≤ re ≤ 1/2 and 1/2 ≤ rd ≤ 1.
Applying Lemma 3, we can replace all pairs of the form (µτ, 1/τ), τ ≥ 2, with a single pair of the form (µ, 1/2):
Lemma 6. For any set of pairs S = {(µτ, 1/τ), τ ∈ [n]} where for all τ, 0 ≤ µτ ≤ 1/2, with weights {πτ, τ ∈ [n]} and operating point on the line re + rd = 1 with 0 ≤ re ≤ 1/2 and 1/2 ≤ rd ≤ 1, there exist 0 ≤ α ≤ 1 and 0 ≤ γ1, γ2 ≤ 1/2 such that
α(γ1 + 1) + (1 − α)(γ2 + 1/2) = 1,
and
Σ_{τ=1}^{n} [πτ H̃(µτ, 1/τ)] ≤ α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2).
Proof. For all τ ∈ {3, ..., n}, there exists βτ ∈ [0, 1] such that
βτ (µ1, 1) + (1 − βτ)(µτ, 1/τ) = (µ2^τ, 1/2).
Clearly, the set {(µ1, 1), (µ2, 1/2), (µ2^3, 1/2), ..., (µ2^n, 1/2)} can also give the same operating point as S does. By Lemma 3,
βτ H̃(µ1, 1) + (1 − βτ) H̃(µτ, 1/τ) ≤ H̃(µ2^τ, 1/2),    ∀τ ∈ {3, ..., n}.
Therefore,
Σ_{τ=1}^{n} πτ H̃(µτ, 1/τ) = ζ1 H̃(µ1, 1) + ζ2 H̃(µ2, 1/2) + Σ_{τ=3}^{n} ζτ (βτ H̃(µ1, 1) + (1 − βτ) H̃(µτ, 1/τ))
  ≤ ζ1 H̃(µ1, 1) + ζ2 H̃(µ2, 1/2) + Σ_{τ=3}^{n} ζτ H̃(µ2^τ, 1/2)
  ≤ ζ1 H̃(µ1, 1) + (1 − ζ1) H̃( (ζ2 µ2 + Σ_{τ=3}^{n} ζτ µ2^τ) / (1 − ζ1), 1/2 ),    (14)
where π1 = ζ1 + Σ_{τ=3}^{n} ζτ βτ, π2 = ζ2 and πτ = ζτ (1 − βτ) for 3 ≤ τ ≤ n, and we have used Lemma 3 again in the last inequality.
From Lemmas 5 and 6, we have:
(1/n) log M ≤ α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2) + εn
            ≤ sup_{α,γ1,γ2} α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2) + εn.
Letting n → ∞, εn goes to zero and we have
C ≤ sup_{α,γ1,γ2} α H̃(γ1, 1) + (1 − α) H̃(γ2, 1/2)
s.t. α(γ1 + 1) + (1 − α)(γ2 + 1/2) = 1,
where 0 ≤ α ≤ 1 and 0 ≤ γ1, γ2 ≤ 1/2. This completes the proof of the converse part.
B. Achievability
The sequence of steps in our achievability scheme is as follows:
• Set α = 0.177 − δ, for a small and positive value of δ.
• Fix a binary distribution P1 such that P1(1) = 0.43 and P1(0) = 0.57. Generate a binary codebook C1 containing 2^{αnR1} sequences of length αn of i.i.d. entries according to P1.
• Fix a ternary distribution P2 over the set of symbols {a0, a1, a2} such that P2(a0) = 0.43, P2(a1) = 0.325 and P2(a2) = 0.245. Generate a ternary codebook C2 containing 2^{(1−α)nR2} sequences of length (1/2)(1 − α)n of i.i.d. entries according to P2. Substitute a0 with 00, a1 with 10 and a2 with 11, so we will have 2^{(1−α)nR2} binary sequences of length (1 − α)n.
• Combine C1 and C2 to get C, such that C has 2^{n(αR1 + (1−α)R2)} binary sequences of length n, where we concatenate the i-th row of C1 with the j-th row of C2 to make the ((i − 1) · 2^{(1−α)nR2} + j)-th row of C (note that 2^{(1−α)nR2} is the number of rows in C2). Rows of C are our codewords. In the above, n should be chosen such that αnR1, αn, (1 − α)nR2 and (1/2)(1 − α)n are all integers.
• Encoding: UdT sends the stream of all ones (one packet in each time slot) in the first αn time slots and sends the bit stream of concatenated 10's for the rest of the (1 − α)n time slots. To send message m, UeT sends the corresponding row of C; that is, it sends the corresponding part of m from C1 in the first αn time slots and the corresponding part of m from C2 in the rest of the (1 − α)n time slots.
• Decoding: Assuming the queue is not empty,² since there is no noise in the system, the decoder can always learn the exact sequence sent by Ue.
Consequently, we will have:
C ≥ (log2 2^{n(αR1 + (1−α)R2)}) / n = α R1 + (1 − α) R2.
² Since in our achievable scheme UdT's packets are spaced by either one or two time slots, it is enough to have one packet buffered in the queue; since we are working in the heavy-traffic regime, this is not a problem.
Fig. 4: Covert queueing channel in a system with three users (nodes UeT, UpT and UdT feed the FCFS scheduler; nodes UeR, UpR and UdR receive the output streams).
In the infinite block-length regime, where n → ∞, we can choose R1 = H(P1), R2 = (1/2) H(P2) and find codebooks C1 and C2 such that this scheme satisfies the rate constraint. Therefore,
C ≥ α H(P1) + (1 − α) (1/2) H(P2).
Substituting the values in the expression above, and letting δ go to zero, we see that the rate 0.8114 bits per time slot is achievable.
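A minimal sketch of the random-coding construction above, assuming NumPy (the function names and the example parameter values are my own; as stated above, n must be chosen so that all the lengths are integers):

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(n, alpha, R1, R2):
    """Random codebooks C1 (binary, i.i.d. P1) and C2 (ternary, i.i.d. P2,
    then mapped a0->00, a1->10, a2->11), concatenated row-wise into C,
    together with U_d^T's probe stream."""
    n1 = int(round(alpha * n))              # length of the C1 part
    n2 = n - n1                             # length of the C2 part (= (1-alpha)*n)
    M1 = int(round(2 ** (n1 * R1)))         # 2^{alpha n R1} codewords in C1
    M2 = int(round(2 ** (n2 * R2)))         # 2^{(1-alpha) n R2} codewords in C2
    P1 = [0.57, 0.43]                       # P1(0), P1(1)
    P2 = [0.43, 0.325, 0.245]               # P2(a0), P2(a1), P2(a2)
    C1 = rng.choice([0, 1], size=(M1, n1), p=P1)
    sym = rng.choice([0, 1, 2], size=(M2, n2 // 2), p=P2)
    bit_map = np.array([[0, 0], [1, 0], [1, 1]])       # a0->00, a1->10, a2->11
    C2 = bit_map[sym].reshape(M2, n2)
    # row (i-1)*M2 + j of C is row i of C1 concatenated with row j of C2
    C = np.hstack([np.repeat(C1, M2, axis=0), np.tile(C2, (M1, 1))])
    probe = np.concatenate([np.ones(n1, dtype=int), np.tile([1, 0], n2 // 2)])
    return C, probe

if __name__ == "__main__":
    C, probe = build_codebook(n=40, alpha=0.2, R1=0.25, R2=0.125)
    print("codebook shape:", C.shape, " probe length:", len(probe))
```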
V. THREE-USER CASE
As an extension to the basic problem, in this section we consider the case that a third user is also using the
shared scheduler. We add user Up to our basic system model. This user has nodes UpT and UpR as his transmitter
and receiver nodes, respectively (Figure 4). We assume that the node UpT sends packets according to a Bernoulli
process with rate rp to the shared scheduler. The shared scheduler is again assumed to be FCFS with service rate
1 and we analyze the capacity for coding schemes satisfying queueing stability condition in the asymptotic regime
where the operating point is converging to the line re + rp + rd = 1. Also, in this section we consider the extra
assumption that the inter-arrival time of the packets in the probe stream is upper bounded by the value τmax .
Assuming that a sufficient number of packets are buffered in the shared queue, user Ud can still count the number
of packets sent by the other two users between any of his own consecutive packets, yet he cannot distinguish between
packets sent by user Ue and the packets sent by user Up . Hence, user Ud has uncertainty in estimating the values
of X. We model this uncertainty as a noise in receiving X. Suppose UdT sends two packets with τi = 2. Each
of the other users can possibly send at most 2 packets in the interval [Ai , Ai+1 ) and hence, Y ∈ {0, 1, 2, 3, 4}.
Therefore, we have the channel shown in Figure 5 for this example. In the general case, for the inter-arrival time
Fig. 5: Channel between the encoder and the decoder of the system for the case that the inter-arrival time of two packets of the probe stream is 2 (inputs X ∈ {0, 1, 2} map to outputs Y ∈ {0, 1, 2, 3, 4} with binomial transition probabilities such as (1 − rp)²).
Fig. 6: The graphical model representing the statistical relation between W, X^m, Y^m and Ŵ.
τ, given X = x, we have Y ∈ {x, x + 1, ..., x + τ} such that
Pr(Y = i + x | X = x) = (τ choose i) (rp)^i (1 − rp)^{τ−i},    i ∈ {0, ..., τ},    (15)
which is a binomial distribution Bin(τ, rp). Therefore, the support of the random variable Y is {0, 1, ..., 2τ}. For the mean of Y, we have:
E[Y | τ] = E[E[Y | X, τ] | τ] = E[X + τ rp | τ] = τ (re + rp).    (16)
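For concreteness, the transition law in (15) can be tabulated and the corresponding mutual information evaluated directly. A small sketch, assuming NumPy/SciPy (the example input distribution is arbitrary):

```python
import numpy as np
from scipy.stats import binom

def channel_matrix(tau, rp):
    """P(Y = y | X = x) from (15): Y = X + N with N ~ Bin(tau, rp),
    X in {0,...,tau}, Y in {0,...,2*tau}."""
    W = np.zeros((tau + 1, 2 * tau + 1))
    for x in range(tau + 1):
        W[x, x:x + tau + 1] = binom.pmf(np.arange(tau + 1), tau, rp)
    return W

def mutual_information_bits(px, W):
    """I(X;Y) in bits for an input distribution px over {0,...,tau}."""
    py = px @ W
    I = 0.0
    for x in range(W.shape[0]):
        for y in range(W.shape[1]):
            if px[x] > 0 and W[x, y] > 0:
                I += px[x] * W[x, y] * np.log2(W[x, y] / py[y])
    return I

if __name__ == "__main__":
    tau, rp = 2, 0.1
    W = channel_matrix(tau, rp)
    px = np.array([0.4, 0.35, 0.25])        # an example input distribution on {0,1,2}
    print("I(X;Y) =", round(mutual_information_bits(px, W), 4), "bits")
```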
Because of user Up ’s stream, the encoder is not aware of the stream received at node UeR beforehand and
this output can provide information to the encoder about UpT ’s stream. The more packets node UeT sends to the
scheduler, the more information this stream contains about UpT ’s stream. Using this information, the encoder can
have an estimation of the output of the channel at the decoder’s side and hence it could be considered as a noisy
feedback to the encoder. Figure 6 shows the graphical model for random variables in our system.
The main result of this section is evaluation of the capacity of the introduced channel, presented in Theorem 2.
In the following, the subscript rp denotes that the calculation is done when the rate of Up is rp .
Theorem 2. If the rate of Up is rp, the capacity of the timing channel in a shared FCFS scheduler with service rate 1 depicted in Figure 4 is given by
C(rp) = sup_{α,γ1,γ2,τ} α Ĩ_rp(γ1, 1/τ) + (1 − α) Ĩ_rp(γ2, 1/(τ+1))    (17)
s.t. α(γ1 + 1/τ) + (1 − α)(γ2 + 1/(τ+1)) = 1 − rp,
where 0 ≤ α ≤ 1, 0 ≤ γ1, γ2 ≤ 1 and 1 ≤ τ ≤ τmax − 1. The function Ĩ_rp : [0, 1] × {1/k : k ∈ N} → [0, 1] is defined as:
Ĩ_rp(γ, 1/k) = (1/k) · sup_{X ∈ {0,1,...,k}, E[X]=kγ} I_rp(X; Y),    k ∈ N, 0 ≤ γ ≤ 1.    (18)
The proof is based on converse and achievability arguments. Before giving the proof, we first investigate some of the properties of the function Ĩ.
Lemma 7. The function Ĩ_rp could be computed using the following expression:
Ĩ_rp(γ, 1/k) = Ȟ_rp(γ, 1/k) − (1/k) H(Bin(k, rp)),    (19)
where Ȟ_rp(γ, 1/k) = (1/k) sup_{X ∈ {0,1,...,k}, E[X]=kγ} H_rp(Y) and the second term is the entropy of the binomial distribution with parameters k and rp.
See Appendix E for the proof of Lemma 7.
In order to calculate Ȟ_rp(γ, 1/k), the following optimization problem should be solved:
max_{PX ≥ 0} log2 e · Σ_{i=0}^{2k} PY(i) ln(1/PY(i))
s.t. Σ_{i=0}^{k} i PX(i) = E[X] = kγ,  Σ_{i=0}^{k} PX(i) = 1,    (20)
where PY = PX ∗ P_{Bin(k,rp)}, that is,
PY(i) = Σ_{j=0}^{k} PX(j) P_{Bin(k,rp)}(i − j),    i ∈ {0, 1, ..., 2k}.    (21)
Figure 7 shows the functions Ĩ_0(γ, 1/k), Ĩ_0.1(γ, 1/k) and Ĩ_0.2(γ, 1/k) for different values of γ and k ∈ {1, 2, 3}.
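The optimization (20) is a concave maximization over the probability simplex and can be solved with off-the-shelf tools. A rough sketch, assuming NumPy/SciPy (the parameterization and solver choice are mine, and the tolerance handling is simplified):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, entropy

def I_tilde(gamma, k, rp):
    """I~_rp(gamma, 1/k) from (18)-(19), computed by solving (20) numerically."""
    noise = binom.pmf(np.arange(k + 1), k, rp)          # Bin(k, rp) pmf
    i = np.arange(k + 1)

    def neg_HY(px):
        py = np.convolve(px, noise)                     # P_Y = P_X * Bin(k, rp), per (21)
        py = py[py > 1e-15]
        return float(np.sum(py * np.log2(py)))          # = -H(Y) in bits

    cons = [{"type": "eq", "fun": lambda px: px.sum() - 1.0},
            {"type": "eq", "fun": lambda px: px @ i - k * gamma}]
    x0 = np.full(k + 1, 1.0 / (k + 1))
    res = minimize(neg_HY, x0, bounds=[(0.0, 1.0)] * (k + 1),
                   constraints=cons, method="SLSQP")
    H_Y = -res.fun
    H_noise = entropy(noise, base=2)
    return (H_Y - H_noise) / k

if __name__ == "__main__":
    for rp in (0.0, 0.1, 0.2):
        print(rp, [round(I_tilde(g, 1, rp), 3) for g in (0.1, 0.3, 0.5)])
```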
Fig. 7: Ĩ_rp(γ, 1/k) for different values of γ and k ∈ {1, 2, 3} and rp ∈ {0, 0.1, 0.2}.
Fig. 8: Capacity of the timing channel in the shared FCFS scheduler of Figure 4 for different values of rp.
Lemma 8. For all 0 ≤ rp ≤ 1, integers 1 ≤ k1, k2, k3 ≤ τmax, values 0 ≤ γ1, γ2, γ3 ≤ 1, and α ∈ [0, 1] such that α(γ1, 1/k1) + (1 − α)(γ3, 1/k3) = (γ2, 1/k2), we have:
α Ĩ_rp(γ1, 1/k1) + (1 − α) Ĩ_rp(γ3, 1/k3) ≤ Ĩ_rp(γ2, 1/k2).    (22)
See Appendix F for the proof of Lemma 8.
Using the mentioned properties, the capacity of the timing channel in the shared FCFS scheduler of Figure 4 for
different values of rp can be calculated. Figure 8 shows the value of the capacity with respect to rp .
The following proof of Theorem 2 is based on converse and achievability arguments.
A. Converse
Suppose the rate of Up is rp. Similar to the proof of Lemma 5, for the timing channel in a shared FCFS scheduler with service rate 1 depicted in Figure 4, any code consisting of a codebook of M equiprobable binary codewords, where messages take on average n time slots to be received, satisfies
(1/n) log M ≤ (1/n) I_rp(W; Ŵ | τ^m) + εn
            ≤ (1/n) I_rp(W; Y^m | τ^m) + εn,    (a)
where εn = (1/n)(H(Pe) + Pe log2(M − 1)) and (a) follows from the data processing inequality in the model in Figure 6. Therefore,
(1/n) log M ≤ (1/n) Σ_{j=1}^{m} I_rp(W; Yj | Y^{j−1}, τ^m) + εn
            ≤ (1/n) Σ_{j=1}^{m} I_rp(W, Y^{j−1}; Yj | τ^m) + εn
            ≤ (1/n) Σ_{j=1}^{m} I_rp(Xj; Yj | τ^m) + εn    (a)
            ≤ (1/n) Σ_{j=1}^{m} max_{P_{Xj|τ^m}} I_rp(Xj; Yj | τ^m) + εn,
where (a) again follows from the data processing inequality in the model in Figure 6. In the maximization above, the mean of the distribution P_{Xj|τ^m} is E[Xj|τ^m] and, in order to find the maximum information rate, the set of means {E[X1|τ^m], E[X2|τ^m], ..., E[Xm|τ^m]} should satisfy the constraint re + rp + rd = 1, that is, (1/n) Σ_{j=1}^{m} E[Xj|τ^m] + rp + rd = 1. Let ξj = E[Xj|τ^m] / τj. Using (18), we have:
max_{P_{Xj|τ^m}, E[Xj|τ^m]=τj ξj} I_rp(Xj; Yj | τ^m) = τj Ĩ_rp(ξj, 1/τj).    (23)
This implies that
(1/n) log M ≤ (1/n) Σ_{j=1}^{m} τj Ĩ_rp(ξj, 1/τj) + εn.
Next, using Lemma 8 and, similar to the proof of Lemma 5, by breaking the summation, using Jensen's inequality and the equation n = Σ_{τ=1}^{τmax} τ · mτ, we will have:
(1/n) log M ≤ Σ_{τ=1}^{τmax} πτ Ĩ_rp(µτ, 1/τ) + εn,    (24)
where µτ is the average of the ξj's which have τj = τ and πτ = τ · mτ / Σ_{τ=1}^{τmax} τ · mτ. In this expression, πτ could be interpreted as the portion of time that user Ud sends packets with inter-arrival time equal to τ.
Also, using the same approach as the one used in the proof of Lemma 5, the constraint of the problem could be written as follows:
Σ_{τ=1}^{τmax} πτ (µτ + 1/τ) = 1 − rp.    (25)
Suppose the set of pairs S = {(µτ, 1/τ), τ ∈ [τmax]} with weights {πτ, τ ∈ [τmax]} gives Σ_{τ=1}^{τmax} πτ Ĩ_rp(µτ, 1/τ), has its operating point on the line re + rd = 1 − rp, and satisfies 1/(τ* + 1) ≤ Σ_{τ=1}^{τmax} πτ (1/τ) ≤ 1/τ* for some 1 ≤ τ* ≤ τmax − 1. We have
βτ (µ_{τ*+1}, 1/(τ*+1)) + (1 − βτ)(µτ, 1/τ) = (µ_{τ*}^τ, 1/τ*),    τ ≤ τ* − 1,
βτ (µ_{τ*}, 1/τ*) + (1 − βτ)(µτ, 1/τ) = (µ_{τ*+1}^τ, 1/(τ*+1)),    τ ≥ τ* + 2,
for some βτ ∈ [0, 1]. Clearly, the set
{(µ_{τ*}^1, 1/τ*), ..., (µ_{τ*}^{τ*−1}, 1/τ*), (µ_{τ*}, 1/τ*), (µ_{τ*+1}, 1/(τ*+1)), (µ_{τ*+1}^{τ*+2}, 1/(τ*+1)), ..., (µ_{τ*+1}^{τmax}, 1/(τ*+1))}
can give the same operating point as S does. Therefore, using the technique presented in (14) and two uses of Lemma 8, we have
(1/n) log M ≤ α Ĩ_rp(γ1, 1/τ*) + (1 − α) Ĩ_rp(γ2, 1/(τ*+1)) + εn
            ≤ sup_{α,γ1,γ2,τ} α Ĩ_rp(γ1, 1/τ) + (1 − α) Ĩ_rp(γ2, 1/(τ+1)) + εn.
Letting n → ∞, εn goes to zero and we get the desired result.
B. Achievability
Achieving the proposed upper bound could be done by a method exactly like the one used in Subsection IV-B. We need to solve the optimization problem (17) to find the parameters α, γ1, γ2 and τ. Because of Lemma 8, in order to find the optimal τ, we can start with τ = 1, optimize the other parameters, and calculate α Ĩ_rp(γ1, 1/τ) + (1 − α) Ĩ_rp(γ2, 1/(τ+1)); in each step we increase the value of τ by 1, stopping whenever the obtained value has decreased compared to the previous step. For instance, for rp ≤ 0.1, the optimal τ is 1, and hence the procedure stops after checking two steps. After calculating the parameters γ1, γ2 and τ, the optimal input distribution could be obtained using optimization problem (20).
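The stopping rule described above can be phrased as a short loop. The sketch below takes the per-τ optimized objective as a callable and demonstrates the search on a stand-in toy function; in practice the callable would be the constrained optimum of α Ĩ_rp(γ1, 1/τ) + (1 − α) Ĩ_rp(γ2, 1/(τ+1)), for example computed with the numerical Ĩ_rp solver sketched after Lemma 7. All names here are my own.

```python
from typing import Callable

def best_tau(value_for_tau: Callable[[int], float], tau_max: int) -> int:
    """Start at tau = 1 and increase tau by one while the optimized objective
    keeps improving; stop at the first decrease (justified by Lemma 8)."""
    best_value, best = value_for_tau(1), 1
    for tau in range(2, tau_max):
        v = value_for_tau(tau)
        if v < best_value:
            break
        best_value, best = v, tau
    return best

if __name__ == "__main__":
    # Stand-in objective for illustration only.
    toy = lambda tau: 0.8 - 0.05 * (tau - 1) ** 2
    print("optimal tau on the toy objective:", best_tau(toy, tau_max=10))
```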
VI. CONCLUSION
We studied covert queueing channels (CQCs) that can occur through delays experienced by users who are sharing a scheduler. As the scheduling policy plays a crucial role in the possible information transmission rate in this type of channel, we focused on the deterministic and work-conserving FCFS scheduling policy. An information-theoretic framework was proposed to derive the capacity of the CQC under this scheduling policy. First, we studied
a system comprising a transmitter and a receiver that share a deterministic and work-conserving FCFS scheduler.
We obtained the maximum information transmission rate in this CQC and showed that an information leakage rate
as high as 0.8114 bits per time slot is possible. We also considered the effect of the presence of other users on
the information transmission rate of this channel. We extended the model to include a third user who also uses
the shared resource and studied the effect of the presence of this user on the information transmission rate. The
solution approach presented in this extension may be applied to calculate the capacity of the covert queueing channel
among any number of users. The achievable information transmission rates obtained from this study demonstrate
the possibility of significant information leakage and great privacy threats brought by CQCs in FCFS schedulers.
Based on this result, special attention must be paid to CQCs in high security systems. Finding the capacity of CQCs
under other scheduling policies, especially non-deterministic policies, remains to be done in the research area of
covert communications and is considered as the main direction for future work. Furthermore, a comprehensive study
is required to design suitable scheduling policies that can simultaneously guarantee adequate levels of both security
and throughput.
APPENDIX A
PROOF OF STABILITY
For the system model with M users and service rate ρ, the arrival process has a Poisson binomial distribution with probability mass function
P(K = k) = Σ_{A ∈ Fk} Π_{i∈A} ri Π_{j∈A^c} (1 − rj),
with support k ∈ {0, 1, ..., M}, where Fk is the set of all subsets of k integers that can be selected from {1, 2, 3, ..., M}. The mean of this distribution is µ = Σ_{i=1}^{M} ri.
We denote the arrival, service and queue length at time k by a(k), s(k) and q(k), respectively, and we have
q(k + 1) = (q(k) + a(k) − s(k))^+.
Using the Foster-Lyapunov theorem with Lyapunov function V(q(k)) = (q(k))^2 and calculating the drift, we have
E[q^2(k + 1) − q^2(k) | q(k) = q] ≤ E[(q + a − s)^2 − q^2] = E[2q(a − s)] + E[(a − s)^2],
where E[(a − s)^2] is a constant and we denote it by K. Therefore, for some ε > 0, if µ < ρ, for a large enough value of q, we have
E[q^2(k + 1) − q^2(k) | q(k) = q] ≤ 2q(µ − ρ) + K ≤ −ε,
which implies the stability.
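The drift argument can also be illustrated empirically. The following sketch (NumPy assumed) simulates the recursion q(k+1) = (q(k) + a(k) − s(k))^+ with M independent Bernoulli arrival streams, so that a(k) is Poisson-binomial; modelling the service s(k) as Bernoulli(ρ) is my own simplifying choice. When Σ ri < ρ the sample paths stay bounded, as the bound above predicts.

```python
import numpy as np

def simulate_queue(rates, rho, horizon, seed=0):
    """q(k+1) = (q(k) + a(k) - s(k))^+ with a(k) the number of arrivals from
    M independent Bernoulli(r_i) streams and s(k) ~ Bernoulli(rho)."""
    rng = np.random.default_rng(seed)
    rates = np.asarray(rates, dtype=float)
    q, trace = 0, []
    for _ in range(horizon):
        a = int((rng.random(rates.size) < rates).sum())
        s = int(rng.random() < rho)
        q = max(q + a - s, 0)
        trace.append(q)
    return np.array(trace)

if __name__ == "__main__":
    stable   = simulate_queue([0.3, 0.3, 0.2], rho=1.0, horizon=20000)   # sum r_i = 0.8 < rho
    unstable = simulate_queue([0.5, 0.4, 0.3], rho=1.0, horizon=20000)   # sum r_i = 1.2 > rho
    print("stable case   : mean queue =", stable.mean(), " max =", stable.max())
    print("unstable case : mean queue =", unstable.mean(), " max =", unstable.max())
```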
APPENDIX B
PROOF OF LEMMA 1
In order to find the optimum distribution PX, the optimization problem could be written as follows:
max_{PX ≥ 0} log2 e · Σ_{i=0}^{k} PX(i) ln(1/PX(i))
s.t. Σ_{i=0}^{k} i PX(i) = E[X] = kγ,  Σ_{i=0}^{k} PX(i) = 1,    (26)
which could be solved using the Lagrange multipliers method. The Lagrangian function would be as follows:
Σ_{i=0}^{k} PX(i) ln(1/PX(i)) + λ (Σ_{i=0}^{k} i PX(i) − kγ) + ρ (Σ_{i=0}^{k} PX(i) − 1).
Setting the derivative with respect to PX(i) equal to zero, we get ln(1/PX(i)) − 1 + iλ + ρ = 0, which implies that
PX(i) = e^{ρ−1} · e^{iλ}.    (27)
Also, from the second constraint we have
Σ_{i=0}^{k} e^{ρ−1} · e^{iλ} = 1  ⇒  e^{ρ−1} = 1 / Σ_{i=0}^{k} e^{iλ}.    (28)
Combining (27) and (28), we have:
PX(i) = e^{iλ} / Σ_{j=0}^{k} e^{jλ},
which is the tilted distribution of Uk with parameter λ.
In order to calculate λ, from the first constraint:
kγ = Σ_{i=0}^{k} i PX(i) = Σ_{i=0}^{k} i e^{iλ} / Σ_{i=0}^{k} e^{iλ} = E[Uk e^{Uk λ}] / E[e^{Uk λ}] = (d/dλ) ln E[e^{Uk λ}] = ψ′_{Uk}(λ).
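Numerically, λ is obtained by inverting ψ′_{Uk}, which is monotone, so a one-dimensional root-find suffices. A sketch assuming SciPy (names and the bracketing interval are mine):

```python
import numpy as np
from scipy.optimize import brentq

def tilted_uniform(k, target_mean):
    """P_X(i) = e^{i*lam} / sum_j e^{j*lam} on {0,...,k} with E[X] = target_mean,
    i.e. lam = (psi'_{U_k})^{-1}(target_mean); requires 0 < target_mean < k."""
    i = np.arange(k + 1)
    def mean_at(lam):
        w = np.exp(lam * i - np.max(lam * i))   # numerically stabilized weights
        w /= w.sum()
        return float(w @ i)
    lam = brentq(lambda l: mean_at(l) - target_mean, -60.0, 60.0)
    w = np.exp(lam * i - np.max(lam * i))
    return lam, w / w.sum()

if __name__ == "__main__":
    lam, p = tilted_uniform(k=3, target_mean=1.2)   # k*gamma = 1.2, i.e. gamma = 0.4
    print("lambda =", round(lam, 4), " P_X =", np.round(p, 4),
          " mean =", round(float(p @ np.arange(4)), 4))
```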
APPENDIX C
PROOF OF LEMMA 3
First we note that
H̃(γ, 1/k) = (1/k) [log2(k + 1) − ψ*_{Uk}(kγ) log2 e]
           = (1/k) [log2(k + 1) − sup_λ {kγλ − ln( Σ_{i=0}^{k} e^{iλ} / (k + 1) )} log2 e]
           = −sup_λ {γλ log2 e − (1/k) log2( Σ_{i=0}^{k} e^{iλ} )}.
Therefore, if we can show that for any given λ the function h(γ, 1/k) = γλ log2 e − (1/k) log2(Σ_{i=0}^{k} e^{iλ}) is convex, then, since the supremum of convex functions is convex, we can conclude the desired concavity of the function H̃(·, ·).
Noting that (1/k) log2(Σ_{i=0}^{k} e^{iλ}) = (1/k) log2( (1 − e^{(k+1)λ}) / (1 − e^λ) ), to prove the convexity of h(·, ·) it suffices to prove that the function g(x) = x log( (1 − e^{(1/x + 1)λ}) / (1 − e^λ) ), 0 < x ≤ 1, is concave. This is true from the concavity of the function ĝ(x) = log( (1 − e^{(x+1)λ}) / (1 − e^λ) ), and the fact that for any function f, x f(1/x) is concave if f(x) is concave. The concavity of the function ĝ(·) can be easily seen by taking its second derivative.
APPENDIX D
PROOF OF LEMMA 4
For a given γ and support set {0, 1, ..., k}, suppose the distribution P*X is defined over {0, 1, ..., k}, has mean E_{P*}[X] = kγ, and satisfies
H̃(γ, 1/k) = sup_{X ∈ {0,1,...,k}, E[X]=kγ} (1/k) H(X) = (1/k) H(P*X).
Define the distribution QX as follows:
QX(i) = P*X(k − i),    0 ≤ i ≤ k.
Therefore the entropy of QX will be the same as the entropy of P*X and we have
EQ[X] = Σ_{i=0}^{k} i QX(i) = Σ_{i=0}^{k} i P*X(k − i) = −Σ_{i=0}^{k} (−k + (k − i)) P*X(k − i)
      = k − Σ_{i=0}^{k} (k − i) P*X(k − i) = k − kγ = k(1 − γ).
Hence, we have
H̃(γ, 1/k) = (1/k) H(P*X) = (1/k) H(QX) ≤ sup_{X ∈ {0,1,...,k}, E[X]=k(1−γ)} (1/k) H(X) = H̃(1 − γ, 1/k).    (29)
Similarly, suppose that for the distribution Q*X, defined over {0, 1, ..., k} and with mean E_{Q*}[X] = k(1 − γ),
sup_{X ∈ {0,1,...,k}, E[X]=k(1−γ)} (1/k) H(X) = (1/k) H(Q*X).
Define the distribution PX as follows:
PX(i) = Q*X(k − i),    0 ≤ i ≤ k.
Therefore the entropy of PX will be the same as the entropy of Q*X and we have
EP[X] = Σ_{i=0}^{k} i PX(i) = Σ_{i=0}^{k} i Q*X(k − i) = −Σ_{i=0}^{k} (−k + (k − i)) Q*X(k − i)
      = k − Σ_{i=0}^{k} (k − i) Q*X(k − i) = k − k(1 − γ) = kγ.
Hence, we have
H̃(1 − γ, 1/k) = (1/k) H(Q*X) = (1/k) H(PX) ≤ sup_{X ∈ {0,1,...,k}, E[X]=kγ} (1/k) H(X) = H̃(γ, 1/k).    (30)
Comparing (29) and (30) gives the desired result.
APPENDIX E
PROOF OF LEMMA 7
Ĩ_rp(γ, 1/k) = (1/k) sup_{X ∈ {0,1,...,k}, E[X]=kγ} I_rp(X; Y)
             = (1/k) sup [H_rp(Y) − H_rp(Y | X)]
             = (1/k) sup [H_rp(Y) − Σ_{x=0}^{k} PX(x) H_rp(Y | X = x)]
             = (1/k) sup [H_rp(Y) − Σ_{x=0}^{k} PX(x) H(Bin(k, rp))]    (a)
             = (1/k) sup [H_rp(Y) − H(Bin(k, rp))]
             = (1/k) sup H_rp(Y) − (1/k) H(Bin(k, rp))
             = Ȟ_rp(γ, 1/k) − (1/k) H(Bin(k, rp)),
where all suprema are over X ∈ {0, 1, ..., k} with E[X] = kγ and (a) follows from (15).
APPENDIX F
PROOF OF LEMMA 8
We first prove that the function Ĩ(·, ·) is concave in its first argument. Let P*_{X1} and P*_{X3} be the optimum distributions resulting from optimization problem (20) for parameters (γ1, 1/k) and (γ3, 1/k), respectively. Therefore, for any 0 ≤ α ≤ 1,
α Ĩ_rp(γ1, 1/k) + (1 − α) Ĩ_rp(γ3, 1/k)
  = (1/k) α H(P*_{X1} ∗ Bin(k, rp)) + (1/k)(1 − α) H(P*_{X3} ∗ Bin(k, rp)) − (1/k) H(Bin(k, rp))    (a)
  ≤ (1/k) H( α (P*_{X1} ∗ Bin(k, rp)) + (1 − α)(P*_{X3} ∗ Bin(k, rp)) ) − (1/k) H(Bin(k, rp))    (b)
  ≤ (1/k) sup_{X ∈ {0,1,...,k}, E[X]=k(αγ1+(1−α)γ3)} H(PX ∗ Bin(k, rp)) − (1/k) H(Bin(k, rp))
  = Ĩ_rp(αγ1 + (1 − α)γ3, 1/k),
where (a) follows from Lemma 7 and (b) follows from the concavity of the entropy function.
Because of the complexity and lack of symmetry or structure in the function Ĩ, there is no straightforward analytic method for proving its concavity. But we notice that it suffices to show that for all 2 ≤ k ≤ τmax − 1 and α such that
α / (k − 1) + (1 − α) / (k + 1) = 1/k,    (31)
we have
α Ĩ_rp(γ1, 1/(k−1)) + (1 − α) Ĩ_rp(γ3, 1/(k+1)) ≤ Ĩ_rp(αγ1 + (1 − α)γ3, 1/k).    (32)
From (31) we have α = (k − 1)/(2k); hence, using Lemma 7, (32) reduces to
2 Ȟ_rp(γ2, 1/k) − Ȟ_rp(γ1, 1/(k−1)) − Ȟ_rp(γ3, 1/(k+1)) + f(k, rp) ≥ 0,    (33)
where γ2 = αγ1 + (1 − α)γ3 and f(k, rp) = H(Bin(k − 1, rp)) + H(Bin(k + 1, rp)) − 2 H(Bin(k, rp)). Noting that the left-hand side is a Lipschitz continuous function of γ1, γ3 (away from zero) and rp, and the fact that k takes finitely many values, the validation of inequality (33) can be done numerically.
REFERENCES
[1] A. Ghassami, X. Gong, and N. Kiyavash, “Capacity limit of queueing timing channel in shared FCFS schedulers,” in 2015 IEEE
International Symposium on Information Theory (ISIT), pp. 789–793, IEEE, 2015.
[2] V. Gligor, “Covert channel analysis of trusted systems. a guide to understanding,” tech. rep., DTIC, 1993.
[3] S. Kadloor, X. Gong, N. Kiyavash, T. Tezcan, and N. Borisov, “Low-cost side channel remote traffic analysis attack in packet networks,”
in Communications, 2010 IEEE International Conference on, IEEE, 2010.
[4] D. Wakabayashi, “Breach complicates Sony’s network ambitions,” The Wall Street Journal, April 28, 2011.
[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “Openflow: enabling innovation
in campus networks,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.
[6] M. Liberatore and B. N. Levine, “Inferring the source of encrypted http connections,” in Proceedings of the 13th ACM conference on
Computer and communications security, pp. 255–263, ACM, 2006.
[7] D. X. Song, D. Wagner, and X. Tian, “Timing analysis of keystrokes and timing attacks on SSH.,” in USENIX Security Symposium,
vol. 2001, 2001.
[8] C. V. Wright, L. Ballard, S. E. Coull, F. Monrose, and G. M. Masson, “Uncovering spoken phrases in encrypted voice over ip conversations,”
ACM Transactions on Information and System Security (TISSEC), vol. 13, no. 4, p. 35, 2010.
[9] X. Gong, N. Borisov, N. Kiyavash, and N. Schear, “Website detection using remote traffic analysis,” in International Symposium on Privacy
Enhancing Technologies Symposium, pp. 58–78, Springer, 2012.
[10] X. Gong, N. Kiyavash, and P. Venkitasubramaniam, “Information theoretic analysis of side channel information leakage in FCFS schedulers,”
in Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on, pp. 1255–1259, IEEE, 2011.
[11] S. Kadloor, N. Kiyavash, and P. Venkitasubramaniam, “Mitigating timing based information leakage in shared schedulers,” in INFOCOM,
2012 Proceedings IEEE, pp. 1044–1052, IEEE, 2012.
[12] X. Gong and N. Kiyavash, “Quantifying the information leakage in timing side channels in deterministic work-conserving schedulers,”
arXiv preprint arXiv:1403.1276, 2014.
[13] V. Anantharam and S. Verdu, “Bits through queues,” IEEE Transactions on Information Theory, vol. 42, no. 1, pp. 4–18, 1996.
[14] S. Cabuk, C. E. Brodley, and C. Shields, “Ip covert timing channels: design and detection,” in Proceedings of the 11th ACM Conference
on Computer and Communications Security, pp. 178–187, ACM, 2004.
[15] S. J. Murdoch and S. Lewis, “Embedding covert channels into TCP/IP,” in International Workshop on Information Hiding, 2005.
[16] D. Llamas, A. Miller, and C. Allison, “An evaluation framework for the analysis of covert channels in the TCP/IP protocol suite.,” in
ECIW, pp. 205–214, 2005.
[17] M. H. Kang, I. S. Moskowitz, and D. C. Lee, “A network pump,” IEEE Transactions on Software Engineering, vol. 22, no. 5, pp. 329–338,
1996.
[18] S. K. Gorantla, S. Kadloor, N. Kiyavash, T. P. Coleman, I. S. Moskowitz, and M. H. Kang, “Characterizing the efficacy of the nrl network
pump in mitigating covert timing channels,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 64–75, 2012.
[19] R. Soltani, D. Goeckel, D. Towsley, and A. Houmansadr, “Covert communications on poisson packet channels,” in Communication, Control,
and Computing (Allerton), 2015 53rd Annual Allerton Conference on, pp. 1046–1052, IEEE, 2015.
[20] R. Soltani, D. Goeckel, D. Towsley, and A. Houmansadr, “Covert communications on renewal packet channels,” in Communication, Control,
and Computing (Allerton), 2016 54th Annual Allerton Conference on, pp. 548–555, IEEE, 2016.
[21] P. Mukherjee and S. Ulukus, “Covert bits through queues,” in Communications and Network Security (CNS), 2016 IEEE Conference on,
pp. 626–630, IEEE, 2016.
[22] V. Berk, A. Giani, G. Cybenko, and N. Hanover, “Detection of covert channel encoding in network packet delays,” Rapport Technique
TR536, de l'Université de Dartmouth, p. 19, 2005.
[23] S. Gianvecchio and H. Wang, “Detecting covert timing channels: an entropy-based approach,” in Proceedings of the 14th ACM conference
on Computer and communications security, pp. 307–316, ACM, 2007.
[24] R. Tahir, M. T. Khan, X. Gong, A. Ahmed, A. Ghassami, H. Kazmi, M. Caesar, F. Zaffar, and N. Kiyavash, “Sneak-peek: High speed
covert channels in data center networks,” in IEEE International Conference on Computer Communications (INFOCOM), IEEE, 2016.
[25] A. Ghassami, A. Yekkehkhany, N. Kiyavash, and Y. Lu, “A covert queueing channel in round robin schedulers,” arXiv preprint
arXiv:1701.08883, 2017.
[26] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2012.
[27] I. Csiszar and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press, 2011.
[28] Y. Polyanskiy and Y. Wu, “Lecture notes on information theory, MIT (6.441), UIUC (ECE 563),” 2014.
| 7 |
Abstract Learning via Demodulation in a Deep
Neural Network
Andrew J.R. Simpson
Centre for vision, speech and signal processing (CVSSP), University of Surrey, Guildford, Surrey, UK
[email protected]
Abstract—Inspired by the brain, deep neural networks (DNN)
are thought to learn abstract representations through their
hierarchical architecture. However, at present, how this happens
is not well understood. Here, we demonstrate that DNN learn
abstract representations by a process of demodulation. We
introduce a biased sigmoid activation function and use it to show
that DNN learn and perform better when optimized for
demodulation. Our findings constitute the first unambiguous
evidence that DNN perform abstract learning in practical use.
Our findings may also explain abstract learning in the human
brain.
Index terms—Deep learning, abstract representation, neural
networks, demodulation.
I. INTRODUCTION
Deep neural networks (DNN) are state of the art for many
machine learning problems [1], [2], [3], [4], [5], [6]. The
architecture of deep neural networks is inspired by the
hierarchical structure of the brain [7]. Deep neural networks
feature a hierarchical, layer-wise arrangement of nonlinear
activation functions (neurons) fed by inputs scaled by linear
weights (synapses). Since the brain is adept at abstraction, it is
anticipated that deep neural architecture might somehow
capture abstract representations. However, it is not presently
known how (or even if) abstract learning occurs in a DNN.
One possible way to engineer the process of abstraction is
known as demodulation. Consider a 1 kHz sinusoidal carrier
signal that is multiplied with a 10 Hz sinusoidal modulation
signal. This shapes the envelope of the carrier signal into a
representation of the modulation signal, and this allows the
slower modulation signal to pass through a band-pass circuit
that would not otherwise support the low frequency
information. In this case, the abstract information (i.e., about
the 1 kHz carrier) is that the carrier is modulated at 10 Hz.
Recovery of the modulation signal is achieved via a nonlinear
demodulation operation.
In this paper, we take a sampling theory perspective [8] and
we interpret the nonlinear activation function of the DNN as a
demodulation device within the context of the well-known
MNIST [4] hand-written character classification problem
(example image in Fig. 1).
We consider the archetypal sigmoid activation function
[y = 1/(1 + exp(−x))]. In order for the sigmoid to act as a
demodulator, the input data must be asymmetrical. Taking the
example of the sinusoidal carrier signal multiplied with the
sinusoidal modulator, if the sinusoids are centred within the
sigmoid there will be exactly zero demodulation because the
demodulation energy exactly cancels out. However, if the
sigmoid is biased, then the energy does not cancel out. Thus,
only asymmetrical signals may be demodulated by this
method.
The sigmoidal activation function comprises a zero-centred
sigmoid which is mapped to the range [0, 1]. As a result,
demodulation is not symmetrical. We added a bias term (β) to
the sigmoid to illustrate this:
y = 1/(1 + exp(−(x + β)))    (1)
We set x to be a discrete sampled signal comprising the pair
of carrier/modulator multiplied sinusoids as described above,
where the modulator frequency was ω. Computing y for
different values of β, we used the Fourier transform (of y) to
compute a modulation energy power spectrum, H, with N bins.
From this power spectrum we computed a ratio of the power
of the demodulated signal to the average power in the FFT as
follows:
g = H(ω) / ((1/N) ∑_{n=1}^{N} H(n))    (2)
Where g is the overall measure of demodulation utility and
where ω is the modulation frequency in question. This ratio is
unscaled. However, real networks have noise floors, so we
include a subtractive logarithmic term to represent the loss in
overall power capable of relating the overall modulation
power to the ‘overall noise floor’ (meaning that an infinitely
large ratio at an infinitely small energy is not useful). We
extend Eq. 2 as follows:
g = H(ω) / ((1/N) ∑_{n=1}^{N} H(n)) − α (log10((1/N) ∑_{n=1}^{N} H(n)))^κ    (3)
where α represents an arbitrary scalar on the subtractive term
and κ raises the order of the term (i.e., to account for more
complex network-related effects). We set α to 1.8×10^−13 and κ
was set to 12 in order to provide some punitive drop-off
within the range in question.
This allows us to visualize the point at which the gains in
ratio are overcome by the losses in overall gain. Fig. 2a plots
the resulting functions of β for Eq. 2 (grey line) and Eq. 3
(superposed blue line). At zero bias (β = 0) there is no
demodulation power. At negative bias (β < 0) there is very
little demodulation power but at positive bias the
demodulation ratio is asymptotic by around β = 6 and this is
then pulled back by the overall gain term (Eq. 3). Hence, the
optimal value of β is around 6 (depending on the strength of
the subtractive overall gain term).
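To make the demodulation measure above concrete, the following is a minimal NumPy sketch of the idealized experiment: a 1 kHz carrier multiplied by a 10 Hz modulator is passed through the biased sigmoid and the spectral ratio of Eq. 2 is evaluated. The sample rate, the exact form of the subtractive term in Eq. 3, and the choice of measuring the envelope at twice the modulator frequency are assumptions made here for illustration, not the author's original code.

```python
import numpy as np

def biased_sigmoid(x, beta):
    """Eq. 1: a sigmoid whose input is shifted by a fixed bias beta."""
    return 1.0 / (1.0 + np.exp(-(x + beta)))

fs = 16000                                   # sample rate (assumed)
t = np.arange(fs) / fs                       # one second of signal, so 1 Hz bin spacing
carrier_times_mod = np.sin(2 * np.pi * 1000 * t) * np.sin(2 * np.pi * 10 * t)

def demodulation_utility(beta, alpha=1.8e-13, kappa=12):
    """Rough analogue of Eqs. 2-3: power at the envelope frequency over the mean
    spectral power, minus an assumed punitive term in the overall power."""
    y = biased_sigmoid(carrier_times_mod, beta)
    H = np.abs(np.fft.rfft(y - y.mean())) ** 2   # modulation power spectrum (DC removed)
    mean_power = H.mean()
    # The envelope of carrier*modulator repeats at twice the modulator frequency,
    # so the recovered modulation energy sits in the 20 Hz bin here.
    ratio = H[20] / mean_power                   # Eq. 2
    return ratio - alpha * np.log10(mean_power) ** kappa   # Eq. 3 (assumed form)

for beta in (0.0, 3.0, 6.0, 9.0):
    print(beta, demodulation_utility(beta))
```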
II. METHOD
We chose the well-known computer vision problem of hand-written character classification using the MNIST dataset [3], [1], [4]. For the input layer we unpacked the images of 28x28 pixels into vectors of length 784. An example digit is given in Fig. 1. Pixel intensities were normalized to zero mean.
Fig. 1. Example MNIST image. We took the 28x28 pixel images and unpacked them into a vector of length 784 to form the input at the first layer of the DNN.
We applied the same activation function (Eq. 1) to a typical
feed-forward DNN, which we used to solve the well-known
problem of hand-written character recognition on the MNIST
dataset [4]. We evaluated both learning and overall
performance of the network as a function of β. This allowed
us to assess whether demodulation was a factor in DNN
learning, and hence allowed us to assess the question of
abstract learning in a quantitative way.
Using the biased sigmoid activation function, we built a
fully connected network of size 784x784x10 units, with a 10-unit softmax output layer, corresponding to the 10-way digit
classification problem. Separate instances of the model were
independently trained at various values of β, using stochastic
gradient descent (SGD). However, each individual instance of
the model was trained from the exact same random seed. This
means that the weights of the network were always initialized
to the same (random) values and that the same (randomly
chosen) paths were taken during SGD. Hence, changes in
performance can be attributed to the different values of β.
We trained the various instances of the model on the 60,000
training examples from the MNIST dataset [4]. Each iteration
of SGD consisted of a complete sweep of the entire training
data set. The resulting models were tested on the 10,000
separate test examples at various (full-sweep) iteration points.
Models were trained without dropout.
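The training protocol just described can be mirrored in a few lines of PyTorch. This is a hedged sketch of the setup (784-784-10 fully connected network, biased sigmoid hidden layer, softmax readout, full-sweep SGD), not the author's implementation; the data loader and learning rate are assumptions.

```python
import torch
import torch.nn as nn

class BiasedSigmoid(nn.Module):
    """The activation of Eq. 1: a sigmoid with a fixed additive input bias beta."""
    def __init__(self, beta):
        super().__init__()
        self.beta = beta

    def forward(self, x):
        return torch.sigmoid(x + self.beta)

def build_model(beta, seed=0):
    # Re-seeding before construction mimics the paper's use of identical initial
    # weights for every value of beta.
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 784),
        BiasedSigmoid(beta),
        nn.Linear(784, 10),     # CrossEntropyLoss below applies the softmax
    )

def train_one_sweep(model, loader, lr=0.1):
    """One iteration in the paper's sense: a full sweep of the training set with SGD."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for images, labels in loader:      # e.g. a torchvision MNIST DataLoader (assumed)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```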
III. RESULTS
Fig. 2b plots classification error as a function of β, applied
to classifying the separate test data (10,000 examples), after
one iteration of training (SGD). The shape of the function
matches well the demodulation (Eq. 3) function of Fig. 2a and
the two functions are highly correlated (r = -0.74, P < 0.001,
Spearman). The classification error function shows the same
asymmetrical shape and minima around β = 6. This
demonstrates that demodulation, and hence abstraction, plays
a role in the learning and performance of the model.
Fig. 2c plots classification error as a function of iteration number for β = 0 (representing a traditional sigmoidal activation function) and β = 6. The biased model (β = 6) learns more rapidly and performs around 80% better. Both functions appear to have converged, so it would not appear that the difference in results can be attributed to different learning rates. Indeed, it appears that the difference in performance is sufficiently fundamental that extra training would not be able to account for the difference. These results confirm that demodulation, and hence abstraction, plays a key role in learning and performance.
Fig. 2. Optimal demodulation equals optimal learning. a Idealized demodulation performance; utility function (g of Eq. 2 and Eq. 3) of sinusoidal modulator
demodulated from sinusoidal carrier as a function of bias (β). The faint grey line represents Eq. 2 and the superposed blue line represents Eq. 3. Note units are
arbitrary and the y-axis is inverted so as to allow comparison with panel b. b Classification error in the test set after the first iteration (an iteration indicates a
full sweep of SGD) for different degrees of bias (β). The two functions are strongly correlated (r = -0.74, P < 0.001, Spearman), indicating that the DNN has
performed abstract learning through demodulation. c Classification error in the test as a function of training iterations for bias of β = 0 and β = 6.
IV. DISCUSSION
We have set out some simple theory describing how DNN
might perform abstract learning through demodulation. We
have also introduced a biased sigmoidal activation function
and we have characterized the demodulation of this function
in both idealized and practical demonstrations. The optimum
bias of the practical DNN example matched the theoretical
optimum bias for demodulation and provided greatly
improved performance in the model. These results provide the
first unambiguous evidence that DNN learn abstract
representations through demodulation.
In this context, the alternate nonlinear activation functions
(abs, tanh, rectified linear unit - ReLU, etc [6]) may be
understood as demodulators with different properties. It has
been observed that models employing the ReLU function outperform sigmoid activated models [6] in ways that are similar
to the results in Fig. 2c. However, it has also been observed
that such models require far greater degrees of regularization
(such as dropout [6]), which the biased sigmoid activation
function does not appear to require. Assuming that
demodulation performance is the reason for the improved
performance in both cases, the difference in terms of need for
regularization is likely due to the high order distortion
generated by the abrupt nonlinearity (of the ReLU) and its
increased potential for aliasing [8]. Thus, the low-order
(smooth) nonlinearity of the biased sigmoid may prove to
learn as fast as the ReLU but require less regularization.
Future work might characterize the various activation
functions in terms of their properties and trade-off of both
demodulation performance versus distortion/aliasing, overfitting and regularization requirements.
We have noted that the unbiased sigmoid is, in the idealized
case, incapable of performing abstraction by demodulation.
Thus, it might be asked: how do these deep networks succeed
at all with the unbiased sigmoid? The likely answer to this
question is that real data are almost never symmetrical, and
hence some small degree of bias would be sufficient to
provide abstract learning (e.g., at the level shown in this
paper). Indeed, the data employed here is decidedly
asymmetrical (even after zero-mean normalization). Thus, it
may be generally concluded that traditional sigmoid-activated
networks are only capable of abstract learning when the data
is asymmetrical. It is also evident that this class of
demodulator is inherently sensitive to the polarity of the data,
and it may be that some data is more informative in one
direction than the other.
More generally, given that the DNN takes its inspiration
from the brain, it is interesting to note that the bias scheme
described here is not necessary in order for the equivalent
brain network to perform abstract learning because the
sigmoid activated neurons of the brain are not signed or zero-mean operated [7]. Hence, it would appear that the brain is
ideally suited to perform abstract learning via demodulation.
V. CONCLUSION
In this paper we have provided evidence that deep neural
networks perform abstract learning through a process of
demodulation that is sensitive to asymmetry in the data. We
have introduced a biased sigmoid activation function that is
capable of improved learning and performance. We have
shown that the optimum bias point in a practical model
matches well that of the idealized demodulation example and
that the optimally biased model is fundamentally superior to
the traditional sigmoid activated model. These results have
broad implications for how deep neural networks are
interpreted, designed and understood. Furthermore, our
findings may provide insight into the exceptional abstract
learning capabilities of the human brain.
ACKNOWLEDGMENT
AJRS was supported by grant EP/L027119/1 from the UK
Engineering and Physical Sciences Research Council
(EPSRC).
REFERENCES
[1] Hinton G. E., Osindero S., Teh Y. (2006). “A fast learning algorithm
for deep belief nets”, Neural Computation 18: 1527–1554.
[2] Bengio Y. (2009) Learning deep architectures for AI, Foundations and
Trends in Machine Learning 2:1–127.
[3] LeCun Y., Bottou L., Bengio Y., Haffner P. (1998) Gradient-based
learning applied to document recognition. Proc. IEEE 86: 2278–2324.
[4] Ciresan, C. D., Meier U., Gambardella L. M., Schmidhuber J. (2010)
“Deep big simple neural nets excel on handwritten digit recognition”,
Neural Computation 22: 3207–3220.
[5] Hinton G. E., Srivastava N., Krizhevsky A., Sutskever I.,
Salakhutdinov R. (2012) “Improving neural networks by preventing
co-adaptation of feature detectors,” The Computing Research
Repository (CoRR), abs/1207.0580.
[6] Dahl G. E., Sainath T. N., Hinton G. E. (2013) “Improving deep neural
networks for LVCSR using rectified linear units and dropout”, in Proc.
IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP),
pp. 8609-8613.
[7] Dayan P, Abbott LF (2001) Theoretical Neuroscience: Computational
and Mathematical Modeling of Neural Systems (MIT Press,
Cambridge, Massachusetts, 2001).
[8] Simpson AJR (2015) Over-Sampling in a Deep Neural Network,
arxiv.org abs/1502.03648
arXiv:1410.7051v6 [] 21 Jul 2017
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
CHARLES GARNET COX
Abstract. For a fixed n ∈ {2, 3, . . .}, the Houghton group Hn consists of
bijections of Xn = {1, . . . , n} × N that are ‘eventually translations’ of each
copy of N. The Houghton groups have been shown to have solvable conjugacy
problem. In general, solvable conjugacy problem does not imply that all finite
extensions and finite index subgroups have solvable conjugacy problem. Our
main theorem is that a stronger result holds: for any n ∈ {2, 3, . . .} and any
group G commensurable to Hn , G has solvable conjugacy problem.
1. Introduction
Given a presentation ⟨S | R⟩ = G, a ‘word’ in G is an ordered f-tuple a1 . . . af
with f ∈ N and each ai ∈ S ∪ S −1 . Dehn’s problems and their generalisations
(known as decision problems) ask seemingly straightforward questions about finite
presentations. The problems that we shall consider include:
• WP(G), the word problem for G: show there exists an algorithm which given
any two words a, b ∈ G, decides whether a =G b or a ≠G b i.e. whether
these words represent the same element of the group. This is equivalent
to asking whether or not ab−1 =G 1. There exist finitely presented groups
where this problem is undecidable (see [Nov58] or [Boo59]).
• CP(G), the conjugacy problem for G: show there exists an algorithm which
given any two words a, b ∈ G, decides whether or not there exists an x ∈ G
such that x−1 ax = b. Since {1G } is always a conjugacy class of G, if CP(G)
is solvable then so is WP(G). There exist groups where WP(G) is solvable
but CP(G) is not (e.g. see [Mil71]) and so CP(G) is strictly weaker than
WP(G).
• TCPφ (G), the φ-twisted conjugacy problem for G: for a fixed φ ∈ Aut(G),
show there exists an algorithm which given any two words a, b ∈ G, decides
whether or not there exists an x ∈ G such that (x−1 )φax = b (meaning
that a is φ-twisted conjugate to b). Note that TCPId (G) is CP(G).
• TCP(G), the (uniform) twisted conjugacy problem for G: show there exists
an algorithm which given any φ ∈ Aut(G) and any two words a, b ∈ G,
decides whether or not they are φ-twisted conjugate. There exist groups G
such that CP(G) is solvable but TCP(G) is not (e.g. see [BMV10]).
Should any of these problems be solved for one finite presentation, then they
may be solved for any other finite presentation of that group. We therefore say
that such problems are solvable if an algorithm exists for one such presentation.
Date: July 4, 2017.
2010 Mathematics Subject Classification. Primary: 20F10, 20B99.
Key words and phrases. uniform twisted conjugacy problem, Houghton group, permutation
group, orbit decidability, conjugacy problem for commensurable groups, computable centralisers.
Many decision problems may also be considered for any group that is recursively
presented.
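For a finite group these decision problems are of course decidable by exhaustive search, which makes a useful sanity check for the definitions above. The sketch below is our illustration (not part of the paper): it brute-forces the φ-twisted conjugacy equation (x⁻¹)φ a x = b in a small symmetric group, with permutations written as tuples and composed as right actions.

```python
from itertools import permutations

def compose(p, q):
    """Right-action composition: (p*q)(i) = q(p(i)), i.e. apply p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def twisted_conjugate(a, b, phi, n):
    """Search Sym(n) for x with phi(x^-1) * a * x = b; return a witness or None."""
    for x in permutations(range(n)):
        if compose(compose(phi(inverse(x)), a), x) == b:
            return x
    return None

n = 4
c = (1, 0, 2, 3)                                  # phi = conjugation by the transposition (0 1)
phi = lambda g: compose(compose(inverse(c), g), c)
a, b = (1, 2, 0, 3), (2, 0, 1, 3)                 # two 3-cycles on {0, 1, 2}
print(twisted_conjugate(a, b, phi, n))
```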
We say that G is a finite extension of H if H E G and H is finite index in G. If
CP(G) is solvable, then we do not have that finite index subgroups of G or finite
extensions of G have solvable conjugacy problem, even if these are of degree 2.
Explicit examples can be found for both cases (see [CM77] or [GK75]). Thus it is
natural to ask, if CP(G) is solvable, whether the conjugacy problem holds for finite
extensions and finite index subgroups of G. The groups we investigate in this paper
are Houghton groups (denoted Hn with n ∈ N).
Theorem 1. Let n ∈ {2, 3, . . .}. Then TCP(Hn ) is solvable.
We say that two groups A and B are commensurable if there exists NA ≅ NB
where NA is normal and finite index in A and NB is normal and finite index in B.
Theorem 2. Let n ∈ {2, 3, . . .}. Then, for any group G commensurable to Hn ,
CP(G) is solvable.
As a consequence of this work we also obtain.
Theorem 3. There exists an algorithm which takes as input any n ∈ {2, 3, . . .} and
any a ∈ Hn ⋊ Sn , and which decides whether or not CHn (a) is finitely generated.
If CHn (a) is finitely generated, then the algorithm also outputs a finite generating
set for CHn (a).
We structure the paper as follows. In Section 2 we introduce the Houghton
groups, make some simple observations for them, and reduce TCP(Hn ) to a problem similar to CP(Hn ⋊ Sn ), the difference being that, given a, b ∈ Hn ⋊ Sn , we
are searching for a conjugator x ∈ Hn . This occurs since, for all n ∈ {2, 3, . . .},
Aut(Hn) ≅ Hn ⋊ Sn. In Section 3 we describe the orbits of elements of Hn ⋊ Sn and
produce identities that a conjugator of elements in Hn ⋊ Sn must satisfy. These are
then used in Section 4 to reduce our problem of finding a conjugator in Hn to finding
a conjugator in the subgroup of Hn consisting of all finite permutations (which we
denote by FSym, see Notation 2.1 below). Constructing such an algorithm provides
us with Theorem 1. In Section 5 we use Theorem 1 and [BMV10, Thm. 3.1] to
prove our main result, Theorem 2. In Section 6 we discuss the structure of CHn (a)
where a ∈ Hn ⋊ Sn and prove Theorem 3.
Acknowledgements. I thank the authors of [ABM15] whose work is drawn upon
extensively. I especially thank the author Armando Martino, my supervisor, for
his encouragement and the many helpful discussions which have made this work
possible. I thank Peter Kropholler for his suggested extension which developed into
Theorem 2. Finally, I thank the referee for their time and many helpful comments.
2. Background
As with the authors of [ABM15], the author does not know of a class that contains
the Houghton groups and for which the conjugacy problem has been solved.
2.1. Houghton’s groups. Throughout we shall consider N := {1, 2, 3, . . .}. For
convenience, let Zn := {1, . . . , n}. For a fixed n ∈ N, let Xn := Zn × N. Arrange
these n copies of N as in Figure 1 below (so that the k th point from each copy of N
form the vertices of a regular n-gon). For any i ∈ Zn , we will refer to the set i × N
as a branch or ray and will let (i, m) denote the mth point on the ith branch.
Notation 2.1. For a non-empty set X, the set of all permutations of X form a
group which we denote Sym(X). Those permutations which have finite support (i.e.
move finitely many points) form a normal subgroup which we will denote FSym(X).
If there is no ambiguity for X, then we will write just Sym or FSym respectively.
Note that, if X is countably infinite, then FSym(X) is countably infinite but is
not finitely generated, and Sym(X) is uncountable and so uncountably generated.
Definition 2.2. Let n ∈ N. The nth Houghton group, denoted Hn , is a subgroup
of Sym(Xn ). An element g ∈ Sym(Xn ) is in Hn if and only if there exist constants
z1 (g), . . . , zn (g) ∈ N and (t1 (g), . . . , tn (g)) ∈ Zn such that, for all i ∈ Zn ,
(1)
(i, m)g = (i, m + ti (g)) for all m > zi (g).
For simplicity, the numbers z1 (g), . . . , zn (g) are assumed to be minimal i.e. for
any m′ < zi (g), (i, m′ )g 6= (i, m′ + ti (g)). The vector t(g) := (t1 (g), . . . , tn (g))
represents the ‘eventual translation length’ for each g in Hn since ti (g) specifies
how far g moves the points {(i, m) | m > zi (g)}. We shall say that these points are
those which are ‘far out’, since they are the points where g acts in the regular way
described in (1). As g induces a bijection from Xn to Xn , we have that
(2)    ∑_{i=1}^{n} ti(g) = 0.
Given g ∈ Hn, the numbers zi(g) may be arbitrarily large. Thus FSym(Xn) ≤ Hn.
Also, for any n ∈ {2, 3, . . .}, we have (as proved in [Wie77]) the short exact sequence
1 −→ FSym(Xn ) −→ Hn −→ Zn−1 −→ 1
where the homomorphism Hn → Zn−1 is given by g 7→ (t1 (g), . . . , tn−1 (g)).
These groups were introduced in [Hou78] for n ∈ {3, 4, . . .}. The generating
set that we will use when n ∈ {3, 4, . . .} is {gi | i = 2, . . . , n} where, for each
i ∈ {2, . . . , n} and (j, m) ∈ Xn ,
(j, m)gi =
    (1, m + 1)    if j = 1,
    (1, 1)        if j = i, m = 1,
    (i, m − 1)    if j = i, m > 1,
    (j, m)        otherwise.
Note, for each i, t1 (gi ) = 1 and tj (gi ) = −δi,j for j ∈ {2, . . . , n}. Figure 1 shows
a geometric visualisation of g2 , g4 ∈ H5 . Throughout we shall take the vertical
ray as the ‘first ray’ (the set of points {(1, m) | m ∈ N}) and order the other rays
clockwise.
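The piecewise formula above is easy to implement directly. The following Python sketch encodes points of Xn as (ray, position) pairs (an encoding chosen here for illustration) and realises the generator gi.

```python
def g(i):
    """The standard generator g_i of H_n, acting on points (j, m) of X_n = {1,...,n} x N."""
    def act(point):
        j, m = point
        if j == 1:
            return (1, m + 1)          # shift the first ray up by one
        if j == i and m == 1:
            return (1, 1)              # the bottom of ray i is sent to the bottom of ray 1
        if j == i:
            return (i, m - 1)          # the rest of ray i shifts down by one
        return (j, m)                  # all other rays are fixed
    return act

g2 = g(2)
print([g2((1, m)) for m in range(1, 4)])   # [(1, 2), (1, 3), (1, 4)]
print([g2((2, m)) for m in range(1, 4)])   # [(1, 1), (2, 1), (2, 2)]
print(g2((3, 5)))                          # (3, 5): an untouched ray
```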
We shall now see that, for any n ∈ {3, 4, . . .}, the set {gi | i = 2, . . . , n} generates Hn . First, any valid eventual translation lengths (those satisfying (2)) can
be obtained by these generators. Secondly, the commutator (which we define as
[g, h] := g −1 h−1 gh for every g, h in G) of any two distinct elements gi , gj ∈ Hn is a
2-cycle, and so conjugation of this 2-cycle by some combination of gk ’s will produce
a 2-cycle with support equal to any two points of Xn . This is enough to produce any
element that is ‘eventually a translation’ i.e. one that satisfies condition (1), and so
is enough to generate all of Hn . An explicit finite presentation for H3 can be found
in [Joh99], and this was generalised in [Lee12] by providing finite presentations for
Hn for all n > 3.
Figure 1. Part of the set X5 and a geometrical representation of the action of the standard generators g2, g4 ∈ H5.
We now describe H1 and H2 . If g ∈ H1 , then t1 (g) = 0 (since the eventual
translation lengths of g must sum to 0 by condition (2)) and so H1 = FSym(N). For
H2 we have ⟨g2⟩ ≅ Z. Using a conjugation argument similar to the one above, it can
be seen that a suitable generating set for H2 is {g2, ((1, 1) (2, 1))}. These definitions
of H1 and H2 agree with the result for Hn in [Bro87], that (for n > 3) each Hn is
FPn−1 but not FPn i.e. H1 is not finitely generated and H2 is finitely generated
but not finitely presented. Since H1 = FSym(N) and Aut(FSym(N)) ≅ Sym(N)
(see, for example, [DM96] or [Sco87]) we will work with Hn where n ∈ {2, 3, . . .}.
2.2. A reformulation of TCP(Hn ). We require knowledge of Aut(Hn ). We
noted above that Aut(H1) ≅ Sym(N), and so will work with n ∈ {2, 3, . . .}.
Notation. Let g ∈ Sym(X). Then (h)φg := g −1 hg for all h ∈ G.
From [BCMR14], we have for all n ∈ {2, 3, . . .} that NSym(Xn)(Hn) ≅ Aut(Hn) via the map ρ ↦ φρ and that NSym(Xn)(Hn) ≅ Hn ⋊ Sn. We will make an abuse of notation and consider Hn ⋊ Sn as acting on Xn via the natural isomorphism NSym(Xn)(Hn) ≤ Sym(Xn). Here Inn(Hn) ≅ Hn because Hn is centreless, and
Sn acts on Xn by isometrically permuting the rays, where g ∈ Hn ⋊ Sn is an
isometric permutation of the rays if and only if there exists a σ ∈ Sn such that
(i, m)g = (iσ, m) for all m ∈ N and all i ∈ Zn .
Notation. For any given g ∈ Hn ⋊ Sn , let Ψ : Hn ⋊ Sn → Sn , g 7→ σg where
σg denotes the isometric permutation of the rays induced by g. Furthermore, let
ωg := gσg−1 . Thus, for any g ∈ Hn ⋊ Sn , we have g = ωg σg and will consider
ωg ∈ Hn and σg ∈ Sn . We shall therefore consider any element g of Hn ⋊ Sn as
a permutation of Xn which is eventually a translation (denoted ωg ) followed by an
isometric permutation of the rays σg .
Definition 2.3. Let G 6 Hn ⋊ Sn and g, h ∈ Hn ⋊ Sn . Then we shall say g and h
are G-conjugated if there is an x ∈ G such that x−1 gx = h.
We now relate twisted conjugacy in Hn to conjugacy in Hn ⋊Sn . Let c ∈ Hn ⋊Sn .
Then the equation for φc -twisted conjugacy becomes:
(x−1 )φc gx = h ⇔ c−1 x−1 cgx = h ⇔ x−1 (ωc σc g)x = ωc σc h.
Thus, for any n ∈ {2, 3, . . .}, two elements g, h ∈ Hn are φc -twisted conjugate if and
only if ωc σc g and ωc σc h ∈ Hn ⋊ Sn are Hn -conjugated. Note that, if g, h ∈ Hn ⋊ Sn
are Hn -conjugated, then σg = σh . Thus for the remainder of this note a and b will
refer to the elements in Hn ⋊ Sn that we wish to decide are Hn -conjugated, where
σa = σb . In order to solve TCP(Hn ), we will therefore produce an algorithm to
search for an x ∈ Hn which conjugates a to b.
3. Computations in Hn ⋊ Sn
Our first lemma provides the generating set that we will use for Hn ⋊ Sn .
Lemma 3.1. Let n ∈ {2, 3, . . .}. Then Hn ⋊ Sn can be generated by 3 elements.
Proof. If n > 3, two elements can be used to generate all of the isometric permutations of the rays. Our third generator will be g2 , the standard generator for Hn .
Conjugating g2 by the appropriate isometric permutations of the rays produces the
set {gi | i = 2, . . . , n}, which can then be used to generate all permutations in Hn .
For H2 ⋊ S2 we note that H2 is 2-generated and that S2 is cyclic.
3.1. The orbits of elements in Hn ⋊ Sn . Our main aim for this section is to
describe the orbits of any element g ∈ Hn ⋊ Sn ‘far out’. For elements of Hn , any
element eventually acted like a translation. In a similar way, any element of Hn ⋊Sn
eventually moves points in a uniform manner. More specifically, g ∈ Sym(Xn ) is in
Hn ⋊Sn if and only if there exist constants z1 (g), . . . , zn (g) ∈ N, (t1 (g), . . . , tn (g)) ∈
Zn , and a permutation σ ∈ Sn such that for all i ∈ Zn
(3)
(i, m)g = (iσ, m + ti (g)) for all m > zi (g).
If g ∈ Hn ⋊ Sn , then g = ωg σg . Therefore for any g ∈ Hn ⋊ Sn , σg (the isometric
permutation of the rays induced by g) will induce the permutation denoted σ in
(3), we have that ti (ωg ) = ti (g) for all i ∈ Zn , and z1 (ωg ), . . . , zn (ωg ) are suitable
values for the constants z1 (g), . . . , zn (g).
Definition 3.2. Let g ∈ Hn ⋊ Sn and i ∈ Zn . Then a class of σg , denoted [i]g ,
is the support of the disjoint cycle of σg which contains i i.e. [i]g = {iσgd | d ∈ Z}.
Additionally, we define the size of a class [i]g to be the length of the cycle of σg
containing i, i.e. the cardinality of the set [i]g , and denote this by |[i]g |.
We shall choose z1 (g), . . . , zn (g) ∈ N to be the smallest numbers such that
(4)    (i, m)g^d = (iσg^d, m + ∑_{s=0}^{d−1} t_{iσg^s}(g)) for all m > zi(g) and all 1 ≤ d ≤ |[i]g|.
Note that, for any g ∈ Hn ⋊ Sn and all i ∈ Zn , we have zi (g) > zi (ωg ). We now
justify the introduction of condition (4). Consider a g ∈ Hn ⋊ Sn , i ∈ Zn , and
m ∈ N such that
zi(ωg) ≤ m < zi(g) and m + ti(ωg) < ziσg(ωg).
This would mean that (i, m)g = (iσg, m + ti(ωg)), but it may also be that
(i, m)g^2 = (iσg, m + ti(g))g ≠ (iσg^2, m + ti(g) + tiσg(g)).
Thus the condition (4) above means that, for any g ∈ Hn ⋊ Sn , the numbers
z1 (g), . . . , zn (g) capture the ‘eventual’ way that g permutes the points of a ray.
Let us fix some g ∈ Hn ⋊ Sn . Consider if σg acts trivially on a particular branch
i′ . This will mean that this branch has orbits like those occurring for elements of
Hn . If ti′ (g) = 0, then g leaves all but a finite number of points on this branch
fixed. If ti′ (g) 6= 0, then for any given m′ > zi′ (g),
{(i′ , m′ )g d | d ∈ Z} ⊃ {(i′ , m) | m > m′ and m ≡ m′ mod |ti′ (g)|}.
Notice that, for any g ∈ Hn ⋊ Sn , the σg -classes form a partition of Zn , relating
to the branches of Xn . We now consider the case |[k]g | > 1. We first note, for any
i ∈ [k]g and m1 > zi (g), that
(5)    (i, m1)g^{|[k]g|} ∈ {(i, m) | m ∈ N}
and that for any 1 ≤ p < |[k]g|, i ∈ [k]g, and m1 > zi(g),
(i, m1)g^p ∉ {(i, m) | m ∈ N}.
In fact we may compute (i, m)g^{|[k]g|} for any i ∈ [k]g and m > zi(g):
(6)    (i, m)g^{|[k]g|} = (i, m + ∑_{s=0}^{|[k]g|−1} t_{iσg^s}(g)).
In light of this, we introduce the following.
Definition 3.3. For any g ∈ Hn ⋊ Sn , class [i]g , and k ∈ [i]g = {i1 , i2 , . . . , iq }, let
t[k](g) := ∑_{s=1}^{q} t_{is}(g).
Hence, if t[k] (g) = 0, then for all i′ ∈ [k]g and m′ > zi′ (g), the point (i′ , m′ ) will
lie on an orbit of length |[k]g|. If t[k](g) ≠ 0, then (6) states that for all i′ ∈ [k]g
and all m′ > zi′(g), that (i′, m′)g^{|[k]g|} ≠ (i′, m′). Hence when t[k](g) = 0, almost all
points of the kth ray will lie on an orbit of g of length |[k]g|, and when t[k](g) ≠ 0
almost all points of the kth ray will lie in an infinite orbit of g. Since different
arguments will be required for finite orbits and infinite orbits, we introduce the
following notation.
Notation. Let g ∈ Hn ⋊ Sn. Then I(g) := {i ∈ Zn | t[i](g) ≠ 0} consists of
all i ∈ Zn corresponding to rays of Xn which have infinite intersection with some
infinite orbit of g. Let I c (g) := Zn \ I(g), the complement of I(g).
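Both t_[k](g) and the set I(g) are mechanically computable from the eventual translation vector t(g) and the ray permutation σg. A small sketch follows (σg is given as a Python dict, an illustrative encoding, and rays are numbered from 1).

```python
def sigma_classes(sigma, n):
    """Cycles of the ray permutation sigma on {1, ..., n} (sigma given as a dict)."""
    seen, classes = set(), []
    for i in range(1, n + 1):
        if i in seen:
            continue
        cycle, j = [], i
        while j not in seen:
            seen.add(j)
            cycle.append(j)
            j = sigma.get(j, j)
        classes.append(cycle)
    return classes

def infinite_orbit_rays(t, sigma, n):
    """I(g): rays whose class has nonzero total translation t_[k](g) (Definition 3.3)."""
    I = set()
    for cls in sigma_classes(sigma, n):
        if sum(t[i] for i in cls) != 0:
            I.update(cls)
    return I

# Example with n = 4: sigma swaps rays 3 and 4 and t(g) = (1, -1, 2, -2).
t = {1: 1, 2: -1, 3: 2, 4: -2}
sigma = {3: 4, 4: 3}
print(infinite_orbit_rays(t, sigma, 4))   # {1, 2}: the class {3, 4} has t_[3](g) = 2 + (-2) = 0
```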
Definition 3.4. Two sets are almost equal if their symmetric difference is finite.
For any g ∈ Hn ⋊ Sn and any infinite orbit Ω of g, our aim is now to describe
a set almost equal to Ω, so to have a suitable description of the infinite orbits of
elements of Hn ⋊ Sn . We work with t[k] (g) > 0, since if t[k] (g) is negative, we will
be able to apply our arguments to g −1 . For any i1 ∈ [k]g and m1 > zi1 (g), we shall
compute the orbit of (i1 , m1 ) under g. First,
{(i1 , m1 )g d|[k]g | | d ∈ N} = {(i1 , m) | m > m1 , m ≡ m1 mod |t[k] (g)|}.
Similarly, {(i1 , m1 )g d|[k]g |+1 | d ∈ N} is equal to
(7)
{(i1 σg , m) | m > m1 + ti1 (g), m ≡ m1 + ti1 (g) mod |t[k] (g)|}.
For every 2 ≤ s ≤ |[k]g|, setting is := i1σg^{s−1} and ms := m1 + ∑_{d=1}^{s−1} t_{id}(g), we have,
for any 0 ≤ r < |[k]g|, that
{(i1, m1)g^{d|[k]g|+r} | d ∈ N} = {(ir+1, m) | m > mr+1, m ≡ mr+1 mod |t[k](g)|}
and hence {(i1, m1)g^d | d ∈ N} is equal to
(8)    ⊔_{q=1}^{|[k]g|} {(iq, m) | m > mq, m ≡ mq mod |t[k](g)|}.
It is therefore natural to introduce the following.
Definition 3.5. Let g ∈ Hn ⋊ Sn , i1 ∈ I(g), and m1 ∈ N. Then
Xi1 ,m1 (g) := {(i1 , m) ∈ Xn | m ≡ m1 mod |t[i1 ] (g)|}.
Furthermore, for every 2 ≤ s < |[k]g| let is := i1σg^{s−1}, ms := m1 + ∑_{d=1}^{s−1} t_{id}(g), and
X[i1],m1(g) := ⊔_{q=1}^{|[i1]g|} Xiq,mq(g).
Note that suppressing m2 , . . . , m|[k]g | from the notation is not ambiguous since these
are uniquely determined from i1 , m1 , and g.
Let us summarise what we have shown.
Lemma 3.6. Let g ∈ Hn ⋊ Sn and i ∈ I(g). Then, for any infinite orbit Ω of
g intersecting {(i, m) ∈ Xn | m > zi (g)}, there exists i′ ∈ I(g) and constants
d1 , e1 ∈ N such that the set
X[i],d1 (g) ⊔ X[i′ ],e1 (g)
is almost equal to Ω.
3.2. Identities arising from the equation for conjugacy. In Section 2.2 we
showed that TCP(Hn ) was equivalent to producing an algorithm which, given any
a, b ∈ Hn ⋊ Sn , decides whether or not a and b are Hn -conjugated. In this section
we shall show some necessary conditions which any x ∈ Hn must satisfy in order to
conjugate a to b. First, some simple computations to rewrite ti (σa xσa−1 ) are needed.
Note that, since σa = σb is a necessary condition for a and b to be Hn -conjugated
(and σa , σb are computable), the following will not be ambiguous.
Notation. We will write [i] to denote [i]a (which is also a class of b).
Lemma 3.7. For any isometric permutation of the rays σ and any y ∈ Hn ,
ti (σ −1 yσ) = tiσ−1 (y) for all i ∈ Zn .
Proof. Let σ = σ1 σ2 . . . σs be written in disjoint cycle notation, and let σ1 =
(i1 i2 . . . iq ). Since σ ∈ NSym(Xn ) (Hn ), we have that σ −1 yσ ∈ Hn . We compute
ti1 (σ −1 yσ) by considering the image of (i1 , m) where m > max{zi (y) | i ∈ Zn }:
(i1 , m)σ −1 yσ = (iq , m)yσ = (iq , m + tiq (y))σ = (i1 , m + tiq (y)).
Similarly, for 1 < s 6 q,
(is , m)σ −1 yσ = (is−1 , m)yσ = (is−1 , m + tis−1 (y))σ = (is , m + tis−1 (y)).
Thus ti (σ −1 yσ) = tiσ−1 (y) for any i ∈ Zn .
Lemma 3.8. Let a, b ∈ Hn ⋊ Sn . Then a necessary condition for a and b to be
Hn -conjugated is that, for all [i] classes, t[i] (a) = t[i] (b). Also if x ∈ Hn conjugates
a to b, then ti (ωb ) = −ti (x) + ti (ωa ) + tiσa (x) for all i ∈ Zn .
Proof. From our hypotheses, we have that
ωb σa = x−1 ωa σa x
⇒ ωb = x−1 ωa (σa xσa−1 )
and so, since ωa , ωb , x, and σa xσa−1 can all be considered as elements of Hn , we
have for all i ∈ Zn that
ti (ωb ) = ti (x−1 ) + ti (ωa ) + ti (σa xσa−1 )
which from the previous lemma can be rewritten as
ti (ωb ) = −ti (x) + ti (ωa ) + tiσa (x).
Now, for any branch i′, we sum over all k in [i′]:
∑_{k∈[i′]} tk(ωb) = −∑_{k∈[i′]} tk(x) + ∑_{k∈[i′]} tk(ωa) + ∑_{k∈[i′]} tk(x)
⇒ ∑_{k∈[i′]} tk(ωb) = ∑_{k∈[i′]} tk(ωa)
⇒ t[i′](b) = t[i′](a)
as required.
Thus, if a, b ∈ Hn ⋊Sn are Hn -conjugated, then I(a) = I(b). For any g ∈ Hn ⋊Sn
the set I(g) is computable, and so the first step of our algorithm can be to check
that I(a) = I(b). Hence the following is not ambiguous.
Notation. We shall use I to denote I(a) and I(b).
Lemma 3.9. Let a, b ∈ Hn ⋊ Sn be conjugate by x ∈ Hn . Then, for each class [k],
choosing a value for ti′ (x) for some i′ ∈ [k] determines values for {ti (x) | i ∈ [k]}.
Moreover, let i1 ∈ [k] and, for 2 6 s 6 |[k]|, let is := i1 σas−1 . Then the following
formula determines tis (x) for all s ∈ Z|[k]|
tis(x) = ti1(x) + ∑_{d=1}^{s−1} (t_{id}(ωb) − t_{id}(ωa)).
Proof. If |[k]| = 1, then there is nothing to prove. Lemma 3.8 states that
ti (ωb ) = −ti (x) + ti (ωa ) + tiσa (x)
for all i ∈ Zn , and so we are free to let i := is ∈ [k], where s ∈ Z|[k]| , to obtain
tis (ωb ) = −tis (x) + tis (ωa ) + tis σa (x)
⇒ tis σa (x) = tis (x) + tis (ωb ) − tis (ωa ).
Setting s = 1 provides a formula for ti2(x). If 2 ≤ s < |[k]|, then
tisσa(x) = tis(x) + tis(ωb) − tis(ωa)
         = tis−1(x) + tis−1(ωb) − tis−1(ωa) + tis(ωb) − tis(ωa)
         = . . .
         = ti1(x) + ∑_{d=1}^{s} (t_{id}(ωb) − t_{id}(ωa)).
Thus, for all s ∈ Z|[k]|, we have a formula for tis(x) which depends on the computable values {ti(a), ti(b) | i ∈ [k]} and the value of ti1(x).
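The recursion in Lemma 3.9 can be run mechanically once the translation vectors of ωa and ωb and a value for ti1(x) are chosen. The data in the sketch below are hypothetical and only illustrate the bookkeeping.

```python
def conjugator_translations(cycle, t_a, t_b, t_first):
    """Lemma 3.9: given a sigma_a-class (i_1, ..., i_q) in cycle order and a chosen
    t_{i_1}(x) = t_first, return the forced values
    t_{i_s}(x) = t_first + sum_{d < s} (t_{i_d}(omega_b) - t_{i_d}(omega_a))."""
    values = {cycle[0]: t_first}
    running = 0
    for s in range(1, len(cycle)):
        running += t_b[cycle[s - 1]] - t_a[cycle[s - 1]]
        values[cycle[s]] = t_first + running
    return values

# Hypothetical data for a 3-element class (2, 3, 4) of sigma_a:
t_a = {2: 1, 3: 0, 4: -1}     # translations of omega_a on these rays
t_b = {2: 0, 3: 2, 4: -2}     # translations of omega_b
print(conjugator_translations((2, 3, 4), t_a, t_b, t_first=5))
# {2: 5, 3: 4, 4: 6}: t_3(x) = 5 + (0 - 1), t_4(x) = 5 + (0 - 1) + (2 - 0)
```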
4. An algorithm for finding a conjugator in Hn
In this section we construct an algorithm which, given a, b ∈ Hn ⋊ Sn with
σa = σb , either outputs an x ∈ Hn such that x−1 ax = b, or halts without outputting
such an x if one does not exist.
We will often need to make a choice of some i ∈ [k]g . For each class [k]g we shall
do this by introducing an ordering on [k]g . We shall choose this ordering to be the
one defined by i1 := inf[k]g (under the usual ordering of N) and is := i1 σgs−1 for all
2 6 s 6 |[k]g |. Hence [k]g = {i1 , . . . , i|[k]g | }.
4.1. An algorithm for finding a conjugator in FSym. Many of the arguments
of this section draw their ideas from [ABM15, Section 3]. By definition, any element
which conjugates a to b will send the support of a to the support of b. If we wish to
find a conjugator in FSym, this means that the symmetric difference of these sets
must be finite, whilst supp(a) ∩ supp(b) can be infinite.
Notation. For any g, h ∈ Sym, let N (g, h) := supp(g) ∩ supp(h), the intersection
of the supports of g and h.
Notation. Let g ∈ Hn ⋊ Sn . Then Z(g) := {(i, m) ∈ Xn | i ∈ Zn and m < zi (g)}
which is analogous to the Z region used by some authors when dealing with the
Houghton groups.
This yields a quadratic solution to WP(Hn ⋊Sn ). First note that, for any i ∈ Zn ,
(i, zi (g))g = (iσg , zi (g) + ti (g)). Thus if g acts trivially on {(i, zi (g)) | i ∈ Zn }, then
supp(g) ⊆ Z(g). Secondly, by using [ABM15, Lem 2.1], the size of Z(g) is bounded
by a linear function in terms of |g|S (the word length of g with respect to S). Finally,
for each point in Z(g)∪{(i, zi (g)) | i ∈ Zn }, compute the action of g (which requires
|g|S computations). This is sufficient to determine if g is the identity, since we have
that g is the identity if and only if g acts trivially on Z(g) ∪ {(i, zi (g)) | i ∈ Zn }.
Definition 4.1. Let g ∈ Hn ⋊Sn . Then, for a fixed r ∈ N, let gr denote the element
of Sym(Xn ) which consists of the product of all of the r-cycles of g. Furthermore,
let g∞ denote the element of Sym(Xn ) which consists of the product of all of the
infinite cycles of g.
Our strategy for deciding whether a, b ∈ Hn ⋊ Sn are FSym-conjugated is as
follows. We show, for any r ∈ N, that if ar and br are FSym-conjugated, then ar
and br are conjugate by an x ∈ FSym where supp(x) is computable. Similarly we
show that if a∞ and b∞ are FSym-conjugated, then there is a computable finite
set such that there is a conjugator of a∞ and b∞ with support contained within
this set. In order to decide if a, b ∈ Hn ⋊ Sn are FSym-conjugated we may then
decide if a∞ and b∞ are FSym-conjugated, produce such a conjugator y1 if one
exists, and then decide if y1−1 ay1 and b are FSym-conjugated by deciding whether
(y1−1 ay1 )r and br are FSym-conjugated for every r ∈ N (which is possible since br
is non-trivial for only finitely many r ∈ N, see Lemma 4.4).
Lemma 4.2. If g, h ∈ Sym are FSym-conjugated, then
| supp (g) \ N (g, h)| = | supp (h) \ N (g, h)| < ∞.
Proof. The proof [ABM15, Lem 3.2] applies to our more general hypotheses.
Lemma 4.3. Let g ∈ Hn ⋊ Sn and r ∈ N. Then gr ∈ Hn ⋊ Sn . Note this means
that gr restricts to a bijection on Z(gr ) and Xn \ Z(gr ).
Proof. It is clear that gr ∈ Sym(Xn ). From our description of orbits in Section 3.1,
for all (i, m) ∈ Xn \ Z(g) we have that (i, m) lies either on: an infinite orbit of g;
an orbit of g of length s 6= r; or on an orbit of g of length r. In the first two cases,
we have that (i, m)gr = (i, m) for all (i, m) ∈ Xn \ Z(g). In the final case, we have
that (i, m)gr = (i, m)g for all Xn \ Z(g). Hence gr is an element for which there
exists an isometric permutation of the rays σ, and constants t1 (gr ), . . . , tn (gr ) ∈ Z
and z1 (gr ), . . . , zn (gr ) ∈ N such that for all i ∈ Zn
(i, m)gr = (iσ, m + ti (gr )) for all m > zi (gr )
which was labelled (3) in Section 3.1. Thus, gr ∈ Hn ⋊ Sn .
Lemma 4.4. Let S be the generating set of Hn ⋊ Sn described in Lemma 3.1.
Then there is an algorithm which, given any (i, m) ∈ Xn and any word w over
S ±1 representing g ∈ Hn ⋊ Sn , computes the set Og (i, m) := {(i, m)g d | d ∈ Z}.
Moreover the set {gr 6= id | r ∈ N ∪ {∞}} is finite and computable.
Proof. From [ABM15, Lem 2.1], the numbers t1 (ωg ), . . . , tn (ωg ), z1 (ωg ), . . . , zn (ωg )
are computable. Hence the action of σg is computable, as are the classes [i]g and
the numbers zi (g) and t[i] (g).
We note that for all (i′ , m′ ) ∈ Xn , the number |Og (i′ , m′ )| is computable. First,
let (i′ , m′ ) ∈ Xn \ Z(g). Then |Og (i′ , m′ )| is infinite if t[i′ ] (g) 6= 0 and is equal to
|[i′ ]g | otherwise. If (i′ , m′ ) ∈ Z(g), either |Og (i′ , m′ )| is finite and so (i′ , m′ )g d =
(i′ , m′ ) for some 0 6 d 6 |Z(g)|, or (i′ , m′ )g d ∈ Xn \ Z(g) for some d ∈ N and
so (i′ , m′ ) lies in an infinite orbit of g. When Og (i′ , m′ ) is finite, it is clearly
computable.
Now we deal with the infinite orbits of g. For every i ∈ I(g) and d1 ∈ N,
the set X[i],d1 (g) - introduced in Definition 3.5 - is computable from the numbers
t1 (g), . . . , tn (g) and classes {[k]g | k ∈ Zn }. Given (j, m) ∈ X[i],d1 (g), the set
Fg (j, m) := {(j, m)g d | d ∈ Z} ∩ Z(g) is finite and computable. Hence we may
compute the i′ ∈ I(g) and e1 ∈ N such that Og (j, m) is almost equal to X[i],d1 (g) ⊔
X[i′ ],e1 (g). Thus
Og (j, m) = Fg (j, m) ⊔ (X[i],d1 (g) \ Z(g)) ⊔ (X[i′ ],e1 (g) \ Z(g))
and is therefore computable.
Since we can compute the size of orbits and the set Z(g), we may determine
those r ∈ N for which gr is non-trivial. First, record those r for which there is an
orbit of length r within Z(g). Secondly, for each k ∈ I c (g), compute |[k]| (since
then g will have infinitely many orbits of length |[k]|). Finally compute |I(g)| to
determine whether or not g∞ is trivial. Hence the set {gr 6= id | r ∈ N ∪ {∞}} is
finite and computable.
This lemma means that, given a word w in S ±1 representing g ∈ Hn ⋊Sn , we may
compute the equations (4) - which appeared after Definition 3.2 - that capture the
‘eventual’ way that g acts. Since Z(g) is finite, only finitely many more equations
are required to completely describe the action of g on Xn .
This lemma also means that, given a word w over S ±1 representing g ∈ Hn ⋊ Sn
and any r ∈ N ∪ {∞}, the action of the element gr can be determined.
Notation. Let g ∈ Sym(Xn ). Given any set Y ⊆ Xn for which Y g = Y (so that g
restricts to a bijection on Y ), let g Y denote the element of Sym(Xn ) which acts as
g on the set Y and leaves all points in Xn \ Y invariant i.e. for every (i, m) ∈ Xn
(i, m)(g Y) :=  (i, m)g  if (i, m) ∈ Y;  (i, m)  otherwise.
Notation. Given g ∈ Hn ⋊ Sn , let Z ∗ (g) := Z(g) ∩ supp(g).
Lemma 4.5. Fix an r ∈ {2, 3, . . . } and let g, h ∈ Hn ⋊ Sn . If gr and hr are
FSym-conjugated, then there exists an x ∈ FSym which conjugates gr to hr such
that supp(x) ⊆ Z ∗ (gr ) ∪ Z ∗ (hr ).
Proof. Let Z := Z(gr )∪Z(hr ) and Z c := Xn \Z. If gr and hr are FSym-conjugated,
then
(i, m)gr = (i, m)hr for all (i, m) ∈ Z c .
(9)
By Lemma 4.3 and (9), gr and hr restrict to a bijection of Z c . Thus, gr and hr
must also restrict to a bijection of Z. We may therefore apply Lemma 4.2 to gr
and hr to obtain
| supp (gr ) \ N (gr , hr )| = | supp (hr ) \ N (gr , hr )|
which, since Z^c ⊆ N(gr, hr), implies that
| supp (gr Z) \ N (gr Z, hr Z)| = | supp (hr Z) \ N (gr Z, hr Z)|
and since Z is finite,
| supp(gr Z)| = | supp(hr Z)| < ∞.
Since gr and hr consist of only r-cycles, gr Z and hr Z are elements of FSym with
the same cycle type, and so are FSym-conjugated. Thus there is a conjugator
x ∈ FSym with supp(x) ⊆ supp(gr Z) ∪ supp(hr Z) = Z ∗ (gr ) ∪ Z ∗ (hr ).
Lemma 4.6. If g∞ = h∞ and g, h ∈ Hn ⋊ Sn are FSym-conjugated, then there is
an x ∈ FSym which conjugates g to h with supp(x) ⊆ Y , a computable finite set
disjoint from supp(g∞ ).
Proof. By Lemma 4.4, {gr 6= id | r ∈ N} is finite and computable implying that
there is a computable k ∈ N such that gr = id for all r > k. Using Lemma 4.5,
compute x2 ∈ FSym that conjugates g2 to h2 with supp(x2 ) ⊆ Z ∗ (g2 ). Therefore
(x2^{−1}gx2)∞ = h∞ and (x2^{−1}gx2)2 = h2. Let g^{(2)} := x2^{−1}gx2. Now, for each i = 3, 4, . . . , k, use Lemma 4.5 to define xi to conjugate (g^{(i−1)})i to hi with supp(xi) ⊆ Z∗((g^{(i−1)})i) ∪ Z∗(hi) and g^{(i)} := xi^{−1}g^{(i−1)}xi. Then ∏_{r=2}^{k} xr conjugates g to h since (x2x3 . . . xk)^{−1}g(x2x3 . . . xk) = xk^{−1}(. . . (x3^{−1}(x2^{−1}gx2)x3) . . .)xk.
We now reduce finding an FSym-conjugator of a and b to the case of Lemma
4.6. In order to do this we require a well known lemma.
Lemma 4.7. Let x ∈ G conjugate a, b ∈ G. Then, x′ ∈ G also conjugates a to b
if, and only if, x′ = cx for some c ∈ CG (a).
Lemma 4.8. Let g, h ∈ Hn ⋊ Sn . If g∞ and h∞ are FSym-conjugated, then there
exists an x ∈ FSym which conjugates g∞ to h∞ with supp(x) ⊆ Z(g∞ ) ∪ Z(h∞ ).
Proof. We show that a bound for zi (x) is computable for each i ∈ Zn , inspired in
part by the proof of [ABM15, Prop 3.1]. Note that I(g∞ ) = I(g) = I(h) = I(h∞ ).
For all i ∈ I(g∞ ) and all m > zi (g∞ ), we have that (i, m)(g∞ )|[i]g | = (i, m + t[i] (g)).
Our claim is that there is an x ∈ FSym which conjugates g∞ to h∞ with
(i, m)x = (i, m) for all i ∈ Zn and for all m > max(zi (g∞ ), zi (h∞ )).
Let Z := Z(g∞ )∪Z(h∞ ). We first work with I c (g∞ ). Let j ∈ I c (g), (j, m) ∈ Xn \Z,
and assume that (j, m) ∈ supp(x). Then (j, m)h∞ = (j, m) and so (j, m)x−1 g∞ x =
(j, m). If (j, m)x−1 6∈ supp(g∞ ), then the 2-cycle γ := ((j, m) (j, m)x−1 ) is
in CFSym (g∞ ) and so x′ := γx also conjugates g∞ to h∞ by Lemma 4.7. Now
(j, m)γx = (j, m) and so (j, m) 6∈ supp(x′ ). If (j, m)x−1 = (j ′ , m′ ) ∈ supp(g∞ ),
then x sends (j ′ , m′ ) to (j, m). But, from the fact that (j, m)x−1 g∞ x = (j, m),
x must send (j ′ , m′ )g∞ to (j, m), and so x sends both (j ′ , m′ ) and (j ′ , m′ )g∞ to
(j, m), a contradiction. Hence {(j, m) | j ∈ I c (g)} ∩ supp(x) ⊆ Z(g∞ ) ∪ Z(h∞ ).
We now work with i ∈ I(g∞ ). Let t[i] (g) > 0, since replacing g and h with
−1
g and h−1 respectively will provide an argument for t[i] (g) < 0. Assume, for a
contradiction, that for some i ∈ I(g∞ ) we have that
zi (x) > max(zi (g∞ ), zi (h∞ )).
For all m ∈ {0, 1, . . .}, we have that zi (x) + m > zi (x) > max(zi (g∞ ), zi (h∞ )).
Therefore
(i, zi (x) + m)(x−1 g∞ x)−|[i]g | = (i, zi (x) + m)(h∞ )−|[i]g |
=(i, zi (x) + m − t[i] (h∞ )) = (i, zi (x) + m − t[i] (g∞ ))
and similarly
(i, zi (x) + m)(x−1 g∞ x)−|[i]g | = (i, zi (x) + m)x−1 (g∞ )−|[i]g | x
=(i, zi (x) + m)(g∞ )−|[i]g | x = (i, zi (x) + m − t[i] (g∞ ))x
implying that (i, zi (x) + m − t[i](g∞ ))x = (i, zi (x) + m − t[i] (g∞ )), which contradicts
the minimality of zi (x).
Proposition 4.9. Given g, h ∈ Hn ⋊ Sn , there is an algorithm which decides
whether or not g and h are FSym-conjugated, and produces a conjugator in FSym
if one exists.
Proof. By Lemma 4.8, enumerating all possible bijections of the set Z(g∞ )∪Z(h∞ )
is sufficient to decide whether or not g∞ and h∞ are FSym-conjugated. If g∞ and
h∞ are not FSym-conjugated, then g and h are not FSym-conjugated since such a
conjugator would also conjugate g∞ to h∞ . Let y1 ∈ FSym conjugate g∞ to h∞
and let g′ := y1^{−1}gy1. By construction, g′∞ = h∞.
Lemma 4.6 states that there is a finite computable set Y such that if g ′ and h
are FSym-conjugated, then there is a conjugator y2 with supp(y2 ) ⊆ Y . Therefore
enumerating all possible bijections of Y is sufficient to decide whether g ′ and h are
FSym-conjugated. A suitable conjugator of g and h is then y1 y2 . Note that g and
h are FSym-conjugated if and only if g ′ and h are FSym-conjugated.
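Both reductions in this proof end in “enumerate all bijections of a computable finite set”, which is a direct finite search. The sketch below is our illustration of that search on a finite window; permutations are stored as Python dicts that act as the identity off their keys, and the window is assumed to contain the supports of both elements.

```python
from itertools import permutations

def act(perm, p):
    """Apply a permutation stored as a dict (identity off its keys) to the point p."""
    return perm.get(p, p)

def find_window_conjugator(g, h, window):
    """Brute force over all bijections x of `window` for x^-1 g x = h (right actions).
    Assumes supp(g) and supp(h) are contained in `window`."""
    pts = sorted(window)
    for image in permutations(pts):
        x = dict(zip(pts, image))
        x_inv = {v: k for k, v in x.items()}
        if all(act(x, act(g, act(x_inv, p))) == act(h, p) for p in pts):
            return x
    return None

# Two 2-cycles on a toy window: g = (1 2), h = (3 4).
g = {1: 2, 2: 1}
h = {3: 4, 4: 3}
print(find_window_conjugator(g, h, {1, 2, 3, 4}))   # {1: 3, 2: 4, 3: 1, 4: 2}
```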
4.2. Reducing the problem to finding a conjugator in FSym. The previous
section provided us with an algorithm for deciding whether a, b ∈ Hn ⋊ Sn are
FSym-conjugated. Our problem is to decide if a and b are Hn -conjugated. We
start with Lemma 4.11, a simple observation that allows further application of our
algorithm determining conjugacy in FSym.
Definition 4.10. Given g, h ∈ Hn ⋊ Sn and a group G such that FSym(Xn) ≤ G ≤ Hn, a witness set of G-conjugation is a finite subset V(g, h, G) of Zn satisfying
that if g, h are G-conjugated, then there is an x ∈ G such that g = x−1 hx and
t(x) ∈ V (g, h, G).
Note that if FSym(Xn ) 6 U, V 6 Hn and {t(u) | u ∈ U } = {t(v) | v ∈ V }, then
U = V . For assume that u ∈ U , v ∈ Hn , and t(u) = t(v). Then t(u−1 v) = 0 which
implies that u−1 v = σ ∈ FSym(Xn ). Thus v = uσ ∈ U , and deciding whether or
not a given h ∈ Hn lies in such a U becomes membership of t(h) in {t(u) | u ∈ U }.
Lemma 4.11. Let FSym(Xn) ≤ G ≤ Hn. If, for any g, h ∈ Hn ⋊ Sn it is possible
to compute a witness set of G-conjugation V (g, h, G), then there is an algorithm
which, given any a, b ∈ Hn ⋊ Sn , decides whether they are G-conjugated.
Proof. We use, from the previous section, that there exists an algorithm for deciding if g, h ∈ Hn ⋊ Sn are FSym-conjugated. For any v ∈ Zn with ∑_{i=1}^{n} vi = 0 let
(10)    xv := ∏_{i=2}^{n} gi^{−vi}
where the elements {gi | i = 2, . . . , n} are our generators of Hn so that t(xv ) = v.
Thus, if g and h are G-conjugated, then there exists v ∈ V (g, h, G) and x ∈ FSym
such that xv x conjugates g to h. Now, to decide if g and h are G-conjugated, it
is sufficient to check whether any of the pairs {(xv^{−1}gxv, h) | v ∈ V(g, h, G)} are FSym-conjugated. This is because
x^{−1}(xv^{−1}gxv)x = h ⇔ (xv x)^{−1}g(xv x) = h
and so a pair is FSym-conjugated if and only if g and h are G-conjugated.
From this lemma, if, for any g, h ∈ Hn ⋊ Sn , the set V (g, h, Hn ) was computable
(from only g and h), then TCP(Hn ) would be solvable. We shall show that solving
our problem can be achieved by producing an algorithm to decide if elements are
G-conjugated where G is a particular subgroup of Hn .
Definition 4.12. Let g ∈ Hn ⋊ Sn , |σg | denote the order of σg ∈ Sn , and
Hn∗ (g) := {x ∈ Hn | ti (x) ≡ 0 mod ti (g |σg | ) for all i ∈ I(g)},
Hn (g) := {x ∈ Hn | ti (x) ≡ 0 mod |t[i] (g)| for all i ∈ I(g)}.
Recall that if a, b ∈ Hn ⋊ Sn are Hn -conjugated, then σa = σb . Also, from
Lemma 3.8, t[i] (a) = t[i] (b) for all i ∈ Zn . Hence, if a and b are Hn -conjugated,
then Hn∗ (a) = Hn∗ (b) and Hn (a) = Hn (b). Note that Hn∗ (g) 6 Hn (g) since, for any
i ∈ I(g), |t[i] (g)| = |ti (g |[i]g | )| and |ti (g |[i]g | )| divides |ti (g |σg | )|. Given x ∈ Hn (g)
and any infinite orbit Og of g, we have that (Og )x and Og are almost equal, and
hence elements of Hn∗ (g) also have this property.
Lemma 4.13. Assume there exists an algorithm which, given any g, h ∈ Hn ⋊ Sn ,
decides whether g and h are Hn∗ (g)-conjugated. Then there exists an algorithm
which, given any g, h ∈ Hn ⋊ Sn , decides whether g and h are Hn -conjugated.
Proof. Given g ∈ Hn ⋊ Sn , construct the set
Pg := {(v1, . . . , vn) ∈ Zn : 0 ≤ vi < ti(g^{|σg|}) for all i ∈ I(g) and vk = 0 otherwise}.
Note that, for any y ∈ Hn ⋊ Sn, Py will be finite. Define xv as in (10) above. Note that Hn = ⊔_{v∈Pg} xv Hn∗(g) and so any element of Hn is expressible as a product of xv for some v ∈ Pg and an element in Hn∗(g). Thus, deciding whether any of the finite number of pairs {(xv^{−1}gxv, h) | v ∈ Pg} are Hn∗(g)-conjugated is sufficient to decide whether g and h are Hn -conjugated.
Remark 4.14. From Lemma 4.11, if for any g, h ∈ Hn ⋊ Sn a set V (g, h, Hn∗ (g))
is computable, then it is possible to decide whether any g, h ∈ Hn ⋊ Sn are Hn∗ (g)conjugated. By Lemma 4.13, it will then be possible to decide whether any g, h ∈
Hn ⋊ Sn are Hn -conjugated. From Section 2.2, this will mean that TCP(Hn ) is
solvable.
In the following two sections we will show that for any g, h ∈ Hn ⋊ Sn a witness
set of Hn∗ (g)-conjugation is computable from only g and h. In Section 4.3 we show
that the following is computable.
Notation. Let n ∈ {2, 3, . . .} and g, h ∈ Hn ⋊ Sn . Let MI (g, h) denote a number
such that, if g and h are Hn∗ (g)-conjugated, then there is a conjugator x ∈ Hn∗ (g)
with
X
|ti (x)| < MI (g, h).
i∈I(g)
In Section 4.4 we show that for any g, h ∈ Hn ⋊Sn , numbers {yj (g, h) | j ∈ I c (g)}
are computable (using only g and h) such that if there exists an x ∈ Hn∗ (g) which
conjugates g to h, then there is an x′ ∈ Hn∗ (g) which conjugates g to h such that
ti (x′ ) = ti (x) for all i ∈ I(g) and tj (x′ ) = yj (g, h) for all j ∈ I c (g).
Remark 4.15. Note that if the numbers MI (g, h) and {yj (g, h) | j ∈ I c (g)} are
computable using only g and h then the set V (g, h, Hn∗ (g)) is computable from only g
and h. This is because defining V (g, h, Hn∗ (g)) to consist of all vectors v satisfying:
i) ∑_{i∈I} |vi| < MI(g, h);
ii) vi = yi for all i ∈ I^c(g);
iii) ∑_{i=1}^{n} vi = 0.
provides us with a finite set such that if g and h are Hn∗ (g)-conjugated, then they
are conjugate by an x ∈ Hn∗ (g) with t(x) ∈ V (g, h, Hn∗ (g)).
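Once MI(g, h) and the yj(g, h) are known, assembling V(g, h, Hn∗(g)) is a finite enumeration. The following sketch lists the candidate translation vectors satisfying (i)–(iii); the inputs are illustrative and the encoding (rays numbered from 1, y given as a dict) is our own.

```python
from itertools import product

def witness_vectors(n, I, y, M):
    """Candidates for t(x) per Remark 4.15: v in Z^n with sum_{i in I} |v_i| < M,
    v_i = y[i] off I, and sum(v) = 0."""
    I = sorted(I)
    rng = range(-(M - 1), M)            # each coordinate on I has |v_i| <= M - 1
    for choice in product(rng, repeat=len(I)):
        if sum(abs(c) for c in choice) >= M:
            continue
        v = dict(y)                     # coordinates off I are pinned to the y-values
        v.update(zip(I, choice))
        if sum(v[i] for i in range(1, n + 1)) == 0:
            yield tuple(v[i] for i in range(1, n + 1))

# Toy inputs: n = 3, I = {1, 2}, the third coordinate pinned to 0, bound M = 3.
print(list(witness_vectors(3, {1, 2}, {3: 0}, 3)))
# [(-1, 1, 0), (0, 0, 0), (1, -1, 0)]
```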
4.3. Showing that MI (g, h) is computable. Let g, h ∈ Hn ⋊ Sn and x ∈ Hn∗ (g)
conjugate g to h. In this section we will show that a number MI (g, h) is computable
from only the elements g and h.
Notation. Let x ∈ Hn∗ (a) conjugate a, b ∈ Hn ⋊ Sn . Then, for each i ∈ I, let li (x)
be chosen so that ti (x) = li (x)|t[i] (a)|. Note, since ti (x) ≡ 0 mod t[i] (a) for all
i ∈ I, that li (x) ∈ Z for each i ∈ I.
Recall that σa = σb and that Lemma 3.8 tells us that, if a and b are Hn conjugated, then t[i] (a) = t[i] (b) for every [i] class of σa . Thus, for every i ∈ I and
d1 ∈ N, we have that Xi1 ,d1 (a) = Xi1 ,d1 (b) (the sets from Definition 3.5). Also,
(Xi1 ,d1 (a))x is almost equal to Xi1 ,d1 (a) since the set Xi1 ,d1 (a) consists of all points
(i1 , m) ∈ Xn where m ≡ d1 mod |t[i1 ] (a)|. Moreover (X[i1 ],d1 (a))x is almost equal
to X[i1 ],d1 (a) since X[i1 ],d1 (a) is the union of sets Xis ,ds (a) where for each is ∈ [i1 ]
we have that (Xis ,ds (a))x is almost equal to Xis ,ds (a).
Remark 4.16. From Lemma 3.8, we have for any Hn -conjugated g, h ∈ Hn ⋊ Sn
and any i ∈ I(g) that tiσg (x) − ti (x) = ti (ωh ) − ti (ωg ). If we assume that x ∈
Hn∗ (g), we then have that ti (ωg ) ≡ ti (ωh ) mod |t[i] (g)|. Hence if g and h are Hn∗ (g)
conjugate, then for any infinite orbit Og of g, there must be an infinite orbit of h
which is almost equal to Og . It will therefore not be ambiguous to omit a and b and
simply write Xi1 ,d1 and X[i],d1 .
Definition 4.17. Given g ∈ Hn ⋊ Sn , we define an equivalence relation ∼g on
{[k]g | k ∈ I(g)} as the one generated by setting [i]g ∼g [j]g if and only if there
is an orbit of g almost equal to X[i],d1 (g) ⊔ X[j],e1 (g) for some d1 , e1 ∈ N. Writing
[i] ∼g [j] will not be ambiguous since the relation ∼g will always be used with
respect to the σg -classes of g. Note that if a, b ∈ Hn ⋊ Sn are Hn -conjugated, then
σa = σb and a and b produce the same equivalence relation.
Proposition 4.18. Let a, b ∈ Hn ⋊ Sn be Hn∗ (a)-conjugated. Then there exists
a computable constant K(a, b) (computable from only a and b) such that for any
x ∈ Hn∗ (a) which conjugates a to b and any given i, j where [i] ∼a [j], we have that
|[i]||li (x)| − |[j]||lj (x)| < K(a, b).
Proof. We follow in spirit the proof of [ABM15, Prop 4.3]. For convenience, we
introduce notation to describe a set almost equal to X[i],d1 .
Notation. For any set Y ⊆ Xn , and any q := (q(1), q(2), . . . , q(n)) ∈ Nn , let
Y q := Y \ {(i, m) | i ∈ Zn and m < q(i)}.
We will assume that a, b ∈ Hn ⋊ Sn and x ∈ Hn∗ (a) are known. Let i, j ∈ I
satisfy [i] ∼a [j]. Then there exist d1 , e1 ∈ N such that X[i],d1 ⊔ X[j],e1 is almost
equal to an infinite orbit of a and hence, by Remark 4.16, is almost equal to an
infinite orbit of b. Denote these infinite orbits by Oa and Ob respectively. Note
that Oa x = Ob .
Let ǫk be the smallest integers such that
i) for all k ∈ [i] ∪ [j], ǫk > max(zk (a), zk (b));
ii) for all k ∈ [i], ǫk ≡ dk mod |t[i] (a)|;
iii) for all k ∈ [j], ǫk ≡ ek mod |t[j] (a)|
ǫk if k ∈ [i] ∪ [j]
and define v ∈ Zn by vk =
1 otherwise.
We now have that
⊆ X[i],d1 ∩ Ob ;
X[i],d1
v
⊆ X[i],d1 ∩ Oa ; X[i],d1
X[j],e1
v
⊆ X[j],e1 ∩ Oa ; and X[j],e1
v
v
⊆ X[j],e1 ∩ Ob .
This allows us to decompose Oa and Ob :
Oa = X[i],d1
v
⊔ X[j],e1
v
⊔ Si,j
Ob = X[i],d1
v
⊔ X[j],e1
v
⊔ Ti,j
where Si,j and Ti,j are finite sets. Define ǫ′k to be the smallest integers such that
for all k ∈ [i] ∪ [j]
i) ǫ′k > zk (x);
ii) ǫ′k > ǫk + |tk (x)|;
iii) ǫ′k ≡ ǫk mod |t[k] (a)|
′
ǫk if k ∈ [i] ∪ [j]
′
n
′
and define v ∈ Z by vk =
1 otherwise.
These conditions for ǫ′k imply that x restricts to a bijection from
X[i],d1
v′
to X[i],d1
v ′ +t(x)
and X[j],e1
v′
to X[j],e1
v ′ +t(x)
.
Hence x restricts to a bijection between the following finite sets
(11)
X[i],d1 v \ X[i],d1 v′ ⊔ X[j],e1 v \ X[j],e1 v′ ⊔ Si,j
(12)
and X[i],d1 v \ X[i],d1 v′ +t(x) ⊔ X[j],e1 v \ X[j],e1 v′ +t(x) ⊔ Ti,j .
By definition, x eventually translates with amplitude tk (x) = lk (x)·|t[k] (a)| for each
k ∈ [i] ⊔ [j]. Thus
X
lk (x)
X[i],d1 |v \ X[i],d1 |v′ +t(x) = X[i],d1 |v \ X[i],d1 |v′ +
k∈[i]
and X[j],e1 |v \ X[j],e1 |v′ +t(x) =
X[j],e1 |v \ X[j],e1 |v′
Now, since (11) and (12) have the same cardinality, we have
X
X
(13)
lk′ (x) + |Ti,j | = |Si,j |
lk (x) +
k∈[i]
+
X
lk′ (x).
k′ ∈[j]
k′ ∈[j]
where |Si,j | and |Ti,j | are constants computable from only a and b. Using Lemma
3.9 we may rewrite each element of {lk (x) | k ∈ [i]} as a computable constant plus
li (x) and each element of {lk′ (x) | k ∈ [j]} as a computable constant plus lj (x).
Let Ai,j denote the sum of all of these constants (which ‘adjust’ the values of the
translation lengths of x amongst each σa class). Now (13) becomes
|[i]|li (x) + |[j]|lj (x) + Ai,j + |Ti,j | = |Si,j |.
By the generalised triangle inequality we have
|[i]||li (x)| 6 |[j]||lj (x)| + |Ai,j | + |Si,j | + |Ti,j |
and |[j]||lj (x)| 6 |[i]||li (x)| + |Ai,j | + |Si,j | + |Ti,j |.
Thus
|[i]||li (x)| − |[j]||lj (x)| 6 |Ai,j | + |Si,j | + |Ti,j | =: C(i, j)
|[j]||lj (x)| − |[i]||li (x)| 6 |Ai,j | + |Si,j | + |Ti,j | = C(j, i)
⇒ |[i]||li (x)| − |[j]||lj (x)| 6 C(i, j).
We may then complete this process for all pairs of rays i′ , j ′ ∈ I such that there
exist d′1 , e′1 ∈ N such that X[i′ ],d′1 ⊔ X[j ′ ],e′1 is almost equal to an infinite orbit of a.
Let Ĉ(a, b) denote the maximum of all of the C(i′ , j ′ ).
Now, consider if k, k ′ ∈ I satisfy [k] ∼a [k ′ ]. This means that there exist
(0) (1)
(f ) (1)
(f ) (f +1)
k (1) , k (2) , . . . , k (f ) ∈ I and d1 , d1 , . . . , d1 , e1 , . . . , e1 , e1
∈ N such that
for all p ∈ Zf −1
X[k],d(0) ⊔ X[k(1) ],e(1) , X[k(p) ],d(p) ⊔ X[k(p+1) ],e(p+1) , and X[k(f ) ],d(f ) ⊔ X[k′ ],e(f +1)
1
1
1
1
1
1
are almost equal to orbits of a. We wish to bound |[k]||lk (x)| − |[k ′ ]||lk′ (x)| , and
will do this by producing bounds for
|[k]||lk (x)| − |[k ′ ]||lk′ (x)| and |[k ′ ]||lk′ (x)| − |[k]||lk (x)|.
We start by rewriting |[k]||lk (x)| − |[k ′ ]||lk′ (x)| as
|[k]||lk (x)| − |[k (1) ]||lk(1) (x)| + |[k (1) ]||lk(1) (x)| − |[k (2) ]||lk(2) (x)| + . . .
. . . + |[k (f −1) ]||lk(f −1) (x)| − |[k (f ) ]||lk(f ) (x)| + |[k (f ) ]||lk(f ) (x)| − |[k ′ ]||lk′ (x)|
which by definition is bounded by
C(k, k (1) ) +
f
−1
X
C(k (q) , k (q+1) ) + C(k (f ) , k ′ )
q=1
and so
|[k]||lk (x)| − |[k ′ ]||lk′ (x)| 6 C(k, k (1) ) +
f
−1
X
C(k (q) , k (q+1) ) + C(k (f ) , k ′ )
q=1
6 (f + 1)Ĉ(a, b)
6 n · Ĉ(a, b)
′
Similarly, |[k ]||lk′ (x)| − |[k]||lk (x)| 6 (f + 1)Ĉ(a, b) 6 n · Ĉ(a, b). Thus n · Ĉ(a, b) + 1
is a suitable value for K(a, b).
Now we note that, without knowledge of the conjugator x, for all k, k ′ such that
X[k],d1 ⊔ X[k′ ],e1 is almost equal to an infinite orbit of a, the sets Sk,k′ and Tk,k′ are
computable, and so the constants C(k, k ′ ) are also computable. Hence Ĉ(a, b) and
so K(a, b) are computable using only the elements a and b.
We shall now show that, if a is conjugate to b in Hn∗ (a), then there is a conjugator
x ∈ Hn∗ (a) such that for all i ∈ I there exists a j ∈ I such that [j] ∼a [i] and
tj (x) = 0. This will allow us to use the previous proposition to bound |tk (x)| for all
[k] ∼a [j]. We will produce such a conjugator using an adaptation of the element
defined in [ABM15, Lem 4.6]. As with their argument, we again use Lemma 4.7,
which stated that if x ∈ G conjugates a to b then y ∈ G also conjugates a to b if
and only if there exists a c ∈ CG (a) such that cx = y.
Notation. Let g ∈ Hn ⋊ Sn and i ∈ I(g). Then Cg ([i]) := {k | [k] ∼g [i]} ⊆ I(g).
This is the set of all k ∈ Zn corresponding to rays of Xn whose σg -class is related
to [i]g .
This following definition is inspired by [ABM15, Lem 4.6].
Definition 4.19. Let h ∈ Hn ⋊Sn , i ∈ I(h), and g ∈ CHn (h). Then g[i] ∈ Sym(Xn )
is defined to be equal to the product of all cycles of g∞ whose support have infinite
intersection with a set Xj,d (g) where j ∈ Ch ([i]) and d ∈ N.
18
CHARLES GARNET COX
Lemma 4.20. Let n ∈ {2, 3, . . .}, h ∈ Hn ⋊ Sn , g ∈ CHn (h) and i ∈ I. If ti (g) 6= 0,
then tj (g) 6= 0 for all j ∈ Ch ([i]).
Proof. If j ∈ [i], then by Lemma 3.9, tj (g) = ti (g) 6= 0.
Since the equivalence relation ∼h is generated by pairs of classes [i] 6= [j] with
X[i],d (h) ⊔ X[j],e (h) for some d, e ∈ N almost equal to some infinite orbit of h, it is
enough to prove the lemma in that case. Let (i, m) ∈ X[i],d with m > zi (g) and let
|s| >> 0 so that (i, m)hs = (j ′ , m′ ) ∈ X[j],e where m′ > zj ′ (g). If tj ′ (g) = 0, then
(i, m)hs gh−s = (i, m) 6= (i, m + ti (g)) = (i, m)g. Hence tj ′ (g) 6= 0 and, therefore,
tk (g) 6= 0 for all k ∈ [j ′ ] = [j].
Lemma 4.21. Let h ∈ Hn ⋊ Sn , i ∈ I(h), and g ∈ CHn (h). Then g has no
finite orbits within supp((h|σh | )[i] ). Also, if tj (g) 6= 0 for some j ∈ Ch ([i]), then
supp(g[i] ) = supp((h|σh | )[i] ).
Proof. Consider if {(k, m)g d | d ∈ Z} is finite for some (k, m) ∈ supp((h|σh | )[i] ).
Then, for all e ∈ Z, {(k, m)h−e g d he | d ∈ Z} is finite since h−e g d he = g d . But this
implies that g has infinitely many finite orbits, contradicting that g ∈ Hn .
From now on fix a j ∈ Ch ([i]) and m > max{zj (g), zj (h)}. By Lemma 4.20
if tj (g) 6= 0 for some j ∈ Ch ([i]), then tk (g) 6= 0 for all k ∈ Ch ([i]). Consider if (j, m)g d 6∈ supp(h∞ ) for some d ∈ Z. Then there exists e ∈ N such
that (j, m)g d he g −d = (j, m) and because g ∈ CHn (g) this is a contradiction:
(j, m)g d he g −d = (j, m)he 6= (j, m).
Now consider if (j, m)g d 6∈ supp((h|σh | )[i] ). From the previous two paragraphs,
we may assume that (j, m)g d lies in an infinite orbit of h and an infinite orbit of g.
Hence we may choose |d| >> 0 so that (j, m)g d = (j ′ , m′ ) where j ′ 6∈ Ch ([i]) and
m′ > max{zj ′ (g), zj ′ (h)}. If f t[j ′ ] (h) > 0 and f t[j ′ ] (h) − dtj ′ (g) > 0, then
′
(j, m)g d hf |[j ]| g −d = (j ′ , m′ + f t[j ′ ] (h) − dtj ′ (g))
′
and so {(j, m)g d hf |[j ]| g −d | f ∈ Z} has infinite intersection with Rj ′ . But this
′
contradicts that j ′ 6∈ Ch ([i]) i.e. that {(j, m)h|[j ]|f | f ∈ Z} ∩ Rj ′ is finite. Hence
supp(g[i] ) ⊆ supp((h|σh | )[i] ). Assume that supp((h|σh | )[i] ) 6⊆ supp(g[i] ). Then there
exists e ∈ Z such that (j, m)he 6∈ supp(g). But then (j, m)he gh−e = (j, m) implies
that (j, m)g = (j, m), contradicting that (j, m) ∈ supp(g).
Lemma 4.22. Let h ∈ Hn ⋊ Sn and g ∈ CHn (h). Then, for every i ∈ I(h),
g[i] ∈ CHn (h). Moreover, tj (g[i] ) = tj (g) if j ∈ Ch ([i]) and tj (g[i] ) = 0 otherwise.
Proof. If tj (g) = 0 for some j ∈ Ch ([i]), Lemma 4.20 states that tk (g) = 0 for all
k ∈ Ch ([i]), and hence g[i] = idSym(Xn ) and the statement trivially holds. We are
therefore free to apply Lemma 4.21 so that supp(g[i] ) = supp((h|σh | )[i] ). Thus for
each k 6∈ Ch ([i]), supp(g[i] ) ∩ Rk is finite and for each k ∈ Ch ([i]) the action of
g[i] can only differ from the action of g on a finite subset of Rk . It follows that
g[i] ∈ Hn and that tj (g[i] ) = tj (g) if j ∈ Ch ([i]) and tj (g[i] ) = 0 otherwise. Finally
we show h and g[i] commute. If (k, m) ∈ supp((h|σh | )[i] ), then (k, m) ∈ supp(g)
and (k, m) ∈ supp(g[i] ). Hence (k, m)gh = (k, m)hg ⇒ (k, m)g[i] h = (k, m)hg[i] . If
(k, m) 6∈ supp((h|σh | )[i] ), then (k, m), (k, m)h 6∈ supp(g[i] ).
From this lemma, if h ∈ Hn ⋊ Sn and i ∈ I(h), then (h|σh | )[i] ∈ CHn (h).
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
19
Lemma 4.23. Let a, b ∈ Hn ⋊ Sn be conjugate by some x ∈ Hn∗ (a). Then there
exists a conjugator x′ ∈ Hn∗ (a) which, for each i ∈ I, there is a j such that [j] ∼a [i]
and tj (x′ ) = 0.
Proof. Let x ∈ Hn∗ (a) conjugate a to b. From the definition of Hn∗ (a), we have for
all i ∈ I that ti (x) ≡ 0 mod |ti (a|σa | )|. Thus there exist constants m1 , . . . , mn ∈ Z
such that, for all i ∈ I,
ti (x) = mi |ti (a|σa | )|.
1
u
Let R(a)
Qu:= {j , . . . j } ⊆ I be a set of representatives for the ∼a -classes (so that
a∞ = s=1 a[j s ] ). Thus, given i ∈ I, there is a unique d ∈ {1, . . . , u} such that
[j d ] ∼a [i]. Choose some j ∈ R(a) and consider
(a−|σa |mj )[j] x.
By Lemma 4.22 (ad|σa | )[j] ∈ CHn (a) and (ad|σa | )[j] ∈ Hn∗ (a) for every d ∈ Z. Now
tj ((a−|σa |mj )[j] x) = 0
and (a−|σa |mj )[j] x conjugates a to b by Lemma 4.7. Thus a suitable candidate for
x′ is
Y
(a−|σa |mj )[j] x.
j∈R(a)
Recall, given a, b ∈ Hn ⋊ Sn , that MI (a, b) was a number such that if a and b
∗
∗
are
P Hn (a)-conjugated, then there exists x ∈ Hn (a) which conjugates a to b with
i∈I |ti (x)| < MI (a, b).
Proposition 4.24. Let a, b ∈ Hn ⋊ Sn be Hn∗ (a)-conjugated. Then a number
MI (a, b) is computable.
F
Proof. Let S(a) := {i1 , . . . , iv } ⊆ I be representatives of I, so that i∈S(a) [i] = I
and, for any distinct d, e ∈ Zv , we have [id ] 6= [ie ] . We work for a computable
bound for {|li (x)| | i ∈ S(a)} since ti (x) = li (x)|t[i] (a)| and the numbers |t[i] (a)| are
computable. Lemma 3.9 from Section 3.2 will then provide a bound for |li (x)| for
all i ∈ I. Proposition 4.18 says that there is a computable number K(a, b) =: K
such that for every i, j ∈ I where [i] ∼a [j], we have
|[i]||li (x)| − |[j]||lj (x)| < K.
By Lemma 4.23, we can assume that for any given i ∈ S(a), either ti (x) = 0 or
there exists a j ∈ I such that [j] ∼a [i] and tj (x) = 0. If ti (x) = 0, then we are
done. Otherwise,
K
< K.
|[i]||li (x)| − |[j]||lj (x)| < K ⇒ |[i]||li (x)| < K ⇒ |li (x)| <
|[i]|
Continuing this process for each i ∈ S(a) (of which there are at most n) implies
that
X
|li (x)| < nK.
i∈S(a)
We may then compute, using Lemma 3.9 from Section 3.2, a number K ′ such that
X
|li (x)| < K ′ .
i∈I
20
CHARLES GARNET COX
A suitable value for MI (a, b) is therefore K ′ · max {|t[i] (a)| : i ∈ I}.
4.4. Showing that the numbers {yj (g, h) | j ∈ I c } are computable. In this
section we will show, given any g, h ∈ Hn ⋊ Sn , numbers {yj (g, h) | j ∈ I c (g)}
are computable such that if there exists an x ∈ Hn∗ (g) which conjugates g to h,
then there is an x′ ∈ Hn∗ (g) which conjugates g to h such that ti (x′ ) = ti (x) for all
i ∈ I(g) and tj (x′ ) = yj (g, h) for all j ∈ I c (g).
Note that the condition on elements to be in Hn∗ (g) provides no restriction on
the translation lengths for the rays in I c (g). This means that the arguments in this
section work as though our conjugator is in Hn .
From Section 3.1 we have that for any g ∈ Hn ⋊ Sn , any point (j, m) such that
j ∈ I c (g) and m > zj (g) lies in an orbit of g of size |[j]|.
Notation. Let g ∈ Hn ⋊ Sn and r ∈ N. Then Irc (g) := {j ∈ I c (g) | |[j]| = r}. Also,
′
we may choose jr1 , . . . , jru such that [jr1 ] ∪ [jr2 ] ∪ . . . ∪ [jru ] = Irc (g) and [jrk ] 6= [jrk ] for
every distinct k, k ′ ∈ Zu . We shall say that jr1 , . . . , jru are representatives of Irc (g).
Lemma 4.25. Let a ∈ Hn ⋊ Sn . Fix an r ∈ N, let jr1 , . . . , jru be representatives of
Irc (a), and let d, d′ be distinct numbers in Zu . Fix an ordering on the r-cycles of a
within Z(a). Label these σ1 , . . . , σf and, for each e ∈ Zf , let (ie , me ) ∈ supp(σe ).
Now define cjrd ,jrd′ ∈ Sym(Xn ) by
(i, m + 1)
if i ∈ [jrd ] and m > zi (a)
′
if i ∈ [jrd ] and m > zi (a) + 1
(i, m − 1)s−1
′
(i1 , m1 )a
if (i, m) = (jrd , zjrd′ (a))as−1 for some s ∈ Zr
(i, m)cjrd ,jrd′ :=
(ie+1 , me+1 )as−1 if (i, m) = (ie , me )as−1 for some e ∈ Zf −1 , s ∈ Zr
if (i, m) = (if , mf )as−1 for some s ∈ Zr
(j d , z d (a))as−1
r jr
(i, m)
otherwise.
Then cjrd ,jrd′ ∈ CHn (g).
Proof. Note that cjrd ,jrd′ produces a bijection on the r-cycles of g, and so conjugates
g to g i.e. cjrd ,jrd′ ∈ CSym(Xn ) (g). By construction it satisfies the condition to be in
Hn .
Lemma 4.26. Let g, h ∈ Hn ⋊ Sn and r ∈ N. If x1 , x2 ∈ Hn both conjugate g to
h and jr1 , . . . , jru are representatives of Irc (g), then
u
u
X
X
tjrs (x1 ) =
tjrs (x2 ).
s=1
s=1
Proof. Given x1 , x2 ∈ Hn which both conjugate g to h, it is possible to produce,
by multiplying by an element of the centraliser which is a product of elements
cjr1 ,k (with k ∈ {jr2 , . . . , jru }), elements x′1 , x′2 ∈ Hn which both conjugate g to h
and for which tjrs (x′1 ) = 0 = tjrs (x′2 ) for all s ∈ {2, . . . , u} and so by Lemma 3.9,
tj (x′1 ) = tj (x′2 ) for all j ∈ Irc (g) \ [jr1 ]. By construction we then have that
tjr1 (x′1 ) =
u
X
s=1
tjrs (x1 ) and tjr1 (x′2 ) =
u
X
tjrs (x2 ).
s=1
Now consider y := x′1 (x′2 )−1 . By construction tj (y) = 0 for all j ∈ Irc (g) \ [jr1 ]
since tj (x′1 ) = tj (x′2 ) for all j ∈ Irc (g) \ [jr1 ]. Also tj (y) = tjr1 (y) for all j ∈ [jr1 ]
since y conjugates g to g (and so we also have that y ∈ CHn (g)). If tjr1 (y) 6= 0,
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
21
then y contains an infinite cycle with support intersecting the branch jr1 and a
branch jp ∈ I c (g) \ Irc (g) so that |[jp ]| = p 6= r i.e. the infinite cycle contains
(jr1 , m1 ), (jp , m2 ) ∈ Xn . This means there exists an e ∈ Z such that (jr1 , m1 )y e =
(jp , m2 ). But then y e ∈ CHn (g) and y e sends an r-cycle to an p-cycle where r 6= p,
a contradiction. Hence for any x, x′ ∈ Hn which both conjugate g to h,
u
u
X
X
tjrs (x) =
tjrs (x′ ).
s=1
s=1
From this proof, the following is well defined.
Notation. Let g, h ∈ Hn ⋊ Sn , r ∈ N and jr1 , . . . , jru be representatives of Irc (g).
Then M{jr1 ,...,jru } (g, h) denotes the number such that, for any x ∈ Hn∗ (g) which
Pu
conjugates g to h, d=1 tjrd (x) = M{jr1 ,...,jru } (g, h). Since we will fix a set of representatives, we will often denote M{jr1 ,...,jru } (g, h) by Mr (g, h).
We will show that one combination {yd ∈ Z | d ∈ Zu } is computable in order
to show, for any g, h ∈ Hn ⋊ Sn and any r ∈ N, that Mr (g, h) is computable. The
following will be useful for this. Recall that for any g ∈ Hn ⋊ Sn , Z(g) := {(i, m) ∈
Xn | i ∈ Zn and m < zi (g)}.
Notation. Let g ∈ Hn ⋊ Sn and r ∈ {2, 3, . . .}. Then ηr (g) := supp(gr Z(gr )) /r
denotes the number of orbits of gr Z(gr ) of size r. This is well defined since Lemma
4.3 states that gr restricts to a bijection on Z(gr ). Also, let η1 (g) := |Z(g) \
(supp(g Z(g))|. Since Z(gr ) is finite for all r ∈ N, we have that ηr (g) is finite for
all r ∈ N.
Lemma 4.27. Let g, h ∈ Hn ⋊ Sn , r ∈ N and jr1 , . . . , jru be representatives of Irc (g).
Then M{jr1 ,...,jru } (g, h) and the numbers {yj (g, h) | j ∈ I c (g)} are computable (using
only the elements g and h).
Proof. For each r ∈ N, any conjugator of g and h must send the r-cycles of g to
the r-cycles of h. Fix an r ∈ N and let jr1 , . . . , jru be representatives of Irc (g). Given
any g, h ∈ Hn ⋊ Sn which are Hn -conjugated, let
(14)
yk (g, h) := zk (h) − zk (g) for all k ∈ {jr2 , . . . , jru }.
(15)
yjr1 (g, h) := zjr1 (h) − zjr1 (g) + ηr (g) − ηr (h).
We work towards proving that the values for yk (g, h) defined in (14) and (15) are
suitable in 3 steps. First, consider if ηr (g) = ηr (h). This means that there is a
conjugator in FSym which conjugates gr Z(gr ) to hr Z(hr ) and hence the values
are sufficient. Secondly, consider if ηr (g) = ηr (h) + d for some d ∈ N. In this case,
first send ηr (h) r-cycles in Z(gr ) to those in Z(hr ). Then send the d remaining
cycles in Z(gr ) to the first d r-cycles on the branches [jr1 ] by increasing yjr1 (g, h)
by d. Finally, if ηr (g) = ηr (h) − e for some e ∈ N, then send the ηr (g) r-cycles in
Z(gr ) to r-cycles in Z(hr ) and then send the first e r-cycles of g on the branches
[jr1 ] to the remaining r-cycles in Z(hr ) by decreasing yjr1 (g, h) by e. With all of
these cases, the values defined in (14) and (15) are suitable.
Proof of Theorem 1. From Remark 4.14 and Remark 4.15 of Section 4.2, TCP(Hn )
is solvable if, given a, b ∈ Hn ⋊ Sn , the numbers MI (a, b) and {yj (a, b) | j ∈ I c } are
22
CHARLES GARNET COX
computable from only a and b. The computability of these numbers was shown,
respectively, in Proposition 4.24 and Lemma 4.27.
5. Applications of Theorem 1
Our strategy is to use [BMV10, Thm. 3.1]. We first set up the necessary notation.
Definition 5.1. Let H be a group and G E H. Then AGEH denotes the subgroup
of Aut(G) consisting of those automorphisms induced by conjugation by elements
of H i.e. AGEH := {φh | h ∈ H}.
Definition 5.2. Let G be a finitely presented group. Then A 6 Aut(G) is orbit
decidable if, given any a, b ∈ G, there is an algorithm which decides whether there
is a φ ∈ A such that aφ = b. If Inn(G) 6 A, then this is equivalent to finding a
φ ∈ A and x ∈ G such that x conjugates aφ to b.
The algorithmic condition in the following theorem means that certain computations for D, E, and F are possible. This is satisfied by our groups being given by
recursive presentations, and the maps between them being defined by the images
of the generators.
Theorem 5.3. (Bogopolski, Martino, Ventura [BMV10, Thm. 3.1]). Let
1 −→ D −→ E −→ F −→ 1
be an algorithmic short exact sequence of groups such that
(i) D has solvable twisted conjugacy problem,
(ii) F has solvable conjugacy problem, and
(iii) for every 1 6= f ∈ F , the subgroup hf i has finite index in its centralizer
CF (f ), and there is an algorithm which computes a finite set of coset representatives, zf,1 , . . . , zf,tf ∈ F ,
CF (f ) = hf izf,1 ⊔ · · · ⊔ hf izf,tf .
Then, the conjugacy problem for E is solvable if and only if the action subgroup
ADEE = {φg | g ∈ E} 6 Aut(D) is orbit decidable.
Remark. For all that follows, the action subgroup ADEE is provided as a recursive
presentation where the generators are words from Aut(D).
5.1. Conjugacy for finite extensions of Hn . We shall say that B is a finite
extension of A if A E B and A is finite index in B. The following is well known.
Lemma 5.4. If G is finitely generated and H is a finite extension of G, then H is
finitely generated.
Proposition 5.5. Let n ∈ {2, 3, . . .}. If E is a finite extension of Hn , then CP(E)
is solvable.
Proof. Within the notation of Theorem 5.3, set D := Hn and F to be a finite group
so to realise E as a finite extension of Hn . Conditions (ii) and (iii) of Theorem
5.3 are satisfied since F is finite. Theorem 1, the main theorem of the previous
section, states that condition (i) is satisfied. Thus CP(E) is solvable if and only if
AHn EE = {φe | e ∈ E} is orbit decidable. We note that AHn EE contains a copy
of Hn (since Hn is centreless). Moreover, it can be considered as a group lying
between Hn and Hn ⋊ Sn . Hence AHn EE is isomorphic to a finite extension of
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
23
Hn , and so by Lemma 5.4 is finitely generated. Thus AHn EE = hφe1 , φe2 , . . . , φek i
where {e1 , . . . , ek } is a finite generating set of E. From Lemma 4.4, given any
g ∈ NSym(Xn ) (Hn ) ∼
= Aut(Hn ), we may compute σg : the isometric permutation
of the rays induced by g. Thus we may compute hσei | i ∈ Zk i =: Eσ . Now,
given a, b ∈ Hn , our aim is to decide whether there exists φe ∈ AHn EE such that
(a)φe = b. Since Inn(Hn ) 6 AHn EE , this is equivalent to finding a τ ∈ Eσ and
x ∈ Hn such that (xτ )−1 a(xτ ) = b, which holds if and only if x−1 ax = τ bτ −1 .
Finally, since Eσ is finite (there are at most n! permutations of the rays), searching for an x ∈ Hn which conjugates a to σe bσe−1 for all σe ∈ Eσ provides us with a
suitable algorithm. Searching for such a conjugator can be achieved by Theorem 1
or [ABM15, Thm. 1.2].
5.2. Describing Aut(U ) for U finite index in Hn . Recall that g2 , . . . , gn were
elements of Hn such that gi : translates the first branch of Xn by 1; translates
the ith branch by -1; sends (i, 1) to (1, 1); and does not move any points of the
other branches (which meant that, if n ∈ {3, 4, . . .}, then Hn = hgi | i = 2, . . . , ni).
For any given n ∈ {2, 3, . . .}, the family of finite index subgroups Up 6 Hn were
defined (for p ∈ N) in [BCMR14] as follows. Note that FAlt(X) denotes the index
2 subgroup of FSym(X) consisting of all even permutations on X.
Up := h FAlt(Xn ), gip | i ∈ {2, ..., n} i
Notation. Let A 6f B denote that A has finite index in B.
Let n ∈ {3, 4, . . .}. If p is odd, then Up consists of all elements of Hn whose
eventual translation lengths are all multiples of p. If p is even, then Up consists of
all elements u of Hn whose eventual translations are all multiples of p and
n
Y
t (u)
(16)
gi i ∈ FAlt(Xn )
u
i=2
i.e. FSym(Xn ) 6 Up if and only if p is odd. This can be seen by considering, for
some i, j ∈ Zn , the commutator of gip and gjp . This will produce p 2-cycles which
will produce an odd permutation if and only if p is odd. If n = 2, then for all p ∈ N
all u ∈ Up 6 H2 will satisfy (16).
Lemma 5.6 (Burillo, Cleary, Martino, Röver [BCMR14]). Let n ∈ {2, 3, . . .}. For
every finite index subgroup U of Hn , there exists a p ∈ 2N with
FAlt(Xn ) = Up′ < Up 6f U 6f Hn
where Up′ denotes the commutator subgroup of Up .
Alternative proof. Let n ∈ {2, 3, . . .} and let U 6f Hn . Thus FAlt ∩U 6f FAlt.
Since FAlt is both infinite and simple, FAlt 6 U . Let πn : Hn → Zn−1 , g 7→
(t2 (g), . . . , tn (g)). Thus (U )πn 6f Zn−1 and so there is a number d ∈ N such that
(dZ)n−1 6 (U )πn ([(U )πn : Zn−1 ] is one such value for d). This means that for any
k ∈ Zn \ {1} there exists a u ∈ U such that tk (u) = −d, t1 (u) = d, and ti (u) = 0
otherwise. Moreover, for each k ∈ Zn \ {1} there is a σ ∈ FSym such that gkd σ ∈ U .
First, let n ∈ {3, 4, . . .}. Since FAlt 6 U , we may assume that either σ is trivial or
is a 2-cycle with disjoint support from supp(gk ). Thus (gkd σ)2 = gk2d ∈ U . If n = 2,
we may assume that σ is either trivial or equal to ((1, s) (1, s + 1)) for any s ∈ N.
24
CHARLES GARNET COX
Now, by direct computation, g2d ((1, 1) (1, 2))g2d ((1, d + 1) (1, d + 2)) = g22d . Thus,
for any n ∈ {2, 3, . . .},
hg22d , . . . , gn2d , FAlt(Xn )i 6 U.
Hence, if p := 2d, then Up 6 U .
Remark. {(Up )πn | p ∈ N} are the congruence subgroups of Zn−1 .
Now, given U 6f Hn , our strategy for showing that CP(U ) is solvable is as
follows. First, we show for all p ∈ N that TCP(Up ) is solvable. Using Theorem 5.3,
we then obtain that all finite extensions of Up have solvable conjugacy problem.
By the previous lemma, we have that any finite index subgroup U of Hn is a finite
extension of some Up (note that Up E U since Up E Hn ). This will show that CP(U )
is solvable.
TCP(Up ) requires knowledge of Aut(Up ). From [Cox16, Prop. 1], we have that
any group G for which there exists an infinite set X where FAlt(X) 6 G 6 Sym(X)
has NSym(X) (G) ∼
= Aut(G) by the map ρ 7→ φρ . By Lemma 5.6 any finite index
subgroup of Hn contains FAlt(Xn ). Thus, if U 6f Hn , then NSym(Xn ) (U ) ∼
=
Aut(U ) by the map ρ 7→ φρ . In fact we may show that a stronger condition holds.
Lemma 5.7. If 1 6= N E Hn , then FAlt(Xn ) 6 N .
Proof. We have that N ∩ FAlt(Xn ) E Hn . Since FAlt(Xn ) is simple, the only way
for our claim to be false is if N ∩ FAlt(Xn ) were trivial. Now, N 6 Hn , and so
[N, N ] 6 FSym(Xn ). Thus [N, N ] must be trivial, and so N must be abelian. But
the condition for elements α, β ∈ Sym(Xn ) to commute (that, when written in
disjoint cycle notation, either a power of a cycle in α is a power of a cycle in β or
the cycle in α has support outside of supp(β)) is not preserved under conjugation
by FAlt(Xn ), and so cannot be preserved under conjugation by Hn i.e. N is not
normal in Hn , a contradiction.
Remark. It follows that all Houghton groups are monolithic: each has a unique
minimal normal subgroup which is contained in every non-trivial normal subgroup.
The unique minimal normal subgroup in each case will be FAlt(Xn ).
We now introduce notation to help describe any finite index subgroup of Hn
(where n ∈ {2, 3, . . .}).
Notation. For each i ∈ Zn , let Ti (U ) := min{ti (u) | u ∈ U and ti (u) > 0}.
Pk
Furthermore for all k ∈ Zn , let T k (U ) := i=1 Ti (U ) and let T 0 (U ) := 0.
We will now introduce a bijection φU : Xn → XT n (U) . This bijection will induce
an isomorphism φ̂U : Sym(Xn ) → Sym(XT n (U) ) which restricts to an isomorphism
NSym(Xn ) (U ) → NSym(XT n (U ) ) ((U )φ̂U ).
Our bijection φU will send the ith branch of Xn to Ti (U ) branches in XT n (U) . For
simplicity let g1 := g2−1 . Now, for any i ∈ Zn and d ∈ N,
T (U)
Xi,d (gi i
T (U)
) = {(i, m) | m ≡ d mod |ti (gi i
)|}
T (U)
|ti (gi i )|
th
where
Thus the i
= Ti (U ) by the definition of gi .
branch of Xn may be partitioned into Ti (U ) parts:
T (U)
Xi,1 (gi i
T (U)
) ⊔ Xi,2 (gi i
T (U)
) ⊔ . . . ⊔ Xi,Ti (U) (gi i
).
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
25
We will now define the bijection φU by describing the image under φU of all points
T (U)
T (U)
in each set Xi,d (gi i ) where i ∈ Zn and d ∈ ZTi (U) . Let (i, m) ∈ Xi,d (gi i ).
Then
m−d
((i, m))φU := T i−1 (U ) + d,
+1
Ti (U )
T (U)
i.e. φU sends, for all i ∈ Zn and d ∈ ZTi (U) , the ordered points of Xi,d (gi i ) to
the ordered points of the (T i−1 (U ) + d)th branch of XT n (U) . An example of this
bijection with n = T1 (U ) = T2 (U ) = T3 (U ) = 3 is given below.
The set Xn
The set XT n (U )
Figure 2. Our bijection between Xn and XT n (U) , which can be
visualised by rotating the rectangles 90 degrees clockwise.
We now describe the image of U under φ̂U . First, φ̂U preserves cycle type.
Thus FAlt(XT n (U) ) 6 (U )φ̂U . Moreover FSym(XT n (U) ) 6 (U )φ̂U if and only if
FSym(Xn ) 6 U . The following will be useful to describe (U )φ̂U .
Notation. For any n ∈ N and any i ∈ Zn , let Ri := i × N, the ith branch of Xn
or XT n (U) , and Qi := (Ri )φU , so that Qi consists of Ti (U ) branches of XT n (U) .
Lemma 5.8. Let n ∈ {2, 3, . . .} and U 6f Hn . If u ∈ U , then (u)φ̂U ∈ HT n (U) .
Moreover, for each i ∈ ZT n (U) there exists a g ∈ (U )φ̂U such that ti (g) = 1.
Proof. Let n ∈ {2, 3, . . .} and U 6f Hn . Since φ̂U preserves cycle type, outside of a finite set, (u)φ̂U consists of fixed points and infinite cycles. If i ∈ Zn
and m > zi (u) + |ti (u)|, let (i′ , m′ ) = (i, m)φU and note, since Ti (U ) ti (u), that
(i′ , m′ )(u)φ̂U = (i′ , m′ + ti (u)/Ti (U )). Hence (u)φ̂U ∈ HT n (U) . Now, from the
definition of Ti (U ), there exists a g ∈ (U )φ̂U such that ti (g) = 1.
Using the above notation we have, for any g ∈ (U )φ̂U , that
(17)
if ti (g) = d, then tj (g) = d for all j such that Rj ⊆ Qi
i.e. for any u ∈ U and any i ∈ Zn , the eventual translation lengths of (u)φ̂ for the
branches in Qi must be the same.
Lemma 5.9. Let n ∈ {2, 3, . . .} and U 6f Hn . Then NSym(XT n (U ) ) ((U )φ̂U ) 6
HT n (U) ⋊ R where R 6 ST n (U) consists of isometric permutations of the rays of
XT n (U) . Thus if ρ ∈ NSym(Xn ) (U ), then (ρ)φ̂U ∈ HT n (U) ⋊ ST n (U) .
Proof. Let G := NSym(XT n (U ) ) ((U )φ̂U ). Describing G will be simpler than describing NSym(Xn ) (U ). We will do this in three stages: we first show, for any ρ ∈ G,
that the image under ρ of each ray of XT n (U) is almost equal to a ray of XT n (U) ;
26
CHARLES GARNET COX
secondly we show that ρσρ−1 ∈ HT n (U) , meaning that G 6 HT n (U) ⋊ ST n (U) ; and
finally we will describe a subgroup of R 6 ST n (U) such that {σρ | ρ ∈ G} 6 R.
Let i, j ∈ ZT n (U) and ρ ∈ G. Consider if (Ri )ρ and Rj have infinite intersection
but are not almost equal. Either (Ri )ρ has infinite intersection with Rj and Rj ′
where j ′ 6= j, or there is an i′ 6= i such that (Ri′ )ρ is almost equal to an infinite
subset of Rj . For the first case, let g ∈ (U )φ̂U be chosen so that ti (g) = 1. Thus
g has an infinite cycle containing {(i, m) | m > zi (g)}. Let (i′ , m′ ) = (i, zi (g))ρ.
Then {(i′ , m′ )(ρ−1 gρ)d | d ∈ N} = {(i′ , m′ )ρ−1 g d ρ | d ∈ N} = {(i, m)ρ | m > zi (g)}
which has infinite intersection with Rj and Rj ′ . But from the description of orbits
of Hn in [ABM15], ρ−1 gρ 6∈ HT n (U) and so (ρ−1 gρ)φ̂−1
U 6∈ U i.e. ρ 6∈ G. The second
case reduces to the first, since it implies that (Rj )ρ−1 has infinite intersection with
Ri and Ri′ . Hence if (Ri )ρ and Rj have infinite intersection (where i, j ∈ ZT n (U) ),
then (Ri )ρ and Rj are almost equal.
Let ρ ∈ G and let ω := ρσρ−1 ∈ Sym so that ω sends almost all of each branch of
XT n (U) to itself. Since, for each branch, ω preserves the number of infinite orbits
induced by g, we have for all k ∈ ZT n (U) and all g ∈ (U )φ̂U that tk (ω −1 gω) = tk (g).
Fix an i ∈ ZT n (U) and choose g ∈ HT n (U) so that ti (g) = 1. Note that g −1 ωgω −1 ∈
FSym. Thus there is a d ∈ N such that, for all m > d, (i, m)g −1 ωgω −1 = (i, m).
We may now assume for some m′ > d that (i, m′ )ω = (i, m′ + s), where s ∈ N. This
is because ω sends only finitely many points of Ri to another branch and to ensure
the positivity of s we may replace ω with ω −1 . Hence
(i, m′ + 1)g −1 ωgω −1 = (i, m′ )ωgω −1 = (i, m′ + s)gω −1 = (i, m′ + s + 1)ω −1 .
But, from our assumptions, (i, m′ + 1)g −1 ωgω −1 = (i, m′ + 1). Hence
ω : (i, m′ + 1) 7→ (i, m′ + s + 1)
i.e. ω : (i, m) 7→ (i, m + s) for all m > m′ . Running this argument for each
k ∈ ZT n (U) we have for any ρ ∈ G that ρσρ−1 ∈ HT n (U) .
We now describe necessary conditions on the branches of XT n (U) for there to
be a ρ ∈ G that permutes those branches. Let us assume that ρ ∈ G and that
(Rj )ρ is almost equal to Rj ′ , where Rj ⊆ Qk and Rj ′ ⊆ Qk′ . If k = k ′ then for
all g ∈ (U )φ̂U , tj (g) = tj ′ (g); hence all permutations of the branches in Qk may lie
in G. If k 6= k ′ then let g ∈ U be such that tk (g) > 0, tk′ (g) < 0, and ti (g) = 0
for all branches i in Xn \ (Rk ∪ Rk′ ). Such an element exists by Lemma 5.6: there
is a p ∈ N such that Up 6 U . Let h := (g)φ̂. For a ray j ′ in Qk′ ⊆ XT n (U) we
have that tj ′ (ρ−1 hρ) > 0 and so, by (17), if ρ−1 hρ ∈ (U )φ̂U then we must have for
all branches i′ in Qk′ that ti′ (ρ−1 hρ) > 0. From our choice of g, we may conclude
that Qk cannot contain fewer branches than Qk′ . Similarly tj (ρhρ−1 ) < 0 and so
for all rays i in Qk , ti (ρhρ−1 ) < 0 meaning that Qk′ cannot contain fewer branches
than Qk . Hence a necessary condition on j, j ′ ∈ ZT n (U) for there to be a ρ ∈ G
such that (Rj )ρ is almost equal to Rj ′ is that Qk ⊇ Rj and Qk′ ⊇ Rj ′ contain
the same number of branches. This condition is equivalent to the statement that
Tk (U ) = Tk′ (U ).
Remark 5.10. We can more precisely
describe the subgroup R from the previous
Ln
lemma. We have that R = ( i=1 STi (U) ) ⋊ A, where A 6 Sn corresponds to
permuting the factors of the summand that have the same size. More explicitly, if
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
27
σ ∈ Sn and σ = σ1 . . . σd in disjoint cycle notation, then σ ∈ A if and only if, for
every i ∈ Zd and k, k ′ ∈ supp(σi ), Tk (U ) = Tk′ (U ).
We now consider the particular case of Um := hFAlt(Xn ), gim | i ∈ Zn i 6f Hn
(where gi is our standard generator of Hn with t1 (g) = 1 and ti (g) = −1). In this
case the sets Qi , i ∈ Zn , all consist of m rays. Thus Lemma 5.9 states that
NSym(Xmn ) ((Um )φ̂Um ) 6 Hmn ⋊ (Sm ≀ Sn )
Ln
where Sm ≀Sn = ( i=1 Sm )⋊Sn . In fact (18) is an equality. Let ρ ∈ Hmn ⋊(Sm ≀Sn ).
Then conjugation by ρ preserves cycle type. Thus ρ−1 (FAlt(Xmn ))ρ = FAlt(Xmn ).
Given gim ∈ Um , we have that (gim )φ̂Um is a product of m infinite cycles, each
with support equal to two branches of Xmn . Since conjugation by any ω ∈ Hmn
preserves cycle type and sends almost all of each branch of Xmn to itself, we have
that ω −1 ((gim )φ̂Um )ω ∈ (Um )φ̂Um . Elements of the head of Sm ≀ Sn send (gim )φ̂Um
to (gjm gk−m )φ̂Um for some j, k ∈ Zn . We now consider the preimage, under φ̂Um , of
elements of the base of Sm ≀ Sn .
(18)
Notation. Let Yi,0 (U ) := {(i, m) | 1 6 m 6 Ti (U )} and, for any s ∈ N, let
−sT (U)
Yi,s (U ) := {(i, m) | sTi (U ) + 1 6 m 6 (s + 1)Ti (U )} = Yi,0 (U )gi i . Thus
∞
∞
F
F
−sT (U)
Ri =
Yi,s (U ) =
Yi,0 (U )gi i .
s=0
s=0
Definition 5.11. Given any i ∈ Zn and σ ∈ FSym(Xn ) with supp(σ) ⊆ Yi,0 (U )),
let uσ,i be the element of Sym(Xn ) such that supp(uσ,i ) ⊆ Ri and, for every s ∈
−sT (U)
sT (U)
i.e. let uσ,i induce the permutation σ on
N ∪ {0}, uσ,i Yi,s (U ) = gi i σgi i
every set Yi,s (U ).
The preimage of the base of Sm ≀Sn is therefore {uσ,i | i ∈ Zn , σ ∈ FSym(Yi,0 (Um ))}.
Let j ∈ Zn and σ ∈ FSym(Yj,0 (Um )). We have that conjugation by uσ,j sends gim
−1 m
m
to an element of Hn and, in particular, t1 (u−1
σ,j gi uσ,j ) = −ti (uσ,j gi uσ,j ) = m.
m
Now u−1
σ,j gi uσ,j ∈ Um since conjugation by elements of Sym preserves cycle type.
Hence all elements of Hmn ⋊ (Sm ≀ Sn ) normalise (Um )φ̂Um .
Proposition 5.12. Let n ∈ {2, 3, . . .} and U 6f Hn . Then there exists an m ∈ N
such that Um 6 U and NSym(Xn ) (U ) 6 NSym(Xn ) (Um ). Importantly this implies
that Um is characteristic in U .
Proof. Let n ∈ {2, 3, . . .} and U 6f Hn . Lemma 5.6 states that there exists an m ∈
2N such that Um 6f U . Let Gm := NSym(Xmn ) ((Um )φ̂Um ), which, from the above,
equals Hmn ⋊ (Sm ≀ Sn ). We wish to show that NSym(Xn ) (U ) 6 NSym(Xn ) (Um ).
From Lemma 5.9, NSym(XT n (U ) ) ((U )φ̂) 6 HT n (U) ⋊ R where R 6 ST n (U) . It is
−1
therefore sufficient to show that (HT n (U) ⋊ R)φ̂−1
U 6 (Gm )φ̂Um .
Ln
−1
First consider (R)φ̂U . By Remark 5.10, R = ( i=1 STi (U) ) ⋊ A. But (A)φ̂−1
U 6
Sn corresponds to the isometric permutations of the rays of Xn , all of which lie in
Ln
−1
(Gm )φ̂−1
i=1 STi (U) )φ̂U consists of {uσ,i | i ∈ Zn , σ ∈ FSym(Yi,0 (U ))}.
Um . Now (
But since Ti (U ) m for every i ∈ Zn , this is a subset of {uσ,i | i ∈ Zn , σ ∈
FSym(Yi,0 (Um ))} 6 (Gm )φ̂−1
Um .
Finally we consider (HT n (U) )φ̂−1
U . Let gj ∈ HT n (U) . Then t1 (gj ) = 1 and
−1
tj (gj ) = −1 where Rj ⊆ Qk . We first consider the image of gj under φ̂U
. Since
28
CHARLES GARNET COX
−1
(R)φ̂−1
U 6 (Gm )φ̂Um , we may assume that j is the lowest numbered branch in Qk ,
T (U)
T (U)
−1
1
) ⊔ Xk,1 (gj k ). We
meaning that supp((gj )φ̂−1
U ) = (supp(gj ))φU = X1,1 (gj
now consider the image of this element under φ̂Um . For every d ∈ N let id :=
(d − 1) · T1 (U ) + 1, let kd := (d − 1) · Tk (U ) + (k − 1)m + 1 and, for each i ∈ Zn ,
let ci denote the constant such that m = ci Ti (U ). Then
!
c1
c1
G
G
T1 (U)
c1 T1 (U)
(X1,1 (gk
))φUm =
X1,id (gk
) φUm =
Rid
d=1
d=1
and
T (U)
(Xk,1 (gk k ))φUm
=
ck
G
!
c T (U)
X1,kd (gkk k )
d=1
φUm =
ck
G
Rkd
d=1
Fc1
F ck
where c1 T1 (U ) = ck Tk (U ) = m. Thus supp((gj )φ̂−1
U φ̂Um ) = ( d=1 Rid )⊔( d=1 Rkd ).
Consider the element g ∈ Gm that: has one infinite orbit; preserves the colexicographic
Fc1 order (inherited
Fck from this ordering on Xmn = {(i, m) | i ∈ Zmn , m ∈ N})
on d=1
Rid and on d=1
Rkd ; sends (k1 , 1) to (1, 1); and fixes all other points of
Xmn . We note that g and (gj )φ̂−1
U φ̂Um have the same supports, and considering
their actions on Xmn we see that g = (gj )φ̂−1
U φ̂Um .
5.3. Conjugacy for groups commensurable to Hn . In Section 6.2 we show that
there exists an algorithm which, for any n ∈ {2, 3, . . .}, p ∈ N, and Hnp -conjugated
a, b ∈ Hnp ⋊ Snp , decides whether a and b are (Up )φ̂Up -conjugated.
Proposition 5.13. Let n ∈ {2, 3, . . .}, p ∈ 2N and Up 6 Hn . Then TCP(Up ) is
solvable.
Proof. Our aim is to produce an algorithm which, given a, b ∈ Up and φρ ∈ Aut(Up ),
decides whether there exists a u ∈ Up such that (u−1 )φρ au = b i.e. u−1 ρau = ρb.
Let φ̂ := φ̂Up and let us rephrase our question in (Up )φ̂:
u−1 ρau = ρb
⇔ (u−1 ρau)φ̂ = (ρb)φ̂
⇔ (u−1 )φ̂(ρa)φ̂(u)φ̂ = (ρb)φ̂
where (ρa)φ̂, (ρb)φ̂ ∈ Hnp ⋊ Snp and (u)φ̂ ∈ (Up )φ̂ 6 Hnp from Lemma 5.8 and
Lemma 5.9. The algorithm for TCP(Hnp ) in Section 4 may be used to produce a
conjugator x ∈ Hnp if one exists. Given such a x, Proposition 6.8 decides whether
there exists a y ∈ (Up )φ̂ which conjugates (ρa)φ̂ to (ρb)φ̂.
Proposition 5.14. Let n ∈ {2, 3, . . .}, p ∈ 2N, and Up 6 Hn . If E is a finite
extension of Up , then AUp EE is orbit decidable.
Proof. Recall that for AUp EE = {φe | e ∈ E} to be orbit decidable, there must exist
an algorithm which decides, given any a′ , b′ ∈ Up , whether there exists a ψ ∈ AUp EE
such that
(19)
(a′ )ψ = b′ .
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
29
Since Aut(Up ) ∼
= NSym(Xn ) (Up ), we may rewrite (19) as searching for an element
φρ ∈ Aut(Up ) such that
(a′ )φρ = b′ and φρ ∈ AUp EE
i.e. searching for a ρ ∈ Ē, the image of the natural epimorphism from AUp EE , such
that ρ−1 a′ ρ = b′ . Now we rephrase this question using the map φ̂ := φ̂Up :
(ρ−1 )φ̂(a′ )φ̂(ρ)φ̂ = (b′ )φ̂.
Let a := (a′ )φ̂, b := (b′ )φ̂, and y := (ρ)φ̂. Thus a, b ∈ (Up )φ̂ 6 Hnp are known, and y
must be chosen to be any element in (Ē)φ̂ so that y −1 ay = b. We have that (Up )φ̂ 6c
(Ē)φ̂ 6 Hnp ⋊ Snp for some c ∈ N. Thus there exist coset representatives e2 , . . . , ec
and (Ē)φ̂ = h(g2p )φ̂, . . . , (gnp )φ̂, FAlt(Xmn ), e2 , . . . , ec i. Now, deciding whether a, b
are (Ē)φ̂-conjugated is equivalent to deciding whether any pair in {(a, ei be−1
i )|2 6
i 6 c} is (Up )φ̂-conjugated, by noting that every element in (Ē)φ̂ decomposes as uei
for some u ∈ (Up )φ̂ and 2 6 i 6 c and that u−1 au = ei be−1
⇔ (uei )−1 a(uei ) = b.
i
Deciding if any pair is (Up )φ̂-conjugated is possible by first deciding whether any
pair is Hmn -conjugated and then, for each such pair, applying Proposition 6.8.
Proposition 5.15. Let n ∈ {2, 3, . . .}, p ∈ 2N, and Up Ef G. Then CP(G) is
solvable.
Proof. We again use [BMV10, Thm. 3.1]. G is a finite extension of Up by F , some
finite group. TCP(Up ) is solvable by Proposition 5.13. AUp EG is orbit decidable by
Proposition 5.14. Hence CP(G) is solvable.
Recall that A and B are commensurable if and only if there exist NA ∼
= NB with
NA finite index and normal in A and NB finite index and normal in B. Our aim is
to prove Theorem 2, that, for any n ∈ {2, 3, . . .} and any group G commensurable
to Hn , CP(G) is solvable.
Proof of Theorem 2. Fix an n ∈ {2, 3, . . .} and let G and Hn be commensurable.
Then there is a U Ef G, Hn . By Proposition 5.12, there exists an m ∈ 2N such
that Um is finite index and characteristic in U . It is a well know result that if A is
characteristic in B and B is normal in C, then A is normal in C. Hence Um E G
and we may apply Proposition 5.15 to obtain that CP(G) is solvable.
6. Further computational results
6.1. Computational results regarding centralisers in Hn .
Proposition 6.1. There is an algorithm that, given any n ∈ {2, 3, . . .} and any
g ∈ Hn ⋊ Sn , outputs a finite generating set for t(CHn (g)) 6 Zn .
In order to prove Proposition 6.1 we need some notation.
Notation. Given a subset A ⊆ Zn and g ∈ Hn ⋊ Sn , let tA (g) = (x1 , . . . , xn ) ∈ Zn
be given by xi = ti (g) if i ∈ A and xi = 0 otherwise.
Note that tA is not a homomorphism: consider h, an isometric permutation
of the 1st and 2nd branches of Xn , and g2 , a standard generator of Hn . Then
t1 (h−1 g2 h) = t1 (g2−1 ) = −1 but t1 (h−1 ) + t1 (g2 ) + t1 (h) = 0 + 1 + 0. It is a
homomorphism from Hn however. Also, given any g ∈ Hn ⋊ Sn and A ⊆ Zn , the
vector tA (g) is computable by Lemma 4.4.
30
CHARLES GARNET COX
Lemma 6.2. Given n ∈ {2, 3, . . .} and g ∈ Hn ⋊ Sn ,
t(CHn (g)) = tI c (g) (CHn (g)) ⊕ tI(g) (CHn (g)).
Proof. Let h ∈ CHn (g) and i ∈ I(g). From Lemma 4.22, h[i] ∈ CHn (g) and
−1
tj (hh−1
[i] ) = 0 if j ∈ Cg ([i]) and tj (hh[i] ) = tj (h) otherwise. Thus, repeating the
argument for all classes of I(g), we construct h′ ∈ CHn (g) with t(h′ ) = tI c (g) (h), and
thus t(h(h′ )−1 ) = tI(g) (h). It follows that t(h) ∈ htI c (g) (CHn (g)), tI(g) (CHn (g))i.
Clearly tI c (g) (CHn (g)) ∩ tI(g) (CHn (g)) is the trivial element of Zn and hence they
generate a direct sum.
P
Lemma 6.3. Let n ∈ {2, 3, . . .}, g ∈ Hn ⋊ Sn , and r ∈ N. Then i∈I c (g) ti (h) = 0
r
for all h ∈ CHn (g).
1
u
Proof. By Lemma
all j ∈ [i]g . If
Pjur , . . . jr are representatives
P 3.9, tj (h) = ti (h)Pfor
u
c
of Ir (g), then
s=1 tjrs (id) = 0 by Lemma
i∈Irc (g) ti (h) = r
s=1 tjrs (h) = r
4.26.
Definition 6.4. Given g ∈ Hn ⋊Sn , let Θ(g) be the set of all cjrd ,jrd′ of the statement
of Lemma 4.25 where r ∈ N, jr1 , . . . , jru are distinct representatives of Irc (g), and
d, d′ ∈ Zu are distinct. Note that since I c (g) is finite, Θ(g) is finite.
Lemma 6.5. Given n ∈ {2, 3, . . .} and g ∈ Hn ⋊ Sn , tI c (g) (CHn (g)) is generated
by the image of Θ(g) under t.
Proof. By Lemma
P 3.9 and Lemma 6.3, for any h ∈ CHn (g) and for every r ∈ N
we have that i∈Irc (g) ti (h) = 0 and ti (h) = tk (h) if [k] = [i]. Thus tI c (g) (CHn (g))
must be contained in
X
xi = 0 for all r ∈ N, xj = xk if [j] = [k]}.
A := {(x1 , . . . , xn ) ∈ Zn | xi = 0 if i ∈ I(g),
i∈Irc (g)
Note that it follows from the definition of Θ(g) that t(c) ∈ tI c (g) (CHn (g)) for all
c ∈ Θ(g). Thus ht(Θ(g))i ⊆ tI c (g) (CHn (g)) ⊆ A. We claim that ht(Θ(g))i = A.
Clearly, the lemma follows now from the claim. To see that t(Θ(g)) generates
PA, we
argue by induction on the L1 -norm of (x1 , . . . , xn ) ∈ A (i.e. ||(x1 , ..., xn )|| = |xi |).
Let y ∈ A. If ||y|| = 0, then y ∈ ht(Θ(g))i. Suppose that y = (x1 , ..., xn ). If
||y|| > 0, since xi = 0 for all i ∈ I(g), there must be j ∈ I c with xj 6= 0. Assume
xj < 0. Let r ∈ N such that j ∈ Irc (g). Then, by Lemma 6.3, there must be
k ∈ Irc (g), [k] 6= [j] with xk > 0. Observe that tl (cj,k ) is equal to 1 for l ∈ [j], is
equal to −1 for l ∈ [k], and zero otherwise. Hence ||yt(ck,j )|| = ||y|| − 2r, and we
conclude by induction that yt(ck,j ) ∈ ht(Θ(g))i and hence y ∈ ht(Θ(g))i.
We now have to find a generating set for tI(g) (CHn (g)).
Lemma 6.6. Let n ∈ {2, 3, . . .} and g ∈ Hn ⋊ Sn . Then for each i ∈ I(g),
tCg ([i]) (CHn (g)) 6 Zn is infinite cyclic and
M
tI(g) (CHn (g)) =
tCg ([i]) (CHn (g)),
i∈R
where R is a set of different representatives of I(g)/ ∼g .
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
31
Proof. Let i ∈ I(g). Write g = ωg σg . Note that g |σg | ∈ CHn (g) and that t((g |σg | )[i] )
is non-zero and lies in tCg ([i]) (CHn (g)). Thus tCg ([i]) (CHn (g)) is non-trivial.
Let h ∈ CHn (g), such that ti (h) is positive and minimum. Take c ∈ CHn (g). We
will show that tCg ([i]) (c) ∈ htCg ([i]) (h)i.
If ti (c) = 0 then, by Lemma 4.20, tCg ([i]) (c) has all its coordinates equal to zero
and hence lies in htCg ([i]) (h)i.
Assume that ti (c) > 0, the other is analogous. By the Euclidean algorithm,
ti (c) = dti (h) + r with 0 6 r < ti (h). Note that r = ti (ch−d ) and ch−d ∈ CHn (g)
and thus r = 0 by the minimality of h. Therefore ti (ch−d ) = 0, and, by Lemma 4.20,
tCg ([i]) (ch−d ) has all its coordinates equal to zero, and hence tCg ([i]) (c) = tCg ([i]) (hd ).
By Lemma 4.21 and Lemma 4.22, if h ∈ CHn (g), then h[i] ∈ CHn (g) and
tCg ([i]) (h) =
P tCg ([i]) (h[i] ) = t(h[i] ). Thus tCg ([i]) (CHn (g)) 6 tI(g) (CHn (g)). Since
tI(g) (h) = j∈R t(h[j] ), we get that tI(g) (CHn (g)) is generated by {tCg ([j]) (CHn (g)) |
j ∈ R} and clearly if [j] 6∼g [k] the cyclic subgroups tCg ([j]) (CHn (g)) and tCg ([k]) (CHn (g))
intersect trivially. Now the direct sum follows.
Lemma 6.7. There is an algorithm that given any n ∈ {2, 3, . . .}, g ∈ Hn ⋊Sn , and
i ∈ Zn , decides if i ∈ I(g) and moreover, if i ∈ I(g), it outputs some h ∈ CHn (g)
such that t(h) generates tCg ([i]) (CHn (g))
Proof. Deciding if i ∈ I(g) follows from Lemma 4.4.
So suppose that i ∈ I(g). By Lemma 6.6, tCg ([i]) (CHn (g)) is infinite cyclic, so
there must be h ∈ CHn (g) such that tCg ([i]) (h) generates tCg ([i]) (CHn (g)).
From Lemma 4.22 if h ∈ CHn (g) then h[i] ∈ CHn (g) and tCg ([i]) (h) = t(h[i] ).
Thus we can assume that h ∈ CHn (g) has only infinite orbits, its support has finite
intersection with the rays Rj , j 6∈ Cg ([i]), and t(h) generates tCg ([i]) (CHn (g)).
Write g = ωg σg . Note that g |σg | ∈ CHn (g) and that t((g |σg | )[i] ) is non-zero
and lies in tCg ([i]) (CHn (g)) = ht(h)i. Let g ∗ denote (g |σg | )[i] and observe that
h ∈ CHn (g) ⇒ h ∈ CHn (g ∗ ). Thus |tj (h)| 6 |tj (g ∗ )| for j ∈ [i], and tj (g ∗ ) is
computable from g by Lemma 4.4.
We will find computable bounds for zk (h), k ∈ Zn . If k ∈ I c (g), then Lemma
4.21 states that supp(h) = supp(g ∗ ) and so zk (h) 6 zk (g ∗ ) for all k ∈ I c (g). Now
consider if k ∈ I(g). We will show that zk (h) 6 zk (g ∗ ) + |tk (g ∗ )|. Assume, for
a contradiction, that zk (h) is minimal and that zk (h) > zk (g ∗ ) + |tk (g ∗ )|. Since
h ∈ CHn (g ∗ ), we have
(i, m)g ∗ h = (i, m)hg ∗ for all (i, m) ∈ Xn .
(20)
We may assume that tk (g ∗ ) < 0, since replacing g ∗ with (g ∗ )−1 yields a proof for
when tk (g ∗ ) > 0. Let m > zk (g ∗ ) + |tk (h)|. Note that
(k, m)g ∗ h = (k, m + tk (g ∗ ))h
and also that
(k, m)g ∗ h = (k, m)hg ∗ = (k, m + tk (h))g ∗ = (k, m + tk (g ∗ ) + tk (h)).
Thus (k, m + tk (g ∗ ))h = (k, m + tk (g ∗ ) + tk (h)), which, since tk (g ∗ ) < 0, contradicts
the minimality of zk (h).
Let S = {s ∈ Hn : zk (s) 6 zk (g) + |tk (g ∗ )|, |tk (s)| 6 |tk (g ∗ )|, k ∈ Zn }. Note
that S must contain h ∈ CHn (g). Note also that S is finite, and with the given
restrictions, one can enumerate all elements of S. For each s ∈ S, using the word
problem for Hn ⋊ Sn we can decide if s ∈ CHn (g), and using that the functions tA
32
CHARLES GARNET COX
are computable, we can decide if t(s) is non-zero and equal to tCg ([i]) (s). Let S ′ be
the subset of S of the elements satisfying the two conditions above. Finally, since
S ′ is finite, we can find c ∈ S ′ such that tCg ([i]) (s) ∈ ht(c)i for all s ∈ S ′ . Note that
since h ∈ S ′ such an c must exist.
Proof of Proposition 6.1. From Lemma 4.4 there is an algorithm that computes
I(g), I c (g), and Cg ([i]) for each i ∈ I(g). From Lemma 6.2 t(CHn (g)) = tI c (g) (CHn (g))⊕
tI(g) (CHn (g)), and we only need algorithms that gives generating sets for each of
the direct summands. By Lemma 6.5, there is an algorithm that outputs a generating set for tI c (g) (CHn (g)). By Lemma 6.7, there is an algorithm that computes
a generator of tCg ([i]) (CHn (g)) for each i ∈ I(g) and by Lemma 6.6, those elements
generate tI(g) (CHn (g)).
6.2. Deciding conjugacy in (Up )φ̂Up . Recall Up := hg2p , . . . , gnp , FAlt(Xn )i 6 Hn .
From Section, 5.2 (Up )φ̂Up 6 Hnp and
n−1
X
t((Up )φ̂Up ) = (a1 . . . a1 a2 . . . a2 . . . an . . . an )T a1 , . . . , an−1 ∈ Z, an = −
ai .
| {z }
| {z } | {z }
p
p
p
i=1
When p is even, FSym(Xnp ) ∩ (Up )φ̂Up = FAlt(Xnp ). In this section we prove the
following (which has been used in Section 5.3 above).
Proposition 6.8. There is an algorithm which, given any n ∈ {2, 3, . . .}, p ∈ 2N,
a, b ∈ Hnp ⋊ Snp and an x ∈ Hnp which conjugates a to b, decides whether a and b
are (Up )φ̂Up -conjugated.
Lemma 4.7 stated that if x ∈ G conjugates a to b then y ∈ G also conjugates a
to b if and only if there exists a c ∈ CG (a) such that cx = y.
Lemma 6.9. Let a, b ∈ Hnp ⋊ Snp be conjugate by x ∈ Hnp . Then a and b are
(Up )φ̂Up -conjugated if and only if there is a c ∈ CHnp (a) such that cx ∈ (Up )φ̂Up .
Proof. We apply Lemma 4.7. If there is a c ∈ CHnp (a) such that cx ∈ (Up )φ̂Up
then a and b are conjugate by cx ∈ (Up )φ̂Up . If there exists y ∈ (Up )φ̂Up such that
y −1 ay = b, let c := yx−1 . Then c ∈ CHnp (a) and cx = y ∈ (Up )φ̂Up .
Lemma 6.10. Let a, b ∈ Hnp ⋊ Snp and let x ∈ Hnp conjugate a to b. Then there
is an algorithm that decides whether there exists c ∈ CHnp (a) such that t(cx) ∈
t((Up )φ̂Up ), and outputs such an element if one exists.
Proof. Let {δ1 , . . . , δ e } denote a finite generating set of t(CHnp (a)), which is computable by Proposition 6.1. Deciding whether there is a c ∈ CHnp (a) such that
t(cx) ∈ t((Up )φ̂Up ) is equivalent to finding powers αi of the generators δ i ∈ Zn such
that
e
X
αi δ i ∈ t((Up )φ̂Up ).
t(x) +
i=1
Hence we must decide whether there are constants {a1 , . . . , an−1 } and {α1 , . . . , αe }
such that
e
n−1
X
X
αi δ i = (a1 . . . a1 a2 . . . a2 . . . . . . an . . . an )T , where an := −
(21) t(x) +
ai .
i=1
i=1
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
33
Viewing this as np linear equations, this system of equations has a solution if and
only if there is an element c ∈ CHnp (a) such that t(cx) ∈ t((Up )φ̂Up ). By writing
these equations as a matrix equation we may compute the Smith normal form to
decide if the equations have an integer solution (see, for example, [Laz96]) and
compute one should one exist (either from the reference or by enumerating all
possible inputs). Definition 6.4 and Lemma 6.7 then provide suitable preimages for
the elements {δ 1 , . . . , δ e } in order to output a viable c.
Lemma 6.11. Let n ∈ {2, 3, . . .} and g ∈ Hn ⋊ Sn . Then it is algorithmically
decidable whether CHn (g)∩(FSym(Xn )\FAlt(Xn )) is empty. A necessary condition
for it to be empty is that I c (g) = ∅.
Proof. We use Lemma 4.4 to compute I c (g). If there exists j ∈ I c (g), either |[j]g | is
odd or even. If |[j]g | is even, then we have that the first |[j]g |-cycle on this branch
is a finite order element of the centraliser that lies in FSym(Xn ) \ FAlt(Xn ). If
|[j]g | is odd, then the element that permutes only the first two |[j]g |-cycles of the
branches [j]g provides an element of the centraliser in FSym(Xn )\FAlt(Xn ). Hence
CHn (g)∩ FSym(Xn )\FAlt(Xn ) is non-empty whenever g has an even length orbit
or g has two or more orbits of length m for some odd number m. Since one of these
is satisfied if I c (g) is non-empty, assume that I(g) = Zn . From Section 3.1, this
implies that all of the finite orbits of g lie inside a finite subset of Xn , which is
computable by Lemma 4.4. Searching for orbits of g of the appropriate length is
therefore also possible in this case.
Definition 6.12. Let sgnX : FSym(X) → {1, −1} be the sign function for FSym(X),
Q
t (g)
so that the preimage of 1 is FAlt(X). For any g ∈ Hn , let fg := g( ni=2 gi i ). Note
that t(fg ) = 0 and so fg ∈ FSym(Xn ). Thus let ξn : Hn → {1, −1}, h 7→ sgnXn (fh ).
Note that for any g ∈ Hn , ξn (g) = ξn (fg ).
Lemma 6.13. There exists an algorithm which, given g ∈ Hn , computes ξn (g).
Q
t (g)
Proof. Since g( ni=2 gi i ) lies in FSym(Xn ), we may apply Lemma 4.4 to determine its cycle type, which is sufficient to compute ξn (g).
The previous definition exactly captures the form of the elements in (Up )φ̂Up .
Lemma 6.14. Let n ∈ {2, 3, . . .}, p ∈ 2N, g ∈ Hnp , and t(g) ∈ t((Up )φ̂Up ). Then
g ∈ (Up )φ̂Up if and only if ξnp (g) = 1.
Proof. Let h := (g)φ̂−1
Up . Using that t(g) ∈ t((Up )φ̂Up ) and that p ti (h) for every
i ∈ Zn : h ∈ Up ⇔ fh ∈ Up ⇔ fh ∈ FAlt(Xn ) ⇔ fg ∈ FAlt(Xnp ) ⇔ ξnp (g) = 1.
Lemma 6.15. Let n ∈ {2, 3, . . .}. Then ξn is a homomorphism.
Proof. Clearly the function sgnX is a homomorphism. Let g, h ∈ Hn . Then
n
n
n
Y
Y
Y
−(t (g)+ti (h))
−t (h)
−t (g)
)
gh = fg ( gi i )fh ( gi i ) = fg fh′ ( gi i
i=2
i=2
i=2
Q
Q
t (g)
−t (g)
where
= ( ni=2 gi i )fh ( ni=2 gi i ) has the same cycle type as fh . Thus
′
fgh = fg fh and ξn (gh) = ξn (fgh ) = ξn (fg fh′ ) = ξn (fg )ξn (fh′ ) = ξn (fg )ξn (fh ) =
ξn (g)ξn (h).
fh′
34
CHARLES GARNET COX
Proof of Proposition 6.8. By Lemma 6.9, a and b are (Up )φ̂Up -conjugated if and
only if there exists a c ∈ CHnp (a) such that cx ∈ (Up )φ̂Up . A necessary condition for
cx ∈ (Up )φ̂Up is that t(cx) ∈ (Up )φ̂Up . Lemma 6.10 decides whether this condition
can be satisfied. If not, then CHnp (a)x ∩ (Up )φ̂Up is empty and a and b are not
(Up )φ̂Up -conjugated. We can therefore assume that this condition is satisfied.
Let c ∈ CHnp (a) be the element outputted by the algorithm of Lemma 6.10 so
that t(cx) ∈ (Up )φ̂Up . By Lemma 6.14, cx ∈ (Up )φ̂Up if and only if ξnp (cx) = 1.
By Lemma 6.13, ξnp (cx) is computable. If ξnp (cx) = 1 we are done and so let us
assume that ξnp (cx) = −1. If there exists c′ ∈ CHnp (a) ∩ (FSym(Xnp ) \ FAlt(Xnp ))
then c′ c ∈ CHnp (a), t(c′ cx) = t(c′ ) + t(cx) = t(cx) ∈ t((Up )φ̂Up ), and ξnp (c′ cx) =
ξnp (c′ )ξnp (cx) = 1 since ξnp is a homomorphism by Lemma 6.15. Hence c′ cx
conjugates a to b and c′ cx ∈ (Up )φ̂Up . Lemma 6.11 decides whether such a c′
exists. If no such c′ exists, Lemma 6.11 states that I c = ∅. Thus the generating
set {δ 1 , . . . , δ e } for t(CHnp (a)) from Proposition 6.1 consists only of those elements
from Lemma 6.7. Lemma 6.7 computes δ̂ 1 , . . . , δ̂e ∈ CHnp (a) such that t(δ̂ j ) = δ j
for each j ∈ Ze . We then take the equations
t(x) +
e
X
αi δ i = (a1 . . . a1 a2 . . . a2 . . . . . . an . . . an )T , where an := −
n−1
X
ai .
i=1
i=1
labelled (21) above (where a1 , . . . , an−1 and α1 , . . . , ae are to be found) and add
the equation
e
Y
ξ(δ̂ j )αj ξ(x) = 1
j=1
which ensures that the chosen c ∈ CHnp (a) satisfies ξ(cx) = 1 (from Lemma 6.15).
From our assumptions, the choice of generating set and preimage are arbitrary: if
h, h′ ∈ CHnp (a) satisfy t(h) = t(h′ ), then we musthave that h−1 h′ ∈ FAlt(Xnp ),
since otherwise CHnp (a) ∩ FSym(Xnp ) \ FAlt(Xnp ) would be non-empty. We may
then write these equations as a matrix equation and compute the Smith normal
form to decide whether or not the equations have an integer solution.
6.3. Proving Theorem 3. The structure of centralisers for elements in Hn was
studied in [JG15]. In this section we show, for any g ∈ Hn ⋊ Sn , when CHn (g)
is finitely generated and when it is, that a finite generating set is algorithmically
computable. From Section 6.1, a finite set {δ 1 , . . . , δ e } is computable, from only
hδ 1 , . . . , δ e i. Hence, given
n ∈ {2, 3, . . .} and a ∈ Hn ⋊ Sn , such that t(CHn (a)) =P
e
c ∈ CHn (a), there exist α1 , . . . , αe ∈ Z such that t(c) = i=1 αi δ i .
Notation. Let {δ 1 , . . . , δ e′ } be the image of Θ(a) under t (where the set Θ(a) was
introduced in Definition 6.4) and let {δe′ +1 , . . . , δ e } be the image under t of the
elements outputted by the algorithm of Lemma 6.7.
We will begin to describe a finite generating set for CHn (a) (should one exist)
by choosing certain preimages (under t) of {δ1 , . . . , δ e }. Lemma 4.25 states that
Θ(a) ⊆ CHn (a), and so our preimages of {δ 1 , . . . , δ e′ } will be the elements of Θ(a).
The following lemma shows that, by imposing the condition that the preimage may
only have infinite orbits, there is a unique preimage for the elements {δe′ +1 , . . . , δ e }.
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
35
Lemma 6.16. Let n ∈ {2, 3, . . .}, a ∈ Hn ⋊ Sn , i ∈ I, and g ∈ CHn (a) such
that ht(g)i = tCa ([i]) (CHn (a)). If h ∈ CHn (a) and t(h) = t(g), then g[i] = h[i] . In
addition, for each s ∈ {e′ + 1, . . . , e} there is a unique preimage δs of δ s having only
infinite orbits.
Proof. Lemma 4.21 states that supp(g[i] ) = supp((a|σa | )[i] ) = supp(h[i] ) and so
|σa |
supp(g[i] h−1
)[i] ). Now, g[i] h−1
[i] ) ⊆ supp((a
[i] ∈ FSym(Xn ) must be trivial by the
first part of the statement of Lemma 4.21: elements of CHn (a) cannot have finite
orbits intersecting supp((a|σa | )[i] ).
The previous lemma therefore provides a unique preimage for each of the elements {δ e′ +1 , . . . , δ e }. We now describe the structure of CFSym(Xn ) (a) in order to
determine when CHn (a) is not finitely generated, and to describe a finite generating
set when one exists.
(0)
Notation. Let g ∈ Sym(X) for some non-empty set X. Then Yg
(1)
Yg
:= X \ supp(g), and, for each r ∈ N,
(r)
(r)
Yg
:= supp(g
:= supp(g r ). Thus:
(1)
Yg
∞
),
denotes
the fixed points of g; Yg , where r ∈ {2, 3, . . .}, is the union of the orbits of g of
(0)
size r; and Yg is the union of the infinite orbits of g.
Lemma 6.17. Let g ∈ Sym(X) and G 6 Sym(X) for some non-empty set X. If
Q∞
(r)
c ∈ CG (g), then c = r=0 αr where, for every r ∈ {0, 1, 2, . . .}, supp(αr ) ⊆ Yg .
Proof. We show that c must restrict to a bijection, for each r ∈ {0, 1, 2, . . .}, on
(r)
the set Yg . Consider if c sent an r-cycle of g to an s-cycle of g where r 6= s. By
possibly replacing c with c−1 , we may assume that r < s (where s may be finite
or infinite). Let y be in this orbit of size r. Then (y)g r = y. Since c ∈ CG (g),
(y)c−1 g r c = y. But then c sends both (y)c−1 and (y)c−1 g r to y, a contradiction.
Lemma 6.18. Let g ∈ Sym(X) for some non-empty set X. Then
∞
M
CFSym(Y (r) ) (g)
CFSym(X) (g) ∼
=
g
r=0
Q∞
Proof. Given c ∈ CFSym(X) (g), the previous lemma states that c = r=0 αr where,
(r)
for every r ∈ {0, 1, 2, . . .}, supp(αr ) ⊆ Yg . Now c ∈ FSym ⇒ | supp(c)| < ∞,
(r)
and so, for every r, | supp(αr )| < ∞ ⇒ αr ∈ FSym(Yg ). Clearly, if r 6= s, then
(r)
(s)
FSym(Yg ) ∩ FSym(Yg ) is trivial.
Lemma 6.19. Let g ∈ Sym(X), where X is a non-empty set, and let y, z ∈ X. If
c ∈ CSym(X) (g) and c : y 7→ z, then c : yg d 7→ zg d for all d ∈ Z.
Proof. For all d ∈ Z,
(z)c−1 g d c = yg d c and (z)c−1 g d c = (z)g d .
Thus c : yg d 7→ zg d for all d ∈ Z.
Lemma 6.20. Let r ∈ N and g ∈ Sym(X) where X is a non-empty set. Then
(r)
CFSym(Y (r) ) (g) ∼
Cr ≀ FSym g Yg
=
g
(r)
(r)
where g Yg is the set obtained from Yg by the equivalence relation x ∼ y ⇔
(r)
xg d = y for some d ∈ Z. Moreover, as a subgroup of FSym(Yg ), the base of
36
CHARLES GARNET COX
this wreath product consists of all of the r-cycles of g, and the head consists of all
finitary permutations of the orbits of g of size r.
Proof. Let c ∈ CFSym(Y (r) ) (g). From Lemma 6.18 and Lemma 6.19, c restricts to a
g
bijection of the r-cycles of g. We will impose the following condition on c: whenever
c sends an r-cycle σ of g to an r-cycle ω of g, it sends the smallest point of supp(σ)
(under some total ordering on X) to the smallest point of supp(ω). We may do
this by replacing c with hc where h is a product of r-cycles of g, since each r-cycle
of g clearly lies in CFSym(Y (r) ) (g). Then c is determined by its action on the points
g
(r)
Y
in g g , where the representatives can be chosen to be the smallest point in each
g-orbit of size r. Thus every element of CFSym(Y (r) ) (g) is a product of r-cycles of
g
g together with a permutation of the r-cycles of g i.e. CFSym(Y (r) ) (g) is the above
g
wreath product.
We now work through the cases |Irc (a)| = 0, |Irc (a)| = r, and |Irc (a)| > 2r.
(r)
Definition 6.21. Let n ∈ {2, 3, . . .} and g ∈ Hn ⋊ Sn . For each r ∈ N where Yg
is finite, let Ωr (g) consist of the elements of CFSym(Y (r) ) (g). This set is enumerable
g
since, from our description of orbits in Section 3.1, all such r-cycles of g lie in Z(g)
which is finite and computable from only the element g.
Notation. Given n ∈ N and Y ⊆ Xn , let Hn (Y ) := {h ∈ Hn | supp(h) ⊆ Y }.
Lemma 6.22. Let r ∈ N and |Irc (g)| = r. Then CHn (g) is not finitely generated.
Proof. By Lemma 6.5, if c ∈ CHn (g) then tj (c) = 0 for all j ∈ Irc (g). By Lemma
(r)
(r)
6.17, if c ∈ CHn (g) then c = f c′ where supp(f ) ⊆ Yg and supp(c′ ) ⊆ Xn \ Yg .
Thus
CHn (g) = CFSym(Y (r) ) (g) ⊕ CHn (Xn \Y (r) ) (g).
g
g
Lemma 6.20 states that
(r)
.
CFSym(Y (r) ) (g) ∼
Cr ≀ FSym g Yg
=
g
(22)
If (22) were finitely generated then FSym would also be finitely generated. Hence
CHn (g) is not finitely generated since it has a non-finitely generated quotient.
Lemma 6.23. Fix an r ∈ N such that |Irc (g)| = 2r and fix a j ∈ Irc (g). Then
CFSym(Y (r) ) (g) is a subgroup of the group generated by:
g
i) an element h ∈ Θ(g) with tj (h) 6= 0;
ii) an r-cycle
(g)) (j, zj (g))g . . . (j, zj (g))g r−1 ); and
Qr λr := ((j, zjs−1
(j, zj (g))g s−1 ).
iii) µr := s=1 ((j, zj (g))g
(r)
lie in
Proof. We show that the elements of CFSym(Y (r) ) (g) ∼
Cr ≀ FSym g Yg
=
g
(r)
hh, λr , µr i. Note that, by construction, supp(h) = Yg . Thus, given any r-cycle
σ of g, there exists d ∈ Z such that h−d λr dd = σ and so the base of our wreath
(r)
product lies in hh, λr , µr i. In the space g Yg , h consists of a single infinite orbit
(r)
and µr a single
It follows that hh, µr i ∼
= H2 , and so, in g Yg , we have
2-cycle.
(r)
6 hh, µr i. Thus all finitary permutations of the orbits of g of
that FSym g Yg
size r lie in hh, µr i, and so CFSym(Y (r) ) (g) 6 hh, λr , µr i.
g
TWISTED CONJUGACY IN HOUGHTON’S GROUPS
37
Lemma 6.24. Fix an r ∈ N such that |Irc(g)| = sr where s ≥ 3. Fix representatives jr1, . . . , jrs of Irc(g). Then CFSym(Y_g^(r))(g) is a subgroup of the group generated by:
i) elements h2, . . . , hs ∈ Θ(g) with t_{jr1}(hi) = 1 for all 2 ≤ i ≤ s and t_{jrk}(hk) = −1 for each 2 ≤ k ≤ s; and
ii) an r-cycle λr := ((jr1, z_{jr1}(g)) (jr1, z_{jr1}(g))g . . . (jr1, z_{jr1}(g))g^(r−1)).
Proof. We again show that the elements of CFSym(Y_g^(r))(g) ≅ Cr ≀ FSym(g\Y_g^(r)) lie in ⟨h2, . . . , hs, λr⟩. Note that
    ⋃_{i=2}^{s} supp(hi) = Y_g^(r)   and that   ⋂_{i=2}^{s} supp(hi) ⊇ {(j, m) | j ∈ [jr1], m ≥ zj(g)}.
Thus, given any r-cycle σ of g, there is an i ∈ {2, . . . , s} and d ∈ Z such that hi^(−d) λr hi^d = σ, and so the base of our wreath product lies in ⟨h2, . . . , hs, λr⟩. Now, in the space g\Y_g^(r), each hi consists of a single infinite orbit and there is a natural map ⟨h2, . . . , hs⟩ ↠ Hs, hi ↦ gi, where {gi | i = 2, . . . , s} denotes our standard generating set of Hs. To be more specific, this epimorphism is induced by the surjection of sets Xn → Xs, where: for each d ∈ {2, . . . , s} and each e ∈ N, {(k, zk(g) + e − 1) | k ∈ [jrd]} ↦ (d, e); for each 1 ≤ l ≤ f, supp(σl) ↦ (1, l), where σ1, . . . , σf are the ordered r-cycles within Z(g); and, for each e ∈ N, {(k, zk(g) + e − 1) | k ∈ [jr1]} ↦ (1, f + e). Thus, in g\Y_g^(r), we have that FSym(g\Y_g^(r)) ≤ ⟨h2, . . . , hs⟩, and so all finitary permutations of the orbits of g of size r lie in ⟨h2, . . . , hs⟩.
Note that the elements appearing in Lemma 6.23 and Lemma 6.24 lie in CHn(g). This is because the element λr, of type (ii), is an orbit of g of size r, and the elements of type (i) and (iii) have support within Y_g^(r) and induce a permutation of the orbits of g of size r.
Definition 6.25. Let Fa consist of:
i) the preimages of the elements {δ1, . . . , δe} defined above;
ii) the set Ωr(a) (of Definition 6.21) for those r ∈ N such that |Irc(a)| = 0;
iii) the elements λr and μr of Lemma 6.23 for those r ∈ N such that |Irc(a)| = 2r;
iv) the element λr of Lemma 6.24 for those r ∈ N such that |Irc(a)| ≥ 3r.
Proof of Theorem 3. By Lemma 4.4 we may compute the set {gr ≠ id | r ∈ N} and, for each r ∈ N, the sets Irc(a). If |Irc(a)| = r for any r ∈ N, then our algorithm may conclude, by Lemma 6.22, that CHn(a) is not finitely generated. We now show that if this is not the case, then the set Fa defined above is sufficient to generate CHn(a). Note that Fa is finite since {gr ≠ id | r ∈ N} is finite. Also Fa is computable.

Let c ∈ CHn(a). There exist numbers α1, . . . , αe ∈ Z such that t(c) = ∑_{i=1}^{e} αi δi. Thus, by using the preimages of the elements {δ1, . . . , δe} in Fa, we can reduce to the case where c ∈ CFSym(Xn)(a). By Lemma 6.18 it is sufficient to show, for each r ∈ N, that CFSym(Y_a^(r))(a) ≤ ⟨Fa⟩. For each r ∈ N, one of the following three cases occurs: |Irc(a)| = 0; |Irc(a)| = 2r; or |Irc(a)| ≥ 3r. In the first case, the elements of Ωr(a) are sufficient to generate CFSym(Y_a^(r))(a). The second and third cases were dealt with by Lemma 6.23 and Lemma 6.24 respectively, since Fa contains the finite generating sets described in each.
Mathematical Sciences, University of Southampton, SO17 1BJ, UK
E-mail address: [email protected]
| 4 |
Bayesian GAN
arXiv:1705.09558v3 [stat.ML] 8 Nov 2017
Yunus Saatchi
Uber AI Labs
Andrew Gordon Wilson
Cornell University
Abstract
Generative adversarial networks (GANs) can implicitly learn rich distributions over
images, audio, and data which are hard to model with an explicit likelihood. We
present a practical Bayesian formulation for unsupervised and semi-supervised
learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator
networks. The resulting approach is straightforward and obtains good performance
without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator,
the Bayesian GAN avoids mode-collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised
learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming
DCGAN, Wasserstein GANs, and DCGAN ensembles.
1 Introduction
Learning a good generative model for high-dimensional natural signals, such as images, video
and audio has long been one of the key milestones of machine learning. Powered by the learning
capabilities of deep neural networks, generative adversarial networks (GANs) [4] and variational
autoencoders [6] have brought the field closer to attaining this goal.
GANs transform white noise through a deep neural network to generate candidate samples from
a data distribution. A discriminator learns, in a supervised manner, how to tune its parameters
so as to correctly classify whether a given sample has come from the generator or the true data
distribution. Meanwhile, the generator updates its parameters so as to fool the discriminator. As
long as the generator has sufficient capacity, it can approximate the CDF inverse-CDF composition
required to sample from a data distribution of interest. Since convolutional neural networks by design
provide reasonable metrics over images (unlike, for instance, Gaussian likelihoods), GANs using
convolutional neural networks can in turn provide a compelling implicit distribution over images.
Although GANs have been highly impactful, their learning objective can lead to mode collapse, where
the generator simply memorizes a few training examples to fool the discriminator. This pathology is
reminiscent of maximum likelihood density estimation with Gaussian mixtures: by collapsing the
variance of each component we achieve infinite likelihood and memorize the dataset, which is not
useful for a generalizable density estimate. Moreover, a large degree of intervention is required to
stabilize GAN training, including feature matching, label smoothing, and mini-batch discrimination
[9, 10]. To help alleviate these practical difficulties, recent work has focused on replacing the
Jensen-Shannon divergence implicit in standard GAN training with alternative metrics, such as
f-divergences [8] or Wasserstein divergences [1]. Much of this work is analogous to introducing
various regularizers for maximum likelihood density estimation. But just as it can be difficult to
choose the right regularizer, it can also be difficult to decide which divergence we wish to use for
GAN training.
It is our contention that GANs can be improved by fully probabilistic inference. Indeed, a posterior
distribution over the parameters of the generator could be broad and highly multimodal. GAN
training, which is based on mini-max optimization, always estimates this whole posterior distribution
over the network weights as a point mass centred on a single mode. Thus even if the generator
does not memorize training examples, we would expect samples from the generator to be overly
compact relative to samples from the data distribution. Moreover, each mode in the posterior over the
network weights could correspond to wildly different generators, each with their own meaningful
interpretations. By fully representing the posterior distribution over the parameters of both the
generator and discriminator, we can more accurately model the true data distribution. The inferred
data distribution can then be used for accurate and highly data-efficient semi-supervised learning.
In this paper, we propose a simple Bayesian formulation for end-to-end unsupervised and semisupervised learning with generative adversarial networks. Within this framework, we marginalize the
posteriors over the weights of the generator and discriminator using stochastic gradient Hamiltonian
Monte Carlo. We interpret data samples from the generator, showing exploration across several
distinct modes in the generator weights. We also show data and iteration efficient learning of the true
distribution. We also demonstrate state of the art semi-supervised learning performance on several
benchmarks, including SVHN, MNIST, CIFAR-10, and CelebA. The simplicity of the proposed
approach is one of its greatest strengths: inference is straightforward, interpretable, and stable. Indeed
all of the experimental results were obtained without feature matching or any ad-hoc techniques.
We have made code and tutorials available at
https://github.com/andrewgordonwilson/bayesgan.
2 Bayesian GANs
Given a dataset D = {x(i) } of variables x(i) ∼ pdata (x(i) ), we wish to estimate pdata (x). We
transform white noise z ∼ p(z) through a generator G(z; θg ), parametrized by θg , to produce
candidate samples from the data distribution. We use a discriminator D(x; θd ), parametrized by θd ,
to output the probability that any x comes from the data distribution. Our considerations hold for
general G and D, but in practice G and D are often neural networks with weight vectors θg and θd .
By placing distributions over θg and θd , we induce distributions over an uncountably infinite space of
generators and discriminators, corresponding to every possible setting of these weight vectors. The
generator now represents a distribution over distributions of data. Sampling from the induced prior
distribution over data instances proceeds as follows:
(1) Sample θg ∼ p(θg ); (2) Sample z(1) , . . . , z(n) ∼ p(z); (3) x̃(j) = G(z(j) ; θg ) ∼ pgenerator (x).
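To make steps (1)-(3) concrete, here is a minimal NumPy sketch of sampling from the induced prior over data. The two-layer generator, its dimensions, and the isotropic Gaussian prior are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

def sample_generator_weights(dim_z=10, dim_h=64, dim_x=100, prior_std=1.0, rng=None):
    """Draw theta_g ~ p(theta_g): an isotropic Gaussian prior over the weights
    of a small two-layer generator (an arbitrary illustrative choice)."""
    rng = rng or np.random.default_rng()
    return {
        "W1": prior_std * rng.standard_normal((dim_z, dim_h)),
        "W2": prior_std * rng.standard_normal((dim_h, dim_x)),
    }

def generator(z, theta_g):
    """x = G(z; theta_g): a ReLU hidden layer followed by a linear output."""
    h = np.maximum(z @ theta_g["W1"], 0.0)
    return h @ theta_g["W2"]

# Steps (1)-(3): theta_g ~ p(theta_g), z ~ p(z), x = G(z; theta_g).
rng = np.random.default_rng(0)
theta_g = sample_generator_weights(rng=rng)
z = rng.standard_normal((5, 10))       # five white-noise samples
x_tilde = generator(z, theta_g)        # five samples from p_generator(x)
```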
For posterior inference, we propose unsupervised and semi-supervised formulations in Sec 2.1 - 2.2.
We note that in an exciting recent pre-print Tran et al. [11] briefly mention using a variational
approach to marginalize weights in a generative model, as part of a general exposition on hierarchical
implicit models (see also Karaletsos [5] for a nice theoretical exploration of related topics in graphical
model message passing). While promising, our approach has several key differences: (1) our GAN
representation is quite different, preserving a clear competition between generator and discriminator;
(2) our representation for the posteriors is straightforward, requires no interventions, provides novel
formulations for unsupervised and semi-supervised learning, and has state of the art results on many
benchmarks. Conversely, Tran et al. [11] is only pursued for fully supervised learning on a few small
datasets; (3) we use sampling to explore a full posterior over the weights, whereas Tran et al. [11]
perform a variational approximation centred on one of the modes of the posterior (and due to the
properties of the KL divergence is prone to an overly compact representation of even that mode);
(4) we marginalize z in addition to θg , θd ; and (5) the ratio estimation approach in [11] limits the
size of the neural networks they can use, whereas in our experiments we can use comparably deep
networks to maximum likelihood approaches. In the experiments we illustrate the practical value of
our formulation.
Although the high level concept of a Bayesian GAN has been informally mentioned in various
contexts, to the best of our knowledge we present the first detailed treatment of Bayesian GANs,
including novel formulations, sampling based inference, and rigorous semi-supervised learning
experiments.
2.1 Unsupervised Learning
To infer posteriors over θg, θd, we can iteratively sample from the following conditional posteriors:

    p(θg | z, θd) ∝ ( ∏_{i=1}^{ng} D(G(z^(i); θg); θd) ) p(θg | αg)                                            (1)

    p(θd | z, X, θg) ∝ ∏_{i=1}^{nd} D(x^(i); θd) × ∏_{i=1}^{ng} (1 − D(G(z^(i); θg); θd)) × p(θd | αd).        (2)

p(θg | αg) and p(θd | αd) are priors over the parameters of the generator and discriminator, with hyperparameters αg and αd, respectively. nd and ng are the numbers of mini-batch samples for the discriminator and generator, respectively.¹ We define X = {x^(i)}_{i=1}^{nd}.
We can intuitively understand this formulation starting from the generative process for data samples.
Suppose we were to sample weights θg from the prior p(θg |αg ), and then condition on this sample
of the weights to form a particular generative neural network. We then sample white noise z from
p(z), and transform this noise through the network G(z; θg ) to generate candidate data samples.
The discriminator, conditioned on its weights θd , outputs a probability that these candidate samples
came from the data distribution. Eq. (1) says that if the discriminator outputs high probabilities, then
the posterior p(θg |z, θd ) will increase in a neighbourhood of the sampled setting of θg (and hence
decrease for other settings). For the posterior over the discriminator weights θd , the first two terms of
Eq. (2) form a discriminative classification likelihood, labelling samples from the actual data versus
the generator as belonging to separate classes. And the last term is the prior on θd .
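As an illustration of Eqs. (1) and (2), the sketch below computes the unnormalized log conditional posteriors, assuming generic callables G(z, theta_g) and D(x, theta_d) that return probabilities in (0, 1), Gaussian priors, and the dictionary weight representation from the earlier sketch; all helper names are ours.

```python
import numpy as np

def log_gaussian_prior(theta, alpha=10.0):
    # log N(theta | 0, alpha * I), up to an additive constant
    return -0.5 / alpha * sum(np.sum(w ** 2) for w in theta.values())

def log_post_generator(theta_g, z, theta_d, D, G, eps=1e-8):
    # Eq. (1): sum_i log D(G(z_i; theta_g); theta_d) + log p(theta_g | alpha_g)
    probs = D(G(z, theta_g), theta_d)
    return np.sum(np.log(probs + eps)) + log_gaussian_prior(theta_g)

def log_post_discriminator(theta_d, z, X, theta_g, D, G, eps=1e-8):
    # Eq. (2): real data scored as "real", generated data as "fake", plus the prior
    real = np.sum(np.log(D(X, theta_d) + eps))
    fake = np.sum(np.log(1.0 - D(G(z, theta_g), theta_d) + eps))
    return real + fake + log_gaussian_prior(theta_d)
```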
Marginalizing the noise. In prior work, GAN updates are implicitly conditioned on a set of noise samples z. We can instead marginalize z from our posterior updates using simple Monte Carlo:

    p(θg | θd) = ∫ p(θg, z | θd) dz = ∫ p(θg | z, θd) p(z | θd) dz ≈ (1/Jg) ∑_{j=1}^{Jg} p(θg | z^(j), θd),   z^(j) ∼ p(z),

where we have used p(z | θd) = p(z). By following a similar derivation, p(θd | θg) ≈ (1/Jd) ∑_{j=1}^{Jd} p(θd | z^(j), X, θg), z^(j) ∼ p(z).
This specific setup has several nice features for Monte Carlo integration. First, p(z) is a white noise
distribution from which we can take efficient and exact samples. Secondly, both p(θg |z, θd ) and
p(θd |z, X, θg ), when viewed as a function of z, should be reasonably broad over z by construction,
since z is used to produce candidate data samples in the generative procedure. Thus each term in the
simple Monte Carlo sum typically makes a reasonable contribution to the total marginal posterior
estimates. We do note, however, that the approximation will typically be worse for p(θd |θg ) due to
the conditioning on a minibatch of data in Equation 2.
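A hedged sketch of this simple Monte Carlo estimate, computed in log space for numerical stability; `log_post` and `sample_noise` stand for user-supplied routines and are not part of the paper's released code.

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_post_generator(theta_g, theta_d, log_post, sample_noise, J_g=10):
    """log p(theta_g | theta_d) ~= log( (1/J_g) * sum_j p(theta_g | z_j, theta_d) ),
    with z_j ~ p(z), evaluated stably via log-sum-exp."""
    log_terms = [log_post(theta_g, sample_noise(), theta_d) for _ in range(J_g)]
    return logsumexp(log_terms) - np.log(J_g)
```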
Classical GANs as maximum likelihood. Our proposed probabilistic approach is a natural
Bayesian generalization of the classical GAN: if one uses uniform priors for θg and θd , and performs iterative MAP optimization instead of posterior sampling over Eq. (1) and (2), then the local
optima will be the same as for Algorithm 1 of Goodfellow et al. [4]. We thus sometimes refer
to the classical GAN as the ML-GAN. Moreover, even with a flat prior, there is a big difference
between Bayesian marginalization over the whole posterior versus approximating this (often broad,
multimodal) posterior with a point mass as in MAP optimization (see Figure 3, Appendix).
Posterior samples. By iteratively sampling from p(θg |θd ) and p(θd |θg ) at every step of an epoch
one can, in the limit, obtain samples from the approximate posteriors over θg and θd . Having such
samples can be very useful in practice. Indeed, one can use different samples for θg to alleviate
GAN collapse and generate data samples with an appropriate level of entropy, as well as forming
a committee of generators to strengthen the discriminator. The samples for θd in turn form a
committee of discriminators which amplifies the overall adversarial signal, thereby further improving
the unsupervised learning process. Arguably, the most rigorous method to assess the utility of these
posterior samples is to examine their effect on semi-supervised learning, which is a focus of our
experiments in Section 4.
¹ For mini-batches, one must make sure the likelihood and prior are scaled appropriately. See Appendix A.1.

2.2 Semi-supervised Learning
We extend the proposed probabilistic GAN formalism to semi-supervised learning. In the semisupervised setting for K-class classification, we have access to a set of n unlabelled observations,
(i) (i)
s
{x(i) }, as well as a (typically much smaller) set of ns observations, {(xs , ys )}N
i=1 , with class
(i)
labels ys ∈ {1, . . . , K}. Our goal is to jointly learn statistical structure from both the unlabelled
and labelled examples, in order to make much better predictions of class labels for new test examples
x∗ than if we only had access to the labelled training inputs.
In this context, we redefine the discriminator such that D(x(i) = y (i) ; θd ) gives the probability that
sample x(i) belongs to class y (i) . We reserve the class label 0 to indicate that a data sample is the
output of the generator. We then infer the posterior over the weights as follows:
    p(θg | z, θd) ∝ ( ∏_{i=1}^{ng} ∑_{y=1}^{K} D(G(z^(i); θg) = y; θd) ) p(θg | αg)                                                     (3)

    p(θd | z, X, ys, θg) ∝ ∏_{i=1}^{nd} ∑_{y=1}^{K} D(x^(i) = y; θd) × ∏_{i=1}^{ng} D(G(z^(i); θg) = 0; θd) × ∏_{i=1}^{Ns} D(x_s^(i) = y_s^(i); θd) × p(θd | αd)     (4)

During every iteration we use ng samples from the generator, nd unlabeled samples, and all of the Ns labeled samples, where typically Ns ≪ n. As in Section 2.1, we can approximately marginalize z using simple Monte Carlo sampling.
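The sketch below illustrates the structure of Eq. (4), under the assumption that the discriminator returns an (n, K+1) matrix of class probabilities with column 0 reserved for generated samples; the function and argument names are our own.

```python
import numpy as np

def log_post_discriminator_ss(theta_d, theta_g, z, X_unlab, X_lab, y_lab,
                              D, G, log_prior, eps=1e-8):
    """Unnormalized log of Eq. (4). D(x, theta_d) is assumed to return an
    (n, K+1) matrix of class probabilities, with column 0 meaning 'generated'."""
    unlab = np.sum(np.log(D(X_unlab, theta_d)[:, 1:].sum(axis=1) + eps))  # any real class
    fake = np.sum(np.log(D(G(z, theta_g), theta_d)[:, 0] + eps))          # class 0
    lab_probs = D(X_lab, theta_d)[np.arange(len(y_lab)), y_lab]           # correct labels
    lab = np.sum(np.log(lab_probs + eps))
    return unlab + fake + lab + log_prior(theta_d)
```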
Much like in the unsupervised learning case, we can marginalize the posteriors over θg and θd . To
compute the predictive distribution for a class label y∗ at a test input x∗ we use a model average over
all collected samples with respect to the posterior over θd :
    p(y∗ | x∗, D) = ∫ p(y∗ | x∗, θd) p(θd | D) dθd ≈ (1/T) ∑_{k=1}^{T} p(y∗ | x∗, θd^(k)),   θd^(k) ∼ p(θd | D).     (5)
We will see that this model average is effective for boosting semi-supervised learning performance.
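A small sketch of the model average in Eq. (5); dropping the reserved class 0 and renormalizing over the K real classes is our convention for turning the discriminator into a classifier, not a detail stated in the text.

```python
import numpy as np

def predict_bma(x_star, theta_d_samples, D):
    """Eq. (5): average the (K+1)-way predictive probabilities over posterior
    samples of theta_d, then drop the 'generated' class and renormalize."""
    probs = np.mean([D(x_star, theta_d) for theta_d in theta_d_samples], axis=0)
    real = probs[:, 1:]                      # discard class 0 ("fake")
    return real / real.sum(axis=1, keepdims=True)

# Example use: y_hat = predict_bma(x_test, collected_theta_d, D).argmax(axis=1) + 1
```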
In Section 3 we present an approach to MCMC sampling from the posteriors over θg and θd .
3 Posterior Sampling with Stochastic Gradient HMC
In the Bayesian GAN, we wish to marginalize the posterior distributions over the generator and
discriminator weights, for unsupervised learning in 2.1 and semi-supervised learning in 2.2. For this
purpose, we use Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [3] for posterior sampling.
The reason for this choice is three-fold: (1) SGHMC is very closely related to momentum-based
SGD, which we know empirically works well for GAN training; (2) we can import parameter settings
(such as learning rates and momentum terms) from SGD directly into SGHMC; and most importantly,
(3) many of the practical benefits of a Bayesian approach to GAN inference come from exploring
a rich multimodal distribution over the weights θg of the generator, which is enabled by SGHMC.
Alternatives, such as variational approximations, will typically centre their mass around a single
mode, and thus provide a unimodal and overly compact representation for the distribution, due to
asymmetric biases of the KL-divergence.
The posteriors in Equations 3 and 4 are both amenable to HMC techniques as we can compute the
gradients of the loss with respect to the parameters we are sampling. SGHMC extends HMC to the
case where we use noisy estimates of such gradients in a manner which guarantees mixing in the
limit of a large number of minibatches. For a detailed review of SGHMC, please see Chen et al. [3].
Using the update rules from Eq. (15) in Chen et al. [3], we propose to sample from the posteriors
over the generator and discriminator weights as in Algorithm 1. Note that Algorithm 1 outlines
standard momentum-based SGHMC: in practice, we found it helpful to speed up the “burn-in” process
by replacing the SGD part of this algorithm with Adam for the first few thousand iterations, after
which we revert back to momentum-based SGHMC. As suggested in Appendix G of Chen et al. [3],
we employed a learning rate schedule which decayed according to γ/d where d is set to the number
of unique “real” datapoints seen so far. Thus, our learning rate schedule converges to γ/N in the
limit, where we have defined N = |D|.
Algorithm 1 One iteration of sampling for the Bayesian GAN. α is the friction term for SGHMC, η is the learning rate. We assume that the stochastic gradient discretization noise term β̂ is dominated by the main friction term (this assumption constrains us to use small step sizes). We take Jg and Jd simple MC samples for the generator and discriminator respectively, and M SGHMC samples for each simple MC sample. We rescale to accommodate minibatches as in Appendix A.1.

• Represent posteriors with samples {θg^{j,m}}_{j=1,m=1}^{Jg,M} and {θd^{j,m}}_{j=1,m=1}^{Jd,M} from the previous iteration.
for number of MC iterations Jg do
    • Sample Jg noise samples {z^(1), . . . , z^(Jg)} from the noise prior p(z). Each z^(i) has ng samples.
    • Update the sample set representing p(θg | θd) by running SGHMC updates for M iterations:
        θg^{j,m} ← θg^{j,m} + v;   v ← (1 − α)v + η ∑_{i=1}^{Jg} ∑_{k=1}^{Jd} ∂ log p(θg | z^(i), θd^{k,m}) / ∂θg + n;   n ∼ N(0, 2αηI)
    • Append θg^{j,m} to the sample set.
end for
for number of MC iterations Jd do
    • Sample a minibatch of Jd noise samples {z^(1), . . . , z^(Jd)} from the noise prior p(z).
    • Sample a minibatch of nd data samples x.
    • Update the sample set representing p(θd | z, θg) by running SGHMC updates for M iterations:
        θd^{j,m} ← θd^{j,m} + v;   v ← (1 − α)v + η ∑_{i=1}^{Jd} ∑_{k=1}^{Jg} ∂ log p(θd | z^(i), x, θg^{k,m}) / ∂θd + n;   n ∼ N(0, 2αηI)
    • Append θd^{j,m} to the sample set.
end for
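For concreteness, one SGHMC update of Algorithm 1 for a single weight vector might look as follows; `grad_log_post` is an assumed routine returning the (stochastic) gradient of the log conditional posterior, and the step sizes are placeholders rather than the settings used in the experiments.

```python
import numpy as np

def sghmc_step(theta, v, grad_log_post, eta=1e-4, alpha=0.01, rng=None):
    """One SGHMC update in the order written in Algorithm 1:
    theta <- theta + v, then v <- (1-alpha)*v + eta*grad + n with n ~ N(0, 2*alpha*eta*I)."""
    rng = rng or np.random.default_rng()
    theta = theta + v
    noise = np.sqrt(2.0 * alpha * eta) * rng.standard_normal(theta.shape)
    v = (1.0 - alpha) * v + eta * grad_log_post(theta) + noise
    return theta, v
```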
4 Experiments
We evaluate our proposed Bayesian GAN (henceforth titled BayesGAN) on six benchmarks (synthetic,
MNIST, CIFAR-10, SVHN, and CelebA) each with four different numbers of labelled examples. We
consider multiple alternatives, including: the DCGAN [9], the recent Wasserstein GAN (W-DCGAN)
[1], an ensemble of ten DCGANs (DCGAN-10) which are formed by 10 random subsets 80% the
size of the training set, and a fully supervised convolutional neural network. We also compare to the
reported MNIST result for the LFVI-GAN, briefly mentioned in a recent pre-print [11], where they
use fully supervised modelling on the whole dataset with a variational approximation. We interpret
many of the results from MNIST in detail in Section 4.2, and find that these observations carry
forward to our CIFAR-10, SVHN, and CelebA experiments.
For all real experiments we use a 5-layer Bayesian deconvolutional GAN (BayesGAN) for the generative model G(z|θg ) (see Radford et al. [9] for further details about structure). The corresponding
discriminator is a 5-layer 2-class DCGAN for the unsupervised GAN and a 5-layer, K + 1 class
DCGAN for a semi-supervised GAN performing classification over K classes. The connectivity
structure of the unsupervised and semi-supervised DCGANs were the same as for the BayesGAN.
Note that the structure of the networks in Radford et al. [9] are slightly different from [10] (e.g. there
are 4 hidden layers and fewer filters per layer), so one cannot directly compare the results here with
those in Salimans et al. [10]. Our exact architecture specification is also given in our codebase. The
performance of all methods could be improved through further calibrating architecture design for
each individual benchmark. For the Bayesian GAN we place a N (0, 10I) prior on both the generator
and discriminator weights and approximately integrate out z using simple Monte Carlo samples. We
run Algorithm 1 for 5000 iterations and then collect weight samples every 1000 iterations and record
out-of-sample predictive accuracy using Bayesian model averaging (see Eq. 5). For Algorithm 1
we set Jg = 10, Jd = 1, M = 2, and nd = ng = 64. All experiments were performed on a single
TitanX GPU for consistency, but BayesGAN and DCGAN-10 could be sped up to approximately the
same runtime as DCGAN through multi-GPU parallelization.
In Table 1 we summarize the semi-supervised results, where we see consistently improved performance over the alternatives. All runs are averaged over 10 random subsets of labeled examples for
estimating error bars on performance (Table 1 shows mean and 2 standard deviations). We also
qualitatively illustrate the ability for the Bayesian GAN to produce complementary sets of data
samples, corresponding to different representations of the generator produced by sampling from the
posterior over the generator weights (Figures 1, 2, 6). The supplement also contains additional plots
of accuracy per epoch and accuracy vs runtime for semi-supervised experiments. We emphasize
that all of the alternatives required the special techniques described in Salimans et al. [10] such as
mini-batch discrimination, whereas the proposed Bayesian GAN needed none of these techniques.
4.1 Synthetic Dataset
We present experiments on a multi-modal synthetic dataset to test the ability to infer a multi-modal
posterior p(θg |D). This ability not only helps avoid the collapse of the generator to a couple training
examples, an instance of overfitting in regular GAN training, but also allows one to explore a set of
generators with different complementary properties, harmonizing together to encapsulate a rich data
distribution. We generate D-dimensional synthetic data as follows:

    z ∼ N(0, 10 · I_d),   A ∼ N(0, I_{D×d}),   x = Az + ε,   ε ∼ N(0, 0.01 · I_D),   d ≪ D.
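A minimal sketch of this generative process (the sample count, seed, and helper name are arbitrary choices of ours):

```python
import numpy as np

def make_synthetic(n, D=100, d=2, noise_std=0.1, seed=0):
    """x = A z + eps with z ~ N(0, 10 I_d), A ~ N(0, I_{D x d}), eps ~ N(0, 0.01 I_D)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((D, d))
    z = np.sqrt(10.0) * rng.standard_normal((n, d))
    eps = noise_std * rng.standard_normal((n, D))
    return z @ A.T + eps

X = make_synthetic(n=10000)   # 10k samples with inherent dimensionality d = 2
```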
We then fit both a regular GAN and a Bayesian GAN to such a dataset with D = 100 and d = 2. The
generator for both models is a two-layer neural network: 10-1000-100, fully connected, with ReLU
activations. We set the dimensionality of z to be 10 in order for the DCGAN to converge (it does not
converge when d = 2, despite the inherent dimensionality being 2!). Consistently, the discriminator
network has the following structure: 100-1000-1, fully-connected, ReLU activations. For this dataset
we place an N (0, I) prior on the weights of the Bayesian GAN and approximately integrate out z
using J = 100 Monte-Carlo samples. Figure 1 shows that the Bayesian GAN does a much better
job qualitatively in generating samples (for which we show the first two principal components), and
quantitatively in terms of Jensen-Shannon divergence (JSD) to the true distribution (determined
through kernel density estimates). In fact, the DCGAN (labelled ML GAN as per Section 2) begins to
eventually increase in testing JSD after a certain number of training iterations, which is reminiscent
of over-fitting. When D = 500, we still see good performance with the Bayesian GAN. We also see,
with multidimensional scaling [2], that samples from the posterior over Bayesian generator weights
clearly form multiple distinct clusters, indicating that the SGHMC sampling is exploring multiple
distinct modes, thus capturing multimodality in weight space as well as in data space.
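A hedged sketch of how such a JSD estimate can be formed from two sample sets after dimensionality reduction, using Gaussian kernel density estimates; the exact estimator used by the authors is not specified beyond the description above.

```python
import numpy as np
from scipy.stats import gaussian_kde

def jsd_from_samples(x_p, x_q, eps=1e-12):
    """Monte Carlo estimate of the Jensen-Shannon divergence between two sample
    sets (rows = samples, already reduced to 2-D), with Gaussian KDEs standing
    in for the two densities."""
    p, q = gaussian_kde(x_p.T), gaussian_kde(x_q.T)
    def kl_to_mixture(samples, kde_a, kde_b):
        a = kde_a(samples.T) + eps
        m = 0.5 * (kde_a(samples.T) + kde_b(samples.T)) + eps
        return np.mean(np.log(a / m))          # KL(A || (A+B)/2) estimated on A's samples
    return 0.5 * kl_to_mixture(x_p, p, q) + 0.5 * kl_to_mixture(x_q, q, p)
```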
4.2 MNIST
MNIST is a well-understood benchmark dataset consisting of 60k (50k train, 10k test) labeled images
of hand-written digits. Salimans et al. [10] showed excellent out-of-sample performance using only
a small number of labeled inputs, convincingly demonstrating the importance of good generative
modelling for semi-supervised learning. Here, we follow their experimental setup for MNIST.
We evaluate the Bayesian DCGAN for semi-supervised learning using Ns = {20, 50, 100, 200}
labelled training examples. We see in Table 1 that the Bayesian GAN has improved accuracy over the
DCGAN, the Wasserstein GAN, and even an ensemble of 10 DCGANs! Moreover, it is remarkable
that the Bayesian GAN with only 100 labelled training examples (0.2% of the training data) is able to
achieve 99.3% testing accuracy, which is comparable with a state of the art fully supervised method
using all 50, 000 training examples! We show a fully supervised model using ns samples to generally
highlight the practical utility of semi-supervised learning.
Moreover, Tran et al. [11] showed that a fully supervised LFVI-GAN, on the whole MNIST training
set (50, 000 labelled examples) produces 140 classification errors – almost twice the error of our
proposed Bayesian GAN approach using only ns = 100 (0.2%) labelled examples! We suspect
this difference largely comes from (1) the simple practical formulation of the Bayesian GAN in
Section 2, (2) marginalizing z via simple Monte Carlo, and (3) exploring a broad multimodal
posterior distribution over the generator weights with SGHMC with our approach versus a variational
approximation (prone to over-compact representations) centred on a single mode.
We can also see qualitative differences in the unsupervised data samples from our Bayesian DCGAN
and the standard DCGAN in Figure 2. The top row shows sample images produced from six generators
Figure 1: Left: Samples drawn from pdata (x) and visualized in 2-D after applying PCA. Right 2 columns:
Samples drawn from pMLGAN (x) and pBGAN (x) visualized in 2D after applying PCA. The data is inherently
2-dimensional so PCA can explain most of the variance using 2 principal components. It is clear that the
Bayesian GAN is capturing all the modes in the data whereas the regular GAN is unable to do so. Right:
(Top 2) Jensen-Shannon divergence between pdata (x) and p(x; θ) as a function of the number of iterations of
GAN training for D = 100 (top) and D = 500 (bottom). The divergence is computed using kernel density
estimates of large sample datasets drawn from pdata (x) and p(x; θ), after applying dimensionality reduction
to 2-D (the inherent dimensionality of the data). In both cases, the Bayesian GAN is far more effective at
minimizing the Jensen-Shannon divergence, reaching convergence towards the true distribution, by exploring
a full distribution over generator weights, which is not possible with a maximum likelihood GAN (no matter
how many iterations). (Bottom) The sample set {θgk } after convergence viewed in 2-D using Multidimensional
Scaling (using a Euclidean distance metric between weight samples) [2]. One can clearly see several clusters,
meaning that the SGHMC sampling has discovered pronounced modes in the posterior over the weights.
produced from six samples over the posterior of the generator weights, and the bottom row shows
sample data images from a DCGAN. We can see that each of the six panels in the top row have
qualitative differences, almost as if a different person were writing the digits in each panel. Panel
1 (top left), for example, is quite crisp, while panel 3 is fairly thick, and panel 6 (top right) has
thin and fainter strokes. In other words, the Bayesian GAN is learning different complementary
generative hypotheses to explain the data. By contrast, all of the data samples on the bottom row
from the DCGAN are homogenous. In effect, each posterior weight sample in the Bayesian GAN
corresponds to a different style, while in the standard DCGAN the style is fixed. This difference
is further illustrated for all datasets in Figure 6 (supplement). Figure 3 (supplement) also further
emphasizes the utility of Bayesian marginalization versus optimization, even with vague priors.
However, we do not necessarily expect high fidelity images from any arbitrary generator sampled
from the posterior over generators; in fact, such a generator would probably have less posterior
probability than the DCGAN, which we show in Section 2 can be viewed as a maximum likelihood
analogue of our approach. The advantage in the Bayesian approach comes from representing a whole
space of generators alongside their posterior probabilities.
Practically speaking, we also stress that for convergence of the maximum-likelihood DCGAN we had
to resort to using tricks including minibatch discrimination, feature normalization and the addition of
Gaussian noise to each layer of the discriminator. The Bayesian DCGAN needed none of these tricks.
Table 1: Detailed supervised and semi-supervised learning results for all datasets. In almost all experiments
BayesGAN outperforms DCGAN and W-DCGAN substantially, and typically even outperforms ensembles of
DCGANs. The runtimes, per epoch, in minutes, are provided in rows including the dataset name. While all
experiments were performed on a single GPU, note that DCGAN-10 and BayesGAN methods can be sped up
straightforwardly using multiple GPUs to obtain a similar runtime to DCGAN. Note also that the BayesGAN is
generally much more efficient per epoch than the alternatives, as per Figure 4. Results are averaged over 10
random supervised subsets ± 2 stdev. Standard train/test splits are used for MNIST, CIFAR-10 and SVHN. For
CelebA we use a test set of size 10k. Test error rates are across the entire test set.
Ns       Supervised     DCGAN          W-DCGAN        DCGAN-10       BayesGAN
         (No. of misclassifications for MNIST; test error rate for the others.)

MNIST  (N = 50k, D = (28, 28))        runtime per epoch (min), DCGAN / W-DCGAN / DCGAN-10 / BayesGAN: 14 / 15 / 114 / 32
20       —              1823 ± 412     1687 ± 387     1087 ± 564     1432 ± 487
50       —              453 ± 110      490 ± 170      189 ± 103      332 ± 172
100      2134 ± 525     128 ± 11       156 ± 17       97 ± 8.2       79 ± 5.8
200      1389 ± 438     95 ± 3.2       91 ± 5.2       78 ± 2.8       74 ± 1.4

CIFAR-10  (N = 50k, D = (32, 32, 3))  runtime per epoch (min): 18 / 19 / 146 / 68
1000     63.4 ± 2.6     58.2 ± 2.8     57.1 ± 2.4     31.1 ± 2.5     32.7 ± 5.2
2000     56.1 ± 2.1     47.5 ± 4.1     49.8 ± 3.1     29.2 ± 1.2     26.2 ± 4.8
4000     51.4 ± 2.9     40.1 ± 3.3     38.1 ± 2.9     27.4 ± 3.2     23.4 ± 3.7
8000     47.2 ± 2.2     29.3 ± 2.8     27.4 ± 2.5     25.5 ± 2.4     21.1 ± 2.5

SVHN  (N = 75k, D = (32, 32, 3))      runtime per epoch (min): 29 / 31 / 217 / 81
500      53.5 ± 2.5     31.2 ± 1.8     29.4 ± 1.8     27.1 ± 2.2     22.5 ± 3.2
1000     37.3 ± 3.1     25.5 ± 3.3     25.1 ± 2.6     18.3 ± 1.7     12.9 ± 2.5
2000     26.3 ± 2.1     22.4 ± 1.8     23.3 ± 1.2     16.7 ± 1.8     11.3 ± 2.4
4000     20.8 ± 1.8     20.4 ± 1.2     19.4 ± 0.9     14.0 ± 1.4     8.7 ± 1.8

CelebA  (N = 100k, D = (50, 50, 3))   runtime per epoch (min): 103 / 98 / 649 / 329
1000     53.8 ± 4.2     52.3 ± 4.2     51.2 ± 5.4     47.3 ± 3.5     33.4 ± 4.7
2000     36.7 ± 3.2     37.8 ± 3.4     39.6 ± 3.5     31.2 ± 1.8     31.8 ± 4.3
4000     34.3 ± 3.8     31.5 ± 3.2     30.1 ± 2.8     29.3 ± 1.5     29.4 ± 3.4
8000     31.1 ± 4.2     29.5 ± 2.8     27.6 ± 4.2     26.4 ± 1.1     25.3 ± 2.4
This robustness arises from a Gaussian prior over the weights which provides a useful inductive bias,
and due to the MCMC sampling procedure which alleviates the risk of collapse and helps explore
multiple modes (and uncertainty within each mode). To be balanced, we also stress that in practice the
risk of collapse is not fully eliminated – indeed, some samples from p(θg |D) still produce generators
that create data samples with too little entropy. In practice, sampling is not immune to becoming
trapped in sharply peaked modes. We leave further analysis for future work.
Figure 2: Top: Data samples from six different generators corresponding to six samples from the posterior over
θg . The data samples show that each explored setting of the weights θg produces generators with complementary
high-fidelity samples, corresponding to different styles. The amount of variety in the samples emerges naturally
using the Bayesian approach. Bottom: Data samples from a standard DCGAN (trained six times). By contrast,
these samples are homogenous in style.
4.3 CIFAR-10
CIFAR-10 is also a popular benchmark dataset [7], with 50k training and 10k test images, which is
harder to model than MNIST since the data are 32x32 RGB images of real objects. Figure 6 shows
datasets produced from four different generators corresponding to samples from the posterior over
the generator weights. As with MNIST, we see meaningful qualitative variation between the panels.
In Table 1 we also see again (but on this more challenging dataset) that using Bayesian GANs as a
generative model within the semi-supervised learning setup significantly decreases test set error over
the alternatives, especially when ns ≪ n.
4.4 SVHN
The StreetView House Numbers (SVHN) dataset consists of RGB images of house numbers taken
by StreetView vehicles. Unlike MNIST, the digits significantly differ in shape and appearance. The
experimental procedure closely followed that for CIFAR-10. There are approximately 75k training
and 25k test images. We see in Table 1 a particularly pronounced difference in performance between
BayesGAN and the alternatives. Data samples are shown in Figure 6.
4.5 CelebA
The large CelebA dataset contains 120k celebrity faces amongst a variety of backgrounds (100k
training, 20k test images). To reduce background variations we used a standard face detector [12] to
crop the faces into a standard 50 × 50 size. Figure 6 shows data samples from the trained Bayesian
GAN. In order to assess performance for semi-supervised learning we created a 32-class classification
task by predicting a 5-bit vector indicating whether or not the face: is blond, has glasses, is male, is
pale and is young. Table 1 shows the same pattern of promising performance for CelebA.
5 Discussion
By exploring rich multimodal distributions over the weight parameters of the generator, the Bayesian
GAN can capture a diverse set of complementary and interpretable representations of data. We have
shown that such representations can enable state of the art performance on semi-supervised problems,
using a simple inference procedure.
Effective semi-supervised learning of natural high dimensional data is crucial for reducing the
dependency of deep learning on large labelled datasets. Often labeling data is not an option, or
it comes at a high cost – be it through human labour or through expensive instrumentation (such
as LIDAR for autonomous driving). Moreover, semi-supervised learning provides a practical and
quantifiable mechanism to benchmark the many recent advances in unsupervised learning.
Although we use MCMC, in recent years variational approximations have been favoured for inference
in Bayesian neural networks. However, the likelihood of a deep neural network can be broad with
many shallow local optima. This is exactly the type of density which is amenable to a sampling based
approach, which can explore a full posterior. Variational methods, by contrast, typically centre their
approximation along a single mode and also provide an overly compact representation of that mode.
Therefore in the future we may generally see advantages in following a sampling based approach in
Bayesian deep learning. Aside from sampling, one could try to better accommodate the likelihood
functions common to deep learning using more general divergence measures (for example based on
the family of α-divergences) instead of the KL divergence in variational methods, alongside more
flexible proposal distributions.
In the future, one could also estimate the marginal likelihood of a probabilistic GAN, having integrated
away distributions over the parameters. The marginal likelihood provides a natural utility function for
automatically learning hyperparameters, and for performing principled quantifiable model comparison
between different GAN architectures. It would also be interesting to consider the Bayesian GAN in
conjunction with a non-parametric Bayesian deep learning framework, such as deep kernel learning
[13, 14]. We hope that our work will help inspire continued exploration into Bayesian deep learning.
Acknowledgements We thank Pavel Izmailov for helping to create a tutorial for the codebase and
helpful comments, and Soumith Chintala for helpful advice, and NSF IIS-1563887 for support.
References

[1] Arjovsky, M., Chintala, S., and Bottou, L. (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875.
[2] Borg, I. and Groenen, P. J. (2005). Modern multidimensional scaling: Theory and applications.
Springer Science & Business Media.
[3] Chen, T., Fox, E., and Guestrin, C. (2014). Stochastic gradient Hamiltonian Monte Carlo. In
Proc. International Conference on Machine Learning.
[4] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.,
and Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing
systems, pages 2672–2680.
[5] Karaletsos, T. (2016). Adversarial message passing for graphical models. arXiv preprint
arXiv:1612.05048.
[6] Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114.
[7] Krizhevsky, A., Nair, V., and Hinton, G. (2010). Cifar-10 (Canadian institute for advanced
research).
[8] Nowozin, S., Cseke, B., and Tomioka, R. (2016). f-GAN: Training generative neural samplers
using variational divergence minimization. In Advances in Neural Information Processing Systems,
pages 271–279.
[9] Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep
convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
[10] Salimans, T., Goodfellow, I. J., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016).
Improved techniques for training gans. CoRR, abs/1606.03498.
[11] Tran, D., Ranganath, R., and Blei, D. M. (2017). Deep and hierarchical implicit models. CoRR,
abs/1702.08896.
[12] Viola, P. and Jones, M. J. (2004). Robust real-time face detection. Int. J. Comput. Vision,
57(2):137–154.
[13] Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016a). Deep kernel learning.
Artificial Intelligence and Statistics.
[14] Wilson, A. G., Hu, Z., Salakhutdinov, R. R., and Xing, E. P. (2016b). Stochastic variational
deep kernel learning. In Advances in Neural Information Processing Systems, pages 2586–2594.
A Supplementary Material
In this supplementary material, we provide (1) futher details of the MCMC updates, (2) illustrate a
tutorial figure, (3) show data samples from the Bayesian GAN for SVHN, CIFAR-10, and CelebA,
and (4) give performance results as a function of iteration and runtime.
A.1 Rescaling conditional posteriors to accommodate mini-batches
The key updates in Algorithm 1 involve iteratively computing log p(θg |z, θd ) and log p(θd |z, X, θg ),
or log p(θd |z, X, Ds , θg ) for the semi-supervised learning case (where we have defined the supervised
dataset of size Ns as Ds ). When Equations (1) and (2) are evaluated on a minibatch of data, it is
necessary to scale the likelihood as follows:
    log p(θg | z, θd) = (N/ng) ( ∑_{i=1}^{ng} log D(G(z^(i); θg); θd) ) + log p(θg | αg) + constant.     (6)
For example, as the total number of training points N increases, the likelihood should dominate the
prior. The re-scaling of the conditional posterior over θd , as well as the semi-supervised objectives,
follow similarly.
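A sketch of Eq. (6) with the N/ng rescaling made explicit; the callables are assumed helpers in the spirit of the earlier sketches rather than functions from the released code.

```python
import numpy as np

def scaled_log_post_generator(theta_g, z_batch, theta_d, D, G, log_prior,
                              N, n_g, eps=1e-8):
    """Eq. (6): the minibatch log-likelihood is multiplied by N / n_g so that,
    as the total number of training points N grows, the likelihood dominates the prior."""
    loglik = np.sum(np.log(D(G(z_batch, theta_g), theta_d) + eps))
    return (N / n_g) * loglik + log_prior(theta_g)
```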
A.2 Additional Results
Figure 3: We illustrate a multimodal posterior over the parameters of the generator. Each setting of
these parameters corresponds to a different generative hypothesis for the data. We show here samples
generated for two different settings of this weight vector, corresponding to different writing styles.
The Bayesian GAN retains this whole distribution over parameters. By contrast, a standard GAN
represents this whole distribution with a point estimate (analogous to a single maximum likelihood
solution), missing potentially compelling explanations for the data.
Figure 4: Test accuracy as a function of iteration number. We can see that after about 1000 SG-HMC
iterations, the sampler is mixing reasonably well. We also see that per iteration the Bayesian GAN
with SG-HMC is learning the data distribution more efficiently than the alternatives.
Figure 5: Test accuracy as a function of wall-clock time.
CIFAR10
SVHN
CelebA
Figure 6: Data samples for the CIFAR10, SVHN and CelebA datasets from four different generators
created using four different samples from the posterior over θg . Each panel corresponding to a
different θg has different qualitative properties, showing the complementary nature of the different
aspects of the distribution learned using a fully probabilistic approach.
Figure 7: A larger set of data samples for CIFAR10 from four different generators created using four
different samples from the posterior over θg . Each panel corresponding to a different θg has different
qualitative properties, showing the complementary nature of the different aspects of the distribution
learned using a fully probabilistic approach.
Figure 8: A larger set of data samples for SVHN from four different generators created using four
different samples from the posterior over θg . Each panel corresponding to a different θg has different
qualitative properties, showing the complementary nature of the different aspects of the distribution
learned using a fully probabilistic approach.
Figure 9: A larger set of data samples for CelebA from four different generators created using four
different samples from the posterior over θg . Each panel corresponding to a different θg has different
qualitative properties, showing the complementary nature of the different aspects of the distribution
learned using a fully probabilistic approach.
| 1 |
Online Algorithms for Multi-Level Aggregation∗
Marcin Bienkowski†
Christoph Dürr¶
Martin Böhm‡
Jaroslaw Byrka†
Marek Chrobak§
Lukáš Folwarczný‡
Lukasz Jeż†
Jiřı́ Sgall‡
Nguyen Kim Thangk
Pavel Veselý‡
arXiv:1507.02378v3 [] 28 Dec 2016
December 30, 2016
Abstract
In the Multi-Level Aggregation Problem (MLAP), requests arrive at the nodes of an edgeweighted tree T , and have to be served eventually. A service is defined as a subtree X
of T that contains its root. This subtree X serves all requests that are pending in the
nodes of X, and the cost of this service is equal to the total weight of X. Each request
also incurs waiting cost between its arrival and service times. The objective is to minimize
the total waiting cost of all requests plus the total cost of all service subtrees. MLAP is a
generalization of some well-studied optimization problems; for example, for trees of depth 1,
MLAP is equivalent to the TCP Acknowledgment Problem, while for trees of depth 2, it is
equivalent to the Joint Replenishment Problem. Aggregation problems for trees of arbitrary
depth arise in multicasting, sensor networks, communication in organization hierarchies, and
in supply-chain management. The instances of MLAP associated with these applications are
naturally online, in the sense that aggregation decisions need to be made without information
about future requests.
Constant-competitive online algorithms are known for MLAP with one or two levels. However, it has been open whether there exist constant competitive online algorithms for trees
of depth more than 2. Addressing this open problem, we give the first constant competitive
online algorithm for trees of arbitrary (fixed) depth. The competitive ratio is O(D4 2D ), where
D is the depth of T . The algorithm works for arbitrary waiting cost functions, including the
variant with deadlines. We include several additional results in the paper. We show that a
standard lower-bound technique for MLAP, based on so-called Single-Phase instances, cannot
give super-constant lower bounds (as a function of the tree depth). This result is established
by giving an online algorithm with optimal competitive ratio 4 for such instances on arbitrary
trees. We prove that, in the offline case, these instances can be solved to optimality in polynomial time. We also study the MLAP variant when the tree is a path, for which we give a lower
bound of 4 on the competitive ratio, improving the lower bound known for general MLAP.
We complement this with a matching upper bound for the deadline setting. In addition, for
arbitrary trees, we give a simple 2-approximation algorithm for offline MLAP with deadlines.
1 Introduction
Certain optimization problems can be formulated as aggregation problems. They typically arise
when expensive resources can be shared by multiple agents, who incur additional expenses for
∗ Research
partially supported by NSF grants CCF-1536026, CCF-1217314 and OISE-1157129, Polish NCN
grants DEC-2013/09/B/ST6/01538, 2015/18/E/ST6/00456, project 14-10003S of GA ČR and GAUK project
548214.
† Institute of Computer Science, University of Wroclaw, Poland
‡ Computer Science Institute, Charles University, Czech Republic
§ Department of Computer Science, University of California at Riverside, USA
¶ Sorbonne Universités, UPMC Univ Paris 06, CNRS, LIP6, Paris, France
k IBISC, Université d’Evry Val d’Essonne, France
accessing a resource. For example, costs may be associated with waiting until the resource is
accessible, or, if the resource is not in the desired state, a costly setup or retooling may be
required.
1-level aggregation. A simple example of an aggregation problem is the TCP Acknowledgment
Problem (TCP-AP), where control messages (“agents”) waiting for transmission across a network
link can be aggregated and transmitted in a single packet (“resource”). Such aggregation can
reduce network traffic, but it also results in undesirable delays. A reasonable compromise is to
balance the two costs, namely the number of transmitted packets and the total delay, by minimizing
their weighted sum [17]. Interestingly, TCP-AP is equivalent to the classical Lot Sizing Problem
studied in the operations research literature since the 1950s. (See, for example, [33].) In the offline
variant of TCP-AP, that is when all arrival times of control messages are known beforehand, an
optimal schedule for aggregated packets can be computed with dynamic programming in time
O(n log n) [1]. In practice, however, packet aggregation decisions must be done on the fly, without
any information about future message releases. This scenario is captured by the online variant
of TCP-AP that has also been well studied; it is known that the optimal competitive ratio is 2
in the deterministic case [17] and e/(e − 1) ≈ 1.582 in the randomized case [20, 13, 31]. Online
variants of TCP-AP that use different assumptions or objective functions were also examined in
the literature [18, 2].
2-level aggregation. Another optimization problem involving aggregation is the Joint Replenishment Problem (JRP), well-studied in operations research. JRP models tradeoffs that arise in
supply-chain management. One such scenario involves optimizing shipments of goods from a supplier to retailers, through a shared warehouse, in response to their demands. In JRP, aggregation
takes place at two levels: items addressed to different retailers can be shipped together to the
warehouse, at a fixed cost, and then multiple items destined to the same retailer can be shipped
from the warehouse to this retailer together, also at a fixed cost, which can be different for different retailers. Pending demands accrue waiting cost until they are satisfied by a shipment. The
objective is to minimize the sum of all shipment costs and all waiting costs.
JRP is known to be NP-hard [3], and even APX-hard [28, 7]. The currently best approximation,
due to Bienkowski et al. [8], achieves a factor of 1.791, improving on earlier work by Levi et al. [24,
26, 27]. In the deadline variant of JRP, denoted JRP-D, there is no cost for waiting, but each
demand needs to be satisfied before its deadline. As shown in [7], JRP-D can be approximated
with ratio 1.574.
For the online variant of JRP, Buchbinder et al. [12] gave a 3-competitive algorithm using a
primal-dual scheme (improving an earlier bound of 5 in [11]) and proved a lower bound of 2.64,
that was subsequently improved to 2.754 [8]. The optimal competitive ratio for JRP-D is 2 [8].
Multiple-level aggregation. TCP-AP and JRP can be thought of as aggregation problems on
edge-weighted trees of depth 1 and 2, respectively. In TCP-AP, this tree is just a single edge
between the sender and the recipient. In JRP, this tree consists of the root (supplier), with one
child (warehouse), and any number of grandchildren (retailers). A shipment can be represented by
a subtree of this tree and edge weights represent shipping costs. These trees capture the general
problem on trees of depth 1 and 2, as the children of the root can be considered separately (see
Section 2).
This naturally extends to trees of any depth D, where aggregation is allowed at each level.
Multi-level message aggregation has been, in fact, studied in communication networks in several
contexts. In multicasting, protocols for aggregating control messages (see [10, 4], for example)
can be used to reduce the so-called ack-implosion, the proliferation of control messages routed
to the source. A similar problem arises in energy-efficient data aggregation and fusion in sensor
networks [19, 34]. Outside of networking, tradeoffs between the cost of communication and delay
arise in message aggregation in organizational hierarchies [29]. In supply-chain management,
multi-level variants of lot sizing have been studied [16, 22]. The need to consider more tree-like
(in a broad sense) supply hierarchies has also been advocated in [23].
These applications have inspired research on offline and online approximation algorithms for
multi-level aggregation problems. Becchetti et al. [5] gave a 2-approximation algorithm for the
deadline case. (See also [11].) Pedrosa [30] showed, adapting an algorithm of Levi et al. [25] for the
multi-stage assembly problem, that there is a (2 + ε)-approximation algorithm for general waiting
cost functions, where ε can be made arbitrarily small.
In the online case, Khanna et al. [21] gave a rent-or-buy solution (that serves a group of
requests once their waiting cost reaches the cost of their service) and showed that their algorithm
is O(log α)-competitive, where α is defined as the sum of all edge weights. However, they assumed
that each request has to wait at least one time unit. This assumption is crucial for their proof, as
demonstrated by Brito et al. [11], who showed that the competitive ratio of a rent-or-buy strategy
is Ω(D), even for paths with D edges. The same assumption of a minimal cost for a request and
a ratio dependent on the edge-weights is also essential in the work of Vaya [32], who studies a
variant of the problem with bounded bandwidth (the number of packets that can be served by a
single edge in a single service).
The existence of a primal-dual (2 + ε)-approximation algorithm [30, 25] for the offline problem
suggests the possibility of constructing an online algorithm along the lines of [13]. Nevertheless,
despite substantial effort of many researchers, the online multi-level setting remains wide open.
This is perhaps partly due to impossibility of direct emulation of the cleanup phase in primal-dual
offline algorithms in the online setting, as this cleanup is performed in the “reverse time” order.
The case when the tree is just a path has also been studied. An offline polynomial-time
algorithm that computes an optimal schedule was given in [9]. For the online variant, Brito
et al. [11] gave an 8-competitive algorithm. This result was improved by Bienkowski et al. [9] who
showed that the competitive ratio of this problem is between 2 + φ ≈ 3.618 and 5.
1.1 Our Contributions
We study online competitive algorithms for multi-level aggregation. Minor technical differences
notwithstanding, our model is equivalent to those studied in [11, 21], also extending the deadline
variant in [5] and the assembly problem in [25]. We have decided to choose a more generic
terminology to emphasize general applicability of our model and techniques.
Formally, our model consists of a tree T with positive weights assigned to edges, and a set
R of requests that arrive in the nodes of T over time. These requests are served by subtrees
rooted at the root of T . Such a subtree X serves all requests pending at the nodes of X at cost
equal to the total weight of X. Each request incurs a waiting cost, defined by a non-negative
and non-decreasing function of time, which may be different for each request. The objective is to
minimize the sum of the total service and waiting costs. We call this the Multi-Level Aggregation
Problem (MLAP).
In most earlier papers on aggregation problems, the waiting cost function is linear, that is,
it is assumed to be simply the delay between the times when a request arrives and when it is
served. We denote this version by MLAP-L. However, most of the algorithms for this model
extend naturally to arbitrary cost functions. Another variant is MLAP-D, where each request
is given a certain deadline, has to be served before or at its deadline, and there is no penalty
associated with waiting. This can be modeled by the waiting cost function that is 0 up to the
deadline and +∞ afterwards.
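For illustration, the two waiting-cost regimes can be encoded as small Python closures (a sketch of our own, not code from any of the cited works):

    # MLAP-L: the waiting cost of a request equals its delay since arrival.
    def linear_cost(arrival):
        return lambda t: max(0.0, t - arrival)

    # MLAP-D: waiting is free up to the deadline and forbidden afterwards.
    def deadline_cost(deadline):
        return lambda t: 0.0 if t <= deadline else float("inf")

    print(linear_cost(2.0)(5.0))    # 3.0
    print(deadline_cost(4.0)(5.0))  # inf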
In this paper, we mostly focus on the online version of MLAP, where an algorithm needs
to produce a schedule in response to requests that arrive over time. When a request appears, its
waiting cost function is also revealed. At each time t, the online algorithm needs to decide whether
to generate a service tree at this time, and if so, which nodes should be included in this tree.
                                   MLAP and MLAP-L                 MLAP-D
                                   upper          lower            upper        lower
  depth 1                          2∗ [17]        2 [17]           1            1
  rand. alg. for depth 1           1.582∗ [20]    1.582 [31]       1            1
  depth 2                          3 [12]         2.754 [8]        2 [8]        2 [8]
  fixed depth D ≥ 2                O(D^4 2^D)     2.754            D^2 2^D      2
  paths of arbitrary depth         5∗ [9]         3.618 [9], 4     4            4
Table 1: Previous and current bounds on the competitive ratios for MLAP for trees of various
depths. Ratios written in bold are shown in this paper. Unreferenced results are either immediate
consequences of other entries in the table or trivial observations. Asterisked ratios represent
results for MLAP with arbitrary waiting cost functions, which, though not explicitly stated in the
respective papers, are straightforward extensions of the corresponding results for MLAP-L. Some
values in the table are approximations: 1.582 represents e/(e−1) and 3.618 represents 2+φ, where
φ is the golden ratio.
The main result of our paper is an O(D^4 2^D)-competitive algorithm for MLAP for trees of depth
D, presented in Section 5. A simpler D^2 2^D-competitive algorithm for MLAP-D is presented in
Section 4. No competitive algorithms have been known so far for online MLAP for arbitrary depth
trees, even for the special case of MLAP-D on trees of depth 3.
For both results we use a reduction, described in Section 3, of the general problem to the special
case of trees with fast decreasing weights. For such trees we then provide an explicit
competitive algorithm. While our algorithm is compact and elegant, it is not a straightforward
extension of the 2-level algorithm. (In fact, we have been able to show that naïve extensions of
the latter algorithm are not competitive.) It is based on carefully constructing a sufficiently large
service tree whenever it appears that an urgent request must be served. The specific structure of
the service tree is then heavily exploited in an amortization argument that constructs a mapping
from the algorithm’s cost to the cost of the optimal schedule. We believe that these three new
techniques, namely the reduction to trees with fast decreasing weights, the construction of the service
trees, and our charging scheme, will be useful in further studies of online aggregation problems.
In Section 6 we study a version of MLAP that we refer to as Single-Phase MLAP (or 1P-MLAP),
in which all requests arrive at the beginning, but they also have a common expiration time that
we denote by θ. Any request not served by time θ pays waiting cost at time θ and does not need
to be served anymore. In spite of the expiration-date feature, it can be shown that 1P-MLAP
can be represented as a special case of MLAP. 1P-MLAP is a crucial tool in all the lower bound
proofs in the literature for competitive ratios of MLAP, including those in [12, 9], as well as in
our lower bounds in Section 7. It also has a natural interpretation in the context of JRP (2-level
MLAP), if we allow all orders to be canceled, say, due to changed market circumstances. In the
online variant of 1P-MLAP all requests are known at the beginning, but the expiration time θ is
unknown. For this version we give an online algorithm with competitive ratio 4, matching the
lower bound. Since 1P-MLAP can be expressed as a special case of MLAP, our result implies that
the techniques from [12, 9] cannot be used to prove a lower bound larger than 4 on the competitive
ratio for MLAP, and any study of the dependence of the competitive ratio on the depth D will
require new insights and techniques.
In Section 7 we consider MLAP on paths. For this case, we give a 4-competitive algorithm
for MLAP-D and we provide a matching lower bound. We show that the same lower bound of 4
applies to MLAP-L as well, improving the previous lower bound of 3.618 from [9].
In addition, we provide two results on offline algorithms (for arbitrary trees). In Section 8
we provide a 2-approximation algorithm for MLAP-D, significantly simpler than the LP-rounding
algorithm in [5] with the same ratio. In Section 6.3, we give a polynomial time algorithm that
computes optimal solutions for 1P-MLAP.
Finally, in Section 9, we discuss several technical issues concerning the use of general functions
as waiting costs in MLAP. In particular, when presenting our algorithms for MLAP we assume that
all waiting cost functions are continuous (which cannot directly capture some interesting variants
of MLAP). This is done, however, only for technical convenience; as explained in Section 9, these
algorithms can be extended to left-continuous functions, which makes it possible to model MLAP-D as a
special case of MLAP. We also consider two alternative models for MLAP: the discrete-time model
and the model where not all requests need to be served, showing that our algorithms can be
extended to these models as well.
An extended abstract of this work appeared in the proceedings of the 24th Annual European
Symposium on Algorithms (ESA’16) [6].
2 Preliminaries
Weighted trees. Let T be a tree with root r. For any set of nodes Z ⊆ T and a node x, Zx
denotes the set of all descendants of x in Z; in particular, Tx is the induced subtree of T rooted at
x. The parent of a node x is denoted parent(x). The depth of x, denoted depth(x), is the number
of edges on the simple path from r to x. In particular, r is at depth 0. The depth D of T is the
maximum depth of a node of T .
We will deal with weighted trees in this paper. For x ≠ r, by `x or `(x) we denote the weight
of the edge connecting node x to its parent. For the sake of convenience, we will often refer to `x
as the weight of x. We assume that all these weights are positive. We extend this notation to r
by setting `r = 0. If Z is any set of nodes of T , then the weight of Z is `(Z) = Σ_{x∈Z} `x .
Definition of MLAP. A request ρ is specified by a triple ρ = (σρ , aρ , ωρ ), where σρ is the node of
T in which ρ is issued, aρ is the non-negative arrival time of ρ, and ωρ is the waiting cost function
of ρ. We assume that ωρ (t) = 0 for t ≤ aρ and ωρ (t) is non-decreasing for t ≥ aρ . MLAP-L is the
variant of MLAP with linear waiting costs; that is, for each request ρ we have ωρ (t) = t − aρ , for
t ≥ aρ . In MLAP-D, the variant with deadlines, we have ωρ (t) = 0 for t ≤ dρ and ωρ (t) = ∞ for
t > dρ , where dρ is called the deadline of request ρ.
In our algorithms for MLAP with general costs we will be assuming that all waiting cost
functions are continuous. This is only for technical convenience and we discuss more general
waiting cost functions in Section 9; we also show there that MLAP-D can be considered a special
case of MLAP, and that our algorithms can be extended to the discrete-time model.
A service is a pair (X, t), where X is a subtree of T rooted at r and t is the time of this
service. We will occasionally refer to X as the service tree (or just service) at time t, or even omit
t altogether if it is understood from context.
An instance J = ⟨T , R⟩ of the Multi-Level Aggregation Problem (MLAP) consists of a weighted
tree T with root r and a set R of requests arriving at the nodes of T . A schedule is a set S of
services. For a request ρ, let (X, t) be the service in S with minimal t such that σρ ∈ X and t ≥ aρ .
We then say that (X, t) serves ρ and the waiting cost of ρ in S is defined as wcost(ρ, S) = ωρ (t).
Furthermore, the request ρ is called pending at all times in the interval [aρ , t]. Schedule S is called
feasible if all requests in R are served by S.
The cost of a feasible schedule S, denoted cost(S), is defined by
cost(S) = scost(S) + wcost(S),
where scost(S) is the total service cost and wcost(S) is the total waiting cost, that is
scost(S) = Σ_{(X,t)∈S} `(X)   and   wcost(S) = Σ_{ρ∈R} wcost(ρ, S).
The objective of MLAP is to compute a feasible schedule S for J with minimum cost(S).
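To make the cost model concrete, here is a minimal Python sketch of our own (the data layout and names are ours, not the paper's): a tree is given by edge weights, a schedule is a list of timed service trees, and cost(S) is the sum of the service and waiting costs.

    from typing import Callable, Dict, List, Set, Tuple

    Request = Tuple[str, float, Callable[[float], float]]   # (node, arrival time, waiting cost function)
    Service = Tuple[float, Set[str]]                        # (time, service tree containing the root)

    def scost(weight: Dict[str, float], services: List[Service]) -> float:
        # total weight of all service trees
        return sum(sum(weight[v] for v in tree) for _, tree in services)

    def wcost(requests: List[Request], services: List[Service]) -> float:
        # each request pays its waiting cost at the first service covering its node at or after
        # its arrival time; the schedule is assumed to be feasible
        total = 0.0
        for node, arrival, omega in requests:
            t = min(t for t, tree in services if node in tree and t >= arrival)
            total += omega(t)
        return total

    # Example: tree r - q - {a, b}, one linear-cost request per leaf, a single service at time 3.
    weight = {"r": 0.0, "q": 10.0, "a": 1.0, "b": 1.0}
    requests = [("a", 0.0, lambda t: t - 0.0), ("b", 2.0, lambda t: t - 2.0)]
    schedule = [(3.0, {"r", "q", "a", "b"})]
    print(scost(weight, schedule) + wcost(requests, schedule))   # 12 + (3 + 1) = 16.0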
Online algorithms. We use the standard and natural definition of online algorithms and the
competitive ratio. We assume the continuous time model. The computation starts at time 0 and
from then on the time gradually progresses. At any time t new requests can arrive. If the current
time is t, the algorithm has complete information about the requests that arrived up until time
t, but has no information about any requests whose arrival times are after time t. The instance
includes a time horizon H that is not known to the online algorithm, which is revealed only at
time t = H. At time H, all requests that are still pending must be served. (In the offline case, H
can be assumed to be equal to the maximum request arrival time.)
If A is an online algorithm and R ≥ 1, we say that A is R-competitive 1 if cost(S) ≤ R · opt(J )
for any instance J of MLAP, where S is the schedule computed by A on J and opt(J ) is the
optimum cost for J .
Quasi-root assumption. Throughout the paper we will assume that r, the root of T , has only
one child. This is without loss of generality, because if we have an algorithm (online or offline)
for MLAP on such trees, we can apply it independently to each child of r and its subtree. This
will give us an algorithm for MLAP on arbitrary trees with the same performance. From now on,
let us call the single child of r the quasi-root of T and denote it by q. Note that q is included in
every (non-trivial) service.
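As a small illustration of this decomposition (our own sketch, with a simplified request format), one can split an instance at the root and hand each child of r its own sub-instance:

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def split_at_root(parent: Dict[str, str], root: str,
                      requests: List[Tuple[str, float]]) -> Dict[str, dict]:
        # For every node, find the child of the root that its path passes through.
        def top_child(v: str) -> str:
            while parent[v] != root:
                v = parent[v]
            return v

        instances = defaultdict(lambda: {"nodes": {root}, "requests": []})
        for v in parent:                          # parent maps every non-root node to its parent
            instances[top_child(v)]["nodes"].add(v)
        for node, arrival in requests:
            instances[top_child(node)]["requests"].append((node, arrival))
        return dict(instances)

    # Example: r has children q1 and q2; a is a leaf below q1.
    parent = {"q1": "r", "q2": "r", "a": "q1"}
    print(split_at_root(parent, "r", [("a", 1.0), ("q2", 2.0)]))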
Urgency functions. When choosing nodes for inclusion in a service, our online algorithms
give priority to those that are most “urgent”. For MLAP-D, naturally, urgency of nodes can be
measured by their deadlines, where a deadline of a node v is the earliest deadline of a request
pending in the subtree Tv , i.e., the induced subtree rooted at v. But for the arbitrary instances of
MLAP we need a more general definition of urgency, which takes into account the rate of increase
of the waiting cost in the future. To this end, each of our algorithms will use some urgency function
f : T → R ∪ {+∞}, which also depends on the set of pending requests and the current time step,
and which assigns some time value to each node. The earlier this value, the more urgent the node
is.
Fix some urgency function f . Then, for any set A of nodes in T and a real number β, let
Urgent(A, β, f ) be the set of nodes obtained by choosing the nodes from A in order of their increasing urgency value, until either their total weight exceeds β or we run out of nodes. More precisely,
we define Urgent(A, β, f ) as the smallest set of nodes in A such that (i) for all u ∈ Urgent(A, β, f ),
and v ∈ A − Urgent(A, β, f ) we have f (u) ≤ f (v), and (ii) either `(Urgent(A, β, f )) ≥ β or
Urgent(A, β, f ) = A. In case of ties in the values of f there may be multiple sets satisfying these
conditions; we choose among them arbitrarily.
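A possible rendering of this greedy selection in Python (our own sketch; the weight map and tie-breaking by sort order are our assumptions):

    from typing import Callable, Dict, Iterable, List

    def urgent(A: Iterable[str], beta: float, f: Callable[[str], float],
               weight: Dict[str, float]) -> List[str]:
        chosen: List[str] = []
        total = 0.0
        for v in sorted(A, key=f):       # most urgent (smallest f) first; ties broken arbitrarily
            if total >= beta:            # stop as soon as the accumulated weight reaches beta
                break
            chosen.append(v)
            total += weight[v]
        return chosen

    # Example with deadlines as the urgency function and beta = 3.
    weight = {"a": 2.0, "b": 2.0, "c": 1.0}
    deadline = {"a": 5.0, "b": 1.0, "c": 3.0}
    print(urgent({"a", "b", "c"}, 3.0, deadline.get, weight))   # ['b', 'c'], of total weight 3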
3 Reduction to L-Decreasing Trees
One basic intuition that emerges from earlier works on trees of depth 2 (see [12, 11, 8]) is that
the hardest case of the problem is when `q , the weight of the quasi-root, is much larger than the
weights of leaves. For arbitrary depth trees, the hard case is when the weights of nodes quickly
decrease with their depth. We show that this is indeed the case, by defining the notion of Ldecreasing trees that captures this intuition and showing that MLAP reduces to the special case
of MLAP for such L-decreasing trees, increasing the competitive ratio by a factor of at most DL.
This is a general result, not limited only to algorithms in our paper.
Formally, for L ≥ 1, we say that T is L-decreasing if for each node u ≠ r and each child v of
u we have `u ≥ L · `v . (The value of L used in our algorithms will be fixed later.)
1 Definitions
of competitiveness in the literature often allow an additive error term, independent of the request
sequence. For our algorithms, this additive term is not needed. Our lower bound proofs can be easily modified
(essentially, by iterating the adversary strategy) to remain valid if an additive term is allowed, even if it is a function
of T .
Note that the L-decreasing condition corresponds to the usual definition of hierarchically well-separated trees (HSTs); however, for our purposes we do not need any balancing condition usually
also required from HSTs.
Theorem 3.1. Assume that there exists an R-competitive algorithm A for MLAP (resp. MLAP-D)
on L-decreasing trees (where R can be a function of D, the tree depth). Then there exists a (DLR)-competitive algorithm B for MLAP (resp. MLAP-D) on arbitrary trees.
Proof. Fix the underlying instance J = (T , R), where T is a tree and R is a sequence of requests
in T . In our reduction, we convert T to an L-decreasing tree T ′ on the same set of nodes. We
then show that any service on T is also a service on T ′ of the same cost and, conversely, that any
service on T ′ can be converted to a slightly more expensive service on T .
We start by constructing an L-decreasing tree T ′ on the same set of nodes. For any node
u ∈ T − {r}, the parent of u in T ′ will be the lowest (closest to u) ancestor w of u in T such
that `w ≥ L · `u ; if no such w exists, we take w = r. Note that T ′ may violate the quasi-root
assumption, which does not change the validity of the reduction, as we may use independent
instances of the algorithm for each child of r in T ′ . Since in T ′ each node u is connected to one
of its ancestors from T , it follows that T ′ is a tree rooted at r with depth at most D. Obviously,
T ′ is L-decreasing.
The construction implies that if a set of nodes X is a service subtree of T , then it is also a
service subtree for T ′ . (However, note that the actual topology of the trees with node set X in
T and T ′ may be very different. For example, if L = 5 and T is a path with costs (starting from
the leaf) 1, 2, 2^2, ..., 2^D, then in T ′ the node of weight 2^i is connected to the node of weight 2^{i+3},
except for the last three nodes that are connected to r. Thus the resulting tree consists of three
paths ending at r with roughly the same number of nodes.) Therefore, any schedule for J is also
a schedule for J ′ = (T ′ , R), which gives us that opt(J ′ ) ≤ opt(J ).
The algorithm B for T is defined as follows: On a request sequence R, we simulate A for R in
T ′ , and whenever A issues a service X, B issues the service X ′ ⊇ X, created from X as follows:
Start with X ′ = X. Then, for each u ∈ X − {r}, if w is the parent of u in T ′ , then add to X ′ all
inner nodes on the path from u to w in T . By the construction of T ′ , for each u we add at most
D − 1 nodes, each of weight less than L · `u . It follows that `(X ′ ) ≤ ((D − 1)L + 1)`(X) ≤ DL · `(X).
In total, the service cost of B is at most DL times the service cost of A. Any request served
by A is served by B at the same time or earlier, thus the waiting cost of B is at most the waiting
cost of A (resp. for MLAP-D, B produces a valid schedule for J ). Since A is R-competitive, we
obtain
cost(B, J ) ≤ DL · cost(A, J ′ ) ≤ DLR · opt(J ′ ) ≤ DLR · opt(J ),
and thus B is DLR-competitive.
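The reparenting step used in this proof is easy to state in code; the following Python sketch (ours, for illustration only) computes the parent map of T ′ and reproduces the path example from the proof:

    from typing import Dict

    def reparent(parent: Dict[str, str], weight: Dict[str, float],
                 root: str, L: float) -> Dict[str, str]:
        # In T', the parent of u is its lowest ancestor w with weight(w) >= L * weight(u), else the root.
        new_parent: Dict[str, str] = {}
        for u in parent:                    # parent maps each non-root node to its parent in T
            w = parent[u]
            while w != root and weight[w] < L * weight[u]:
                w = parent[w]
            new_parent[u] = w
        return new_parent

    # Path with weights 1, 2, 4, ..., 2^D (v0 is the leaf) and L = 5: v_i is reparented to v_{i+3},
    # and the last three nodes are attached directly to r.
    D = 6
    nodes = [f"v{i}" for i in range(D + 1)]
    parent = {nodes[i]: (nodes[i + 1] if i + 1 <= D else "r") for i in range(D + 1)}
    weight = {"r": 0.0, **{nodes[i]: 2.0 ** i for i in range(D + 1)}}
    print(reparent(parent, weight, "r", 5.0))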
4 A Competitive Algorithm for MLAP-D
In this section we present our online algorithm for MLAP-D with competitive ratio at most D^2 2^D.
To this end, we will give an online algorithm that achieves competitive ratio RL = (2 + 1/L)^{D−1}
for L-decreasing trees. Taking L = D/2 and using the reduction to L-decreasing trees from
Theorem 3.1, we obtain a D^2 2^D-competitive algorithm for arbitrary trees.
4.1 Intuitions
Consider the optimal 2-competitive algorithm for MLAP-D for trees of depth 2 [8]. Assume that
the tree is L-decreasing, for some large L. (Thus `q ≫ `v , for each leaf v.) Whenever a pending
request reaches its deadline, this algorithm serves a subtree X consisting of r, q and the set of
leaves with the earliest deadlines and total weight of about `q . This is a natural strategy: We have
to pay at least `q to serve the expiring request, so including an additional set of leaves of total
weight `q can at most double our overall cost. But, assuming that no new requests arrive, serving
this X can significantly reduce the cost in the future, since servicing these leaves individually is
expensive: it would cost `v + `q per each leaf v, compared to the incremental cost of `v to include
v in X.
For L-decreasing trees with three levels (that is, for D = 3), we may try to iterate this idea.
When constructing a service tree X, we start by adding to X the set of most urgent children of q
whose total weight is roughly `q . Now, when choosing nodes of depth 3, we have two possibilities:
(1) for each v ∈ X − {r, q} we can add to X its most urgent children of combined weight `v (note
that their total weight will add up to roughly `q , because of the L-decreasing property), or (2)
from the set of all children of the nodes in X − {r, q}, add to X the set of total weight roughly `q
consisting of (globally) most urgent children.
It is not hard to show that option (1) does not lead to a constant-competitive algorithm: The
counter-example involves an instance with one node w of depth 2 having many children with
requests with early deadlines and all other leaves having requests with very distant deadlines.
Assume that `q = L^2, `w = L, and that each leaf has weight 1. The example forces the algorithm
to serve the children of w in small batches of size L with cost more than L^2 per batch or L per each
child of w, while the optimum can serve all the requests in the children of w at once with cost O(1)
per request, giving a lower bound Ω(L) on the competitive ratio. (The requests at other nodes
can be ignored in the optimal solution, as we can keep repeating the above strategy in a manner
similar to the lower-bound technique for 1P-MLAP that will be described in Section 6. Reissuing
requests at the nodes other than w will not increase the cost of the optimum.) A more intricate
example shows that option (2) by itself is not sufficient to guarantee constant competitiveness
either.
The idea behind our algorithm, for trees of depth D = 3, is to do both (1) and (2) to obtain X.
This increases the cost of each service by a constant factor, but it protects the algorithm against
both bad instances. The extension of our algorithm to depths D > 3 carefully iterates the process
of constructing the service tree X, to ensure that for each node v ∈ X and for each level i below
v we add to X sufficiently many urgent descendants of v at that level.
4.2 Notations
To give a formal description, we need some more notations. For any set of nodes Z ⊆ T , let Z i
denote the set of nodes in Z of depth i in tree T . (Recall that r has depth 0, q has depth 1, and
leaves have depth at most D.) Let also Z <i = Z 0 ∪ Z 1 ∪ · · · ∪ Z i−1 and Z ≤i = Z <i ∪ Z i . These notations can
be combined with the notation Zx , so, e.g., Zx<i is the set of all descendants of x that belong to
Z and whose depth in T is smaller than i.
We assume that all the deadlines in the given instance are distinct. This may be done without
loss of generality, as in case of ties we can modify the deadlines by infinitesimally small perturbations and obtain an algorithm for the general case.
At any given time t during the computation of the algorithm, for each node v, let dt (v) denote
the earliest deadline among all requests in Tv (i.e., among all descendants of v) that are pending
for the algorithm; if there is no pending request in Tv , we set dt (v) = +∞. We will use the function
dt as the urgency (see Section 2) of nodes at time t, i.e., a node u will be considered more urgent
than a node v if dt (u) < dt (v).
4.3 Algorithm OnlTreeD
At any time t when some request expires, that is when t = dt (r), the algorithm serves a subtree
X constructed by first initializing X = {r, q}, and then incrementally augmenting X according to
the following pseudo-code:
for each depth i = 2, . . . , D
    Z i ← set of all children of nodes in X i−1
    for each v ∈ X <i
        U (v, i, t) ← Urgent(Zvi , `v , dt )
        X ← X ∪ U (v, i, t)
In other words, at depth i, we restrict our attention to Z i , the children of all the nodes in
X i−1 , i.e., of the nodes that we have previously selected to X at level i − 1. (We start with i = 2
and X 1 = {q}.) Then we iterate over all v ∈ X <i and we add to X the set U (v, i, t) of nodes from
Tvi (descendants of v at depth i) whose parents are in X, one by one, in the order of increasing
deadlines, stopping when either their total weight exceeds `v or when we run out of such nodes.
Note that these sets do not need to be disjoint.
The constructed set X is a service tree, as we are adding to it only nodes that are children of
the nodes already in X.
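The construction can be rendered compactly in Python; the sketch below is our own reading of the pseudo-code (it assumes a precomputed per-node urgency value standing in for dt , and parent/children maps), not an implementation from the paper:

    from typing import Dict, List, Set

    def build_service_tree(parent: Dict[str, str], children: Dict[str, List[str]],
                           weight: Dict[str, float], urgency: Dict[str, float],
                           r: str, q: str, D: int) -> Set[str]:
        def is_below(x: str, v: str) -> bool:     # x equals v or is a descendant of v
            while x != r:
                if x == v:
                    return True
                x = parent[x]
            return x == v

        def depth(x: str) -> int:
            d = 0
            while x != r:
                x, d = parent[x], d + 1
            return d

        X: Set[str] = {r, q}
        for i in range(2, D + 1):
            Z = [c for p in X if depth(p) == i - 1 for c in children.get(p, [])]   # Z^i
            for v in [u for u in X if depth(u) < i]:                               # v in X^{<i}
                Zv = sorted((c for c in Z if is_below(c, v)), key=urgency.get)
                total = 0.0
                for c in Zv:                      # Urgent(Z^i_v, weight(v), urgency)
                    if total >= weight[v]:
                        break
                    X.add(c)
                    total += weight[c]
        return X

    # Tiny depth-3 example; the whole tree ends up being served here.
    parent = {"q": "r", "a": "q", "b": "q", "a1": "a", "a2": "a", "b1": "b"}
    children = {"r": ["q"], "q": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
    weight = {"r": 0.0, "q": 4.0, "a": 2.0, "b": 2.0, "a1": 1.0, "a2": 1.0, "b1": 1.0}
    urgency = {"a": 1.0, "b": 9.0, "a1": 2.0, "a2": 3.0, "b1": 8.0}
    print(build_service_tree(parent, children, weight, urgency, "r", "q", 3))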
Let ρ be the request triggering the service at time t, i.e., satisfying dρ = t. (By the assumption
about different deadlines, ρ is unique.) Naturally, all the nodes u on the path from r to σρ have
dt (u) = t and qualify as the most urgent, thus the node σρ is included in X. Therefore every
request is served before its deadline.
4.4 Analysis
Intuitively, it should be clear that Algorithm OnlTreeD cannot have a better competitive ratio
than `(X)/`q : If all requests are in q, the optimum will serve only q, while our algorithm uses a
set X with many nodes that turn out to be useless. As we will show, via an iterative charging
argument, the ratio `(X)/`q is actually achieved by the algorithm.
Recall that RL = (2 + 1/L)^{D−1}. We now prove a bound on the cost of the service tree.
Lemma 4.1. Let X be the service tree produced by Algorithm OnlTreeD at time t. Then
`(X) ≤ RL · `q .
Proof. We prove by induction that `(X ≤i ) ≤ (2 + 1/L)^{i−1} `q for all i ≤ D.
The base case of i = 1 is trivial, as X ≤1 = {r, q} and `r = 0. For i ≥ 2, X i is the union of
the sets U (v, i, t) over all nodes v ∈ X <i . Since T is L-decreasing, each node in the set U (v, i, t)
has weight at most `v /L, and thus `(U (v, i, t)) ≤ `v + `v /L = (1 + 1/L)`v . Therefore, by the
inductive assumption, we get that
`(X ≤i ) ≤ (1 + (1 + 1/L)) · `(X <i ) ≤ (2 + 1/L) · (2 + 1/L)^{i−2} `q = (2 + 1/L)^{i−1} `q ,
proving the induction step and completing the proof that `(X) ≤ RL · `q .
The competitive analysis uses a charging scheme. Fix some optimal schedule S∗ . Consider
a service (X, t) of Algorithm OnlTreeD. We will identify in X a subset of “critically overdue”
nodes (to be defined shortly) of total weight at least `q ≥ `(X)/RL , and we will show that for each
such critically overdue node v we can charge the portion `v of the service cost of X to an earlier
service in S∗ that contains v. Further, any node in service of S∗ will be charged at most once.
This implies that the total cost of our algorithm is at most RL times the optimal cost, giving us
an upper bound of RL on the competitive ratio for L-decreasing trees.
In the proof, by nostv we denote the time of the first service in S∗ that includes v and is strictly
after time t; we also let nostv = +∞ if no such service exists (nos stands for next optimal service).
For a service (X, t) of the algorithm, we say that a node v ∈ X is overdue at time t if dt (v) < nostv .
Servicing of such v is delayed in comparison to S∗ , because S∗ must have served v before or at
time t. Note also that r and q are overdue at time t, as dt (r) = dt (q) = t by the choice of the
[Figure 1: Illustration of the proof of Lemma 4.2.]
service time. We define v ∈ X to be critically overdue at time t if (i) v is overdue at t, and (ii)
there is no other service of the algorithm in the time interval (t, nostv ) in which v is overdue.
We are now ready to define the charging for a service (X, t). For each v ∈ X that is critically
overdue, we charge its weight `v to the last service of v in S∗ before or at time t. This charging is
well-defined as, for each overdue v, there must exist a previous service of v in S∗ . The charging is
obviously one-to-one because between any two services in S∗ that involve v there may be at most
one service of the algorithm in which v is critically overdue. The following lemma shows that the
total charge from X is large enough.
Lemma 4.2. Let (X, t) be a service of Algorithm OnlTreeD and suppose that v ∈ X is overdue
at time t. Then the total weight of critically overdue nodes in Xv at time t is at least `v .
Proof. The proof is by induction on the depth of Tv , the induced subtree rooted at v.
The base case is when Tv has depth 0, that is when v is a leaf. We show that in this case
v must be critically overdue, which implies the conclusion of the lemma. Towards contradiction,
suppose that there is some other service at time t′ ∈ (t, nostv ) in which v is overdue. Since v is a
leaf, after the service at time t there are no pending requests in Tv = {v}. This would imply that
there is a request ρ with σρ = v such that t < aρ ≤ dρ < nostv . But this is not possible, because
S∗ does not serve v in the time interval (t, nostv ). Thus v is critically overdue and the base case
holds.
Assume now that v is not a leaf, and that the lemma holds for all descendants of v. If v is
critically overdue, the conclusion of the lemma holds.
Thus we can now assume that v is not critically overdue. This means that there is a service
(Y, t′ ) of Algorithm OnlTreeD with t < t′ < nostv which contains v and such that v is overdue
at t′ . Thus nost′v = nostv .
Let ρ be the request with dρ = dt′ (v), i.e., the most urgent request in Tv at time t′ .
We claim that aρ ≤ t, i.e., ρ arrived no later than at time t. Indeed, since v is overdue at time
t′ , it follows that dρ < nost′v = nostv . The optimal schedule S∗ cannot serve ρ after time t, as S∗
has no service of v in the interval (t, dρ ]. Thus S∗ must have served ρ before or at t, and hence
aρ ≤ t, as claimed.
Now consider the path from σρ to v in Y . (See Figure 1.) As ρ is pending for the algorithm at
time t and ρ is not served by (X, t), it follows that σρ ∉ X. Let w be the last node on this path
in Y − X. Then w is well-defined and w ≠ v, as v ∈ X. Let i be the depth of w. Note that the
parent of w is in Xv<i , so w ∈ Z i in the algorithm when X is constructed.
The node σρ is in Tw and ρ is pending at t, thus we have dt (w) ≤ dρ . Since w ∈ Z i but w
was not added to X at time t, we have that `(U (v, i, t)) ≥ `v and each x ∈ U (v, i, t) is at least as
urgent as w. This implies that such x satisfies
dt (x) ≤ dt (w) ≤ dρ < nost′v = nostv ≤ nostx ,
thus x is overdue at time t. By the inductive assumption, the total weight of critically overdue
nodes in each induced subtree Xx is at least `x . Adding these weights over all x ∈ U (v, i, t),
we obtain that the total weight of critically overdue nodes in Xv is at least `(U (v, i, t)) ≥ `v ,
completing the proof.
Now consider a service (X, t) of the algorithm. The quasi-root q is overdue at time t, so
Lemmata 4.2 and 4.1 imply that the charge from (X, t) is at least `q ≥ `(X)/RL . Since each
node in any service in S∗ is charged at most once, we conclude that Algorithm OnlTreeD is
RL -competitive for any L-decreasing tree T .
From the previous paragraph, using Theorem 3.1, we now obtain that there exists a DLRL =
DL(2 + 1/L)^{D−1}-competitive algorithm for general trees. For D ≥ 2, choosing L = D/2 yields a
competitive ratio bounded by (1/2) D^2 2^{D−1} · (1 + 1/D)^D ≤ (1/4) D^2 2^D · e ≤ D^2 2^D. (For D = 1 there is a
trivial 1-competitive algorithm for MLAP-D.) Summarizing, we obtain the following result.
Theorem 4.3. There exists a D^2 2^D-competitive online algorithm for MLAP-D.
5 A Competitive Algorithm for MLAP
In this section we show that there is an online algorithm for MLAP whose competitive ratio for
trees of depth D is O(D^4 2^D). As in Section 4, we will assume that the tree T in the instance
is L-decreasing. Then, for L-decreasing trees, we will present a competitive algorithm, which
will imply the existence of a competitive algorithm for arbitrary trees by using Theorem 3.1 and
choosing an appropriate value of L.
5.1 Preliminaries and Notations
Recall that ωρ (t) denotes the waiting cost function of a request ρ. As explained in Section 2, we
assume that the waiting cost functions are continuous. (In Section 9 we discuss how to extend our
results to arbitrary waiting cost functions.) We will overload this notation, so that we can talk
about the waiting cost of a set of requests or a set of nodes. Specifically, for a set P of requests
and a set Z of nodes, let
ωP (Z, t) = Σ_{ρ∈P : σρ ∈Z} ωρ (t).
Thus ωP (Z, t) is the total waiting cost of the requests from P that are issued in Z. We sometimes
omit P , in which case the notation refers to the set of all requests in the instance, that is ω(Z, t) =
ωR (Z, t). Similarly, we omit Z when Z contains all nodes, that is ωP (t) = ωP (T , t).
Maturity time. In our algorithm for MLAP-D in Section 4, the times of services and the urgency
of nodes are both naturally determined by the deadlines. For MLAP with continuous waiting costs
there are no hard deadlines. Nevertheless, we can still introduce the notion of maturity time of a
node, which is, roughly speaking, the time when some subtree rooted at this node has its waiting
cost equal to its service cost; this subtree is then called mature. This maturity time will be our
urgency function, as discussed earlier in Section 2. We use the maturity time in two ways: one,
the maturity times of the quasi-root determine the service times, and two, maturity times of other
nodes are used to prioritize them for inclusion in the service trees. We now proceed to define these
notions.
Consider some time t and any set P ⊆ R of requests. A subtree Z of T (not necessarily rooted
at r) is called P -mature at time t if ωP (Z, t) ≥ `(Z). Also, let µP (Z) denote the minimal time τ
such that ωP (Z, τ ) = `(Z); we let µP (Z) = ∞ if such τ does not exist. In other words, µP (Z) is
the earliest τ at which Z is P -mature. Since ωP (Z, 0) = 0 and ωP (Z, t) is a non-decreasing and
continuous function of t, µP (Z) is well-defined.
For a node v, let the P -maturity time of v, denoted MP (v), be the minimum of values µP (Z)
over all subtrees Z of T rooted at v. The tree Z that achieves this minimum will be denoted CP (v)
and called the P -critical subtree rooted at v; if there are more such trees, choose one arbitrarily.
Therefore we have ωP (CP (v), MP (v)) = `(CP (v)).
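Computing MP (v) involves a minimum over all subtrees rooted at v, but the underlying maximization of waiting cost minus weight decomposes over children: a child's subtree is worth including exactly when its own best excess is positive. The sketch below (our own observation and code, not taken from the paper) tests whether some subtree rooted at v is P -mature at a given time; MP (v) is then the earliest such time, which can be located by scanning or bisection because waiting costs are non-decreasing.

    from typing import Callable, Dict, List

    def excess(v: str, t: float, children: Dict[str, List[str]],
               weight: Dict[str, float], wait: Callable[[str, float], float]) -> float:
        # wait(v, t): total waiting cost at time t of the pending requests issued at node v.
        # Returns max over subtrees Z rooted at v of (waiting cost of Z at t) - (weight of Z);
        # some subtree rooted at v is P-mature at time t iff this value is >= 0.
        e = wait(v, t) - weight[v]
        for c in children.get(v, []):
            e += max(0.0, excess(c, t, children, weight, wait))
        return e

    # Example: q with two leaves, linear waiting costs, requests arriving at time 0.
    children = {"q": ["a", "b"]}
    weight = {"q": 4.0, "a": 1.0, "b": 1.0}
    arrivals = {"a": [0.0, 0.0], "b": [0.0]}     # two requests at a, one at b

    def wait(v: str, t: float) -> float:
        return sum(max(0.0, t - s) for s in arrivals.get(v, []))

    print(excess("q", 1.0, children, weight, wait))   # -3.0: no subtree rooted at q is mature yet
    print(excess("q", 2.0, children, weight, wait))   #  0.0: the subtree {q, a, b} matures at t = 2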
The following simple lemma guarantees that the maturity time of any node in the P -critical
subtree CP (v) is upper bounded by the maturity time of v.
Lemma 5.1. Let u ∈ CP (v) and let Y = (CP (v))u be the induced subtree of CP (v) rooted at u.
Then MP (u) ≤ µP (Y ) ≤ MP (v).
Proof. The first inequality follows directly from the definition of MP (u). To show the second
inequality, we proceed by contradiction. Let t = MP (v). If the second inequality does not hold,
then u ≠ v and ωP (Y, t) < `(Y ). Take Y ′ = CP (v) − Y , which is a tree rooted at v. Since
ωP (CP (v), t) = `(CP (v)), we have that ωP (Y ′ , t) = ωP (CP (v), t) − ωP (Y, t) > `(CP (v)) − `(Y ) =
`(Y ′ ). This in turn implies that µP (Y ′ ) < t, which is a contradiction with the definition of
t = MP (v).
Most of the references to maturity of a node or to its critical set will be made with respect to
the set of requests pending for our algorithm at a given time. For any time t, we will use notation
M t (v) and C t (v) to denote the time MP (v) and the P -critical subtree CP (v), where P is the set
of requests pending for the algorithm at time t; if the algorithm schedules a service at some time
t, P is the set of requests that are pending at time t right before the service is executed. Note
that in general it is possible that M t (v) < t. However, our algorithm will maintain the invariant
that for the quasi-root q we will have M t (q) ≥ t at each time t.
5.2 Algorithm
We now describe our algorithm for L-decreasing trees. A service will occur at each maturity
time of the quasi-root q (with respect to the pending requests), that is at each time t for which
t = M t (q). At such a time, the algorithm chooses a service that contains the critical subtree
C = C t (q) of q and an extra set E, whose service cost is not much more expensive than that of C.
The extra set is constructed similarly as in Algorithm OnlTreeD, where the urgency of nodes is
now measured by their maturity time. In other words, our urgency function is now f = M t (see
Section 2.) As before, this extra set will be a union of a system of sets U (v, i, t) for i = 2, . . . , D,
and v ∈ C <i ∪ E <i , except that now, for technical reasons, the sets U (v, i, t) will be mutually
disjoint and also disjoint from C.
Algorithm OnlTree. At any time t such that t = M t (q), serve the set X = C ∪ E constructed
according to the following pseudo-code:
C ← C t (q) ∪ {r}
E ← ∅
for each depth i = 2, . . . , D
    Z i ← set of all nodes in T i − C whose parent is in C ∪ E
    for each v ∈ (C ∪ E)<i
        U (v, i, t) ← Urgent(Zvi , `v , M t )
        E ← E ∪ U (v, i, t)
        Z i ← Z i − U (v, i, t)
At the end of the instance (when t = H, the time horizon), if there are any pending requests, issue
the last service that contains all nodes v with a pending request in Tv .
Note that X = C ∪ E is indeed a service tree, as it contains r, q and we are adding to it only
nodes that are children of the nodes already in X. The initial choice and further changes of Z i
imply that the sets U (v, i, t) are pairwise disjoint and disjoint from C – a fact that will be useful
in our analysis.
We also need the following fact.
Lemma 5.2. (a) Suppose that Algorithm OnlTree issues a service at a time t, that is M t (q) = t.
Denote by M t+ (q) the maturity time of q right after the service at time t. Then M t+ (q) > t. (b)
At any time t we have M t (q) ≥ t.
To clarify the meaning of “right after the service” in this lemma, M t+ (q) is defined formally
as the limit of M τ (q), with τ approaching t from the right.
Proof. (a) Let M t (q) = t and let (X, t) be the service at time t. This means that we have
ω(X, t) = `(X) and ω(Y, t) ≤ `(Y ) for all subtrees Y of T rooted at r. Consider any subtree Y
of T rooted at r different from X. Denoting by ω(Y, t+ ) the waiting cost of the requests that are
pending in Y right after the service (X, t), it is sufficient to prove that ω(Y, t+ ) < `(Y ).
Towards contradiction, suppose that ω(Y, t+ ) ≥ `(Y ). Then we have
ω(X ∪ Y, t) = ω(X, t) + ω(Y − X, t)
= ω(X, t) + ω(Y, t+ )
≥ `(X) + `(Y )
> `(X ∪ Y ),
where the last (strict) inequality follows from q ∈ X ∩ Y and `q > 0. But X ∪ Y is a subtree of T
rooted at r, so the inequality ω(X ∪ Y, t) > `(X ∪ Y ) contradicts our assumption that M t (q) = t.
(b) The lemma holds trivially at the beginning, at time t = 0. In any time interval without
new requests released nor services, the inequality M t (q) ≥ t is preserved, by the definition of the
service times and continuity of waiting cost functions. Releasing a request ρ at a time aρ = t
cannot decrease M t (q) to below t, because the waiting cost function of ρ is identically 0 up to t
and thus releasing ρ does not change the waiting costs at time t or before. Finally, part (a) implies
that the inequality is also preserved when services are issued.
By Lemma 5.2 (and the paragraph before), the definition of the algorithm is sound, that is the
sequence of service times is non-decreasing. In fact, the lemma shows that no two services can
occur at the same time.
5.3 Competitive Analysis
We now present the proof of the existence of an O(D^4 2^D)-competitive algorithm for MLAP for
trees of depth D. The overall argument is quite intricate, so we will start by summarizing its main
steps:
• First, as explained earlier, we will assume that the tree T in the instance is L-decreasing.
For such T we will show that Algorithm OnlTree has competitive ratio O(D^2 RL ), where
RL = (2 + 1/L)^{D−1}. Our bound on the competitive ratio for arbitrary trees will then follow,
by using Theorem 3.1 and choosing an appropriate value of L (see Theorem 5.8).
• For L-decreasing trees, the bound of the competitive ratio of Algorithm OnlTree involves
four ingredients:
– We show (in Lemma 5.3) that the total cost of Algorithm OnlTree is at most twice
its service cost.
– Next, we show that the service cost of Algorithm OnlTree can be bounded (within a
constant factor) by the total cost of all critical subtrees C t (q) of the service trees in its
schedule.
– To facilitate the estimate of the adversary cost, we introduce the concept of a pseudo-schedule, denoted S̄. The pseudo-schedule S̄ is a collection of pseudo-services, which
include the services from the original adversary schedule S∗ . We show (in Lemma 5.5)
that the adversary pseudo-schedule has service cost not larger than D times the cost of
S∗ . Using the pseudo-schedule allows us to ignore the waiting cost in the adversary’s
schedule.
– With the above bounds established, it remains to show that the total cost of critical
subtrees in the schedule of Algorithm OnlTree is within a constant factor of the
service cost of the adversary’s pseudo-schedule. This is accomplished through a charging
scheme that charges nodes (or, more precisely, their weights) from each critical subtree
of Algorithm OnlTree to their appearances in some earlier adversary pseudo-services.
Two auxiliary bounds. We now assume that T is L-decreasing and proceed with our proof,
according to the outline above.
The definition of the maturity time implies that the waiting cost of all the requests served is
at most the service cost `(X), as otherwise X would be a good candidate for a critical subtree at
some earlier time. Denoting by S the schedule computed by Algorithm OnlTree, we thus obtain:
Lemma 5.3. cost(S) ≤ 2 · scost(S).
Using Lemma 5.3, we can restrict ourselves to bounding the service cost, losing at most a factor
of 2. We now bound the cost of a given service X; recall that RL = (2 + 1/L)^{D−1}.
Lemma 5.4. Each service tree X = C ∪E constructed by the algorithm satisfies `(X) ≤ RL ·`(C).
Proof. Since T is L-decreasing, the weight of each node that is a descendant of v is at most `v /L
and thus `(U (v, i, t)) ≤ (1 + 1/L)`v .
We now estimate `(X). We claim and prove by induction for i = 1, . . . , D that
`(X ≤i ) ≤ (2 + 1/L)^{i−1} `(C ≤i ) .   (1)
The base case for i = 1 is trivial, as X ≤1 = C ≤1 = {r, q}. For i ≥ 2, the set X i consists of C i
and the sets U (v, i, t), for v ∈ X <i . Each of these sets U (v, i, t) has weight at most (1 + 1/L)`v .
Therefore
`(X i ) ≤ (1 + 1/L)`(X <i ) + `(C i ) .   (2)
Now, using (2) and the inductive assumption (1) for i − 1, we get
`(X ≤i ) = `(X <i ) + `(X i )
≤ (2 + 1/L)`(X <i ) + `(C i )
≤ (2 + 1/L)^{i−1} `(C <i ) + `(C i ) ≤ (2 + 1/L)^{i−1} `(C ≤i ).
Taking i = D in (1), the lemma follows.
Waiting costs and pseudo-schedules. Our plan is to charge the cost of Algorithm OnlTree
to the optimal (or the adversary’s) cost. Let S∗ be an optimal schedule. To simplify this charging,
we extend S∗ by adding to it pseudo-services, where a pseudo-service from a node v is a partial
service of cost `v that consists only of the edge from v to its parent. We denote this modified
schedule by S̄ and call it a pseudo-schedule, reflecting the fact that its pseudo-services are not necessarily subtrees of T rooted at r. Adding such pseudo-services will allow us to ignore the waiting
costs in the optimal schedule.
We now define more precisely how to obtain S̄ from S∗ . For each node v independently we define
the times when new pseudo-services of v occur in S̄. Intuitively, we introduce these pseudo-services
at intervals such that the waiting cost of the requests that arrive in Tv during these intervals adds
up to `v . The formal description of this process is given in the pseudo-code below, where we use
notation R(> t) for the set of requests ρ ∈ R with aρ > t (i.e., requests issued after time t). Recall
that H denotes the time horizon.
t ← −∞
while ωR(>t) (Tv , H) ≥ `v
    let τ be the earliest time such that ωR(>t) (Tv , τ ) = `v
    add to S̄ a pseudo-service of v at τ
    t ← τ
We apply the above procedure to all the nodes v ∈ T − {r} such that R contains a request in
Tv . The new pseudo-schedule S̄ contains all the services of S∗ (treated as sets of pseudo-services
of all served nodes) and the new pseudo-services added as above. The service cost of the pseudo-schedule, scost(S̄), is defined naturally as the total weight of the nodes in all its pseudo-services.
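For a single node v and linear waiting costs, the loop above can be rendered as the following Python sketch (ours, for illustration; the bisection is just one way to find the earliest τ , using that the accumulated waiting cost is continuous and non-decreasing):

    def pseudo_service_times(arrivals, weight_v, horizon):
        # arrivals: arrival times of the requests issued in T_v; waiting cost of each is max(0, t - a).
        def cost(batch, tau):
            return sum(max(0.0, tau - a) for a in batch)
        times = []
        t = float("-inf")
        while True:
            batch = [a for a in arrivals if a > t]        # requests released strictly after the last mark
            if cost(batch, horizon) < weight_v:
                return times
            lo, hi = min(batch), horizon                  # earliest tau with cost(batch, tau) = weight_v
            for _ in range(60):
                mid = (lo + hi) / 2
                if cost(batch, mid) >= weight_v:
                    hi = mid
                else:
                    lo = mid
            times.append(hi)
            t = hi

    print(pseudo_service_times([0.0, 0.0, 5.0], weight_v=4.0, horizon=10.0))   # approximately [2.0, 9.0]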
Lemma 5.5. scost(S̄) ≤ D · cost(S∗ ).
Proof. It is sufficient to show that the total service cost of the new pseudo-services added inside
the while loop is at most scost(S∗ ) + D · wcost(S∗ ): Adding scost(S∗ ) once more to account for the
service cost of the services of S∗ that are included in S̄, and using our assumption that D ≥ 3, we
obtain scost(S̄) ≤ 2 · scost(S∗ ) + D · wcost(S∗ ) ≤ D · cost(S∗ ), thus the lemma follows.
To prove the claim, consider some node v, and a pair of times t, τ from one iteration of the
while loop, when a new pseudo-service was added to S at time τ . This pseudo-service has cost
`v . In S∗ , either there is a service in (t, τ ] including v, or the total waiting cost of the requests
within Tv released in this interval is equal to ωR(>t) (Tv , τ ) = `v . In the first case, we charge the
cost of `v of this pseudo-service to any service of v in S∗ in (t, τ ]. Since we consider here only the
new pseudo-services, created by the above pseudo-code, this charging will be one-to-one. In the
second case, we charge `v to the total waiting cost of the requests in Tv released in the interval
(t, τ ]. For each given v, the charges of the second type from pseudo-services at v go to disjoint
sets of requests in Tv , so each request in Tv will receive at most one charge from v. Therefore,
for each request ρ, its waiting cost in S∗ will be charged at most D times, namely at most once
from each node v on the path from σρ to q. From the above argument, the total cost of the new
pseudo-services is at most scost(S∗ ) + D · wcost(S∗ ), as claimed.
Using the bound in Lemma 5.5 will allow us to use scost(S̄) as an estimate of the optimal cost
in our charging scheme, losing at most a factor of D in the competitive ratio.
Charging scheme. According to Lemma 5.3, to establish constant competitiveness it is sufficient to bound only the service cost of Algorithm OnlTree. By Lemma 5.4 for any service tree
X of the algorithm we have `(X) ≤ RL · `(C). Therefore, it is in fact sufficient to bound the
total weight of the critical sets in the algorithm’s services. Further, using Lemma 5.5, instead of
using the optimal cost in this bound, we can use the pseudo-service cost. Following this idea, we
will show how we can charge, at a constant rate, the cost of all critical sets C in the algorithm’s
services to the adversary pseudo-services.
The basic idea of our charging method is similar to that for MLAP-D. The argument in Section 4
can be interpreted as an iterative charging scheme, where we have a charge of `q that originates
from q, and this charge is gradually distributed and transferred down the service tree, through
overdue nodes, until it reaches critically overdue nodes that can be charged directly to adversary
services. For MLAP with general waiting costs, the charge of `(C) will originate from the current
critical subtree C. Several complications arise when we attempt to distribute the charges to nodes
at deeper levels. First, due to gradual accumulation of waiting costs, it does not seem possible to
identify nodes in the same service tree that can be used as either intermediate or final nodes in this
process. Instead, when defining a charge from a node v, we will charge descendants of v in earlier
services of v. Specifically, the weight `v will be charged to the set U (v, i, t− ) for some i > depth(v),
where t− is the time of the previous service of the algorithm that includes v. The nodes — or,
more precisely, services of these nodes — that can be used as intermediate nodes for transferring
charges will be called depth-timely. As before, we will argue that each charge will eventually reach
a node u in some earlier service that can be charged to some adversary pseudo-service directly.
Such service of u will be called u-local, where the name reflects the property that this service has
an adversary pseudo-service of u nearby (to which its weight `u will be charged).
We now formalize these notions. Let (X, t) be some service of Algorithm OnlTree that
includes v, that is v ∈ X. By Prevt (v) we denote the time of the last service of v before t in the
schedule of the algorithm; if it does not exist, set Prevt (v) = −∞. By Nextt (v, i) we denote the
time of the ith service of v following t in the schedule of the algorithm; if it does not exist, set
Nextt (v, i) = +∞.
We say that the service of v at time t < H is i-timely, if M t (v) < Nextt (v, i); furthermore,
if v is depth(v)-timely, we will say simply that this service of v is depth-timely. We say that the
service of v at time t < H is v-local, if this is either the first service of v by the algorithm, or if
there is an adversary pseudo-service of v in the interval (Prevt (v), Nextt (v, depth(v))].
Given an algorithm’s service (X, t), we now define the outgoing charges from X. For any
v ∈ X − {r}, its outgoing charge is defined as follows:
(C1) If t < H and the service of v at time t is both depth-timely and v-local, charge `v to the
first adversary pseudo-service of v after time Prevt (v).
(C2) If t < H and the service of v at time t is depth-timely but not v-local, charge `v to the
algorithm’s service at time Prevt (v).
(C3) If t < H and the service of v at time t is not depth-timely, the outgoing charge is 0.
(C4) If t = H and v ∈ X, we charge `v to the first adversary pseudo-service of v.
We first argue that the charging is well-defined. To justify (C1) suppose that this service is
depth-timely and v-local. If (X, t) is the first service of v then Prevt (v) = −∞ and the charge goes
to the first pseudo-service of v which exists as all the requests must be served. Otherwise there
is an adversary pseudo-service of v in the interval (Prevt (v), Nextt (v, depth(v))] and rule (C1) is
well-defined. For (C2), note that if the service (X, t) of v is not v-local then there must be an
earlier service including v. (C3) is trivial. For (C4), note again that an adversary pseudo-service of
v must exist, as all requests must be served.
The following lemma implies that all nodes in the critical subtree will have an outgoing charge,
as needed.
Lemma 5.6. For a service time t < H, each v ∈ C t (q) is 1-timely, and thus also depth-timely.
Proof. From Lemma 5.1, each v ∈ C t (q) satisfies M t (v) ≤ M t (q) = t < Nextt (q, 1) ≤ Nextt (v, 1),
where the sharp inequality follows from Lemma 5.2.
The following lemma captures the key property of our charging scheme. For any depth-timely
service of v ∈ X that is not v-local, it identifies a set U (v, i, t− ) in the previous service (X − , t− )
including v that is suitable for receiving a charge. It is important that each such set is used
only once, has sufficient weight, and contains only depth-timely nodes. As we show later, these
properties imply that in this charging scheme the net charge (the difference between the outgoing
and incoming charge) from each service X is at least as large as the total weight of its critical
subtree.
As in the argument for MLAP-D, we need to find an urgent node w ∈ Xv which is not in X −
and has its parent in X − . There are two important issues caused by the fact that the urgency
is given by the maturity times instead of deadlines. The first issue is that the maturity time
can decrease due to new request arrivals — to handle this, we argue that if the new requests had
large waiting costs, they would guarantee the existence of a pseudo-service of node v in the given
time interval and thus the algorithm’s service of v would be v-local. The second issue is that
the maturity time is not given by a single descendant but by adding the node contributions from
the whole tree — thus instead of searching for w on a single path, we need a more subtle, global
argument to identify such w.
Lemma 5.7. Assume that the service of v at time t < H is depth-timely and not v-local. Let
i = depth(v), and let (X − , t− ) be the previous service of Algorithm OnlTree including v, that
is t− = Prevt (v). Then there exists j > i such that all the nodes in the set U (v, j, t− ) from the
construction of X − in the algorithm are depth-timely and `(U (v, j, t− )) ≥ `v .
Proof. Let t∗ = M t (v) and let C ′ = C t (v) be the critical subtree of v at time t. Since the service
of v at time t is i-timely, we have t∗ < Nextt (v, i). (It may be the case that t∗ < t, but that does
not hamper our proof in any way.) Also, since the service of v at time t is not v-local, it is not
the first service of v, thus t− and X − are defined.
Let P − be the set of requests pending right after time t− (including those with arrival time t−
but not those served at time t− ), and let P be the set of requests with arrival time in the interval
(t− , t]. The key observation is that the total waiting cost of all the requests in C ′ that arrived
after t− satisfies
ωP (C ′ , t∗ ) < `v .   (3)
To see this, simply note that ωP (C ′ , t∗ ) ≥ `v would imply that ωR(>t− ) (Tv , t∗ ) ≥ `v . This in turn
would imply the existence of a pseudo-service of v in the interval (t− , t∗ ] ⊆ (Prevt (v), Nextt (v, i)],
which would contradict the assumption that the service of v at time t is not v-local. (Note that if
t∗ ≤ t− then ωP (C ′ , t∗ ) = 0 as t∗ is before the arrival time of any request in P and the inequality
holds trivially.)
Since P − ∪ P contains all the requests pending at time t, the choice of t∗ and C ′ implies that
ωP − ∪P (C ′ , t∗ ) = `(C ′ ) .   (4)
P − does not contain any requests in C ′ ∩ X − , as those were served at time t− ; therefore
ωP − (C ′ , t∗ ) = ωP − (C ′ − X − , t∗ ). Letting B be the set of all nodes w ∈ C ′ − X − for which
parent(w) ∈ X − , we have C ′ − X − = ∪w∈B Cw′ , where all sets Cw′ , for w ∈ B, are disjoint. (See
Figure 2.) Also, v ∈ C ′ ∩ X − . Combining these observations, and using inequalities (3) and (4),
[Figure 2: Illustration of the proof of Lemma 5.7.]
we get
Σw∈B ωP − (Cw′ , t∗ ) = ωP − (∪w∈B Cw′ , t∗ )
                     = ωP − (C ′ − X − , t∗ )
                     = ωP − (C ′ , t∗ )
                     = ωP − ∪P (C ′ , t∗ ) − ωP (C ′ , t∗ )
                     > `(C ′ ) − `v
                     ≥ `(C ′ ) − `(C ′ ∩ X − )
                     = `(C ′ − X − ) = Σw∈B `(Cw′ ).
It follows that there exists w ∈ B such that
ωP − (Cw′ , t∗ ) > `(Cw′ ).   (5)
Equation (5) implies that M t− (w) ≤ t∗ , using also the fact that w was not served at t− , so P −
contains exactly all the requests used to define M t− (w). Let j = depth(w); note that j > i as w is
a descendant of v. Since w ∉ X − but parent(w) ∈ X − , and M t− (w) is finite, the definition of the
extra sets for X − implies that U (v, j, t− ) has sufficient weight and all its nodes are more urgent
than w. More precisely, `(U (v, j, t− )) ≥ `v and any z ∈ U (v, j, t− ) has M t− (z) ≤ M t− (w) ≤ t∗ .
It remains to show that every z ∈ U (v, j, t− ) is depth-timely at time t− . Indeed, since
depth(z) = j ≥ i + 1 and any service containing z contains also v, we get
Nextt− (z, j) ≥ Nextt− (z, i + 1) ≥ Nextt− (v, i + 1) = Nextt (v, i) > t∗ ≥ M t− (z) ,
where the last step uses the inequality t∗ ≥ M t− (z) derived in the previous paragraph. Thus z is
depth-timely, as needed. The proof of the lemma is now complete.
Competitive analysis. We are now ready to complete our competitive analysis of MLAP.
Theorem 5.8. There exists an O(D^4 2^D)-competitive algorithm for MLAP on trees of depth D.
Proof. We will show that Algorithm OnlTree’s competitive ratio for L-decreasing trees of depth
D ≥ 3 is at most 4D^2 RL , where RL = (2 + 1/L)^{D−1}. By applying Theorem 3.1, this implies
that there is an online algorithm for arbitrary trees with ratio at most 4D^3 L(2 + 1/L)^{D−1}. For
L = D/2, this ratio is bounded by 3D^4 2^D, implying the theorem (together with the fact that for
D = 1, 2, constant-competitive algorithms are known).
So now we fix an L-decreasing tree T and focus our attention on Algorithm OnlTree’s
schedule S and on the adversary pseudo-schedule S̄. Define the net charge from a service (X, t)
in S to be the difference between the outgoing and incoming charge of (X, t). Our goal is to show
that each pseudo-service in S̄ is charged only a constant number of times and that the net charge
from each service (X, t) in S is at least `(X)/RL .
Consider first an adversary pseudo-service of v at a time τ . We argue that it is charged at
most (D + 3)`v : If this is the first pseudo-service of v, it may be charged once from the first service of
v by rule (C1) and once from the last service of v at time t = H by rule (C4). In addition, by rule (C1)
it may be charged D times from the last D services of v before τ , and once from the first service
at or after τ . All the charges are equal to `v .
Now consider a service (X, t) of Algorithm OnlTree. For t = H, all the nodes of X have an
outgoing charge by rule (C4) and there is no incoming charge. Thus the net charge from X is
`(X) ≥ `(X)/RL .
For t < H, let X = C ∪ E, where C is the critical subtree and E is the extra set. From
Lemma 5.6, all nodes in C are depth-timely, so they generate outgoing charge of at least `(C)
from X. Next, we show that the net charge from the extra set E is non-negative. Recall that E
is a disjoint union of sets of the form U (w, k, t) and E is disjoint from C. If a future service of a
node v generates the charge of `v to X by rule (C2), it must be the service at time Nextt (v, 1),
so such a charge is unique for each v. Furthermore, Lemma 5.7 implies that one of the extra sets
U (v, j, t), for j > i, has `(U (v, j, t)) ≥ `v and consists of depth-timely nodes only. Thus these
nodes have outgoing charges adding up to at least `v ; these charges go either to the adversary’s
pseudo-services or the algorithm’s services before time t. We have shown that the net charge from
each extra set U (w, k, t) is non-negative; therefore, the net charge from E is non-negative as well.
We conclude that the net charge from X is at least `(C). Applying Lemma 5.4, we obtain that
this net charge is at least `(X)/RL .
Summing over all the services (X, t) in S, we get a bound for the service cost of schedule S:
scost(S) ≤ (D + 3)RL · scost(S̄). Applying Lemmata 5.3 and 5.5, we get
cost(S) ≤ 2 · scost(S)
        ≤ 2(D + 3)RL · scost(S̄)
        ≤ 2D(D + 3)RL · cost(S∗ ) ≤ 4D^2 RL · cost(S∗ ).
We have thus shown that Algorithm OnlTree’s competitive ratio for L-decreasing trees is at
most 4D^2 RL , which, as explained earlier, is sufficient to complete the proof.
6 Single-Phase MLAP
We now consider a restricted variant of MLAP that we refer to as Single-Phase MLAP, or 1P-MLAP.
In 1P-MLAP all requests arrive at the beginning, at time 0. The instance also includes a parameter
θ representing the common expiration time for all requests. We do not require that all requests
are served. Any unserved request pays only the cost of waiting until the expiration time θ.
In the online variant of 1P-MLAP, all requests, including their waiting cost functions, are known
to the online algorithm at time 0. The only unknown is the expiration time θ.
Although not explicitly named, variants of 1P-MLAP have been considered in [12, 9], where
they were used to show lower bounds on competitive ratios for MLAP. These proofs consist of
two steps, first showing a lower bound for online 1P-MLAP and then arguing that, in the online
scenario, 1P-MLAP can be expressed as a special case of MLAP. (A corresponding property holds
in the offline case as well, but is quite trivial.) We also use the same general approach in Section 7
to show our lower bounds.
To see that (in spite of the expiration feature) 1P-MLAP can be thought of as a special case
of MLAP, we map an instance J of 1P-MLAP into an instance J′ of MLAP with the property
that any R-competitive algorithm for J′ can be converted into an R-competitive algorithm for J .
We will explain the general idea when the cost function is linear; the construction for arbitrary
cost functions is based on the same idea, but it involves some minor technical obstacles. Let θ
be the expiration time from J . Choose some large integers K and M . The constructed instance
J′ consists of K “nested” and “compressed” copies of J , that we also refer to as phases. In the
i-th phase we multiply the waiting cost function of each node by M^i. We let this phase start
at time (1 − M^{−i})θ (that is, at this time the requests from this phase are released) and end
at time θ. Thus the length of phase i is M^{−i}·θ. The main trick is that, in J′, at time θ the
adversary can serve all pending requests (from all phases) at a cost that is independent of K,
so the contribution of this service cost to the cost of each phase is negligibly small. Following this
idea, any R-competitive algorithm for J′ can be converted into an R-competitive algorithm for
J , except for some vanishing additive constant. (See [12, 9] for more details.)
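As a small illustration, the sketch below (Python, with made-up parameter values; not part of the construction in [12, 9]) computes the release time, length, and waiting-cost multiplier of each phase of the constructed instance J′, assuming linear waiting costs.

# Sketch: phase parameters of the 1P-MLAP -> MLAP reduction (linear costs assumed).
# Phase i starts at (1 - M^-i)*theta, ends at theta, and scales waiting costs by M^i.
def phase_parameters(theta, K, M):
    phases = []
    for i in range(1, K + 1):
        phases.append({
            "release": (1 - M ** (-i)) * theta,   # requests of phase i are released here
            "length": M ** (-i) * theta,          # phase i lasts M^-i * theta
            "cost_multiplier": M ** i,            # waiting costs multiplied by M^i
        })
    return phases

for p in phase_parameters(theta=1.0, K=4, M=10):
    print(p)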
6.1 Characterizing Optimal Solutions
Suppose that the expiration value is θ = t. Then the optimal solution is to serve some subtree X
(rooted at r) already at time 0 and wait until the end of the phase at time t with the remaining
requests in X̄ = T − X. So now we consider schedules of this form, that consist of one service
subtree X ⊆ T at time 0. The cost of this schedule (that we identify with X itself) is
cost(X, t) = `(X) + ω(X̄, t),
where, for any set U ⊆ T , ω(U, t) = Σ_ρ ωρ (U, t) denotes the waiting cost of all requests in U (see
Section 5.1).
Our first objective is to characterize those subtrees X that are optimal for θ = t. This
characterization will play a critical role in our online algorithm for 1P-MLAP, provided later
in this section and it also leads to an offline polynomial-time algorithm for computing optimal
solutions, given in Section 6.3.
The lemma below can be derived by expressing 1P-MLAP as a linear program and using strong
duality. We provide instead a simple combinatorial proof. For each subtree Z of T , we denote its
root by rZ . (Also, recall that Zv is the induced subtree of Z rooted at v, that is, Zv contains all
descendants of v in Z.)
Lemma 6.1. A service X is optimal for an expiration time θ = t if and only if it satisfies the
following two conditions:
(a) ω(Xv , t) ≥ `(Xv ) for each v ∈ X, and
(b) ω(Z, t) ≤ `(Z) for each subtree Z, disjoint with X, such that parent(rZ ) ∈ X.
Proof. (⇒) We begin by proving that (a) and (b) are necessary conditions for optimality of X.
(a) Suppose that there is a v ∈ X for which ω(Xv , t) < `(Xv ). Let Y = X − Xv . Then Y is a
service tree (empty if v = r), and we have
cost(Y, t) = `(Y ) + ω(Ȳ , t) = `(X) − `(Xv ) + ω(X̄, t) + ω(Xv , t) < `(X) + ω(X̄, t) = cost(X, t),
contradicting the optimality of X.
(b) Suppose that there is a subtree Z that violates condition (b), that is Z∩X = ∅, parent(rZ ) ∈
X, but ω(Z, t) > `(Z). Let Y = X ∪ Z. Then Y is a service tree and
cost(Y, t) = `(Y ) + ω(Ȳ , t) = `(X) + `(Z) + ω(X̄, t) − ω(Z, t) < `(X) + ω(X̄, t) = cost(X, t),
contradicting the optimality of X.
(⇐) We now prove sufficiency of conditions (a) and (b). Suppose that X satisfies (a) and
(b), and let Y be any other service subtree of T . From (b), for any node z ∈ Y − X with
parent(z) ∈ X ∩ Y we have ω(Yz , t) ≤ `(Yz ). Since both X and Y are rooted at r, any node in
Y − X is in some induced subtree Yz , for some z such that parent(z) ∈ X ∩ Y . This implies that
ω(Y − X, t) ≤ `(Y − X). Similarly, from (a), for any node v ∈ X − Y with parent(v) ∈ X ∩ Y we
have ω(Xv , t) ≥ `(Xv ). This implies that ω(X − Y, t) ≥ `(X − Y ). These inequalities give us that
cost(Y, t) = `(Y ) + ω(Ȳ , t) = `(X) + ω(X̄, t) + [ `(Y − X) − ω(Y − X, t) ] − [ `(X − Y ) − ω(X − Y, t) ] ≥ cost(X, t),
proving the optimality of X.
Following the terminology from Section 5.1, a subtree Z of T (not necessarily rooted at r) is
called mature at time t if ω(Z, t) ≥ `(Z). (We do not need to specify the set of requests in ω(Z, t),
as all requests are released at time 0.) In this section we will simplify this notation and write
“t-mature”, instead of “mature at time t”. We say that Z is t-covered if each induced subtree Zx ,
for x ≠ rZ , is t-mature. (Note that in this definition Z itself is not required to be t-mature.) We
now make two observations. First, if Z is t-covered then the definition implies that each induced
subtree Zv of Z is t-covered as well. Second, if Z = {rZ }, that is, if Z consists of only one node, then
Z is vacuously t-covered; thus any subtree Z of T has a t-covered subtree rooted at rZ .
Lemma 6.2. If X and Y are t-covered service subtrees of T then the service subtree X ∪ Y is
also t-covered.
Proof. If X = Y the lemma is trivial, so assume X ≠ Y . Choose any z ∈ (X − Y ) ∪ (Y − X) with
parent(z) ∈ X ∩ Y . Without loss of generality, we can assume that z ∈ X − Y . By definition, Xz
is t-mature and disjoint with Y .
Take Q = Y ∪ Xz . Q is a service subtree of T . We claim that Q is t-covered. To justify
this claim, choose any v ∈ Q. If v ∈ Y and z ∉ Qv , then Qv is t-mature because Qv = Yv . If
v ∈ Qz = Xz then Qv is t-mature because Qv = Xv . The remaining case is when v ∈ Y and
z ∈ Qv . Then ω(Qv , t) = ω(Yv , t) + ω(Xz , t) ≥ `(Yv ) + `(Xz ) = `(Qv ), so Qv is t-mature in this
case as well. Thus indeed Q is t-covered, as claimed.
We can now update Y by setting Y = Q and applying the above argument again. By repeating
this process, we will end up with X = Y , completing the proof.
Choose Ot to be the inclusion-maximal t-covered service subtree of T (that is, a subtree rooted
at r). By Lemma 6.2, Ot is well defined and unique. Also, from Lemma 6.1 we obtain that Ot is
optimal for expiration time θ = t. Thus the optimal cost when θ = t is
opt(t) = cost(Ot , t) = `(Ot ) + ω(Ōt , t).
Trivially, if a subtree Z is t-mature and t ≤ t′ then Z is t′-mature as well. This implies the
following corollary.
Corollary 6.3. For every t ≤ t′ it holds that Ot ⊆ Ot′ .
6.2 An Online Competitive Algorithm
Without loss of generality, we can assume that minv∈T −{r} `v > 1; otherwise the distances together
with the waiting costs can be rescaled to satisfy this property. To simplify the presentation we
will assume that for θ → ∞ the optimum cost grows to ∞. (Any instance can be modified to have
this property, without changing the behavior of the algorithm on T , by adding an infinite path to
the root of T , where the nodes on this path have waiting cost functions that are initially 0 and
then gradually increase.)
Algorithm OnlDoubling. For any i ≥ 0, define ti to be the first time when opt(ti ) = 2^i. At
each time ti serve Oti+1 .
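A minimal sketch of OnlDoubling is given below, assuming oracle access to opt(t) and to the subtrees Ot (for instance via CovSubT from Section 6.3); the helper names opt_cost_first_time and optimal_subtree are placeholders introduced here, not notation from the paper.

# Sketch of Algorithm OnlDoubling. In 1P-MLAP everything except the expiration
# time theta is known at time 0, so the times t_i and the subtrees O^{t_i} can be
# precomputed; the schedule is simply cut off once theta is revealed.
def onl_doubling_schedule(opt_cost_first_time, optimal_subtree, max_i):
    # opt_cost_first_time(c): first time t with opt(t) = c (opt is non-decreasing)
    # optimal_subtree(t): the maximal t-covered subtree O^t
    schedule = []
    for i in range(max_i):
        t_i = opt_cost_first_time(2 ** i)            # first time opt(t) reaches 2^i
        t_next = opt_cost_first_time(2 ** (i + 1))   # first time opt(t) reaches 2^(i+1)
        schedule.append((t_i, optimal_subtree(t_next)))   # at time t_i serve O^{t_{i+1}}
    return schedule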
Algorithm OnlDoubling is in essence a doubling algorithm [15]. However, although obtaining
some constant ratio using doubling is not difficult, the formulation that achieves the optimal factor
of 4 relies critically on the structure of optimal solutions that we elucidated earlier in this section.
For example, note that the sequence of service costs of the algorithm does not necessarily grow
exponentially.
Analysis. By our assumption that minv∈T −{r} `v > 1, we have Ot0 = {r}; that is, until time
t0 the optimum solution will not make any services and will only pay the waiting cost. This also
implies that ω(Ot0 , t0 ) ≤ 1.
We now estimate the cost of Algorithm OnlDoubling, for a given expiration time θ. Suppose
first that θ = tk , by which we mean that the expiration is right after the algorithm’s service at time
tk . The total service cost of the algorithm is trivially Σ_{i=0}^{k} `(Oti+1 ). To estimate the waiting cost,
consider some node v. If v ∈ Oti+1 − Oti , for some i = 0, ..., k, then the waiting cost of v is ω(v, ti ).
Otherwise, for v ∉ Otk+1 , the waiting cost of v is ω(v, θ) = ω(v, tk ). Thus OnlDoubling's total
cost is
alg(tk ) = Σ_{i=0}^{k} `(Oti+1 ) + ω(Ot0 , t0 ) + Σ_{i=0}^{k} ω(Oti+1 − Oti , ti ) + ω(Ōtk+1 , tk )
≤ Σ_{i=0}^{k} [ `(Oti+1 ) + ω(Ōti , ti ) ] + 1
≤ Σ_{i=0}^{k+1} [ `(Oti ) + ω(Ōti , ti ) ] + 1
= Σ_{i=0}^{k+1} opt(ti ) + 1 = Σ_{i=0}^{k+1} 2^i + 1 = 2^{k+2} = 4 · opt(tk ),
as needed.
Next, suppose that θ is between two service times, say tk ≤ θ ≤ tk+1 . From the optimality of
Otk at expiration time tk , we have opt(tk ) = `(Otk ) + ω(Ōtk , tk ) ≤ `(Oθ ) + ω(Ōθ , tk ). Using this
bound, the increase of the optimum cost from time tk to time θ can be estimated as follows:
opt(θ) − opt(tk ) ≥ `(Oθ ) + ω(Ōθ , θ) − [ `(Oθ ) + ω(Ōθ , tk ) ] = ω(Ōθ , θ) − ω(Ōθ , tk ) ≥ ω(Ōtk+1 , θ) − ω(Ōtk+1 , tk ),
where the last expression is the increase in Algorithm OnlDoubling’s cost from time tk to time
θ. This implies that the ratio at expiration time θ cannot be larger than the ratio at expiration
time tk .
Finally, we have the case when 0 ≤ θ < t0 . Thus opt(θ) < 1. By our assumption that all
weights are greater than 1, this implies that opt(θ) = ω(T , θ), and thus opt(θ) is the same as the
cost of the algorithm.
Summarizing, we obtain our main result of this section.
Theorem 6.4. OnlDoubling is 4-competitive for the Single-Phase MLAP.
6.3 An Offline Polynomial-Time Algorithm
The offline algorithm for computing the optimal solutions is based on the above-established properties of optimal sets Ot . It proceeds bottom up, starting at the leaves, and pruning out subtrees
that are not t-covered. The pseudo-code of our algorithm is shown below.
Algorithm 1 CovSubT(v, t)
Av ← {v}
δv ← ω(v, t)
for each child u of v do
(Au , δu ) ← CovSubT(u, t)
if δu ≥ `u then
Av ← Av ∪ Au
δv ← δv + δu − `u
return (Av , δv )
For each node v the algorithm outputs a pair (Av , δv ), where Av denotes the maximal (equivalently w.r.t. inclusion or cardinality) t-covered subtree of Tv rooted at v, and δv = ω(Av , t) −
`(Av − {v}), that is δv is the “surplus” waiting cost of Av at time t. (Note that we do not account
for `v in this formula.) To compute Ot , the algorithm returns CovSubT(r, t).
By a routine argument, the running time of Algorithm CovSubT is O(N ), where N is the size
of the instance (that is, the number of nodes in T plus the number of requests). Here, we assume
that the values ω(v, t) can be computed in time proportional to the number of requests in v.
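A direct Python transcription of CovSubT is sketched below; the tree representation (children lists, edge weights, and a per-node waiting cost ω(v, t)) is an assumption of the sketch and is not fixed by the pseudo-code above.

# Sketch: Python version of Algorithm CovSubT. children[v] lists the children
# of v, weight[v] is `v, and omega(v, t) is the waiting cost at time t of the
# requests issued at node v.
def cov_sub_t(v, t, children, weight, omega):
    # Returns (A_v, delta_v): the maximal t-covered subtree rooted at v and its
    # surplus waiting cost delta_v = omega(A_v, t) - `(A_v - {v}).
    A_v = {v}
    delta_v = omega(v, t)
    for u in children[v]:
        A_u, delta_u = cov_sub_t(u, t, children, weight, omega)
        if delta_u >= weight[u]:        # the covered part of T_u is t-mature: keep it
            A_v |= A_u
            delta_v += delta_u - weight[u]
    return A_v, delta_v

# O^t is obtained as cov_sub_t(r, t, ...)[0], where r is the root.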
7 MLAP on Paths
We now consider the case when the tree is just a path. For simplicity we will assume a generalization to the continuous case, that we refer to as the MLAP problem on the line, when the path
is represented by the half-line [0, ∞); that is the requests can occur at any point x ∈ [0, ∞). Then
the point 0 corresponds to the root, each node is a point x ∈ [0, ∞), and each service is an interval
of the form [0, x]. We say that an algorithm delivers from x if it serves the interval [0, x].
We provide several results for the MLAP problem on the line. We first prove that the competitive ratio of MLAP-D (the variant with deadlines) on the line is exactly 4, by providing matching
upper and lower bounds. Then later we will show that the lower bound of 4 can be modified to
work for MLAP-L (that is, for linear waiting costs).
Algorithm OnlLine. The algorithm creates a service only when a deadline of a pending request
is reached. If a deadline of a request at x is reached, then OnlLine delivers from 2x.
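A minimal sketch of one step of OnlLine follows; the pending-request bookkeeping and the tie-break among simultaneous deadlines are assumptions of the sketch.

# Sketch of Algorithm OnlLine for MLAP-D on the line. pending is a list of
# (position, deadline) pairs; the algorithm acts only when a deadline is reached.
def onlline_step(pending, now):
    due = [x for (x, d) in pending if d <= now]
    if not due:
        return None, pending            # no deadline reached: no service
    x = max(due)                        # a due request at x (farthest one, if several)
    reach = 2 * x                       # deliver from 2x, i.e. serve the interval [0, 2x]
    remaining = [(p, d) for (p, d) in pending if p > reach]
    return reach, remaining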
Theorem 7.1. Algorithm OnlLine is 4-competitive for MLAP-D on the line.
Proof. The proof uses a charging strategy. We represent each adversary service, say when the
adversary delivers from a point y, by an interval [0, y]. The cost of each service of OnlLine is
then charged to a segment of one of those adversary service intervals.
Consider a service triggered by a deadline t of a request ρ at some point x. When serving
ρ, OnlLine delivered from 2x. The adversary must have served ρ between its arrival time and
its deadline t. Fix the last such service of the adversary, where at a time t′ ≤ t the adversary
delivered from a point x′ ≥ x. We charge the cost 2x of the algorithm's service to the segment
[x/2, x] of the adversary's service interval [0, x′] at time t′.
We now claim that no part of the adversary’s service is charged twice. To justify this claim,
suppose that there are two services of OnlLine, at times t1 < t2 , triggered by requests from points
x1 and x2 , respectively, that both charge to an adversary's service from x′ at time t′ ≤ t1 . By the
definition of charging, the request at x2 was already present at time t′. As x2 was not served by
OnlLine's service at t1 , it means that x2 > 2x1 , and thus the charged segments [x1/2, x1] and
[x2/2, x2] of the adversary service interval at time t′ are disjoint.
Summarizing, for any adversary service interval [0, y], its charged segments are disjoint. Any
charged segment receives the charge equal to 4 times its length. Thus this interval receives the
total charge at most 4y. This implies that the competitive ratio is at most 4.
Lower bounds. We now show lower bounds of 4 for MLAP-D and MLAP-L on the line. In both
proofs we show the bound for the corresponding variant of 1P-MLAP, using a reduction from the
online bidding problem [15, 14]. Roughly speaking, in online bidding, for a given universe U of
real numbers, the adversary chooses a secret value u ∈ U and the goal of the algorithm is to
find an upper-bound on u. To this end, the algorithm outputs an increasing sequence of numbers
x1 , x2 , x3 , . . .. The game is stopped after the first xk that is at least u and the bidding ratio is
then defined as Σ_{i=1}^{k} xi /u.
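For intuition, the short sketch below evaluates the bidding ratio of the classical doubling sequence x_i = 2^i over the universe U = {1, . . . , B}; it only illustrates the definition and is not part of the lower-bound construction.

# Sketch: bidding ratio of the doubling sequence x_i = 2^i against a secret u.
def bidding_ratio(bids, u):
    total = 0
    for x in bids:
        total += x
        if x >= u:                      # the game stops at the first bid >= u
            return total / u
    raise ValueError("the bid sequence never reaches u")

B = 1024
bids = [2 ** i for i in range(11)]      # 1, 2, 4, ..., 1024
print(max(bidding_ratio(bids, u) for u in range(1, B + 1)))   # close to 4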
Chrobak et al. [14] proved that the optimal bidding ratio is exactly 4, even if it is restricted
to sets U of the form {1, 2, . . . , B}, for some integer B. More precisely, they proved the following
result.
Lemma 7.2. For any R < 4, there exists B > 0, such that any sequence of integers 0 = x0 <
x1 < x2 < . . . < xm−1 < xm = B has an index k ≥ 1 with Σ_{i=0}^{k} xi > R · (xk−1 + 1).
Theorem 7.3. There is no online algorithm for MLAP-D on the line with competitive ratio smaller
than 4.
Proof. We show that no online algorithm for 1P-MLAP-D (the deadline variant of 1P-MLAP) on
the line can attain competitive ratio smaller than 4. Assume the contrary, i.e., that there exists
a deterministic algorithm Alg that is R-competitive, where R < 4. Let B be the integer whose
existence is guaranteed by Lemma 7.2. We create an instance of 1P-MLAP-D, where, at time 0,
for every x ∈ {1, . . . , B} there is a request at x with deadline x.
Without loss of generality, Alg issues services only at integer times 1, 2, ..., B. The strategy
of Alg can be now defined as a sequence of services at times t1 < t2 < . . . < tm , where at time
ti it delivers from xi ∈ {ti , ti + 1, ..., B}. Without loss of generality, x1 < x2 < . . . < xm . We may
assume that xm = B (otherwise the algorithm is not competitive at all); we also add a dummy
service from x0 = 0 at time t0 = 0.
The adversary now chooses some k ≥ 1 and stops the game at the expiration time that is
right after the algorithm's kth service, say θ = tk + 1/2. Alg's cost is then Σ_{i=0}^{k} xi . The request
at xk−1 + 1 is not served at time tk−1 , so, to meet the deadline of this request, the schedule of
Alg must satisfy tk ≤ xk−1 + 1. This implies that θ < xk−1 + 2, that is, all requests at points
xk−1 + 2, xk−1 + 3, ..., B expire before their deadlines and do not need to be served. Therefore, to
serve this instance, the optimal solution may simply deliver from xk−1 + 1 at time 0. Hence, the
competitive ratio of Alg is at least Σ_{i=0}^{k} xi /(xk−1 + 1). By Lemma 7.2, it is possible to choose k
such that this ratio is strictly greater than R, a contradiction with R-competitiveness of Alg.
Next, we show that the same lower bound applies to MLAP-L, the version of MLAP where the
waiting cost function is linear. This improves the lower bound of 3.618 from [9].
Theorem 7.4. There is no online algorithm for MLAP-L on the line with competitive ratio smaller
than 4.
Proof. Similarly to the proof of Theorem 7.3, we create an instance of 1P-MLAP-L (the variant
of 1P-MLAP with linear waiting cost functions) that does not allow a better than 4-competitive
online algorithm. Fix any online algorithm Alg for 1P-MLAP-L and, towards a contradiction,
suppose that it is R-competitive, for some R < 4. Again, let B be the integer whose existence is
guaranteed by Lemma 7.2. In our instance of 1P-MLAP-L, there are 6^{B−x} requests at x for any
x ∈ {1, 2, . . . , B}.
Without loss of generality, we make the same assumptions as in the proof of Theorem 7.3:
algorithm Alg is defined by a sequence of services at times 0 = t0 < t1 < t2 < . . . < tm , where
at each time ti it delivers from some point xi . Without loss of generality, we can assume that
0 = x0 < x1 < . . . < xm = B.
Again, the strategy of the adversary is to stop the game at some expiration time θ that is right
after some time tk , say θ = tk + ε, for some small ε > 0. The algorithm pays Σ_{i=0}^{k} xi for serving
the requests. The requests at xk−1 + 1 waited for time tk in Alg's schedule and hence Alg's
waiting cost is at least 6^{B−xk−1−1} · tk .
The adversary delivers from point xk−1 + 1 at time 0. The remaining, unserved requests
at points xk−1 + 2, xk−1 + 3, . . . , B pay time θ each for waiting. There are Σ_{j=xk−1+2}^{B} 6^{B−j} ≤
(1/5) · 6^{B−xk−1−1} such requests and hence the adversary's waiting cost is at most (1/5) · 6^{B−xk−1−1} · (tk + ε).
Therefore, the algorithm-to-adversary ratio on the waiting costs is at least 5tk /(tk + ε). For
any k we can choose a sufficiently small ε so that this ratio is larger than 4. By Lemma 7.2, it is
possible to choose k for which the ratio on servicing cost is strictly greater than R. This yields a
contradiction to the R-competitiveness of Alg.
We point out that the analysis in the proof above gives some insight into the behavior of
any 4-competitive algorithm for 1P-MLAP-L (we know such an algorithm exists, by the results in
Section 6), namely that, for the type of instances used in the above proof, its waiting cost must
be negligible compared to the service cost.
8 An Offline 2-Approximation Algorithm for MLAP-D
In this section we consider the offline version of MLAP-D, for which Becchetti et al. [5] gave
a polynomial-time 2-approximation algorithm based on LP-rounding. We give a much simpler
argument that does not rely on linear programming.
We will use an alternative specification of schedules that is easier to reason about in the context
of offline approximations. If S is a schedule, for each node x ∈ T we can specify the set Sx of times
t for which S contains a service (X, t) with x ∈ X. Then the set {Sx }x∈T uniquely determines S.
Note that we have Sx ⊆ Sy whenever y is the parent of x. Further, we can now write the service
cost as scost(S) = Σ_{x∈T} |Sx | `x . It is easy to see that (without loss of generality) in an optimal
(offline) schedule S each service time is equal to some deadline, and we will make this assumption
in this section; in particular, Sr can be assumed to be the set of all deadlines.
Let J be the given instance. For each node v, define Jv to be the set of all intervals [aρ , dρ ],
for requests ρ issued in Tv .
Algorithm OffLByL. We proceed level by level, starting at the root and in order of increasing
depth, computing the service times Sv for all nodes v ∈ T . For the root r, Sr is the set of the
deadlines of all requests. Consider now some node v with parent u for which Su has already
been computed. Using the standard earliest-deadline algorithm, compute Sv as the minimum
cardinality subset of Su that intersects all intervals in Jv .
Algorithm OffLByL clearly runs in polynomial time; in fact it can be implemented in time
O(N log N ), where N is the total size of J .
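The per-node step of OffLByL is the standard earliest-deadline greedy restricted to the parent's service times; a sketch is given below (the input representation is an assumption of the sketch).

# Sketch: one level of Algorithm OffLByL. Given the parent's service times S_u
# and the intervals J_v = [(a, d), ...], pick a minimum-cardinality subset of
# S_u that intersects every interval (earliest-deadline greedy).
import bisect

def stab_with_candidates(parent_times, intervals):
    times = sorted(parent_times)
    chosen, last = [], None
    for a, d in sorted(intervals, key=lambda iv: iv[1]):   # process by deadline
        if last is not None and a <= last:                 # last <= d holds automatically
            continue                                       # interval already hit
        i = bisect.bisect_right(times, d) - 1              # latest candidate time <= d
        if i < 0 or times[i] < a:
            raise ValueError("no candidate time hits the interval")
        last = times[i]
        chosen.append(last)
    return chosen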
We now show that the approximation ratio of Algorithm OffLByL is at most 2. (It is easy to
find an example showing that this ratio is not better than 2.) Denote by S∗ an optimal schedule
for J . According to our convention, S∗v is then the set of times when v is served in S∗ . Since
cost(S) = Σ_v `v |Sv | and the optimum cost is cost(S∗ ) = Σ_v `v |S∗v |, it is sufficient to show that
|Sv | ≤ 2|S∗v | for each v ≠ r. This is quite simple: if u is the father of v then Su intersects all
intervals in Jv . We construct S′v ⊆ Su as follows. For each t ∈ S∗v , choose the maximal t− ∈ Su
such that t− ≤ t, and the minimal t+ ∈ Su such that t+ ≥ t. Add t− , t+ to S′v . (More precisely,
each of them is added only if it is defined.) Then S′v ⊆ Su and |S′v | ≤ 2|S∗v |. Further, any
interval [aρ , dρ ] ∈ Jv contains some t ∈ S∗v and intersects Su , so it also must contain either t−
or t+ . Therefore S′v intersects all intervals in Jv . Since we pick Sv optimally from Su , we have
|Sv | ≤ |S′v | ≤ 2|S∗v |, completing the proof.
9 General Waiting Costs
Our model of MLAP assumes full continuity, namely that the time is continuous and that the
waiting costs are continuous functions of time, while in some earlier literature authors use the
discrete model. Thus we still need to show that our algorithms can be applied in the discrete
model without increasing their competitive ratios. We also consider the model where some request
may remain unserved. We explain how our results can be extended to these models as well. We
will also show that our results can be extended to functions that are left-continuous, and that
MLAP-D can be represented as a special case of MLAP with left-continuous functions. While those
reductions seem intuitive, they do involve some pesky technical challenges, and they have not yet
been formally treated in the literature.
Extension to the discrete model. In the discrete model (see [12], for example), requests
arrive and services may happen only at integral points t = 1, . . . , H, where H is the time horizon.
The waiting cost functions ωρ are also specified only at integral points. (The model in [12] also
allows waiting costs to be non-zero at the release time. However we can assume that ωρ (aρ ) = 0,
since increasing the waiting cost function uniformly by an additive constant can only decrease the
competitive ratio.)
We now show how to simulate the discrete time model in the model where time and waiting
costs are continuous. Suppose that A is an R-competitive online algorithm for the model with
continuous time and continuous waiting cost functions. We construct an R-competitive algorithm
B for the discrete time model.
Let J = ⟨T , R⟩ be an instance given to B. We extend each waiting cost function ωρ to non-integral times as follows: for each integral t = aρ , . . . , H − 1 we define ωρ (τ ) for τ ∈ (t, t + 1) so
that it continuously increases from ωρ (t) to ωρ (t + 1) (e.g., by linear interpolation); ωρ (τ ) = 0 for
all τ < aρ ; and ωρ (τ ) = ωρ (H) for all τ > H.
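A small sketch of this extension step follows, using linear interpolation (one admissible choice mentioned above) between consecutive integral values.

# Sketch: extend a discrete waiting cost function to continuous time by linear
# interpolation. w maps each integer t in [a_rho, H] to omega_rho(t), with w[a_rho] = 0.
def extend_waiting_cost(w, a_rho, H):
    def omega(tau):
        if tau < a_rho:
            return 0.0
        if tau >= H:
            return float(w[H])          # constant after the time horizon
        t = int(tau)                    # integral point just below tau
        frac = tau - t
        return (1 - frac) * w[t] + frac * w[t + 1]
    return omega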
Algorithm B presents the instance J = ⟨T , R⟩ with these continuous waiting cost functions
to A. At each integral time t = 1, . . . , H − 1, B simulates A on the whole interval [t, t + 1). If
A makes one or more services, B makes a single service at time t which is their union. This is
possible, since no request arrives in (t, t + 1). At time H, algorithm B issues the same service as
A.
Overall, B produces a feasible schedule in the discrete time model. The cost of B does not
exceed the cost of A. On the other hand, any feasible (offline) schedule S in the discrete time
model is also a feasible schedule in the continuous time model with the same cost. Thus B is
R-competitive.
Unserved requests with bounded waiting costs. In our definition of MLAP we require that
all the requests are eventually served. However, if the waiting cost of a request ρ is bounded, it
is natural to allow a possibility that ρ is not served in a schedule S; in that case it incurs waiting
cost wcost(ρ, S) = limt→+∞ ωρ (t). In this variant, there is no time horizon in the instance.
Our algorithm OnlTree works in this model as well, with the competitive ratio increased at
most by one. The only modification of the algorithm is that there is no final service at the time
horizon. Instead we let the time proceed to infinity, issuing services at the maturity times of q
(the quasi-root of T ).
To modify our charging scheme to this variant, the key observation is that if a node v is serviced
neither in OnlTree nor in an optimal schedule S∗ , then the requests at v pay the same
waiting costs in both schedules. Thus we can ignore such nodes and requests at them. We claim
that for each remaining node v, the pseudo-schedule S̄ contains at least one pseudo-service of v:
Indeed, otherwise v is not served in S∗ and the total (limit of the) waiting cost of all the (unserved)
requests in the induced subtree Tv is less than `v , which implies that the maturity time of v is
always infinite and thus v is never serviced in OnlTree either, contradicting the fact that v was
not ignored before. Now consider all the remaining unserved requests and add to the schedule of
OnlTree one last service that serves all these requests. As the unserved requests do not cause
q to mature, this increases the cost of OnlTree; at the same time the service of each node can
be charged to a pseudo-service of the same node in S̄, which increases the competitive ratio by at
most 1.
Extension to left-continuous waiting costs. We now argue that we can modify our algorithms to handle left-continuous waiting cost functions, i.e., functions that satisfy lim_{τ↗t} ωρ (τ ) =
ωρ (t) for each time t ≥ 0. Left-continuity enables an online algorithm to serve a request at the
last time when its waiting cost is at or below some given threshold.
Some form of left-continuity is also necessary for constant competitiveness. To see this, think
of a simple example of a tree of depth 1 and with `q = 1, and a sequence of requests in q with
release times approaching 1, and waiting cost functions defined by ωρ (1) = K ≫ 1 and ωρ (t) = 0
for t < 1. If an online algorithm serves one such request before time 1, the adversary immediately
releases another. The sequence stops either after K requests or after the algorithm serves some
request at or after time 1, whichever comes first. The optimal cost is at most `q = 1, while the
online algorithm pays at least K.
The basic (but not quite correct) idea of our argument for left-continuous waiting cost functions
is this: For any time point h where some waiting cost function has a discontinuity, we replace
point h by a “gap interval” [h, h + ε], for some ε > 0. The release times after time h and the values
of all waiting cost functions after h are shifted to the right by ε. In the interval [h, h + ε], for
each request ρ, its waiting cost function is filled in by any non-decreasing continuous curve with
value ν− at h and ν+ at h + ε, for ν− = ωρ (h) and ν+ = lim_{τ↘h} ωρ (τ ). Thus the waiting cost
functions that are continuous at h are simply “stretched” in this gap interval, where their values
remain constant. This will convert the original instance J into an instance J′ with continuous
waiting cost functions; then we can apply a simulation similar to the one for the discrete model,
with the behavior of an algorithm A on J′ inside [h, h + ε] mimicked by the algorithm B on J
while staying at time h.
The above construction, however, has a flaw: as B is online, for each newly arrived request ρ
it would need to know the future requests in order to correctly modify ρ’s waiting cost function
(which needs to be fully revealed at the arrival time). Thus, inevitably, B will need to be able to
modify waiting cost functions of earlier requests, but the current state of A may depend on these
functions. Such changes could make the computation of A meaningless. To avoid this problem,
we will focus only on algorithms A for continuous cost functions that we call stretch-invariant.
Roughly, those are algorithms whose computation is not affected by the stretching operation
described above.
To formalize this, let I = {[hi , hi + εi ] | i = 1, . . . , k} be a finite set of gap intervals, where all
times hi are distinct. (For now we can allow the εi ’s to be any positive reals; their purpose will be
explained later.) Let shift(t, I) = t + Σ_{i: hi < t} εi denote the time t shifted right by inserting intervals
I on the time axis. We extend this operation to requests in a natural way: for any request ρ with
a continuous waiting cost function, shift(ρ, I) denotes the request modified by inserting I on the
time axis and filling in the values of ωρ in the inserted intervals by constant functions, as described
earlier. For a set of requests P ⊆ R, the stretched set of requests shift(P, I) is the set consisting
of requests shift(ρ, I), for all ρ ∈ P .
Consider an online algorithm A for MLAP with continuous waiting cost functions. We say that
A is stretch-invariant if for every instance J = ⟨T , R⟩ and any set of gap intervals I, the schedule
produced by A for the instance ⟨T , shift(R, I)⟩ is obtained from the schedule produced by A for
J by shifting it according to I, namely every service (X, t) is replaced by service (X, shift(t, I)).
Most natural algorithms for MLAP are stretch-invariant. In case of OnlTree, observe that its
behavior depends only on the maturity times MP (v) where P is the set of pending requests and
Mshift(P,I) (v) = shift(MP (v), I); in particular stretching does not change the order of the maturity
times. Using induction on the current time t, we observe OnlTree creates a service (X, t) in
its schedule for the request set R if and only if OnlTree creates a service (X, shift(t, I)) in its
schedule for the request set shift(R, I).
Suppose that A is an R-competitive online algorithm for continuous waiting cost functions that
is stretch-invariant. We convert A into an R-competitive algorithm B for left-continuous waiting
costs. Let J = ⟨T , R⟩ be an instance given to B. Algorithm B maintains the set of gap intervals
I, and a set of requests P presented to A; both sets are initially empty. Algorithm B at time t
simulates the computation of A at time shift(t, I).
If a new request ρ ∈ R is released at time t = aρ , algorithm B obtains ρ′ from shift(ρ, I)
by replacing the discontinuities of ωρ by new gap intervals Iρ on which ωρ′ is defined so that it
continuously increases. (If a gap interval already exists in I at the given point, it is used instead of
creating a new one, to maintain the starting points distinct.) We set aρ′ = shift(t, I), which is the
current time in A. We update I to I ∪ Iρ ; this does not change the current time in A as all new
gap intervals start at or after t. We stretch the set of requests P by Iρ ; this does not change the
past output of A, because A is stretch-invariant. (Note that the state of A at time t may change,
but this does not matter for the simulation.) Finally, we add the new request ρ′ to P.
If the current time t in B is at a start point of a gap interval, i.e., t = hi , algorithm B simulates
the computation of A on the whole shifted gap interval ⟨shift(hi , I), εi ⟩. If A makes one or more
services in ⟨shift(hi , I), εi ⟩, B makes a single service at time t which is their union.
The cost of B for requests R does not exceed the cost of A for requests P. Any adversary
schedule S for R induces a schedule S′ for P with the same cost. Since A's cost is at most
R · cost(S′), we obtain that B's cost is at most R · cost(S); hence B is R-competitive.
In the discussion above we assumed that the instance has a finite number of discontinuities.
Arbitrary left-continuous waiting cost functions may have infinitely many discontinuity points, but
the set of these points must be countable. The construction described above extends to arbitrary
left-continuous cost functions, as long as we choose the εi values so that their sum is finite.
Reduction of MLAP-D to MLAP. We now argue that MLAP-D can be expressed as a variant of
MLAP with left-continuous waiting cost functions. The idea is simple: a request ρ with deadline
dρ can be assigned a waiting cost function ωρ (t) that is 0 for times t ∈ [0, dρ ] and ∞ for t > dρ
– except that we cannot really use ∞, so we need to replace it by some sufficiently large number.
If σρ = v, we let ωρ (t) = `∗v for t > dρ , where `∗v is the sum of all weights on the path from v to r (the
“distance” from v to r). This will convert an instance J of MLAP-D into an instance J 0 of MLAP
with left-continuous waiting cost functions.
We claim that, without loss of generality, any online algorithm A for J 0 serves any request ρ
before or at time dρ . Otherwise, A would have to pay waiting cost of `∗v for ρ (where v = σρ ), so
we can modify A to serve ρ at time dρ instead, without increasing its cost. We can then treat A
as an algorithm for J . A will meet all deadlines in J and its cost on J will be the same as its
cost on J 0 , which means that its competitive ratio will also remain the same.
Note that algorithm OnlTree (or rather its extension to the left-continuous waiting costs, as
described above) does not need this modification, as it already guarantees that when the waiting
cost of a request at v reaches `∗v , all the nodes on the path from v to r are mature and thus the
whole path is served.
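For completeness, a sketch of the left-continuous waiting cost used in this reduction is given below (here ell_star_v stands for `∗v , the total weight on the path from v to r).

# Sketch: the left-continuous waiting cost encoding a deadline d_rho for a
# request at node v in the MLAP-D -> MLAP reduction.
def deadline_waiting_cost(d_rho, ell_star_v):
    def omega(t):
        return 0.0 if t <= d_rho else ell_star_v   # jumps right after the deadline
    return omega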
References
[1] A. Aggarwal and J. K. Park. Improved algorithms for economic lot sizing problems. Operations
Research, 41:549–571, 1993.
[2] S. Albers and H. Bals. Dynamic TCP acknowledgment: Penalizing long delays. SIAM Journal
on Discrete Mathematics, 19(4):938–951, 2005.
[3] E. Arkin, D. Joneja, and R. Roundy. Computational complexity of uncapacitated multi-echelon production planning problems. Operations Research Letters, 8(2):61–66, 1989.
[4] B. R. Badrinath and P. Sudame. Gathercast: the design and implementation of a programmable aggregation mechanism for the internet. In Proc. 9th International Conference on
Computer Communications and Networks (ICCCN), pages 206–213, 2000.
[5] L. Becchetti, A. Marchetti-Spaccamela, A. Vitaletti, P. Korteweg, M. Skutella, and L. Stougie.
Latency-constrained aggregation in sensor networks. ACM Transactions on Algorithms,
6(1):13:1–13:20, 2009.
[6] M. Bienkowski, M. Böhm, J. Byrka, M. Chrobak, C. Dürr, L. Folwarczný, L. Jez, J. Sgall,
N. K. Thang, and P. Veselý. Online algorithms for multi-level aggregation. In Proc. 24th
Annual European Symposium on Algorithms (ESA’16), pages 12:1–12:17, 2016.
[7] M. Bienkowski, J. Byrka, M. Chrobak, N. B. Dobbs, T. Nowicki, M. Sviridenko, G. Swirszcz,
and N. E. Young. Approximation algorithms for the joint replenishment problem with deadlines. Journal of Scheduling, 18(6):545–560, 2015.
[8] M. Bienkowski, J. Byrka, M. Chrobak, L. Jeż, D. Nogneng, and J. Sgall. Better approximation
bounds for the joint replenishment problem. In Proc. 25th ACM-SIAM Symp. on Discrete
Algorithms (SODA), pages 42–54, 2014.
[9] M. Bienkowski, J. Byrka, M. Chrobak, L. Jeż, J. Sgall, and G. Stachowiak. Online control
message aggregation in chain networks. In Proc. 13th Int. Workshop on Algorithms and Data
Structures (WADS), pages 133–145, 2013.
[10] E. Bortnikov and R. Cohen. Schemes for scheduling of control messages by hierarchical
protocols. In Proc. 17th IEEE Int. Conference on Computer Communications (INFOCOM),
pages 865–872, 1998.
[11] C. Brito, E. Koutsoupias, and S. Vaya. Competitive analysis of organization networks or
multicast acknowledgement: How much to wait? Algorithmica, 64(4):584–605, 2012.
[12] N. Buchbinder, T. Kimbrel, R. Levi, K. Makarychev, and M. Sviridenko. Online make-to-order
joint replenishment model: Primal-dual competitive algorithms. In Proc. 19th ACM-SIAM
Symp. on Discrete Algorithms (SODA), pages 952–961, 2008.
[13] N. Buchbinder and J. S. Naor. The design of competitive online algorithms via a primal-dual
approach. Foundations and Trends in Theoretical Computer Science, 3(2–3):93–263, 2009.
[14] M. Chrobak, C. Kenyon, J. Noga, and N. E. Young. Incremental medians via online bidding.
Algorithmica, 50(4):455–478, 2008.
[15] M. Chrobak and C. Kenyon-Mathieu. SIGACT news online algorithms column 10: competitiveness via doubling. SIGACT News, 37(4):115–126, 2006.
[16] W. B. Crowston and M. H. Wagner. Dynamic lot size models for multi-stage assembly systems.
Management Science, 20(1):14–21, 1973.
[17] D. R. Dooly, S. A. Goldman, and S. D. Scott. On-line analysis of the TCP acknowledgment
delay problem. Journal of the ACM, 48(2):243–273, 2001.
[18] J. S. Frederiksen, K. S. Larsen, J. Noga, and P. Uthaisombut. Dynamic TCP acknowledgment
in the LogP model. Journal of Algorithms, 48(2):407–428, 2003.
[19] F. Hu, X. Cao, and C. May. Optimized scheduling for data aggregation in wireless sensor
networks. In Int. Conference on Information Technology: Coding and Computing (ITCC),
volume 2, pages 557–561, 2005.
[20] A. R. Karlin, C. Kenyon, and D. Randall. Dynamic TCP acknowledgement and other stories
about e/(e - 1). Algorithmica, 36(3):209–224, 2003.
[21] S. Khanna, J. Naor, and D. Raz. Control message aggregation in group communication
protocols. In Proc. 29th Int. Colloq. on Automata, Languages and Programming (ICALP),
pages 135–146, 2002.
[22] A. Kimms. Multi-Level Lot Sizing and Scheduling: Methods for Capacitated, Dynamic, and
Deterministic Models. Springer-Verlag, 1997.
[23] D. M. Lambert and M. C. Cooper. Issues in supply chain management. Industrial Marketing
Management, 29(1):65–83, 2000.
[24] R. Levi, R. Roundy, and D. B. Shmoys. A constant approximation algorithm for the one-warehouse multi-retailer problem. In Proc. 16th ACM-SIAM Symp. on Discrete Algorithms
(SODA), pages 365–374, 2005.
[25] R. Levi, R. Roundy, and D. B. Shmoys. Primal-dual algorithms for deterministic inventory
problems. Mathematics of Operations Research, 31(2):267–284, 2006.
[26] R. Levi, R. Roundy, D. B. Shmoys, and M. Sviridenko. A constant approximation algorithm
for the one-warehouse multiretailer problem. Management Science, 54(4):763–776, 2008.
[27] R. Levi and M. Sviridenko. Improved approximation algorithm for the one-warehouse multiretailer problem. In Proc. 9th Int. Workshop on Approximation Algorithms for Combinatorial
Optimization (APPROX), pages 188–199, 2006.
[28] T. Nonner and A. Souza. Approximating the joint replenishment problem with deadlines.
Discrete Mathematics, Algorithms and Applications, 1(2):153–174, 2009.
[29] C. Papadimitriou. Computational aspects of organization theory. In Proc. 4th European
Symp. on Algorithms (ESA), pages 559–564, 1996.
[30] L. L. C. Pedrosa. Private communication, 2013.
[31] S. S. Seiden. A guessing game and randomized online algorithms. In Proc. 32nd ACM Symp.
on Theory of Computing (STOC), pages 592–601, 2000.
[32] S. Vaya. Brief announcement: Delay or deliver dilemma in organization networks. In Proc.
31st ACM Symp. on Principles of Distributed Computing (PODC), pages 339–340, 2012.
[33] H. Wagner and T. Whitin. Dynamic version of the economic lot size model. Management
Science, 5:89–96, 1958.
[34] W. Yuan, S. V. Krishnamurthy, and S. K. Tripathi. Synchronization of multiple levels of
data fusion in wireless sensor networks. In Proc. Global Telecommunications Conference
(GLOBECOM), pages 221–225, 2003.
A Message Passing Approach for Decision
Fusion in Adversarial Multi-Sensor Networks
Andrea Abrardo, Mauro Barni, Kassem Kallas and Benedetta Tondi
arXiv:1702.08357v1 [] 27 Feb 2017
Department of Information Engineering and Mathematics, Via Roma 56, Siena, Italy
Abstract
We consider a simple, yet widely studied, set-up in which a Fusion Center
(FC) is asked to make a binary decision about a sequence of system states by
relying on the possibly corrupted decisions provided by byzantine nodes, i.e.
nodes which deliberately alter the result of the local decision to induce an error
at the fusion center. When independent states are considered, the optimum
fusion rule over a batch of observations has already been derived, however its
complexity prevents its use in conjunction with large observation windows.
In this paper, we propose a near-optimal algorithm based on message passing that greatly reduces the computational burden of the optimum fusion rule.
In addition, the proposed algorithm retains very good performance also in the
case of dependent system states. By first focusing on the case of small observation windows, we use numerical simulations to show that the proposed scheme
introduces a negligible increase of the decision error probability compared to
the optimum fusion rule. We then analyse the performance of the new scheme
when the FC makes its decision by relying on long observation windows. We do
so by considering both the case of independent and Markovian system states and
show that the obtained performance is superior to that obtained with prior
suboptimal schemes. As an additional result, we confirm the previous finding
that, in some cases, it is preferable for the byzantine nodes to minimise the mutual information between the sequence of system states and the reports submitted
to the FC, rather than always flipping the local decision.
Keywords: Adversarial signal processing, Decision fusion in adversarial
setting, Decision fusion in the presence of Byzantines, Message passing
algorithm, Factor graph.
1. Introduction
Decision fusion for distributed detection has received an increasing attention
for its importance in several applications, including wireless networks, cognitive
radio, multimedia forensics and many others. One of the most common scenarios
is the parallel distributed fusion model. According to this model, the n nodes
of a multi-sensor network gather information about a system and make a local
decision about the system status. Then the nodes send the local decisions to
a Fusion Center (FC), which is in charge of making a final decision about the
state of the system. [1]
In this paper, we focus on an adversarial version of the above problem, in
which a number of malicious nodes, often referred to as Byzantines [1], aims at
inducing a decision error at the FC [2]. This is a recurrent problem in many
situations wherein the nodes may make a profit from a decision error. As an
example, consider a cognitive radio system [3, 4, 5, 6] in which secondary users
cooperate in sensing the frequency spectrum to decide about its occupancy
and the possibility to use the available spectrum to transmit their own data.
While cooperation among secondary users allows them to make a better decision, it
is possible that one or more users deliberately alter their measurements to let
the system think that the spectrum is busy, when in fact it is not, in order to
gain an exclusive opportunity to use the spectrum. Online reputation systems
offer another example [7]. Here a fusion center must make a final decision about
the reputation of an item like a good or a service by relying on users' feedback.
Even in this case, it is possible that malevolent users provide a fake feedback to
alter the reputation of the item under inspection. Similar examples are found in
many other applications, including wireless sensor networks [2], [3], distributed
detection [8], [9], multimedia forensics [10] and adversarial signal processing [11].
In this paper we focus on a binary version of the fusion problem, wherein
the system can assume only two states. Specifically, the nodes observe the
system over m time instants and make a local decision about the sequence of
system states. Local decisions are not error-free and hence they may be wrong
with a certain error probability. Honest nodes send their decision to the fusion
center, while byzantine nodes try to induce a decision error and hence flip the
local decision with probability Pmal before sending it to the FC. The fusion
center knows that some of the nodes may be Byzantines, according to a certain probability
distribution, but it does not know their positions.
1.1. Prior Work
In a simplified version of the problem, the FC makes its decision on the
status of the system at instant j by relying only on the corresponding reports,
and ignoring the node reports relative to different instants. In this case, and in
the absence of Byzantines, the Bayesian optimal fusion rule has been derived
in [12],[13] and it is known as Chair-Varshney rule. If local error probabilities
are symmetric and equal across the network, Chair-Varshney rule boils down to
simple majority-based decision. In the presence of Byzantines, Chair-Varshney
rule requires the knowledge of Byzantines’ positions along with the flipping
probability Pmal . Since this information is rarely available, the FC may resort
to a suboptimal fusion strategy.
In [8], by adopting a Neyman-Pearson setup and assuming that the byzantine
nodes know the true state of the system, the asymptotic performance obtainable by the FC is analysed as a function of the percentage of Byzantines in
the network. By formalising the attack problem as the minimisation of the
Kullback-Leibler distance between the reports received by the FC under the
two hypotheses, the blinding percentage, that is, the percentage of Byzantines
irremediably compromising the possibility of making a correct decision, is determined.
In order to improve the estimation of the sequence of system states, the
FC can gather a number of reports provided by the nodes before making a
global decision (multiple observation fusion). In cooperative spectrum sensing,
for instance, this corresponds to collectively deciding about the white holes over a
time window, or, more realistically, at different frequency slots. The advantage
of deciding over a sequence of states rather than on each single state separately,
is that in such a way it is possible for the FC to understand which are the
byzantine nodes and discard the corresponding observations (such an operation
is usually referred to as Byzantine isolation). Such a scenario has also been
studied in [8], showing that - at least asymptotically - the blinding percentage
is always equal to 50%. In [14], the analysis of [8] is extended to a situation
in which the Byzantines do not know the true state of the system. Byzantine
isolation is achieved by counting the mismatches between the reports received
from each node and the global decision made by the FC. The performance of the
proposed scheme is evaluated in a cognitive-radio scenario for finite values of n.
In order to cope with the lack of knowledge about the strategy adopted by the
attacker, the decision fusion problem is casted into a game-theoretic formulation,
where each party makes the best choice without knowing the strategy adopted
by the other party.
A slightly different approach is adopted in [15]. By assuming that the FC is
able to derive the statistics of the reports submitted by honest nodes, Byzantine
isolation is carried out whenever the reports received from a node deviate from
the expected statistics. In this way, a correct decision can be made also when
the percentage of Byzantines exceeds 50%. The limit of the approach proposed
in [15], is that it does not work when the reports sent by the Byzantines have
the same statistics of those transmitted by the honest nodes. This is the case,
for instance, in a perfectly symmetric setup with equiprobable system states,
symmetric local error probabilities, and an attack strategy consisting of simple
decision flipping.
A soft isolation scheme is proposed in [16], where the reports from suspect byzantine nodes are given a lower importance rather than being immediately
discarded. Even in [16], the lack of knowledge at the FC about the strategy
adopted by the attacker (and vice versa) is coped with by adopting a game-theoretic formulation. A rather different approach is adopted in [17], where a
tolerant scheme that mitigates the impact of Byzantines on the global decision
is used rather than removing the reports submitted by suspect nodes from the
fusion procedure.
When the value of Pmal and the probability that a node is Byzantine are
known, the optimum fusion rule under multiple observation can be derived [18].
Since Pmal is usually not known to the FC, in [18] the value of Pmal used to
define the optimum fusion rule and the value actually used by the Byzantines
are strategically chosen in a game-theoretic context. Different priors about
the distribution of Byzantines in the network are considered ranging from an
extreme case in which the exact number of Byzantines in the network is known
to a maximum entropy case. One of the main results in [18] is that the best
option for the Byzantines is not to always flip the local decision (corresponding
to Pmal = 1), since this would ease the isolation of malicious nodes. In fact,
for certain combinations of the distribution of Byzantines within the network
and the length of the observation window, it is better for the Byzantines to
minimise the mutual information between the reports submitted to the FC and
the system states.
1.2. Contribution
The main problem of the optimum decision fusion scheme proposed in [18] is
its computational complexity, which grows exponentially with the length of the
observation window. Such a complexity prevents the adoption of the optimum
decision fusion rule in many practical situations. Also the results regarding the
optimum strategies of the Byzantines and the FC derived in [18] refer only to
the case of small observation windows.
In the attempt to diminish the computational complexity while minimising
the loss of performance with respect to the optimum fusion rule, we propose
a new, nearly-optimum, fusion scheme based on message passing and factor
graphs. Message passing algorithms, based on the so-called Generalised Distributive Law (GDL, [19],[20]), have been widely applied to solve a large range of
optimisation problems, including decoding of Low Density Parity Check (LDPC)
codes [21] and BCJR codes [19], dynamic programming [22], solution of probabilistic inference problems on Bayesian networks [23] (in this case message
passing algorithms are known as belief propagation). Here we use message passing to introduce a near-optimal solution of the decision fusion problem with
multiple observation whose complexity grows only linearly with the size of the
observation window, thus marking a dramatic improvement with respect to the
exponential complexity of the optimal scheme proposed in [18].
Using numerical simulations and by first focusing on the case of small observation windows, for which the optimum solution can still be applied, we prove
that the new scheme gives near-optimal performance at a much lower complexity than the optimum scheme. We then use numerical simulations to evaluate
the performance of the proposed method for long observation windows. As a
result, we show that, even in this case, the proposed solution maintains the performance improvement over the simple majority rule, the hard isolation scheme
in [14] and the soft isolation scheme in [16].
As opposed to previous works, we do not limit our analysis to the case of
independent system states, but we extend it to a more realistic scenario where
the sequence of states obeys a Markovian distribution [24], as depicted in Figure 2.
The Markovian model is rather common in the case of cognitive radio networks
[25, 26, 27] where the primary user occupancy of the spectrum is often modelled
as a Hidden Markov Model (HMM). The Markovian case is found to be more
favourable for the FC with respect to the case of independent states, due to the
additional a-priori information available to the FC in this case.
Last but not least, we confirm that the dual optimum behaviour of
the Byzantines observed in [18] is also present in the case of large observation
windows, even if in the Markovian case, the Byzantines may continue using the
maximum attack power (Pmal = 1) for larger observation windows.
The rest of this paper is organised as follows. In Section 2, we introduce
the notation used in the paper and give a precise formulation of the addressed
problem. In Section 3, we describe the new message passing decision rule based
on factor graph. In Section 4, we first discuss the complexity of the proposed
Figure 1: Sketch of the adversarial decision fusion scheme.
solution compared to the optimal solution. Then, by considering both independent and Markovian system states, we compare the performance of the message
passing algorithm to the majority rule, the hard isolation scheme [14], the soft
isolation scheme described in [16] and the optimal fusion rule. In addition, we
discuss the impact that the length of the observation window has on the optimal behaviour of the Byzantines. We conclude the paper in Section 5 with
some final remarks.
2. Notation and Problem Formulation
The problem faced in this paper is depicted in Figure 1. We let s =
{s1 , s2 , . . . , sm } with si ∈ {0, 1} indicate the sequence of system states over
an observation window of length m. The nodes collect information about the
system through the vectors x1 , x2 . . . xn , with xj indicating the observations
available at node j. Based on such observations, a node j makes a local decision
ui,j about system state si . We assume that the local error probability, hereafter
indicated as ε, does not depend on either i or j. The state of the nodes in the
network is given by the vector h = {h1 , h2 , . . . , hn } with hj = 1/0 indicating
that node j is honest or Byzantine, respectively. Finally, the matrix R = {ri,j }, i = 1, . . . , m, j = 1, . . . , n contains all the reports received by the FC.
Figure 2: Markovian model for system states. When ρ = 0.5 subsequent states are independent.
Specifically, ri,j is the report sent by node j relative to si . As stated before, for
honest nodes we have ui,j = ri,j while, for Byzantines, we have p(ui,j ≠ ri,j ) =
Pmal . The Byzantines corrupt the local decisions independently of each other.
By assuming that the transmission between nodes and fusion center takes
place over error-free channels, the report is equal to the local decision with probability 1 for honest nodes and with probability 1 − Pmal for Byzantines. Hence,
according to the local decision error model, we can derive the probabilities of
the reports for honest nodes:
p (ri,j |si , hj = 1) = (1 − ε)δ(ri,j − si ) + ε(1 − δ(ri,j − si )),
(1)
where δ(a) is defined as:
δ(a) = 1 if a = 0, and δ(a) = 0 otherwise.   (2)
On the other hand, by introducing η = ε(1 − Pmal ) + (1 − ε)Pmal , i.e., the
probability that the fusion center receives a wrong report from a byzantine node,
we have:
p (ri,j |si , hj = 0) = (1 − η)δ(ri,j − si ) + η(1 − δ(ri,j − si )).   (3)
As for the number of Byzantines, we consider a situation in which the states
of the nodes are independent of each other and the state of each node is described
by a Bernoulli random variable with parameter α, that is p(hj = 0) = α, ∀j. In
this way, the number of byzantine nodes in the network is a random variable
following a binomial distribution, corresponding to the maximum entropy case
[18], with p(h) = ∏_j p(hj ), where p(hj ) = α(1 − hj ) + (1 − α)hj .
Regarding the sequence of states s, we assume a Markov model as shown in
Figure 2, i.e., p(s) = ∏_i p(si |si−1 ). The transition probabilities are given by
p(si |si−1 ) = 1 − ρ if si = si−1 and p(si |si−1 ) = ρ when si ≠ si−1 , whereas for
i = 1 we have p(s1 |s0 ) = p(s1 ) = 0.5.
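To make the model concrete, the following NumPy sketch (a hypothetical helper, not part of the paper) draws a state sequence from the Markov chain of Figure 2 and generates the report matrix R according to Eqs. (1)–(3).

# Sketch: sampling the observation model of Section 2.
import numpy as np

def simulate_reports(m, n, alpha, eps, p_mal, rho, seed=0):
    rng = np.random.default_rng(seed)
    s = np.empty(m, dtype=int)
    s[0] = rng.integers(2)                                   # p(s_1) = 0.5
    for i in range(1, m):
        s[i] = s[i - 1] ^ int(rng.random() < rho)            # transition with prob. rho
    h = (rng.random(n) >= alpha).astype(int)                 # h_j = 0 (Byzantine) w.p. alpha
    u = np.where(rng.random((m, n)) < eps, 1 - s[:, None], s[:, None])   # local decisions
    flip = (rng.random((m, n)) < p_mal) & (h[None, :] == 0)  # Byzantines flip w.p. P_mal
    R = np.where(flip, 1 - u, u)
    return s, h, R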
In this paper we look for the bitwise Maximum A Posteriori Probability
(MAP) estimation of the system states {si }, which reads as follows:
ŝi = arg max_{si∈{0,1}} p(si |R)
   = arg max_{si∈{0,1}} Σ_{{s,h}\si} p(s, h|R)            (law of total probability)
   = arg max_{si∈{0,1}} Σ_{{s,h}\si} p(R|s, h) p(s) p(h)       (Bayes)
   = arg max_{si∈{0,1}} Σ_{{s,h}\si} ∏_{i,j} p(ri,j |si , hj ) ∏_i p(si |si−1 ) ∏_j p(hj )     (4)
where the notation Σ_{\} denotes a summation over all the possible combinations of values that the variables contained in the expression within the summation may assume, by keeping the parameter listed after the operator \ fixed. For
a given h, the matrix of the observations R at the FC follows a HMM [28]. The
optimisation problem in (4) has been solved in [18] for the case of independent
system states. Even in such a simple case, however, the complexity of the optimum decision rule is exceedingly large, thus limiting the use of the optimum
decision only in the case of small observation windows (typically m not larger
than 10). In the next section we introduce a sub-optimum solution of (4) based
on message passing, which greatly reduces the computational complexity at the
price of a negligible loss of accuracy.
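For very small m and n, the bitwise MAP rule (4) can be evaluated by exhaustive enumeration; the sketch below does so and can serve as a correctness reference for the message passing scheme of the next section (it is exponential in both m and n, so usable only for toy instances).

# Sketch: brute-force evaluation of the bitwise MAP rule (4), only for tiny m, n.
import itertools
import numpy as np

def bitwise_map(R, eps, p_mal, alpha, rho):
    m, n = R.shape
    eta = eps * (1 - p_mal) + (1 - eps) * p_mal       # wrong-report prob. of a Byzantine
    post = np.zeros((m, 2))                           # unnormalised p(s_i = q, R)
    for s in itertools.product((0, 1), repeat=m):
        p_s = 0.5                                     # p(s_1) = 0.5, then Markov prior
        for i in range(1, m):
            p_s *= (1 - rho) if s[i] == s[i - 1] else rho
        for h in itertools.product((0, 1), repeat=n):
            p_h = 1.0
            for hj in h:
                p_h *= alpha if hj == 0 else 1 - alpha
            lik = 1.0                                 # p(R | s, h) from Eqs. (1) and (3)
            for i in range(m):
                for j in range(n):
                    err = eta if h[j] == 0 else eps
                    lik *= (1 - err) if R[i, j] == s[i] else err
            for i in range(m):
                post[i, s[i]] += p_s * p_h * lik
    return post.argmax(axis=1)                        # hat{s}_i, the bitwise MAP estimate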
3. A Decision Fusion Algorithm Based on Message Passing
3.1. Introduction to Sum-product message passing
In this section we provide a brief introduction to the message passing (MP)
algorithm for marginalization of sum-product problems. Let us start by considering N binary variables z = {z1 , z2 , . . . , zN }, zi ∈ {0, 1}. Then, consider the
function f (z) with factorization:
f(z) = ∏_k f_k(Z_k),        (5)
where fk , k = 1, . . . , M are functions of a subset Zk of the whole set of variables.
We are interested in computing the marginal of f with respect to a general
variable zi , defined as the sum of f over all possible values of z, i.e.:
μ(z_i) = Σ_{z\z_i} ∏_k f_k(Z_k),        (6)

where the notation Σ_{z\z_i} denotes a sum over all possible combinations of values of the variables in z, keeping z_i fixed. Note that the marginalization problem occurs when we want to compute an arbitrary probability from a joint probability by summing out the variables that we are not interested in. In this general setting, determining the marginals by exhaustive search requires 2^N operations. However,
in many situations it is possible to exploit the distributive law of multiplication
to get a substantial reduction in complexity.
To elaborate, let us associate with problem (6) a bipartite factor graph, in which
for each variable we draw a variable node (circle) and for each function we draw
a factor node (square). A variable node is connected to a factor node k by an
edge if and only if the corresponding variable belongs to Zk . This means that
the set of vertices is partitioned into two groups (the set of nodes corresponding
to variables and the set of nodes corresponding to factors) and that an edge
always connects a variable node to a factor node.
Let us now assume that the factor graph is a tree, i.e., a connected graph in which there is a unique path connecting any two nodes. In this case, it is straightforward to derive an algorithm which allows us to solve the marginalization problem
Figure 3: Node-to-factor message passing.
with reduced complexity. This is the MP algorithm, which has been broadly used in recent years in channel coding applications [29], [30].

To describe how the MP algorithm works, let us first define messages as 2-dimensional vectors, denoted by m = {m(0), m(1)}. Such messages are exchanged between variable nodes and function nodes and vice versa, according to the following rules. Let us first consider variable-to-function messages (m_{vf}), and take the portion of factor graph depicted in Fig. 3 as an illustrative example. In this graph, the variable node z_i is connected to L factor nodes, namely f_1, f_2, ..., f_L. For the MP algorithm to work properly, node z_i must deliver the messages m_{vf}^{(l)}, l = 1, ..., L, to all its adjacent nodes. Without loss of generality, let us focus on the message m_{vf}^{(1)}. Such a message can be evaluated and delivered upon receiving the messages m_{fv}^{(l)}, l = 2, ..., L, i.e., upon receiving messages from all function nodes except f_1. In particular, m_{vf}^{(1)} may be straightforwardly evaluated by calculating the element-wise product of the incoming messages, i.e.:

m_{vf}^{(1)}(q) = ∏_{j=2}^{L} m_{fv}^{(j)}(q),        (7)
for q = 0, 1. Let us now consider factor-to-variable messages, and refer to the
factor graph of Fig. 4 where P variable nodes are connected to the factor node
Figure 4: Factor-to-node message passing.
f_k, i.e., according to the previous notation, Z_k = {z_1, ..., z_P}. In this case, the node f_k must deliver the messages m_{fv}^{(l)}, l = 1, ..., P, to all its adjacent nodes. Let us consider again m_{fv}^{(1)}: upon receiving the messages m_{vf}^{(l)}, l = 2, ..., P, f_k may evaluate the message m_{fv}^{(1)} as:

m_{fv}^{(1)}(q) = Σ_{z_2,...,z_P} [ f_k(q, z_2, ..., z_P) ∏_{p=2}^{P} m_{vf}^{(p)}(z_p) ],        (8)

for q = 0, 1.
Given the message passing rules at each node, it is now possible to derive the MP algorithm which allows us to compute the marginals in (6). The process starts at the leaf nodes, i.e., those nodes which have only one connecting edge. In particular, each variable leaf node passes an all-ones message to its adjacent factor node, whilst each factor leaf node, say f_k(z_i), passes the message m_{fv}^{(k)}(q) = f_k(z_i = q) to its adjacent node z_i. After initialization at the leaf nodes, for every edge we can compute the outgoing message as soon as all incoming messages from all other edges connected to the same node have been received (according to the message passing rules (7) and (8)). When a message has been sent in both directions along every edge, the algorithm stops. This situation is depicted in Fig. 5: upon receiving messages from all its adjacent factor nodes, node z_i can evaluate the exact marginal as:
Figure 5: End of message passing for node zi .
μ(z_i) = ∏_{k=1,...,L} m_{fv}^{(k)}(z_i).        (9)
With regard to complexity, factor-to-variable message passing can be accomplished with 2^P operations, P being the number of variables in f_k. On the other hand, the complexity of variable-to-factor message passing can be neglected and, hence, the MP algorithm noticeably reduces the complexity of the problem, provided that the cardinality of Z_k is much lower than N. With regard to the optimization, equation (9) evaluates the marginal for both z_i = 0 and z_i = 1, which represents the computation of the sum-product for both hypotheses. Hence, the optimization is carried out by choosing the value of z_i which maximizes the marginal.
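A minimal sketch of the three rules above for binary variables is given below; messages are stored as length-2 NumPy arrays, and the function names are illustrative rather than taken from any specific library.

```python
import numpy as np

# A minimal sketch of the sum-product rules (7)-(9) for binary variables.
# Messages are length-2 arrays; all names are illustrative.

def variable_to_factor(incoming):
    """Rule (7): element-wise product of the incoming factor-to-variable
    messages (all edges except the one being updated)."""
    msg = np.ones(2)
    for m in incoming:
        msg *= m
    return msg

def factor_to_variable(factor, incoming):
    """Rule (8): `factor` is a table of shape (2,)*P; axis 0 is the target
    variable, the remaining axes carry the variables whose messages are in
    `incoming` (same order). Each message is multiplied in and summed out."""
    msg = factor.copy()
    for m in reversed(incoming):
        msg = (msg * m).sum(axis=-1)
    return msg

def marginal(incoming):
    """Rule (9): normalized marginal of a variable node from all of its
    incoming factor-to-variable messages."""
    mu = np.ones(2)
    for m in incoming:
        mu *= m
    return mu / mu.sum()
```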
3.2. Nearly-optimal data fusion by means of message passing
The objective function of the optimal fusion rule expressed in (4) can be seen
as the marginalization of a sum-product of functions of binary variables, and, as
such, it falls within the MP framework described in the previous Section. More
specifically, in our problem, the variables are the system states si and the status
of the nodes hj , while the functions are the probabilities of the reports shown in
Figure 6: Factor graph for the problem at hand.
equations (1) and (3), the conditional probabilities p(si |si−1 ), and the a-priori
probabilities p(hj ). The resulting bipartite graph is shown in Figure 6.
It is worth noting that the graph is a loopy graph, i.e., it contains cycles,
and as such it is not a tree. However, although it was originally designed for
acyclic graphical models, it was found that the MP algorithm can be used for
general graphs, e.g., in channel decoding problems [31]. In general, when the
marginalization problem is associated to a loopy graph, the implementation of
MP requires to establish a scheduling policy to initiate the procedure, so that
variable nodes may receive messages from all the connected factors, thus evaluating the marginals. In this case, a single run of the MP algorithm may not
be sufficient to achieve a good approximation of the exact marginals, and progressive refinements must be obtained through successive iterations. However,
in the presence of loopy graphs, there is no guarantee of either convergence or
optimality of the final solution. In many cases, the performance of the messagepassing algorithms is closely related to the structure of the graph, in general,
and its cycles, in particular. Many previous works in the field of channel cod-
ing, e.g., see [32], reached the conclusion that, for good performance, the factor
graph should not contain short cycles. In our case, it is possible to see from
Figure 6 that the shortest cycles have order 6, i.e., a message before returning
to the sender must cross at least six different nodes. We speculate that such a
minimum cycle length is sufficient to provide good performance for the problem
at hand. We will verify through simulations that this conjecture holds.
To elaborate further, based on the graph of Figure 6 and on the general
MP rules reported in the previous Section, we are now capable of deriving the
messages for the scenario at hand. In Figure 7, we display all the messages that are exchanged over the graph of Figure 6 to estimate in parallel each of the states s_i ∈ {0, 1} in the vector s = {s_1, s_2, ..., s_m}. Specifically,
we have:

τ_i^{(l)}(s_i) = ϕ_i^{(l)}(s_i) ∏_{j=1}^{n} ν_{i,j}^{(u)}(s_i),    i = 1, ..., m
τ_i^{(r)}(s_i) = ϕ_i^{(r)}(s_i) ∏_{j=1}^{n} ν_{i,j}^{(u)}(s_i),    i = 1, ..., m
ϕ_i^{(l)}(s_i) = Σ_{s_{i+1}=0,1} p(s_{i+1} | s_i) τ_{i+1}^{(l)}(s_{i+1}),    i = 1, ..., m−1
ϕ_i^{(r)}(s_i) = Σ_{s_{i−1}=0,1} p(s_i | s_{i−1}) τ_{i−1}^{(r)}(s_{i−1}),    i = 2, ..., m
ϕ_1^{(r)}(s_1) = p(s_1)
ν_{i,j}^{(u)}(s_i) = Σ_{h_j=0,1} p(r_{i,j} | s_i, h_j) λ_{j,i}^{(u)}(h_j),    i = 1, ..., m,  j = 1, ..., n
ν_{i,j}^{(d)}(s_i) = ϕ_i^{(r)}(s_i) ϕ_i^{(l)}(s_i) ∏_{k=1, k≠j}^{n} ν_{i,k}^{(u)}(s_i),    i = 1, ..., m−1,  j = 1, ..., n
ν_{m,j}^{(d)}(s_m) = ϕ_m^{(r)}(s_m) ∏_{k=1, k≠j}^{n} ν_{m,k}^{(u)}(s_m),    j = 1, ..., n
λ_{j,i}^{(d)}(h_j) = Σ_{s_i=0,1} p(r_{i,j} | s_i, h_j) ν_{i,j}^{(d)}(s_i),    i = 1, ..., m,  j = 1, ..., n
λ_{j,i}^{(u)}(h_j) = ω_j^{(u)}(h_j) ∏_{q=1, q≠i}^{m} λ_{j,q}^{(d)}(h_j),    i = 1, ..., m,  j = 1, ..., n
ω_j^{(d)}(h_j) = ∏_{i=1}^{m} λ_{j,i}^{(d)}(h_j),    j = 1, ..., n
ω_j^{(u)}(h_j) = p(h_j),    j = 1, ..., n
(10)
As for the scheduling policy, we initiate the MP procedure by sending the messages λ_{j,i}^{(u)}(h_j) = ω_j^{(u)}(h_j) to all the p(r_{i,j} | s_i, h_j) factor nodes, and by sending
Figure 7: Factor graph for the problem at hand with the illustration of all the exchanged messages.
the message p(s1 ) to the variable node s1 . Hence, the MP proceeds according to the general message passing rules, until all variable nodes are able to
compute the respective marginals. When this happens, the first iteration is
concluded. Then, successive iterations are carried out by starting from leaf
nodes and by taking into account the messages received at the previous iteration for the evaluation of the new messages. The algorithm is stopped upon achieving convergence of the messages, or after a maximum number of iterations.
The MP scheme described above can be simplified by observing that messages can be normalized without affecting the normalized marginals. Henceforward, let us consider as normalization factor the sum of the elements of a message, i.e., for τ_i^{(l)}(s_i) the normalization factor is τ_i^{(l)}(0) + τ_i^{(l)}(1). In this case, the normalized messages, say τ̄_i^{(l)}(s_i), can be conveniently represented as scalar terms in the interval (0, 1): e.g., we can consider τ̄_i^{(l)}(0) only, since τ̄_i^{(l)}(1) = 1 − τ̄_i^{(l)}(0). Accordingly, the normalized messages
can be evaluated as:

τ̄_i^{(l)} = ϕ̄_i^{(l)} ∏_{j=1}^{n} ν̄_{i,j}^{(u)} / [ ϕ̄_i^{(l)} ∏_{j=1}^{n} ν̄_{i,j}^{(u)} + (1 − ϕ̄_i^{(l)}) ∏_{j=1}^{n} (1 − ν̄_{i,j}^{(u)}) ],    i = 1, ..., m
τ̄_i^{(r)} = ϕ̄_i^{(r)} ∏_{j=1}^{n} ν̄_{i,j}^{(u)} / [ ϕ̄_i^{(r)} ∏_{j=1}^{n} ν̄_{i,j}^{(u)} + (1 − ϕ̄_i^{(r)}) ∏_{j=1}^{n} (1 − ν̄_{i,j}^{(u)}) ],    i = 1, ..., m
ϕ̄_i^{(l)} = ρ τ̄_{i+1}^{(l)} + (1 − ρ)(1 − τ̄_{i+1}^{(l)}),    i = 1, ..., m−1
ϕ̄_i^{(r)} = ρ τ̄_{i−1}^{(r)} + (1 − ρ)(1 − τ̄_{i−1}^{(r)}),    i = 2, ..., m
ϕ̄_1^{(r)} = p(s_1 = 0)
ν̄_{i,j}^{(u)} = [ p(r_{i,j}|0,0) λ̄_{j,i}^{(u)} + p(r_{i,j}|0,1)(1 − λ̄_{j,i}^{(u)}) ] / [ p(r_{i,j}|0,0) λ̄_{j,i}^{(u)} + p(r_{i,j}|0,1)(1 − λ̄_{j,i}^{(u)}) + p(r_{i,j}|1,0) λ̄_{j,i}^{(u)} + p(r_{i,j}|1,1)(1 − λ̄_{j,i}^{(u)}) ],    i = 1, ..., m,  j = 1, ..., n
ν̄_{i,j}^{(d)} = ϕ̄_i^{(r)} ϕ̄_i^{(l)} ∏_{k≠j} ν̄_{i,k}^{(u)} / [ ϕ̄_i^{(r)} ϕ̄_i^{(l)} ∏_{k≠j} ν̄_{i,k}^{(u)} + (1 − ϕ̄_i^{(r)})(1 − ϕ̄_i^{(l)}) ∏_{k≠j} (1 − ν̄_{i,k}^{(u)}) ],    i = 1, ..., m−1,  j = 1, ..., n
ν̄_{m,j}^{(d)} = ϕ̄_m^{(r)} ∏_{k≠j} ν̄_{m,k}^{(u)} / [ ϕ̄_m^{(r)} ∏_{k≠j} ν̄_{m,k}^{(u)} + (1 − ϕ̄_m^{(r)}) ∏_{k≠j} (1 − ν̄_{m,k}^{(u)}) ],    j = 1, ..., n
λ̄_{j,i}^{(d)} = [ p(r_{i,j}|0,0) ν̄_{i,j}^{(d)} + p(r_{i,j}|1,0)(1 − ν̄_{i,j}^{(d)}) ] / [ p(r_{i,j}|0,0) ν̄_{i,j}^{(d)} + p(r_{i,j}|1,0)(1 − ν̄_{i,j}^{(d)}) + p(r_{i,j}|0,1) ν̄_{i,j}^{(d)} + p(r_{i,j}|1,1)(1 − ν̄_{i,j}^{(d)}) ],    i = 1, ..., m,  j = 1, ..., n
λ̄_{j,i}^{(u)} = ω̄_j^{(u)} ∏_{q≠i} λ̄_{j,q}^{(d)} / [ ω̄_j^{(u)} ∏_{q≠i} λ̄_{j,q}^{(d)} + (1 − ω̄_j^{(u)}) ∏_{q≠i} (1 − λ̄_{j,q}^{(d)}) ],    i = 1, ..., m,  j = 1, ..., n
ω̄_j^{(d)} = ∏_{i=1}^{m} λ̄_{j,i}^{(d)} / [ ∏_{i=1}^{m} λ̄_{j,i}^{(d)} + ∏_{i=1}^{m} (1 − λ̄_{j,i}^{(d)}) ],    j = 1, ..., n
ω̄_j^{(u)} = p(h_j = 0),    j = 1, ..., n
(11)

where the products over k and q run over k = 1, ..., n with k ≠ j and q = 1, ..., m with q ≠ i, and p(r_{i,j} | a, b) is shorthand for p(r_{i,j} | s_i = a, h_j = b).
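As an example of how compact these scalar updates are, the sketch below implements the ν̄^{(u)} update from (11); p_r[s][h] stands for p(r_{i,j} | s_i = s, h_j = h), lam_up is the scalar value of the corresponding upward λ̄ message, and the names are illustrative.

```python
# A minimal sketch of one scalar update from (11): the normalized upward
# message from the factor p(r_{i,j}|s_i,h_j) to the state node s_i.
# p_r[s][h] stands for p(r_{i,j} | s_i = s, h_j = h); lam_up is the scalar
# (value at h_j = 0) of the corresponding upward lambda message.
def nu_up(p_r, lam_up):
    num = p_r[0][0] * lam_up + p_r[0][1] * (1 - lam_up)
    den = num + p_r[1][0] * lam_up + p_r[1][1] * (1 - lam_up)
    return num / den
```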
4. Simulation Results and Discussions
In this section, we analyze the performance of the MP decision fusion algorithm. We first consider the computational complexity, and then we evaluate
the performance in terms of error probability. In particular, we compare the
performance of the MP-based scheme to those of the optimum fusion rule [18]
(whenever possible), the soft isolation scheme presented in [16], the hard isolation scheme described in [14], and the simple majority rule. In our comparison, we consider both independent and Markovian system states, for both small and large observation windows m.
4.1. Complexity Discussion
In order to evaluate the complexity of the message passing algorithm and
compare it to that of the optimum fusion scheme, we consider both the number
of operations and the running time. By number of operations we mean the
number of additions, subtractions, multiplications and divisions performed by
the algorithm to estimate the vector of system states s.
By looking at equation (11), we see that running the message passing algorithm requires the following number of operations:

• 3n + 5 operations for each of τ̄_i^{(l)} and τ̄_i^{(r)};
• 3 operations for each of ϕ̄_i^{(l)} and ϕ̄_i^{(r)};
• 11 operations for ν̄_{i,j}^{(u)};
• 3n + 5 operations for ν̄_{i,j}^{(d)};
• 3n + 2 operations for ν̄_{m,j}^{(d)};
• 11 operations for λ̄_{j,i}^{(d)};
• 3m + 2 operations for each of λ̄_{j,i}^{(u)} and ω̄_j^{(d)};

summing up to 12n + 6m + 49 operations for each iteration over the factor graph.
On the other hand, in the case of independent node states, the optimal scheme
in [18] requires 2^m (m + n) operations. Therefore, the MP algorithm is much
less computationally expensive since it passes from an exponential to a linear
complexity in m. An example of the difference in computational complexity
between the optimum and the MP algorithms is depicted in Figure 8.
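The gap between the two expressions can be reproduced with a few lines of plain arithmetic (m = 10 and 5 MP iterations, as in Figure 8); the snippet below is purely illustrative.

```python
# A quick numerical comparison of the operation counts quoted above:
# 12n + 6m + 49 per MP iteration (here 5 iterations) vs. 2^m (m + n) for the
# optimal rule of [18] with independent states.
m, iterations = 10, 5
for n in (20, 40, 60, 80, 100):
    mp_ops = iterations * (12 * n + 6 * m + 49)
    opt_ops = 2 ** m * (m + n)
    print(n, mp_ops, opt_ops)
```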
With regard to time complexity, Table 1 reports the running time of the
MP and the optimal schemes. For n = 20, the optimal scheme running time is
17.547 times larger than that of the message passing algorithm. On the other
Figure 8: Number of operations required for different n, m = 10 and 5 message passing local
iterations for message passing and optimal schemes.
Table 1: Running Time (in seconds) for the Optimal and the Message Passing algorithms for:
m = 10, ε = 0.15, Number of Trials = 105 and Message Passing Iterations = 5.
Setting/Scheme        Message Passing    Optimal
n = 20, α = 0.45      943.807114         1.6561e+04
n = 100, α = 0.49     4888.821497        2.0817e+04
hand, for the case of n = 100, the optimal scheme needs around 4.258 times more time than the message passing scheme. The tests have been conducted using Matlab 2014b running on a machine with a 64-bit Windows 7 OS, 16.0 GB of installed RAM, and an Intel Core i7-2600 CPU @ 3.40GHz.
4.2. Performance Evaluation
In this section, we use numerical simulations to evaluate the performance of
the message passing algorithm and compare it to state-of-the-art schemes.
The results are divided into four parts. The first two parts consider, respectively,
simulations performed with small and large observation windows m. Then, in
the third part, we investigate the optimum behaviour of the Byzantines over
Figure 9: Error probability as a function of α for the following setting: n = 20, independent
Sequence of States ρ = 0.5, ε = 0.15, m = 10 and Pmal = 1.0.
a range of observation window sizes. Finally, in the last part, we compare the
case of independent and Markovian system states.
The simulations were carried out according to the following setup. We considered a network with n = 20 nodes, ε = 0.15, ρ = {0.95, 0.5} corresponding to
Markovian and independent sequence of system states, respectively. The probability α that a node is Byzantine is in the range [0, 0.45] corresponding to a
number of Byzantines between 0 and 9. As for Pmal, we set it to either 0.5 or 1.¹
The number of message passing iterations is 5. For each setting, we estimated
the error probability over 105 trials.
4.2.1. Small m
To start with, we considered a small observation window, namely m = 10.
With such a small value of m, in fact, it is possible to compare the performance
of the message passing algorithm to that of the optimum decision fusion rule.
The results we obtained are reported in Figure 9. Upon inspection of the figure,
the superior performance of the message passing algorithm over the Majority,
¹ It is known from [18] that for the Byzantines the optimum choice of Pmal is either 0.5 or 1, depending on the considered setup.
Figure 10: Error probability as a function of α for the following setting: n = 20, Markovian
Sequence of States ρ = 0.95, ε = 0.15, m = 10 and Pmal = 1.0
Soft and Hard isolation schemes is confirmed. More interestingly, the message
passing algorithm gives nearly optimal performance, with only a negligible performance loss with respect to the optimum scheme.
Figure 10 confirms the results shown in Figure 9 for Markovian system states
(ρ = 0.95).
4.2.2. Large m
Having shown the near-optimality of the message passing scheme for small values of m, we now leverage its low computational complexity to evaluate its performance for large values of m (m = 30). As shown
in Figure 11, by increasing the observation window all the schemes give better
performance, with the message passing algorithm always providing the best
performance. Interestingly, in this case, when the attacker uses Pmal = 1.0, the
message passing algorithm almost nullifies the attack of the Byzantines
for all the values of α. Concerning the residual error probability, it is due to
the fact that, even when there are no Byzantines in the network (α = 0), there
is still an error floor caused by the local errors at the nodes ε. For the case
of independent states, such an error floor is around 10^−4. In Figures 11 and
Figure 11: Error probability as a function of α for the following setting: n = 20, Markovian
Sequence of States ρ = 0.95, ε = 0.15, m = 30 and Pmal = 1.0.
12, this error floor decreases to about 10^−5 because of the additional a-priori
information available in the Markovian case.
4.2.3. Optimal choice of Pmal for the Byzantines
One of the main results proven in [18] is that setting Pmal = 1 is not necessarily the optimal choice for the Byzantines. In fact, when the FC manages to identify which are the malicious nodes, it can exploit the fact that the malicious nodes always flip the result of the local decision to get useful information about the system state. In such cases, it is preferable for the Byzantines to use Pmal = 0.5, since in this way the reports sent to the FC do not convey any information
about the status of the system. However, in [18], it was not possible to derive
exactly the limits determining the two different behaviours for the Byzantines
due to the impossibility of applying the optimum algorithm in conjunction with
large observation windows. By exploiting the low complexity of the message
passing scheme, we are now able to overcome the limits of the analysis carried
out in [18].
Specifically, we carried out an additional set of experiments by fixing α =
0.45 and varying the observation window in the interval [5,20]. The results we
Figure 12: Error probability as a function of α for the following setting: n = 20, Markovian
Sequence of States ρ = 0.95, ε = 0.15, m = 30 and Pmal = 0.5.
obtained confirm the general behaviour observed in [18]. For instance, in Figure
13, Pmal = 1.0 remains the Byzantines’ optimal choice up to m = 13, while for
m > 13, it is preferable for them to use Pmal = 0.5. Similar results are obtained
for independent system states as shown in Figure 14.
4.2.4. Comparison between independent and Markovian System States
In this subsection, we provide a comparison between the cases of Markovian
and independent system states.
By looking at Figures 13 and 14, we see that the Byzantines switch their
strategy from Pmal = 1 to Pmal = 0.5 for a smaller observation window (m = 10)
in the case of independent states (the switching value for the Markovian case
is m = 13). We can explain this behaviour by observing that in the case
of Markovian states, using Pmal = 0.5 results in a strong deviation from the
Markovianity assumption of the reports sent to the FC, thus making the isolation of Byzantine nodes easier. This is not the case with Pmal = 1, since, due
to the symmetry of the adopted Markov model, such a value does not alter the
expected statistics of the reports.
As a last result, in Figure 15, we compare the error probability for the case
Figure 13: Error probability as a function of m for the following settings: n = 20, Markovian
Sequence of States ρ = 0.95, ε = 0.15 and α = 0.45.
Figure 14: Error probability as a function of m for the following settings: n = 20, independent
Sequence of States ρ = 0.5, ε = 0.15 and α = 0.45.
Figure 15: Comparison between the case of independent and Markovian system states (n = 20,
ρ = {0.5, 0.95}, ε = 0.15, m = 10, Pmal = 1.0).
of independent and Markov sources. Since we are interested in comparing the
achievable performance for the two cases, we consider only the performance
obtained by the optimum and the message passing algorithms. Upon inspection
of the figure, it turns out that the case of independent states is more favourable
to the Byzantines than the Markov case. The reason is that the FC may exploit
the additional a-priori information available in the Markov case to identify the
Byzantines and hence make a better decision. Such effect disappears when α
approaches 0.5, since in this case the Byzantines tend to dominate the network.
In that case, the Byzantines' reports prevail in the pool of reports at the FC and, hence, the FC becomes nearly blind, so that even the additional a-priori information about the Markov model is of little help.
5. Conclusions
In this paper, we proposed a near-optimal message passing algorithm based on factor graphs for decision fusion in multi-sensor networks in the presence of Byzantines. The effectiveness of the proposed scheme was evaluated by means of extensive numerical simulations for both independent and Markov sequences of states. Experiments showed that, when compared to the optimum
fusion scheme, the proposed scheme achieves near-optimal performance at a much lower computational cost: specifically, by adopting the new algorithm based on message passing we were able to reduce the complexity from exponential to linear. Such a reduction in complexity permits dealing with large observation windows, thus further improving the decision performance. Results on large observation windows confirmed the dual behavior in the
attacking strategy of the Byzantines, looking for a trade-off between pushing the
FC to make a wrong decision on one hand and reducing the mutual information
between the reports and the system state on the other hand. In addition, the
experiments showed that the case of independent states is more favorable to
Byzantines than the Markovian case, due to the additional a-priori information
available at the FC in the Markovian case.
As future work, we plan to focus on a scenario more favorable to the Byzantines, by giving them the possibility of accessing the observation vectors. In this way, they can focus their attack on the most profitable cases and avoid flipping
the local decision when it is very likely that their action will have no effect on
the FC decision. Considering the case where the nodes can send to the FC more
extensive reports (multi-bit case) [33] is another interesting extension.
References
[1] A. Vempaty, T. Lang, P. Varshney, Distributed inference with byzantine
data: State-of-the-art review on data falsification attacks, IEEE Signal
Processing Magazine 30 (5) (2013) 65–75.
[2] M. Abdelhakim, L. Lightfoot, J. Ren, T. Li, Distributed detection in mobile
access wireless sensor networks under byzantine attacks, IEEE Transactions
on Parallel and Distributed Systems, 25 (4) (2014) 950–959. doi:10.1109/
TPDS.2013.74.
[3] W. Wang, H. Li, Y. Sun, Z. Han, Securing collaborative spectrum sensing against untrustworthy secondary users in cognitive radio networks,
EURASIP Journal on Advances in Signal Processing 2010 (2010) 4.
[4] A. S. Rawat, P. Anand, H. Chen, P. K. Varshney, Collaborative spectrum
sensing in the presence of byzantine attacks in cognitive radio networks,
IEEE Transactions on Signal Processing 59 (2) (2011) 774–786.
[5] R. Zhang, J. Zhang, Y. Zhang, C. Zhang, Secure crowdsourcing-based cooperative spectrum sensing, in: Proc. of INFOCOM 2013, IEEE Conference
on Computer Communications, IEEE, 2013, pp. 2526–2534.
[6] W. Wang, L. Chen, K. G. Shin, L. Duan, Secure cooperative spectrum
sensing and access against intelligent malicious behaviors, in: Proc. of INFOCOM 2014, IEEE Conference on Computer Communications, IEEE,
2014, pp. 1267–1275.
[7] Y. Sun, Y. Liu, Security of online reputation systems: The evolution of
attacks and defenses, IEEE Signal Processing Magazine 29 (2) (2012) 87–
97.
[8] S. Marano, V. Matta, L. Tong, Distributed detection in the presence of
byzantine attacks, IEEE Transactions on Signal Processing, 57 (1) (2009)
16–29.
[9] B. Kailkhura, S. Brahma, P. Varshney, Optimal byzantine attacks on distributed detection in tree-based topologies, in: IEEE International Conference on Computing, Networking and Communications (ICNC), 2013, pp.
227–231. doi:10.1109/ICCNC.2013.6504085.
[10] M. Barni, B. Tondi, Multiple-observation hypothesis testing under adversarial conditions, in: Proc. of WIFS’13, IEEE International Workshop on
Information Forensics and Security, Guangzhou, China, 2013, pp. 91–96.
[11] M. Barni, F. Pérez-González, Coping with the enemy:
Advances in
adversary-aware signal processing, in: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 8682–8686.
doi:10.1109/ICASSP.2013.6639361.
[12] Z. Chair, P. Varshney, Optimal data fusion in multiple sensor detection
systems, IEEE Transactions on Aerospace and Electronic Systems AES22 (1) (1986) 98–101. doi:10.1109/TAES.1986.310699.
[13] P. K. Varshney, Distributed Detection and Data Fusion, Springer-Verlag,
1997.
[14] A. S. Rawat, P. Anand, H. Chen, P. K. Varshney, Collaborative spectrum
sensing in the presence of byzantine attacks in cognitive radio networks,
IEEE Transactions on Signal Processing 59 (2) (2011) 774–786. doi:10.
1109/TSP.2010.2091277.
[15] A. Vempaty, K. Agrawal, P. Varshney, H. Chen, Adaptive learning of byzantines’ behavior in cooperative spectrum sensing, in: Proc. of WCNC’11,
IEEE Conf. on Wireless Communications and Networking, 2011, pp. 1310–
1315. doi:10.1109/WCNC.2011.5779320.
[16] A. Abrardo, M. Barni, K. Kallas, B. Tondi, Decision fusion with corrupted
reports in multi-sensor networks: A game-theoretic approach, in: 53rd
IEEE Conference on Decision and Control, 2014, pp. 505–510. doi:10.
1109/CDC.2014.7039431.
[17] R. Chen, J. M. Park, K. Bian, Robust distributed spectrum sensing in cognitive radio networks, in: Proc. of INFOCOM 2008, 27th IEEE Conference
on Computer Communications, 2008, pp. –. doi:10.1109/INFOCOM.2008.
251.
[18] A. Abrardo, M. Barni, K. Kallas, B. Tondi, A game-theoretic framework for
optimum decision fusion in the presence of byzantines, IEEE Transactions
on Information Forensics and Security 11 (6) (2016) 1333–1345. doi:10.
1109/TIFS.2016.2526963.
[19] S. M. Aji, R. J. McEliece, The generalized distributive law, IEEE Transactions on Information Theory 46 (2) (2000) 325–343. doi:10.1109/18.
825794.
[20] P. Pakzad, V. Anantharam, A new look at the generalized distributive
law, IEEE Transactions on Information Theory 50 (6) (2004) 1132–1155.
doi:10.1109/TIT.2004.828058.
[21] R. Gallager, Low-density parity-check codes, IRE Transactions on Information Theory 8 (1) (1962) 21–28. doi:10.1109/TIT.1962.1057683.
[22] S. Verdu, H. V. Poor, Abstract dynamic programming models under commutativity conditions, SIAM Journal on Control and Optimization 25 (4)
(1987) 990–1006.
[23] J. Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann, San Mateo, CA, 1988.
[24] L. Rabiner, B. Juang, An introduction to hidden markov models, IEEE
ASSP Magazine 3 (1) (1986) 4–16. doi:10.1109/MASSP.1986.1165342.
[25] K. W. Choi, E. Hossain, Estimation of primary user parameters in cognitive radio systems via hidden markov model, IEEE Transactions on Signal
Processing 61 (3) (2013) 782–795. doi:10.1109/TSP.2012.2229998.
[26] I. A. Akbar, W. H. Tranter, Dynamic spectrum allocation in cognitive
radio using hidden markov models: Poisson distributed case, in: IEEE
Proceedings of SoutheastCon, 2007, pp. 196–201. doi:10.1109/SECON.
2007.342884.
[27] T. Jiang, H. Wang, A. V. Vasilakos, Qoe-driven channel allocation schemes
for multimedia transmission of priority-based secondary users over cognitive
radio networks, IEEE Journal on Selected Areas in Communications 30 (7)
(2012) 1215–1224. doi:10.1109/JSAC.2012.120807.
[28] Y. Ephraim, N. Merhav, Hidden markov processes, IEEE Transactions on
information theory 48 (6) (2002) 1518–1569.
[29] D. J. MacKay, Information theory, inference and learning algorithms, Cambridge university press, 2003.
[30] F. R. Kschischang, B. J. Frey, H.-A. Loeliger, Factor graphs and the sumproduct algorithm, IEEE Transactions on Information Theory 47 (2) (2001)
498–519.
[31] T. Richardson, R. Urbanke, Modern coding theory, Cambridge University
Press, 2008.
[32] Y. Mao, A. H. Banihashemi, A heuristic search for good low-density paritycheck codes at short block lengths, in: Communications, 2001. ICC 2001.
IEEE International Conference on, Vol. 1, IEEE, 2001, pp. 41–44.
[33] B. Kailkhura, S. Brahma, P. Varshney, On the performance analysis of
data fusion schemes with byzantines, in: IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 7411–7415.
doi:10.1109/ICASSP.2014.6855040.
arXiv:1704.08165v1 [stat.ML] 26 Apr 2017
A Generalization of Convolutional Neural
Networks to Graph-Structured Data
Yotam Hechtlinger, Purvasha Chakravarti & Jining Qin
Department of Statistics
Carnegie Mellon University
{yhechtli,pchakrav,jiningq}@stat.cmu.edu
April 27, 2017
Abstract
This paper introduces a generalization of Convolutional Neural Networks (CNNs)
from low-dimensional grid data, such as images, to graph-structured data. We propose a novel spatial convolution utilizing a random walk to uncover the relations
within the input, analogous to the way the standard convolution uses the spatial
neighborhood of a pixel on the grid. The convolution has an intuitive interpretation, is efficient and scalable and can also be used on data with varying graph structure. Furthermore, this generalization can be applied to many standard regression
or classification problems, by learning the underlying graph. We empirically demonstrate the performance of the proposed CNN on MNIST, and challenge the state-of-the-art on the Merck molecular activity data set.
1 Introduction
Convolutional Neural Networks (CNNs) are a leading tool used to address a large set of
machine learning problems (LeCun et al. (1998), LeCun et al. (2015)). They have successfully provided significant improvements in numerous fields, such as image processing, speech recognition, computer vision and pattern recognition, language processing
and even the game of Go (Krizhevsky et al. (2012), Hinton et al. (2012), Le et al. (2011), Kim (2014), Silver et al. (2016), respectively).
The major success of CNNs is justly credited to the convolution. But any successful
application of the CNNs implicitly capitalizes on the underlying attributes of the input.
Specifically, a standard convolution layer can only be applied on grid-structured input,
since it learns localized rectangular filters by repeatedly convolving them over multiple
patches of the input. Furthermore, for the convolution to be effective, the input needs
to be locally connective, which means the signal should be highly correlated in local
regions and mostly uncorrelated in global regions. It also requires the input to be
stationary in order to make the convolution filters shift-invariant so that they can select
local features independent of the spatial location.
Figure 1: Visualization of the graph convolution size 5. For a given node, the convolution is
applied on the node and its 4 closest neighbors selected by the random walk. As the right figure
demonstrates, the random walk can expand further into the graph to higher degree neighbors. The
convolution weights are shared according to the neighbors’ closeness to the nodes and applied
globally on all nodes.
Therefore, CNNs are inherently restricted to a (rich) subset of datasets. Nevertheless, the impressive improvements made by applying CNNs encourage us to generalize
CNNs to non-grid structured data that have local connectivity and stationarity properties. The main contribution of this work is a generalization of CNNs to general
graph-structured data, directed or undirected, offering a supervised algorithm that incorporates the structural information present in a graph. Moreover our algorithm can
be applied to a wide range of regression and classification problems, by first estimating the graph structure of the data and then applying the proposed CNN on it. Active
research in learning graph structure from data makes this feasible, as demonstrated by
the experiments in the paper.
The fundamental hurdle in generalizing CNNs to graph-structured data is to find a
corresponding generalized convolution operator. Recall that the standard convolution
operator picks the neighboring pixels of a given pixel and computes the inner product
of the weights and these neighbors. We propose a spatial convolution that performs a
random walk on the graph in order to select the top p closest neighbors for every node,
as Figure 1 shows. Then for each of the nodes, the convolution is computed as the
inner product of the weights and the selected p closest neighbors, which are ordered
according to their relative position from the node. This allows us to use the same set of
weights (shared weights) for the convolution at every node and reflects the dependency
between each node and its closest neighbors. When an image is considered as an
undirected graph with edges between neighboring pixels, this convolution operation is
the same as the standard convolution.
The proposed convolution possesses many desired advantages:
• It is natural and intuitive. The proposed CNN, similar to the standard CNN,
convolves every node with its closest spatial neighbors, providing an intuitive
generalization. For example, if we learn the graph structure using the correlation matrix, then selecting a node's p nearest neighbors is similar to selecting its
p most correlated variables, and the weights correspond to the neighbors’ relative position to the node (i.e. ith weight globally corresponds to the ith most
correlated variable for every node).
• It is transferable. Since the criterion by which the p relevant variables are
selected is their relative position to the node, the convolution is invariant to the
spatial location of the node on the graph. This enables the application of the
same filter globally across the data on all nodes on varying graph structures. It
can even be transfered to different data domains, overcoming a known limitation
of many other generalizations of CNNs on graphs.
• It is scalable. Each forward call of the graph convolution requires O (N · p)
flops, where N is the number of nodes in the graph or variables. This is also the
amount of memory required for the convolution to run. Since p ≪ N, it provides
a scalable and fast operation that can efficiently be implemented on a GPU.
• It is effective. Experimental results on the Merck molecular activity challenge
and the MNIST data sets demonstrates that by learning the graph structure for
standard regression or classification problems, a simple application of the graph
convolutional neural network gives results that are comparable to state-of-the-art
models.
To the best of our knowledge, the proposed graph CNN is the first generalization
of convolutions on graphs that demonstrates all of these properties.
2 Literature review
Graph theory and differential geometry have been heavily studied in the last few decades,
both from mathematical and statistical or computational perspectives, with a large body
of algorithms being developed for a variety of problems. This has laid the foundations
required for the recent surge of research on generalizing deep learning methods to new
geometrical structures. Bronstein et al. (2016) provide an extensive review of the newly
emerging field.
Currently, there are two main approaches generalizing CNNs to graph structured
data, spectral and spatial approaches (Bronstein et al. (2016)). The spectral approach
generalizes the convolution operator using the eigenvectors derived from the spectral
decomposition of the graph Laplacian. The motivation is to create a convolution operator that commutes with the graph Laplacian similar to the way the regular convolution
operator commutes with the Laplacian operator. This approach is studied by Bruna
et al. (2013) and Henaff et al. (2015), which used the eigenvectors of the graph Laplacian to do the convolution, weighting out the distance induced by the similarity matrix.
The major drawback of the spectral approach is that it is graph dependent, as it learns
filters that are a function of the particular graph Laplacian. This constrains the operation to a fixed graph structure and restricts the transfer of knowledge between different domains.
Defferrard et al. (2016) introduce ChebNet, which is a spectral approach with spatial properties. It uses the k-th order Chebyshev polynomials of the Laplacian to learn
filters that act on k-hop neighborhoods of the graph, giving them spatial interpretation.
Their approach was later simplified and extended to semi-supervised settings by Kipf
& Welling (2016). Although in spirit the spatial property is similar to the one suggested
in this paper, since it builds upon the Laplacian, the method is also restricted to a fixed
graph structure.
The spatial approach generalizes the convolution using the graph’s spatial structure, capturing the essence of the convolution as an inner product of the parameters
with spatially close neighbors. The main challenge with the spatial approach is that it
is difficult to find a shift-invariance convolution for non-grid data. Spatial convolutions
are usually position dependent and lack meaningful global interpretation. The convolution proposed in this paper is spatial, and utilizes the relative distance between nodes
to overcome this difficulty.
Diffusion Convolutional Neural Network (DCNN) proposed by Atwood & Towsley
(2016) is a similar convolution that follows the spatial approach. This convolution also
performs a random walk on the graph in order to select spatially close neighbors for the
convolution while maintaining the shared weights. DCNN’s convolution associates the
ith parameter (wi ) with the ith power of the transition matrix (P i ), which is the transition matrix after i steps in a random walk. Therefore, the inner product is considered
between the parameters and a weighted average of all the nodes that can be visited in i
steps. In practice, for dense graphs the number of nodes visited in i steps can be quite
large, which might over-smooth the signal in dense graphs. Furthermore, Atwood &
Towsley (2016) note that implementation of DCNN requires the power series of the
full transition matrix, requiring O(N 2 ) complexity, which limits the scalability of the
method.
Another example of a spatial generalization is provided by Bruna et al. (2013),
which uses multi-scale clustering to define the network architecture, with the convolutions being defined per cluster without the weight sharing property. Duvenaud et al.
(2015) on the other hand, propose a neural network to extract features or molecular
fingerprints from molecules that can be of arbitrary size and shape by designing layers
which are local filters applied to all the nodes and their neighbors.
In addition to the research generalizing convolution on graph, there is active research on the application of different types of Neural Networks on graph structured
data. The earliest work in the field is the Graph Neural Network by Scarselli and others,
starting with Gori et al. (2005) and fully presented in Scarselli et al. (2009). The model
connects each node in the graph with its first-order neighbors and edges and designs the architecture in a recursive way inspired by recursive neural networks. Recently it has
been extended by Li et al. (2015) to output sequences, and there are many other models
inspired from the original work on Graph Neural Networks. For example, Battaglia
et al. (2016) introduce ”interaction networks” studying spatial binary relations to learn
about objects, relations, and physics.
The problem of selecting nodes from a graph for a convolution is analogous to
the problem of selecting local receptive fields in a general neural network. The work
of Coates & Ng (2011) suggests selecting the local receptive fields in a feed-forward
neural network using the closest neighbors induced by the similarity matrix, with the
weights not being shared among the different hidden units.
In contrast to previous research, we suggest a novel scalable convolution operator
that captures the local connectivity within the graph and demonstrates the weight sharing property, which helps in transferring it to different domains. We achieve this by
considering the closest neighbors, found by using a random walk on the graph, in a
way that intuitively extends the spatial nature of the standard convolution.
3 Graph Convolutional Neural Network
The key step which differentiates CNNs on images from regular neural networks is
the selection of neighbors on the grid in a p × p window combined with the shared
weight assumption. In order to select the local neighbors of a given node, we use the
graph transition matrix and calculate the expected number of visits of a random walk
starting from the given node. The convolution for this node, is then applied on the top
p nodes with highest expected number of visits from it. In this section, we discuss
the application of the convolution in a single layer on a single graph. It is immediate
to extend the definition to more complex structures, as will be explicitly explained in
section 3.4. We introduce some notation in order to proceed into further discussion.
Notation: Let G = (V, E) be a graph over a set of N features, V = (X1 , . . . , XN ),
and a set of edges E. Let P denote the transition matrix of a random walk on the graph,
such that Pij is the probability to move from node Xi to Xj . Let the similarity matrix
and the correlation matrix of theP
graph be given by S and R respectively. Define D as
a diagonal matrix where Dii = j Sij .
3.1 Transition matrix and expected number of visits
3.1.1 Transition matrix existence
This work assumes the existence of the graph transition matrix P . If graph structure
of the data is already known, i.e. if the similarity matrix S is already known, then the
transition matrix can be obtained, as explained in Lovász et al. (1996), by
P = D^{−1} S.        (1)
If the graph structure is unknown, it can be learned using several unsupervised or supervised graph learning algorithms. Learning the data graph structure is an active research
topic and is not in the scope of this paper. The interested reader can start with Belkin &
Niyogi (2001) and Henaff et al. (2015) discussing similarity matrix estimation. We use
the absolute value of the correlation matrix as the similarity matrix, following Roux
et al. (2008) who showed that correlation between the features is usually enough to
capture the geometrical structure of images. That is, we assume
S_ij = |R_ij|  ∀ i, j.        (2)
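A minimal sketch of this estimation step is given below, assuming a hypothetical data matrix X with observations in rows and features in columns; the function name is illustrative.

```python
import numpy as np

# A minimal sketch of equations (1)-(2): similarity as absolute correlation and
# the random-walk transition matrix P = D^{-1} S. X is a hypothetical
# (observations x features) data matrix.
def transition_matrix(X):
    R = np.corrcoef(X, rowvar=False)   # feature-by-feature correlation matrix
    S = np.abs(R)                      # similarity matrix, equation (2)
    d = S.sum(axis=1)                  # diagonal of D
    return S / d[:, None]              # P = D^{-1} S, equation (1)
```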
Figure 2: Visualization of a row of Q(k) on the graph generated over the 2-D grid at a node
near the center, when connecting each node to its 8 adjacent neighbors. For k = 1, most of
the weight is on the node, with smaller weights on the first order neighbors. This corresponds
to a standard 3 × 3 convolution. As k increases the number of active neighbors also increases,
providing greater weight to neighbors farther away, while still keeping the local information.
3.1.2 Expected number of visits

Once we derive the transition matrix P, we define Q^{(k)} := Σ_{i=0}^{k} P^i, where [P^k]_{ij} is the probability of transitioning from X_i to X_j in k steps. That is,

Q^{(0)} = I,  Q^{(1)} = I + P,  · · · ,  Q^{(k)} = Σ_{i=0}^{k} P^i.        (3)
Note that Q^{(k)}_{ij} is also the expected number of visits to node X_j starting from X_i in k steps. The i-th row, Q^{(k)}_{i·}, provides a measure of similarity between node X_i and its
neighbors by considering a random walk on the graph. As k increases we incorporate
neighbors further away from the node, while the summation gives appropriate weights
to the node and its closest neighbors. Figure 2 provides a visualization of the matrix Q
over the 2-D grid.
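The following sketch computes Q^{(k)} by accumulating powers of P and then sorts each row in descending order to pick the p nearest neighbors of every node; the function names are illustrative.

```python
import numpy as np

# A minimal sketch of equation (3) and of the neighbor selection that follows:
# Q^(k) accumulates the powers of P, and each row is sorted in descending order
# to obtain the p nearest neighbors of every node.
def expected_visits(P, k):
    Q = np.eye(P.shape[0])
    Pi = np.eye(P.shape[0])
    for _ in range(k):
        Pi = Pi @ P
        Q += Pi
    return Q

def top_neighbors(Q, p):
    # pi_i^(k)(1), ..., pi_i^(k)(p): indices of the p largest entries per row
    return np.argsort(-Q, axis=1)[:, :p]
```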
3.2 Convolutions on graphs
As discussed earlier, each row of Q^{(k)} can be used to obtain the closest neighbors of a node. Hence, it seems natural to define the convolution over the graph node X_i using the i-th row of Q^{(k)}. In order to do so, we denote by π_i^{(k)} the permutation that orders the i-th row of Q^{(k)} in descending order. That is, for every i = 1, 2, ..., N and every k,

π_i^{(k)} : {1, 2, ..., N} → {1, 2, ..., N},

such that Q_{i π_i^{(k)}(1)} > Q_{i π_i^{(k)}(2)} > ... > Q_{i π_i^{(k)}(N)}.
The notion of ordered position between the nodes is a global feature of all graphs
and nodes. Therefore, we can take advantage of it to satisfy the desired shared weights
assumption, enabling meaningful and transferable filters. We define Conv1 as the size
p convolution over the graph G with nodes x = (x1 , . . . , xN )T ∈ RN and weights
6
w ∈ Rp , for the p nearest neighbors of each node, as the inner product:
xπ(k) (1) · · · xπ(k) (p)
w1
1
1
x (k)
w2
π2 (1) · · · xπ2(k) (p)
·
Conv1 (x) =
.
.
..
..
..
..
.
.
wp
x (k)
· · · x (k)
πN (1)
(4)
πN (p)
Therefore the weights are decided according to the distance induced by the transition matrix. That is, w1 will be convolved with the variable which has the largest value
in each row of the matrix Q(k) . For example, when Q(1) = I + P , w1 will always
correspond to the node itself and w2 will correspond to the node’s closest neighbor.
For higher values of k, the order will be determined by the graph’s unique structure.
It should be noted that Conv1 doesn’t take into account the actual distance between
the nodes, and might be susceptible (for example) to the effects of negative correlation
between the features. For that reason, we have also experimented with Conv2 , defined
as:
Conv2(x) = [ y_{i, π_i^{(k)}(q)} ]_{i=1,...,N;  q=1,...,p} · (w_1, w_2, ..., w_p)^T,        (5)

where x = (x_1, x_2, ..., x_N)^T and y_{ij} = sign(R_{ij}) Q^{k}_{ij} x_j.
In practice the performance of Conv1 was on par with Conv2 , and the major differences between them were smoothed out during the training process. As Conv1 is more
intuitive, we decided to focus on using Conv1 .
3.3 Selection of the power of Q
The selection of the value of k is data dependent, but there are two main components
affecting its value. Firstly, it is necessary for k to be large enough to detect the top p
neighbors of every node. If the transition matrix P is sparse, it might require higher
values of k. Secondly, from properties of stochastic processes, we know that if we
denote π as the Markov chain stationary distribution, then

lim_{k→∞} Q^{(k)}_{ij} / k = π_j  ∀ i, j.        (6)

This implies that for large values of k, local information will be smoothed out and the convolution will repeatedly be applied on the features with maximum connections. For this reason, we suggest keeping k relatively low (but high enough to capture sufficient features).
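A small numerical illustration of this smoothing effect, on a toy 3-node transition matrix chosen only for this purpose:

```python
import numpy as np

# A small illustration of equation (6): as k grows, the rows of Q^(k)/k approach
# the same stationary distribution, so local information is smoothed out.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
Q, Pi = np.eye(3), np.eye(3)
for k in range(1, 201):
    Pi = Pi @ P
    Q += Pi
    if k in (1, 5, 200):
        # difference between two rows of Q/k shrinks as k increases
        print(k, np.abs(Q[0] / k - Q[1] / k).max())
```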
3.4 Implementation
3.4.1 The convolution
An important feature of the suggested convolution is the complexity of the operation.
For a graph with N nodes, a single p level convolution only requires O(N · p) flops
and memory, where p ≪ N.
Furthermore, similar to standard convolution implementation (Chellapilla et al.,
2006), it is possible to represent the graph convolution as a tensor dot product, transferring most of the computational burden to the GPU using highly optimized matrix
multiplication libraries.
For every graph convolution layer, we have as an input a 3D tensor of M observations with N features at depth d. We first extend the input with an additional dimension
that includes the top p neighbors of each feature selected by Q(k) , transforming the input dimension from 3D to 4D tensor as
(M, N, d) → (M, N, p, d) .
Now if we apply a graph convolution layer with dnew filters, the convolution weights
will be a 3D tensor of size (p, d, dnew ). Therefore application of a graph convolution
which is a tensor dot product between the input and the weights along the (p, d) axes
results in an output of size:
((M, N), (p, d)) • ((p, d), (d_new)) = (M, N, d_new).
We have implemented the algorithm using Keras (Chollet, 2015) and Theano (Theano
Development Team, 2016) libraries in Python, inheriting all the tools provided by the
libraries to train neural networks, such as dropout regularization, advanced optimizers
and efficient initialization methods. The source code is publicly available on Github 1 .
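A minimal sketch of this tensor formulation is given below (a plain NumPy version rather than the authors' Keras/Theano implementation; names are illustrative).

```python
import numpy as np

# A minimal sketch of the tensor formulation described above: expand the input
# (M, N, d) with the p selected neighbors to (M, N, p, d), then contract with a
# weight tensor of shape (p, d, d_new) along the (p, d) axes.
def graph_conv_layer(X, neighbors, W):
    # X: (M, N, d), neighbors: (N, p) integer index matrix, W: (p, d, d_new)
    X_nb = X[:, neighbors, :]                              # (M, N, p, d)
    return np.tensordot(X_nb, W, axes=([2, 3], [0, 1]))    # (M, N, d_new)
```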
3.4.2 The selection of neighbors
The major computational effort in this algorithm is the computation of Q, which is
performed once per graph structure as a pre-processing step. As it is usually a onetime computation, it is not a significant constraint.
However, for very large graphs, if done naively, this might be challenging. An
alternative can be achieved by recalling that Q is only needed in order to calculate
the expected number of visits from a given node after k steps in a random walk. In
most applications, when the graph is very large, it is also usually very sparse. This
facilitates an efficient implementation of Breadth First Search algorithm (BFS). Hence,
the selection of the p neighbors can be parallelized and would only require O(N · p)
memory for every unique graph structure, making the method scalable for very large
graphs, when the number of different graphs is manageable.
Any problem that has many different large graphs is inherently computationally
hard. The graph CNN reduces the memory required after the preprocessing from
O(N^2) to O(N · p) per graph. This is because the only information required from
the graph is the p nearest neighbors of every node.
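A minimal sketch of such a BFS-based selection is shown below; note that plain breadth-first order only approximates the ranking induced by Q^{(k)}, and the adjacency-list format is an assumption.

```python
from collections import deque

# A minimal sketch of the BFS alternative mentioned above for large, sparse
# graphs: collect p nearby nodes by breadth-first expansion instead of forming
# Q explicitly. `adj` is a hypothetical adjacency list (dict: node -> iterable
# of neighboring nodes). BFS order only approximates the ranking given by Q^(k).
def bfs_neighbors(adj, node, p):
    selected, seen, queue = [], {node}, deque([node])
    while queue and len(selected) < p:
        current = queue.popleft()
        selected.append(current)
        for nb in adj.get(current, ()):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return selected
```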
1 https://github.com/hechtlinger/graph_cnn
4 Experiments
In order to test the feasibility of the proposed CNN on graphs, we conducted experiments on well known data sets functioning as benchmarks: Merck molecular activity
challenge and MNIST. These data sets are popular and well-studied challenges in computational biology and computer vision, respectively.
In our implementations, in order to enable better comparisons between the models and reduce the chance of over-fitting during the model selection process, we consider shallow and simple architectures instead of deep and complex ones. The hyperparameters were chosen arbitrarily when possible rather than being tuned and optimized. Nevertheless, we still report state-of-the-art or competitive results on the data
sets.
In this section, we denote a graph convolution layer with k feature maps by C_k and a fully connected layer with k hidden units by FC_k.
4.1 Merck molecular activity challenge
The Merck molecular activity is a Kaggle 2 challenge which is based on 15 molecular
activity data sets. The target is predicting activity levels for different molecules based
on the structure between the different atoms in the molecule. This helps in identifying
molecules in medicines which hit the intended target and do not cause side effects.
Following Henaff et al. (2015), we apply our algorithm on the DPP4 dataset. DPP4
contains 6148 training and 2045 test molecules. Some of the features of the molecules
are very sparse and are only active in a few molecules. For these features, the correlation estimation is not very accurate. Therefore, we use features that are active in at least
20 molecules (observations), resulting in 2153 features. As can be seen in Figure 3,
there is significant correlation structure between different features. This implies strong
connectivity among the features which is important for the application of the proposed
method.
The training in the experiments was performed using Adam optimization procedure (Kingma & Ba, 2014) where the gradients are derived by the back-propagation
algorithm, using the root mean-squared error loss (RMSE). We used learning rate
α = 0.001, fixed the number of epochs to 40 and implemented dropout regularization
on every layer during the optimization procedure. The absolute values of the correlation matrix were used to learn the graph structure. We found that a small number of
nearest neighbors (p) between 5 to 10 works the best, and used p = 5 in all models.
Following the standard set by the Kaggle challenge, results are reported in terms of
the squared correlation (R2 ), that is,
R2 = Corr(Y, Ŷ )2 ,
where Y is the actual activity level and Ŷ is the predicted one.
The convergence plot given in Figure 3 demonstrates convergence of the selected
architectures. The contribution of the suggested convolution is explained in view of the
alternatives:
2 Challenge website is https://www.kaggle.com/c/MerckActivity
Figure 3: Left: Visualization of the correlation matrix between the first 100 molecular descriptors (features) in the DPP4 Merck molecular activity challenge training set. The proposed
method utilizes the correlation structure between the features. Right: Convergence of R2 for the
different methods on the test set. The graph convolution converges more steadily as it uses fewer
parameters.
• Fully connected Neural Network: Models first applying convolution followed
by a fully connected hidden layer converge better than more complex fully connected models. Furthermore, convergence in the former methods is more stable
in comparison to the fully connected methods, due to the parameter reduction.
• Linear Regression: Optimizing over the set of convolutions is often considered
as automation of the feature extraction process. From that perspective, a simple
application of one layer of convolution, followed by linear regression, significantly outperforms the results of a standalone linear regression.
Table 1 provides more thorough R2 results for the different architectures explored,
and compares it to two of the winners of the Kaggle challenge, namely the Deep Neural
Network and the random forest in Ma et al. (2015). We perform better than both the
winners of the Kaggle contest.
The models in Henaff et al. (2015) and Bruna et al. (2013) use a spectral approach
and currently are the state-of-the-art. In comparison to them, we perform better than the
Spectral Networks CNN on unsupervised graph structure, which is equivalent to what
was done by using the correlation matrix as similarity matrix. The one using Spectral
Networks on supervised graph structure holds the state-of-the-art by learning the graph
structure. This is a direction we have not yet explored, as graph learning is beyond
the scope of this paper, although it will be straightforward to apply the proposed graph
CNN in a similar way to any learned graph.
4.2 MNIST data
The MNIST data often functions as a benchmark data set to test new machine learning
methods. We experimented with two different graph structures for the images. In the
first experiment, we considered the images as observations from an undirected graph
on the 2-D grid, where each pixel is connected to its 8 adjoining neighbor pixels. This
Method                                  Architecture              R2
OLS Regression                          –                         0.135
Random Forest                           –                         0.232
Merck winner DNN                        –                         0.224
Spectral Networks                       C64-P8-C64-P8-FC1000      0.204
Spectral Networks (supervised graph)    C16-P4-C16-P4-FC1000      0.277
Fully connected NN                      FC300-FC100               0.195
Graph CNN                               C10                       0.246
Graph CNN                               C10-FC100                 0.258
Graph CNN                               C10-C20-FC300             0.264

Table 1: The squared correlation between the actual and predicted activity levels, R2, for different methods on the DPP4 data set from the Merck molecular activity challenge.
experiment was done to demonstrate how the graph convolution compares to a standard CNN on data with a grid structure.
We used the convolutions over the grid structure as presented in Figure 2, using Q(3) with p = 25 nearest neighbors. Due to the symmetry of the graph, in most regions of the image multiple pixels are equidistant from the pixel being convolved. If these ties were broken in a consistent manner, the convolution would reduce to the regular convolution on a 5 × 5 window; the only exceptions would be the pixels close to the boundary. To make the example more compelling, we broke ties arbitrarily, which makes the training process harder compared to a regular CNN. Imitating LeNet (LeCun et al., 1998), we considered an architecture with C40, Pooling(2×2), C80, Pooling(2×2), FC100, followed by a linear classifier
that resulted in a 0.87% error rate. This is comparable to a regular CNN with the
same architecture that achieves an error rate of about 0.75%-0.8%. We outperform a
fully connected neural network which achieves an error rate of around 1.4%, which is
expected due to the differences in the complexities of the models.
In the second experiment, we used the correlation matrix to estimate the graph
structure directly from the pixels. Since some of the MNIST pixels are constant (e.g., the corners are always black), we restricted the data to the 717 active pixels that are not constant. We used Q(1) with p = 6 neighbors. This was done to ensure that the spatial structure of the image no longer affected the results. With only 6 neighbors and a partial subset of the pixels under consideration, the relative location of the top correlated pixels necessarily varies from pixel to pixel.
As a result, regular CNNs are no longer applicable on the data whereas the convolution
proposed in this paper is. We compared the performance of our CNN to fully connected
Neural Networks.
During the training process, we used a dropout rate of 0.2 on all layers to prevent over-fitting. In all architectures, the final layer is a standard softmax logistic regression classifier.
Method                Error (%)    # of Parameters
Logistic Regression   7.49         7,180
C20                   1.94         143,550
C20-C20               1.59         145,970
C20-FC512             1.45         7,347,862
FC512-FC512           1.59         635,402

Table 2: Error rates of different methods on the MNIST digit recognition task without the underlying grid structure.
Table 2 presents the experimental results. The Graph CNN performs on par with the
fully connected neural networks, with fewer parameters. A single layer of graph convolution followed by logistic regression greatly improves the performance of logistic
regression, demonstrating the potential of the graph convolution for feature extraction
purposes. As with regular convolutions, C20-FC512 required over 7 million parameters, since each convolution uses only a small number of parameters to generate different maps of the input, which are then fed into a large fully connected layer. This suggests that the graph convolution can be made even more effective with the development of an efficient spatial pooling method on graphs, which is a known but unsolved problem.
5 Conclusions
We propose a generalization of convolutional neural networks from grid-structured data to graph-structured data, a problem that is being actively researched by our community. Our novel contribution is a convolution over a graph that can handle different graph structures as its input. The proposed convolution has many sought-after attributes: it has a natural and intuitive interpretation, it can be transferred across different domains of knowledge, it is computationally efficient, and it is effective.
Furthermore, the convolution can be applied to standard regression or classification problems by learning the graph structure in the data, using the correlation matrix or other methods. Compared to a fully connected layer, the suggested convolution has significantly fewer parameters while providing stable convergence and comparable performance. Our experimental results on the Merck Molecular Activity data set and MNIST data demonstrate the potential of this approach.
Convolutional Neural Networks have already revolutionized the fields of computer vision, speech recognition and language processing. We think an important step forward is to extend them to other problems which have an inherent graph structure.
Acknowledgments
We would like to thank Alessandro Rinaldo, Ruslan Salakhutdinov and Matthew Gormley for suggestions, insights and remarks that have greatly improved the quality of this
paper.
References
Atwood, James and Towsley, Don. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993–2001, 2016.
Battaglia, Peter, Pascanu, Razvan, Lai, Matthew, Rezende, Danilo Jimenez, et al. Interaction networks for learning about objects, relations and physics. In Advances in
Neural Information Processing Systems, pp. 4502–4510, 2016.
Belkin, Mikhail and Niyogi, Partha. Laplacian eigenmaps and spectral techniques for
embedding and clustering. In NIPS, volume 14, pp. 585–591, 2001.
Bronstein, Michael M, Bruna, Joan, LeCun, Yann, Szlam, Arthur, and Vandergheynst,
Pierre. Geometric deep learning: going beyond euclidean data. arXiv preprint
arXiv:1611.08097, 2016.
Bruna, Joan, Zaremba, Wojciech, Szlam, Arthur, and LeCun, Yann. Spectral networks
and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
Chellapilla, Kumar, Puri, Sidd, and Simard, Patrice. High performance convolutional
neural networks for document processing. In Tenth International Workshop on Frontiers in Handwriting Recognition. Suvisoft, 2006.
Chollet, François. Keras. https://github.com/fchollet/keras, 2015.
Coates, Adam and Ng, Andrew Y. Selecting receptive fields in deep networks. pp.
2528–2536, 2011.
Defferrard, Michaël, Bresson, Xavier, and Vandergheynst, Pierre. Convolutional neural
networks on graphs with fast localized spectral filtering. In Advances in Neural
Information Processing Systems, pp. 3837–3845, 2016.
Duvenaud, David K, Maclaurin, Dougal, Iparraguirre, Jorge, Bombarell, Rafael,
Hirzel, Timothy, Aspuru-Guzik, Alán, and Adams, Ryan P. Convolutional networks
on graphs for learning molecular fingerprints. In Advances in Neural Information
Processing Systems, pp. 2215–2223, 2015.
Gori, Marco, Monfardini, Gabriele, and Scarselli, Franco. A new model for learning
in graph domains. In Neural Networks, 2005. IJCNN’05. Proceedings. 2005 IEEE
International Joint Conference on, volume 2, pp. 729–734. IEEE, 2005.
Henaff, Mikael, Bruna, Joan, and LeCun, Yann. Deep convolutional networks on
graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman,
Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath,
Tara N, et al. Deep neural networks for acoustic modeling in speech recognition:
The shared views of four research groups. Signal Processing Magazine, IEEE, 29
(6):82–97, 2012.
Kim, Yoon. Convolutional neural networks for sentence classification. arXiv preprint
arXiv:1408.5882, 2014.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980, 2014.
Kipf, Thomas N and Welling, Max. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with
deep convolutional neural networks. In Advances in neural information processing
systems, pp. 1097–1105, 2012.
Le, Quoc V, Zou, Will Y, Yeung, Serena Y, and Ng, Andrew Y. Learning hierarchical
invariant spatio-temporal features for action recognition with independent subspace
analysis. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 3361–3368. IEEE, 2011.
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based
learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–
2324, 1998.
LeCun, Yann, Bengio, Yoshua, and Hinton, Geoffrey. Deep learning. Nature, 521
(7553):436–444, 2015.
Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard. Gated graph
sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
Lovász, László et al. Random walks on graphs: A survey. Combinatorics, Paul Erdos
is Eighty, 2:353–398, 1996.
Ma, Junshui, Sheridan, Robert P, Liaw, Andy, Dahl, George E, and Svetnik, Vladimir.
Deep neural nets as a method for quantitative structure–activity relationships. Journal of chemical information and modeling, 55(2):263–274, 2015.
Roux, Nicolas L, Bengio, Yoshua, Lamblin, Pascal, Joliveau, Marc, and Kégl, Balázs.
Learning the 2-d topology of images. In Advances in Neural Information Processing
Systems, pp. 841–848, 2008.
Scarselli, Franco, Gori, Marco, Tsoi, Ah Chung, Hagenbuchner, Markus, and Monfardini, Gabriele. The graph neural network model. IEEE Transactions on Neural
Networks, 20(1):61–80, 2009.
Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van
Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of go with deep neural networks
and tree search. Nature, 529(7587):484–489, 2016.
Theano Development Team. Theano: A Python framework for fast computation of
mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
Tree-Deletion Pruning in Label-Correcting
Algorithms for the Multiobjective Shortest Path
Problem
arXiv:1604.08147v1 [] 27 Apr 2016
Fritz Bökler and Petra Mutzel
Department of Computer Science, TU Dortmund, Germany
{fritz.boekler, petra.mutzel}@tu-dortmund.de
Abstract. In this paper, we re-evaluate the basic strategies for label correcting algorithms for the multiobjective shortest path (MOSP) problem,
i.e., node and label selection. In contrast to common belief, we show that, when carefully implemented, the node-selection strategy usually
beats the label-selection strategy. Moreover, we present a new pruning
method which is easy to implement and performs very well on real-world
road networks. In this study, we test our hypotheses on artificial MOSP
instances from the literature with up to 15 objectives and real-world road
networks with up to almost 160,000 nodes.
1 Introduction
In this paper we are concerned with one of the most famous problems from
multiobjective optimization, the multiobjective shortest path (MOSP) problem.
We are given a directed graph G, consisting of a finite set of nodes V and a set of
directed arcs A ⊆ V × V . We are interested in paths between a given source node
s and a given target node t. Instead of a single-objective cost function, we are
given an objective function c which maps each arc to a vector, i.e., c : A → Qd
for d ∈ N. The set of all directed paths from s to t in a given graph is called Ps,t
and we assume that the objective function c is extended on these paths in the
canonical way.
In contrast to the single-objective case, where there exists one unique optimal value, in the multiobjective case there usually does not exist a path minimizing all objectives at once. Thus, we are concerned with finding the Pareto-front of all s-t-paths, i.e., the minimal vectors of the set c(Ps,t) with respect to
the canonical componentwise partial order on vectors ≤. Moreover, we also want
to find for each point y of the Pareto-front one representative path p ∈ Ps,t ,
such that c(p) = y. Each such path is called Pareto-optimal. For more information on multiobjective path and tree problems we refer the reader to the latest
survey [4].
It has long been known that the Pareto-front of a MOSP instance can be of exponential size in the input [7]. Moreover, it has recently been shown in [2] that
there does not exist an output-sensitive algorithm for this problem even in the
case of d = 2 unless P = NP.
1.1 Previous Work
The techniques used for solving the MOSP problem are based on labeling algorithms. The majority of the literature is concerned with the biobjective case.
The latest computational study for more than 2 objectives is from 2009 [9] and
compares 27 variants of labeling algorithms on 9,050 artificial instances. These
are the instances we also use for our study. In summary, a label correcting version with a label-selection strategy in a FIFO manner is concluded to be the
fastest strategy on the instance classes provided. The authors do not investigate
a node-selection strategy with the argument that it is harder to implement and
is less efficient (cf. also [10]).
In an older study from 2001 [6], label-selection and node-selection strategies are also compared. The authors conclude that, in general, label-selection methods are faster than node-selection methods. However, the test set is rather small,
consisting of only 8 artificial grid-graph instances ranging from 100 to 500 nodes
and 2 to 4 objectives and 18 artificial random-graph instances ranging from 500
to 40,000 nodes and densities of 1.5 to 30 with 2 to 4 objectives.
In the work by Delling and Wagner [5], the authors solve a variant of the
multiobjective shortest path problem where a preprocessing is allowed and we
want to query the Pareto-front of paths between a pair of nodes as fast as
possible. The authors use a variant of SHARC to solve this problem. Though it addresses a different problem, this study is the first computational study where
an implementation is tested on real-world road networks instead of artificial
instances. The instances have sizes of 30,661, 71,619 and 892,392 nodes and 2
to 4 objectives. However, the largest instance could only be solved using highly correlated objective functions, resulting in Pareto-front sizes of only at most 2.5 points on average.
1.2 Our Contribution
In this paper, we investigate the efficiency of label correcting methods for the
multiobjective shortest path problem. We focus on label correcting methods,
because the literature (cf. [9, 11]) and our experience show that label setting algorithms do not perform well on instances with more than two objectives. We investigate the question whether label-selection or node-selection methods are more promising and test codes based on the recent literature.
We also perform the first computational study of these algorithms not only
on artificial instances but also on real-world road networks based on the road
network of Western Europe provided by the PTV AG for scientific use. The road
network sizes vary from 23,094 to 159,945 nodes and include three objective
functions. The artificial instances are taken from the latest study on the MOSP
problem.
Moreover, we propose a new pruning technique which performs very well on
the road networks, achieving large speed-ups. We also show the limits of this
technique and reason under which circumstances it works well.
1.3 Organization
In Sec. 2, we describe the basic techniques of labeling algorithms in multiobjective optimization. The tree-deletion pruning is the concern of Sec. 3. Because
the implementation of the algorithms is crucial for this computational study, we
give details in Sec. 4. The computational study and results then follow in Sec. 5.
2 Multiobjective Labeling Algorithms
In general, a labeling algorithm for enumerating the Pareto-front of paths in a
graph maintains a set of labels Lu at each node u ∈ V . A label is a tuple which
holds its cost vector, the associated node and, for retrieving the actual path, a
reference to the predecessor label. The algorithm is initialized by setting each
label-set to ∅ and adding the label (0, s, nil) to Ls .
These algorithms can be divided into two groups, depending on whether they select either a label or a node in each iteration. These strategies are called label-selection or node-selection strategies, respectively. When we select a label ℓ = (v, u, ℓ̂) at a node u, this label is pushed along all out-arcs a = (u, w) of u, meaning that a new label at the head of the arc is created with cost v + c(a), predecessor label ℓ and associated node w. This strategy is due to [12] for d > 2. If we follow a
node-selection strategy, all labels in Lu will be pushed along the out-arcs of a
selected node u. This strategy was first proposed in [3] for the biobjective case.
Nodes or labels which are ready to be selected are called open.
After pushing a set of labels, the label sets at the head of each considered
arc are cleaned, i.e. all dominated labels are removed from the modified label
sets. We say a label ℓ = (v, u, ℓ̂) dominates a label ℓ′ = (v′, u′, ℓ̂′) if v ≤ v′ and v ≠ v′.
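In code, the dominance test and a simple pairwise cleaning of a label set look as follows (a Python sketch with our own function names; the implementation evaluated later is written in C++11):

def dominates(v, w):
    """Cost vector v dominates w iff v <= w componentwise and v != w."""
    return all(a <= b for a, b in zip(v, w)) and tuple(v) != tuple(w)

def clean(labels):
    """Keep only nondominated labels; a label is a (cost_vector, node, pred) tuple."""
    return [l for l in labels
            if not any(dominates(m[0], l[0]) for m in labels)]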
There are many ways in which a label or node can be selected. A comparative study was conducted by Paixao and Santos [10]. For example, a pure FIFO strategy seems to work best in the aforementioned study. But other strategies are also possible: for example, we can sort the labels by their average cost, i.e., (Σ_{i∈{1,...,d}} vi)/d, and always select the smallest one. A less expensive variant is due to [1, 9], where we decide, depending on the top label ℓ = (v, u, ℓ̂) in a FIFO queue Q, where to place a new label ℓ′ = (v′, u′, ℓ̂′): if v′ is lexicographically smaller than v, then ℓ′ is placed at the front of Q, otherwise it is placed at the back of Q.
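A sketch of this placement rule, assuming labels are (cost_vector, node, pred) tuples so that comparing cost vectors with < gives the lexicographic order:

from collections import deque

def place_label(Q, new_label):
    """Put new_label at the front of Q if its cost vector is lexicographically
    smaller than that of the current top label, otherwise at the back."""
    if Q and new_label[0] < Q[0][0]:
        Q.appendleft(new_label)
    else:
        Q.append(new_label)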
Both available computational studies on labeling algorithms for the multiobjective shortest path problem with more than 2 criteria suggest that label-selection strategies are far superior to node-selection strategies [6, 9].
2.1 Label Setting vs. Label Correcting Algorithms
In the paper by Martins [8], the author describes an algorithm using a label-selection strategy. The algorithm selects the next label by choosing the lexicographically smallest label among all labels in L := ⋃_{u∈V} Lu. In general, whenever we select a nondominated label ℓ = (v, u, ℓ̂) in L, this label represents a nondominated path from s to u. Labeling algorithms having the property that whenever
we select a label we know that the represented path is a Pareto-optimal path,
are called label setting algorithms. Labeling algorithms which do not have this
property, and thus sometimes delete or correct a label, are called label correcting
algorithms. On the plus side, in label setting algorithms, we never select a label
which will be deleted in the process of the algorithm. But selecting these labels is not trivial: for example, selecting a lexicographically smallest label requires a priority-queue data structure, whereas the simplest label-selection strategy requires only a simple FIFO queue.
In many studies the label correcting algorithms are superior to the label
setting variants. See for example [6, 9, 11]. It is not clear why this is the case.
One possible reason is that the advantage of not pushing too many unneeded labels cannot make up for the cost of the data structure. Since the case is clear
for the comparison of label setting and label correcting algorithms, we do not
evaluate label setting algorithms in our study.
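For contrast, the lexicographic selection required by a label setting algorithm can be sketched with a binary heap; the class below is illustrative and is not the data structure used in our implementation.

import heapq

class LexicographicLabelQueue:
    """Pops the lexicographically smallest label first, as required by the label
    setting algorithm of Martins [8]. Labels are (cost_vector, node, pred) tuples."""
    def __init__(self):
        self._heap = []
        self._counter = 0                 # tie-breaker, avoids comparing labels directly

    def push(self, label):
        heapq.heappush(self._heap, (label[0], self._counter, label))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __bool__(self):
        return bool(self._heap)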
3 Tree-Deletion Pruning
Fig. 1: Illustration of the tree-deletion pruning
The main difficulty faced by label correcting algorithms is that we can push labels which will later be dominated by a new label. To address this issue, consider a label-selection algorithm which selects the next label in a FIFO manner. In Fig. 1, we see the situation where a label ℓ at node v, which encodes a path a from s to v, is dominated by a label ℓ′, which encodes a different path b from s to v. Based on the label ℓ, we might already have built a tree of descendant labels c. If we proceed with the usual label correcting algorithm, the descendant labels of ℓ will first be pushed and later be dominated by the descendants of ℓ′. To avoid the unnecessary pushes of descendants of ℓ, we can delete the whole tree c after the label ℓ is deleted. This pruning method will be called tree-deletion pruning (TD).
Algorithm 1 Abstract version of the LS algorithm
Require: Graph G = (V, A), nodes s, t ∈ V and objective function c : A → Qd
Ensure: List R of pairs (p, y) for all y ∈ c(Ps,t) and some p ∈ Ps,t such that c(p) = y
 1: Lu ← ∅ for all u ∈ V \ {s}
 2: ℓ ← (0, s, nil)
 3: Ls ← {ℓ}
 4: Q.push(ℓ)
 5: while not Q.empty do
 6:     ℓ = (v, u, ℓ̂) ← Q.pop
 7:     for (u, w) ∈ A do
 8:         Push ℓ along (u, w) and add the new label ℓ′ to Lw
 9:         Clean Lw                          ▷ Tree-deletion for every deleted label in Lw
10:         if ℓ′ is nondominated in Lw then
11:             Q.push(ℓ′)
12: Reconstruct paths for each label in Lt and output path/vector pairs
Algorithm 2 Abstract version of the NS algorithm
Require: Graph G = (V, A), nodes s, t ∈ V and objective function c : A → Qd
Ensure: List R of pairs (p, y) for all y ∈ c(Ps,t) and some p ∈ Ps,t such that c(p) = y
 1: Lu ← ∅ for all u ∈ V \ {s}
 2: Ls ← {(0, s, nil)}
 3: Q.push(s)
 4: while not Q.empty do
 5:     u ← Q.pop
 6:     for (u, w) ∈ A do
 7:         for each not yet pushed label ℓ in Lu do
 8:             Push ℓ along (u, w) and add the new label to Lw
 9:         Clean Lw                          ▷ Tree-deletion for every deleted label in Lw
10:         if at least one new label survived the cleaning process and w ∉ Q then
11:             Q.push(w)
12: Reconstruct paths for each label in Lt and output path/vector pairs
We can employ this method in label correcting algorithms using both the label-selection and the node-selection strategy.
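To make the interplay of node selection, the cleaning step and tree-deletion pruning concrete, the following self-contained Python sketch implements Algorithm 2 with optional TD. Our evaluated implementation is in C++11 with different data structures; the adjacency-dictionary graph format and all names below (e.g. mosp_node_selection) are illustrative assumptions.

from collections import deque

class Label:
    """Cost vector of an s-u path, its node, predecessor and descendant labels."""
    __slots__ = ("cost", "node", "pred", "children", "deleted", "pushed")
    def __init__(self, cost, node, pred=None):
        self.cost, self.node, self.pred = tuple(cost), node, pred
        self.children = []       # descendant labels, kept for tree-deletion pruning
        self.deleted = False
        self.pushed = False      # already pushed along the out-arcs of its node?

def dominates(v, w):
    return all(a <= b for a, b in zip(v, w)) and v != w

def delete_tree(label):
    """Tree-deletion pruning: mark a label and its whole descendant tree as deleted."""
    stack = [label]
    while stack:
        lab = stack.pop()
        lab.deleted = True
        stack.extend(lab.children)

def mosp_node_selection(graph, s, t, d, tree_deletion=True):
    """graph: dict mapping every node to a list of (head, cost_vector) out-arcs."""
    L = {u: [] for u in graph}                 # label set L_u for every node u
    L[s].append(Label((0,) * d, s))
    Q, in_Q = deque([s]), {s}
    while Q:                                   # node-selection loop
        u = Q.popleft()
        in_Q.discard(u)
        open_labels = [lab for lab in L[u] if not lab.pushed and not lab.deleted]
        for lab in open_labels:
            lab.pushed = True
        for w, c in graph[u]:
            survived = False
            for lab in open_labels:            # push open labels along the arc (u, w)
                child = Label(tuple(x + y for x, y in zip(lab.cost, c)), w, lab)
                lab.children.append(child)
                # cleaning step: simple pairwise comparisons against the labels at w
                if any(not m.deleted and dominates(m.cost, child.cost) for m in L[w]):
                    child.deleted = True
                    continue
                for m in L[w]:
                    if not m.deleted and dominates(child.cost, m.cost):
                        if tree_deletion:
                            delete_tree(m)     # also removes m's obsolete descendant tree
                        else:
                            m.deleted = True   # plain NS: only m itself is removed
                L[w].append(child)
                survived = True
            if survived and w not in in_Q:
                Q.append(w)
                in_Q.add(w)
    # reconstruct one representative path per Pareto point from the labels at t
    results, seen = [], set()
    for lab in L[t]:
        if lab.deleted or lab.cost in seen:
            continue
        seen.add(lab.cost)
        path, cur = [], lab
        while cur is not None:
            path.append(cur.node)
            cur = cur.pred
        results.append((path[::-1], lab.cost))
    return results

Calling the function with tree_deletion=False yields the plain NS algorithm, in which the already created descendants of a dominated label stay alive and may be pushed before they are dominated themselves.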
4 Implementation Details
We implemented both label-correcting algorithms, a version of the FIFO label-selection (LS) and of the FIFO node-selection (NS) algorithm, in C++11. The reason for choosing these variants is that the LS algorithm is the fastest method in the latest comparative study [9]. Both alternatives are also implemented with tree-deletion pruning (LS-TD and NS-TD, respectively). Pseudocode can be found
in Algorithms 1 and 2.
We use the OGDF¹ for the representation of graphs. Wherever possible, we
try to use std::vector for collections of data.
Label Selection. In the label-selection variant we use a std::deque to implement the queue of open labels.
Node Selection. In the node-selection variants we use our own implementation of a ring buffer to implement the queue of open nodes. Also, only those labels of a node are pushed which have not been pushed before.
Tree-Deletion Pruning. The successors of each label are stored in an std::list. The reason for this is that the successor lists are constructed empty and most of them remain empty for the whole run of the algorithm. Constructing an empty std::list is cheaper than constructing any of the other relevant data structures.
Cleaning Step. While there is considerable literature on finding the subset of minimal vectors in a set of vectors, it is not clear which method to use in practice. In the node-selection algorithm, we could exploit the fact that we merge two sets of nondominated vectors, which has been done successfully for the biobjective problem in [3]. To keep the comparison between label-selection and node-selection strategies focused on the strategies themselves, we use a simple pairwise comparison between all pushed labels and all labels at the head of the arc. A more sophisticated method would only make the node-selection strategy faster. Moreover, the studies on multiobjective labeling algorithms also employ this method.
5 Computational Study
The experiments were performed on an Intel Core i7-3770, 3.4 GHz and 16 GB
of memory running Ubuntu Linux 12.04. We compiled the code using LLVM 3.4
with compiler flag -O3.
In the computational study, we are concerned with the following questions:
1. Is label selection really faster than node selection?
2. In which circumstances is tree-deletion pruning useful?
We will answer these questions in the following subsections.
5.1 Instances
We used two sets of instances. First, we took the instances of [9] and tested our implementations on them. The aforementioned study is the latest one that tested implementations of labelling algorithms for MOSP.

¹ http://ogdf.net/
Name               n           d      Name             n            d
CompleteN-medium   10–200      3      CompleteN-large  10–140       6
CompleteK-medium   50          2–15   CompleteK-large  100          2–9
GridN-medium       441–1225    3      GridN-large      25–289       6
GridK-medium       81          2–15   GridK-large      100          2–9
RandomN-medium     500–10000   3      RandomN-large    1000–20000   6
RandomK-medium     2500        2–15   RandomK-large    5000         2–9

Fig. 2: Overview of the artificial test instances (n: number of nodes, d: number of objectives)
For each problem type, there are 50 randomly drawn instances. In summary, they make up a set of 9,050 test problems.
The properties of these instances can be seen in Fig. 2. The random graph instances are based on a Hamiltonian cycle to which arcs are randomly added. In the complete graphs, arcs are added between each pair of nodes in both directions. The grid graphs are all square. The arc costs were chosen uniformly at random in [1, 1000] ∩ N. The instances are available online; see [9] for details.
The second set of instances is similar to the instances from [5]. They are
based on the road network of Western Europe provided by PTV AG for scientific
use. We conducted our experiments on the road network of the Czech Republic
(CZE, 23,094 nodes, 53,412 edges), Luxembourg (LUX, 30,661 nodes, 71,619
edges), Ireland (IRL, 32,868 nodes, 71,655 edges) and Portugal (PRT, 159,945
nodes, 372,129 edges, only for the tree-deletion experiments). We used metrics similar to those in [5]: travel distance, cost based on fuel consumption and travel time. The median Pareto-front sizes are 13.0, 30.5, 12.5 and 133.5, respectively, and thus comparable to or larger than those in [5]. For each of the
instances we drew 50 pairs of source and target nodes.
5.2 Running Times: Node Selection vs. Label Selection
In Fig. 3 we see a selection of the results on the RandomN-large, RandomK-large,
GridK-large as well as the real-world road network instance sets.
To evaluate the results we decided to show box plots, because the deviation
of the running times is very large and it is easier to recognize trends. The box
plots give a direct overview of the quartiles (box extent), the median (horizontal line inside the box), the deviation from the mean (whiskers: lines above and below the box) and the outliers (points above and below the whiskers).
We see that the node-selection strategy performs better on all these instances
than the label-selection strategy. The node-selection strategy is up to a factor of
3 faster on both test sets. This is also true for the other large and medium sized
instances where the maximum factors range from 1.4 to 3.17. Detailed box plots
can be found in the appendix.
Also on the real-world road networks the results are positive. The node-selection strategy is up to a factor of 3.16 faster than the label-selection strategy.
Fig. 3: Comparison of the running times (in seconds) of the label-selection (LS) and node-selection (NS) strategies on (a) RandomK-large, (b) the real-world road networks, (c) RandomN-large and (d) GridK-large.
A partial explanation can be attributed to memory management: in the node-selection strategy, a consecutive chunk of memory containing the values of the labels pushed along an arc can be accessed in one cache access, while in the label-selection strategy only one label is picked in each iteration, potentially producing many cache misses when the next label, possibly at a very different memory location, is accessed.
5.3 Tree-Deletion Pruning
The results concerning TD are mixed. First, to see how well TD might work, we performed a set of experiments measuring how many labels touched by the label correcting algorithms could have been deleted when using tree-deletion pruning. In Fig. 4, we see the results of this experiment on the CompleteN-large, CompleteK-large, GridK-large and RandomK-large test sets.
We observe that on these instances, the node-selection strategy in particular tends to produce larger obsolete trees than the label-selection strategy.
Fig. 4: Measuring how many labels have been touched which could have been deleted by tree-deletion pruning in the label-selection (LS) and node-selection (NS) strategies, on (a) CompleteN-large, (b) CompleteK-large, (c) GridK-large and (d) RandomK-large.
We also observe this behavior on the complete-graph instances of medium and large size.
The situation is different on the grid-graph instances, where both algorithms
have a similar tendency to produce obsolete trees.
Another observation is that, when increasing the number of objectives, the number of obsolete trees which could have been deleted decreases on the grid and random graph instances (see Fig. 4c and d). This happens because, on instances with a large number of objectives and totally random objective values, most labels remain nondominated in the cleaning step. The complete-graph instances are an exception here, because the number and kind of labels pushed to each node are very diverse and so many labels are still dominated.
Hence, we expect tree-deletion pruning to be very useful on instances where a large number of labels is dominated in the cleaning step. The results of the comparison of the running times can be seen in Fig. 5.
Fig. 5: Comparison of the running times (in seconds) of the node-selection strategy with (NS-TD) and without (NS) tree-deletion pruning on (a) GridK-medium, (b) the real-world road networks, (c) RandomN-large and (d) CompleteN-large.
On the real-world road networks, tree-deletion pruning works very well and we achieve a speed-up of up to 3.5 in comparison to the pure node-selection strategy.
On the artificial benchmark instances, however, the results are not so clear. TD works well on medium sized grid graphs with a small number of objectives and also on complete graphs of any size. On the other instances of the artificial benchmark set, TD performs slightly worse than the pure node-selection strategy, especially on the large instances.
This behavior can be explained by the large number of labels which are dominated in the road network instances. The sizes of the Pareto-fronts are small compared to those of the instances in the artificial test set. So, we hypothesize that the pruning strategy is especially useful if many labels are dominated in the cleaning step and large obsolete trees can be deleted in this process. TD also seems to work better on denser networks.
Fig. 6: Comparison of the running times (in seconds) of the node-selection strategy with and without TD on the correlated random networks.

To test this hypothesis, we created a new set of random graphs. To make the graphs denser than in the previous instances, we drew 0.3 times the possible number of edges, and to match the small Pareto-front sizes of the instances from [5] we used a high correlation of the objective functions, i.e., a Gauss copula distribution with a fixed correlation of 0.7. If the hypothesis is false, TD should run slower than the pure node-selection strategy on these instances.
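The exact generator is not essential, but for illustration, correlated integer arc costs in [1, 1000] can be drawn through a Gaussian copula with pairwise correlation 0.7 roughly as follows (assuming NumPy and SciPy; function name and defaults are ours):

import numpy as np
from scipy.stats import norm

def correlated_costs(num_arcs, d=3, rho=0.7, low=1, high=1000, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.full((d, d), rho)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(d), cov, size=num_arcs)   # correlated normals
    u = norm.cdf(z)                                                # Gaussian copula: uniforms
    costs = low + np.floor(u * (high - low + 1)).astype(int)       # integers in [low, high]
    return np.clip(costs, low, high)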
But the results in Fig. 6 show that TD beats the pure node-selection strategy on these graphs. TD achieves a speed-up of up to 1.14. Using a Wilcoxon signed-rank test we can also see that the hypothesis that TD is slower than the pure node-selection strategy on these instances can be rejected with a p-value of less than 0.001.
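Such a one-sided paired test can be run, for example, with SciPy; the running times below are placeholders, not our measurements.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
ns_times = rng.uniform(1.0, 10.0, size=50)            # placeholder NS running times
td_times = ns_times * rng.uniform(0.8, 1.0, size=50)  # placeholder NS-TD running times
# alternative="less": NS-TD running times tend to be smaller than NS running times
stat, p_value = wilcoxon(td_times, ns_times, alternative="less")
print(p_value)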
6 Conclusion
To conclude, we showed in this paper that node-selection strategies in labeling algorithms for the MOSP problem can be advantageous, especially if implemented
carefully. So node-selection strategies should not be neglected as an option for
certain instance classes.
We showed that the tree-deletion pruning we introduced in this paper works well on the real-world road networks. On the artificial instances it does not seem to work too well, which we can explain by the very low densities and unrealistic objective functions used in these instances. To show that TD works well for larger correlations, as in the real-world road networks, and higher densities, we also created instances which had the potential to refute this hypothesis. But the hypothesis passed the test.
References
1. Bertsekas, D.P., Guerriero, F., Musmanno, R.: Parallel asynchronous label-correcting methods for shortest paths. Journal of Optimization Theory and Applications 88(2) (1996) 297–320
2. Bökler, F., Ehrgott, M., Morris, C., Mutzel, P.: Output-sensitive complexity
for multiobjective combinatorial optimization. Journal of Multi-Criteria Decision
Analysis (submitted) (2016)
3. Brumbaugh-Smith, J., Shier, D.: An empirical investigation of some bicriterion
shortest path algorithms. European Journal of Operational Research 43(2) (1989)
216–224
4. Clímaco, J.C.N., Pascoal, M.M.B.: Multicriteria path and tree problems: Discussion on exact algorithms and applications. International Transactions in Operational Research 19(1–2) (2012) 63–98
5. Delling, D., Wagner, D.: Pareto paths with SHARC. In Vahrenhold, J., ed.: SEA
2009. Volume 5526 of Lecture Notes in Computer Science., Springer-Verlag Berlin
Heidelberg (2009) 125–136
6. Guerriero, F., Musmanno, R.: Label correcting methods to solve multicriteria
shortest path problems. Journal of Optimization Theory and Applications 111(3)
(December 2001) 589–613
7. Hansen, P.: Bicriterion path problems. In Fandel, G., Gal, T., eds.: Multiple
Criteria Decision Making Theory and Application. Volume 177 of Lecture Notes
in Economics and Mathematical Systems. Springer Berlin Heidelberg New York
(1979) 109–127
8. Martins, E.Q.V.: On a multicriteria shortest path problem. European Journal of
Operational Research 16 (1984) 236–245
9. Paixao, J., Santos, J.: Labelling methods for the general case of the multiobjective
shortest path problem: a computational study. In: Computational Intelligence
and Decision Making. Intelligent Systems, Control and Automation: Science and
Engineering. Springer Netherlands (2009) 489–502
10. Paixao, J.M., Santos, J.L.: Labelling methods for the general case of the multiobjective shortest path problem - a computational study. Technical Report 07-42,
Universidade de Coimbra (2007)
11. Raith, A., Ehrgott, M.: A comparison of solution strategies for biobjective shortest
path problems. Computers & OR 36(4) (2009) 1299–1331
12. Tung, C.T., Chew, K.L.: A multicriteria Pareto-optimal path algorithm. European
Journal of Operational Research 62 (1992) 203–209
Appendix
We provide all boxplots for all experiments in this appendix. Figs. 7 and 8 show
all the running times on all large and medium sized instances of the artificial
instance sets. Figs. 9 and 10 show the comparison of the node-selection strategy
with and without TD. In Figs. 11 and 12 we can see the number of labels which could have been deleted had we used TD. The figures are oversized for better readability and will be removed in a final version of the paper.
Fig. 7: Comparison of the running times of the label-selection (LS) and node-selection (NS) strategies on (a) CompleteN-medium, (b) CompleteK-medium, (c) GridN-medium, (d) GridK-medium, (e) RandomN-medium and (f) RandomK-medium.
Fig. 8: Comparison of the running times of the label-selection (LS) and node-selection (NS) strategies on (a) CompleteN-large, (b) CompleteK-large, (c) GridN-large, (d) GridK-large, (e) RandomN-large and (f) RandomK-large.
Fig. 9: Comparison of the running times of the node-selection strategy with (NS-TD) and without (NS) tree-deletion pruning on (a) CompleteN-medium, (b) CompleteK-medium, (c) GridN-medium, (d) GridK-medium, (e) RandomN-medium and (f) RandomK-medium.
Fig. 10: Comparison of the running times of the node-selection strategy with (NS-TD) and without (NS) tree-deletion pruning on (a) CompleteN-large, (b) CompleteK-large, (c) GridN-large, (d) GridK-large, (e) RandomN-large and (f) RandomK-large.
Fig. 11: Measuring how many labels have been touched which could have been deleted by tree-deletion pruning in the label-selection (LS) and node-selection (NS) strategies, on (a) CompleteN-medium, (b) CompleteK-medium, (c) GridN-medium, (d) GridK-medium, (e) RandomN-medium and (f) RandomK-medium.
Fig. 12: Measuring how many labels have been touched which could have been deleted by tree-deletion pruning in the label-selection (LS) and node-selection (NS) strategies, on (a) CompleteN-large, (b) CompleteK-large, (c) GridN-large, (d) GridK-large, (e) RandomN-large and (f) RandomK-large.